How to implement trigonometric expressions using a programming language?

*Expression:
$$\frac{\sqrt{(a_0 + a_1\cos\omega t + a_2\cos 2\omega t)^2 + (a_1\sin\omega t + a_2\sin 2\omega t)^2}}{\sqrt{(1 + b_1\cos\omega t + b_2\cos 2\omega t)^2 + (b_1\sin\omega t + b_2\sin 2\omega t)^2}}$$
*Variables:
- a0 = 0.2
- a1 = 1.2
- a2 = a0 = 0.2
- b1 = 1.6
- b2 = 0.8
- F = 32 kHz
*Question:
I am supposed to use a programming language (not MATLAB) to implement this expression and observe the output signal. How can I do that, and with what language, if it's even possible?

In C#, you can use System.Math; I believe it has every function you need:
Abs
Exp
Sin
Cos
Sqrt
... a lot of other methods ...
The data type depends on the accuracy you need; note that the trigonometric functions in System.Math operate on double.
Example:
double a0 = 0.2;
double a1 = 1.2;
double result = Math.Cos(a0) * a1 - Math.Sqrt(a1);
Basically, most programming languages have some sort of math library, which should contain those functions.

What language?
You can do that calculation with almost any programming language available nowadays, like C, PHP, Java, etc.
How to do it varies from one language to another; here is an example with the Python console:
>>> import math
>>> x = 30
>>> y = math.cos(x)
>>> y
0.15425144988758405
Note: most programming languages implement their trig functions in radians, not degrees. To work in degrees in Python, convert first: math.cos(math.radians(x))
To visualize the output in a graph, there is a Python library called matplotlib; it is widely used in the Python world.
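Putting the pieces together, here is a minimal sketch of the expression from the question in Python with numpy and matplotlib. It is only an illustration under my own assumptions: I take w = 2*pi*F with F = 32 kHz, and the time grid (a few periods, 1000 samples) is an arbitrary choice.

import numpy as np
import matplotlib.pyplot as plt

# Coefficients from the question
a0, a1, a2 = 0.2, 1.2, 0.2
b1, b2 = 1.6, 0.8
F = 32e3                 # 32 kHz
w = 2 * np.pi * F        # assumption: w is the angular frequency 2*pi*F

# Evaluate over three periods of the fundamental (arbitrary choice)
t = np.linspace(0, 3 / F, 1000)

num = np.sqrt((a0 + a1*np.cos(w*t) + a2*np.cos(2*w*t))**2
              + (a1*np.sin(w*t) + a2*np.sin(2*w*t))**2)
den = np.sqrt((1 + b1*np.cos(w*t) + b2*np.cos(2*w*t))**2
              + (b1*np.sin(w*t) + b2*np.sin(2*w*t))**2)
signal = num / den

plt.plot(t, signal)
plt.xlabel("t (s)")
plt.ylabel("output")
plt.show()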

Related

Trying to understand Julia syntax in linear regression code (GLM package)

Total Julia noob here (with basic knowledge of Python). I am trying to do linear regression and things I read suggest the GLM package. Here is some sample code I found here:
using DataFrames, GLM
y = 1:10
df = DataFrame(y = y, x1 = y.^2, x2 = y.^3)
sm = GLM.lm(@formula(y ~ x1 + x2), df)
coef(sm)
Can someone explain the syntax here? What does @formula mean? The docs here say @foo means a macro, which I guess is basically just a function, but where do I find the function/macro formula? Just looking at the use here, I would have thought it is maybe passing y ~ x1 + x2 (whatever that is) as the formula argument to lm (similar to keyword arguments with = in Python)?
Next, what is ~ here? The general docs say ~ means negation, but I'm not seeing how that makes sense here.
Is there a place in the GLM docs where all of this is explained? I'm not seeing it; I only see a few examples, not a full breakdown of each function and all of its arguments.
You have stumbled upon the @formula language that is defined in the StatsModels.jl package and implemented in many statistics/econometrics-related packages across the Julia ecosystem.
As you say, @formula is a macro, which transforms the expression given to it (here y ~ x1 + x2) into some other Julia expression. If you want to find out what happens when a macro gets called in Julia - which I admit can often look like magic to new (and sometimes experienced!) users - the @macroexpand macro can help you. In this case:
julia> @macroexpand @formula(y ~ x1 + x2)
:(StatsModels.Term(:y) ~ StatsModels.Term(:x1) + StatsModels.Term(:x2))
The result above is the expression constructed by the @formula macro. We see that the variables in our formula are transformed into StatsModels.Term objects. If we were to use StatsModels directly, we could construct this ourselves by doing:
julia> Term(:y) ~ Term(:x1) + Term(:x2)
FormulaTerm
Response:
y(unknown)
Predictors:
x1(unknown)
x2(unknown)
julia> (Term(:y) ~ Term(:x1) + Term(:x2)) == @formula(y ~ x1 + x2)
true
Now what is going on with ~, which as you say can be used for negation in Julia? What has happened here is that StatsModels has defined methods for ~ (which in Julia is an infix operator, meaning it is essentially a function that can be written between its arguments rather than having to be called with its arguments in brackets):
julia> (Term(:y) ~ Term(:x)) == ~(Term(:y), Term(:x))
true
So writing y::Term ~ x::Term is the same as calling ~(y::Term, x::Term), and this method for calling ~ with terms on the left and right hand side is defined by StatsModels (see method no. 6 below):
julia> methods(~)
# 6 methods for generic function "~":
[1] ~(x::BigInt) in Base.GMP at gmp.jl:542
[2] ~(::Missing) in Base at missing.jl:100
[3] ~(x::Bool) in Base at bool.jl:39
[4] ~(x::Union{Int128, Int16, Int32, Int64, Int8, UInt128, UInt16, UInt32, UInt64, UInt8}) in Base at int.jl:254
[5] ~(n::Integer) in Base at int.jl:138
[6] ~(lhs::Union{AbstractTerm, Tuple{Vararg{AbstractTerm,N}} where N}, rhs::Union{AbstractTerm, Tuple{Vararg{AbstractTerm,N}} where N}) in StatsModels at /home/nils/.julia/packages/StatsModels/pMxlJ/src/terms.jl:397
Note that you also find the general negation meaning here (method 3 above, which defines the behaviour for calling ~ on a boolean argument and is in Base Julia).
I agree that the GLM.jl docs maybe aren't the most comprehensive in the world, but one of the reasons for that is that the whole machinery behind @formula actually isn't a GLM.jl thing - so do check out the StatsModels docs linked above, which are quite good I think.
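Since you mention knowing Python: @formula plays the same role as the R-style formula strings in Python's statsmodels (parsed by patsy). A rough Python equivalent of the snippet above, purely for orientation:

import pandas as pd
import statsmodels.formula.api as smf

y = list(range(1, 11))
df = pd.DataFrame({"y": y, "x1": [v**2 for v in y], "x2": [v**3 for v in y]})

# "y ~ x1 + x2" is a formula describing the model, just like @formula in Julia
model = smf.ols("y ~ x1 + x2", data=df).fit()
print(model.params)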

Numerical differentiation using Cauchy (CIF)

I am trying to create a module with a mathematical class for Taylor series, to have it easily accessible for other projects. Hence I wish to optimize it as far as I can.
For those who are not too familiar with Taylor series, it is necessary to be able to differentiate a function at a point many times. Given that the usual limit definition of the derivative requires immense precision for higher-order derivatives, I've decided to use Cauchy's integral formula instead. With a little bit of work, I've managed to rearrange the formula (taking the contour to be the unit circle around the point) to

$$f^{(n)}(x) = \frac{n!}{2\pi} \int_0^{2\pi} f(x + e^{i\theta})\, e^{-in\theta}\, d\theta$$

This provided much more accurate results on higher-order derivatives than the traditional definition. Here is the function I am currently using to differentiate a function at a point:
import numpy as np
from math import factorial

def myDerivative(f, x, dTheta, degree):
    # Riemann sum over the unit circle around x (Cauchy's integral formula)
    riemannSum = 0
    theta = 0
    while theta < 2*np.pi:
        functionArgument = np.complex128(x + np.exp(1j*theta))
        secondFactor = np.complex128(np.exp(-1j * degree * theta))
        riemannSum += f(functionArgument) * secondFactor * dTheta
        theta += dTheta
    return factorial(degree)/(2*np.pi) * riemannSum.real
I've tested this function in my main function with a carefully thought out mathematical function which I know the derivatives of, namely f(x) = sin(x).
def main():
    print(myDerivative(np.sin, 0, 2*np.pi/(4*4096), 16))
These derivatives seem to freak out at around degree 16. I've also tried playing around with dTheta, but with no luck. I would like to reach higher orders as well, but I fear I've run into some kind of machine-precision limit.
My question in its simplest form: what can I do to improve this function in order to get higher-order derivatives?
I seem to have come up with a solution to the problem. I did this by rearranging Cauchy's integral formula in a different way, exploiting the fact that the contour integral can be taken over an arbitrarily large circle of radius r around the point of differentiation. Be aware that it is very important that the function is analytic in the complex plane for this to be valid. The new formula is

$$f^{(n)}(x) = \frac{n!}{2\pi r^n} \int_0^{2\pi} f(x + r e^{i\theta})\, e^{-in\theta}\, d\theta$$
Also this gives a new function for differentiation:
def myDerivative(f, x, dTheta, degree, contourRadius):
    # Riemann sum over a circle of radius contourRadius around x
    riemannSum = 0
    theta = 0
    while theta < 2*np.pi:
        functionArgument = np.complex128(x + contourRadius*np.exp(1j*theta))
        secondFactor = (1/contourRadius)**degree * np.complex128(np.exp(-1j * degree * theta))
        riemannSum += f(functionArgument) * secondFactor * dTheta
        theta += dTheta
    return factorial(degree) * riemannSum.real / (2*np.pi)
This gives me a very accurate differentiation of high orders. For instance I am able to differentiate f(x)=e^x 50 times without a problem.
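As a quick sanity check of the function above (the step size and the heuristic of picking a contour radius comparable to the degree are my own choices, not from the derivation): every derivative of e^x at 0 is 1, so this should print approximately 1.0.

import numpy as np

# 50th derivative of exp at 0; a large contour radius keeps the
# integrand well conditioned at high degree
print(myDerivative(np.exp, 0, 2*np.pi/4096, 50, 50.0))  # ~1.0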
Well, since you are working with a discrete approximation of the derivative (via dTheta), sooner or later you must run into trouble. I'm surprised you were able to get at least 15 accurate derivatives -- good work! But to get derivatives of all orders, either you have to put a limit on what you're willing to accept and say it's good enough, or else compute the derivatives symbolically. Take a look at Sympy for that. Sympy probably has some functions for computing Taylor series too.
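For illustration, a short sketch of the symbolic route with Sympy:

import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)

# 16th derivative, evaluated exactly at 0
print(sp.diff(f, x, 16).subs(x, 0))   # 0

# Taylor series around 0, up to order 8
print(sp.series(f, x, 0, 8))          # x - x**3/6 + x**5/120 - x**7/5040 + O(x**8)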

If Then Constraints in non-linear programming

I have several constraints in a nonlinear problem.
For example, in the term m(x+y-n)^2:
if x+y-n >= 0 then m = 0,
else m = 1.
How can I write this conditional constraint as a linear or nonlinear constraint?
Well you could write this as [min(x+y-n,0)]^2. Not sure if that will do you any good (this is non-differentiable, and thus difficult for many solvers). We can make the min() expression linear using additional binary variables:
z <= x+y-n
z <= 0
z >= x+y-n - b * M
z >= 0 - (1-b) * M
b in {0,1}
with M a large enough constant. In many cases better reformulations can be applied but that depends on the rest of the model.
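To see how this encoding pins z to min(x+y-n, 0), here is a small brute-force check in Python (my own illustration; s stands for x+y-n):

# For a fixed s = x+y-n, find which (b, z) satisfy the four constraints.
def feasible_z(s, M=1000.0):
    results = []
    for b in (0, 1):
        lo = max(s - b*M, 0 - (1 - b)*M)   # z >= s - b*M  and  z >= 0 - (1-b)*M
        hi = min(s, 0)                     # z <= s  and  z <= 0
        if lo <= hi:
            results.append((b, lo, hi))
    return results

for s in (-3.5, 0.0, 2.0):
    print(s, feasible_z(s), "expected z =", min(s, 0))

For each s, the feasible (b, z) combinations all force z = min(s, 0).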
If you use a constraint-programming solver, such as Choco Solver, then you can use IfThenElse constraints directly, as well as other nonlinear constraints, such as square.

Loss of precision 'sqrt' Haskell

In the ghci terminal, I was computing some equations with Haskell using the sqrt function.
I notice that I would sometimes lose precision in my sqrt result, when it was supposed to be simplified.
For example,
sqrt 4 * sqrt 4 = 4 -- This works well!
sqrt 2 * sqrt 2 = 2.0000000000000004 -- Not the exact result.
Normally, I would expect a result of 2.
Is there a way to get the right simplification result?
How does that work in Haskell?
There are usable precise number libraries in Haskell. Two that come to mind are cyclotomic and the CReal module in the numbers package. (Cyclotomic numbers don't support all the operations on complex numbers that you might like, but square roots of integers and rationals are in the domain.)
>>> import Data.Complex.Cyclotomic
>>> sqrtInteger 2
e(8) - e(8)^3
>>> toReal $ sqrtInteger 2
Just 1.414213562373095 -- Maybe Double
>>> sqrtInteger 2 * sqrtInteger 2
2
>>> toReal $ sqrtInteger 2 * sqrtInteger 2
Just 2.0
>>> rootsQuadEq 3 2 1
Just (-1/3 + 1/3*e(8) + 1/3*e(8)^3,-1/3 - 1/3*e(8) - 1/3*e(8)^3)
>>> let eq x = 3*x*x + 2*x + 1
>>> eq (-1/3 + 1/3*e(8) + 1/3*e(8)^3)
0
>>> import Data.Number.CReal
>>> sqrt 2 :: CReal
1.4142135623730950488016887242096980785697 -- Show instance cuts off at 40th place
>>> sqrt 2 * sqrt 2 :: CReal
2.0
>>> sin 3 :: CReal
0.1411200080598672221007448028081102798469
>>> sin 3*sin 3 + cos 3*cos 3 :: CReal
1.0
You do not lose precision. You have limited precision.
The square root of 2 is a real number but not a rational number, therefore its value cannot be represented exactly by any computer (except by representing it symbolically, of course).
Even if you define a very large precision type, it will not be able to represent the square root of 2 exactly. You may get more precision, but never enough to represent that value exactly (unless you have a computer with infinite memory, in which case please hire me).
The explanation for these results lies in the type of the values returned by the sqrt function:
> :t sqrt
sqrt :: Floating a => a -> a
The Floating a means that the value returned belongs to the Floating type class.
The values of all types belonging to this class are stored as floating point numbers. These sacrifice precision for the sake of covering a larger range of numbers.
Double precision floating point numbers can cover very large ranges but they have limited precision and cannot encode all possible numbers. The square root of 2 (√2) is one such number:
> sqrt 2
1.4142135623730951
> sqrt 2 + 0.000000000000000001
1.4142135623730951
As you see above, it is impossible for double precision floating point numbers to be precise enough to represent √2 + 0.000000000000000001, it is simply rounded to the closest approximation which can be expressed using floating point encoding.
As mentioned by another poster, √2 is an irrational number, which simply put means that it requires an infinite number of digits to represent exactly. As such it cannot be represented faithfully using floating point numbers. This leads to errors such as the one you noticed when multiplying it with itself.
You can learn about floating points on their wikipedia page: http://en.wikipedia.org/wiki/Floating_point.
I especially recommend that you read the answer to this other Stack Overflow question: Floating Point Limitations. Follow the link mentioned there; it will help you understand what's going on under the hood.
Note that this is a problem in every language, not just Haskell. One way to get rid of it entirely is to use symbolic computation libraries but they are much slower than the floating point numbers offered by CPUs. For many computations the loss of precision due to floating points is not a problem.
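For example, the same thing happens with Python's floats, and a symbolic library such as Sympy avoids it entirely (a quick sketch):

import math
import sympy

print(math.sqrt(2) * math.sqrt(2))    # 2.0000000000000004, same as Haskell's Double
print(sympy.sqrt(2) * sympy.sqrt(2))  # 2, computed symbolically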

Is there a value with error library in Haskell?

I am looking for a library that provides a 'value with error' (e.g. x ± y). But searching for "Haskell xyz Error" only gives error-handling libraries.
I would expect such a library to provide common math operations (Num, Floating) where appropriate. The use case is to get an error estimate for a calculation based on noisy sensor readings.
Update
I did some research and the term "propagation of uncertainty" came up. I found uncertainly-haskell which I'll try out soon. Are there other packages like this?
Have a look at the intervals package.
The Data.Eq.Approximate module seems to be a fit for getting approximate equality.
The module provides tolerance annotations at the type level (using Digits), with absolute, relative, and zero tolerance variants.
The purpose of this module is to provide newtype wrappers that allow one to effectively override the equality operator of a value so that it is approximate rather than exact. For example, the type
type ApproximateDouble = AbsolutelyApproximateValue (Digits Five) Double
defines an alias for a wrapper containing Doubles such that two doubles are equal if they agree to within five decimal places; for example,
1 == (1+10^^(-6) :: ApproximateDouble)
evaluates to True. Note that we did not need to wrap the value 1+10^^(-6), since AbsolutelyApproximateValue is an instance of Num. For convenience, Num as well as many of the other numerical classes such as Real and Floating have been derived for the wrappers defined in this package, so that one can use the wrapped values in the same way as one would use the values themselves.
Two kinds of wrappers are provided by this package.
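For comparison, the same idea of tolerance-based equality exists in Python's standard library as math.isclose; a tiny sketch mirroring the ApproximateDouble example above:

import math

print(math.isclose(1.0, 1.0 + 1e-6, abs_tol=1e-5))  # True: within five decimal places
print(math.isclose(1.0, 1.0 + 1e-4, abs_tol=1e-5))  # False: outside the tolerance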
The uncertain package seems to provide what you are looking for:
Some highlights from the readme:
Provides tools to manipulate numbers with inherent
experimental/measurement uncertainty, and propagates them through
functions based on principles from statistics.
Manipulate with error propagation
ghci> let x = 1.52 +/- 0.07
ghci> let y = 781.4 +/- 0.3
ghci> let z = 1.53e-1 `withPrecision` 3
ghci> cosh x
2.4 +/- 0.2
ghci> exp x / z * sin (y ** z)
10.9 +/- 0.9
ghci> pi + 3 * logBase x y
52 +/- 5
Create numbers
ghci> 1.52 +/- 0.07
1.52 +/- 7.0e-2
ghci> fromSamples [12.5, 12.7, 12.6, 12.6, 12.5]
12.58 +/- 7.0e-2
Comparisons
Note that this is very different from other libraries with similar
data types (like intervals and rounding); those do not attempt to
maintain intervals or simple digit precisions. This package is instead
intended to model actual experimental and measurement data with their
uncertainties, applying functions to the data and properly propagating
the errors with sound statistical principles.
For a clear example, take
> (52 +/- 6) + (39 +/- 4)
91.0 +/- 7.0
In a library like intervals, this would result in 91 +/- 10
(that is, a lower bound of 46 + 35 and an upper bound of 58 + 43).
However, with experimental data, errors in two independent samples
tend to "cancel out": independent errors add in quadrature, so the
overall aggregate uncertainty in the sum is √(6² + 4²) ≈ 7.2,
approximately 7.
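The same statistical propagation is available in Python through the uncertainties package, which can be handy for cross-checking values; a sketch mirroring the example above:

from uncertainties import ufloat

x = ufloat(52, 6)   # 52 +/- 6
y = ufloat(39, 4)   # 39 +/- 4

# Independent errors add in quadrature: sqrt(6**2 + 4**2) ~ 7.2
print(x + y)        # 91.0+/-7.2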
