So I was trying to get e^(pi*I) = -1, but Python 3 gives me a different, weird result:
import cmath

print(cmath.exp(cmath.pi * cmath.sqrt(-1)))
Result:
(-1+1.2246467991473532e-16j)
This should in theory return -1, no?
(Partial answer to the revised question.)
In theory, the result should be -1, but in practice the theory is slightly wrong.
The cmath module uses floating-point variables to do its calculations: one float value for the real part of a complex number and another float value for the imaginary part. The module therefore inherits the limitations of floating-point math. For more on those limitations, see the canonical question Is floating point math broken?.
In brief, floating-point values are usually mere approximations of real values. The value of cmath.pi is not actually pi; it is just the best approximation that fits into the floating-point unit of many computers. So you are not really calculating e^(pi*I), just an approximation of it. The returned value has the exact, correct real part, -1, which is somewhat surprising to me. The imaginary part "should be" zero, but the actual result agrees with zero to 15 decimal places, roughly 16 significant digits relative to the inputs. That is the usual precision for double-precision floating point.
If you require exact answers, you should not be working with floating-point values. Perhaps you should try an algebraic solution, such as the sympy module.
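For instance, here is a minimal sketch using sympy (assuming it is installed); with exact symbolic constants the identity comes out exactly:

import sympy

# With exact symbolic constants, sympy evaluates e**(i*pi) exactly.
print(sympy.exp(sympy.pi * sympy.I))  # -1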
(The following was my original answer, which applied to the previous version of the question, where the result was an error message.)
The error message shows that you did not type what you thought you typed. Instead of cmath.exp on the outside of the expression, you typed math.exp. The math version of the exponential function expects a float value. You gave it a complex value (cmath.pi * cmath.sqrt(-1)), so Python tried to convert that complex value to float, which fails.
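A minimal reproduction of the mistake (assuming Python 3; the exact error wording may vary by version):

import cmath
import math

math.exp(cmath.pi * cmath.sqrt(-1))
# TypeError: can't convert complex to float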
When I type the expression given at the top of your question, with cmath properly typed, I get the result
(-1+1.2246467991473532e-16j)
which is very close to the desired value of -1.
Found the answer.
First of all, Python 3 cannot represent irrational numbers exactly in floating point, and so e^(pi*I) will not return exactly -1, as per this answer.
Secondly, Python 3 returns any complex number as a Cartesian pair (real + imaginary).
The fix was to extract the real part of the number:
print(cmath.exp(cmath.pi * cmath.sqrt(-1)).real)
This is the pandas Series I am using:
https://paste.ubuntu.com/p/Wd3czXj9Fc/
>>> sum(series)
185048.7799999991
>>> series.sum()
185048.78000000003
Why is there a difference between those values? Although there's floating-point error associated with both values, that shouldn't be the reason for this difference.
As alluded to in your question, it's due to floating-point imprecision. Your decimal numbers are being approximated, so there is a slight difference between Python's built-in sum() and pandas.Series.sum() (which actually calls numpy.sum()). The two also accumulate in different orders (numpy uses pairwise summation), so their rounding errors differ.
The sum() built-in isn't intended for accurate floating-point arithmetic. As the docs note: "To add floating point values with extended precision, see math.fsum()."
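A small demonstration of the idea, with made-up values (the original Series is not reproduced here); math.fsum tracks an exact running sum internally:

import math

values = [0.1] * 10

print(sum(values))        # 0.9999999999999999
print(math.fsum(values))  # 1.0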
I created a simple function to compute e to the power pi, explained here:
import math

def e_to_power_pi(number):
    # Approximates e**pi via the limit (1 + 1/n)**(n*pi).
    return (1 + (1 / number)) ** (number * math.pi)
From the look of it, clearly a simple piece of code. But look at the difference in the output of these two calls:
Example one:
e_to_power_pi(1000000000000000)
output:
32.71613881872869
Example two:
e_to_power_pi(10000000000000000)
output:
1.0
Upon tearing down the code, I learned that the 1.0 comes from the
1 + (1/number)
portion of the code above.
When I tore it down further, I learned that 1/10000000000000000 outputs the correct answer, 1e-16 (i.e., 0.0000000000000001), as it should.
But when I add 1 to that result, it returns 1.0 instead of 1.0000000000000001.
I presumed it must be some default round-off in Python changing the value.
I decided to use round(<float>, 64) (where <float> is any computation in the code above) to try to get a result with 64 digits after the decimal point. But I was still stuck with the same result when the addition was performed, i.e. 1.0.
Can someone guide me or point me to the direction where I can learn or further read about it?
You are using the double-precision binary floating-point format, which has a 53-bit significand, and that is not quite enough precision to represent your fraction:
10000000000000001/10000000000000000 = 1.0000000000000001
See IEEE 754 double-precision binary floating-point format: binary64
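You can see the 53-bit limit directly in Python (assuming standard IEEE 754 doubles):

import sys

print(sys.float_info.epsilon)  # 2.220446049250313e-16, the gap from 1.0 to the next float
print(1.0 + 1e-16 == 1.0)      # True: 1e-16 is less than half that gap
print(1.0 + 3e-16 == 1.0)      # False: 3e-16 is more than half that gap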
Mathematica can operate in precisions higher than the architecturally imposed machine precision.
See Wolfram Language: MachinePrecision
A Mathematica calculation shows you would need a significand precision higher than 53 bits to obtain a result other than 1.
N numericises the fractional result to the requested precision. Machine precision is the default; higher precision calculations are done in software.
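If you want to stay in Python, the standard decimal module offers a similar software-precision approach (a minimal sketch):

from decimal import Decimal, getcontext

getcontext().prec = 30  # 30 significant decimal digits

print(Decimal(1) + Decimal(1) / Decimal(10**16))  # 1.0000000000000001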
Consider the following terminating decimal numbers.
3.1^2 = 9.61
3.1^4 = 92.3521
3.1^8 = 8528.91037441
The following shows how Mathematica treats these expressions
In[1]:= 3.1^2
Out[1]= 9.61
In[2]:= 3.1^4
Out[2]= 92.3521
So far so good, but
In[3]:= 3.1^8
Out[3]= 8528.91
doesn't provide enough precision.
So let's try N[], NumberForm[], and DecimalForm[] with a precision of 12
In[4]:= N[3.1^8,12]
Out[4]= 8528.91
In[5]:= NumberForm[3.1^8,12]
Out[5]= 8528.91037441
In[6]:= DecimalForm[3.1^8,12]
Out[6]= 8528.91037441
In this case DecimalForm[] and NumberForm[] work as expected, but N[] only provided the default precision of 6, even though I asked for 12. So DecimalForm[] or NumberForm[] seems to be the way to go if you want exact results when the inputs are terminating decimals.
Next consider rational numbers with infinite repeating decimals like 1/3.
In[7]:= N[1/3,20]
Out[7]= 0.33333333333333333333
In[9]:= NumberForm[1/3, 20]
Out[9]=
1/3
In[9]:= DecimalForm[1/3, 20]
Out[9]=
1/3
Unlike the previous case, N[] seems to be the proper way to go here, whereas NumberForm[] and DecimalForm[] do not respect the requested precision.
Finally consider irrational numbers like Sqrt[2] and Pi.
In[10]:= N[Sqrt[2],20]
Out[10]= 1.4142135623730950488
In[11]:= NumberForm[Sqrt[2], 20]
Out[11]=
Sqrt[2]
In[12]:= DecimalForm[Sqrt[2], 20]
Out[12]=
Sqrt[2]
In[13]:= N[π^12,30]
Out[13]= 924269.181523374186222579170358
In[14]:= NumberForm[Pi^12,30]
Out[14]=
π^12
In[15]:= DecimalForm[Pi^12,30]
Out[15]=
π^12
In these cases N[] works, but NumberForm[] and DecimalForm[] do not. However, note that N[] switches to scientific notation at π^13, even with a larger precision. Is there a way to avoid this switch?
In[16]:= N[π^13,40]
Out[16]= 2.903677270613283404988596199487803130470*10^6
So there doesn't seem to be a consistent way to get decimal numbers with a requested precision while avoiding scientific notation. Sometimes N[] works, other times DecimalForm[] or NumberForm[] works, and at other times nothing seems to work.
Have I missed something or are there bugs in the system?
It isn't a bug, because it is purposefully designed to behave this way. Precision is limited by the precision of your machine, your configuration of Mathematica, and the algorithm and performance constraints of the calculation.
The documentation for N[expr, n] states that it attempts to give a result with n-digit precision. When it cannot give the requested precision, it gets as close as it can. DecimalForm and NumberForm work the same way.
https://reference.wolfram.com/language/ref/N.html explains the various cases behind this:
Unless numbers in expr are exact, or of sufficiently high precision, N[expr,n] may not be able to give results with n‐digit precision.
N[expr,n] may internally do computations to more than n digits of precision.
$MaxExtraPrecision specifies the maximum number of extra digits of precision that will ever be used internally.
The precision n is given in decimal digits; it need not be an integer.
n must lie between $MinPrecision and $MaxPrecision. $MaxPrecision can be set to Infinity.
n can be smaller than $MachinePrecision.
N[expr] gives a machine‐precision number, so long as its magnitude is between $MinMachineNumber and $MaxMachineNumber.
N[expr] is equivalent to N[expr,MachinePrecision].
N[0] gives the number 0. with machine precision.
N converts all nonzero numbers to Real or Complex form.
N converts each successive argument of any function it encounters to numerical form, unless the head of the function has an attribute such as NHoldAll.
You can define numerical values of functions using N[f[args]]:=value and N[f[args],n]:=value.
N[expr,{p,a}] attempts to generate a result with precision at most p and accuracy at most a.
N[expr,{Infinity,a}] attempts to generate a result with accuracy a.
N[expr,{Infinity,1}] attempts to find a numerical approximation to the integer part of expr.
I'm a bit confused about the local-space coordinate system. Suppose I have a complex object in local space. I know that when I want to put it in world space I have to multiply it by scale, rotate, and translate matrices. But the problem is that local coordinates range only from -1.0f to 1.0f; when I want to have a vertex like (1/500, 1/100, 1/100), things will not work, because everything will become 0 due to the float accuracy problem.
The only solution I can see now is to separate the object into lots of local-space systems and apply the projection/view to each individually to put them together. That does not seem like the correct way to solve the problem. I've checked lots of books, but none of them mention this issue. I really want to know how to solve it.
when I want to have vertex like (1/500,1/100,1/100) things will not work
What makes you think that? The float accuracy problem does not mean a value will collapse to 0 if it can't be accurately represented. It just means it will round to the floating-point number closest to the intended figure.
It's the very same as writing down, e.g., 3/9 with at most 6 significant decimal digits: 0.333333. It didn't collapse to 0, and the very same goes for floating point.
Now you may be familiar with scientific notation: x·10^y. This is essentially decimal floating point: a mantissa x and an exponent y that specifies the order of magnitude. In binary floating point it becomes x·2^y. In either case the significant digits live in the mantissa. Your typical floating-point number (in OpenGL) has a 23-bit mantissa plus an implicit leading bit, which boils down to 24 significant binary digits, or about 7 decimal digits.
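You can check this granularity with numpy's float32 (an assumption here, used as a stand-in for OpenGL's 32-bit float):

import numpy as np

print(np.float32(1.0 / 500.0))  # 0.002: rounded to the nearest float, not 0
print(np.float32(1.0) + np.float32(1e-8) == np.float32(1.0))  # True: 1e-8 is below the 24-bit step at 1.0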
I really want to know how to solve it.
The real trouble with floating-point numbers arises when you have to mix and merge numbers across a large range of orders of magnitude. As long as the numbers are of a similar order of magnitude, everything happens within the mantissa. And that one last change in order of magnitude to bring things into the [-1, 1] range will not hurt you; heck, it can be done by "normalizing" the floating-point value and then simply dropping the exponent.
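A quick illustration of the mixing problem, with Python doubles:

print(1e16 + 1.0 == 1e16)  # True: at this magnitude, 1.0 is lost to rounding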
Recommended read: http://floating-point-gui.de/
Update
One further thing: if you're writing 1/500 in a language like C, then you're performing an integer division, which will of course round down to 0. If you want this to be a floating-point operation, you either have to write floating-point literals or cast to float, i.e.
1./500.
or
(float)1/(float)500
Note that casting one of the operands to float suffices to make this a floating point division.
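For comparison, in Python 3 the / operator always performs true division, while // is integer division:

print(1 / 500)   # 0.002
print(1 // 500)  # 0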
How can I implement code in Verilog that evaluates an exponential equation whose numbers must be represented in fixed point?
For example, I have this equation in C++ and wish to convert it to Verilog or VHDL:
double y = 0.1+0.75*(1.0/(1.0+exp((x[i]+40.5)/6.0)));
Here 'y' and 'x' must be fixed-point numbers, and 'x' is also a vector.
I looked for modules and libraries that support fixed point, but none of them have exponentials.
Verilog has a real data type that provides simulation-time support for floating-point numbers. It also has an exponentiation operator, e.g., a ** b computes a to the power of b.
However, code written using the real datatype is generally not synthesizable. Instead, in real hardware designs, support for fixed and floating point numbers is generally achieved by implementing arithmetic logic units that implement, e.g., the IEEE floating point standard.
Most of the time, such a design will require at least a couple of cycles even for basic operations like addition and multiplication. More complex operations like division, sine, cosine, etc. are generally implemented using approximating polynomials.
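To get a feel for the approach before writing RTL, here is a minimal Python sketch (not Verilog) of your equation in Q16.16 fixed point with a lookup-table exp; the Q format, table range, and resolution are illustrative assumptions, not part of the question:

import math

FRAC_BITS = 16
ONE = 1 << FRAC_BITS  # 1.0 in Q16.16

# Precomputed exp table in steps of 1/16; in hardware this would be a ROM.
EXP_TABLE = {i: int(round(math.exp(i / 16.0) * ONE)) for i in range(-256, 257)}

def y_fixed(x_fixed):
    # t = (x + 40.5) / 6.0, computed in Q16.16 integer arithmetic
    t = ((x_fixed + int(40.5 * ONE)) * ONE) // int(6.0 * ONE)
    # Quantize t to the table's 1/16 resolution and clamp to its range.
    idx = max(-256, min(256, t >> (FRAC_BITS - 4)))
    # y = 0.1 + 0.75 * (1 / (1 + exp(t)))
    inv = (ONE * ONE) // (ONE + EXP_TABLE[idx])
    return int(0.1 * ONE) + (int(0.75 * ONE) * inv) // ONE

print(y_fixed(int(-40.5 * ONE)) / ONE)  # ~0.475, i.e. 0.1 + 0.75 * 0.5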
If you really want to understand how to represent and manipulate fixed point and floating point numbers, you should probably get a textbook for a mathematics course such as Numerical Methods, or an EE course on Computer Arithmetic.