Very large float in python - python-3.x

I'm trying to construct a neural network for the MNIST database. When computing the softmax function I receive an error along the lines of "you can't store a float that size".
The code is as follows:
import numpy as np

def softmax(vector):  # REQUIRES a unidimensional numpy array
    adjustedVals = [0] * len(vector)
    totalExp = np.exp(vector)
    print("totalExp equals")
    print(totalExp)
    totalSum = totalExp.sum()
    for i in range(len(vector)):
        adjustedVals[i] = (np.exp(vector[i])) / totalSum
    return adjustedVals  # this throws back an error sometimes?!?!
After some searching, most answers recommend using the decimal module. However, when I experiment with the relevant values on the command line using this module, that is:
from decimal import Decimal
import math
test = Decimal(math.exp(720))
I receive a similar error for math.exp(x) with any x > 709.
OverflowError: (34, 'Numerical result out of range')
My conclusion is that even decimal cannot handle this number. Does anyone know of another method I could use to represent these very large floats?

There is a technique that makes the softmax computation numerically feasible: subtract the maximum value in the vector (let's call it x_max) from each of its elements. If you recall the softmax formula, this operation does not affect the outcome, because it amounts to multiplying both the numerator and the denominator by the same factor e^(-x_max), which cancels out: e^(x_max) / e^(x_max) = 1. This way the largest intermediate value you get is e^(x_max - x_max) = 1, so you avoid the overflow.
For additional explanation I recommend the following article: https://nolanbconaway.github.io/blog/2017/softmax-numpy
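As a minimal sketch of that trick applied to the function from the question (the name stable_softmax and the example values are mine, not from the original post):

import numpy as np

def stable_softmax(vector):  # expects a unidimensional numpy array
    # Subtracting the maximum leaves the result unchanged but keeps every
    # exponent <= 0, so np.exp never overflows.
    shifted = vector - np.max(vector)
    exps = np.exp(shifted)
    return exps / exps.sum()

print(stable_softmax(np.array([10.0, 720.0, 1000.0])))  # works where the naive version overflows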

With a value above 709 the function 'math.exp' exceeds the floating point range and throws this overflow error.
If, instead of math.exp, you use numpy.exp for such large exponents you will see that it evaluates to the special value inf (infinity).
All this apart, I wonder why you would want to produce such a big number (I am not sure you are aware how big it is; just to give you an idea, the number of atoms in the universe is estimated to be around 10 to the power of 80, and the number you are trying to produce is MUCH larger than that).
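For illustration, here is a quick check of both behaviours; the Decimal.exp call at the end is only a possible workaround if you truly need the magnitude, since it never routes the value through a double:

import math
import numpy as np
from decimal import Decimal, getcontext

# math.exp overflows the double-precision range just above 709
try:
    math.exp(710)
except OverflowError as err:
    print(err)              # math range error

print(np.exp(710))          # inf, with a RuntimeWarning about overflow

# decimal's own exp works on arbitrary magnitudes at the requested precision
getcontext().prec = 30
print(Decimal(710).exp())   # roughly 2.23E+308, already past the float limit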

Related

why is np.exp(x) not equal to np.exp(1)**x

Why is np.exp(x) not equal to np.exp(1)**x?
For example:
>>> np.exp(400)
5.221469689764144e+173
>>> np.exp(1)**400
5.221469689764033e+173
>>> np.exp(400) - np.exp(1)**400
1.1093513018771065e+160
This is an optimisation in numpy that gives rise to this difference.
Indeed, you have to understand how Euler's number is calculated in mathematics:
e = (1 + 1/n)**n as n goes to infinity.
I think numpy stops at a certain order:
The numpy exp documentation here is not very clear about how Euler's number is calculated.
Because this order is not infinite, you get this small difference between the two calculations.
Indeed, the value np.exp(400) is then calculated using this: (1 + 400/n)**n
>>> (1 + 400/n)**n
5.221642085428121e+173
>>> numpy.exp(400)
5.221469689764144e+173
Here n = 1000000000000, which is far from infinity, and that gives rise to this difference, on the order of 10**-5 in relative terms.
Indeed, there is no exact value of Euler's number. Like pi, you can only have an approximate value.
It looks like a rounding issue. In the first case it's internally using a very precise value of e, while in the second you get a less precise value, and when that value is raised to the 400th power the precision issues become more apparent.
The actual result when using the Windows calculator is 5.2214696897641439505887630066496e+173, so you can see your first outcome is fine, while the second is not.
5.2214696897641439505887630066496e+173 // calculator
5.221469689764144e+173 // exp(400)
5.221469689764033e+173 // exp(1)**400
Working back from your result, it looks like it's using a value of e with about 15 digits of precision.
2.7182818284590452353602874713527 // e
2.7182818284590450909589085441968 // 400th root of the 2nd result
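To see the amplification effect with nothing but the standard library (a sketch; it uses math and decimal rather than numpy, but the mechanism is the same):

import math
from decimal import Decimal, getcontext

# High-precision reference for e**400 (40 significant digits).
getcontext().prec = 40
print(Decimal(400).exp())

# exp(400) is evaluated directly by the C library and matches the
# reference to roughly the full 16 digits of a double.
print(math.exp(400))

# math.e is e rounded to double precision (relative error ~1e-16);
# raising it to the 400th power amplifies that error roughly 400-fold,
# which is why the last few digits disagree.
print(math.e ** 400)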

What is the error on the value corresponding to the maximum of a function?

This is my problem:
The first input is the observed data from MUSE, an astronomical instrument that provides cubes, i.e. an image for each wavelength within a certain range. This means that, taking all the wavelengths corresponding to pixel i,j, I can extract the spectrum for this pixel. Since these images are observations, each pixel also comes with an error.
The second input is a spectrum template, i.e. a model of a spectrum. This template is assumed to be error-free. I map this spectrum to various redshifts (this means multiplying the wavelengths by a factor 1+z, where z belongs to a certain range).
The core of my code is the cross-correlation between the cube, i.e. the spectra extracted from each pixel, and the template mapped to the different redshifts. The result is a cross-correlation value for each pixel and each z; let's call this computed function f(z). Taking, for each pixel, the argmax of f(z), I get the best redshift.
This is a common and widely used process, and indeed it works well.
My question:
Since my input, i.e. the MUSE cube, has errors, I have propagated them through the cross-correlation, obtaining an error on f(z), i.e. each f_i has an error sigma_i. So, how can I compute the error on z_max, the value of z corresponding to the maximum of f?
Maybe a solution could be a bootstrap-style approach: I can draw, within the errors of f, a certain number of realisations of the function, compute the argmax for each of them, and so get an idea of the scatter of z_max.
By the way, I'm using python (3.x) and tensorflow has been used to compute the cross-correlation function.
Thanks!
EDIT
Following #TF_Support's suggestion, I'm adding some code and some figures to make the problem clearer. But before that, a little math may help.
I computed the cross-correlation with an expression of the form C_ik = sum_j S_ij * T_jk / N_ik, with N_ik = sqrt( (sum_j S_ij**2) * (sum_j T_jk**2) ), where S is the spectra, T is the template and N is the normalization coefficient. Since S has an error, I propagated these errors through the previous relation, obtaining an expression for sigma_C (the three terms foo1, foo2 and foo3 in the code below), where SST_k is the sum of the squared template and sigma_ij is the error on S_ij (actually, I should have written sigma_S_ij).
The following function (implemented with TensorFlow 2.1) performs the cross-correlation between the template and the spectra of a batch of pixels, and computes the error on the cross-correlation function:
import tensorflow as tf

@tf.function
def make_xcorr_err1(T, S, sigma_S):
    # S: (batch, Nlambda) spectra, sigma_S: (batch, Nlambda) errors,
    # T: (Nlambda, Nz) template mapped to the Nz redshifts
    sum_spectra_sq = tf.reduce_sum(tf.square(S), 1)    # shape (batch,)
    sum_template_sq = tf.reduce_sum(tf.square(T), 0)   # shape (Nz,)
    norm = tf.sqrt(tf.reshape(sum_spectra_sq, (-1, 1)) * tf.reshape(sum_template_sq, (1, -1)))  # shape (batch, Nz)
    xcorr = tf.matmul(S, T) / norm
    # the three terms of the propagated variance
    foo1 = tf.matmul(sigma_S**2, T**2) / norm**2
    foo2 = xcorr**2 * tf.reshape(sum_template_sq**2, (1, -1)) * tf.reshape(tf.reduce_sum((S * sigma_S)**2, 1), (-1, 1)) / norm**4
    foo3 = -2 * xcorr * tf.reshape(sum_template_sq, (1, -1)) * tf.matmul(S * sigma_S**2, T) / norm**3
    sigma_xcorr = tf.sqrt(tf.maximum(foo1 + foo2 + foo3, 0.))
    return xcorr, sigma_xcorr
Maybe, in order to understand my problem, an image of the output is more important than the code. This is the cross-correlation function for a single pixel, with the maximum value in red; let's call it z_best, i.e. the best cross-correlated value. The figure also shows the 3-sigma errors (the grey limits are +3 sigma and -3 sigma).
If I zoom in near the peak, I get this:
As you can see, the maximum (like any other value) oscillates within a certain range. I would like to find a way to map these fluctuations of the maximum (or the fluctuations around the maximum, or the fluctuations of the whole function) to an error on the value corresponding to the maximum, i.e. an error on z_best.
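A rough sketch of the bootstrap/Monte Carlo idea mentioned above, for a single pixel (the function name, the array names, and the assumption of independent Gaussian errors on each f(z) sample are mine, not from the original code):

import numpy as np

def z_best_scatter(z_grid, f, sigma_f, n_draws=1000, seed=None):
    """Estimate the scatter of z_best by resampling f(z) within its errors."""
    rng = np.random.default_rng(seed)
    # Draw n_draws perturbed versions of the cross-correlation function,
    # treating the errors on the f(z) samples as independent Gaussians.
    draws = f + sigma_f * rng.standard_normal((n_draws, f.size))
    z_best_draws = z_grid[np.argmax(draws, axis=1)]
    # The spread of the argmax positions is an estimate of the error on z_best.
    return z_best_draws.std()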

Solving math with integers larger than any available integer data type

In some programming competitions where the numbers are larger than any available integer data type, we often use strings instead.
Question 1:
Given these large numbers, how to calculate e and f in the below expression?
(a/b) + (c/d) = e/f
note: GCD(e,f) = 1, i.e. they must be in minimised form. For example {e,f} = {1,2} rather than {2,4}.
Also, all a,b,c,d are large numbers known to us.
Question 2:
Can someone also suggest a way to find GCD of two big numbers (bigger than any available integer type)?
I would suggest using full bytes or words rather than strings.
It is relatively easy to think in base 256 instead of base 10 and a lot more efficient for the processor to not do multiplication and division by 10 all the time. Ideally, choose a word size that is half the processor's natural word size, as that makes carry easy to implement. Of course thinking in base 64K or 4G is slightly more complex, but even better than base 256.
The only downside is generating the initial big numbers from the ASCII input, which you get for free in base 10. Using a larger word size you can make this more efficient by processing a number of digits initially into a single word (e.g. 9 digits at a time into 4G), then performing a long multiply of that single word into the correct offset in your large integer format.
A compromise might be to run your engine in base 1 billion: This will still be 9 or 81 times more efficient than using base 10!
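A small sketch of the base-1-billion layout (in Python just for illustration; the names are mine): each list element holds one 9-digit limb, least significant first.

def parse_base_billion(decimal_string):
    # Split a decimal string into base 10**9 limbs, least significant first.
    limbs = []
    end = len(decimal_string)
    while end > 0:
        start = max(0, end - 9)
        limbs.append(int(decimal_string[start:end]))  # one 9-digit chunk per limb
        end = start
    return limbs

print(parse_base_billion("123456789012345678901234567"))
# [901234567, 12345678, 123456789]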
The simplest way to solve this equation is to multiply a/b * d/d and c/d * b/b so they both have the common denominator b*d.
I think you will then need to prime factorise your big numbers e and f to find any common factors. Remember to check each factor again, since the same prime may divide both numbers more than once.
Of course, that means you have to write a prime-generating sieve. You only need to generate factors up to the square root of the smaller of e and f, i.e. a number with about half as many digits.
You could prime factorise b and d to get a lower initial denominator, but you will need to do it again anyway after the addition.
I think that the way to solve this is to separate the problem:
Process the input numbers as an array of characters (i.e. std::string)
Make a class where each object can store an std::list (or similar) that represents one of the large numbers, and can do the needed arithmetic with your data
You can then solve your problems normally, without having to worry about your large inputs causing overflow.
Here's a webpage that explains how you can have such an arithmetic class (with sample code in C++ showing addition).
Once you have such an arithmetic class, you no longer need to worry about how to store the data or any overflow.
I get the impression that you already know how to find the GCD when you don't have overflow issues, but just in case, here's an explanation of finding the GCD (with C++ sample code).
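For reference, the Euclidean algorithm needs nothing more than repeated remainders, so it carries over directly to any big-number representation that supports modulo. A minimal sketch in Python, whose integers are already arbitrary precision:

def gcd(a, b):
    # Euclid's algorithm: repeatedly replace (a, b) by (b, a mod b);
    # when the remainder hits zero, the other value is the GCD.
    while b:
        a, b = b, a % b
    return a

print(gcd(12345678901234567890, 98765432109876543210))  # 900000000090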
As for the specific math problem:
// given formula: a/b + c/d = e/f
// = ( ( a*d + b*c ) / ( b*d ) )
// Define some variables here to save on copying
// (I assume that your class that holds the
// large numbers is called "ARITHMETIC")
ARITHMETIC numerator = a*d + b*c;
ARITHMETIC denominator = b*d;
ARITHMETIC gcd = GCD( numerator , denominator );
// because we know that GCD(e,f) is 1, this implies:
ARITHMETIC e = numerator / gcd;
ARITHMETIC f = denominator / gcd;
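In Python the same computation can be written directly, since the built-in integers never overflow (a sketch; math.gcd plays the role of the GCD routine above):

import math

def add_fractions(a, b, c, d):
    # a/b + c/d = e/f with gcd(e, f) == 1
    numerator = a * d + b * c
    denominator = b * d
    g = math.gcd(numerator, denominator)
    return numerator // g, denominator // g

print(add_fractions(1, 2, 1, 3))  # (5, 6)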

numpy.cov returning a matrix which is not positive semi-definite

I'm calculating a covariance matrix from a 2D array using np.cov, and using it to get nearest neighbors with Mahalanobis distance.
import numpy as np
from sklearn.neighbors import NearestNeighbors

c = np.cov(arr)
neigh = NearestNeighbors(100, metric='mahalanobis', metric_params={'VI': np.linalg.inv(c)})
neigh.fit(dfeatures)
But for some reason, I'm getting
/lib/python3.4/site-packages/sklearn/externals/joblib/parallel.py:131: RuntimeWarning: invalid value encountered in sqrt
and the distance values for any query point come back as nan.
If, instead of passing c to NearestNeighbors, I pass an identity matrix, NearestNeighbors works as expected. I suspect that c might not actually be positive semidefinite, and therefore the sqrt in the Mahalanobis distance might be getting a negative value as input.
I checked the eigenvalues of the resulting c and many of them turned out to be negative (and complex) but close to 0.
I have a few questions:
Is this entirely due to numerical errors (or am I doing something wrong)?
If it is due to numerical errors, is there a way to fix it?
It turns out this is in fact due to numerical error. A workaround is to add a small number to the diagonal elements of the covariance matrix. The larger this number, the closer the distance gets to the Euclidean distance, so one must be careful when choosing it.
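A minimal sketch of that workaround, reusing arr and dfeatures from the question (the value of eps is illustrative and should be tuned):

import numpy as np
from sklearn.neighbors import NearestNeighbors

c = np.cov(arr)
eps = 1e-6                               # small ridge added to the diagonal
c_reg = c + eps * np.eye(c.shape[0])     # pushes tiny negative eigenvalues above zero
neigh = NearestNeighbors(100, metric='mahalanobis',
                         metric_params={'VI': np.linalg.inv(c_reg)})
neigh.fit(dfeatures)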

How do I prevent minimize (via SCIPY) from outputting "optimized" parameters that I have input as guesses?

I am trying to use the minimize function from the scipy module. The full code is too lengthy to post, but the main idea is that there are multiple defined distributions that should be fittable against datasets. The observations per bin are easily calculated from the datasets, whereas the expectations per bin are calculated by a function that uses one argument to specify which distribution should be integrated over bin bounds (where the bin bounds are identical to the histogram bins). There are three functions chisqI where I = 1,2,3 (one for each distribution), each of which inputs specified observations per bin and expectations per bin to output the chi square. Then there are three functions, each of which inputs a chisqI and args to output the minimized function result and optimized parameters. Here, the args are parameters mu and sigma that will be optimized to produce the smallest chi-square. I was able to pass arguments through a chain of functions for one distribution, and am wondering if I need to pass through another arg that specifies which distribution is being dealt with from one function down the chain.
There are different methods that the minimize function can use, like Nelder-Mead or CG. I've been trying to compare results from the different methods to find the one that provides the best fit (where the best fit is defined as the one that produces the smallest chi-square, and hence the largest p-value, when compared to the actual dataset). Interestingly enough, the Nelder-Mead and Powell methods produce the lowest chi square relative to the other methods, but the plotted fit against the histogram of the actual data looks better with the other methods. For the code outputs below, the function value is the negative of the p-value associated with the chi-square value; this is the quantity being minimized. CHISQ_RED is the reduced chi square computed from CHISQ_TOT and the degrees of freedom, and the first and second elements of the x: array are the optimized parameters mu and sigma for a distribution, respectively.
Running the Nelder-Mead minimization method produces the output below.
final_simplex: (array([[ 6.00002802, 0.60020636],
[ 5.99995429, 0.60018798],
[ 6.0000716 , 0.60011127]]), array([ -5.16845821e-21, -5.16838926e-21, -5.16815050e-21]))
fun: -5.1684582072826815e-21
message: 'Optimization terminated successfully.'
nfev: 47
nit: 24
status: 0
success: True
x: array([ 6.00002802, 0.60020636])
CHISQ_TOT = 259.042420419 CHISQ_RED = 3.36418727816
Running the CG minimization method produces the output below.
fun: -4.0964504680695594e-97
jac: array([ 8.72867710e-94, -3.96555507e-93])
message: 'Optimization terminated successfully.'
nfev: 4
nit: 0
njev: 1
status: 0
success: True
x: array([ 6.01921293, 0.54436257])
CHISQ_TOT = 683.781671477 CHISQ_RED = 8.88028144776
Yet, the fit with a higher chi square value looks like a better fit (same dataset in the histogram).
The problem is that every minimization method outputs my guess parameters (mu and sigma) as the optimized parameters. The Nelder-Mead method (smaller chi-square, worse-looking fit) has 47 function evaluations and 24 iterations, whereas the CG method (larger chi-square, better-looking fit) has 4 function evaluations and 0 iterations. I tried to change this by adding extra options to the minimize call (where chisq3 is the pre-defined function of mu and sigma being minimized, and parameterguess is [mu_guess, sigma_guess]):
minimize( chisq3 , parameterguess , method = 'CG', options={'gtol':1e-50, 'maxiter': 100})
If I change my guess values of mu and sigma by adding 2 to each, then the fits become drastically worse (since the original guess values for the parameters were already rather decent). I'm not sure if it's relevant, but the data shown in the plots are adapted from a lognormal distribution by taking the logarithm of each value in my dataset to create a "pseudo-" Gaussian shape/distribution (over logarithmic x axes).
I am guessing that the minimize function via scipy is supposed to do many iterations to be truly successful. So I think adding more iterations should decrease the sensitivity of the minimize function to my initial guess of parameters.
Most importantly, is this a common error using the minimize function via scipy? If so, what are some common fixes for this? Also, why would the minimize function do many iterations and function evaluations only to produce the same result as the input?
The problem was that the chi square being minimized is the sum, over bins, of the squared difference between the expected and observed values, divided by the expected value. The result was a small number divided by a large number, squared, then summed thousands of times, which led to division-by-zero problems and round-off errors. By minimizing a simpler function, such as the chi square without the denominator term, the source of the bug goes away, and one can then compute the proper chi square from the parameters obtained in the fit.
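A minimal sketch of that idea (the data, bins, and function names are hypothetical, not the original code): minimize a plain sum of squared residuals, then evaluate the conventional chi square once at the fitted parameters.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical binned data drawn from a Gaussian, standing in for the real histogram.
rng = np.random.default_rng(0)
observed, bin_edges = np.histogram(rng.normal(6.0, 0.6, size=5000), bins=30)
n_total = observed.sum()

def expected_counts(mu, sigma):
    # Expected counts per bin: Gaussian model integrated over each bin.
    cdf = norm.cdf(bin_edges, loc=mu, scale=sigma)
    return n_total * np.diff(cdf)

def sum_sq_residuals(params):
    # Simpler objective: no division by the expectation, so no tiny
    # denominators and no accumulation of round-off over the bins.
    mu, sigma = params
    return np.sum((observed - expected_counts(mu, sigma)) ** 2)

result = minimize(sum_sq_residuals, x0=[5.0, 1.0], method='Nelder-Mead')
mu_fit, sigma_fit = result.x

# Conventional chi square evaluated once, with the fitted parameters.
exp_fit = expected_counts(mu_fit, sigma_fit)
chisq_tot = np.sum((observed - exp_fit) ** 2 / exp_fit)
print(result.x, chisq_tot)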
