I am struggling to understand how scipy.integrate.solve_ivp() handles errors in a system of ODEs. Let's say I have the following simple code for a single ODE; I think I might be doing things wrong in some way. Say my rhs looks something like:
from scipy.integrate import solve_ivp

def rhs_func(t, y):
    z = 1.0 / (x - y + 1j)   # x is a grid defined elsewhere (e.g. passed in via args)
    return z
Suppose we call solve_ivp with the following signature:
Z_solution = ivp_adaptive.solve_ivp(fun=rhs_func,
                                    t_span=[100, 0],
                                    y0=y0,          # some initial value, 0 for example
                                    method='RK45',
                                    t_eval=None,
                                    args=some_additional_arguments_to_rhs_func,
                                    dense_output=False,
                                    rtol=1e-8,
                                    atol=1e-10)
Now, the absolute and relative tolerances are supposed to bound the error of the calculation. The problem I am having has to do with "t_eval=None" in this case. Apparently, this choice lets the integrator (here of type RK45) choose the time step according to whether the specified tolerances are exceeded or not, i.e. the steps are not fixed; roughly speaking, taking a larger step in t means a solution has been found whose estimated local error lies below the tolerances above (atol=1e-10, rtol=1e-8). This is particularly useful in problems with large variations of the time scale, where a uniform discretization of t is very inefficient.
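As an aside, this behaviour is easy to see from the returned solution object. A minimal sketch (with a simple exponential-decay ODE standing in for the rhs above), showing that with t_eval=None the returned t array is exactly the sequence of accepted adaptive steps:

import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch: a simple decay ODE, not the rhs_func above.
sol = solve_ivp(lambda t, y: -y, t_span=[0, 10], y0=[1.0],
                method='RK45', t_eval=None, rtol=1e-8, atol=1e-10)

# With t_eval=None, sol.t holds the (non-uniform) time points the RK45 step-size
# controller accepted while keeping the estimated local error below the tolerances.
print(len(sol.t))
print(np.diff(sol.t))   # the step sizes vary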
My big problem has to do with the following piece of code in scipy.integrate._ivp.ivp.solve_ivp(), around line 575, in the "t_eval is None" case:
while status is None:
    message = solver.step()

    if solver.status == 'finished':
        status = 0
    elif solver.status == 'failed':
        status = -1
        break

    t_old = solver.t_old
    t = solver.t
    y = solver.y

    if dense_output:
        sol = solver.dense_output()
        interpolants.append(sol)
    else:
        sol = None

    if events is not None:
        g_new = [event(t, y) for event in events]
        active_events = find_active_events(g, g_new, event_dir)
        if active_events.size > 0:
            if sol is None:
                sol = solver.dense_output()

            root_indices, roots, terminate = handle_events(
                sol, events, active_events, is_terminal, t_old, t)

            for e, te in zip(root_indices, roots):
                t_events[e].append(te)
                y_events[e].append(sol(te))

            if terminate:
                status = 1
                t = roots[-1]
                y = sol(t)

        g = g_new

    # HERE I HAVE MODIFIED THE FILE BY CALLING AN INTERPOLATION FUNCTION FOR THE SOLUTION
    if t_eval is None:
        ts.append(t)
        # ys.append(y)
        # this call adapts the solution to a new set of values x over which
        # y(x, t) is defined
        interp_solution(t, y, solver, args)
        y = solver.y
        ys.append(y)
where I have defined a function:
def interp_solution(t, y, solver, args):
    import numpy as np
    from scipy import interpolate

    x_old = args.get_old_grid()      # this call just returns an array in the style of
                                     # x_new, and is where y is currently defined
    x_new = np.linspace(-t, t, dim)  # the new array where the components of y are
                                     # defined (dim is set elsewhere)
    y_interp = interpolate.interp1d(x_old, y)
    y_new = y_interp(x_new)
    solver.y = y_new                 # update the solver y

    # finally, we change the maximum allowed step of the integrator if t is below
    # some threshold value
    if t < args.get_threshold():
        solver.max_step = ...        # some number

    return y_new
When I look at the results, they seem very sensitive to the tolerances and to the way the integration steps are taken, but I fail to see where errors could come from in this approach. Can anyone explain whether this approach affects the solution and the associated errors, and how one would implement something similar in this fashion? Any help is greatly appreciated.
full code: https://gist.github.com/QuantVI/79a1c164f3017c6a7a2d860e55cf5d5b
TLDR: sum(a3) gives a number like 770, when it should be more like 270 - as in 270 of 1000 trials where the result of drawing 4 balls contained (at least) 2 blue and 1 green.
I've rewritten both my way of creating the sample output and my way of comparing the results twice already. Python has a syntax `all(x in a for x in b)` which I used initially, then changed to something more deliberate to see if there was a change. I still have 750+ trials evaluating to `True`. This is why I reassessed how I was selecting without replacement.
I've tested the draw function on its own with different Hats and was sure it worked.
The expected probability when drawing 4 balls, without replacement, from a hat containing (blue=3, red=2, green=6), and having the outcome contain (blue=2, green=1), i.e. ['blue','blue','green'], is around 27.2%. In my 1000 trials, I get higher than 700, repeatedly.
Is the error in Hat.draw() or is it in experiment()?
Note: Certain things are commented out because I am debugging. Thus I use sum(a3), as experiment is commented out to return things other than the probability right now.
import copy
import random

# Consider using the modules imported above.


class Hat:
    def __init__(self, **kwargs):
        self.d = kwargs
        self.contents = [
            key for key, val in kwargs.items() for num in range(val)
        ]

    def draw(self, num: int) -> list:
        if num >= len(self.contents):
            return self.contents
        else:
            indices = random.sample(range(len(self.contents)), num)
            chosen = [self.contents[idx] for idx in indices]
            # new_contents = [v for i, v in enumerate(self.contents) if i not in indices]
            new_contents = [pair[1] for pair in enumerate(self.contents)
                            if pair[0] not in indices]
            self.contents = new_contents
            return chosen

    def __repr__(self):
        return str(self.contents)


def experiment(hat, expected_balls, num_balls_drawn, num_experiments):
    trials = []
    for n in range(num_experiments):
        copyn = copy.deepcopy(hat)
        result = copyn.draw(num_balls_drawn)
        trials.append(result)
    # trials = [copy.deepcopy(hat).draw(num_balls_drawn) for n in range(num_experiments)]

    expected_contents = [key for key, val in expected_balls.items() for num in range(val)]
    temp_eval = [[o for o in expected_contents if o in trial] for trial in trials]
    temp_compare = [evaled == expected_contents for evaled in temp_eval]
    return expected_contents, temp_eval, temp_compare, trials

    # evaluations = [all(x in trial for x in expected_contents) for trial in trials]
    # if evaluations: prob = sum(evaluations)/len(evaluations)
    # else: prob = 0
    # return prob, expected_contents


# hat3 = Hat(red=5, orange=4, black=1, blue=0, pink=2, striped=9)
# hat4 = Hat(red=1, orange=2, black=3, blue=2)

hat1 = Hat(blue=3, red=2, green=6)
a1, a2, a3, a4 = experiment(hat=hat1, expected_balls={"blue": 2, "green": 1},
                            num_balls_drawn=4, num_experiments=1000)
# actual = probability
# expected = 0.272
# self.assertAlmostEqual(actual, expected, delta=0.01, msg='Expected experiment method to return a different probability.')

hat2 = Hat(yellow=5, red=1, green=3, blue=9, test=1)
b1, b2, b3, b4 = experiment(hat=hat2, expected_balls={"yellow": 2, "blue": 3, "test": 1},
                            num_balls_drawn=20, num_experiments=100)
# actual = probability
# expected = 1.0
# self.assertAlmostEqual(actual, expected, delta=0.01, msg='Expected experiment method to return a different probability.')
The issue is temp_eval = [[o for o in expected_contents if o in trial] for trial in trials]. It will always add both blues to the list, even if only one blue exists in the results of a trial.
However, I couldn't fix the error in a straightforward way. Instead, my fix produced a much lower answer, something less than 0.1, when around 0.27 (270 of 1000 trials) is what I need.
The roundabout solution was to convert lists like ['red', 'green', 'blue', 'green'] into dictionaries by applying dict to a collections.Counter of the list. Then do a key-wise comparison of the values, such as [y[key] <= x.get(key, 0) for key in y.keys()]. In this comparison y is the expected_balls variable and x is the dict built from the Counter. If x doesn't have one of the keys, we get 0, and zero is less than the value of any key in expected_balls.
From there we use functools.reduce to turn the output into a single True or False value. Then we map that functionality (compare all keys and get one T/F value) across all trials.
import collections
import copy
from functools import reduce

def experiment(hat, expected_balls, num_balls_drawn, num_experiments):
    trials = [copy.deepcopy(hat).draw(num_balls_drawn)
              for n in range(num_experiments)]
    trials_kvpairs = [dict(collections.Counter(trial)) for trial in trials]

    def contains(contained: dict, container: dict):
        each = [container.get(key, 0) >= contained[key]
                for key in contained.keys()]
        return reduce(lambda item0, item1: item0 and item1, each)

    trials_success = list(map(lambda t: contains(expected_balls, t), trials_kvpairs))

    # expected_contents = [pair[0] for pair in expected_balls.items() for num in range(pair[1])]
    # temp_eval = [[o for o in trial if o in expected_contents] for trial in trials]
    # temp_compare = [evaled == expected_contents for evaled in temp_eval]
    # if temp_compare: prob = sum(temp_compare)/len(trials)
    # else: prob = 0
    return 'prob', trials_kvpairs, trials_success
When run using experiment(hat=hat1, expected_balls={"blue": 2, "green": 1}, num_balls_drawn=4, num_experiments=1000), the sum of the third part of the output was 276.
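As a rough cross-check (a quick sketch using math.comb, not part of the original code), the exact hypergeometric probability of getting at least 2 blue and at least 1 green when drawing 4 from blue=3, red=2, green=6 can be computed directly, and it lands close to the simulated 276/1000:

from math import comb

total = comb(11, 4)                                   # all ways to draw 4 of the 11 balls
favourable = (comb(3, 2) * comb(6, 1) * comb(2, 1)    # 2 blue, 1 green, 1 red
              + comb(3, 2) * comb(6, 2)               # 2 blue, 2 green
              + comb(3, 3) * comb(6, 1))              # 3 blue, 1 green
print(favourable / total)                             # ~0.264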
I'm trying to write some code to compute the mean, variance, standard deviation, and FWHM, and finally evaluate the Gaussian integral. I've been running into a division-by-zero error that I can't get past, and I would like to know the solution for this.
Where it throws the error, I've tried adding an exception handler as follows:
Average = sum(yvalues) / len(yvalues)
try:
    return sum(yvalues) / len(yvalues)
except ZeroDivisionError:
    return 0
xvalues = []
yvalues = []

def generate():
    for i in range(0, 300):
        a = rand.uniform((float("-inf"), float("inf")))
        b = rand.uniform((float("-inf"), float("inf")))
        xvalues.append(i)
        ### Defining the variable 'y'
        y = a * (b + i)
        yvalues.append(y) + 1

def mean():
    Average = (sum(yvalues)) / (len(yvalues))
    print("The average is", Average)
    return Average

def varience():
    # This calculates the SD and the varience
    s = []
    for i in yvalues:
        z = i - mean()
        z = (np.abs(i - z))**2
        s.append(y)**2
    t = mean()
    v = numpy.sqrt(t)
    print("Answer for Varience is:", v)
    return v
Traceback (most recent call last):
  File "Tuesday.py", line 42, in <module>
    def make_gauss(sigma=varience(), mu=mean(), x = random.uniform((float("inf"))*-1, float("inf"))):
  File "Tuesday.py", line 35, in varience
    t = mean()
  File "Tuesday.py", line 25, in mean
    Average = (sum(yvalues))/(len(yvalues))
ZeroDivisionError: division by zero
There are a few things that are not quite right as people noted above.
import random
import numpy as np
def generate():
    xvalues, yvalues = [], []
    for i in range(0, 300):
        a = random.uniform(-1000, 1000)
        b = random.uniform(-1000, 1000)
        xvalues.append(i)
        ### Defining the variable 'y'
        y = a * (b + i)
        yvalues.append(y)
    return xvalues, yvalues

def mean(yvalues):
    return sum(yvalues) / len(yvalues)

def variance(yvalues):
    # This calculates the SD and the variance
    s = []
    yvalues_mean = mean(yvalues)
    for y in yvalues:
        z = (y - yvalues_mean)**2
        s.append(z)
    t = mean(s)
    return t

def variance2(yvalues):
    yvalues_mean = mean(yvalues)
    return sum((y - yvalues_mean)**2 for y in yvalues) / len(yvalues)
# Generate the xvalues and yvalues
xvalues, yvalues = generate()
# Now do the calculation, based on the passed parameters
mean_yvalues = mean(yvalues)
variance_yvalues = variance(yvalues)
variance_yvalues2 = variance2(yvalues)
print('Mean {} variance {} {}'.format(mean_yvalues, variance_yvalues, variance_yvalues2))
# Using Numpy
np_mean = np.mean(yvalues)
np_var = np.var(yvalues)
print('Numpy: Mean {} variance {}'.format(np_mean, np_var))
The way variance was calculated isn't quite right, but given the comment of "SD and variance" you were probably going to calculate both.
The code above gives 2 (well, 3) ways to do what I understand you were trying to do but I changed a few of the methods to clean them up a bit. generate() returns two lists now. mean() returns the mean, etc. The function variance2() gives an alternative way to calculate the variance but using a list comprehension style.
The last couple of lines are an example using numpy which has all of it built in and, if available, is a great way to go.
The one part that wasn't clear was the random.uniform(float("-inf"), float("inf")), which seems to be an error (?).
You are calling mean before you call generate.
This is obvious since yvalues.append(y) + 1 (in generate) would have caused another error (TypeError) since .append returns None and you can't add 1 to None.
Change yvalues.append(y) + 1 to yvalues.append(y + 1) and then make sure to call generate before you call mean.
Also notice that you have the same error in varience (which should be called variance, btw). s.append(y)**2 should be s.append(y ** 2).
Another error you have is that the stacktrace shows make_gauss(sigma=varience(), mu=mean(), x = random.uniform((float("inf"))*-1, float("inf"))).
I'm pretty sure you don't actually want to call varience and mean on this line, just reference them. So also change that line to make_gauss(sigma=varience, mu=mean, x = random.uniform((float("inf"))*-1, float("inf")))
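A minimal sketch of the first two fixes in context (the finite bounds for random.uniform are placeholders; the original +/- inf bounds don't give meaningful numbers):

import random

xvalues, yvalues = [], []

def generate():
    for i in range(300):
        a = random.uniform(-1000, 1000)   # placeholder finite bounds
        b = random.uniform(-1000, 1000)
        xvalues.append(i)
        yvalues.append(a * (b + i) + 1)   # append(y + 1), not append(y) + 1

def mean():
    return sum(yvalues) / len(yvalues)

generate()       # fill yvalues first...
print(mean())    # ...so len(yvalues) is no longer zero here

# and in make_gauss, pass the functions themselves rather than calling them:
# make_gauss(sigma=varience, mu=mean, ...)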
I am trying to integrate numerically using the Simpson integration rule for f(x) = 2x from 0 to 1, but I keep getting a large error. The desired output is 1, but the output from Python is 1.334. Can someone help me find a solution to this problem?
Thank you.
import numpy as np

def f(x):
    return 2*x

def simpson(f, a, b, n):
    x = np.linspace(a, b, n)
    dx = (b - a) / n
    for i in np.arange(1, n):
        if i % 2 != 0:
            y = 4*f(x)
        elif i % 2 == 0:
            y = 2*f(x)
    return (f(a) + sum(y) + f(x)[-1]) * dx / 3
a = 0
b = 1
n = 1000
ans = simpson(f,a,b,n)
print(ans)
Almost everything is wrong here. x is an array; every time you call f(x), you are evaluating the function over the whole array. As n is even and n-1 is odd, the y left over from the last loop iteration is 4*f(x), and the sum is computed from that.
Then n is the number of segments. The number of points is n+1. A correct implementation is
def simpson(f, a, b, n):
    x = np.linspace(a, b, n+1)
    y = f(x)
    dx = x[1] - x[0]
    return (y[0] + 4*sum(y[1::2]) + 2*sum(y[2:-1:2]) + y[-1]) * dx / 3

simpson(lambda x: 2*x, 0, 1, 1000)
which then correctly returns 1.000. You might want to add a test whether n is even, and increase it by one if it is not, e.g. as sketched below.
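For example, one possible guard (restating the vectorized version above):

import numpy as np

def simpson(f, a, b, n):
    if n % 2 != 0:           # Simpson's rule needs an even number of segments
        n += 1
    x = np.linspace(a, b, n + 1)
    y = f(x)
    dx = x[1] - x[0]
    return (y[0] + 4*sum(y[1::2]) + 2*sum(y[2:-1:2]) + y[-1]) * dx / 3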
If you really want to keep the loop, you need to actually accumulate the sum inside the loop.
def simpson(f, a, b, n):
    dx = (b - a) / n
    res = 0
    for i in range(1, n):
        res += f(a + i*dx) * (2 if i % 2 == 0 else 4)
    return (f(a) + f(b) + res) * dx / 3

simpson(lambda x: 2*x, 0, 1, 1000)
But loops are generally slower than vectorized operations, so if you use numpy, use vectorized operations. Or just use scipy.integrate.simps directly.
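For instance (a short sketch; in recent SciPy versions the same function is exposed as scipy.integrate.simpson):

import numpy as np
from scipy.integrate import simps   # scipy.integrate.simpson in newer releases

x = np.linspace(0, 1, 1001)   # an odd number of samples = an even number of segments
print(simps(2 * x, x))        # ~1.0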
Hello, I am trying to fit a PSF to an image. The background should be approximated by low-order polynomials. If I just take a constant, it works fine:
import numpy as np
from scipy.optimize import least_squares

def fitter(image, trueImage, psf, intensity):
    p0 = [intensity]
    p0.append(np.amin(trueImage) * 4**2)
    meritFun = lambda p: np.ravel(image - (p[0]*psf + p[1]))
    p = least_squares(meritFun, p0, method='trf')
Now I have the issue of how to define the x and y's for my polynomials:
# Does not work!
def fitter(image, trueImage, psf, intensity):
    p0 = [intensity]
    p0.append(np.amin(trueImage) * 4**2)
    p0.append(1)
    p0.append(1)  # some clever initial guess
    meritFun = lambda p: np.ravel(image - (p[0]*psf + p[1] + p[2]*x + p[3]*y))
    p = least_squares(meritFun, p0, method='trf')
x and y are obviously the indices i, j of my image array, but how do I tell my fit routine that?
As Paul Panzer mentioned in the comments, one way of solving this is by using np.ogrid:
def fitter(image, trueImage, psf, intensity):
    # ogrid returns an (n0, 1) column and a (1, n1) row, so the linear terms
    # broadcast against the image
    x, y = np.ogrid[:image.shape[0], :image.shape[1]]
    p0 = [intensity]
    p0.append(np.amin(trueImage) * 4**2)
    p0.append(0)
    p0.append(0)
    meritFun = lambda p: np.ravel(image - (p[0]*psf + p[1] + p[2]*x + p[3]*y))
    p = least_squares(meritFun, p0, method='trf')
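The reason this works is broadcasting: np.ogrid returns an (n0, 1) column vector and a (1, n1) row vector, so the p[2]*x + p[3]*y term expands to the full image shape. A tiny standalone illustration (the array sizes here are arbitrary):

import numpy as np

x, y = np.ogrid[:4, :5]           # shapes (4, 1) and (1, 5)
plane = 0.5 + 0.1*x + 0.2*y       # broadcasts to shape (4, 5), like the background term above
print(plane.shape)                # (4, 5)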
I have some functions for sound processing. Before, everything was single channel, but now I am making it more or less multi-channel.
At this point I have the feeling I am repeating parts of the scripts over and over again.
In this example there are two functions (my original function is longer), but the same thing also happens in single scripts.
My two functions:
import numpy as np

# def FFT(x, fs, *args, **kwargs):
def FFT(x, fs, output='complex'):
    from scipy.fftpack import fft, fftfreq
    N = len(x)
    X = fft(x) / N

    if output == 'complex':
        F = np.linspace(0, N) / (N / fs)
        return(F, X, [])
    elif output == 'ReIm':
        F = np.linspace(0, N) / (N / fs)
        RE = np.real(X)
        IM = np.imag(X)
        return(F, RE, IM)
    elif output == 'AmPh0':
        F = np.linspace(0, (N-1)/2, N/2)
        F = F / (N/fs)
        # N should be int because of nfft
        half_spec = np.int(N / 2)
        AMP = abs(X[0:half_spec])
        PHI = np.arctan(np.real(X[0:half_spec]) / np.imag(X[0:half_spec]))
        return(F, AMP, PHI)
    elif output == 'AmPh':
        half_spec = np.int(N / 2)
        F = np.linspace(1, (N-1)/2, N/2 - 1)
        F = F / (N/fs)
        AMP = abs(X[1:half_spec])
        PHI = np.arctan(np.real(X[1:half_spec]) / np.imag(X[1:half_spec]))
        return(F, AMP, PHI)
def mFFT(x, fs, spectrum='complex'):
    fft_shape = np.shape(x)

    if len(fft_shape) == 1:
        mF, mX1, mX2 = FFT(x, fs, spectrum)
    elif len(fft_shape) == 2:
        if fft_shape[0] < fft_shape[1]:
            pass
        elif fft_shape[0] > fft_shape[1]:
            x = x.T
            fft_shape = np.shape(x)

        mF = mX1 = mX2 = []
        for channel in range(fft_shape[0]):
            si_mF, si_mX1, si_mX2 = FFT(x[channel], fs, spectrum)
            if channel == 0:
                mF = np.append(mF, si_mF)
                mX1 = np.append(mX1, si_mX1)
                mX2 = np.append(mX2, si_mX2)
            else:
                mF = np.vstack((mF, si_mF))
                mX1 = np.vstack((mX1, si_mX1))
                if si_mX2 == []:
                    pass
                else:
                    mX2 = np.vstack((mX2, si_mX2))
    elif len(fft_shape) > 2:
        raise ValueError("Shape of input can't be greater than 2")

    return(mF, mX1, mX2)
The second function is the one with the problem in this case.
The reason for these checks is best understood with an example:
I have recorded 1 second of audio data with 4 microphones,
so I have an ndarray of 4 x 44100 samples.
The FFT works on any even-length array. This means I get a result in both situations (4 x 44100 and 44100 x 4).
For every function after this one I also have two data types: either a complex signal or a tuple of two signals (amplitude and phase)... which creates an extra switch/check in the script.
check the type (tuple or complex data)
check the orientation (and change it)
check the size/shape
run the function and append/stack the result
Are there methods to make this less repetitive? I have this situation in at least 10 functions...
Bert,
The problem, as I understand it, is the repetition of the calls you make to do checks of all sorts. I don't understand all of them, but I'm guessing they are there to format your data so that you can run the FFT on it.
One of the philosophies of programming in Python is "It's easier to ask forgiveness than it is to get permission." [1] This means you should probably try first and then ask forgiveness (try/except). It's much faster to do it this way than to do lots of checks on the values. Also, those who are going to use your program should be able to understand how it works fairly easily; make it easy to read without those checks by separating the business logic from the technical logic. Don't worry, it's not obvious, and the fact that you're asking is a sign you've noticed something isn't right :).
Here is what I would propose for your case (and it's not the perfect solution!):
def mFFT(x, fs, spectrum='complex'):
    # Assume we're correctly aligned when receiving the data
    # :param: x - assume we're multi-channel in the format [channel X soundtrack]
    # also, don't do this:
    # mF = mX1 = mX2 = []
    # see why: https://stackoverflow.com/questions/2402646/python-initializing-multiple-lists-line
    mF = []
    mX1 = []
    mX2 = []
    try:
        for channel in range(len(x)):
            si_mF, si_mX1, si_mX2 = FFT(x[channel], fs, spectrum)
            mF.append(si_mF)
            mX1.append(si_mX1)
            mX2.append(si_mX2)
        return (mF, mX1, mX2)
    except:
        # this is where you would try to point out why it could have failed. One good
        # check you had was for the orientation of the data, so try again:
        if np.shape(x)[0] > np.shape(x)[1]:
            result = mFFT(x.T, fs, spectrum)
            return result
        else:
            if np.shape(x)[0] > 2:
                raise ValueError("Shape of input isn't supported for greater than 2")
Another suggestion (and it really depends on your use case) would be to make a data-structure class that manipulates the FFT data and returns the complex, ReIm, AmPh0, or AmPh form via getters (so you treat the input data as always being time-domain data and you just give the users what they want).
import numpy as np
from scipy.fftpack import fft, fftfreq


class FFT(object):
    def __init__(self, x, fs):
        self.N = len(x)
        self.fs = fs
        self.X = fft(x) / self.N

    def get_complex(self):
        F = np.linspace(0, self.N) / (self.N / self.fs)
        return(F, self.X, [])

    def get_ReIm(self):
        F = np.linspace(0, self.N) / (self.N / self.fs)
        RE, IM = np.real(self.X), np.imag(self.X)
        return(F, RE, IM)

    def get_AmPh0(self):
        F = np.linspace(0, (self.N-1)/2, self.N//2) / (self.N/self.fs)
        # N should be int because of nfft
        half_spec = self.N // 2
        AMP = abs(self.X[:half_spec])
        PHI = np.arctan(np.real(self.X[:half_spec]) / np.imag(self.X[:half_spec]))
        return(F, AMP, PHI)
This can then be called from another class, depending on the desired output, with an eval to get the desired form (but you need to use the same convention across your code ;) ).
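As an illustration, a small hypothetical usage sketch of the class above, picking the getter by name with getattr rather than eval (the signal here is just a made-up test tone):

import numpy as np

fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)        # 1 second of a 440 Hz tone

spec = FFT(x, fs)                      # the class sketched above
output = 'ReIm'                        # or 'complex', 'AmPh0'
F, A, B = getattr(spec, 'get_' + output)()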