Solving and Graphing a Nonlinear First-Order ODE - python-3.x

So far I've been semi-successful in solving and graphing the nonlinear ODE dn/dt = n^2 - 2n - 3 for two initial conditions, n(0) = -5 and n(0) = 1, but when I add one last line to the graph with the initial condition n(0) = 10, everything gets wacky, and the graph doesn't look like it's supposed to or behave like the other two lines.
The code is:
import numpy as np
import matplotlib.pyplot as plt
import scipy.integrate
#import warnings
#warnings.simplefilter('ignore')

def func(N, t):
    return N**2 - 2*N - 3

tvec = np.arange(0, 11)
s_1 = scipy.integrate.odeint(func, y0=-5, t=tvec)
s_2 = scipy.integrate.odeint(func, y0=1, t=tvec)
s_3 = scipy.integrate.odeint(func, y0=10, t=tvec)

%matplotlib inline
plt.plot(tvec, s_1, label="N0=-5")
plt.plot(tvec, s_2, label="N0=1")
plt.plot(tvec, s_3, label="N0=10")
plt.ylim(-5, 10)
plt.legend();
The culprit here is s_3.
Any ideas on how to fix this?

Your differential equation has an unstable equilibrium point at N = 3. Any initial condition larger than 3 results in a solution that blows up in finite time. That's the mathematical statement; numerically, the values will become extremely large, and the ODE solver will eventually start generating nonsense. If any of the "nonsense" values happen to end up being less than 3, the "solution" will then converge to the stable equilibrium at N = -1.
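If you want to see the blow-up cleanly instead of letting odeint run into garbage, one option (a sketch of the idea, not the only fix) is to integrate with scipy.integrate.solve_ivp and attach a terminal event that stops the integration once |N| passes some large threshold; the 1e6 cutoff below is an arbitrary choice:
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

def func(t, N):                        # note: solve_ivp expects (t, y) argument order
    return N**2 - 2*N - 3

blowup = lambda t, N: abs(N[0]) - 1e6  # crosses zero when |N| reaches 1e6
blowup.terminal = True                 # stop the integration at that point

for N0 in (-5, 1, 10):
    sol = solve_ivp(func, (0, 10), [N0], events=blowup, dense_output=True)
    tt = np.linspace(0, sol.t[-1], 200)
    plt.plot(tt, sol.sol(tt)[0], label=f"N0={N0}")
plt.ylim(-5, 10)
plt.legend()
plt.show()
This way the curve for N0=10 simply ends shortly before the blow-up time instead of polluting the plot.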

Related

Is it possible to describe with 1 parameter when a wave is sinusoidal or square in Python?

I am using scipy, and I managed to filter the data with the fft package by cutting out the high frequencies, but that only transforms the data; instead, I want to get just one parameter out of the analysis.
Let's have a look at some simple code to explain what I mean:
from scipy import fftpack
import numpy as np
import pandas as pd
from scipy import signal
t = np.linspace(0, 2*np.pi, 100, endpoint=True)
sq1 = signal.square(np.pi*t)
sin1 = np.sin(np.pi*t)
fft_sq1 = fftpack.dct(sq1,norm="ortho")
fft_sin1 = fftpack.dct(sin1, norm="ortho")
After applying the fast Fourier transform (a discrete cosine transform, strictly speaking) I get fft_sq1 and fft_sin1, which are arrays 100 elements long. By manipulating those coefficients I can later use fftpack.idct() and obtain a curve that does not contain noise.
The problem with this is that I get too many frequencies: I get 100 parameters that I have to filter, and only after that do I get the curve back.
Instead of that, I am interested in a filter that returns just one value:
0 if the curve is completely square
1 if the curve is exactly like a sinusoid
Does something come to mind?
Obviously there are infinitely many curves in between: if the periodic signal is flatter, the number will be closer to 0, and if the curve is rounder, the number will be closer to 1.
Thanks!!
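One candidate metric along these lines (purely a sketch of the idea, not something from the thread) is the fraction of the signal's AC power held by its dominant frequency bin: it is about 1.0 for a pure sinusoid and about 8/pi^2 ≈ 0.81 for an ideal square wave, so it separates the two shapes but would still need rescaling to map onto the exact 0-to-1 range asked for:
import numpy as np
from scipy import signal

def purity(x):
    # Fraction of AC power in the dominant frequency bin:
    # ~1.0 for a pure sinusoid, ~0.81 for an ideal square wave.
    x = x - x.mean()               # drop the DC component
    p = np.abs(np.fft.rfft(x))**2  # one-sided power spectrum
    return p.max() / p.sum()

t = np.linspace(0, 10, 1000, endpoint=False)  # 5 whole periods of each wave
print(purity(np.sin(np.pi * t)))              # ~1.0
print(purity(signal.square(np.pi * t)))       # ~0.81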

Unexpected solution using JiTCDDE

I'm trying to investigate the behavior of the following Delayed Differential Equation using Python:
y''(t) = -y(t)/τ^2 - 2y'(t)/τ - Nd*f(y(t-T))/τ^2,
where f is a cut-off function which is essentially equal to the identity when the absolute value of its argument is between 1 and 10 and otherwise is equal to 0 (see figure 1), and Nd, τ and T are constants.
For this I'm using the package JiTCDDE. This provides a reasonable solution to the above equation. Nevertheless, when I try to add noise on the right-hand side of the equation, I obtain a solution which stabilizes to a non-zero constant after a few oscillations. This is not a mathematical solution of the equation (the only possible constant solution being equal to zero). I don't understand why this problem arises or whether it is possible to solve it.
I reproduce my code below. Here, for the sake of simplicity, I substituted the noise with a high-frequency cosine, which is introduced into the system of equations as the initial condition of a dummy variable (the cosine could have been introduced directly into the system, but for a general noise this doesn't seem possible). To simplify the problem further, I also removed the term involving the f function, as the problem arises without it too. Figure 2 shows the plot of the function given by the code.
from jitcdde import jitcdde, y, t
import numpy as np
from matplotlib import pyplot as plt
import math
import symengine
from chspy import CubicHermiteSpline

# Definition of function f (unused below, since the term involving it
# was removed for this minimal example):
Bmin, Bmax = 1., 10.  # cut-off bounds of f, per the description above
def functionf(x):
    return x/4*(1+symengine.erf(x**2-Bmin**2))*(1-symengine.erf(x**2-Bmax**2))

# Parameters:
τ = 42.9
T = 35.33
Nd = 8.32

# Definition of the initial conditions:
dt = .01               # Time step.
totT = 10000.          # Total time.
Nmax = int(totT / dt)  # Number of time steps.
Vt = np.linspace(0., totT, Nmax)  # Vector of times.

# Definition of the "noise"
X = np.zeros(Nmax)
for i in range(Nmax):
    X[i] = math.cos(Vt[i])

past = CubicHermiteSpline(n=3)
for time, datum in zip(Vt, X):
    regular_past = [10., 0.]
    past.append((
        time-totT,
        np.hstack((regular_past, datum)),
        np.zeros(3)
    ))
noise = lambda t: y(2, t-totT)

# Integration of the DDE
g = [
    y(1),
    -y(0)/τ**2 - 2*y(1)/τ + 0.008*noise(t)
]
g.append(0)

DDE = jitcdde(g)
DDE.add_past_points(past)
DDE.adjust_diff()

data = []
for time in np.arange(DDE.t, DDE.t+totT, 1):
    data.append(DDE.integrate(time)[0])
plt.plot(data)
plt.show()
Incidentally, I noticed that even without noise, the solution seems to be discontinuous at the point zero (y is set to be equal to zero for negative times), and I don't understand why.
As the comments unveiled, your problem eventually boiled down to this:
step_on_discontinuities assumes delays that are small with respect to the integration time and performs steps placed at those times where the delayed components point to the integration start (0 in your case). This way initial discontinuities are handled.
However, implementing an input with a delayed dummy variable introduces a large delay into the system, totT in your case.
The respective step for step_on_discontinuities would be at totT itself, i.e., after the desired integration time.
Thus when you reach for time in np.arange(DDE.t, DDE.t+totT, 1): in your code, DDE.t is totT.
Therefore you have made a big step before you actually start integrating and observing, which may look like a discontinuity and lead to weird results; in particular, you do not see the effect of your input, because it has already “ended” at this point.
To avoid this, use adjust_diff or integrate_blindly instead of step_on_discontinuities.
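In code, assuming everything else stays as in the question, the relevant part would look like this sketch (the integrate_blindly arguments are illustrative):
DDE = jitcdde(g)
DDE.add_past_points(past)
# DDE.step_on_discontinuities()  # would jump straight to t = totT, i.e.
#                                # past the whole window to be observed
DDE.adjust_diff()                # smooth out the initial discontinuity instead
# or step over a short initial span with a fixed step size, e.g.:
# DDE.integrate_blindly(DDE.t + 1.0, 0.01)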

Is there a logic error with my implementation of odeint?

I am getting the wrong solution and am unsure whether odeint is the correct tool for solving this system of ODEs.
I am trying to model a simple first-order chemical reaction by solving a system of ODEs. From a logic standpoint my functions are correct, and I can solve this problem in MATLAB with little issue. I would like to be able to do this work in Python as well. I think odeint is the tool for the job, but I could be wrong. My solution should not converge at independent variable = 10 every time, but it always does, regardless of inputs.
from matplotlib.pyplot import plot, grid, xlabel, ylabel, show, legend
import numpy as np
from scipy.integrate import odeint

wght = np.linspace(0, 20)

# reaction is A -> B
def PBR(fun, W):
    X, y = fun
    P_0 = 20     # bar
    v_0 = 5      # m^3/min
    y_A0 = 1     # unitless
    k = .005     # m^3/kg/min
    alpha = 0.1  # 1/kg
    epi = .13    # unitless
    R = 8.314    # J/mol/K
    F_A0 = .5    # mol/min
    ra = -k*y*(1-X)/(1+epi*X)
    dX = (-ra)/F_A0
    dy = -alpha*(1+epi*X)/(2*y)
    return [dX, dy]

X0 = 0.0
y0 = 1.0
sol = odeint(PBR, [X0, y0], wght)

plot(wght, sol[:, 0], 'b', label='X')
plot(wght, sol[:, 1], 'g', label='y')
legend(loc='best')
xlabel('W')
grid()
show()
print(sol)
Output graph
Your graphic is not reproducible. In Python 3.7 with SciPy 1.4.1, I get a divergence to practically infinity at around 12.5, and after cleaning the workspace, both graphs move to zero after time 10.
The problem is your division by 2*y in the second equation. In effect that means that y is the square root of some other function, and that type of function behaves badly numerically at small values, where the tangent becomes vertical just before the function reaches zero.
However, one can desingularize the division in a simple way
dy = -alpha*(1+epi*X)*y/(1e-10+2*y*y)
or in a more complicated way that leaves the solution essentially unchanged far away from zero
dy = -alpha*(1+epi*X)*y/(y*y+max(1e-12,y*y))
resulting in the graph
If that is still not the expected behavior, then something else is different in your ODE system relative to the MATLAB version.
That the singular point is at about W=10 depends almost completely on alpha=0.1. As epi is small and X is initially small, the second equation is close to
d(y^2)/dW = -alpha ==> y^2 = 1 - alpha*W
and this hits zero at about W=10 where the solution would have to end as the square of y can not take negative values.
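Putting the desingularized right-hand side next to the y^2 = 1 - alpha*W approximation (a minimal sketch reusing the constants from the question; only the relevant ones are kept):
import numpy as np
from scipy.integrate import odeint

k, alpha, epi, F_A0 = .005, 0.1, .13, .5
W = np.linspace(0, 20, 201)

def PBR(fun, W):
    X, y = fun
    ra = -k*y*(1-X)/(1+epi*X)
    dX = -ra/F_A0
    dy = -alpha*(1+epi*X)*y/(1e-10 + 2*y*y)  # desingularized division
    return [dX, dy]

sol = odeint(PBR, [0.0, 1.0], W)
approx = np.sqrt(np.clip(1 - alpha*W, 0, None))  # small-X estimate of y
print(np.c_[W[::40], sol[::40, 1], approx[::40]])
The two y columns should agree closely up to W ≈ 10, where the numerical solution then flattens to zero as predicted.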

How do I remove the vertical line that arises from plotting discontinuous data?

I have spent a tiresome amount of time trying to figure out how to remove the vertical line that arises when plotting discontinuous data. In my case I'm trying to plot some data which diverges towards infinity at a given point. I'm using Python 3.6 with matplotlib's pyplot package.
This code produces the same unpleasant flaw:
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(100) * 0.09
y = 1 / (x - 5)
plt.figure(1)
plt.plot(x, y)
plt.show()
Is there anything I can do to remove that line? What is it, I'm not seeing?
Right now, it feels like I've exhausted my options. I've examined the documentation for matplotlib.pyplot.plot and matplotlib.pyplot.scatter, and I am unable to fix this problem, even though it feels like it should be an insanely simple operation (I remember once dealing with this in Maple or MATLAB or something similar: there you simply set the argument discont=True to accomplish this).
Any help would be very much appreciated.
Your data is not discontinuous. You have created a single vector (y) and are asking to plot the entire vector. You can plot individual portions of any vector as long as the size of that vector matches the size of the (x) vector, or create separate vectors.
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(100) * 0.09
y = 1 / (x - 5)
plt.figure(1)
plt.plot(x[:50], y[:50])  # left branch, ends before the pole at x = 5
plt.plot(x[60:], y[60:])  # right branch, starts after the pole
plt.show()
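A slicing-free alternative (a sketch): replace the samples nearest the pole with NaN, which matplotlib skips when drawing lines, so the gap appears automatically:
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(100) * 0.09
y = 1 / (x - 5)
y[np.abs(x - 5) < 0.1] = np.nan  # mask the two samples closest to x = 5
plt.figure(1)
plt.plot(x, y)
plt.show()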

Choice of IMODE in gekko optimisation problems

I'm seeing here that imode=3 is equivalent to the steady-state simulation (which I guess is imode=2) except that additional degrees of freedom are allowed.
How do I decide to use imode=3 instead of imode=2?
I'm doing optimization with imode=2, where the variables calculated by the solver to meet the constraints are defined with m.Var and the others with m.Param. What changes do I need to make to the variables in order to use imode=3?
Niladri,
IMODE 2 is for steady state problems with multiple data points.
Here is an example:
from gekko import GEKKO
import numpy as np
xm = np.array([0,1,2,3,4,5])
ym = np.array([0.1,0.2,0.3,0.5,1.0,0.9])
m = GEKKO()
m.x = m.Param(value=np.linspace(-1,6))
m.y = m.Var()
m.options.IMODE=2
m.cspline(m.x,m.y,xm,ym)
m.solve(disp=False)
This is a cubic spline approximation with multiple data points. When you switch to IMODE 3, it is very similar, but it considers only one instance of your model. All of the value properties should have only one value, such as when you optimize over the cubic spline to find the maximum value.
p = GEKKO()
p.x = p.Var(value=1, lb=0, ub=5)
p.y = p.Var()
p.options.IMODE = 3  # single steady-state instance (gekko's default mode)
p.cspline(p.x, p.y, xm, ym)
p.Obj(-p.y)          # maximize y
p.solve(disp=False)
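Once p.solve() returns, the optimum can be read back from the value lists (a usage note for the sketch above):
print(p.x.value[0], p.y.value[0])  # location and value of the spline maximum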
Here is additional information on IMODE:
https://apmonitor.com/wiki/index.php/Main/OptionApmImode
https://apmonitor.com/wiki/index.php/Main/Modes
https://gekko.readthedocs.io/en/latest/imode.html
Best regards,
John Hedengren
