How do rtol and atol in scipy odeint affect the solutions? - python-3.x

I am trying to solve a coupled ODE using odeint. With Solution = odeint(LzG, p0, t, atol=1e-6, rtol=1e-4) the solution is as shown below:
[plot: atol=1e-6, rtol=1e-4]
With atol=1e-4, rtol=1e-4 the solution is:
[plot: atol=1e-4, rtol=1e-4]
How do I know which one is the correct one? Physically, both solutions are possible.
Thank you.
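A standard sanity check (not from the original post) is to tighten the tolerances until the solution stops changing; an answer you can trust should be insensitive to further tightening. A minimal sketch, assuming LzG, p0 and t are defined as in the question:
from scipy.integrate import odeint

# LzG, p0 and t are the RHS function, initial condition and time grid
# from the question; re-solve with successively tighter tolerances.
for tol in (1e-4, 1e-6, 1e-8, 1e-10):
    sol = odeint(LzG, p0, t, atol=tol, rtol=tol)
    print(tol, sol[-1])  # the endpoint should stop changing once converged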

Related

Any regressors in sklearn to deal with separated weights?

Is there any way to use sklearn if the objective function I want to optimize is similar to the ridge problem, except that
the weights, originally simply W, are separated into two subweights U, V such that W = U dot V, and
the same constraint is applied to both subweights?
Just like the expressions I attached below. Thanks in advance.
This is equivalent (up to choosing a different lambda) to ordinary ridge regression.
Let w_i = u_i v_i. For fixed W, the squared-error term is fixed, and minimizing the penalty term gives u_i = v_i = ±sqrt(w_i). The penalty term is then 2 lambda sum(w_i^2), so we're down to ridge with lambda' = 2 lambda.

Solving a system of coupled differential equations with dsolve_system in python (sympy)

I want to solve a system of 4 coupled differential equations with Python (SymPy):
eqs = [Eq(cP1(t).diff(t), k1*cE1(t)**3),
       Eq(cE1(t).diff(t), -k1*cE1(t)**3 + k6*cE3(t)**2),
       Eq(cE2(t).diff(t), -k8*cE2(t)),
       Eq(cE3(t).diff(t), k8*cE2(t) - k6*cE3(t)**2)]
When I try to solve the system with dsolve_system:
solution = dsolve_system(eqs, ics={cP1(0): 0, cE1(0): cE1_0, cE2(0): cE2_0, cE3(0): cE3_0})
I get the answer:
NotImplementedError: The system of ODEs passed cannot be solved by dsolve_system.
Does anyone know what the problem is? Or is there a better way of solving this system of differential equations in SymPy?
Thanks a lot!
It's generally nice and friendly to show the complete code:
In [18]: cP1, cE1, cE2, cE3 = symbols('cP1, cE1:4', cls=Function)
In [19]: t, k1, k6, k8 = symbols('t, k1, k6, k8')
In [20]: eqs = [Eq(cP1(t).diff(t), k1*cE1(t)**3), Eq(cE1(t).diff(t), -k1 * cE1(t)**3 + k6 * cE3(t)**2),
    ...:        Eq(cE2(t).diff(t), -k8 * cE2(t)), Eq(cE3(t).diff(t), k8 * cE2(t) - k6 * cE3(t)**2)]
In [21]: for eq in eqs:
    ...:     pprint(eq)
    ...:
d/dt(cP₁(t)) = k₁⋅cE₁(t)³
d/dt(cE₁(t)) = -k₁⋅cE₁(t)³ + k₆⋅cE₃(t)²
d/dt(cE₂(t)) = -k₈⋅cE₂(t)
d/dt(cE₃(t)) = -k₆⋅cE₃(t)² + k₈⋅cE₂(t)
In [22]: dsolve(eqs)
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
<ipython-input-22-69ab769a7261> in <module>
----> 1 dsolve(eqs)
~/current/sympy/sympy/sympy/solvers/ode/ode.py in dsolve(eq, func, hint, simplify, ics, xi, eta, x0, n, **kwargs)
609 "number of functions being equal to number of equations")
610 if match['type_of_equation'] is None:
--> 611 raise NotImplementedError
612 else:
613 if match['is_linear'] == True:
NotImplementedError:
This means that dsolve cannot yet handle this particular type of system. Note that in general nonlinear systems of ODEs are very unlikely to have analytic solutions (dsolve is for finding analytic solutions; if you want numerical solutions, use something like scipy's odeint).
As nonlinear systems go this one is relatively friendly so it might be possible to solve it. Let's see...
Firstly, there is a conserved quantity (the sum of all 4 variables) that we could use to eliminate one equation. Actually that doesn't help much here, because the first equation is already isolated from the others: if we knew cE1 we could just integrate, although the conserved quantity gives cP1 more easily once the other variables are known.
The structure of the system is like:
cE2 ---> cE3 ---> cE1 ---> cP1
implying that it can be solved as a sequence of single ODEs: solve the 3rd equation for cE2, then the 4th equation for cE3, then use that in the 2nd equation for cE1, and so on. So we can reduce this to a sequence of mostly nonlinear single ODEs. That is still very unlikely to have an analytic solution, but let's give it a try:
In [24]: dsolve(eqs[2], cE2(t))
Out[24]: cE₂(t) = C₁⋅ℯ^(-k₈⋅t)
In [25]: cE2sol = dsolve(eqs[2], cE2(t)).rhs
In [26]: eqs[3].subs(cE2(t), cE2sol)
Out[26]: d/dt(cE₃(t)) = C₁⋅k₈⋅ℯ^(-k₈⋅t) - k₆⋅cE₃(t)²
At this point we could in principle solve for cE3, but sympy doesn't have any way of solving this particular nonlinear ODE: dsolve gives a series solution (I don't think that's what you want), and the only other solver that might handle it is lie_group, but that actually fails.
Since we can't get an expression for the solution for cE3, we also can't go on to solve for cE1 and cP1. The ODE that fails here is a Riccati equation, but this particular type of Riccati equation is not yet implemented in dsolve. It looks like Wolfram Alpha gives an answer in terms of Bessel functions:
https://www.wolframalpha.com/input/?i=solve+dx%2Fdt+%3D+e%5E-t+-+x%5E2
Judging from that I guess it's unlikely that we would be able to solve the next equation. At that point Wolfram Alpha also gives up and says "classification: System of nonlinear differential equations":
https://www.wolframalpha.com/input/?i=solve+dx%2Fdt+%3D+e%5E-t+-+x%5E2%2C+dy%2Fdt+%3D+-y%5E3+%2B+x%5E2
I suspect that there is no analytic solution for your system but you could try numerical solutions or otherwise a more qualitative analysis.
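Since an analytic solution looks out of reach, here is a minimal numerical sketch with scipy's solve_ivp; the rate constants and initial conditions are assumed illustrative values, so replace them with your own:
from scipy.integrate import solve_ivp

k1, k6, k8 = 1.0, 1.0, 1.0   # assumed rate constants
y0 = [0.0, 1.0, 1.0, 0.0]    # cP1(0), cE1(0), cE2(0), cE3(0) (assumed)

def rhs(t, y):
    cP1, cE1, cE2, cE3 = y
    return [k1*cE1**3,
            -k1*cE1**3 + k6*cE3**2,
            -k8*cE2,
            k8*cE2 - k6*cE3**2]

sol = solve_ivp(rhs, (0, 10), y0, dense_output=True)
print(sol.y[:, -1])          # cP1, cE1, cE2, cE3 at t = 10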

Is there a logic error with my implementation of odeint?

I am getting the wrong solution and am unsure whether odeint is the correct tool for solving this system of ODEs.
I am trying to model a simple first-order chemical reaction by solving a system of ODEs. From a logic standpoint my functions are correct, and I can solve this problem in MATLAB with little issue. I would like to be able to do this work in Python as well. I think odeint is the tool for the job, but I could be wrong. My solution should not converge at independent variable = 10 every time, but it always does regardless of the inputs.
from matplotlib.pyplot import (plot,grid,xlabel,ylabel,show,legend)
import numpy as np
from scipy.integrate import odeint
wght= np.linspace(0,20)
# reaction is A -> B
def PBR(fun, W):
    X, y = fun
    P_0 = 20      # bar
    v_0 = 5       # m^3/min
    y_A0 = 1      # unitless
    k = .005      # m^3/kg/min
    alpha = 0.1   # 1/kg
    epi = .13     # unitless
    R = 8.314     # J/mol/K
    F_A0 = .5     # mol/min
    ra = -k * y * (1 - X) / (1 + epi * X)
    dX = (-ra) / F_A0
    dy = -alpha * (1 + epi * X) / (2 * y)
    return [dX, dy]
X0 = 0.0
y0 = 1.0
sol = odeint(PBR, [X0, y0],wght)
plot(wght, sol[:, 0], 'b', label='X')
plot(wght, sol[:, 1], 'g', label='y')
legend(loc='best')
xlabel('W')
grid()
show()
print(sol)
[Output graph]
Your graphic is not reproducible. In Python 3.7 with scipy 1.4.1 I get a divergence to practically infinity at around 12.5, and after cleaning the workspace, both graphs move to zero after time 10.
The problem is the division by 2*y in the second equation. In effect it means that y is the square root of some other function, and that type of function behaves badly numerically at small values, as the tangent becomes vertical, with the value changing to zero shortly after.
However, one can desingularize the division in a simple way
dy = -alpha*(1+epi*X)*y/(1e-10+2*y*y)
or in a more complicated way that leaves the solution essentially unchanged away from zero
dy = -alpha*(1+epi*X)*y/(y*y+max(1e-12,y*y))
resulting in the graph:
[graph of the desingularized solution]
If that is still not the expected behavior, then something else is different in your ODE system relative to the Matlab version.
That the singular point is at about W=10 depends almost completely on alpha=0.1. Since epi is small and X is initially small, the second equation is close to
d(y^2)/dW = 2*y*dy/dW = -alpha  ==>  y^2 = 1 - alpha*W
and this hits zero at about W=10, where the solution has to end, since the square of y cannot take negative values.
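An alternative to desingularizing, sketched here (not part of the original answer), is to let the integrator stop cleanly where y reaches zero by declaring a terminal event in solve_ivp, using the parameter values from the question:
from scipy.integrate import solve_ivp

k, alpha, epi, F_A0 = 0.005, 0.1, 0.13, 0.5

def PBR(W, fun):             # note: solve_ivp passes (t, y), not (y, t)
    X, y = fun
    ra = -k*y*(1 - X)/(1 + epi*X)
    return [-ra/F_A0, -alpha*(1 + epi*X)/(2*y)]

def y_hits_zero(W, fun):     # event fires just before y reaches 0
    return fun[1] - 1e-6
y_hits_zero.terminal = True  # stop the integration at the event

sol = solve_ivp(PBR, (0, 20), [0.0, 1.0], events=y_hits_zero, max_step=0.1)
print(sol.t[-1])             # integration ends near W = 10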

Why does sympy give complex roots when solving cubic equations?

I am using the following code to solve a cubic equation.
from sympy.solvers import solve
from sympy import Symbol
x = Symbol('x')
print(solve(-0.0643820896731566*x**3 + 0.334816369385245*x**2 + 1.08104426781115*x - 2.05750838005246,x))
As it is a cubic equation with real coefficients, it cannot have three non-real complex roots. But it gives the following results.
[-3.19296319480108 - 0.e-22*I, 1.43925417946882 + 0.e-20*I, 6.95416726521169 - 0.e-20*I]
Could someone please tell me what is going wrong? Is there another way to solve the equation that gives real roots?
There is a clear code level and interface level separation between solvers for equations in the complex domain and the real domain. For example, solving e^x = 1 when x is to be solved in the complex domain returns the set of all solutions, that is {2nπi | n ∈ ℤ}, whereas if x is to be solved in the real domain then only {0} is returned.
https://docs.sympy.org/latest/modules/solvers/solveset.html
Instead of solve() you should be using solveset()
from sympy import var, solveset
x = var('x', real=True)
print(solveset(-0.0643820896731566*x**3 + 0.334816369385245*x**2 + 1.08104426781115*x - 2.05750838005246,x))
{-3.19296319480108, 1.43925417946882, 6.95416726521169}
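If only numeric roots are needed, an alternative sketch with numpy (not part of the original answer) that chops the negligible imaginary parts:
import numpy as np

# Coefficients of the cubic from the question, highest degree first.
coeffs = [-0.0643820896731566, 0.334816369385245,
          1.08104426781115, -2.05750838005246]
roots = np.roots(coeffs)
print(np.real_if_close(roots))  # drops imaginary parts that are numerically zero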

(Incremental)PCA's Eigenvectors are not transposed but should be?

When we posted a homework assignment about PCA, we told the course participants to pick any way of calculating the eigenvectors they could find. They found multiple ways: eig, eigh (our favorite was svd). In a later task we told them to use the PCAs from scikit-learn, and were surprised that the results differed a lot more than we expected.
I toyed around a bit and we posted an explanation to the participants that either solution was correct and probably just suffered from numerical instabilities in the algorithms. However, I recently picked that file up again during a discussion with a co-worker, and we quickly figured out that there's an interesting subtle change that makes all results almost equivalent: transpose the eigenvectors obtained from the SVD (and thus from the PCAs).
A bit of code to show this:
def pca_eig(data):
    """Uses numpy.linalg.eig to calculate the PCA."""
    data = data.T @ data  # form the scatter matrix X^T X
    val, vec = np.linalg.eig(data)
    return val, vec
versus
def pca_svd(data):
    """Uses numpy.linalg.svd to calculate the PCA."""
    u, s, v = np.linalg.svd(data)
    return s ** 2, v
Does not yield the same result. Changing the return of pca_svd to s ** 2, v.T, however, works! It makes perfect sense following the definition on Wikipedia: the SVD of X is X = UΣW^T, where
the right singular vectors W of X are equivalent to the eigenvectors of X^T X.
So to get the eigenvectors we need to transpose the output v of np.linalg.svd(...).
Unless there is something else going on? Anyway, PCA and IncrementalPCA both show wrong results (or is eig wrong? after all, transposing that yields the same equality), and looking at the code for PCA reveals that they are doing it as I did initially:
U, S, V = linalg.svd(X, full_matrices=False)
# flip eigenvectors' sign to enforce deterministic output
U, V = svd_flip(U, V)
components_ = V
I created a little gist demonstrating the differences (nbviewer), the first with PCA and IncPCA as they are (also no transposition of the SVD), the second with transposed eigenvectors:
Comparison without transposition of SVD/PCAs (normalized data)
Comparison with transposition of SVD/PCAs (normalized data)
As one can clearly see, in the upper image the results are not really great, while the lower image only differs in some signs, thus mirroring the results here and there.
Is this really wrong and a bug in scikit-learn? More likely I am using the math wrong – but what is right? Can you please help me?
If you look at the documentation, it's pretty clear from the shape that the eigenvectors are in the rows, not the columns.
The point of the sklearn PCA is that you can use the transform method to do the correct transformation.
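To make the row/column convention concrete, a small sketch on assumed toy data comparing eigh's column eigenvectors with sklearn's row-oriented components_:
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X = X - X.mean(axis=0)

# eigh returns eigenvectors as columns (eigenvalues ascending);
# PCA.components_ stores them as rows (explained variance descending).
vals, vecs = np.linalg.eigh(X.T @ X)
pca = PCA(n_components=3).fit(X)

# Agreement up to ordering and sign flips:
print(np.allclose(np.abs(vecs[:, ::-1].T), np.abs(pca.components_)))  # True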
