Having issues solving pair of nonlinear equations using Python - python-3.x

So I know there have been plenty of questions/answers on this topic, but I haven't been able to locate exactly what is going wrong in my attempts. I have two nonlinear functions f(x,y) and g(x,y) and I am trying to solve the system
f(x,y) - g(x,y) = 0
f(x,y) + g(x,y) = c
where c is some positive constant. I have been using the snippet described in the answer to this question: How to solve a pair of nonlinear equations using Python?, but I am facing issues. If I run that snippet for my code, it returns the x and y values such that only the second equation in the system is satisfied, i.e. it returns x and y such that f(x,y) + g(x,y) = c, while for the other equation it holds that f(x,y) - g(x,y) != 0. I get the exact same issues when using the scipy.optimize.root function. I'm quite lost as to what could be causing this issue. Could it mean that there do not exist x, y such that both equations are satisfied?
Thanks in advance for any help!

It is very possible that there is no solution. x + y = 10, x + y = 20 has no solution, for example. This isn't an issue of non-linearity; this is an issue of math. Also, if the system can't be solved algebraically, it may be that the best the solver can do is make f(x,y) - g(x,y) approximately zero. If f(x,y) - g(x,y) = 0.0001, would you consider that close enough?
For completeness: check out the math, as noted by @tstanisl. If you add the equations you get f(x,y) = c/2, and if you subtract them you get g(x,y) = c/2, which are easier to solve.
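As a minimal sketch with scipy (the f and g below are placeholders, not the asker's functions), you can solve either form and then check the residuals of the original system to see whether a genuine solution was found:

import numpy as np
from scipy.optimize import root

c = 2.0  # the positive constant from the question

# Placeholder nonlinear functions -- substitute your own f and g here.
def f(x, y):
    return x**2 + y

def g(x, y):
    return np.exp(x) - y

def system(vars):
    x, y = vars
    return [f(x, y) - g(x, y),      # should be 0
            f(x, y) + g(x, y) - c]  # should be 0

sol = root(system, x0=[0.5, 0.5])
print(sol.x, sol.success)
# Always check the residuals: if either component is far from zero,
# the solver did not find a point satisfying both equations.
print(system(sol.x))

# Equivalent decoupled form: solve f(x,y) = c/2 and g(x,y) = c/2.
sol2 = root(lambda v: [f(*v) - c/2, g(*v) - c/2], x0=[0.5, 0.5])
print(sol2.x, system(sol2.x))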

Related

Is there any method/solver in python to solve embedded derivatives in a ODE equation?

I've got this equation from a mathematical model describing the thermal behavior of a battery.
dTsdt = Ts*a + Ta*b + dTadt*c + d
However, I can't solve it due to the nested derivatives.
I need to solve the equation for Ts and Ta.
I tried to define it as follows, but Python does not like it and several errors show up.
I'm using scipy.integrate and the solver odeint.
Since the model takes data from vectors, it has to be solved for every time step, recording the output accordingly.
I also tried assigning the derivatives to variables v1 and v2 and then putting everything in an equation without derivatives, as in the second approach shown below.
def Tmodel(z,t,a,b,c,d):
    Ts,Ta= z
    dTsdt = Ts*a+ Ta*b + dTadt*c+ d
    dzdt=[dTsdt]
    return dzdt
z0=[0,0]
# solve ODE
for i in range(0,n-1):
   
    tspan = [t[i],t[i+1]]
    # solve for next step
    z = odeint(Tmodel,z0,tspan,arg=(a[i],b[i],c[i],d[i],))
    # store solution for plotting
    Ts[i] = z[1][0]
    Ta[i] = z[1][1]
    # next initial condition
    z0 = z[1]
def Tmodel(z,t,a,b,c,d):
    Ts,v1,Ta,v2 = z
    # v1 = dTsdt
    # v2 = dTadt
    v1 = Ts*a + Ta*b + v2*c + d
    dzdt = [v1,v2]
    return dzdt
That did not work either. I believe there might be a solver capable of handling that equation, or the equation must be decoupled in some way and solved accordingly.
Any advice on how to solve such an equation with Python would be appreciated.
Best regards,
MM
Your difficulty seems to be that you are given Ta in a form with no easy derivative, so you do not know how to obtain dTadt. One solution is to avoid this derivative completely and solve the system for y = Ts - c*Ta. Substitute Ts = y + c*Ta on the right side to get
dy/dt = y*a + Ta*(b + c*a) + d
Of course, this then requires a post-processing step Ts = y + c*Ta to recover the requested variable.
If Ta is given as a function table, use an interpolation function to get values at whatever times t the ODE solver demands.
from scipy.interpolate import interp1d
from scipy.integrate import odeint

Ta_func = interp1d(Ta_times, Ta_values)

def Tmodel(y, t, a, b, c, d):
    Ta = Ta_func(t)
    dydt = y*a + Ta*(b + c*a) + d
    return dydt

y[0] = Ts0 - c[0]*Ta_func(t[0])
for i in range(len(t)-1):
    y[i+1] = odeint(Tmodel, y[i], t[i:i+2], args=(a[i], b[i], c[i], d[i]))[-1, 0]

# post-processing step to recover Ts
Ts = y + c*Ta_func(t)
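For reference, a self-contained toy run of the same scheme; the data below (Ta_times, Ta_values, the coefficient vectors and Ts0) is invented purely to make the sketch executable and should be replaced by the real measurement vectors:

import numpy as np
from scipy.interpolate import interp1d
from scipy.integrate import odeint

# Invented toy data -- replace with the real vectors from the model.
t = np.linspace(0.0, 10.0, 51)
Ta_times, Ta_values = t, 20.0 + 2.0*np.sin(0.3*t)   # ambient temperature table
a = np.full(len(t), -0.05)                          # per-step coefficients
b = np.full(len(t), 0.04)
c = np.full(len(t), 0.5)
d = np.full(len(t), 0.1)
Ts0 = 25.0

# allow the solver to peek slightly outside the table
Ta_func = interp1d(Ta_times, Ta_values, fill_value="extrapolate")

def Tmodel(y, time, ai, bi, ci, di):
    Ta = Ta_func(time)
    return y*ai + Ta*(bi + ci*ai) + di

y = np.empty(len(t))
y[0] = Ts0 - c[0]*Ta_func(t[0])
for i in range(len(t) - 1):
    y[i+1] = odeint(Tmodel, y[i], t[i:i+2],
                    args=(a[i], b[i], c[i], d[i]))[-1, 0]

Ts = y + c*Ta_func(t)   # post-processing back to Ts
print(Ts[:5])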

Why does my fit for a logarithm function look so wrong?

I'm plotting this dataset and making a logarithmic fit, but, for some reason, the fit comes out badly wrong. At some point I got a good enough fit, but then I re-plotted and got that bad fit again. At the very beginning there was a 0.0 0.0076 data point, but I changed that to 0.001 0.0076 to avoid the asymptote.
I'm using this for the fit (not exactly this one for the image above, but now I'm testing with this one and it gives that bad fit as well):
f(x) = a*log(k*x + b)
fit f(x) 'R_B/R_B.txt' via a, k, b
And the output is this
Also, sometimes it says 7 iterations were done, as in the screenshot above, other times only 1, and when it did the "correct" fit it took something like 35 iterations and got a = 32, if I remember correctly.
Edit: here is the good one again; the plot I got is this one. And again, I re-plotted and got that weird fit. Curiously, when the 0.0 0.0076 point is present and the good fit is about to be produced, gnuplot says "Undefined value during function evaluation", but that message does not appear when I get the bad one.
Do you know why I keep getting this inconsistency? Thanks for your help.
As I already mentioned in the comments, the method of fitting antiderivatives is much better than fitting derivatives, because numerically computed derivatives are strongly scattered when the data is even slightly scattered.
The principle of the method of fitting an integral equation (obtained from the original equation to be fitted) is explained in https://fr.scribd.com/doc/14674814/Regressions-et-equations-integrales . The application to the case of y=a.ln(c.x+b) is shown below.
Numerical calculus :
In order to get an even better result (according to some specified fitting criterion), one can use the above values of the parameters as initial values for an iterative method of nonlinear regression implemented in some convenient software.
NOTE : The integral equation used in the present case is :
NOTE : On the above figure one can compare the result with the method of fitting an integral equation to the result with the method of fitting with derivatives.
Acknowledgements: Alex Sveshnikov did very good work in applying the method of regression with derivatives. This allows an interesting and enlightening comparison. If the goal is only to compute approximate values of the parameters to be used in nonlinear regression software, both methods are quite equivalent. Nevertheless, the method with the integral equation appears preferable in the case of scattered data.
UPDATE (After Alex Sveshnikov updated his answer)
The figure below was drawn using Alex Sveshnikov's result with a further iterative method of fitting.
The two curves are almost indistinguishable. This shows that (in the present case) the method of fitting the integral equation is almost sufficient without further treatment.
Of course this is not always so satisfying; here it is due to the low scatter of the data.
In ADDITION, an answer to a question raised in comments by CosmeticMichu:
The problem here is that the fit algorithm starts with "wrong" approximations for parameters a, k, and b, so during the minimization it finds a local minimum, not the global one. You can improve the result if you provide the algorithm with starting values which are close to the optimal ones. For example, let's start with the following parameters:
gnuplot> a=47.5087
gnuplot> k=0.226
gnuplot> b=1.0016
gnuplot> f(x)=a*log(k*x+b)
gnuplot> fit f(x) 'R_B.txt' via a,k,b
....
....
....
After 40 iterations the fit converged.
final sum of squares of residuals : 16.2185
rel. change during last iteration : -7.6943e-06
degrees of freedom (FIT_NDF) : 18
rms of residuals (FIT_STDFIT) = sqrt(WSSR/ndf) : 0.949225
variance of residuals (reduced chisquare) = WSSR/ndf : 0.901027
Final set of parameters Asymptotic Standard Error
======================= ==========================
a = 35.0415 +/- 2.302 (6.57%)
k = 0.372381 +/- 0.0461 (12.38%)
b = 1.07012 +/- 0.02016 (1.884%)
correlation matrix of the fit parameters:
a k b
a 1.000
k -0.994 1.000
b 0.467 -0.531 1.000
The resulting plot is
Now the question is how you can find "good" initial approximations for your parameters. Well, you start with
y = a*log(k*x + b)
If you differentiate this equation you get
dy/dx = a*k/(k*x + b)
or
1/(a*k) = (dx/dy)/(k*x + b)
The left-hand side of this equation is some constant 'C', so the expression on the right-hand side must equal this constant as well:
dx/dy = C*k*x + C*b
In other words, the reciprocal of the derivative of your data should be approximated by a linear function. So, from your data x[i], y[i] you can construct the reciprocal derivatives x[i], (x[i+1]-x[i])/(y[i+1]-y[i]) and make a linear fit of these data:
The fit gives the following values:
C*k = 0.0236179
C*b = 0.106268
Now we need to find the values of a and C. Let's say that we want the resulting graph to pass close to the starting and the ending point of our dataset. That means that we want
a*log(k*x1 + b) = y1
a*log(k*xn + b) = yn
Thus,
a*log((C*k*x1 + C*b)/C) = a*log(C*k*x1 + C*b) - a*log(C) = y1
a*log((C*k*xn + C*b)/C) = a*log(C*k*xn + C*b) - a*log(C) = yn
By subtracting the equations we get the value for a:
a = (yn-y1)/log((C*k*xn + C*b)/(C*k*x1 + C*b)) = 47.51
Then,
log(k*x1+b) = y1/a
k*x1+b = exp(y1/a)
C*k*x1+C*b = C*exp(y1/a)
From this we can calculate C:
C = (C*k*x1+C*b)/exp(y1/a)
and finally find k and b:
k=0.226
b=1.0016
These are the values used above for finding the better fit.
UPDATE
You can automate the process described above with the following script:
# Name of the file with the data
data='R_B.txt'
# The coordinates of the last data point
xn=NaN
yn=NaN
# The temporary coordinates of a data point used to calculate a derivative
x0=NaN
y0=NaN
linearFit(x)=Ck*x+Cb
fit linearFit(x) data using (xn=$1,dx=$1-x0,x0=$1,$1):(yn=$2,dy=$2-y0,y0=$2,dx/dy) via Ck, Cb
# The coordinates of the first data point
x1=NaN
y1=NaN
plot data using (x1=$1):(y1=$2) every ::0::0
a=(yn-y1)/log((Ck*xn+Cb)/(Ck*x1+Cb))
C=(Ck*x1+Cb)/exp(y1/a)
k=Ck/C
b=Cb/C
f(x)=a*log(k*x+b)
fit f(x) data via a,k,b
plot data, f(x)
pause -1
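If you prefer to do the same thing outside gnuplot, here is a rough Python/scipy transcription of the procedure above. The file name and two-column layout (x, then y) are assumed to match the gnuplot example, and y is assumed monotonic so the finite differences are nonzero:

import numpy as np
from scipy.optimize import curve_fit

# Assumes a two-column text file like the gnuplot example: x  y
x, y = np.loadtxt("R_B.txt", unpack=True)

# Linear fit of the reciprocal finite differences: dx/dy ~ Ck*x + Cb
dxdy = np.diff(x) / np.diff(y)
Ck, Cb = np.polyfit(x[:-1], dxdy, 1)

# Initial guesses following the derivation above
a0 = (y[-1] - y[0]) / np.log((Ck*x[-1] + Cb) / (Ck*x[0] + Cb))
C = (Ck*x[0] + Cb) / np.exp(y[0] / a0)
k0, b0 = Ck / C, Cb / C

# Refine with a nonlinear least-squares fit
def f(x, a, k, b):
    return a * np.log(k*x + b)

popt, pcov = curve_fit(f, x, y, p0=[a0, k0, b0])
print("initial guess:", a0, k0, b0)
print("fitted:       ", popt)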

Approximating gradient using python

I have attempted to solve the following problem.
For the calculation of the gradient we are obliged to use an approximate calculation:
I tried to compute it for each vector e of the canonical basis of R^4, using h = 1e-05 for example.
However, I have only made an example for R^2, and I'm not sure if my code is correct even for this case; I also need to adapt the code to the formula in the picture.
import numpy as np

def f(x, y):
    return np.sin(x) + np.cos(y)

def derivative(func, vx, h):
    e = np.array([[1, 0], [0, 1]])  # canonical basis of R^2
    x = vx[0]
    y = vx[1]
    dx = (func(x + e[0]*h, y) - func(x, y)) / h  # directional derivative in x
    dy = (func(x, y + e[1]*h) - func(x, y)) / h  # directional derivative in y
    grad = np.array([dx[0], dy[1]])
    return grad

h = 1e-05
vx = np.array([np.pi, 1])
derivative(f, vx, h)
Results of this code:
In [150]: derivative(f,vx,h)
Out[150]: array([-1. , -0.84147369])
I am a little confused about how to do this problem, but I was hoping to get some help fixing the code I have produced so far. Thanks!
Review section 4.6 - Systems of Equations of the text below:
Numerical Methods in Engineering with Python 3 (3rd ed.)
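Not from that book, but as a rough sketch of the same idea generalized to R^n (so it covers the R^4 case from the question), assuming the forward-difference approximation the question describes:

import numpy as np

def approx_gradient(func, v, h=1e-5):
    """Forward-difference gradient of func at point v.

    func takes a 1-D numpy array and returns a scalar;
    one canonical basis vector e_i is perturbed at a time.
    """
    v = np.asarray(v, dtype=float)
    grad = np.empty_like(v)
    f0 = func(v)
    for i in range(v.size):
        e = np.zeros_like(v)
        e[i] = 1.0                      # i-th canonical basis vector
        grad[i] = (func(v + h*e) - f0) / h
    return grad

# Example in R^2, matching the question's f(x, y) = sin(x) + cos(y)
g = approx_gradient(lambda v: np.sin(v[0]) + np.cos(v[1]), [np.pi, 1.0])
print(g)   # close to [cos(pi), -sin(1)] = [-1, -0.8414...]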

How do I solve this exponential equation on Excel Solver?

100e^0.25*y = 97.5
Solving for y
Using Excel Solver
I tried using an empty cell for y in 'By changing cells' and set the objective as the LHS of the above equation (with the empty cell included in the formula) equal to the value 97.5 in Solver.
It gives no solution
How do I do this?
It's a bit ambiguous what you're asking...
Literal math interpretation: 100*(e^0.25)*y = 97.5
Then y = 97.5 / ( 100 * exp(.25)) = .759
My guess of what you want: 100*e^(0.25*y) = 97.5
Then y = ln(97.5/100) / .25 = -.101
Another possibility: (100 * e)^(0.25 * y) = 97.5
Then y = (ln(97.5) / ln(100*e)) / .25 = 3.268
Whatever it is, this doesn't need solver!
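If you want to sanity-check those three closed-form answers outside Excel, a quick Python check (just arithmetic, nothing Solver-specific):

import math

# Interpretation 1: 100*(e^0.25)*y = 97.5
y1 = 97.5 / (100 * math.exp(0.25))
# Interpretation 2: 100*e^(0.25*y) = 97.5
y2 = math.log(97.5 / 100) / 0.25
# Interpretation 3: (100*e)^(0.25*y) = 97.5
y3 = math.log(97.5) / math.log(100 * math.e) / 0.25
print(round(y1, 3), round(y2, 3), round(y3, 3))  # 0.759 -0.101 3.268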
You don't really need Solver. Just rearrange your formula to solve for Y. Since y = b^x is the same as x = log_b(y) (the log of y, with base b),
your formula above is the same as:
Y = log_(100e)(97.5) / 0.25
(Read aloud, that's the log of 97.5, with base 100e, divided by 0.25.)
So, Y = 3.268305672
(Bonus points for someone who can tell me how to format this so the Log looks correct)
The question is "How do I solve this exponential equation on Excel Solver?" which is a fair enough question, as it points to trying to understand how to set up solver.
My interpretation of the equation provided is given in this screenshot ...
The solver dialog box is then set up as follows ...
Of note:
This is a non-linear equation and needs GRG Nonlinear. If you choose LP Simplex, it will not pass the linearity test.
Ensure "Make Unconstrained Variables Non-Negative" is not checked.
It provided this result for me ...
A more precise answer can be obtained by decreasing the "Convergence" value on the GRG Non-Linear Options dialog.
A problem this simple can also be solved using Goal Seek.
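For comparison, what Solver or Goal Seek does numerically corresponds roughly to a one-variable root find; here is a small sketch in Python, assuming the 100*e^(0.25*y) = 97.5 reading of the equation:

import math
from scipy.optimize import brentq

# Residual of 100*e^(0.25*y) - 97.5; Solver/Goal Seek drive this to zero.
def residual(y):
    return 100 * math.exp(0.25 * y) - 97.5

y = brentq(residual, -10, 10)   # bracketing interval chosen by hand
print(y)                        # about -0.101, matching the closed form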

Gnuplot fit of a nested function

What is the proper way in gnuplot to fit a function f(x) having the next form?
f(x) = A*exp(x - B*f(x))
I tried to fit it as any other function using:
fit f(x) "data.txt" via A,B
and the output is just a sentence saying: "stack overflow"
I don't even know how to search for this topic, so any help would be much appreciated.
What is this kind of function called? Nested? Recursive? Implicit?
Thanks
This doesn't only fail for fitting, but also for plotting. You'll have to write down the explicit form of f(x), otherwise gnuplot will loop it until it reaches its recursion limit. One way to do it would be to use a different name:
f(x) = sin(x) # for example
g(x) = A*exp(x - B*f(x))
And now use g(x) to fit, rather than f(x). If you have never declared f(x), then gnuplot doesn't have an expression to work with. In any case, if you want to recursively define a function, you'll at least need to set a recursion limit. Maybe something like this:
f0(x) = x
f1(x) = A*exp(x - B*f0(x))
f2(x) = A*exp(x - B*f1(x))
f3(x) = A*exp(x - B*f2(x))
...
This can be automatically looped:
limit=10
f0(x) = x
do for [i=1:limit] {
    j = i-1
    eval "f".i."(x) = A*exp(x - B*f".j."(x))"
}
Using the expressions above, you set the recursion limit with the limit variable. In any case it must remain a finite number.
That is a recursive function. You need a condition for the recursion to stop, like a maximum number of iterations:
maxiter = 10
f(x, n) = (n > maxiter ? 0 : A*exp(x - B*f(x, n+1)))
fit f(x, 0) "data.txt" via A,B
Of course you must check which value should be returned when the recursion stops (here I used 0).
Thanks for your replies
Discussing this problem with a friend, I found a way around it.
First, this kind of function is called a "transcendental function", which means that f(x) cannot be solved for explicitly; however, the variable x can be solved for as a function of f(x), and it has the following form
x = B*f(x) + log(f(x)/A)
Therefore it is possible to define a new function (that is not transcendental)
g(x) = B*x + log(x/A)
From here you can fit the function g(x) to the swapped data, i.e. x as a function of y. Using gnuplot the fitting can be done as
fit g(x) "data.txt" using ($2):($1) via A,B
Hope this will help someone else
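The same inversion trick carried over to Python/scipy, in case someone wants it outside gnuplot; the two-column file name and column order are assumed to match the gnuplot command above (first column x, second column y), and all y values are assumed positive so the log is defined:

import numpy as np
from scipy.optimize import curve_fit

# Assumes data.txt has two columns: x in the first, y = f(x) in the second.
x, y = np.loadtxt("data.txt", unpack=True)

# Inverted model: x as a function of y (requires y > 0 for the log)
def g(y, A, B):
    return B*y + np.log(y/A)

popt, pcov = curve_fit(g, y, x, p0=[1.0, 1.0])   # p0 is a rough starting guess
A, B = popt
print(A, B)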
