How to symbolically find the steady state using sympy's nonlinsolve - python-3.x

I have a standard problem in economics, which I would like to solve using sympy's nonlinsolve.
I want to symbolically find the steady state of a model and to that end solve a nonlinear system of equations.
I was happy to see that sympy has nonlinsolve, which is supposed to handle such problems, but so far I have not managed to get it to work.
Below is the part of the problem that does not solve on its own:
from sympy.solvers.solveset import nonlinsolve
from sympy import var
var('c k mc n r_k eta w alpha', rational=True, positive=True)
eq1 = -r_k + alpha * mc * k**(-1 + alpha) * n**(1 - alpha)
eq2 = -w + mc * (1 - alpha) * k**alpha * n**(-alpha)
eq3 = c**-1 * w - n**eta
eq_system = [eq1, eq2, eq3]
nonlinsolve(eq_system, [n, k, w])
alpha and eta are parameters, whereas the other symbols are variables. I know they are all positive rational numbers, but for a reason I do not yet understand the code throws:
ValueError: alpha: Exponent must be a positive Integer
I neither see why alpha must be an integer, nor does declaring it as such help.
Any hint as to how I can get a meaningful solution using sympy?
Many thanks,
Thore
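For reference, the "Exponent must be a positive Integer" error typically comes from sympy's polynomial machinery choking on the symbolic exponent alpha inside k**alpha. One workaround that sometimes helps (a sketch only, not a verified solution of the full system) is to fix alpha at a concrete rational value and solve a reduced subsystem with solve; eq3 is left out here because it still contains the symbolic exponent eta:
from sympy import symbols, Rational, solve

c, k, mc, n, r_k, eta, w = symbols('c k mc n r_k eta w', positive=True)
alpha = Rational(1, 3)   # concrete rational value, chosen purely for illustration

eq1 = -r_k + alpha*mc*k**(alpha - 1)*n**(1 - alpha)
eq2 = -w + mc*(1 - alpha)*k**alpha*n**(-alpha)

# solve the first two conditions for k and w in terms of n, mc and r_k
print(solve([eq1, eq2], [k, w], dict=True))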

Related

Is there any method/solver in python to solve embedded derivatives in a ODE equation?

I've got this equation from a mathematical model describing the thermal behavior of a battery:
dTsdt = Ts * a + Ta * b + dTadt * c + d
However, I can't solve it because of the nested derivatives.
I need to solve the equation for Ts and Ta.
I tried to define it as follows, but Python does not like it and several errors show up.
I'm using scipy.integrate and the solver odeint.
Since the model takes data from vectors, it has to be solved for every time step and the output recorded accordingly.
I also tried assigning the derivatives to variables v1 and v2, and then putting everything into an equation without derivatives, like the second approach shown below.
def Tmodel(z, t, a, b, c, d):
    Ts, Ta = z
    dTsdt = Ts*a + Ta*b + dTadt*c + d   # dTadt is not defined anywhere -- this is the problem
    dzdt = [dTsdt]                      # only one derivative returned although z has two components
    return dzdt

z0 = [0, 0]

# solve ODE
for i in range(0, n-1):
    tspan = [t[i], t[i+1]]
    # solve for next step
    z = odeint(Tmodel, z0, tspan, args=(a[i], b[i], c[i], d[i]))
    # store solution for plotting
    Ts[i] = z[1][0]
    Ta[i] = z[1][1]
    # next initial condition
    z0 = z[1]
def Tmodel(z, t, a, b, c, d):
    Ts, v1, Ta, v2 = z
    # v1 = dTsdt
    # v2 = dTadt
    v1 = Ts*a + Ta*b + v2*c + d
    dzdt = [v1, v2]
    return dzdt
That did not work either. I believe there might be a solver capable of handling such an equation, or else the equation has to be decoupled somehow and solved accordingly.
Any advice on how to solve such an equation with Python would be appreciated.
Best regards,
MM
Your difficulty seems to be that Ta is given in a form with no easy derivative, so you do not know where to take dTadt from. One solution is to avoid this derivative completely and solve the system for y = Ts - c*Ta. Substituting Ts = y + c*Ta on the right side gives
dy/dt = y*a + Ta*(b + c*a) + d
Of course, this requires a post-processing step Ts = y + c*Ta to recover the requested variable.
If Ta is given as a table of values, use an interpolation function to get values at whatever intermediate times t the ODE solver demands:
from scipy.interpolate import interp1d
from scipy.integrate import odeint
import numpy as np

Ta_func = interp1d(Ta_times, Ta_values)

def Tmodel(y, t, a, b, c, d):
    Ta = Ta_func(t)
    dydt = y*a + Ta*(b + c*a) + d
    return dydt

y = np.zeros_like(t)
y[0] = Ts0 - c[0]*Ta_func(t[0])
for i in range(len(t)-1):
    y[i+1] = odeint(Tmodel, y[i], t[i:i+2], args=(a[i], b[i], c[i], d[i]))[-1, 0]
Ts = y + c*Ta_func(t)
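To make the snippet above executable end to end, define the inputs first; the time grid, the coefficient vectors a, b, c, d, the ambient-temperature table and Ts0 below are all made-up placeholders:
import numpy as np

n = 50
t = np.linspace(0.0, 10.0, n)                # time grid
a = np.full(n, -0.5)                         # hypothetical coefficient vectors
b = np.full(n, 0.1)
c = np.full(n, 0.05)
d = np.full(n, 1.0)
Ta_times = t
Ta_values = 20.0 + 5.0*np.sin(0.3*t)         # hypothetical ambient temperature trace
Ts0 = 15.0                                   # hypothetical initial surface temperature
With these definitions in place, running the loop above yields Ts on the same grid as t.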

Wrong result of sympy integration with symbol limits

from sympy import *
s = Symbol("s")
y = Symbol("y")
raw_function = 1/(150.0-0.5*y)
result = integrate(raw_function, (y, 0, s))
The above snippet gives what looks like a wrong result: -2.0*log(0.5*s - 150.0) + 10.0212705881925 + 2.0*I*pi,
but the correct result should be -2.0*log(-0.5*s + 150.0) + 10.0212705881925, so what's wrong?
Are you sure about the correct result? WolframAlpha says it is the same as SymPy's here.
Edit:
This function (and hence the integral) diverges around y = 300, see its plot here (it diverges the same way 1/x does, just offset to y = 300).
This means you are constrained to s < 300 to have a well-defined (and finite) integral. In that range, the value of the integral is equal to what sympy is providing you.
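A quick numerical spot check (using the two expressions exactly as reported in the question) confirms this: for s < 300 the argument 0.5*s - 150.0 is negative, and the 2.0*I*pi term cancels the imaginary part that the log of a negative number introduces, so both expressions evaluate to the same real value.
from sympy import Symbol, log, pi, I

s = Symbol("s")
sympy_result = -2.0*log(0.5*s - 150.0) + 10.0212705881925 + 2.0*I*pi
expected = -2.0*log(-0.5*s + 150.0) + 10.0212705881925

# any value below the singularity at s = 300 gives the same number,
# e.g. both print approximately 0.811 (any leftover imaginary part is numerical noise)
print(sympy_result.subs(s, 100).evalf())
print(expected.subs(s, 100).evalf())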

How is KL-divergence in pytorch code related to the formula?

In the VAE tutorial, the KL divergence between the approximate posterior N(mu, Sigma) and the standard normal prior N(0, I) is defined by
KL( N(mu, Sigma) || N(0, I) ) = 0.5 * ( tr(Sigma) + mu^T mu - k - log det(Sigma) )
And in many implementations, such as here, here and here, the code is written as:
KL_loss = -0.5 * torch.sum(1 + logv - mean.pow(2) - logv.exp())
or
def latent_loss(z_mean, z_stddev):
    mean_sq = z_mean * z_mean
    stddev_sq = z_stddev * z_stddev
    return 0.5 * torch.mean(mean_sq + stddev_sq - torch.log(stddev_sq) - 1)
How are they related? Why is there no "tr" or ".transpose()" in the code?
The expressions in the code you posted assume X is an uncorrelated multivariate Gaussian random variable. This is apparent from the lack of cross terms in the determinant of the covariance matrix. Therefore the mean vector and covariance matrix take the forms
mu = (mu_1, ..., mu_k),    Sigma = diag(sigma_1^2, ..., sigma_k^2)
Using this we can quickly derive the following equivalent representations for the components of the original expression:
tr(Sigma) = sum_i sigma_i^2,    mu^T mu = sum_i mu_i^2,    log det(Sigma) = sum_i log(sigma_i^2)
Substituting these back into the original expression gives
KL = 0.5 * sum_i ( sigma_i^2 + mu_i^2 - 1 - log(sigma_i^2) )
which is exactly what the posted code computes element-wise, so the trace and the transpose reduce to plain sums over the latent dimensions.
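A quick numerical check (with hypothetical random tensors, reusing the variable names from the snippets above) shows the two code versions compute exactly this diagonal-Gaussian formula, differing only in whether they sum or average over the elements:
import torch

torch.manual_seed(0)
mean = torch.randn(4, 8)        # hypothetical batch of latent means
logv = torch.randn(4, 8)        # hypothetical log-variances
std = torch.exp(0.5 * logv)     # corresponding standard deviations

# first snippet: works on log-variances and sums over everything
KL_loss = -0.5 * torch.sum(1 + logv - mean.pow(2) - logv.exp())

# second snippet: works on standard deviations and takes the mean instead
mean_sq = mean * mean
stddev_sq = std * std
latent_loss = 0.5 * torch.mean(mean_sq + stddev_sq - torch.log(stddev_sq) - 1)

# up to the sum-versus-mean normalisation they agree
print(torch.allclose(KL_loss, latent_loss * mean.numel()))   # expected: True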

Sympy Solveset Multi Variable Non-Linear Solutions

I am having a bit of trouble with Sympy's solveset. I am trying to use Sympy to solve a basic circuit analysis problem involving three unknown resistors and two equations. I realize that I will have to guess the value of one of the resistors and then calculate the values of the other two. I am using superposition to solve the circuit.
This is the circuit and superposition I am trying to solve
from sympy import symbols, Eq, solveset
V_outa, V_outb, R_1, R_2, R_3, V_1, V_2 = symbols('V_outa V_outb R_1 R_2 R_3 V_1 V_2')
### Here are a bunch of variable definitions.
R_eq12 = R_2*R_3/(R_2+R_3)
R_eq123 = R_eq12 + R_1
V_outa = V_1 *R_eq12/R_eq123
R_eq13 = R_1*R_3/(R_1+R_3)
R_eq123b = R_2 + R_eq13
V_outb = V_2 * R_eq13/R_eq123b
### Here is my governing equation.
V_out = 0.5* V_outb + (1/6) *V_outa
### Now I can start setting up the equations to solve. This sets the
### coefficient of the V1 term equal to the 1/2 in the governing equation
### and the coefficient of the V2 term equal to 1/6. I have also guessed
### that R_3 is equal to 10 ohms.
eq1 = Eq(1.0/2.0, V_out.coeff(V_2).subs(R_3, 10))
eq2 = Eq(1.0/6.0, V_out.coeff(V_2).subs(R_3, 10))
#### Now when I try to solve eq1 and eq2 with solveset, I get an error.
solveset([eq1, eq2], (R_2, R_3))
And here is the error I get:
ValueError: [Eq(0.500000000000000, 10*R_1/((R_1 + 10)*(10*R_1/(R_1 + 10) + R_2))),
Eq(0.166666666666667, 10*R_1/((R_1 + 10)*(10*R_1/(R_1 + 10) + R_2)))] is not a valid SymPy expression
The other thing I don't understand is the set type I get when I try to solve it this way. Could someone also explain what set type this is, and how to make use of it?
expr3 = -1.0/2.0 + V_out.coeff(V_2).subs(R_3, 10)
solveset(expr3, R_2)
A screen shot of the weird set type
Any help would be much appreciated. I know the system is solvable because Wolfram Alpha had no problem with it.
Thanks!
David
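One thing worth noting, with a possible way forward sketched below (under the assumption that the intent is to make the V_2 coefficient equal 1/2 and the V_1 coefficient equal 1/6 with R_3 fixed at 10 ohms): solveset is designed for a single equation in a single unknown, which is why handing it a list raises the ValueError; for a system of equations the usual tools are solve, linsolve or nonlinsolve.
from sympy import symbols, Eq, Rational, solve

R_1, R_2, R_3, V_1, V_2 = symbols('R_1 R_2 R_3 V_1 V_2', positive=True)

R_eq12 = R_2*R_3/(R_2 + R_3)
V_outa = V_1*R_eq12/(R_eq12 + R_1)
R_eq13 = R_1*R_3/(R_1 + R_3)
V_outb = V_2*R_eq13/(R_2 + R_eq13)

# require V_out = (1/2)*V_2 + (1/6)*V_1 with R_3 guessed as 10 ohms
eq1 = Eq(V_outb.coeff(V_2).subs(R_3, 10), Rational(1, 2))
eq2 = Eq(V_outa.coeff(V_1).subs(R_3, 10), Rational(1, 6))

# something like [{R_1: 20, R_2: 20/3}] is expected here
print(solve([eq1, eq2], [R_1, R_2], dict=True))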

curve fitting with integer inputs Python 3.3

I am using scipy's curve_fit to fit a function, and I want to know if there is a way to tell it that the only possible entries are integers, not real numbers. Any ideas as to another way of doing this?
In its general form, an integer programming problem is NP-hard (see here). There are some efficient heuristic or approximate algorithms to solve this problem, but none guarantees an exact optimal solution.
In scipy you may implement a grid search over the integer coefficients and use, say, curve_fit over the real parameters for the given integer coefficients. For the grid search, scipy has the brute function.
For example, if y = a*x + b*x^2 + noise, where a has to be an integer, this may work.
Generate some test data with a = 5 and b = -1.5:
import numpy as np

coef, n = [5, -1.5], 50
xs = np.linspace(0, 10, n)[:, np.newaxis]
xs = np.hstack([xs, xs**2])
noise = 2 * np.random.randn(n)
ys = np.dot(xs, coef) + noise
A function which given the integer coefficients fits the real coefficient using curve_fit method:
def optfloat(intcoef, xs, ys):
    from scipy.optimize import curve_fit
    def poly(xs, floatcoef):
        return np.dot(xs, [intcoef, floatcoef])
    popt, pcov = curve_fit(poly, xs, ys)
    errsqr = np.linalg.norm(poly(xs, popt[0]) - ys)
    return dict(errsqr=errsqr, floatcoef=popt)
A function which given the integer coefficients, uses the above function to optimize the float coefficient and returns the error:
def errfun(intcoef, *args):
    xs, ys = args
    return optfloat(intcoef, xs, ys)['errsqr']
Minimize errfun using scipy.optimize.brute to find optimal integer coefficient and call optfloat with the optimized integer coefficient to find the optimal real coefficient:
from scipy.optimize import brute
grid = [slice(1, 10, 1)] # grid search over 1, 2, ..., 9
# it is important to specify finish=None in below
intcoef = brute(errfun, grid, args=(xs, ys,), finish=None)
floatcoef = optfloat(intcoef, xs, ys)['floatcoef'][0]
Using this method I obtain [5.0, -1.50577] for the optimal coefficients, which is exact for the integer coefficient, and close enough for the real coefficient.
In general, the answer is no: scipy.optimize.curve_fit(), the leastsq() routine it is based on, and (AFAIK) all the other solvers in scipy.optimize work strictly on floating-point numbers.
You could try increasing the value of epsfcn (which has a default value of numpy.finfo('double').eps ~ 2.e-16), which would be used as the initial step to all variables in the problem. The basic issue is that the fitting algorithm will adjust a floating point number, and if you do
int_var = int(float_var)
and the algorithm changes float_var from 1.0 to 1.00000001, it will see no difference in the result and decide that that value does not actually alter the fit metric.
Another approach would be to have a floating point parameter 'tmp_float_var' that is freely adjusted by the fitting algorithm but then in your objective function use
int_var = int(tmp_float_var / numpy.finfo('double').eps)
as the value for your integer variable. That might need a little tweaking, and might be a little unstable, but ought to work.
