I have a non-linear optimization problem which, in Mathematica, could be solved as:
FindMaximum[{(81 x + 19)^0.4 + (80 (1 - x) + 20)^0.6, 0 <= x <= 1}, x]
However, now I am on a computer without Mathematica and I would like to solve a similar problem in Python, using the CVXOPT module. I looked at the examples and found linear programs, quadratic programs, and other kinds of programs, but could not find this simple program.
Can I solve such a program with CVXOPT?
I did not find a solution with cvxopt, but I found a much better alternative - cvxpy:
import cvxpy as cp
x = cp.Variable()
prob = cp.Problem(
    cp.Maximize((81*x + 19)**0.6 + (80*(1 - x) + 20)**0.6),
    [0 <= x, x <= 1])
prob.solve() # Returns the optimal value.
print("status:", prob.status)
print("optimal value", prob.value)
print("optimal var", x.value)
Prints:
status: optimal
optimal value 23.27298502822502
optimal var 0.5145387371825181
I've got this equation from a mathematical model describing the thermal behavior of a battery:

dTsdt = Ts*a + Ta*b + dTadt*c + d

However, I can't solve it because of the nested derivatives. I need to solve the equation for Ts and Ta.

I tried to define it as follows, but Python does not like it and several errors show up. I'm using scipy.integrate and the solver odeint. Since the model takes data from vectors, it has to be solved for every time step, recording the output accordingly.

I also tried assigning the derivatives to variables v1 and v2, and then putting everything into an equation without derivatives, as in the second approach shown below.
def Tmodel(z, t, a, b, c, d):
    Ts, Ta = z
    dTsdt = Ts*a + Ta*b + dTadt*c + d
    dzdt = [dTsdt]
    return dzdt

z0 = [0, 0]

# solve ODE
for i in range(0, n-1):
    tspan = [t[i], t[i+1]]
    # solve for next step
    z = odeint(Tmodel, z0, tspan, arg=(a[i], b[i], c[i], d[i],))
    # store solution for plotting
    Ts[i] = z[1][0]
    Ta[i] = z[1][1]
    # next initial condition
    z0 = z[1]
def Tmodel(z, t, a, b, c, d):
    Ts, v1, Ta, v2 = z
    # v1 = dTsdt
    # v2 = dTadt
    v1 = Ts*a + Ta*b + v2*c + d
    dzdt = [v1, v2]
    return dzdt
That did not work either. I believe there might be a solver capable of handling such an equation, or else the equation must be decoupled in some way and solved accordingly.
Any advice on how to solve such an equation with Python would be appreciated.
Best regards,
MM
Your difficulty seems to be that Ta is given in a form with no easy derivative, so you do not know where to take dTadt from. One solution is to avoid this derivative completely and solve the system for y = Ts - c*Ta. Substituting Ts = y + c*Ta on the right-hand side gives
dy/dt = y*a + Ta*(b+c*a) + d
Of course, this then requires a post-processing step Ts = y + c*Ta to recover the requested variable.
If Ta is given as a table of values, use an interpolation function to get values at whatever intermediate times t the ODE solver demands.
from scipy.interpolate import interp1d
from scipy.integrate import odeint
import numpy as np

Ta_func = interp1d(Ta_times, Ta_values)

def Tmodel(y, t, a, b, c, d):
    Ta = Ta_func(t)
    dydt = y*a + Ta*(b + c*a) + d
    return dydt

y = np.zeros(len(t))   # preallocate the transformed variable y = Ts - c*Ta
y[0] = Ts0 - c*Ta_func(t[0])
for i in range(len(t)-1):
    y[i+1] = odeint(Tmodel, y[i], t[i:i+2], args=(a[i], b[i], c[i], d[i]))[-1, 0]

Ts = y + c*Ta_func(t)
(TL;DR: SymPy's linsolve function is unable to solve the system of linear equations generated by applying the finite difference method to an ODE BVP when you pass the equations as a plain Python list, but it can solve them when the list of equations is wrapped in SymPy's Matrix function. This could be a bug that needs to be fixed, especially considering that the example in the SymPy documentation passes a list as the argument to linsolve.)
I have a boundary-value-problem ordinary differential equation that I intend to solve using the finite difference method. My ODE, in SymPy representation, is x*y(x).diff(x,2) + y(x).diff(x) + 500 = 0, with y(1)=600 and y(3.5)=25. The entire code is as follows:
import time
import sympy as sp
from numpy import *
from matplotlib.pyplot import *

y = sp.Function('y')
ti = time.time()
steps = 10
ys = [y(i) for i in range(steps+1)]
ys[0], ys[-1] = 600, 25
xs = linspace(1, 3.5, steps+1)
dx = xs[1] - xs[0]
eqns = [xs[i]*(ys[i-1] - 2*ys[i] + ys[i+1])/dx**2 +
        (ys[i+1] - ys[i-1])/2/dx + 500
        for i in range(1, steps)]
ys[1:-1] = sp.linsolve(eqns, ys[1:-1]).args[0]
scatter(xs, ys)
tf = time.time()
print(f'Time elapsed: {tf-ti}')
For ten steps, this works just fine. However, if I go any higher, even just 11 steps, SymPy is no longer able to solve the system of linear equations, and trying to plot the results throws a TypeError: can't convert expression to float.

Examining the list of y values ys reveals that one of the variables, specifically the second-to-last one, isn't solved for by linsolve; instead, the solutions for the other variables are expressed in terms of this unsolved variable. For example, with 50 steps the second-to-last variable is y(49), and it appears in the solutions of the other unknowns rather than being solved for, e.g. 2.02061855670103*y(49) - 26.1340206185567.

In contrast, another BVP ODE that I solved, y(x).diff(x,2) + y(x).diff(x) + y(x) - 1 with y(0)=1.5 and y(3)=2.5, has no issue whether I use 10, 50, or 200 steps: it solves all the variables just fine. That seems to be a peculiar exception, though, as I have encountered the aforementioned issue with many other ODEs.
SymPy's inconsistency here was quite frustrating. The only consolation is that, before I ran into this problem, I had actually solved the system a few times already with the varying numbers of steps that I wanted. I had enclosed the eqns variable in SymPy's Matrix function, as in ys[1:-1] = sp.linsolve(sp.Matrix(eqns), ys[1:-1]).args[0], simply because it displayed better that way in the terminal. But for solving in a script file, I thought that wrapping it in sp.Matrix was unnecessary, and I naturally removed it to simplify things.
It is polite when formatting a question for SO (or anywhere else) to provide a complete code example without missing imports etc. You should also distil the problem down to the minimal case and remove all of the unnecessary details. With that in mind, a better way to demonstrate the issue is to work out exactly what the arguments to linsolve are and present them directly, e.g.:
from sympy import *
y = Function('y')
eqns = [
-47.52*y(1) + 25.96*y(2) + 13436.0,
25.96*y(1) - 56.32*y(2) + 30.36*y(3) + 500,
30.36*y(2) - 65.12*y(3) + 34.76*y(4) + 500,
34.76*y(3) - 73.92*y(4) + 39.16*y(5) + 500,
39.16*y(4) - 82.72*y(5) + 43.56*y(6) + 500,
43.56*y(5) - 91.52*y(6) + 47.96*y(7) + 500,
47.96*y(6) - 100.32*y(7) + 52.36*y(8) + 500,
52.36*y(7) - 109.12*y(8) + 56.76*y(9) + 500,
56.76*y(8) - 117.92*y(9) + 61.16*y(10) + 500,
61.16*y(9) - 126.72*y(10) + 2139.0
]
syms = [y(1), y(2), y(3), y(4), y(5), y(6), y(7), y(8), y(9), y(10)]
print(linsolve(eqns, syms))
Here you hoped to get a simple numerical solution for each of the unknowns but instead the result returned (from SymPy 1.8) is:
FiniteSet((5.88050359812056*y(10) - 5.77315239260531, 10.7643116711359*y(10) - 528.13328974178, 14.9403214726998*y(10) - 991.258097567359, 9.85496358613721e+15*y(10) - 1.00932650309452e+18, 7.35110502818395*y(10) - 312.312287998229, 5.84605452313345*y(10) - 217.293922525318, 4.47908204606922*y(10) - 141.418192750506, 3.22698120573309*y(10) - 81.4678489766327, 2.07194244604317*y(10) - 34.9738391105298, 1.0*y(10)))
The linsolve function will return a solution involving one or more unknowns if the system does not have a unique solution. Note also that there are some large numbers like 9.85496358613721e+15 which suggests that there might be numerical problems here.
Actually this is a bug in SymPy and it has already been fixed on the master branch:
https://github.com/sympy/sympy/pull/21527
If you install SymPy from git then you can find the following as output instead:
FiniteSet((596.496767861074, 574.326903264955, 538.901024315178, 493.575084012669, 440.573815245681, 381.447789421181, 317.320815173574, 249.033388036155, 177.23053946471, 102.418085492911))
Also note that it is generally better to avoid using floats in SymPy, as it is a library designed for exact symbolic computation. Solving a system of floating-point equations like this can be done much more efficiently using NumPy or some other fixed-precision floating-point library that can use BLAS/LAPACK routines. To use rational arithmetic with SymPy here you just need to change your linspace line to
xs = [sp.Rational(x) for x in linspace(1, 3.5, steps+1)]
which will then work fine with SymPy 1.8 and is in fact faster (at least if you have gmpy2 installed).
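As a concrete illustration of the NumPy suggestion (this sketch is my addition, not part of the original answer), the same ten-unknown tridiagonal system can be solved in fixed-precision floating point. The coefficients are copied from the equations listed above, and the result should closely match the exact SymPy output:

import numpy as np

n = 10
# tridiagonal coefficients copied from the equations above; the system is
# symmetric, so the sub-diagonal equals the super-diagonal
diag = [-47.52, -56.32, -65.12, -73.92, -82.72,
        -91.52, -100.32, -109.12, -117.92, -126.72]
sup = [25.96, 30.36, 34.76, 39.16, 43.56, 47.96, 52.36, 56.76, 61.16]
const = [13436.0] + [500.0]*8 + [2139.0]

A = np.zeros((n, n))
for i in range(n):
    A[i, i] = diag[i]
    if i < n - 1:
        A[i, i+1] = sup[i]
        A[i+1, i] = sup[i]
b = -np.array(const)           # move the constant terms to the right-hand side

print(np.linalg.solve(A, b))   # should be close to the exact values above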
def exercise2(N):
    count = 0
    i = N
    while i > 0:
        for j in range(0, i):
            count = count + 1
        i = i // 2
How do we know the time complexities for both the while and for loop?
Edit: Many users are sending me links explaining time complexity using Big-O analysis. I appreciate it, but the only CS language I understand is Python, and all those explanations use Java and C++, which makes them hard for me to follow. If anyone could explain the time complexity using Python, that would be great!
The inner (for) loop runs i times, and the outer (while) loop halves i, taking it from N down to 1 in at most about log(N) iterations. Hence the total number of inner iterations is N + N/2 + N/4 + ... + 1 = N(1 + 1/2 + 1/4 + ... + 1/2^k) < 2N, which is Θ(N). For the last equality you can assume N = 2^k, since we are only after the asymptotic complexity.
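If it helps to see this in Python rather than in formulas, here is a small sketch (my addition) that counts the inner-loop iterations for a few values of N; the count always stays below 2*N, matching the Θ(N) bound described above:

def count_ops(N):
    # same loop structure as exercise2, but returning the iteration count
    count = 0
    i = N
    while i > 0:
        for j in range(0, i):
            count = count + 1
        i = i // 2
    return count

for N in [10, 100, 1000, 10000]:
    print(N, count_ops(N), 2*N)   # the count never exceeds 2*N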
I am trying to recreate a problem in the "Pyomo - Optimization Modeling in Python" book using the pyomo kernel instead of the environ. The problem is on page 163 and called "9.4 A mixing problem with semi-continuous variables." For those without the book, here it is:
The following model illustrates a simple mixing problem with three semi-continuous
variables (x1, x2, x3) which represent quantities that are mixed to meet a volumetric
constraint. In this simple example, the number of sources is minimized:
from pyomo.environ import *
from pyomo.gdp import *

L = [1, 2, 3]
U = [2, 4, 6]
index = [0, 1, 2]

model = ConcreteModel()
model.x = Var(index, within=Reals, bounds=(0, 20))

# Each disjunct is a semi-continuous variable
# x[k] == 0 or L[k] <= x[k] <= U[k]
def d_rule(block, k, i):
    m = block.model()
    if i == 0:
        block.c = Constraint(expr=m.x[k] == 0)
    else:
        block.c = Constraint(expr=L[k] <= m.x[k] <= U[k])
model.d = Disjunct(index, [0, 1], rule=d_rule)

# There are three disjunctions
def D_rule(block, k):
    model = block.model()
    return [model.d[k, 0], model.d[k, 1]]
model.D = Disjunction(index, rule=D_rule)

# Minimize the number of x variables that are nonzero
model.o = Objective(expr=sum(model.d[k, 1].indicator_var for k in index))

# Satisfy a demand that is met by these variables
model.c = Constraint(expr=sum(model.x[k] for k in index) >= 7)
I need to refactor this problem to work in the pyomo kernel, but the kernel is not yet compatible with pyomo.gdp, which is used to transform disjunctive models into linear ones. Has anyone run into this problem, and if so, did you find a good method for solving disjunctive models in the pyomo kernel?
I have a partial rewrite of pyomo.gdp that I could make available on a public GitHub branch (probably working, but lacking tests). However, I am wary of investing more time in rewrites like this, as the better approach would be to re-implement the standard pyomo.environ API on top of the kernel, which would make all of the extensions compatible.
With that being said, if there are collaborators willing to share in some of the development and testing, I would be happy to help complete the kernel-gdp version I've started. If you want to discuss this further, it would probably be best to open an issue on the Pyomo GitHub page.
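In the meantime, because the disjunctions in this particular model are simple semi-continuous conditions, the big-M version of them can be written out by hand directly in pyomo.kernel. The sketch below is only an illustration of that idea, not part of the original answer; it assumes the finite upper bounds U[k] can serve as the big-M constants, and that pmo.Binary and generator arguments to the list containers behave as shown in the kernel documentation:

import pyomo.kernel as pmo

L = [1, 2, 3]
U = [2, 4, 6]
index = [0, 1, 2]

m = pmo.block()
m.x = pmo.variable_list(pmo.variable(lb=0, ub=20) for _ in index)
m.y = pmo.variable_list(pmo.variable(domain=pmo.Binary) for _ in index)

# y[k] == 0 forces x[k] == 0; y[k] == 1 allows L[k] <= x[k] <= U[k]
m.lower = pmo.constraint_list(
    pmo.constraint(expr=m.x[k] >= L[k]*m.y[k]) for k in index)
m.upper = pmo.constraint_list(
    pmo.constraint(expr=m.x[k] <= U[k]*m.y[k]) for k in index)

# demand constraint; the objective counts the active sources (minimized by default)
m.demand = pmo.constraint(expr=sum(m.x[k] for k in index) >= 7)
m.o = pmo.objective(expr=sum(m.y[k] for k in index))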
I'm trying to solve this problem:
Suppose I have a set of n coins {a_1, a_2, ..., a_n}. A coin with value 1 will always appear. What is the minimum number of coins I need to reach M?
The constraints are:
1 ≤ n ≤ 25
1 ≤ M ≤ 10^6
1 ≤ a_i ≤ 100
Ok, I know that it's the Change-making problem.
I have tried to solve this problem using breadth-first search, dynamic programming, and greedy (which is incorrect, since it doesn't always give the best solution). However, I get Time Limit Exceeded (3 seconds).
So I wonder if there's an optimization for this problem.
The description and the constraints caught my attention, but I don't know how to use them in my favour:
A coin with value 1 will always appear.
1 ≤ a_i ≤ 100
I saw on Wikipedia that this problem can also be solved by "dynamic programming with the probabilistic convolution tree", but I could not understand any of it.
Can you help me?
This problem can be found here: http://goo.gl/nzQJem
Let a_n be the largest coin. Use these two clues:
the result is >= ceil(M/a_n),
the optimal configuration uses a lot of a_n's.
It is best to start with the maximum possible number of a_n's and then check whether a better result can be found with fewer a_n's, for as long as a better result is still possible.
Something like: let R({a_1, ..., a_n}, M) be a function that returns the result for the given problem. Then R can be implemented as:
num_a_n = floor(M/a_n)
best_r = num_a_n + R({a_1, ..., a_(n-1)}, M - a_n*num_a_n)
while num_a_n > 0:
    num_a_n = num_a_n - 1
    # check whether it is possible at all to get a better result
    if num_a_n + ceil((M - a_n*num_a_n) / a_(n-1)) >= best_r:
        return best_r
    next_r = num_a_n + R({a_1, ..., a_(n-1)}, M - a_n*num_a_n)
    if next_r < best_r:
        best_r = next_r
return best_r
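For reference, here is a minimal Python sketch of the recursion above (my translation of the pseudocode; the base cases for M == 0 and for the single remaining coin of value 1 are additions, and the coin list is assumed to be sorted in ascending order):

from math import ceil

def R(coins, M):
    # coins: sorted ascending, coins[0] == 1, so a solution always exists
    if M == 0:
        return 0
    if len(coins) == 1:          # only the coin of value 1 is left
        return M
    a_n = coins[-1]
    rest = coins[:-1]
    num_a_n = M // a_n
    best_r = num_a_n + R(rest, M - a_n*num_a_n)
    while num_a_n > 0:
        num_a_n -= 1
        # lower bound: even filling the remainder with the next-largest
        # coin cannot beat the best result found so far
        if num_a_n + ceil((M - a_n*num_a_n) / rest[-1]) >= best_r:
            return best_r
        next_r = num_a_n + R(rest, M - a_n*num_a_n)
        if next_r < best_r:
            best_r = next_r
    return best_r

print(R([1, 3, 4], 6))   # prints 2 (two coins: 3 + 3)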