For example, how can I simply find the minimum of (x-1)^2 via ortools in Python?
I read the documentation of ortools, but I cannot find it. I know it does not belong to linear optimization, but I cannot find a proper problem type in the documentation.
Google OR-Tools does not support quadratic programming. This page contains a list of what it supports:
Google Optimization Tools (OR-Tools) is a fast and portable software suite for solving combinatorial optimization problems. The suite contains:
A constraint programming solver.
A simple and unified interface to several linear programming and mixed integer programming solvers, including CBC, CLP, GLOP, GLPK, Gurobi, CPLEX, and SCIP.
Graph algorithms (shortest paths, min cost flow, max flow, linear sum assignment).
Algorithms for the Traveling Salesman Problem and Vehicle Routing Problem.
Bin packing and knapsack algorithms.
The following link clarifies that the mixed integer programming (MIP) support does not include quadratic MIP (MIQP):
https://github.com/google/or-tools/issues/598
You might check out this resource for ideas of how to do QP in Python:
https://scaron.info/blog/quadratic-programming-in-python.html
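Since OR-Tools cannot handle this directly, note that a tiny unconstrained quadratic like min (x-1)^2 does not even need a QP library; a minimal gradient-descent sketch is enough (the function name and step parameters below are illustrative, not from any library):

```python
# Minimize f(x) = (x - 1)^2 without a QP solver.
# The gradient is f'(x) = 2 * (x - 1), so repeated gradient
# steps converge to the minimizer x = 1.

def minimize_parabola(x0=0.0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x -= lr * 2 * (x - 1)  # one gradient-descent step
    return x

x_min = minimize_parabola()
print(x_min)  # converges to 1.0
```

For a real QP (with constraints or many variables) you would use one of the solver libraries covered in the blog post above rather than hand-rolled gradient descent.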
Related
I am working on a new model which is very sensitive to the interpolation/fit used to describe a certain dataset. I have had some success with linear splines and logarithmic fits, but I think there is still significant room for improvement. My supervisor suggested I take a look at exponential splines. I have found some books on the theory of exponential splines, but no reference to a library or code example to follow.
Is there a library that I am unaware of that supports this feature?
I am trying to solve an optimization problem using Pyomo. For that I need to declare a two-dimensional vector and a three-dimensional vector, both of which can only hold values in {0, 1}:
S_ri and X_rij
R = 3, V = 8, 1 <= i, j <= V, 1 <= r <= R
I tried to do this using range in Pyomo:
from pyomo.environ import ConcreteModel, Var, Binary

V, R = 8, 3
model = ConcreteModel()
model.IDXV = range(1, V + 1)  # 1 <= i, j <= V
model.IDXR = range(1, R + 1)  # 1 <= r <= R
model.x = Var(model.IDXR, model.IDXV, model.IDXV, within=Binary, initialize=0)
model.s = Var(model.IDXR, model.IDXV, within=Binary, initialize=0)
I am using the 'ipopt' solver, but after execution the values of x and s are fractional instead of 0 or 1.
Please help me to do this.
Axel Kemper (in the comments) is correct: ipopt is a nonlinear programming solver, and it automatically assumes that you intend to relax the discrete variables, which is why you get fractional values.
For linear-discrete problems, there are the cbc and glpk free solvers; gurobi and cplex are the major commercial solvers.
For nonlinear-discrete problems, couenne and bonmin are the free solvers. Several other commercial and academic solvers are also available.
I am working on data mining problems and I have to find the similarity between pairs of objects. I know what the common statistical distances are, but I fail to find any source that defines when to use which statistical distance.
My answer is not going to be a plain "use that" because there is not such a thing in statistics.
I have found myself in the past using statistical distances such as Mahalanobis, which is a particular case of the Bhattacharyya distance, when dealing with similar problems. I used KL divergence when building trees (minimum spanning trees, etc.).
A main difference between the two is that the Bhattacharyya distance is symmetric while the KL divergence is not, so you have to take this into account when thinking about what kind of information you want to extract about your data points.
In brief, I would use the Bhattacharyya distance.
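To make the symmetry point concrete, here is a small sketch for two discrete distributions (pure Python; the helper names are my own, not from a library):

```python
import math

def bhattacharyya(p, q):
    """Bhattacharyya distance between two discrete distributions."""
    bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))  # Bhattacharyya coefficient
    return -math.log(bc)

def kl_divergence(p, q):
    """KL divergence D(p || q) in nats; note the asymmetry in p and q."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]

print(bhattacharyya(p, q) == bhattacharyya(q, p))  # True: symmetric
print(kl_divergence(p, q) == kl_divergence(q, p))  # False: asymmetric
```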
I am interested in how a modelling tool (in my case OpenModelica and Dymola; modelling language Modelica) solves systems of equations (linear and/or nonlinear). These tools are designed for solving differential-algebraic equations. I know a little of the theory behind transforming a differential-algebraic equation system into an ODE (keyword: "index reduction"). My questions:
How do these tools solve a system of equations without differential equations? Is the system nevertheless transformed (index reduction) into an ODE?
What if I have a model that has a few algebraic equations and a few ODE - but they are not coupled?
Thank you very much.
OpenModelica will use an equidistant time grid based on the number of output time points (or number of intervals) and solve the algebraic system for each of these time points.
The basics of how equations are transformed into assignments are covered very well in the slide-decks 1-6 of Prof. Cellier's Lecture at the ETH Zurich:
https://www.inf.ethz.ch/personal/fcellier/Lect/MMPS/Refs/mmps_refs.html
You will find further references at the end of every lecture.
The only difference for systems without differential equations is that you don't have state-variables, the rest works the same way.
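To illustrate what "solving the algebraic system at each time point" means in practice, here is a minimal Newton iteration in Python (a sketch of the kind of iteration such tools run internally; not actual OpenModelica code):

```python
def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Solve the scalar algebraic equation f(x) = 0 by Newton's method."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)  # Newton step: f(x) / f'(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

# Example algebraic equation: x^2 - 2 = 0, solved from the guess x = 1.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
print(root)  # approximately sqrt(2)
```

For a time-dependent model, such a solve would be repeated at every output point of the time grid.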
What's the relationship between the Monte-Carlo Method and Evolutionary Algorithms? On the face of it they seem to be unrelated simulation methods used to solve complex problems. Which kinds of problems is each best suited for? Can they solve the same set of problems? What is the relationship between the two (if there is one)?
"Monte Carlo" is, in my experience, a heavily overloaded term. People seem to use it for any technique that uses a random number generator: global optimization, scenario analysis (Google "Excel Monte Carlo simulation"), stochastic integration (the pi calculation that everybody uses to demonstrate MC), and so on. Because you mentioned evolutionary algorithms in your question, I believe you are talking about Monte Carlo techniques for mathematical optimization: you have some sort of fitness function with several input parameters and you want to minimize (or maximize) that function.
If your function is well behaved (there is a single global minimum that you will arrive at no matter which inputs you start with), then you are best off using a deterministic minimization technique such as the conjugate gradient method. Many machine learning classification techniques involve finding parameters that minimize the least-squares error for a hyperplane with respect to a training set. The function being minimized in this case is a smooth, well-behaved paraboloid in n-dimensional space. Calculate the gradient and roll downhill. Easy peasy.
If, however, your input parameters are discrete (or if your fitness function has discontinuities), then it is no longer possible to calculate gradients accurately. This can happen if your fitness function is calculated using tabular data for one or more variables (if variable X is less than 0.5, use this table, else use that table). Alternatively, you may have a program that you got from NASA, made up of 20 modules written by different teams, that you run as a batch job: you supply it with input and it spits out a number (think black box). Depending on the input parameters that you start with, you may end up in a false minimum. Global optimization techniques attempt to address these types of problems.
Evolutionary Algorithms form one class of global optimization techniques. Global optimization techniques typically involve some sort of "hill climbing" (accepting a configuration with a higher (worse) fitness function). This hill climbing typically involves some randomness/stochastic-ness/monte-carlo-ness. In general, these techniques are more likely to accept less optimal configurations early on and, as the optimization progresses, they are less likely to accept inferior configurations.
Evolutionary algorithms are loosely based on evolutionary analogies. Simulated annealing is based upon analogies to annealing in metals. Particle swarm techniques are also inspired by biological systems. In all cases you should compare results to a simple random (a.k.a. "Monte Carlo") sampling of configurations; this will often yield equivalent results.
My advice is to start off using a deterministic gradient-based technique, since such techniques generally require far fewer function evaluations than stochastic/Monte Carlo techniques. When you hear hoofbeats, think horses, not zebras. Run the optimization from several different starting points and, unless you are dealing with a particularly nasty problem, you should end up with roughly the same minimum. If not, you might have zebras and should consider using a global optimization method.
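As a concrete instance of the hill-climbing idea described above, here is a minimal simulated annealing sketch in Python (all names and parameter values are illustrative choices, not from a particular library):

```python
import math
import random

def simulated_annealing(f, x0, temp=1.0, cooling=0.995, steps=5000, seed=0):
    """Minimal simulated annealing: occasionally accept worse points,
    less and less often as the temperature drops."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for _ in range(steps):
        cand = x + rng.uniform(-0.5, 0.5)  # random neighbor
        fc = f(cand)
        # Always accept improvements; accept worse moves with
        # probability exp(-(fc - fx) / temp) -- the "hill climbing".
        if fc < fx or rng.random() < math.exp(-(fc - fx) / temp):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        temp *= cooling  # cool down
    return best, fbest

# A bumpy function with many local minima; global minimum near x = -0.3.
f = lambda x: x * x + 2 * math.sin(5 * x)
best_x, best_f = simulated_annealing(f, x0=3.0)
print(best_x, best_f)
```

Early on, the high temperature lets the search accept uphill moves and escape false minima; as the temperature decays, it behaves more and more like plain downhill search.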
Well, I think "Monte Carlo methods" is the general name for methods that use random numbers in order to solve optimization problems. In this sense, even evolutionary algorithms are a type of Monte Carlo method if they use random numbers (and in fact they do).
Other Monte Carlo methods are Metropolis, Wang-Landau, parallel tempering, etc.
On the other hand, evolutionary methods use techniques borrowed from nature, such as mutation, crossover, etc.
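For reference, the Metropolis method mentioned above can be sketched in a few lines of Python (a toy implementation for a one-dimensional density; the names are my own, not from a library):

```python
import math
import random

def metropolis_sample(log_p, x0, step=1.0, n=10000, seed=1):
    """Metropolis sampling: propose a symmetric random move and accept it
    with probability min(1, p(new) / p(old))."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n):
        cand = x + rng.uniform(-step, step)
        delta = log_p(cand) - log_p(x)
        if delta >= 0 or rng.random() < math.exp(delta):
            x = cand  # accept the move
        samples.append(x)  # a rejected move repeats the current point
    return samples

# Draw from a standard normal; log-density up to an additive constant.
samples = metropolis_sample(lambda x: -0.5 * x * x, x0=0.0)
mean = sum(samples) / len(samples)
print(mean)  # close to 0 for a long enough chain
```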