How to declare two dimensional and three dimensional vector in Pyomo - python-3.x

I am trying to solve an optimization problem using Pyomo. For that I need to declare a two-dimensional variable and a three-dimensional variable, both of which can only take the values 0 or 1:
S_ri and X_rij
R = 3, V = 8, 1 <= i, j <= V, 1 <= r <= R
I tried to do this using range in Pyomo:
model.IDXV = range(v + 1)
model.IDXR = range(r + 1)
model.x = Var(model.IDXR, model.IDXV, model.IDXV, within=Binary, initialize=0)
model.s = Var(model.IDXR, model.IDXV, within=Binary, initialize=0)
I am using the 'ipopt' solver, but after execution the values of x and s are fractional instead of 0 or 1.
Please help me with this.

Axel Kemper (in the comments) is correct. ipopt is a nonlinear programming solver, so it automatically assumes that you intend to relax discrete variables to continuous ones.
For linear problems with discrete variables, cbc and glpk are free solvers; gurobi and cplex are the major commercial solvers.
For nonlinear problems with discrete variables, couenne and bonmin are free solvers. Several other commercial and academic solvers are also available.

Related

Eigenvectors in Julia vs Numpy

I'm currently working to diagonalize a 5000x5000 Hermitian matrix, and I find that when I use Julia's eigen function from the LinearAlgebra module, which produces both the eigenvalues and eigenvectors, I get different results for the eigenvectors than when I solve the same problem using numpy's np.linalg.eigh function. I believe both of them use LAPACK/BLAS, but I'm not sure what else they may be doing differently.
Has anyone else experienced this or knows what is going on?
numpy.linalg.eigh(a, UPLO='L') uses a different algorithm. It assumes the matrix is Hermitian (symmetric in the real case) and, by default, reads only the lower triangular part to compute the decomposition more efficiently.
The equivalent of Julia's LinearAlgebra.eigen() is numpy.linalg.eig. You should get the same result if you wrap your matrix in Julia in Symmetric(A, :L) (or Hermitian for a complex matrix) before feeding it into LinearAlgebra.eigen().
Check out numpy's docs on eig and eigh, while Julia's standard LinearAlgebra capabilities are documented here. If you go down to the special matrices section, it details which specialized methods are used for each special matrix type thanks to multiple dispatch.
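A small sketch of why the eigenvectors can look different even when both routines are correct: eigenvectors are only defined up to a complex phase (a sign flip in the real case), so two valid decompositions of the same Hermitian matrix can disagree entry by entry while spanning the same eigenspaces.

```python
# Compare numpy's general eig and Hermitian-specialised eigh on the
# same Hermitian matrix: eigenvalues match, eigenvectors match only
# up to a phase factor.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (A + A.conj().T) / 2                  # make the matrix Hermitian

w_eigh, v_eigh = np.linalg.eigh(H)        # Hermitian-specialised routine
w_eig, v_eig = np.linalg.eig(H)           # general routine

# eigh sorts eigenvalues ascending; eig does not, so sort first.
order = np.argsort(w_eig.real)
assert np.allclose(w_eig[order].real, w_eigh)

# Each eigenvector pair agrees only up to a complex phase of modulus 1.
for k in range(4):
    phase = np.vdot(v_eig[:, order[k]], v_eigh[:, k])
    assert np.isclose(abs(phase), 1.0)
```

The same ambiguity explains the Julia-vs-numpy discrepancy: comparing |⟨u, v⟩| rather than raw entries shows the results agree.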

How to use ortools to solve quadratic programming in Python?

For example, how can I simply find the minimum of (x-1)^2 via OR-Tools in Python?
I read the OR-Tools documentation, but I cannot find it. I know this does not belong to linear optimization, but I cannot find a suitable problem type in the documentation.
Google OR-Tools does not support quadratic programming. This page contains a list of what it supports:
Google Optimization Tools (OR-Tools) is a fast and portable software suite for solving combinatorial optimization problems. The suite contains:
A constraint programming solver.
A simple and unified interface to several linear programming and mixed integer programming solvers, including CBC, CLP, GLOP, GLPK, Gurobi, CPLEX, and SCIP.
Graph algorithms (shortest paths, min cost flow, max flow, linear sum assignment).
Algorithms for the Traveling Salesman Problem and Vehicle Routing Problem.
Bin packing and knapsack algorithms.
The following link clarifies that the mixed integer programming (MIP) support does not include quadratic MIP (MIQP):
https://github.com/google/or-tools/issues/598
You might check out this resource for ideas of how to do QP in Python:
https://scaron.info/blog/quadratic-programming-in-python.html
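As one concrete alternative (an assumption on my part, not part of the original answer): a smooth problem like min (x-1)^2 can be handled by SciPy's general-purpose minimizer, since OR-Tools has no QP support.

```python
# Minimize (x - 1)^2 with scipy.optimize.minimize (default BFGS method).
from scipy.optimize import minimize

result = minimize(lambda x: (x[0] - 1.0) ** 2, x0=[0.0])
print(result.x[0])   # converges to approximately 1.0
```

For larger or constrained quadratic programs, dedicated QP solvers such as those surveyed in the linked blog post are the better fit.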

Solving (nonlinear) equations in simulation tools

I am interested in how a modelling tool (in my case OpenModelica and Dymola; modelling language Modelica) solves systems of equations (linear and/or nonlinear). These tools are designed for solving differential-algebraic equations. I know a little of the theory behind transforming a differential-algebraic equation system into an ODE (keyword: "index reduction"). My questions:
How do these tools solve a system of equations without differential equations? Is the system nevertheless transformed (index reduction) into an ODE?
What if I have a model that has a few algebraic equations and a few ODEs, but they are not coupled?
Thank you very much.
OpenModelica will use an equidistant time grid based on the number of output time points (or number of intervals) and solve the algebraic system for each of these time points.
The basics of how equations are transformed into assignments are covered very well in slide decks 1-6 of Prof. Cellier's lecture at ETH Zurich:
https://www.inf.ethz.ch/personal/fcellier/Lect/MMPS/Refs/mmps_refs.html
You will find further references at the end of every lecture.
The only difference for systems without differential equations is that you don't have state variables; the rest works the same way.
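An illustrative sketch of what the answer describes (this is not Modelica's actual implementation; the example system and the use of SciPy are my assumptions): on an equidistant time grid, the purely algebraic part is handed to a nonlinear solver at each output point.

```python
# Solve a purely algebraic system y0 + y1 = t, y0 * y1 = 1 at each
# point of an equidistant time grid, warm-starting from the previous
# solution as simulation tools typically do.
import numpy as np
from scipy.optimize import fsolve

def algebraic_system(y, t):
    # Residuals of the two algebraic equations; no derivatives involved.
    return [y[0] + y[1] - t, y[0] * y[1] - 1.0]

t_grid = np.linspace(2.5, 5.0, 6)    # equidistant output points
guess = [0.1, 2.0]
solutions = []
for t in t_grid:
    y = fsolve(algebraic_system, guess, args=(t,))
    solutions.append(y)
    guess = y                        # warm-start the next time point
```

Uncoupled ODE and algebraic parts can likewise be handled side by side: the integrator advances the states while the algebraic block is re-solved at each output point.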

excel solver and lp_solve API

I have a mixed-integer/binary linear programming problem. The free version of the Excel Solver can find a solution that satisfies all the constraints; however, the lp_solve API that I call from C++ cannot find one. I suspect that Excel Solver simplifies the problem. So far I have located two relevant parameters in the Excel Solver options: MIP gap and constraint precision. The former is set to 1%, and I set lp_solve's MIP gap to 1% as well, but I do not know what the equivalent of constraint precision is in lp_solve. Can anyone help? Thanks!

Constraints on a Genetic Algorithm

I'm using SolveXL as an Excel add-in to do a multi-objective optimization. SolveXL solves it by applying the Non-dominated Sorting Genetic Algorithm II (NSGA-II). My problem is with the constraints. When I have a constraint that a value must not exceed a limit (less than 15, for example), like this
=IF(C5>15;C5-15;0)
the software gives me solutions satisfying it. But when I want the solutions to be greater than a value (15)
=IF(C5<15;15-C5;0)
I don't get any solution with infeasibility zero. I'm sure I'm computing it right, because I tried it with a small example.
Could it not be working because of the complexity of my Excel computations?
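For reference, the two Excel formulas implement the standard penalty pattern for GA constraints: the violation is 0 when the constraint holds and grows linearly with the amount of violation. A sketch in Python (function names are mine, for illustration):

```python
def penalty_at_most(value, limit):
    # Mirrors =IF(C5>15;C5-15;0): penalize values above the limit.
    return value - limit if value > limit else 0.0

def penalty_at_least(value, limit):
    # Mirrors =IF(C5<15;15-C5;0): penalize values below the limit.
    return limit - value if value < limit else 0.0
```

Both directions are symmetric, so if the second formula never reaches zero infeasibility, the issue is more likely that no candidate solution actually attains C5 >= 15 than that the formula itself is wrong.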
