How to limit the solution domain in Modelica - OpenModelica

I have a very simple model in OpenModelica.
model doubleSolution
Real x;
equation
x^2 -4 = 0;
end doubleSolution;
There are two mathematical solutions to this problem: x = {-2, +2}.
The OpenModelica solver will provide just one result, in this case +2.
What if I'm interested in the other solution?
Using a proper start value, e.g. Real x(start=-7), might help as a workaround, but I'm not sure this is always a robust solution. I'd prefer to limit the solution range directly, e.g. by (x < 0). Are such boundary conditions possible?

As you already noticed, using a start value is one option. Whether that is a robust solution depends on how good the start value is. For this example the Newton-Raphson method is used, which depends heavily on a good start value.
You can use the max and min attributes to give a variable a range in which it is valid.
See for example section 4.8.1 Real Type of the Modelica Language Specification for the attributes type Real has.
Together with a good start value this should be robust enough, and will at least give you a warning if x becomes greater than 0.0.
model doubleSolution
Real x(max=0, start=-7);
equation
x^2 -4 = 0;
end doubleSolution;
Another option would be to add an assert to the equations:
assert(value >= min and value <= max, "Variable value out of limit");
For the min and max attribute this assert is added automatically.
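To see why the start value matters so much here, the Newton-Raphson iteration on x^2 - 4 = 0 can be sketched in a few lines of plain Python (a toy stand-in, not OpenModelica's actual solver): each step follows the local tangent, so the iteration converges to the root on the start value's side of zero.

```python
# Toy Newton-Raphson iteration on f(x) = x^2 - 4 (plain Python sketch,
# not OpenModelica's actual solver): each step follows the local tangent,
# so the iteration converges to the root on the start value's side.
def newton(x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        fx = x * x - 4.0
        if abs(fx) < tol:
            break
        x -= fx / (2.0 * x)   # x_{n+1} = x_n - f(x_n) / f'(x_n)
    return x
```

Starting from x0 = -7 the iteration settles on -2, and from +7 on +2, which is why x(start=-7) steers the solver toward the negative root, with the min/max bounds acting as a safety net on top of it.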

Related

When to use sum vs. lpSum using pulp?

In the case study "A Set Partitioning Problem" in the pulp documentation "sum" is used to express the constraints, for example:
#A guest must be seated at one and only one table
for guest in guests:
seating_model += sum([x[table] for table in possible_tables
if guest in table]) == 1, "Must_seat_%s"%guest
Whereas "lpSum" seems to be used otherwise. Consider for example the following constraint in the case study A Transportation Problem:
for b in Bars:
prob += lpSum([vars[w][b] for w in Warehouses])>=demand[b], "Sum_of_Products_into_Bar%s"%b
Why is "sum" used in the first example? And when should one use "sum" vs. "lpSum"?
You should be using lpSum every time you have at least one pulp variable inside the sum expression. Using sum is not wrong, just more inefficient. We should probably change the docs so they are consistent. Feel free to open an issue or, even better, do a PR so we can correct the Set Partitioning Problem example.
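The inefficiency can be illustrated with a toy affine-expression class (a sketch of the idea only, not pulp's actual implementation): Python's built-in sum chains binary "+" operations, copying the coefficient mapping at every step, while an lpSum-style helper builds the result in a single pass.

```python
# Toy affine expression (a sketch of the idea, NOT pulp's actual
# implementation). Python's built-in sum() chains binary "+" calls,
# copying the coefficient dict each time, while an lpSum-style helper
# accumulates everything in a single pass.
class Expr:
    copies = 0  # counts full-dict copies made by "+"

    def __init__(self, coeffs=None):
        self.coeffs = dict(coeffs or {})

    def __add__(self, other):
        Expr.copies += 1
        merged = dict(self.coeffs)              # O(len) copy on every "+"
        for name, coef in other.coeffs.items():
            merged[name] = merged.get(name, 0) + coef
        return Expr(merged)

    def __radd__(self, other):
        # sum() starts from 0; treat it as the empty expression
        return self if other == 0 else NotImplemented

def lp_sum(exprs):
    # single pass over all terms, no intermediate expressions
    total = {}
    for e in exprs:
        for name, coef in e.coeffs.items():
            total[name] = total.get(name, 0) + coef
    return Expr(total)
```

Summing n single-variable expressions this way costs O(n^2) coefficient copies with sum() but only O(n) work with lp_sum(), which is the same asymptotic gap pulp's lpSum avoids.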

Why would more array accesses perform better?

I'm taking a course on Coursera that uses MiniZinc. In one of the assignments, I was spinning my wheels forever because my model was not performing well enough on a hidden test case. I finally solved it by changing the following type of access in my model
from
constraint sum(neg1,neg2 in party where neg1 < neg2)(joint[neg1,neg2]) >= m;
to
constraint sum(i,j in 1..u where i < j)(joint[party[i],party[j]]) >= m;
I don't know what I'm missing, but why would these two perform any differently from each other? It seems like they should perform similarly, with the former perhaps slightly faster, but the performance difference was dramatic. I'm guessing there is some sort of optimization that the former misses out on? Or am I really missing something, and do those lines actually result in different behavior? My intention is to sum the strength of every element in raid.
Misc. Details:
party is an array of enum vars
party's index set is 1..real_u
every element in party should be unique except for a dummy variable.
solver was Gecode
verification of my model was done on a Coursera server, so I don't know what optimization level their compiler used.
edit: Since MiniZinc (mz) is a declarative language, I'm realizing that "array accesses" in mz don't necessarily have a direct counterpart in an imperative language. However, to me, these two lines mean the same thing semantically. So I guess my question is more "Why are the above lines semantically different in mz?"
edit2: I had to change the example in question; I was toeing the line of violating Coursera's honor code.
The difference stems from the way in which the where-clause "a < b" is evaluated. When "a" and "b" are parameters, the compiler can already exclude the irrelevant parts of the sum during compilation. If "a" or "b" is a variable, this usually cannot be decided at compile time and the solver will receive a more complex constraint.
In this case the solver would have received a sum over "array[int] of var opt int", meaning that some variables in the array might not actually be present. For most solvers this is rewritten to a sum where every variable is multiplied by a Boolean variable, which is true iff the variable is present. You can see how this is less efficient than a normal sum without the multiplications.
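A rough analogy in plain Python (toy data; deliberately simplified, not actual MiniZinc internals): with parameter indices the i < j test can be evaluated while the sum is built, while with variable indices every pair survives and carries a reified 0/1 guard.

```python
# Rough Python analogy (toy data; deliberately simplified, not actual
# MiniZinc internals). With parameter indices the i < j test is evaluated
# while the sum is built; with variable indices it cannot be, so every
# pair stays in the sum behind a 0/1 guard.
def sum_param_indices(joint, u):
    # i, j are parameters: irrelevant pairs are dropped up front
    return sum(joint[i][j] for i in range(u) for j in range(u) if i < j)

def sum_var_indices(joint, party):
    # neg1, neg2 are variables: each term carries a reified comparison
    total = 0
    n_terms = 0
    for a in party:
        for b in party:
            guard = 1 if a < b else 0   # stand-in for a Boolean var
            total += guard * joint[a][b]
            n_terms += 1                # every pair contributes a term
    return total, n_terms
```

Both forms compute the same value, but the second hands the solver u^2 guarded product terms instead of u(u-1)/2 plain ones.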

How do I avoid repeating long formulas in Excel when working with comparisons?

I know that something like the following
=IF(ISERROR(LONG_FORMULA), 0, LONG_FORMULA)
can be replaced with
=IFERROR(LONG_FORMULA, 0)
However I am looking for an expression to avoid having to type REALLY_LONG_FORMULA twice in
=IF(REALLY_LONG_FORMULA < threshold, 0, REALLY_LONG_FORMULA)
How can I do this?
I was able to come up with the following:
=IFERROR(EXP(LN(REALLY_LONG_FORMULA - threshold)) + threshold, 0)
It works by utilizing the fact that the log of a negative number produces an error and that EXP and LN are inverses of each other.
The biggest benefit of this is that it avoids accidentally introducing errors into your spreadsheet when you change something in one copy of REALLY_LONG_FORMULA but forget to apply the same change to the other copy in your IF statement.
Greater than comparisons as in
=IF(REALLY_LONG_FORMULA>=threshold,0,REALLY_LONG_FORMULA)
can be replaced with
=IFERROR(threshold-EXP(LN(threshold-REALLY_LONG_FORMULA)),0)
An example screenshot was provided by Jeeped.
For strict inequality comparisons, use SQRT(_)^2 as pointed out by Tom Sharpe.
If you're comparing against a threshold amount, consider checking out ExcelJet's recent blog post about
Replacing Ugly IFs with MAX() or MIN().
Also, the MAX() and MIN() functions are much more intuitive than lesser-known functions like EXP() and LN().
Comparing LN/EXP with SQRT(_)^2: SQRT(0) gives 0, but LN(0) gives #NUM!, so you can choose which one to use depending on whether you want the comparison to include equality or not.
These also work for negative numbers - in theory.
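The LN/EXP trick can be checked numerically. Here is a plain-Python sketch of =IFERROR(EXP(LN(F - threshold)) + threshold, 0) (the function name is mine), where Python's ValueError plays the role of Excel's #NUM! error:

```python
import math

# Sketch of the Excel formula =IFERROR(EXP(LN(F - threshold)) + threshold, 0):
# LN of a non-positive number errors, and IFERROR turns that error into 0,
# so the result is F when F > threshold and 0 otherwise, with F written once.
def iferror_exp_ln(f_value, threshold):
    try:
        return math.exp(math.log(f_value - threshold)) + threshold
    except ValueError:          # mirrors Excel's #NUM! from LN(<= 0)
        return 0
```

Note the boundary behaviour: a value exactly equal to the threshold yields 0, i.e. the strict comparison, which is precisely the LN vs SQRT(_)^2 distinction discussed above.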

Modelling Conway's game of life in integer linear programming?

I'm trying to model Conway's game of life rules using integer linear programming, however I'm stuck with one of the rules.
I consider a finite n x n grid. Each cell in the grid is associated with a variable X(i,j) whose value is 0 if the cell is dead and 1 if it is alive.
I'm particularly interested in still lifes, i.e. configurations that, according to the rules, don't change from an instant to the next.
To find them I'm imposing constraints on the number of neighbours of each cell. So, for a live cell to remain still it must have 2 or 3 live neighbours, and this is easily expressible:
2(1 - X(i,j)) + Σ(i,j) >= 2
-5(1 - X(i,j)) + Σ(i,j) <= 3
Where Σ(i,j) is the sum over the neighbours of (i,j) (assume the values outside the grid are all 0).
If X(i,j) == 0 then the first term guarantees that the constraints are trivially satisfied. When X(i,j) == 1 the constraints guarantee that we have a still life.
The problem is the other rule: for a dead cell to remain dead, its number of neighbours must be different from 3.
However, AFAIK you cannot use != in a constraint.
The closest I've come is:
X(i, j) + |Σ(i, j) - 3| > 0
Which does express what I want, but the problem is that I don't think the absolute value can be used in that way (only absolute values of single variables can be expressed; or is there a way to express this particular situation?).
I wonder, is there a standard way to express the !=?
I was thinking that maybe I should use multiple inequalities instead of a single one (e.g. for every possible triple/quadruple of neighbours...), but I cannot think of any sensible way to achieve that.
Or maybe there is some way of abusing the optimization function to penalize this situation and thus, obtaining an optimum would yield a correct solution (or state that it's impossible, depending on the value).
Is anyone able to express that constraint using a linear inequality and the variables X(i,j) (plus, possibly, some new variables)?
The standard way to express
Σ(i,j) != 3
is to express it as the disjunction
Σ(i,j) <= 2   OR   Σ(i,j) >= 4
by introducing a new binary variable that indicates which inequality holds true.
This is explained here, at section 7.4.
In a nutshell, if we define y such that
y = 1  =>  Σ(i,j) <= 2
y = 0  =>  Σ(i,j) >= 4
then we need to add the constraints
Σ(i,j) <= 2 + M(1 - y)
Σ(i,j) >= 4 - My
Where M is a constant large enough that the relaxed inequality can never be violated; here M = 8 works, since a cell has at most 8 neighbours. (To enforce the rule only for dead cells, the same trick applies: add M·X(i,j) to the slack on each side, so both constraints are relaxed whenever X(i,j) = 1.)
This is the standard way to model this, but in specific applications there might be better ways. Usually, the tighter the bounds on the sum are, the better the solver performance.
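A brute-force check of the big-M encoding (my own sketch; it assumes the split into Σ <= 2 or Σ >= 4 and takes M = 8, since a cell has at most 8 neighbours) confirms that exactly the sums different from 3 remain feasible:

```python
# Big-M encoding of "s != 3" as the disjunction "s <= 2 OR s >= 4",
# selected by a binary variable y. M = 8 is a valid bound because a sum
# of 8 neighbouring 0/1 cells never exceeds 8.
M = 8

def feasible(s, y):
    # y = 1 activates the "s <= 2" branch, y = 0 the "s >= 4" branch
    return s <= 2 + M * (1 - y) and s >= 4 - M * y

# enumerate every possible neighbour sum and keep those with a valid y
allowed = {s for s in range(9) if any(feasible(s, y) for y in (0, 1))}
```

Enumerating s from 0 to 8 leaves every value except 3 feasible, which is exactly the != constraint.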

Coin change, dynamic programming revisited

I am having a tough time understanding the logic behind this problem.
This is a classical Dynamic Programming problem:
Coin Change is the problem of finding the number
of ways of making changes for a particular amount of cents, n,
using a given set of denominations d1,d2,..dm;
I know how the recursion works, like taking the mth coin or not. But I don't understand what the '+' does between the two states.
For example:
C(N,m) = C(N,m-1) + C(N-dm,m)
The question might be stupid, but I would still like to know so that I can have a better understanding. Thanks
Well, you haven't written your state right!
Coin Change:
Let C(i,j) represent the number of ways to form j as a sum using only the first i coins (coins i down to 1).
Now, to get a recursive function you have to define the transitions, i.e. the change in state, or simply express the given quantity in terms of smaller values.
There are 2 independent ways in which this state can be reached:
1) Let's pick up the ith coin. Then what happens? I need to form denomination j - Denominations[i] with the first i-1 coins, if repetition is not allowed.
i.e. C(i,j) = C(i-1, j-Denominations[i])
But wait, we are missing some ways, i.e. when we do not take the current coin:
2) C(i,j) = C(i-1,j)
Now, as these two cases are independent and exhaustive (every way of forming j falls in exactly one of them), adding their counts gives the total number of ways. That is exactly what the '+' does:
C(i,j) = C(i-1,j) + C(i-1, j-Denominations[i])
I will leave the recurrence when repetition is allowed to you!
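The recurrence above can be sketched with memoisation in a few lines (count_ways is my own name; note this is the version where each coin is used at most once, unlike the question's C(N,m) = C(N,m-1) + C(N-dm,m), which allows repetition):

```python
from functools import lru_cache

# Memoised version of the answer's recurrence: each coin used at most once.
def count_ways(denominations, amount):
    @lru_cache(maxsize=None)
    def c(i, j):
        if j == 0:
            return 1                    # one way: take nothing more
        if j < 0 or i == 0:
            return 0                    # overshot, or no coins left
        # "skip coin i" plus "take coin i": disjoint cases, so add them
        return c(i - 1, j) + c(i - 1, j - denominations[i - 1])
    return c(len(denominations), amount)
```

For denominations [1, 2, 3], amount 4 is reachable only as {1, 3}, while amount 3 is reachable as {3} or {1, 2}, matching the two disjoint branches being added.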
