How to formulate a constraint in pulp involving indicator variable? - python-3.x

I looked here, here and here, but couldn't generalize the solutions to my problem, or there were no correct answers.
If I want a binary variable to turn on and off based on another variable's selection, so that indicator_var must be 0 when lpSum(some_var) = 0 and indicator_var must be 1 when lpSum(some_var) > 0 (where lpSum(some_var) will never be greater than 5), then if I write:
for j in some_list:
    prob += lpSum(some_var[i, j] for i in some_other_list) <= indicator_var[j] * 5
this ensures that indicator_var is 1 if lpSum > 0, which is fine, but it does not guarantee that indicator_var is 0 if lpSum = 0.
Hopefully it's clear what I want to achieve; if not, please let me know so I can clarify further with a more concrete example.

You didn’t say what type of variable you are summing, but assuming non-negativity, this should work:
prob += lpSum(...) >= indicator_var[...]
Edit: The above should be used in conjunction with the constraint you already have above (2 constraints needed to enforce the inference you want). So also:
prob += lpSum(...) <= indicator_var[...] * 5
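Putting the two constraints together, a minimal self-contained sketch might look like the following (the index sets, the variable bounds, and the objective-free model are illustrative assumptions; only the two linking constraints come from this answer):
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

# Hypothetical index sets standing in for some_list and some_other_list.
some_list = range(3)
some_other_list = range(4)

prob = LpProblem("indicator_example", LpMinimize)

# Non-negative variables whose column sums are linked to the indicators.
some_var = LpVariable.dicts(
    "x", [(i, j) for i in some_other_list for j in some_list], lowBound=0, upBound=1
)
indicator_var = LpVariable.dicts("y", some_list, cat="Binary")

for j in some_list:
    # If any some_var[i, j] is positive, this forces indicator_var[j] = 1.
    prob += lpSum(some_var[i, j] for i in some_other_list) <= indicator_var[j] * 5
    # If the sum is 0, this forces indicator_var[j] = 0 (variables are non-negative).
    prob += lpSum(some_var[i, j] for i in some_other_list) >= indicator_var[j]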

Related

Conditional Constraint Solving

How would you approach the following constraint optimization problem:
I have a set of integer variables x[i] that can each take only 4 values, in the range [1,4]
There are constraints of the form C <= x[i], x[i] <= C, and x[i] <= x[j]
There are also conditional constraints, but exclusively of the form "if 2 <= x[i] then 3 <= x[j]"
I want to minimize the number of variables that have the value 3
Edit: because I have a large number (thousands) of variables and constraints and performance is critical, I'm looking for a dedicated algorithm, not a general-purpose constraint solver.
You could encode each variable as a pair of binary variables:
x[i] = 1 + 2*x2[i] + x1[i]
The inequality constraints can now be partly resolved:
1 <= x[i] can be ignored, as always true for any variable
2 <= x[i] implies (x2[i] or x1[i])
3 <= x[i] implies x2[i]
4 <= x[i] implies (x2[i] and x1[i])
1 >= x[i] implies (!x2[i] and !x1[i])
2 >= x[i] implies !x2[i]
3 >= x[i] implies (!x2[i] or !x1[i])
4 >= x[i] can be ignored, as always true for any variable
x[i] <= x[j] implies (!x2[i] or x2[j]) and
(!x1[i] or x2[j] or x1[j]) and
(!x2[i] or !x1[i] or x1[j])
Conditional constraint
if 2 <= x[i] then 3 <= x[j]
translates to
(x2[j] or !x2[i]) and (x2[j] or !x1[i])
The encoding shown above can be directly written as Conjunctive Normal Form (CNF) suitable for a SAT solver. Tools like SATInterface or bc2cnf help to automate this translation.
To minimize the number of variables which have value 3, a counting circuit combined with a digital comparator could be constructed/modelled.
Variable x[i] has value 3, if (x2[i] and !x1[i]) is true. These expressions could be inputs of a counter. The counting result could then be compared to some value which is decreased until no more solutions can be found.
Bottom line:
The problem can be solved with a general purpose solver like a SAT solver (CaDiCaL, Z3, CryptoMiniSat) or a constraint solver like MiniZinc. I am not aware of a dedicated algorithm which would outperform the general purpose solvers.
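To make the encoding concrete, here is a small sketch (my own illustration, not part of the original answer) that generates the clauses for the ordering and conditional constraints as DIMACS-style integer literals; the variable numbering scheme is an assumption:
def var_ids(i):
    # Map variable index i (0-based) to DIMACS literal ids for (x2[i], x1[i]).
    return 2 * i + 1, 2 * i + 2

def clauses_le(i, j):
    # CNF clauses for the constraint x[i] <= x[j].
    x2i, x1i = var_ids(i)
    x2j, x1j = var_ids(j)
    return [[-x2i, x2j],
            [-x1i, x2j, x1j],
            [-x2i, -x1i, x1j]]

def clauses_conditional(i, j):
    # CNF clauses for: if 2 <= x[i] then 3 <= x[j].
    x2i, x1i = var_ids(i)
    x2j, _ = var_ids(j)
    return [[x2j, -x2i], [x2j, -x1i]]

# Example: x[0] <= x[1] together with (2 <= x[0] ==> 3 <= x[1])
cnf = clauses_le(0, 1) + clauses_conditional(0, 1)
print(cnf)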
Actually, there is a fairly simple and efficient algorithm for this particular problem.
It is enough to maintain and propagate intervals and start propagating the conditional constraints when the lower bounds become >= 2.
At the end, if the interval is exactly [3,4], the optimal solution is to select 4.
More precisely:
initialize l[i]:=1, u[i]:=4
propagate constraints until fixpoint as follows:
Constraint "C<=x[i]": l[i]:=max(l[i],C)
Constraint "x[i]<=C": u[i]:=min(u[i],C)
Constraint "x[i]<=x[j]": l[j]:=max(l[j],l[i]) and u[i]:=min(u[i],u[j])
Constraint 2<=x[i] ==> 3<=x[j]: if 2<=l[i], then l[j]:=max(l[j], 3)
If u[i]<l[i], there is no solution
Otherwise, select:
x[i]=4 if l[i]=3 and u[i]=4
x[i]=l[i] otherwise
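In Python, a sketch of this propagation and selection (my own illustration; the constraint tuple format is an assumption) could look like:
def solve(n, constraints):
    # constraints is a list of tuples:
    #   ("ge", C, i)   means C <= x[i]
    #   ("le", i, C)   means x[i] <= C
    #   ("leq", i, j)  means x[i] <= x[j]
    #   ("cond", i, j) means 2 <= x[i] ==> 3 <= x[j]
    l = [1] * n
    u = [4] * n

    changed = True
    while changed:                      # propagate until fixpoint
        snapshot = (list(l), list(u))
        for c in constraints:
            if c[0] == "ge":
                _, C, i = c
                l[i] = max(l[i], C)
            elif c[0] == "le":
                _, i, C = c
                u[i] = min(u[i], C)
            elif c[0] == "leq":
                _, i, j = c
                l[j] = max(l[j], l[i])
                u[i] = min(u[i], u[j])
            elif c[0] == "cond":
                _, i, j = c
                if l[i] >= 2:
                    l[j] = max(l[j], 3)
        changed = (l, u) != snapshot

    if any(u[i] < l[i] for i in range(n)):
        return None                     # infeasible

    # Bump to 4 exactly those variables whose interval is [3, 4]; keep l[i] otherwise.
    return [4 if (l[i] == 3 and u[i] == 4) else l[i] for i in range(n)]

# Example: x0 <= x1, (2 <= x0 ==> 3 <= x1), and 2 <= x0; no variable needs value 3.
print(solve(2, [("leq", 0, 1), ("cond", 0, 1), ("ge", 2, 0)]))    # [2, 4]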
This is correct and optimal because:
any solution is such that l[i]<=x[i]<=u[i], so if u[i]<l[i], there is no solution
otherwise, x[i]=l[i] is clearly a solution (but NOT x[i]=u[i] because it can be that u[i]>=2 but u[j] is not >=3)
bumping all x[i] from 3 to 4 when possible is still a solution because this change doesn't activate any new conditional constraints
what remains are the variables that are forced to be 3 (l[i]=u[i]=3), so we have found a solution with the minimal number of 3
In more detail, here is a full proof:
assume that a solution x[i] is such that l[i]<=x[i]<=u[i] and let's prove that this invariant is preserved by application of any propagation rule:
Constraint "x[i]<=x[j]": x[i]<=x[j]<=u[j] and so x[i] is both <=u[i] and <=u[j] and hence <=min(u[i],u[j]). Similarly, l[i]<=x[i]<=x[j] so max(l[i],l[j])<=x[j]
The constraints "x[i]<=C" and "C<=x[i]" are similar
For the constraint "2<=x[i] ==> 3<=x[j]": either l[i]<2 and the propagation rule doesn't apply or 2<=l[i] and then 2<=l[i]<=x[i] implying 3<=x[j]. So 3<=x[j] and l[j]<=x[j] hence max(3,l[j])<=x[j]
as a result, when the fixpoint is reached and no rule can be applied anymore, if any i is such that u[i]<l[i], then there is no solution
otherwise, let's prove that this x[i] is a solution, where x[i]=4 if l[i]=3 and u[i]=4, and x[i]=l[i] otherwise:
Note that x[i] is either l[i] or u[i], so l[i]<=x[i]<=u[i]
For all constraints "C<=x[i]", at fixpoint, we have l[i]=max(l[i],C), i.e., C<=l[i]<=x[i] and the constraint is satisfied
For all constraints "x[i]<=C", at fixpoint, we similarly have u[i]<=C and x[i]<=u[i]<=C and the constraint is satisfied
For all "x[i]<=x[j]", at fixpoint, we have: u[i] = min(u[i],u[j]) so u[i]<=u[j] and l[j] = max(l[j],l[i]), so l[i]<=l[j]. Then:
If u[j]<=3 then u[i]<=u[j]<=3 so x[i]=l[i]<=l[j]=x[j]
Otherwise, x[j]=u[j] and x[i]<=u[i]<=u[j]=x[j]
For all "2<=x[i] ==> 3<=x[j]": assume 2<=x[i]:
If u[i]<=3, then either:
l[i]<=2 and the fixpoint means l[j]:=max(l[j], 3) so 3<=l[j]<=x[j]
or l[i]=3 and 3=l[i]<=l[j]<=x[j]
If u[i]>3, then 3<u[i]<=u[j] and 3<u[i]<=u[j]=x[j]
Finally the solution is optimal because:
if l[i]=u[i]=3, any solution must have x[i]=3
otherwise, x[i] != 3: if l[i] != 3, then x[i]=l[i] != 3 (the bump to 4 only happens when l[i]=3); and if l[i]=3 with u[i]=4, then x[i]=4 != 3

How to increase number of Cplex solutions?

I have this CPLEX model that has one binary variable (x_i). Now I have two questions regarding its CPLEX solutions (I put them in one post because they are related).
First: For my model I get 26 solutions, but I know that in reality there are many more. How are solutions generated in CPLEX? Is there any way to increase the number of solutions?
Second: I want to access all of the solutions with a solution pool, but when I try to print all the solutions, it prints all the existing variables with their values (obviously I just need the variables that are equal to 1).
This is my code for the solution pool:
def generate_soln_pool(mdl):
    cpx = mdl.get_cplex()
    cpx.solnpoolintensity = 4
    cpx.solnpoolagap = 0
    cpx.populatelim = 100000
    try:
        cpx.populate_solution_pool()
    except CplexSolverError:
        print("Exception raised during populate")
        return []
    numsol = cpx.solution.pool.get_num()
    print(numsol)
    nb_vars = mdl.number_of_variables
    sol_pool = []
    for i in range(numsol):
        x_i = cpx.solution.pool.get_values(i)
        assert len(x_i) == nb_vars
        sol = mdl.new_solution()
        for k in range(nb_vars):
            vk = mdl.get_var_by_index(k)
            sol.add_var_value(vk, x_i[k])
        sol_pool.append(sol)
    return sol_pool

bm = CModel()
pool = generate_soln_pool(bm)
for s, sol in enumerate(pool, start=1):
    print(" this is solution #{0} of the pool".format(s))
    sol.display()
This is a part of my output:
x_0 = 0
x_1 = 0
x_2 = 0
x_3 = 0
x_4 = 0
x_5 = 0
x_6 = 0
x_7 = 0
x_8 = 0
x_9 = 0
x_10 = 0
x_11 = 1
x_12 = 0
x_13 = 0
.
.
.
I guess you took the parameter settings from the example in the documentation? Those parameters will make CPLEX enumerate all optimal solutions. If you want all solutions, you have to set the solution pool gap to a very large value.
CPLEX has many ways to generate solutions, but roughly it follows the standard branch-and-bound scheme augmented by heuristics.
Of course, a solution has a value for every variable. If you only want certain variables, you can use the various filtering and comprehension constructs that Python provides. For example, to get the indices of the binary variables that are 1 in the solution, you can do something like this:
indices = [j for j, a in enumerate(cpx.solution.pool.get_values(i)) if a > 0.5]
EDIT: After seeing and running the code, we found what the issues are:
The code only sets the absolute gap parameter; it should set the relative gap parameter as well.
The code sets parameters like cpx.solnpoolintensity = 4. This is not the correct way to set parameters; the statement just creates a new attribute on the object that is ignored by the rest of the code.
The correct way to set up parameters for enumerating (up to) 4000 solutions is
cpx.parameters.mip.pool.intensity.set(4)
cpx.parameters.mip.pool.absgap.set(1e75)
cpx.parameters.mip.pool.relgap.set(1e75)
cpx.parameters.mip.limits.populate.set(4000)
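Wrapped in a small helper (a sketch, assuming mdl is a docplex model exposing get_cplex() as in the question's code), this could look like:
from docplex.mp.model import Model

def set_pool_parameters(mdl: Model, pool_limit: int = 4000) -> None:
    # Configure the underlying CPLEX engine to enumerate (up to) pool_limit solutions.
    cpx = mdl.get_cplex()
    # Go through the parameters tree; plain attribute assignment is silently ignored.
    cpx.parameters.mip.pool.intensity.set(4)
    cpx.parameters.mip.pool.absgap.set(1e75)
    cpx.parameters.mip.pool.relgap.set(1e75)
    cpx.parameters.mip.limits.populate.set(pool_limit)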

Am I doing this while loop correctly? [duplicate]

This question already has answers here:
How do I plot this logarithm without a "while True" loop?
I am trying to plot the logarithm of twelve tone equal temperament on a scale of hertz.
Is this while loop that breaks in the middle the best way to iterate all of the audible notes in the scale? Could I do the same thing more accurately, or with less code?
I do not want to use a for loop because then the range would be defined arbitrarily, not by the audible range.
When I try to use "note > highest or note < lowest" as the condition for the while loop, it doesn't work. I'm assuming that's because of the scope of where "note" is defined.
highest = 20000
lowest = 20
key = 440
TET = 12
equal_temper = [key]
i = 1
while True:
    note = key * (2**(1/TET))**i
    if note > highest or note < lowest:
        break
    equal_temper.append(note)
    i += 1
i = 1
while True:
    note = key * (2**(1/TET))**-i
    if note > highest or note < lowest:
        break
    equal_temper.append(note)
    i += 1
equal_tempered = sorted(equal_temper)
for i in range(len(equal_temper)):
    print(equal_tempered[i])
The code returns a list of pitches (in hertz) that are very close to other tables I have looked at, but the higher numbers are further off. Setting a while loop to loop indefinitely seems to work, but I suspect there may be a more elegant way to write the loop.
As it turns out, you actually know the number of iterations! At least you can calculate it with some simple math: a note of index i is audible when lowest <= key * 2**(i/TET) <= highest, i.e. when TET*log2(lowest/key) <= i <= TET*log2(highest/key). Then you can use a list comprehension to build your list:
import math
min_I = math.ceil(TET*math.log2(lowest/key))
max_I = math.floor(TET*math.log2(highest/key))
equal_tempered = [key * 2 ** (i / TET) for i in range(min_I, max_I + 1)]
You can use the piano key formula:
freq_n = freq_ref * (2 ** (1/12)) ** (n - a)
The reference note is A4 at 440 Hz, the 49th key on the piano:
def piano_freq(key_no: int) -> float:
    ref_tone = 440
    ref_no = 49
    freq_ratio = 2 ** (1/12)
    return ref_tone * freq_ratio ** (key_no - ref_no)
Then you can do things like:
print(piano_freq(40)) # C4 = 261.6255653005985
print([piano_freq(no) for no in range(49, 49+12)]) # A4 .. G#5
Based on: https://en.wikipedia.org/wiki/Piano_key_frequencies

Sympy - Limit with parameter constraint

I am trying to calculate the limit of a function with a constraint on one of its parameters. Unfortunately, I got stuck with the parameter constraint.
I used the following code, where 0 < alpha < 1 should be assumed:
import sympy
sympy.init_printing()
K,L,alpha = sympy.symbols("K L alpha")
Y = (K**alpha)*(L**(1-alpha))
sympy.limit(sympy.assumptions.refine(Y.subs(L,1),sympy.Q.positive(1-alpha) & sympy.Q.positive(alpha)),K,0,"-")
Yet, this doesn't work. Is there any possibility to handle assumptions as in Mathematica?
Best and thank you,
Fabian
To my knowledge, the assumptions made by the Assumptions module are not yet understood by the rest of SymPy. But limit can understand an assumption that is imposed at the time a symbol is created:
K, L = sympy.symbols("K L")
alpha = sympy.Symbol("alpha", positive=True)
Y = (K**alpha)*(L**(1-alpha))
sympy.limit(Y.subs(L, 1), K, 0, "-")
The limit now evaluates to 0.
There isn't a way to declare a symbol to be a number between 0 and 1, but one may be able to work around this by declaring a positive symbol, say t, and letting alpha = t/(1+t).
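For example, a sketch of that workaround (untested; the auxiliary symbol t is my own illustration) might look like:
import sympy

t = sympy.Symbol("t", positive=True)
alpha = t / (1 + t)          # strictly between 0 and 1 for every positive t
K, L = sympy.symbols("K L")
Y = K**alpha * L**(1 - alpha)
print(sympy.limit(Y.subs(L, 1), K, 0, "-"))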

Coin Change Optimization

I'm trying to solve this problem:
Suppose I have a set of n coins {a_1, a_2, ..., a_n}. A coin with value
1 will always appear. What is the minimum number of coins I
need to reach M?
The constraints are:
1 ≤ n ≤ 25
1 ≤ M ≤ 10^6
1 ≤ a_i ≤ 100
Ok, I know that it's the Change-making problem.
I have tried to solve this problem using breadth-first search, dynamic programming, and a greedy approach (which is incorrect, since it doesn't always give the best solution). However, I get Time Limit Exceeded (3 seconds).
So I wonder if there's an optimization for this problem.
The description and the constraints caught my attention, but I don't know how to use them in my favour:
A coin with value 1 will always appear.
1 ≤ a_i ≤ 100
I saw on Wikipedia that this problem can also be solved by "dynamic programming with the probabilistic convolution tree", but I could not understand anything.
Can you help me?
This problem can be found here: http://goo.gl/nzQJem
Let a_n be the largest coin. Use these two clues:
the result is >= ceil(M/a_n),
the result configuration has a lot of a_n's.
It is best to start with the maximum number of a_n's and then check whether a better result exists with fewer a_n's, for as long as a better result is still possible.
Something like: let R({a_1, ..., a_n}, M) be a function that returns the result for a given problem. Then R can be implemented as:
num_a_n = floor(M/a_n)
best_r = num_a_n + R({a_1, ..., a_(n-1)}, M - a_n*num_a_n)
while num_a_n > 0:
    num_a_n = num_a_n - 1
    # Check whether it is possible at all to get a better result
    if num_a_n + ceil((M - a_n*num_a_n) / a_(n-1)) >= best_r:
        return best_r
    next_r = num_a_n + R({a_1, ..., a_(n-1)}, M - a_n*num_a_n)
    if next_r < best_r:
        best_r = next_r
return best_r
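Translated into Python, a sketch of this scheme (my own illustration; the function name and recursion layout are made up) could be:
import math

def min_coins(coins, M):
    # Branch-and-bound version of the recursion above.
    # coins is assumed to contain the value 1, so a solution always exists.
    coins = sorted(coins)

    def solve(n, m):
        if n == 0 or m == 0:
            return m                     # pay the remainder with coins of value 1
        a_n = coins[n]
        num_a_n = m // a_n
        best_r = num_a_n + solve(n - 1, m - a_n * num_a_n)
        while num_a_n > 0:
            num_a_n -= 1
            # Lower bound: even paying the rest only with the next-largest coin
            # cannot beat the best result found so far.
            if num_a_n + math.ceil((m - a_n * num_a_n) / coins[n - 1]) >= best_r:
                return best_r
            next_r = num_a_n + solve(n - 1, m - a_n * num_a_n)
            if next_r < best_r:
                best_r = next_r
        return best_r

    return solve(len(coins) - 1, M)

print(min_coins([1, 3, 4], 6))    # 2  (3 + 3)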
