How do I know the variable ordering for CheckSatisfied?

I am trying to write some unit tests for my constraints using the CheckSatisfied function. How do I know the variable order of the input vector x?
E.g.
q = prog.NewContinuousVariables(1, 'q')
r = prog.NewContinuousVariables(2, 'r')
formula = le(q, r[0] + r[1])
constraint = prog.AddConstraint(formula)
assert(constraint.evaluator().CheckSatisfied([0.3, 0.5, 1]))
How do I know which variable each of 0.3, 0.5, and 1 corresponds to?
Is it dependent on how the constraints are added, and if so, how do I know the variable order for constraints added in the myriad of ways?

The order of the variables is stored in the return value of AddConstraint. If you check constraint.variables(), you will see the variable order. The pseudo code is:
constraint = prog.AddConstraint(formula)
print(f"{constraint.variables()}")

Related

Using multivalue DDs to solve multistate reliability quantification

#DCTLib, do you recall this discussion below? You suggested a recursive equation, which was the right approach.
Cudd_PrintMinterm, accessing the individual minterms in the sum of products
Now, I am considering multistate reliability, where a component can either not fail or fail into one of n-1 different states, with n >= 2. Tulip-dd implements MDDs as described in:
https://github.com/tulip-control/dd/blob/master/doc.md#multi-valued-decision-diagrams-mdd
https://github.com/tulip-control/dd/issues/71
https://github.com/tulip-control/dd/issues/66
In the diagrams in the drawings below, we have defined an MDD declared by:
aut.declare_variable(x=(0,3))
u = aut.add_expr('x=i')
Each value/state of the multi-state variable (MSV) x, i.e., x=0, x=1, x=2, or x=3, leads to a specific BDD as shown in the diagrams at the bottom, taking a four-state variable x as an example here. The notation is that state 0 represents the normal state, and x can fail into the different states 1, 2, and 3. The failure probabilities are assigned in the table below. In the BDDs below, we (and tulip as well) use binary coding with two bits x_1 and x_0 to represent each state/value of the MSV. The least significant bit (LSB), i.e., x_0, is always the ancestor. Each of the BDD diagrams below is a representation of a specific value, or state.
To quantify the BDD of a specific state, i.e., the top node, we must know the probabilities of the binary variables x_0 and x_1 taking different branches (then or else) in the BDD. These branch probabilities are not given directly but need to be calculated from the BDD structure.
The key here is that the child node probabilities and the branch probabilities of the parent node must be known prior to the calculation of the parent node probability.
In the previous BDD quantification, we knew the probabilities of branches from node x_1 to leaf nodes when calculating node x_1 probability. We did not need to know how node x_1 was connected to node x_0.
Now, for this four-state variable x, we need to know how node x_1 is connected to node x_0, the binary variable representing the least significant bit, to determine the probabilities of branches from node x_1 to leaf nodes. The question is how to implement this?
Here’s one attempt that falls short:
import numpy as np
from omega.symbolic import temporal as trl

aut = trl.Automaton()

# Example of function that returns branch probabilities
def prntr(v, pvars):
    assert sum(pvars) == 1.0, "Probs must add to 1!"
    if (v.high).var == 'x_1':
        if (v.low) == aut.true:
            return pvars[1] + pvars[3], pvars[1]
        else:
            return pvars[1] + pvars[3], pvars[3]
    if (v.low).var == 'x_1':
        if (v.low).negated:
            return pvars[1] + pvars[3], pvars[0]
        else:
            return pvars[1] + pvars[3], pvars[2]

aut.declare_variables(x=(0, 3))
u = aut.add_expr('x = 3')
pvars = np.array([0.1, 0.2, 0.3, 0.4])
prntr(u, pvars)
The key here is that the child node probabilities and the branch probabilities of the parent node must be known prior to the calculation of the parent node probability.
Yes, exactly. In this case, a fully recursive bottom-up computation, like normally done with BDDs, will not work for the reason that you wrote.
However, the approach will start to work again when you treat the variables that together form a state to be a block. So in your recursive function for the probability calculation, whenever you encounter a variable for a block, you treat the node and the successor nodes for the same state component as a block and only recurse when you encounter a node not belonging to the block.
Note that this approach requires that the variables for the state appear continuously in the variable ordering. For the CUDD library, you can constrain the automatic variable reordering to guarantee this.
The following code is a modification of yours implementing this idea:
#!/usr/bin/env python3
import numpy as np
from omega.symbolic import temporal as trl

aut = trl.Automaton()

# Example of function that returns branch probabilities
# Does not yet use result caching and does not yet support assigning probabilities
# to more than one state variable set
def prntr(v, pvars):
    assert abs(sum(pvars) - 1.0) <= 0.0001, "Probs must add to 1!"
    if v == aut.true:
        return 1.0
    elif v == aut.false:
        return 0.0
    if v.var in ["x_0", "x_1"]:
        thisSum = 0.0
        # Compute probabilities
        for i, p in enumerate(pvars):
            # Find the successor node for state i
            # Assuming that x_1 always comes after x_0 in the variable order
            thisV = v
            negate = thisV.negated
            if thisV.var == 'x_0':
                if i & 1:
                    thisV = thisV.high
                else:
                    thisV = thisV.low
                negate = negate ^ thisV.negated
            if thisV.var == 'x_1':
                if i & 2:
                    thisV = thisV.high
                else:
                    thisV = thisV.low
            if negate:
                thisSum += p * prntr(~thisV, pvars)
            else:
                thisSum += p * prntr(thisV, pvars)
        return thisSum
    # TODO: What is the semantics for variables outside of the current block?
    return 0.5 * prntr(v.high, pvars) + 0.5 * prntr(v.low, pvars)

pvars = np.array([0.1, 0.2, 0.3, 0.4])
aut.declare_variables(x=(0, 3))
u = aut.add_expr('x = 0')
print(prntr(u, pvars))
u2 = aut.add_expr('x = 3') | aut.add_expr('x = 2')
print(prntr(u2, pvars))

How can we solve the linear equation with PuLP in Python with a range conditional constraint?

I have an objective function with a pre-decided objective value, but I want to know the values of the decision variables for that objective function.
from pulp import LpMaximize, LpProblem, LpStatus, lpSum, LpVariable, LpConstraint
constraints = ['0 <= X1<= 150',
               '0 <= X2= 1453',
               '0 <= X3<= 12',
               '0 <= X4<= 149',
               'X1+X2 <= 14',
               'X3+X4 <=1',
               'X1+ X3 <= 6',
               'X2 +X4 <= 9']
for i in range(4):
    t = f'X{i+1} = LpVariable('X{i+1}' , cat= \'Integer\')'
    exec(t)
model = LpProblem(name="test", sense=LpMaximize)
for i in range(0, len(constraints)):
    model += (eval(constraints[i]), 'constraint' + str(i))
# objective function
model += lpsum([eval('X1+X2+X3+X4')]
status = model.solve()
for var in model.varibles():
    print(f"{var.name}: {var.value()}")
The expected output is X1=6, X2=8, X3=0, X4=1, but I am getting X1=-6, X2=20, X3=12, X4=-11, even though I have added range constraints for the decision variables.
Can anybody help me with this? How can I get the expected output, where the variable values should not be negative?
A couple of things...
First, the code you pasted above does not execute: it has a handful of typos that need to be fixed, so I'm not sure how you are getting a solution out of it.
Second, it isn't clear why you are using the strings-and-exec() approach, which is non-standard. Why not code the constraints directly rather than processing strings?
To my knowledge, pulp cannot process two-sided inequalities. Break each of your constraints into single inequalities and try again. Comment back if stuck! :)
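For reference, here is a sketch of how the same model might be coded directly, without strings or exec(). The two-sided range constraints become variable bounds (lowBound/upBound), and the remaining constraints are single inequalities; this assumes the intended ranges in the question were 0 <= X1 <= 150, 0 <= X2 <= 1453, 0 <= X3 <= 12, 0 <= X4 <= 149.

```python
from pulp import LpMaximize, LpProblem, lpSum, LpVariable

# Declare the variables with their bounds directly -- no exec()/eval() needed.
# lowBound=0 guarantees the values cannot go negative.
upper_bounds = [150, 1453, 12, 149]
x = [LpVariable(f"X{i + 1}", lowBound=0, upBound=ub, cat="Integer")
     for i, ub in enumerate(upper_bounds)]

model = LpProblem(name="test", sense=LpMaximize)

# Objective: maximize X1 + X2 + X3 + X4
model += lpSum(x)

# The remaining constraints are single one-sided inequalities
model += x[0] + x[1] <= 14
model += x[2] + x[3] <= 1
model += x[0] + x[2] <= 6
model += x[1] + x[3] <= 9

model.solve()
for var in model.variables():
    print(f"{var.name}: {var.value()}")
```

With these bounds the optimal objective value is 15, but the optimum is not unique: X1=6, X2=8, X3=0, X4=1 and, for example, X1=5, X2=9, X3=1, X4=0 both satisfy all constraints with total 15, so the solver may return a different optimal split than the one expected.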

When setting a variable in python, why does the order matter?

def calculate_error(m, b, point):
    x_point, y_point = point
    y = m*x_point + b
    distance = abs(y - y_point)
    return distance

print(calculate_error(2, 0, (5, 5)))
I ran the code above and it worked. But I do not understand why it doesn't work when I switch the order and try setting point = x_point, y_point instead.
Because:
x_point, y_point = point
unpacks the tuple point into the two variables, each of which gets one of the two fives, whereas
point = x_point, y_point
attempts to build a tuple out of x_point and y_point, which have not been defined yet, so it fails with a NameError.
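A minimal illustration of the difference, using the same names as the question:

```python
point = (5, 5)

# Tuple unpacking: the two items of point are split across the two names
x_point, y_point = point
print(x_point, y_point)   # 5 5

# Tuple packing: the right-hand side builds a new tuple from existing names
point = x_point, y_point
print(point)              # (5, 5)

# Packing names that do not exist yet fails with a NameError
try:
    pair = a_point, b_point
except NameError as err:
    print(err)
```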

How does this list comprehension with multiple assigned variables work?

I have a list of strings. I need to parse and convert the string into floats and use that for a calculation.
After multiple attempts, I figured out the easiest way to do this.
List = ["1x+1y+0", "1x-1y+0", "1x+0y-3", "0x+1y-0.5"]
I need to extract the numerical coefficients of x and y
I used:
for coef in re.split('x|y', line):
    float(coef)
This was not serving the purpose and then I found out that,
for line in List:
    a, b, c = [float(coef) for coef in re.split('x|y', line)]
this code works.
If I do
a = [float(coef) for coef in re.split('x|y', line)]
then a is a list with the coefficients of each line:
[1.0, 1.0, 0.0]
[1.0, -1.0, 0.0]
[1.0, 0.0, -3.0]
[0.0, 1.0, -0.5]
However, I am struggling to understand the logic. Here we used a list comprehension. How can we assign multiple variables from a list comprehension? Is the way it works as follows: for each string element in the list, it splits the string and converts the pieces into floats, and then assigns the three resulting numbers to three variables?
But how is it that if we assign to only one variable we get a list, but if we assign to multiple variables the type changes?
I am sorry if the question is too basic; I am new to Python, hence the doubt.
a, b, c = x is called sequence unpacking. It is (almost) equivalent to:
a = x[0]
b = x[1]
c = x[2]
So a,b,c = [float(coef) for coef in re.split('x|y', line)] actually means:
x = [float(coef) for coef in re.split('x|y', line)]
a = x[0]
b = x[1]
c = x[2]
But a = x is not unpacking - it's just a normal assignment: it makes a another reference to the same list. The difference: in the first case you assign a list to three variables, and each "gets" one item of the list. In the second case, you assign the list to one variable, which "gets" the whole list. Assigning a list of three numbers to two variables (a, b = [1, 2, 3]) is invalid - you get a ValueError saying that there are too many values to unpack.
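A short demonstration of the difference, using coefficients like those above:

```python
nums = [1.0, -1.0, 0.0]

# Sequence unpacking: each name receives one item of the list
a, b, c = nums
print(a, b, c)        # 1.0 -1.0 0.0

# Plain assignment: one name refers to the whole list (no copy is made)
whole = nums
print(whole is nums)  # True

# A length mismatch raises ValueError
try:
    a, b = [1, 2, 3]
except ValueError as err:
    print(err)        # too many values to unpack (expected 2)
```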

Sympy - Limit with parameter constraint

I am trying to calculate the limit of a function with a constraint on one of its parameters. Unfortunately, I got stuck with the parameter constraint.
I used the following code, where 0 < alpha < 1 should be assumed:
import sympy
sympy.init_printing()
K,L,alpha = sympy.symbols("K L alpha")
Y = (K**alpha)*(L**(1-alpha))
sympy.limit(sympy.assumptions.refine(Y.subs(L, 1), sympy.Q.positive(1 - alpha) & sympy.Q.positive(alpha)), K, 0, "-")
Yet, this doesn't work. Is there any possibility to handle assumptions as in Mathematica?
Best and thank you,
Fabian
To my knowledge, the assumptions made by the Assumptions module are not yet understood by the rest of SymPy. But limit can understand an assumption that is imposed at the time a symbol is created:
K, L = sympy.symbols("K L")
alpha = sympy.Symbol("alpha", positive=True)
Y = (K**alpha)*(L**(1-alpha))
sympy.limit(Y.subs(L, 1), K, 0, "-")
The limit now evaluates to 0.
There isn't a way to declare a symbol to be a number between 0 and 1, but one may be able to work around this by declaring a positive symbol, say t, and letting L = t/(1+t).
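That workaround might look like the following sketch. Here t is a new positive symbol (not in the original question), and alpha = t/(1+t) is then automatically confined to the open interval (0, 1):

```python
import sympy

K = sympy.Symbol("K")
t = sympy.Symbol("t", positive=True)

# t > 0 forces 0 < t/(1 + t) < 1, so alpha behaves like a number in (0, 1)
alpha = t / (1 + t)

Y = K**alpha  # the function with L already substituted by 1
print(sympy.limit(Y, K, 0, "-"))
```

SymPy's core assumption system can tell that this alpha is positive (alpha.is_positive is True), which is what the limit computation needs.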
