multiple models in cplex - python

I want to implement the epsilon constraint method, where I need to have multiple models with similar variables and almost identical constraints. I was wondering how I can define a variable (or constraint) that I can use in all models. For example, please assume that I want to add a binary variable "x" and "con1" to two models ("mdl1" and "mdl2"). I have coded this problem as below, but it is not working. Would you please help me?
from docplex.mp.model import Model
# Model names
mdl1 = Model("OBJ1")
mdl2 = Model("OBJ2")
# set_idx1 is defined here.
# Variables
x = mdl1.binary_var_dict(set_idx1, name="x")
x = mdl2.binary_var_dict(set_idx1, name="x")
Moreover, how should I define constraints so as to avoid duplicated effort? Thanks!

In Optimization easy with python, let me share the example "clone a linear model":
from docplex.mp.model import Model
mdl = Model(name='buses')
nbbus40 = mdl.integer_var(name='nbBus40')
nbbus30 = mdl.integer_var(name='nbBus30')
mdl.add_constraint(nbbus40*40 + nbbus30*30 >= 300, 'kids')
mdl.minimize(nbbus40*500 + nbbus30*400)
mdl.solve(log_output=False)
mdlclone = mdl.clone()
mdlequal = mdl
# set upper bound for nbbus40 to 0
nbbus40.ub = 0
mdlclone.solve(log_output=False)
mdlequal.solve(log_output=False)
print("clone")
for v in mdlclone.iter_integer_vars():
    print(v, " = ", v.solution_value)
print("= operator")
for v in mdlequal.iter_integer_vars():
    print(v, " = ", v.solution_value)
which gives
clone
nbBus40 = 6.0
nbBus30 = 2.0
= operator
nbBus40 = 0
nbBus30 = 10.0

About using a modeling artefact in different models: this is not possible; each artefact belongs to exactly one parent model. (That is what the example above shows: the clone gets its own copy of the variables, while mdlequal is just another name for mdl, so setting nbbus40.ub = 0 changes the "=" model but not the clone.) Why would you want to do it?
About the second question: you are right to want to avoid code duplication when writing a complex constraint. I suggest writing a function that takes (at least) the model as an argument, plus other input arguments.
A very simple example:
def new_ct_sum1(mdl, x, y):
    # returns a constraint stating x + y == 1
    # assumes x, y are in model mdl, otherwise an error is raised
    return mdl.add(x + y == 1)
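Putting the two answers together for the original question, here is a minimal sketch of the pattern (the body of "con1" and the value of set_idx1 are purely illustrative, not from the question):
from docplex.mp.model import Model

def build_model(name, set_idx1):
    # Each model owns its own copies of the variables and constraints.
    mdl = Model(name)
    x = mdl.binary_var_dict(set_idx1, name="x")
    # con1 is an illustrative constraint: select at most one x[i]
    mdl.add_constraint(mdl.sum(x[i] for i in set_idx1) <= 1, "con1")
    return mdl, x

set_idx1 = [1, 2, 3]
mdl1, x1 = build_model("OBJ1", set_idx1)
mdl2, x2 = build_model("OBJ2", set_idx1)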

Related

Using Pyomo Kernel to solve Disjunctive models

I am trying to recreate a problem in the "Pyomo - Optimization Modeling in Python" book using the pyomo kernel instead of the environ. The problem is on page 163 and is called "9.4 A mixing problem with semi-continuous variables." For those without the book, here it is:
The following model illustrates a simple mixing problem with three semi-continuous
variables (x1, x2, x3) which represent quantities that are mixed to meet a volumetric
constraint. In this simple example, the number of sources is minimized:
from pyomo.environ import *
from pyomo.gdp import *

L = [1,2,3]
U = [2,4,6]
index = [0,1,2]

model = ConcreteModel()
model.x = Var(index, within=Reals, bounds=(0,20))

# Each disjunct is a semi-continuous variable
# x[k] == 0 or L[k] <= x[k] <= U[k]
def d_rule(block, k, i):
    m = block.model()
    if i == 0:
        block.c = Constraint(expr=m.x[k] == 0)
    else:
        block.c = Constraint(expr=L[k] <= m.x[k] <= U[k])
model.d = Disjunct(index, [0,1], rule=d_rule)

# There are three disjunctions
def D_rule(block, k):
    model = block.model()
    return [model.d[k,0], model.d[k,1]]
model.D = Disjunction(index, rule=D_rule)

# Minimize the number of x variables that are nonzero
model.o = Objective(expr=sum(model.d[k,1].indicator_var for k in index))

# Satisfy a demand that is met by these variables
model.c = Constraint(expr=sum(model.x[k] for k in index) >= 7)
I need to refactor this problem to work in the pyomo kernel, but the kernel is not yet compatible with the pyomo gdp used to transform disjunctive models to linear ones. Has anyone run into this problem, and if so, did you find a good method to solve disjunctive models in the pyomo kernel?
I have a partial rewrite of pyomo.gdp that I could make available on a public GitHub branch (probably working, but it lacks testing). However, I am wary of investing more time in rewrites like this, as the better approach would be to re-implement the standard pyomo.environ API on top of kernel, which would make all of the extensions compatible.
With that being said, if there are collaborators willing to share in some of the development and testing, I would be happy to help complete the kernel-gdp version I've started. If you want to discuss this further, it would probably be best to open an issue on the Pyomo GitHub page.
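For reference, in pyomo.environ the disjunctive model above is usually linearized with a GDP transformation before being handed to a MIP solver; a minimal sketch (the choice of the big-M reformulation and of the GLPK solver are my assumptions):
# Convert the disjunctions into mixed-integer constraints (big-M reformulation)
TransformationFactory('gdp.bigm').apply_to(model)
# Solve with any MIP solver installed locally, e.g. GLPK
SolverFactory('glpk').solve(model)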

How to create SOC constraints with elements from different Variable vectors

I am working on an optimization problem with cvxpy, and I need to create a SOC (second-order cone) constraint.
The way described in the cvxpy documentation is the following:
We use cp.SOC(t, x) to create the SOC constraint ||x||_2 <= t.
where t is the scalar part of the second-order constraint, and x is a matrix whose rows/columns are each a cone.
Here is the standard way cvxpy solves an SOCP problem.
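For reference, the documented pattern looks roughly like this (a minimal sketch; the objective and shapes are my assumptions):
import cvxpy as cp

n = 3
x = cp.Variable(n)
t = cp.Variable()
# ||x||_2 <= t as a second-order cone constraint
prob = cp.Problem(cp.Minimize(t), [cp.SOC(t, x)])
prob.solve()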
But now I need to extract Variables from different places.
import numpy as np
import cvxpy as cvx

Y = cvx.Variable(3)
Z = cvx.Variable(3)
T = cvx.Variable(3)
soc_constraints = []
for i in range(3):
    t = T[i]
    x = np.matrix([Y[i], Z[i]])
    soc_constraints += [cvx.SOC(t, x)]
But I get an error here:
AttributeError: 'matrix' object has no attribute 'variables'
I suppose x should be a cvxpy expression, but how can I create a SOC constraint out of different Variable vectors?
Some help would be appreciated.
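One way to make this work (a sketch of my own, not from the original thread): cp.SOC needs a cvxpy expression, so stack the scalar entries with cvx.hstack instead of np.matrix:
soc_constraints = []
for i in range(3):
    # hstack builds a length-2 cvxpy expression from the two scalar entries
    soc_constraints += [cvx.SOC(T[i], cvx.hstack([Y[i], Z[i]]))]
An equivalent DCP formulation would be cvx.norm(cvx.hstack([Y[i], Z[i]]), 2) <= T[i].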

zeroinflatedpoisson model in python

I want to use python3 to build a zero-inflated Poisson model. I found the function statsmodels.discrete.count_model.ZeroInflatedPoisson in the statsmodels library.
I just wonder how to use it. It seems I should do:
ZeroInflatedPoisson(y_train, x_train).fit()
But when I want to predict using x_test, it tells me the length of x_test doesn't match x_train.
Or is there another package to fit this model?
Here is the code I used:
import random
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from statsmodels.discrete.count_model import ZeroInflatedPoisson

x1 = [random.randint(0,1) for i in range(200)]
x2 = [random.randint(1,2) for i in range(200)]
y = np.random.poisson(lam=2, size=100).tolist()
for i in range(100):
    y.append(0)

df = pd.DataFrame()
df['x1'] = x1
df['x2'] = x2
df['y'] = y
df_x = df.iloc[:,:-1]
x_train, x_test, y_train, y_test = train_test_split(df_x, df['y'], test_size=0.3)
clf = ZeroInflatedPoisson(endog=y_train, exog=x_train).fit()
clf.predict(x_test)
ValueError: operands could not be broadcast together with shapes (140,) (60,)
I also tried:
clf.predict(x_test,exog = np.ones(len(x_test)))
ValueError: shapes (60,) and (1,) not aligned: 60 (dim 0) != 1 (dim 0)
This looks like a bug to me.
As far as I can see:
If there are no explanatory variables, exog_infl, specified for the inflation model, then an array of ones is used to model a constant inflation probability.
However, if exog_infl in predict is None, then it uses the model.exog_infl, which is an array of ones with length equal to the training sample.
As a workaround, specifying a 1-D array of ones of the correct length in predict should work.
Try:
clf.predict(test_x, exog_infl=np.ones(len(test_x)))
I guess the same problem will occur if exposure was used in the model but is not explicitly specified in predict.
I ran into the same problem, which landed me on this thread. As noted by Josef, it seems you need to provide exog_infl with an array of ones of the correct length for it to work.
However, the code Josef provided misses the extra array dimension, so the full line required to generate the required array is actually:
clf.predict(test_x, exog_infl=np.ones((len(test_x), 1)))

tensorflow: creating variables in fn of tf.map_fn returns value error

I have questions regarding variable initialization in map_fn.
I was trying to apply some highway layers separately to each individual element in a tensor, so I figured map_fn might be the best way to do it.
segment_list = tf.reshape(raw_segment_embedding,[batch_size*seqlen,embed_dim])
segment_embedding = tf.map_fn(lambda x: stack_highways(x, hparams), segment_list)
Now the problem is that my fn, i.e. stack_highways, creates variables, and for some reason TensorFlow fails to initialize those variables and gives this error:
W = tf.Variable(tf.truncated_normal(W_shape, stddev=0.1), name='weight')
ValueError: Initializer for variable body/model/parallel_0/body/map/while/highway_layer0/weight/ is from inside a control-flow construct, such as a loop or conditional. When creating a variable inside a loop or conditional, use a lambda as the initializer.
I am pretty clueless now. Based on the error, I suppose it is not about scope, but I have no idea how to use a lambda as the initializer (I don't even know what exactly that means).
Below is the implementation of stack_highways; any advice would be much appreciated.
def weight_bias(W_shape, b_shape, bias_init=0.1):
    """Fully connected highway layer adopted from
    https://github.com/fomorians/highway-fcn/blob/master/main.py
    """
    W = tf.Variable(tf.truncated_normal(W_shape, stddev=0.1), name='weight')
    b = tf.Variable(tf.constant(bias_init, shape=b_shape), name='bias')
    return W, b

def highway_layer(x, size, activation, carry_bias=-1.0):
    """Fully connected highway layer adopted from
    https://github.com/fomorians/highway-fcn/blob/master/main.py
    """
    W, b = weight_bias([size, size], [size])
    with tf.name_scope('transform_gate'):
        W_T, b_T = weight_bias([size, size], [size], bias_init=carry_bias)
    H = activation(tf.matmul(x, W) + b, name='activation')
    T = tf.sigmoid(tf.matmul(x, W_T) + b_T, name='transform_gate')
    C = tf.sub(1.0, T, name="carry_gate")
    y = tf.add(tf.mul(H, T), tf.mul(x, C), name='y')  # y = (H * T) + (x * C)
    return y

def stack_highways(x, hparams):
    """Create highway networks; this does not create
    a padding layer at the bottom or the top, it is
    just layers of highways.
    Args:
        x: a raw_segment_embedding
        hparams: run hyperparameters
    Returns:
        y: a segment_embedding
    """
    highway_size = hparams.highway_size
    activation = hparams.highway_activation  # tf.nn.relu
    carry_bias_init = hparams.highway_carry_bias
    prev_y = None
    y = None
    for i in range(highway_size):
        with tf.name_scope("highway_layer{}".format(i)) as scope:
            if i == 0:  # first, input layer
                prev_y = highway_layer(x, highway_size, activation, carry_bias=carry_bias_init)
            elif i == highway_size - 1:  # last, output layer
                y = highway_layer(prev_y, highway_size, activation, carry_bias=carry_bias_init)
            else:  # hidden layers
                prev_y = highway_layer(prev_y, highway_size, activation, carry_bias=carry_bias_init)
    return y
Warmest Regards,
Colman
TensorFlow provides two main ways of initializing variables:
"lambda" initializers: callables that return the value of initialization. TF provides many nicely packaged ones.
Initialization by tensor values: This is what you are using currently.
The error message is stating that you need to use the first type of initializer when using variables from within a while_loop (which map_fn calls internally). (In general lambda initializers seem more robust to me.)
Additionally in the past, tf.get_variable seems to be preferred over tf.Variable when used from within control flow.
So, I suspect you can resolve your issue by fixing your weight_bias function to something like this:
def weight_bias(W_shape, b_shape, bias_init=0.1):
    """Fully connected highway layer adopted from
    https://github.com/fomorians/highway-fcn/blob/master/main.py
    """
    W = tf.get_variable("weight", shape=W_shape,
                        initializer=tf.truncated_normal_initializer(stddev=0.1))
    b = tf.get_variable("bias", shape=b_shape,
                        initializer=tf.constant_initializer(bias_init))
    return W, b
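One caveat to add (my assumption, not part of the original answer): tf.name_scope does not affect the names that tf.get_variable creates, so the two weight_bias calls inside highway_layer would both try to create a variable named "weight". Wrapping each call in a tf.variable_scope avoids the collision, roughly:
def highway_layer(x, size, activation, carry_bias=-1.0):
    with tf.variable_scope('plain'):
        W, b = weight_bias([size, size], [size])
    with tf.variable_scope('transform_gate'):
        W_T, b_T = weight_bias([size, size], [size], bias_init=carry_bias)
    # ... rest unchanged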
Hope that helps!

How to add a confusion matrix to Theano examples?

I want to make use of Theano's logistic regression classifier, but I would like to make an apples-to-apples comparison with previous studies I've done to see how deep learning stacks up. I recognize this is probably a fairly simple task if I were more proficient in Theano, but this is what I have so far. From the tutorials on the website, I have the following code:
def errors(self, y):
    # check if y has same dimension of y_pred
    if y.ndim != self.y_pred.ndim:
        raise TypeError(
            'y should have the same shape as self.y_pred',
            ('y', y.type, 'y_pred', self.y_pred.type)
        )
    # check if y is of the correct datatype
    if y.dtype.startswith('int'):
        # the T.neq operator returns a vector of 0s and 1s, where 1
        # represents a mistake in prediction
        return T.mean(T.neq(self.y_pred, y))
I'm pretty sure this is where I need to add the functionality, but I'm not certain how to go about it. What I need is either access to y_pred and y for each and every run (to update my confusion matrix in python) or to have the C++ code handle the confusion matrix and return it at some point along the way. I don't think I can do the former, and I'm unsure how to do the latter. I've done some messing around with an update function along the lines of:
def confuMat(self, y):
    x = T.vector('x')
    classes = T.scalar('n_classes')
    onehot = T.eq(x.dimshuffle(0, 'x'), T.arange(classes).dimshuffle('x', 0))
    oneHot = theano.function([x, classes], onehot)
    yMat = T.matrix('y')
    yPredMat = T.matrix('y_pred')
    confMat = T.dot(yMat.T, yPredMat)
    confusionMatrix = theano.function(inputs=[yMat, yPredMat], outputs=confMat)

    def confusion_matrix(x, y, n_class):
        return confusionMatrix(oneHot(x, n_class), oneHot(y, n_class))

    t = np.asarray(confusion_matrix(y, self.y_pred, self.n_out))
    print(t)
But I'm not completely clear on how to get this to interface with the function in question and give me a numpy array I can work with.
I'm quite new to Theano, so hopefully this is an easy fix for one of you. I'd like to use this classifier as my output layer in a number of configurations, so I could use the confusion matrix with other architectures.
I suggest using a brute-force sort of way. You need an output for the predictions first; create a function for it.
prediction = theano.function(
    inputs=[index],
    outputs=MLPlayers.predicts,
    givens={
        x: test_set_x[index * batch_size: (index + 1) * batch_size]})
In your test loop, gather the predictions...
labels = labels + test_set_y.eval().tolist()
for mini_batch in xrange(n_test_batches):
    wrong = wrong + int(test_model(mini_batch))
    predictions = predictions + prediction(mini_batch).tolist()
Now create the confusion matrix this way:
correct = 0
confusion = numpy.zeros((outs, outs), dtype=int)
for index in xrange(len(predictions)):
    # use == for value comparison; 'is' checks object identity and is unreliable here
    if labels[index] == predictions[index]:
        correct = correct + 1
    confusion[int(predictions[index]), int(labels[index])] = confusion[int(predictions[index]), int(labels[index])] + 1
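As an aside (my addition, not from the original answer), the same tally can be computed vectorized with numpy:
import numpy
predictions = numpy.asarray(predictions, dtype=int)
labels = numpy.asarray(labels, dtype=int)
correct = int((predictions == labels).sum())
confusion = numpy.zeros((outs, outs), dtype=int)
# accumulate a count at each (predicted, actual) index pair
numpy.add.at(confusion, (predictions, labels), 1)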
You can find this kind of implementation in this repository.
