I have a CPLEX model with one set of binary variables (x_i). Now I have two questions regarding its CPLEX solutions (I put them in one post because they are related).
First: for my model I get 26 solutions, but I know that in reality there are many more. How are solutions generated in CPLEX? Is there any way to increase the number of solutions?
Second: I want to access all of the solutions with a solution pool, but when I try to print them, it prints every existing variable with its value (obviously I just need the variables that are equal to 1).
This is my code for the solution pool:
def generate_soln_pool(mdl):
    cpx = mdl.get_cplex()
    cpx.solnpoolintensity = 4
    cpx.solnpoolagap = 0
    cpx.populatelim = 100000
    try:
        cpx.populate_solution_pool()
    except CplexSolverError:
        print("Exception raised during populate")
        return []
    numsol = cpx.solution.pool.get_num()
    print(numsol)
    nb_vars = mdl.number_of_variables
    sol_pool = []
    for i in range(numsol):
        x_i = cpx.solution.pool.get_values(i)
        assert len(x_i) == nb_vars
        sol = mdl.new_solution()
        for k in range(nb_vars):
            vk = mdl.get_var_by_index(k)
            sol.add_var_value(vk, x_i[k])
        sol_pool.append(sol)
    return sol_pool
bm = CModel()
pool = generate_soln_pool(bm)
for s, sol in enumerate(pool, start=1):
    print(" this is solution #{0} of the pool".format(s))
    sol.display()
This is a part of my output:
x_0 = 0
x_1 = 0
x_2 = 0
x_3 = 0
x_4 = 0
x_5 = 0
x_6 = 0
x_7 = 0
x_8 = 0
x_9 = 0
x_10 = 0
x_11 = 1
x_12 = 0
x_13 = 0
.
.
.
I guess you took the parameter settings from the example in the documentation? Those parameters make CPLEX enumerate all optimal solutions. If you want all solutions, you have to set the solution pool gap to a very large value.
CPLEX has many ways to generate solutions, but roughly it follows the standard branch and bound scheme augmented by heuristics.
Of course, a solution has a value for every variable. If you only want certain variables, you can use the various filtering and comprehension tools that Python provides. For example, to get the indices of the binary variables that are 1 in the solution, you can do something like this:
indices = [j for j, a in enumerate(cpx.solution.pool.get_values(i)) if a > 0.5]
EDIT: After seeing and running the code, we found what the issue is:
The code only sets the absolute gap parameter; it should set the relative gap parameter as well.
The code sets parameters like cpx.solnpoolintensity = 4. This is not the correct way to set parameters: the statement just creates a new attribute on the object that is ignored by the rest of the code.
The correct way to set up parameters for enumerating (up to) 4000 solutions is
cpx.parameters.mip.pool.intensity.set(4)
cpx.parameters.mip.pool.absgap.set(1e75)
cpx.parameters.mip.pool.relgap.set(1e75)
cpx.parameters.mip.limits.populate.set(4000)
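Putting the two fixes together with the filtering from above, here is a minimal sketch of how the function could look (same docplex objects as in the question; returning plain lists of the variables at 1, instead of full solution objects, is just an illustrative choice):
def generate_soln_pool(mdl):
    cpx = mdl.get_cplex()
    # parameters must go through cpx.parameters, not be set as new attributes
    cpx.parameters.mip.pool.intensity.set(4)
    cpx.parameters.mip.pool.absgap.set(1e75)
    cpx.parameters.mip.pool.relgap.set(1e75)
    cpx.parameters.mip.limits.populate.set(4000)
    try:
        cpx.populate_solution_pool()
    except CplexSolverError:
        print("Exception raised during populate")
        return []
    sol_pool = []
    for i in range(cpx.solution.pool.get_num()):
        values = cpx.solution.pool.get_values(i)
        # keep only the variables whose value is (numerically) 1
        ones = [mdl.get_var_by_index(j) for j, a in enumerate(values) if a > 0.5]
        sol_pool.append(ones)
    return sol_pool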
I have code that reads data and grabs specific data from an object's fields.
How can I eliminate the quadruple for loop here? Its performance seems quite slow.
data = readnek(filename) # read in data
bigNum = 200000
for myNodeVal in range(0, 7): # all 6 elements.
    cs_coords = np.ones((bigNum, 2)) # initialize data
    counter = 0
    for iel in range(bigNum):
        for ix in range(0,7):
            for iy in range(0,7):
                z = data.elem[iel].pos[2, myNodeVal, iy, ix]
                x = data.elem[iel].pos[0, myNodeVal, iy, ix]
                y = data.elem[iel].pos[1, myNodeVal, iy, ix]
                cs_coords[counter, 0:2] = [x, y]
                counter += 1
You can remove the two innermost loops by using a transposed view that is reshaped so as to build a block of 49 [x, y] values, which is then assigned to cs_coords in a vectorized way. The access to z can also be removed for better performance (since the Python interpreter optimizes almost nothing away). Here is an (untested) example:
data = readnek(filename) # read in data
bigNum = 200000
for myNodeVal in range(0, 7): # all 6 elements.
    cs_coords = np.ones((bigNum, 2)) # initialize data
    counter = 0
    for iel in range(bigNum):
        arr = data.elem[iel].pos
        view_x = arr[0, myNodeVal, 0:7, 0:7].T
        view_y = arr[1, myNodeVal, 0:7, 0:7].T
        cs_coords[counter:counter+49] = np.hstack([view_x.reshape(-1, 1), view_y.reshape(-1, 1)])
        counter += 49
Note that the initial code is probably flawed, since cs_coords.shape[0] is bigNum while counter ends up at bigNum * 49. You certainly need to use the shape (bigNum*49, 2) instead so as to avoid out-of-bounds errors.
Note that the above code is still far from optimal, since it creates many small arrays, and Numpy is not optimized to deal with very small arrays (neither is CPython). It is hard to do much better without more information on data. Using Numba or Cython can certainly help a lot to speed up this code. Still, even with such tools, the code will not be very efficient, since the memory access pattern is inefficient (bad cache locality) and the overall code will be memory-bound.
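For example, if every data.elem[iel].pos happens to have the same shape and dtype (an assumption about the readnek output, not something stated in the question), one could pay the stacking cost once and then drop the per-element loop entirely:
import numpy as np

# Hypothetical: gather all per-element pos arrays into one (bigNum, 3, 7, 7, 7)-like array.
pos = np.stack([data.elem[iel].pos for iel in range(bigNum)])

for myNodeVal in range(0, 7):
    # x and y for all elements at once, transposed per element exactly as above
    x = pos[:, 0, myNodeVal, :, :].transpose(0, 2, 1).reshape(-1, 1)
    y = pos[:, 1, myNodeVal, :, :].transpose(0, 2, 1).reshape(-1, 1)
    cs_coords = np.hstack([x, y])  # shape (bigNum * 49, 2)
The stacking itself still loops in Python, but it happens once rather than once per myNodeVal, and the per-node work becomes pure Numpy.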
In the code supplied below I am trying to iterate over a 2D numpy array [i][k].
Originally this was code written in Fortran 77, which is older than my grandfather, and I am trying to adapt it to Python.
(For those interested: it is a simple hydraulic transient event solver.)
Bear in mind that all variables are defined in parts of my code that I don't paste here.
H = np.zeros((NS,50))
Q = np.zeros((NS,50))
Here I am assigning the first row values:
for i in range(NS):
    H[0][i] = HR-i*R*Q0**2
    Q[0][i] = Q0
CVP = .5*Q0**2/H[N]
T = 0
k = 0
TAU = 1
#Interior points:
HP = np.zeros((NS,50))
QP = np.zeros((NS,50))
while T<=Tmax:
    T += dt
    k += 1
    for i in range(1,N):
        CP = H[k][i-1]+Q[k][i-1]*(B-R*abs(Q[k][i-1]))
        CM = H[k][i+1]-Q[k][i+1]*(B-R*abs(Q[k][i+1]))
        HP[k][i-1] = 0.5*(CP+CM)
        QP[k][i-1] = (HP[k][i-1]-CM)/B
    #Boundary Conditions:
    HP[k][0] = HR
    QP[k][0] = Q[k][1]+(HP[k][0]-H[k][1]-R*Q[k][1]*abs(Q[k][1]))/B
    if T == Tc:
        TAU = 0
        CV = 0
    else:
        TAU = (1.-T/Tc)**Em
        CV = CVP*TAU**2
    CP = H[k][N-1]+Q[k][N-1]*(B-R*abs(Q[k][N-1]))
    QP[k][N] = -CV*B+np.sqrt(CV**2*(B**2)+2*CV*CP)
    HP[k][N] = CP-B*QP[k][N]
    for i in range(NS):
        H[k][i] = HP[k][i]
        Q[k][i] = QP[k][i]
Remember that i is for rows and k is for columns.
What I am expecting is that, for all k columns, the values should be calculated until the T<=Tmax condition is met. I cannot figure out what my mistake is. I am getting the following errors:
RuntimeWarning: divide by zero encountered in true_divide
CVP = .5*Q0**2/H[N]
RuntimeWarning: invalid value encountered in multiply
QP[N][k] = -CV*B+np.sqrt(CV**2*(B**2)+2*CV*CP)
QP[N][k] = -CV*B+np.sqrt(CV**2*(B**2)+2*CV*CP)
ValueError: setting an array element with a sequence.
Looking at your first iteration:
H = np.zeros((NS,50))
Q = np.zeros((NS,50))
for i in range(NS):
    H[0][i] = HR-i*R*Q0**2
    Q[0][i] = Q0
The shape of H is (NS,50), but when you iterate over range(NS) you apply that index to the 2nd dimension. Why? Shouldn't it apply to the dimension with size NS?
In numpy, arrays have 'C' order by default: the last dimension is the innermost. They can have 'F' (Fortran) order, but let's not go there. Thinking of a 2d array as a table, we typically talk of rows and columns, though they don't have a formal definition in numpy.
Let's assume you want to set the first column to these values:
for i in range(NS):
    H[i, 0] = HR - i*R*Q0**2
    Q[i, 0] = Q0
But we can do the assignment a whole row or column at a time. I believe newer versions of Fortran also have these 'whole-array' operations.
Q[:, 0] = Q0
H[:, 0] = HR - np.arange(NS) * R * Q0**2
One point of caution when translating to Python: indexing starts with 0, and so do range and np.arange(...).
H[0][i] is functionally the same as H[0,i]. But when using slices you have to use the H[:,i] format.
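A quick illustration of the difference, with a small throwaway array:
import numpy as np

H = np.zeros((3, 4))
H[:, 0] = [1, 2, 3]   # sets the first column
print(H[:, 0])        # [1. 2. 3.]
print(H[:][0])        # [1. 0. 0. 0.] -- H[:] is just the whole array, so [0] picks the first row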
I suspect your other iterations have similar problems, but I'll stop here for now.
Regarding the errors:
The first:
RuntimeWarning: divide by zero encountered in true_divide
CVP = .5*Q0**2/H[N]
You initialize H as zeros so it is normal that it complains of division by zero. Maybe you should add a conditional.
The third:
QP[N][k] = -CV*B+np.sqrt(CV**2*(B**2)+2*CV*CP)
ValueError: setting an array element with a sequence.
You define CVP = .5*Q0**2/H[N], and since H[N] is a whole row of H, CVP is an array; CV = CVP*TAU**2 is therefore an array as well. You then try to assign a value derived from it to QP[N][k], which is a single element. You are trying to assign an array to a scalar slot.
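A minimal reproduction of that error with throwaway shapes (it also triggers the divide-by-zero warning, since the row is all zeros):
import numpy as np

H = np.zeros((5, 50))
CVP = .5 * 1.0**2 / H[3]   # H[3] is a whole row, so CVP is an array of 50 values (plus a RuntimeWarning)
QP = np.zeros((5, 50))
QP[4][0] = CVP             # ValueError: setting an array element with a sequence.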
For the second error I think it might be related to the third. If you could provide more information I would like to try to understand what happens.
Hope this has helped.
I have a function that I am attempting to minimize for multiple values. For some values it terminates successfully, but for others it gives this error:
Warning: Maximum number of function evaluations has been exceeded.
I am unsure of the role of maxiter and maxfun and how to increase or decrease these in order to successfully reach the minimum. My understanding is that these values are optional, so I am unsure of what the default values are.
# create starting parameters, parameters equal to sin(x)
a = 1
k = 0
h = 0
wave_params = [a, k, h]
def wave_func(func_params):
    """This function calculates the difference between a sine wave (sin(x)) and raw_data (a different sine wave).
    This is the function that will be minimized by modulating the a, b, k, and h parameters in order to minimize
    the difference between the curves."""
    a = func_params[0]
    b = 1
    k = func_params[1]
    h = func_params[2]
    y_wave = a * np.sin((x_vals-h)/b) + k
    error = np.sum((y_wave - raw_data) * (y_wave - raw_data))
    return error
wave_optimized = scipy.optimize.fmin(wave_func, wave_params)
You can try using scipy.optimize.minimize with method='Nelder-Mead' (see https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html and https://docs.scipy.org/doc/scipy/reference/optimize.minimize-neldermead.html#optimize-minimize-neldermead).
Then you can just do
minimum = scipy.optimize.minimize(wave_func, wave_params, method='Nelder-Mead')
n_function_evaluations = minimum.nfev
n_iterations = minimum.nit
or you can customize the search algorithm like this:
minimum = scipy.optimize.minimize(
wave_func, wave_params, method='Nelder-Mead',
options={'maxiter': 10000, 'maxfev': 8000}
)
I don't know anything about fmin, but my guess is that it behaves extremely similarly.
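For completeness: according to the scipy docs, fmin itself also accepts maxiter and maxfun keyword arguments (both default to a multiple of the number of parameters), so if you prefer to stay with fmin, something along these lines should work:
import scipy.optimize

# raise the iteration and function-evaluation limits explicitly;
# full_output=1 additionally returns the counts that were actually used
result = scipy.optimize.fmin(wave_func, wave_params,
                             maxiter=10000, maxfun=8000,
                             full_output=1)
wave_optimized, best_error, n_iterations, n_function_evaluations, warnflag = result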
I've encountered some problems while trying to simulate a math question from school. I've tested the inner loop independently and the result is what I expect. I have no idea where the problem is or how to resolve it. It should be a simple bug.
This is the question:
There is a bag containing three red balls, four white balls and five black balls. Take one ball at a time. What is the probability that red is the first color to be completely collected?
And this is my code (the annotations were added for this post and are not in my actual code):
import random as rd
y = 1000  # total try
succ = 0  # success times
orgbg = ['r','r','r','w','w','w','w','b','b','b','b','b']  # original bag for each loop initialization
while (y >= 0):
    redball = 0
    blackball = 0
    whiteball = 0
    newbg = orgbg  # every bag for a single try
    while (redball < 3 and whiteball < 4 and blackball < 5):
        tknum = rd.randrange(0,len(newbg),1)
        tkball = newbg[tknum]
        if (tkball == 'r'):
            redball = redball + 1
        elif (tkball =='w'):
            whiteball = whiteball + 1
        else:
            blackball = blackball + 1
        del newbg[tknum]
    if (redball == 3):
        succ = succ + 1
    y = y - 1
print (succ)
This is what the error report says:
ValueError: empty range for randrange() (0,0, 0)
When I turn the code
tknum = rd.randrange(0,len(newbg),1)
into
tknum = rd.randrange(5,len(newbg),1)
the error report says:
ValueError: empty range for randrange() (5,5, 0)
I guess the initialization in the outer loop, newbg = orgbg, doesn't work out, but how can that happen?
Sorry for asking such a lengthy question. I'm a beginner and this is the first time I ask a question on StackOverflow. You can also give me some suggestions on my code style, my method, or the way I ask questions; next time I will do better. Hope you don't mind.
I think that your problem is indeed linked to the initialization in the outer loop, newbg = orgbg. To correct your code, you should replace this line with
newbg = deepcopy(orgbg)
and import the corresponding module at the start of your code:
from copy import deepcopy
The explanation of the bug is linked to the way Python handles memory when you write newbg = orgbg: this does not copy the list at all, it just binds a second name to the same list object, so every del newbg[tknum] also empties orgbg and the bag is never refilled for the next try. Any real copy fixes this; since the list only holds immutable strings, a shallow copy such as list(orgbg) would already be enough, but deepcopy works as well. It is better explained here: https://www.python-course.eu/deep_copy.php or What exactly is the difference between shallow copy, deepcopy and normal assignment operation?
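A tiny illustration of the aliasing with a toy bag:
orgbg = ['r', 'w', 'b']
newbg = orgbg           # no copy: both names refer to the same list
del newbg[0]
print(orgbg)            # ['w', 'b'] -- the "original" bag shrank too

newbg = list(orgbg)     # an independent copy
del newbg[0]
print(orgbg)            # ['w', 'b'] -- unchanged this time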
The code below generates two random integers within the range specified by argv, tests if the integers match, and starts again. At the end it prints some stats about the process.
I've noticed, though, that increasing the value of argv reduces the percentage of tested possibilities dramatically.
This seems counter-intuitive to me, so my question is: is this an error in the code, or are the numbers real? And if so, what am I not thinking about?
#!/usr/bin/python3
import sys
import random
x = int(sys.argv[1])
a = random.randint(0,x)
b = random.randint(0,x)
steps = 1
combos = x**2
while a != b:
    a = random.randint(0,x)
    b = random.randint(0,x)
    steps += 1
percent = (steps / combos) * 100
print()
print()
print('[{} ! {}]'.format(a,b), end=' ')
print('equality!'.upper())
print('steps'.upper(), steps)
print('possble combinations = {}'.format(combos))
print('explored {}% possibilitys'.format(percent))
Thanks
EDIT
For example:
./runscrypt.py 100000
will return something like:
[65697 ! 65697] EQUALITY!
STEPS 115867
possble combinations = 10000000000
explored 0.00115867% possibilitys
"explored 0.00115867% possibilitys" <-- This number is too low?
This experiment really follows a geometric distribution.
I.e.:
Let Y be the random variable for the number of iterations before a match is seen. Then Y is geometrically distributed with parameter roughly 1/x, the probability of generating two matching integers (strictly it is 1/(x+1), since randint(0, x) includes both endpoints, but for large x the difference is negligible).
The expected value is E[Y] = 1/p, where p is the mentioned probability (a standard property of the geometric distribution). So in your case the expected number of iterations is about 1/(1/x) = x.
The number of combinations is x^2.
So the expected percentage of explored possibilities is really x/(x^2) = 1/x.
As x approaches infinity, this number approaches 0.
In the case of x=100000, the expected percentage of explored possibilities = 1/100000 = 0.001% which is very close to your numerical result.
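You can also check this numerically with a quick simulation (x kept moderate so it runs in a few seconds; the exact expectation is x + 1 because randint is inclusive):
import random

x = 1000
trials = 2000
total_steps = 0
for _ in range(trials):
    steps = 1
    while random.randint(0, x) != random.randint(0, x):
        steps += 1
    total_steps += steps

mean_steps = total_steps / trials
print(mean_steps)                    # close to x + 1, i.e. about 1001
print(100 * mean_steps / x**2, '%')  # about 0.1%, i.e. roughly 1/x of the combinations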