Add model information to the Gurobi log - python-3.x

I'm using Gurobi in Python. I'm iterating over a set of nodes, and at each iteration I add a constraint and re-solve. After solving, Gurobi produces a log like this:
Optimize a model with 6 rows, 36 columns and 41 nonzeros
Variable types: 0 continuous, 36 integer (36 binary)
Coefficient statistics:
Matrix range [1e+00, 1e+00]
Objective range [2e+01, 9e+01]
Bounds range [1e+00, 1e+00]
RHS range [2e+00, 2e+00]
MIP start did not produce a new incumbent solution
MIP start violates constraint R5 by 2.000000000
Found heuristic solution: objective 347.281
Presolve removed 2 rows and 21 columns
Presolve time: 0.00s
Presolved: 4 rows, 15 columns, 27 nonzeros
Found heuristic solution: objective 336.2791955
Variable types: 0 continuous, 15 integer (15 binary)
Root relaxation: objective 3.043757e+02, 6 iterations, 0.00 seconds
Nodes | Current Node | Objective Bounds | Work
Expl Unexpl | Obj Depth IntInf | Incumbent BestBd Gap | It/Node Time
* 0 0 0 304.3757488 304.37575 0.00% - 0s
Explored 0 nodes (6 simplex iterations) in 0.02 seconds
Thread count was 4 (of 4 available processors)
Solution count 3: 304.376 336.279 339.43
Optimal solution found (tolerance 1.00e-04)
Best objective 3.043757488224e+02, best bound 3.043757488224e+02, gap 0.0000%
But after a certain iteration, my answer is not what I'm expecting, so I want to print all my model details (objective function, constraints, etc.) to the Gurobi log at every iteration. How can I do that?
Calling model.write() only prints the objective function and the constraints that we coded explicitly:
Minimize
0 x(0,0) + 75.47184905645283 x(0,1) + 57.55866572463264 x(0,2)
+ 33.97057550292606 x(0,3) + 23.3238075793812 x(0,4)
+ 40.80441152620633 x(0,5) + 75.47184905645283 x(1,0) + 0 x(1,1)
+ 32.7566787083184 x(1,2) + 90.60905032059435 x(1,3)
+ 55.71355310873648 x(1,4) + 40.60788100849391 x(1,5)
+ 57.55866572463264 x(2,0) + 32.7566787083184 x(2,1) + 0 x(2,2)
+ 83.36066218546971 x(2,3) + 46.57252408878007 x(2,4)
+ 41.4004830889689 x(2,5) + 33.97057550292606 x(3,0)
+ 90.60905032059435 x(3,1) + 83.36066218546971 x(3,2) + 0 x(3,3)
+ 37.12142238654117 x(3,4) + 50.00999900019995 x(3,5)
+ 23.3238075793812 x(4,0) + 55.71355310873648 x(4,1)
+ 46.57252408878007 x(4,2) + 37.12142238654117 x(4,3) + 0 x(4,4)
+ 17.69180601295413 x(4,5) + 40.80441152620633 x(5,0)
+ 40.60788100849391 x(5,1) + 41.4004830889689 x(5,2)
+ 50.00999900019995 x(5,3) + 17.69180601295413 x(5,4) + 0 x(5,5)
Subject To
R0: x(0,1) + x(0,2) + x(0,3) + x(0,4) + x(0,5) >= 2
R1: x(1,0) + x(1,2) + x(1,3) + x(1,4) + x(1,5) >= 2
R2: x(1,0) + x(1,3) + x(1,4) + x(2,0) + x(2,3) + x(2,4) + x(5,0) +
x(5,3)+ x(5,4) >= 2
R3: x(3,0) + x(3,1) + x(3,2) + x(3,4) + x(3,5) >= 2
R4: x(0,1) + x(0,2) + x(0,5) + x(3,1) + x(3,2) + x(3,5) + x(4,1) +
x(4,2)+ x(4,5) >= 2
R5: x(0,1) + x(0,2) + x(3,1) + x(3,2) + x(4,1) + x(4,2) + x(5,1) +
x(5,2)>= 2
Bounds
Binaries
x(0,0) x(0,1) x(0,2) x(0,3) x(0,4) x(0,5) x(1,0) x(1,1) x(1,2) x(1,3)
x(1,4) x(1,5) x(2,0) x(2,1) x(2,2) x(2,3) x(2,4) x(2,5) x(3,0) x(3,1)
x(3,2) x(3,3) x(3,4) x(3,5) x(4,0) x(4,1) x(4,2) x(4,3) x(4,4) x(4,5)
x(5,0) x(5,1) x(5,2) x(5,3) x(5,4) x(5,5)
End
What I need is to know what is happening at each iteration. That's because one iteration gives me a wrong answer, so I want to check whether any redundant constraint is being added to the model during solving.
In other words, do Gurobi callbacks allow us to access all information that is available in the model? What will that produce?

In other words, do Gurobi callbacks allow us to access all information that is available in the model? What will that produce?
No, you cannot print constraints generated in a callback function.
Most likely, the issue is one of the following:
You are calling the wrong function inside the callback. There are two kinds of constraints you can add in a callback: lazy constraints and user cuts. Lazy constraints are required for correctness: every feasible solution must satisfy all of them. You use them when they are too numerous to add to the model up front, so you only add the ones that become violated. User cuts, by contrast, are not required for correctness, but they can cut off fractional solutions and tighten the LP relaxation of a MIP. In your case, it sounds like you have lazy constraints.
You are not adding all violated lazy constraints. As the documentation states: "Your callback should be prepared to cut off solutions that violate any of your lazy constraints, including those that have already been added." Do not track whether you have already added a lazy constraint; add it every time you see that it is violated. This is necessary because of the parallel processing in the Gurobi solver.
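Although Gurobi does not echo callback-added lazy constraints to its log, you can log them yourself at the moment you add them. A minimal, library-independent sketch of that pattern (the format_constraint helper is made up for illustration; with gurobipy you would call model.cbLazy(expr) right after printing):

```python
def format_constraint(coeffs, sense, rhs):
    """Render a linear constraint {var_name: coefficient} as a readable string."""
    lhs = " + ".join(f"{c:g} {v}" if c != 1 else v
                     for v, c in sorted(coeffs.items()))
    return f"{lhs} {sense} {rhs:g}"

# Inside the callback: build the violated constraint, log it, then add it.
# Here we only show the logging step, since it is what the question asks for.
cut = format_constraint({"x(0,1)": 1, "x(0,2)": 1, "x(3,1)": 1}, ">=", 2)
print(cut)  # x(0,1) + x(0,2) + x(3,1) >= 2
```

Writing each cut to your own log this way lets you diff the constraints added between iterations and spot any redundant ones.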

Related

Are floating point numbers really commutative?

It's said that floating point addition is commutative but not associative.
An example of it being non associative is the following:
(1 + 1e100) + -1e100 = 0, and 1 + (1e100 + -1e100) = 1
But doesn't this also prove that they are not commutative by the following:
1 + 1e100 + -1e100 = 0, and 1e100 + -1e100 + 1 = 1
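Note that swapping the two operands of a single addition never changes an IEEE-754 result; the two chained expressions above group differently under left-to-right evaluation, so they demonstrate non-associativity again, not non-commutativity. A quick check in Python (whose floats are IEEE-754 doubles):

```python
a, b, c = 1.0, 1e100, -1e100

# Commutativity: swapping the operands of one addition never changes the result.
assert a + b == b + a
assert (a + b) + c == c + (a + b)

# Associativity: regrouping the same three terms DOES change the result.
assert (a + b) + c == 0.0   # 1 is absorbed into 1e100 first
assert a + (b + c) == 1.0   # 1e100 cancels first

# "1 + 1e100 + -1e100" vs "1e100 + -1e100 + 1" parse left-to-right,
# i.e. as the two differently-grouped sums above, so no swap of a single
# addition's operands is involved.
```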

Formulating a binary sequence with shift in MILP

I would like to know if it's actually possible to encode a (binary) sequence with rotations in MILP/MIP.
Given a binary sequence (0,1,1,0,0,0,0,1) and variables x0,x1,x2,x3,x4,x5,x6,x7,
I want to restrict my MILP program such that it takes up one of the following:
(x0,x1,x2,x3,x4,x5,x6,x7) = (0,1,1,0,0,0,0,1) or
(x7,x0,x1,x2,x3,x4,x5,x6) = (0,1,1,0,0,0,0,1) or
...
(x1,x2,x3,x4,x5,x6,x7,x0) = (0,1,1,0,0,0,0,1)
I understand that the rotation can be easily solved by just extending the sequence. But I find myself creating multiple MILP instances, each instance corresponding to exactly one of the cases. If this is infeasible, why?
There are many approaches one could design and it's not really clear in what context you will use it.
Here is a relatively simple one:
A: Introduce n new binary variables: These describe the "root / first zero" decision
s_x0, s_x1, s_x2, s_x3, s_x4, s_x5, s_x6, s_x7
B: Add a simplex-constraint / make those add up to 1: We do want a unique root!
s_x0 + s_x1 + s_x2 + s_x3 + s_x4 + s_x5 + s_x6 + s_x7 = 1
C: Encode all implications for all possible roots which can be chosen
for: s_x0
logic-form | milp-form
s_x0 -> x0 (1-s_x0) + x0 >= 1
s_x0 -> x1 (1-s_x0) + x1 >= 1
s_x0 -> !x2 (1-s_x0) + (1-x2) >= 1
s_x0 -> !x3 (1-s_x0) + (1-x3) >= 1
s_x0 -> !x4 (1-s_x0) + (1-x4) >= 1
s_x0 -> !x5 (1-s_x0) + (1-x5) >= 1
s_x0 -> x6 (1-s_x0) + x6 >= 1
s_x0 -> !x7 (1-s_x0) + (1-x7) >= 1
for: s_x1
s_x1 -> !x0 (1-s_x1) + (1-x0) >= 1
s_x1 -> x1 (1-s_x1) + x1 >= 1
s_x1 -> x2 (1-s_x1) + x2 >= 1
s_x1 -> !x3 (1-s_x1) + (1-x3) >= 1
s_x1 -> !x4 (1-s_x1) + (1-x4) >= 1
s_x1 -> !x5 (1-s_x1) + (1-x5) >= 1
s_x1 -> !x6 (1-s_x1) + (1-x6) >= 1
s_x1 -> x7 (1-s_x1) + x7 >= 1
for ......
This approach:
- exploits the core structure of the problem: we need to choose between n different patterns and must enforce the effects of that choice
- will get big (at least for human consumption)
- is rather simple / easy to understand and implement
- but should also provide a nice LP relaxation
This (non-compact) formulation also exploits some strengths of MILP solvers (e.g. clique tables).
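The formulation can be sanity-checked by brute force. The sketch below is pure Python; it assumes root j means "x equals the pattern rotated by j" (the answer's exact root-to-rotation mapping may differ by an offset), enumerates all assignments of x and s, and confirms that the feasible x vectors are exactly the rotations of the pattern:

```python
from itertools import product

pattern = (0, 1, 1, 0, 0, 0, 0, 1)
n = len(pattern)
rotations = {tuple(pattern[j:] + pattern[:j]) for j in range(n)}

def satisfies(x, s):
    if sum(s) != 1:                              # B: exactly one root
        return False
    for j in range(n):                           # C: implications for root j
        target = pattern[j:] + pattern[:j]       # pattern rotated by j (assumed)
        for i in range(n):
            lit = x[i] if target[i] == 1 else 1 - x[i]
            if (1 - s[j]) + lit < 1:             # MILP form of s_j -> literal
                return False
    return True

feasible_x = {x for x in product((0, 1), repeat=n)
              for s in product((0, 1), repeat=n) if satisfies(x, s)}
assert feasible_x == rotations   # feasible solutions are exactly the rotations
```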

Calculate probability of an event not by exclusion

I have some doubts about this kind of problem. For example:
"If we asked 20,000 people in a stadium to toss a coin 10 times each, what is the probability of at least one person getting 10 heads?"
I took this example from Practical Statistics for Data Scientists.
The probability of at least one person getting 10 heads is calculated as: 1 - P(nobody in the stadium gets 10 heads).
So we are doing a kind of exclusion procedure here: first I compute the probability of the complementary event, not the ACTUAL event I want to measure (at least one person getting 10 heads).
Why do we do it this way?
How can I calculate the probability of at least one person getting 10 heads without going through the probability of no one getting 10 heads?
As @Robert Dodier mentioned in the comments, the reason is that the calculations are simpler. I will use a stadium of 20 people instead of 20,000 as an example:
Method 1:
Probability of not getting 10 heads for one individual
= 1 - probability of getting 10 heads
= 1 - 10!/(10!0!)*0.5^10*(1-0.5)^0
= 0.9990234375
Probability of at least one person in the stadium getting 10 heads
= 1 - P(of nobody in the stadium getting 10 heads)
= 1 - 0.9990234375**20 (because all coin tosses are independent)
= 0.019351109194852834
Method 2:
Probability of getting 10 heads for one individual
= 10!/(10!0!)*0.5^10*(1-0.5)^0
= 0.0009765625
Probability of exactly 1, 2, 3, etc. persons in the stadium getting 10 heads:
p1 = 20!/(1!19!)*0.0009765625^1*(1-0.0009765625)^(20-1) = 0.019172021325613825
p2 = 20!/(2!18!)*0.0009765625^2*(1-0.0009765625)^(20-2) = 0.00017803929872270904
p3 = 20!/(3!17!)*0.0009765625^3*(1-0.0009765625)^(20-3) = 1.0442187608370032e-06
p4 = 20!/(4!16!)*0.0009765625^4*(1-0.0009765625)^(20-4) = 4.338152232216289e-09
p5 = 20!/(5!15!)*0.0009765625^5*(1-0.0009765625)^(20-5) = 1.3569977656981548e-11
p6 = 20!/(6!14!)*0.0009765625^6*(1-0.0009765625)^(20-6) = 3.316221323798032e-14
p7 = 20!/(7!13!)*0.0009765625^7*(1-0.0009765625)^(20-7) = 6.483326146232712e-17
p8 = 20!/(8!12!)*0.0009765625^8*(1-0.0009765625)^(20-8) = 1.029853859983202e-19
p9 = 20!/(9!11!)*0.0009765625^9*(1-0.0009765625)^(20-9) = 1.342266353839299e-22
p10 = 20!/(10!10!)*0.0009765625^10*(1-0.0009765625)^(20-10) = 1.443297154665913e-25
p11 = 20!/(11!9!)*0.0009765625^11*(1-0.0009765625)^(20-11) = 1.2825887804726853e-28
p12 = 20!/(12!8!)*0.0009765625^12*(1-0.0009765625)^(20-12) = 9.403143551852531e-32
p13 = 20!/(13!7!)*0.0009765625^13*(1-0.0009765625)^(20-13) = 5.656451493707817e-35
p14 = 20!/(14!6!)*0.0009765625^14*(1-0.0009765625)^(20-14) = 2.7646390487330485e-38
p15 = 20!/(15!5!)*0.0009765625^15*(1-0.0009765625)^(20-15) = 1.0809927854283668e-41
p16 = 20!/(16!4!)*0.0009765625^16*(1-0.0009765625)^(20-16) = 3.3021529369146104e-45
p17 = 20!/(17!3!)*0.0009765625^17*(1-0.0009765625)^(20-17) = 7.59508466888531e-49
p18 = 20!/(18!2!)*0.0009765625^18*(1-0.0009765625)^(20-18) = 1.2373875315877011e-52
p19 = 20!/(19!1!)*0.0009765625^19*(1-0.0009765625)^(20-19) = 1.2732289258503896e-56
p20 = 20!/(20!0!)*0.0009765625^20*(1-0.0009765625)^(20-20) = 6.223015277861142e-61
Probability of at least one person in the stadium getting 10 heads
= p1 + p2 + p3 + p4 + p5 + p6 + p7 + p8 + p9 + p10 +
p11 + p12 + p13 + p14 + p15 + p16 + p17 + p18 + p19 + p20
= 0.01935110919485281
So the result is the same (the tiny difference is due to floating point precision), but as you can see the first calculation is slightly simpler for 20 people, never mind for 20000 ;)
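Both methods above can be reproduced in a few lines of Python with math.comb, which makes the agreement (and the difference in effort) easy to see:

```python
from math import comb

n_people = 20
p_heads = 0.5 ** 10          # probability one person gets 10 heads in 10 tosses

# Method 1: complement of "nobody gets 10 heads" (independent people)
method1 = 1 - (1 - p_heads) ** n_people

# Method 2: sum the binomial probabilities of exactly k people, k = 1..20
method2 = sum(comb(n_people, k) * p_heads**k * (1 - p_heads)**(n_people - k)
              for k in range(1, n_people + 1))

print(method1)  # 0.019351109194852834
assert abs(method1 - method2) < 1e-12
```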

Open Scene Graph - Usage of DrawElementsUInt: Drawing a cloth without duplicating vertices

I am currently working on simulating a cloth like material and then displaying the results via Open Scene Graph.
I've gotten the setup to display something cloth like, by just dumping all the vertices into 1 Vec3Array and then displaying them with a standard Point based DrawArrays. However I am looking into adding the faces between the vertices so that a further part of my application can visually see the cloth.
This is currently what I am attempting as for the PrimitiveSet
// create and add a DrawArray Primitive (see include/osg/Primitive). The first
// parameter passed to the DrawArrays constructor is the Primitive::Mode which
// in this case is POINTS (which has the same value GL_POINTS), the second
// parameter is the index position into the vertex array of the first point
// to draw, and the third parameter is the number of points to draw.
unsigned int k = CLOTH_SIZE_X;
unsigned int n = CLOTH_SIZE_Y;
osg::ref_ptr<osg::DrawElementsUInt> indices = new osg::DrawElementsUInt(GL_QUADS, (k) * (n));
for (uint y_i = 0; y_i < n - 1; y_i++) {
for (uint x_i = 0; x_i < k - 1; x_i++) {
(*indices)[y_i * k + x_i] = y_i * k + x_i;
(*indices)[y_i * (k + 1) + x_i] = y_i * (k + 1) + x_i;
(*indices)[y_i * (k + 1) + x_i + 1] = y_i * (k + 1) + x_i + 1;
(*indices)[y_i * k + x_i] = y_i * k + x_i + 1;
}
}
geom->addPrimitiveSet(indices.get());
This, however, causes memory corruption when running, and I am not fluent enough in assembly to decipher what it is doing wrong when CLion shows me the disassembled code.
My thought was that I would iterate over each of the faces of my cloth and then select the 4 indices of the vertices that belong to it. The vertices are inputted from top left to bottom right in order. So:
0 1 2 3 ... k-1
k k+1 k+2 k+3 ... 2k-1
2k 2k+1 2k+2 2k+3 ... 3k-1
...
Has anyone come across this specific use-case before and does he/she perhaps have a solution for my problem? Any help would be greatly appreciated.
You might want to look into using DrawArrays with QUAD_STRIP (or TRIANGLE_STRIP because quads are frowned upon these days). There's an example here:
http://openscenegraph.sourceforge.net/documentation/OpenSceneGraph/examples/osggeometry/osggeometry.cpp
It's slightly less efficient than Elements/indices, but it's also less complicated to manage the relationship between the two related containers (the vertices and the indices).
If you really want to do the Elements/indices route, we'd probably need to see more repro code to see what's going on.
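For what it's worth, the sizing in the question looks off: the container is constructed with k * n slots, but one quad per grid cell needs (k - 1) * (n - 1) * 4 indices, and the loop writes to slots computed with mixed k and k + 1 strides, which would explain the corruption. A small Python sketch of the intended row-major quad indexing (the helper name is made up; it mirrors what the C++ loop should produce):

```python
def quad_indices(k, n):
    """Four vertex indices per grid cell of a k-wide, n-tall point grid
    laid out row-major from top left to bottom right, one quad per cell."""
    out = []
    for y in range(n - 1):
        for x in range(k - 1):
            out += [y * k + x,            # top left
                    y * k + x + 1,        # top right
                    (y + 1) * k + x + 1,  # bottom right
                    (y + 1) * k + x]      # bottom left
    return out

idx = quad_indices(4, 3)
assert len(idx) == (4 - 1) * (3 - 1) * 4  # (k-1)*(n-1) quads, 4 indices each
assert max(idx) == 4 * 3 - 1              # every index stays inside the vertex array
```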

Why does compressing a SAS dataset increase its size?

I wrote code that creates a SAS dataset with the compress=yes option. However, the resulting dataset ends up larger after compression, as seen in the log:
1374 +proc sql;
1375 + create table seg.KRG_EO_PVS_CUST_PROD_&op_cyc.
1376 + (
1377 + COMPRESS = YES
1378 + ) as
1379 + select
1380 + W6DFFTE1.DIB_CUST_ID length = 8
1381 + format = 15.
1382 + informat = 15.
1383 + label = 'The logical customer id',
1384 + W6DFFTE1.DIB_PROD_ID length = 8
1385 + format = 15.
1386 + informat = 15.
1387 + label = 'The product id',
1388 + case when W5TM24S0.OFFER_FLAG = "1" then "1" else "0" end as OFFER_FLAG length = 1,
1389 + sum(W6DFFTE1.TOT_QUANTITY ) as TOT_QUANTITY length = 8
1390 + format = 10.
1391 + informat = 5.
1392 + label = 'Number of items purchased'
1393 + from
1394 + work.W6DFFTE1 left join
1395 + work.W5TM24S0
1396 + on
1397 + (
1398 + W5TM24S0.DIB_STORE_ID = W6DFFTE1.DIB_STORE_ID
1399 + and W5TM24S0.DIB_SCAN_ID = W6DFFTE1.DIB_SCAN_ID
1400 + )
1401 + group by
1402 + W6DFFTE1.DIB_CUST_ID,
1403 + W6DFFTE1.DIB_PROD_ID,
1404 + W5TM24S0.OFFER_FLAG
1405 + ;
NOTE: Compressing data set SEG.KRG_EO_PVS_CUST_PROD_20150701 increased size by 43.27 percent.
Compressed is 1961732 pages; un-compressed would require 1369265 pages.
NOTE: Table SEG.KRG_EO_PVS_CUST_PROD_20150701 created, with 346423801 rows and 4 columns.
I just want to know the probable reasons for this to happen.
SAS compression is pretty primitive: compress=yes just lets SAS save disk space by not writing the actual bytes of unused length in character variables. It looks like your data is three numeric variables plus a one-character variable. That gives the compressor very little to work with, and on top of that each compressed observation carries some bookkeeping overhead, which is why the file grows.
If you need to compress files for medium- or long-term storage, you're much better off using a separate zip or tar utility.
EDIT: I don't mean to disparage SAS compression. I believe the designers were more concerned with preserving relatively fast disk access than with providing actual zip-style compression.
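The effect is generic to any per-record compression scheme, not just SAS's run-length encoding: each compressed record carries fixed overhead, so short records with little redundancy can grow. A rough illustration with zlib (not SAS's algorithm, just the same phenomenon):

```python
import zlib

# A short record with no repeated bytes, loosely analogous to three 8-byte
# numerics plus a 1-byte flag: there is nothing for the compressor to exploit.
record = bytes(range(25))
compressed = zlib.compress(record)

# The compressed form is LARGER: header, checksum, and block overhead
# exceed any savings on incompressible data.
assert len(compressed) > len(record)
print(len(record), len(compressed))
```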
