MiniZinc error: model inconsistency detected - constraint-programming

I'm developing a MiniZinc model for the traveling repairman problem and I'm getting the error "model inconsistency detected". I'm not very experienced with modelling in MiniZinc and I don't understand the cause of the problem. I would be grateful for any hints on how to solve it.
Here is the model:
include "globals.mzn"; % for subcircuit and redundant_constraint
int:n; %number of nodes
int:v; %number of repairmen
set of int : F=1..v; %set of repairmen
set of int :custom=1..n; %set of customers
set of int :nodes=1..n+2*v; %set of nodes with the v start nodes and the v dummy end nodes
set of int :nStart=n+1..n+v; %set of start nodes
set of int :nEnd= n+v+1..n+2*v; %set of end nodes
array[1..n+1,1..n+1] of int:time;
array [nodes] of var F : vehicle;
array [nodes] of var int: arrivalTime;
array [nodes] of var nodes:succ; %successor
array[nodes] of var nodes:pred; %predecessor
constraint forall (i in nStart)(
arrivalTime[i]=0);
% successors of end nodes are start nodes
constraint forall(i in (n+v+1..n+2*v-1)) (
succ[i] = i-v+1
);
constraint succ[n+2*v] = n+1;
constraint redundant_constraint (
forall(i in nodes)
(succ[pred[i]]=i)
/\
forall (i in nodes)(
pred[succ[i]]=i)
);
%vehicle
constraint forall (i in nStart)(
vehicle[i]=i-n);
constraint forall(i in nEnd)(
vehicle[i]= i-n-v);
constraint forall (i in nodes)(
vehicle[succ[i]]=vehicle[i]);
constraint redundant_constraint (
forall (i in nodes)(
vehicle[pred[i]]=vehicle[i]));
%time constraints
array[nodes,nodes] of int:t=array2d(nodes,nodes,[
if i<= n /\ j<=n
then time[i+1,j+1]
elseif i>= n+1 /\ j<=n
then time [1,j+1]
elseif i<=n /\ j>=n+1
then time [i+1,1]
else time[1,1]
endif |i,j in nodes]);
constraint forall (i in nStart) (
arrivalTime [succ[i]] >= t[i,succ[i]] + arrivalTime[i]);
constraint forall (i in custom) (
t[i,succ[i]] + arrivalTime [i] <=arrivalTime [succ[i]]);
constraint subcircuit(succ);
var int: latency;
constraint latency = sum (i in nodes) (arrivalTime [i]);
solve minimize latency;

Related

The answer doesn't meet the constraints

I'm implementing a MILP for vehicle routing in OPL.
One important constraint is that each trip can be served at most once, by one vehicle (each trip has a profit for serving it).
So I'd like OPL to maximize total profit and show me which trips are served.
However, the result shows that certain trips are served by all the vehicles.
So I think there is something wrong with the code.
I've tried different ways to write my constraints (e.g. removing "k in K" from the forall bracket), but the results are the same.
[Model file]
{int}J=...; // trip request
{int}K=...; // Car
/* parameters */
float Pj[J]=...; // profit earned by fulfilling modified rental request j
/* decision variables (not shown in the original post; assumed binary) */
dvar boolean x[J][K]; // x[j][k] = 1 if trip j is served by car k
/* expression of decision variables */
dexpr float profit = sum(j in J, k in K) Pj[j]*x[j][k];
/* objective function */
maximize profit;
/* constraints */
constraint ct1 [J][K];
subject to {
forall (j in J, k in K)
ct1 [j][k]: sum(i in J) x[i][k] <= 1;
}
// end subject to.
/* +++ PRINT OUTPUT +++ PRINT OUTPUT +++ PRINT OUTPUT +++ */
execute printOutput {
  writeln();
  write("profit= ", profit);
  writeln();
  // number of trips j served
  write("number of trips served = ");
  for (var j in J) {
    for (var k in K) {
      write(x[j][k], "\t");
    }
  }
}
I guess the constraint ct1 is relaxed since it is soft; CPLEX will relax soft constraints if needed.
If you turn
ct1 [j][k]: sum(i in J) x[i][k] <= 1;
into
sum(i in J) x[i][k] <= 1;
then the constraint will become hard.

Is it possible to define a random limit for a loop in JAGS?

I am trying to implement a Weibull proportional hazards model with a cure fraction following the approach outlined by Chen, Ibrahim and Sinha (1999) - A New Bayesian Model for Survival Data with a Surviving Fraction. However, I am not sure whether it is possible to define a random limit for a loop in JAGS.
library(R2OpenBUGS)
library(rjags)
set.seed(1234)
censored <- c(1, 1)
time_mod <- c(NA, NA)
time_cens <- c(5, 7)
tau <- 4
design_matrix <- rbind(c(1, 0, 0, 0), c(1, 0.2, 0.2, 0.04))
jfun <- function() {
  for (i in 1:nobs) {
    censored[i] ~ dinterval(time_mod[i], time_cens[i])
    time_mod[i] <- ifelse(N[i] == 0, tau, min(Z))
    for (k in 1:N[i]) {
      Z[k] ~ dweib(1, 1)
    }
    N[i] ~ dpois(fc[i])
    fc[i] <- exp(inprod(design_matrix[i, ], beta))
  }
  beta[1] ~ dnorm(0, 10)
  beta[2] ~ dnorm(0, 10)
  beta[3] ~ dnorm(0, 10)
  beta[4] ~ dnorm(0, 10)
}
inits <- function() {
  time_init <- rep(NA, length(time_mod))
  time_init[which(!status)] <- time_cens[which(!status)] + 1
  out <- list(beta = rnorm(4, 0, 10),
              time_mod = time_init,
              N = rpois(length(time_mod), 5))
  return(out)
}
data_base <- list('time_mod' = time_mod, 'time_cens' = time_cens,
'censored' = censored, 'design_matrix' = design_matrix,
'tau' = tau,
'nobs' = length(time_cens[!is.na(time_cens)]))
tc1 <- textConnection("jmod", "w")
write.model(jfun, tc1)
close(tc1)
# Calling JAGS
tc2 <- textConnection(jmod)
j <- jags.model(tc2,
data = data_base,
inits = inits(),
n.chains = 1,
n.adapt = 1000)
I observed the below error:
Error in jags.model(tc2, data = data_base, inits = inits(), n.chains = 1, :
RUNTIME ERROR:
Compilation error on line 6.
Unknown variable N
Either supply values for this variable with the data
or define it on the left hand side of a relation.
I am not entirely certain, but I am pretty sure that you cannot declare a random number of nodes in BUGS in general, so it would not be a JAGS-specific quirk.
Nevertheless, you can work around that.
Since BUGS is a declarative language instead of a procedural one, it is enough to declare an arbitrary but deterministic number of nodes (let's say "large enough") and then associate only a random number of them with a distribution and with observed data, leaving the remaining nodes deterministic.
Once you have observed the maximum value of N[i] (let's say N.max), you can pass it as a parameter to JAGS and then change this code of yours:
for (k in 1:N[i]){
Z[k] ~ dweib(1, 1)
}
into this:
for (k in 1:N.max){
  if (k <= N[i]){
    Z[k] ~ dweib(1, 1)
  } else {
    Z[k] <- 0
  }
}
I hope this will do the trick in your case, so please give feedback later about it.
Needless to say, if you have some non-zero observed data associated with a deterministic Z[k], then all hell breaks loose inside JAGS...

Efficient predicate for palindrome in MiniZinc

To help me learn MiniZinc, I am trying to solve an easy problem. My code finds an answer, but I am surprised that it takes about 10 seconds to run for such an easy problem.
The problem is: "What is the smallest palindromic integer > 10, such that the sum of its digits is > 10 and palindromic too?"
I want the code to make only broad assumptions: the answer has at most 8 digits.
My code is as follows (the toNum predicate comes from hakank's website):
predicate toNum(array[int] of var int: a, var int: n, int: base) =
let { int: len = length(a) }
in
n = sum(i in 1..len) (
ceil(pow(int2float(base), int2float(len-i))) * a[i]
)
/\ forall(i in 1..len) (a[i] >= 0 /\ a[i] < base)
;
predicate toNum10(array[int] of var 0..9: a, var int: n) = toNum(a, n, 10);
predicate palindrome_array(array[int] of var int: t) =
let { int: l = length(t), var 1..l: d } in (
forall(j in 1..d-1) (t[j] = 0) /\
t[d] != 0 /\
forall(j in d..(l+d-1) div 2) (t[j] = t[l+d-j])
)
;
predicate palindrome_int(var int: n) =
let { int: size = ceil(log10(int2float(ub(n))))+1,
array[1..size] of var 0..9: digits } in (
toNum10(digits, n) /\
palindrome_array(digits)
)
;
var int: n;
array[1..8] of var 0..9: t;
constraint toNum10(t, n);
constraint palindrome_int(n);
constraint n>10;
var int: s = sum(t);
constraint palindrome_int(s);
constraint s>10;
constraint alldifferent([n, s]);
solve minimize n;
The complete version has the following additional constraints:
var int: s2 = sum(i in 1..8) (t[i]*t[i]);
constraint palindrome_int(s2);
constraint s2 > 10;
var int: s3 = sum(i in 1..8) (t[i]*t[i]*t[i]);
constraint palindrome_int(s3);
constraint s3 > 10;
constraint alldifferent([n, s, s2, s3]);
What's wrong/slow with my code?
Try to replace "solve minimize n;" with the following labeling strategy:
solve :: int_search(t, first_fail, indomain_min, complete) minimize n;
On my machine, it then takes < 0.1s.

How to calculate a geometric cross field inside an arbitrary polygon?

I'm having troubles finding a way to calculate a "cross-field" inside an arbitrary polygon.
A cross field, as defined by one paper, is the smoothest field that is tangential to the domain boundary (in this case the polygon). I come across it a lot in quad re-topology papers, but surprisingly I can't even find a definition of a cross field on Wikipedia.
I have images but since I'm new here the system said I need at least 10 reputation points to upload images.
Any ideas?
I think it could be something along the lines of an interpolation: given an inner point, determine the distance to each edge and integrate, or weight-sum, the tangent and perpendicular vector of every edge by that distance (or any other factor, in fact)?
But other simpler approaches may exist?
Thanks in advance!
// I've come up with something like this (for the 3D case); very raw, for educational purposes
float distance2segment(Vector3D p, Vector3D p0, Vector3D p1){
    Vector3D v = p1 - p0;
    Vector3D w = p - p0;
    float c1 = v.Dot(w);
    if (c1 <= 0)
        return (p - p0).Length(); // closest to endpoint p0
    float c2 = v.Dot(v);
    if (c2 <= c1)
        return (p - p1).Length(); // closest to endpoint p1
    float b = c1 / c2;
    Vector3D pb = p0 + b*v;       // projection onto the segment
    return (p - pb).Length();
}
void CrossFieldInterpolation(List<Vector3D>& Contour, List<Vector3D>& ContourN, Vector3D p, Vector3D& crossU, Vector3D& crossV){
    // crossU and crossV are accumulated into, so they should start at zero
    int N = Contour.Amount();
    for (int i = 0; i < N; i++){
        Vector3D u = Contour[(i + 1) % N] - Contour[i];
        Vector3D n = 0.5*(ContourN[(i + 1) % N] + ContourN[i]); // averaged edge normal
        Vector3D v = -Vector3D::Cross(u, n); // perpendicular vector
        u = Vector3D::Normalize(u);
        n = Vector3D::Normalize(n);
        v = Vector3D::Normalize(v);
        float dist = distance2segment(p, Contour[i], Contour[(i+1) % N]);
        crossU += u / (1 + dist); // 1+dist avoids infinity at points on the segment
        crossV += v / (1 + dist);
    }
    crossU = Vector3D::Normalize(crossU);
    crossV = Vector3D::Normalize(crossV);
}
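For reference, a minimal sketch of how the routine above could be driven; it relies on the same hypothetical Vector3D / List API used in the code above (including a three-argument Vector3D constructor, which is my own assumption), so read it as an illustration of the calling convention rather than working code:
// Usage sketch only; assumes the hypothetical Vector3D / List API from the question.
void EvaluateCrossFieldAt(List<Vector3D>& Contour, List<Vector3D>& ContourN, Vector3D p){
    Vector3D crossU(0, 0, 0); // must start at zero: CrossFieldInterpolation only accumulates
    Vector3D crossV(0, 0, 0);
    CrossFieldInterpolation(Contour, ContourN, p, crossU, crossV);
    // crossU / crossV now hold the two normalized cross directions at p
}
The only real point is that crossU and crossV have to be zeroed before the call, since the function accumulates into them.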
You can check the open-source Graphite software that I'm developing; it implements the "Periodic Global Parameterization" algorithm [1] that was developed in my research team. You may also be interested in the following research articles, with algorithms that we developed more recently [2], [3].
Graphite website:
http://alice.loria.fr/software/graphite
How to use Periodic Global Parameterization:
http://alice.loria.fr/WIKI/index.php/Graphite/PGP
[1] http://alice.loria.fr/index.php/publications.html?Paper=TOG_pgp%402006
[2] http://alice.loria.fr/index.php/publications.html?Paper=DGF#2008
[3] http://alice.loria.fr/index.php/publications.html?redirect=0&Paper=DFD#2008&Author=vallet

Calculating the sum of nodes in a single vertical line of a binary tree

For a binary tree I want to get the sum of all nodes that fall in a single vertical line. I want the sum of nodes in each vertical line.
          A
         / \
        B   C
       / \ / \
      D  E F  G
     / \
    H   I
If you look at the above tree:
line 0: A E F, so sum = A + E + F
line -1: B I, so sum = B + I
line 1: C, so sum = C
line 2: G, so sum = G
I implemented the following algorithm:
Map<Integer, Integer> mp = new HashMap<Integer, Integer>();
calculate(root, 0);

void calculate(Node node, int pos){
    if(node == null)
        return;
    if(mp.containsKey(pos)){
        int val = mp.get(pos) + node.data;
        mp.put(pos, val);
    }
    else{
        mp.put(pos, node.data);
    }
    calculate(node.left, pos - 1);
    calculate(node.right, pos + 1);
}
I think the above algorithm is fine. Can anyone confirm?
Also, how can I do it without using HashMap, ArrayList or any such collection datatype of Java? One method is to use 2 arrays, one for storing negative indexes (mapped to positive) and one for positive indexes (right side of root), but we don't know what the size of the arrays will be.
One approach is to use a doubly linked list and add a node on a right/left movement if necessary. I'm not getting how I can implement this approach. Any other simple/more time-efficient approach?
Is the complexity of the above code I implemented O(n)? (I am not good at analysing time complexity, so asking.)
C++ code
int vertsum(Node* n, int cur_level, int target_level)
{
    if (!n)
        return 0;
    int sum = 0;
    if (cur_level == target_level)
        sum = n->value;
    return sum +
           vertsum(n->left, cur_level - 1, target_level) +
           vertsum(n->right, cur_level + 1, target_level);
}
invocation example:
vertsum(root, 0, 1);
EDIT:
After clarifying the requirements, here is the suggested code. Note that this is C++'ish and not exactly using Java's or C++'s standard API for lists, but you should get the idea. I assume that addNodeBefore and addNodeAfter initialize the node's data (i.e. ListNode::counter).
void vertsum(TreeNode* n, int level, ListNode& counter)
{
    if (!n)
        return;
    counter.value += n->value;
    counter.index = level;
    if (! counter.prev)
        addNodeBefore(counter);
    vertsum(n->left, level - 1, counter.prev);
    if (! counter.next)
        addNodeAfter(counter);
    vertsum(n->right, level + 1, counter.next);
    return;
}
You could visit the binary tree in depth-first postorder, and use an offset to keep track of how far you moved to the left/right with respect to your starting node. Every time you move to the left, you decrement the offset, and every time you move to the right you increment the offset. If your visit procedure is called with an offset of 0, then it means that the node being visited has the same offset of your starting node (i.e. it's in the same column), and so you must add its value.
Pseudocode:
procedure visit (node n, int offset) {
    sumleft = 0
    sumright = 0
    if (n.left != null)
        sumleft = visit(n.left, offset - 1)
    if (n.right != null)
        sumright = visit(n.right, offset + 1)
    if (offset == 0)
        return n.value + sumleft + sumright
    else
        return sumleft + sumright
}
For example, if you call
visit(A, 0)
you will get the following calls:
visit(A, 0)        -> E.value + F.value + A.value
  visit(B, -1)     -> E.value
    visit(D, -2)   -> 0
      visit(H, -3) -> 0
      visit(I, -1) -> 0
    visit(E, 0)    -> E.value
  visit(C, +1)     -> F.value
    visit(F, 0)    -> F.value
    visit(G, +1)   -> 0
Another example, starting from node B:
visit(B, 0)
  visit(D, -1)
    visit(H, -2)
    visit(I, 0)  -> here we return I.value
  visit(E, +1)
When recursion goes back to the initial call visit(B, 0), we have sumleft = I.value and sumright = 0, so we return the final result B.value + I.value, as expected.
The complexity is O(n), because you visit all the nodes of the tree rooted at the starting node once.
After thinking about the above algorithm, I realized it has a limitation, which becomes evident when we consider a more complex tree (figure not shown here). In such a tree, visit(B, 0) would still return B.value + I.value, but this is not the expected result, because another node N deeper in the tree is also in the same column. The following algorithm should cope with this problem:
procedure visit(node n, int c, int t) {
    sumleft = 0;
    sumright = 0;
    if (n.left != null)
        sumleft = visit(n.left, c - 1, t)
    if (n.right != null)
        sumright = visit(n.right, c + 1, t)
    if (c == t)
        return n.value + sumleft + sumright;
    else
        return sumleft + sumright;
}
The idea is essentially the same, but we now have a parameter c which gives the current column, and a parameter t which is the target column. If we want the sum of the elements in B's column, then we can call visit(A, 0, -1); that is, we always start our visit from node A (the tree's root), which is at column 0, and our target is column -1. This gives visit(A, 0, -1) = B + I + N, as expected.
Complexity is always O(n), where n is the number of nodes in the tree, because we visit the entire tree with depth-first postorder, and we process each node only once.
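For a concrete reference, here is a small self-contained C++ sketch of this target-column variant; the Node struct, the columnSum name and the tiny tree in main are my own illustrative assumptions, not code from the question or the answers above:
#include <iostream>

struct Node {                      // minimal tree node, assumed for illustration
    int value;
    Node *left, *right;
    Node(int v) : value(v), left(nullptr), right(nullptr) {}
};

// Depth-first visit: c is the column of n, t is the target column.
// Returns the sum of the values of the nodes in column t of n's subtree.
int columnSum(const Node* n, int c, int t) {
    if (!n) return 0;
    int here = (c == t) ? n->value : 0;
    return here + columnSum(n->left,  c - 1, t)
                + columnSum(n->right, c + 1, t);
}

int main() {
    // Tiny example: root (column 0) with one child on each side.
    Node a(1), b(2), c(3);
    a.left = &b; a.right = &c;
    std::cout << columnSum(&a, 0, -1) << "\n";   // prints 2 (only b is in column -1)
}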
If we want to compute the sum of every column, we can use the following algorithm:
procedure visit(node n, int c) {
    S{c} += n.value
    if (n.left != null)
        visit(n.left, c - 1)
    if (n.right != null)
        visit(n.right, c + 1)
}
and call visit(A, 0) once, where A is the root node. Note that S{...} in the algorithm is a map whose keys are the column numbers (..., -2, -1, 0, 1, 2, ...) and whose values (at the end of the algorithm) are the sums of the values of the nodes in that column (S{1} will be the sum of the nodes in column 1). We can also use an array instead of a map, provided that we pay attention to the indexes (arrays have no negative indexes). The algorithm is still O(n), because we traverse the entire tree only once. However, in this case we need additional space to store the sums for all columns (the map, or the array). If I'm not mistaken, a binary tree of height h will have at most 2*h + 1 columns.
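Along the same lines, here is a minimal self-contained sketch of the per-column version; again the Node struct, the columnSums name and the example tree are my own illustrative assumptions. It uses std::map, i.e. exactly the kind of collection the original question wanted to avoid, so treat it as a baseline rather than a final answer:
#include <iostream>
#include <map>

struct Node {                      // same minimal node type as in the previous sketch
    int value;
    Node *left, *right;
    Node(int v) : value(v), left(nullptr), right(nullptr) {}
};

// Accumulates the sum of every column into sums: key = column index, value = column sum.
void columnSums(const Node* n, int c, std::map<int, int>& sums) {
    if (!n) return;
    sums[c] += n->value;           // count the current node in its column
    columnSums(n->left,  c - 1, sums);
    columnSums(n->right, c + 1, sums);
}

int main() {
    Node a(1), b(2), c(3), d(4);
    a.left = &b; a.right = &c; b.right = &d;     // d ends up in the same column as a
    std::map<int, int> sums;
    columnSums(&a, 0, sums);
    for (const auto& kv : sums)
        std::cout << "column " << kv.first << ": " << kv.second << "\n";
    // prints: column -1: 2, column 0: 5, column 1: 3
}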
What about the following? (Inside your node class, assuming getData() returns the integer value, hasRight() and hasLeft() are boolean methods indicating whether a right/left branch exists, and getRight() and getLeft() return the next node in the right/left branch.)
public int calculateVerticalLine(int numLine) {
    return calculateVerticalLineRecursive(numLine, 0);
}

protected int calculateVerticalLineRecursive(int numLine, int curPosition) {
    int result = 0;
    if (numLine == curPosition) result += this.getData();
    if (hasRight()) result += getRight().calculateVerticalLineRecursive(numLine, curPosition + 1);
    if (hasLeft()) result += getLeft().calculateVerticalLineRecursive(numLine, curPosition - 1);
    return result;
}
public class Solution {
    public static int getSum(BinaryTreeNode<Integer> root) {
        // Recursively sums the data of every node in the tree.
        if (root == null) {
            return 0;
        }
        int leftnodesum = getSum(root.left);
        int rightnodesum = getSum(root.right);
        return root.data + leftnodesum + rightnodesum;
    }
}
