Select matching rows in a table with MiniZinc - constraint-programming

I have a table of projects with 4 features (name, value, cost, people). I want to write a MiniZinc model that selects the projects maximizing the total value, subject to a total budget of 255, choosing at most 9 projects out of 20.
I wrote a data file, data.dzn:
% name   value  cost  people
data = [| PRJ0,  6000, 35, 5
        | PRJ1,  4000, 34, 3
        ...
        | PRJ20, 1200, 18, 2
        |];
PRJ = {PRJ0, ..., PRJ20};
FEATURE = {name, value, budget, personnel};
max_budget = 225;
max_prj = 9;
So my constraints, in pseudocode, are:
choose_project <= 9 /\ budget <= 255, s.t. solve maximize tot_value;
How can I select a non-arbitrary number (1 to 9) of decision variables among the project rows of the table?
Until now this is my code: invest.mzn
include "data.dzn";
int: max_budget; %255
int: max_prj; %9
enum FEATURE;
enum PRJ;
array[PRJ,FEATURE] of int: data;
constraint ...
...
solve maximize tot_value;

You can declare an array of Boolean decision variables, say selected_projects, that encodes whether or not each project PRJ_k is selected.
Then you can simply count how many projects in this array are selected at the same time.
Example:
enum FEATURE = {name, value, budget, personnel};
enum PRJ = {PRJ0, PRJ1};
array[PRJ, FEATURE] of int: data =
  [| PRJ0, 6000, 35, 5
   | PRJ1, 4000, 34, 3
   |];

array[PRJ] of var bool: selected_projects;
var int: tot_value;

% The total number of selected projects must be in [1, 9]
constraint let {
  var int: tot_selected = sum(prj in PRJ) ( selected_projects[prj] )
} in
  1 <= tot_selected /\ tot_selected <= 9;

constraint tot_value = sum(prj in PRJ) ( selected_projects[prj] * data[prj, value] );

% ...
% encoding of budget and personnel constraints
% ...

solve maximize tot_value;
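The same select-at-most-k-within-budget logic can be sketched outside MiniZinc as a brute-force check. This Python snippet is only an illustration of the constraints (the small project table and the `best_selection` helper are made up for the example, not part of the model):

```python
from itertools import combinations

# Hypothetical project table: name -> (value, cost)
projects = {"PRJ0": (6000, 35), "PRJ1": (4000, 34), "PRJ2": (1000, 26)}

def best_selection(projects, max_budget, max_prj):
    """Enumerate every subset of 1..max_prj projects and keep the
    feasible one (total cost <= max_budget) with the highest value."""
    best, best_value = (), -1
    names = list(projects)
    for k in range(1, max_prj + 1):
        for combo in combinations(names, k):
            cost = sum(projects[p][1] for p in combo)
            value = sum(projects[p][0] for p in combo)
            if cost <= max_budget and value > best_value:
                best, best_value = combo, value
    return best, best_value

print(best_selection(projects, max_budget=69, max_prj=2))
# → (('PRJ0', 'PRJ1'), 10000)
```

Brute force is exponential in the number of projects, which is exactly why a CP solver is the right tool for the 20-project case; the sketch just shows what the Boolean selection variables encode.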


Generate 5 Unique Random Numbers in min and max range and sort in Ascending Order in Groovy

I am new to this group and this is my first post.
Can someone please help me generate 5 unique numbers in a min/max range and then sort them in ascending order using Groovy?
Note: I am doing this in SoapUI.
import org.apache.commons.lang.RandomStringUtils
import java.util.Random

Random r = new Random()
def i
List random_num
for (i = 1; i <= 5; i++) {
    random_num(i) = r.nextInt(9 + 2)
    if (i > 0) {
        if (random_num[i] == random_num[i-1])
            random_num[i] = r.nextInt(9 + 2)
    }
}
random_num.sort()
log.info random_num
thank you
The part about generating a random number between a min and a max can be found
here.
For the other constraints:
Unique values: use a Set for your result
Ascending order: use a sorted Set (e.g. TreeSet)
E.g.:
import java.util.concurrent.ThreadLocalRandom

def n = 5
def min = 5
def max = 42
def result = new TreeSet()
while (result.size() < n) {
    result << ThreadLocalRandom.current().nextInt(min, max + 1)
}
println result
// e.g. [7, 8, 29, 37, 42]
Note: make sure max - min + 1 >= n, otherwise the loop can never terminate.
Your code is very Java-ish. Groovy was developed partly to eliminate a lot of the Java ceremony.
This is what I use to generate a random number:
def randNum(minSize, maxSize) {
    return Math.abs(new Random().nextInt(maxSize + 1 - minSize) + minSize)
}
You can call that five times, collect the results in a list, and sort the list:
(1..5).collect { randNum(minSize, maxSize) }.sort()
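For comparison only (the question is about Groovy/SoapUI, so this is just a cross-language aside): Python's standard library does the unique-sample-then-sort step in one call with random.sample, which guarantees distinct values and removes the retry loop entirely:

```python
import random

def unique_sorted_sample(n, lo, hi):
    """Draw n distinct integers from [lo, hi] and return them ascending.
    random.sample samples without replacement, so no duplicate check
    or retry loop is needed."""
    if hi - lo + 1 < n:
        raise ValueError("range too small for n unique values")
    return sorted(random.sample(range(lo, hi + 1), n))

print(unique_sorted_sample(5, 5, 42))
```

The same range-size precondition applies here as in the TreeSet answer: the interval must contain at least n values.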

How to dynamically set the size of Int?

The model below represents a portion of a model of the binary search algorithm.
One of the key steps in the binary search algorithm is to compute the middle between low and high. It is computed by adding low to high and then dividing by 2.
So, my model has a field low of type Int, high of type Int, and mid of type Int.
When the model is run, the default size of Int is a 4-bit signed integer, with a range of values between -8 and +7. If low has the value 6 and high has the value 6, then 6 + 6 = 12, which is too big: the sum overflows to -4. Dividing by 2 yields -2, and mid gets a negative value ... ouch!
I want to assert that mid is always positive. But, as described above, it could be negative if Int is not set to a sufficiently large size. Is there a way to dynamically set the size of Int?
I would like my assert to say something like this:
for some size of Int
mid_is_low_plus_high_div_2 =>
Test.mid >= 0
Is there a way to express that?
one sig Test {
    low: Int,
    high: Int,
    mid: Int
} {
    low in { i: Int | i >= 0 }
    high in { i: Int | i >= 0 }
}

pred mid_is_low_plus_high_div_2 {
    Test.mid = div[plus[Test.low, Test.high], 2]
}

assert mid_is_always_positive {
    mid_is_low_plus_high_div_2 =>
    Test.mid >= 0
}

check mid_is_always_positive
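The 4-bit wrap-around the question describes can be reproduced outside Alloy. This Python sketch is my own illustration (the `wrap` helper and its bit width are assumptions for the demo, not any Alloy API); it mimics a 4-bit two's-complement add followed by integer division:

```python
def wrap(v, bits=4):
    """Reduce v to a signed two's-complement value of the given width."""
    m = 1 << bits
    v %= m
    return v - m if v >= m // 2 else v

low, high = 6, 6
mid = wrap(low + high) // 2  # 12 wraps to -4 in 4 bits; -4 // 2 == -2
print(mid)  # → -2
```

(Python's // floors while Alloy's div truncates toward zero, but for -4 / 2 both give -2, so the sketch matches the behavior described in the question.)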

iterating through class members in groovy

I have a class named Vowels which contains 5 variables, one for each vowel. The user inputs a sentence, some voodoo magic happens, and a method counts the vowels and prints them:
[a = 9, e = 5, i = 7, o = 5, u = 6]
After that the user is requested to input an integer and another method does all 4 basic math operations, as below:
input
3
output
a --> 9 * 3 = 27
a --> 9 / 3 = 3
a --> 9 + 3 = 12
a --> 9 - 3 = 6
So far I have managed to make this happen with 20 printlns (one for each operation for each vowel), but could this somehow be optimized with a loop?
I found some info about reflection, but I don't really understand it.
The members of your object are available by calling getProperties() (or the properties shorthand). Every object has a property named class, which you can ignore, and the iteration order is not defined, so e might appear in the properties before a. Example:
class Vowels {
    int a, e, i, o, u
}

def v = new Vowels(a: 9, e: 5, i: 7, o: 5, u: 6)
v.properties.each { name, value ->
    if (name != 'class') {
        println "$name --> $value * 3 == ${value * 3}"
        ...
    }
}
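An analogous pattern in Python, offered purely as a cross-language comparison (not part of the Groovy answer), walks the object's attribute dictionary with vars() and prints all four operations per member in one loop:

```python
class Vowels:
    def __init__(self, a, e, i, o, u):
        self.a, self.e, self.i, self.o, self.u = a, e, i, o, u

v = Vowels(a=9, e=5, i=7, o=5, u=6)
n = 3
# sorted() pins the iteration order, which dict-based reflection
# does not otherwise guarantee across languages
for name, value in sorted(vars(v).items()):
    print(f"{name} --> {value} * {n} = {value * n}")
    print(f"{name} --> {value} / {n} = {value // n}")  # integer division, as in the example output
    print(f"{name} --> {value} + {n} = {value + n}")
    print(f"{name} --> {value} - {n} = {value - n}")
```

This replaces the 20 hand-written print statements with one loop body of four lines.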

Minimum no. of comparisons to find median of 3 numbers

I was implementing quicksort and wanted to set the pivot to be the median of three numbers: the first element, the middle element, and the last element.
Could I possibly find the median in fewer comparisons?
int median(int a[], int p, int r)
{
    int m = (p + r) / 2;
    if (a[p] < a[m]) {
        if (a[p] >= a[r])
            return a[p];
        else if (a[m] < a[r])
            return a[m];
    } else {
        if (a[p] < a[r])
            return a[p];
        else if (a[m] >= a[r])
            return a[m];
    }
    return a[r];
}
If the concern is only the number of comparisons, then this can be used (note that the subtractions can overflow for operands of large magnitude):
int getMedian(int a, int b, int c) {
    int x = a - b;
    int y = b - c;
    int z = a - c;
    if (x * y > 0) return b;  // b lies between a and c
    if (x * z > 0) return c;  // c lies between a and b
    return a;
}
int32_t FindMedian(const int n1, const int n2, const int n3) {
    auto _min = min(n1, min(n2, n3));
    auto _max = max(n1, max(n2, n3));
    return (n1 + n2 + n3) - _min - _max;  // note: the sum can overflow
}
You can't do it in one, and you're only using two or three, so I'd say you've got the minimum number of comparisons already.
Rather than just computing the median, you might as well put them in place. Then you can get away with just 3 comparisons all the time, and you've got your pivot closer to being in place.
T median(T a[], int low, int high)
{
    int middle = (low + high) / 2;
    if (a[middle].compareTo(a[low]) < 0)
        swap(a, low, middle);
    if (a[high].compareTo(a[low]) < 0)
        swap(a, low, high);
    if (a[high].compareTo(a[middle]) < 0)
        swap(a, middle, high);
    return a[middle];
}
I know that this is an old thread, but I had to solve exactly this problem on a microcontroller that has very little RAM and does not have a h/w multiplication unit (:)). In the end I found the following works well:
static char medianIndex[] = { 1, 1, 2, 0, 0, 2, 1, 1 };
signed short getMedian(const signed short num[])
{
return num[medianIndex[(num[0] > num[1]) << 2 | (num[1] > num[2]) << 1 | (num[0] > num[2])]];
}
If you're not afraid to get your hands a little dirty with compiler intrinsics you can do it with exactly 0 branches.
The same question was discussed before on:
Fastest way of finding the middle value of a triple?
Though, I have to add that in the context of a naive implementation of quicksort, with a lot of elements, reducing the number of branches when finding the median is not so important, because the branch predictor will choke either way once you start tossing elements around the pivot. More sophisticated implementations (which don't branch on the partition operation, and avoid WAW hazards) will benefit from this greatly.
Remove the max and min values from the total sum:
int med3(int a, int b, int c)
{
    int tot_v = a + b + c;
    int max_v = max(a, max(b, c));
    int min_v = min(a, min(b, c));
    return tot_v - max_v - min_v;
}
There is actually a clever way to isolate the median element from three using a careful analysis of the 6 possible permutations (of low, median, high). In Python:
def med(a, start, mid, last):
    # put the median of a[start], a[mid], a[last] in the a[start] position
    SM = a[start] < a[mid]
    SL = a[start] < a[last]
    if SM != SL:
        return  # a[start] is already the median
    ML = a[mid] < a[last]
    m = mid if SM == ML else last
    a[start], a[m] = a[m], a[start]
Half the time you need two comparisons, otherwise three (2.5 on average). And you only swap the median element into place when needed (2/3 of the time).
Full python quicksort using this at:
https://github.com/mckoss/labs/blob/master/qs.py
You can write up all the permutations:
1 0 2
1 2 0
0 1 2
2 1 0
0 2 1
2 0 1
Then we want to find the position of the 1. We could do this with two comparisons if our first comparison could split out a group of equal positions, such as the first two lines.
The issue is that the first two lines differ on every comparison we have available: a<b, a<c, b<c. Hence we have to fully identify the permutation, which requires 3 comparisons in the worst case.
Using the bitwise XOR operator, the median of three numbers can be found: XORing all three values together with the max and the min cancels those two out and leaves the median.
def median(a, b, c):
    m = max(a, b, c)
    n = min(a, b, c)
    return m ^ n ^ a ^ b ^ c
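As a sanity check (my own addition, not from any answer above), the subtraction-sign trick and the XOR trick can both be verified against plain sorting over every permutation of a sample triple:

```python
from itertools import permutations

def median_cmp(a, b, c):
    # mirrors the (a-b)*(b-c) sign trick from the earlier answer
    if (a - b) * (b - c) > 0:
        return b          # b lies between a and c
    if (a - b) * (a - c) > 0:
        return c          # a is an extreme and b is not the median
    return a

def median_xor(a, b, c):
    # XORing out the max and min leaves the median
    return max(a, b, c) ^ min(a, b, c) ^ a ^ b ^ c

for a, b, c in permutations((3, 7, 42)):
    assert median_cmp(a, b, c) == sorted((a, b, c))[1]
    assert median_xor(a, b, c) == sorted((a, b, c))[1]
print("all permutations agree")  # → all permutations agree
```

A harness like this is cheap insurance when swapping in one of the branch-minimizing variants inside a quicksort.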

What's the probability that X *consecutive* bits in an array of N bits is set to 1?

I'm trying to code a simple, sufficiently accurate filter for validating a piece of hardware in an RTL simulation. We're simulating the randomness inherent in a chip's flip-flops by randomly initializing all the flip-flops in the design to either 0 or 1. This corresponds to the chip's flip-flops getting some random value during power-up. We're also randomizing the flops in the reset tree (where the reset tree has no feedback loops), which means that you can get false glitching on your reset lines.
e.g.
|||
VVV Nth reset-tree flop
+----+ +----+ +----+ / / +----+
reset_in | | 0 | | 1 | | 0 / / | | reset_out
-------->D Q>----->D Q>----->D Q>---- / ... / -->D Q>----
| | | | | | \ \ | |
| | | | | | \ \ | |
+^---+ +^---+ +^---+ / / +^---+
| | | / / |
clk ------+------------+------------+---------/ / ---+
You'll see a 0->1->0 which looks like a reset, but is really a glitch.
I want to build a filter that looks for a certain number of consecutive 1 values to determine whether the reset I just saw was the reset coming from the reset controller or a spurious reset.
I know this is statistics and maybe related to the Poisson distribution, but how do I determine the probability that any X consecutive bits in a set of N bits are 1?
P.S. Yes. I am aware of 4-val RTL simulation. We're doing that also, but some Verilog constructs don't have sufficient pessimism when propagating X's and Z's.
EDIT: The below doesn't answer the question, sorry... A comment clarified that the real problem is about the probability of X consecutive 1s out of N bits, not just the simple thing I assumed.
Had a quick look at this: http://www.mathhelpforum.com/math-help/probability-statistics/64519-probability-consecutive-wins.html which may be what you are looking for - it seems to deal with working out the probability of a run of coin tosses within a larger population of coin tosses, so it sounds similar. But it's late and I am tired, so I haven't decoded the math :)
OBSOLETE:
It sounds like you are basically dealing with binomial probability - see http://en.wikipedia.org/wiki/Binomial_probability.
I have to admit I haven't done the calculations for about 20 years, so I'm somewhat rusty...
Basically, the binomial distribution lets you combine the probability of an event occurring multiple times, where there are only two possible outcomes each time.
Order is significant in your case, so it should be as simple as multiplying the probabilities:
For 1 bit it is 50%
For 2 bits it is 50%^2 = 25%
For 3 bits it is 50%^3 = 12.5%
Look at it another way:
1 bit has only 2 possible combinations, one of which is all 1s, so 50%
2 bits have 4 possible combinations (10, 01, 11, 00), only one of which is all 1s, so 25%
3 bits have 2^3 = 8 possible combinations, only one of which is all 1s, so 1/8 = 12.5%
So... the probability of n bits all being 1 is 1/(2^n).
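That counting argument is easy to confirm exhaustively (my own illustration, not part of the original answer):

```python
from itertools import product

# For each width n, enumerate every bit pattern and count the all-ones ones.
for n in range(1, 6):
    patterns = list(product([0, 1], repeat=n))
    all_ones = sum(1 for p in patterns if all(p))
    assert all_ones == 1 and len(patterns) == 2 ** n
    print(f"n={n}: P(all ones) = 1/{2 ** n}")
```

As the EDIT above notes, this only covers the all-ones case; the harder "some run of X ones anywhere in N bits" question is what the later answers address.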
If you want a quick test to see if a sequence of bits is random based on the longest streak of 1's, you can use the fact that the expected longest streak of 1's in N bits is Θ(log(N)).
Furthermore, the probability that the longest streak exceeds r*log₂(N) bits is at most 1/N^(r-1), and similarly the probability that the longest streak is less than log₂(N)/r bits is at most 1/N^(r-1).
These results are derived in the section on "Streaks" in the chapter on "Counting and Probability" in Introduction to Algorithms.
OK, here's what I found:
P = 1 - Q
where Q is (approximately) the probability of seeing no run of X ones in N bits:
Q = [1 - (1/2)Z] / [(X + 1 - X*Z) * (1/2) * Z^(N+1)]
where Z is approximated by the series
Z = 1 + (1/2)(1/2)^X + (X+1)[(1/2)(1/2)^X]^2 + ...
(Note the exponent N+1: Q has to depend on the total number of bits N, not just on the run length X.)
The link with some of the math is here:
Math Forum
You can do it with a recursive program (Python); prob(x, n) gives the desired result:
def prob(x, n, i=0):
    # i is the length of the current run of 1s
    if i == x:
        return 1.0
    if n < x - i:  # not enough bits left to complete a run of x
        return 0.0
    # the next bit is 1 (the run grows) or 0 (the run resets)
    return 0.5 * prob(x, n - 1, i + 1) + 0.5 * prob(x, n - 1, 0)
My approach to this would be to define an FSA that accepts bit patterns of the correct type, and then propagate probabilities through it one bit at a time. I.e. (pseudocode, for X = 3):
State state_map[] = {
    0 => { 0 -> 0; 1 -> 1; accepts = false },
    1 => { 0 -> 0; 1 -> 2; accepts = false },
    2 => { 0 -> 0; 1 -> 3; accepts = false },
    3 => { 0 -> 3; 1 -> 3; accepts = true }
};

state[t: 0, s: 0] = 1.0;
state[t: 0, s: 1] = 0.0;
state[t: 0, s: 2] = 0.0;
state[t: 0, s: 3] = 0.0;

for (t = 0; t < N; t++)
    for (s = 0; s < NUM_STATES; s++) {
        state[t: t+1, s: state_map[s].0] += state[t, s] * 0.5;
        state[t: t+1, s: state_map[s].1] += state[t, s] * 0.5;
    }

print "Probability: {0}", state[t: N, s: 3]
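The automaton sketch above can be made runnable. This Python version is my rendering of that pseudocode, generalized so the run length x is a parameter (state s < x counts the current run of 1s; state x is absorbing/accepting):

```python
def prob_run(x, n):
    """Probability that n fair random bits contain a run of >= x ones."""
    state = [0.0] * (x + 1)
    state[0] = 1.0               # start with no run seen
    for _ in range(n):
        nxt = [0.0] * (x + 1)
        for s, p in enumerate(state):
            if s == x:
                nxt[x] += p      # accepting state: stay forever
            else:
                nxt[s + 1] += p * 0.5  # next bit is 1: run grows
                nxt[0] += p * 0.5      # next bit is 0: run resets
        state = nxt
    return state[x]

print(prob_run(3, 3))  # → 0.125   (only 111)
print(prob_run(3, 4))  # → 0.1875  (1110, 0111, 1111)
```

This runs in O(N * X) time, unlike the exponential un-memoized recursion, so it scales to realistic reset-tree depths.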
