Minizinc bitwise operators (or similar efficient operation)? - constraint-programming

I would like to constrain an integer variable to have as value the bitwise XOR of some other integers.
I know I can encode the values in boolean arrays instead of integers and have something like forall (i, j in 1..n) c[i] = a[i] xor b[i] but I would like something more efficient.
Is there any way to use bitwise operators in Minizinc (or directly Flatzinc)?
Or alternatively a global constraint or something I can use to achieve what I want and make sure it is implemented efficiently? I am using Gecode as solver.

The following MiniZinc model demonstrates a function to compute the bitwise XOR of two integer variables:
include "globals.mzn";

int: bits = 15;
set of int: Bits = 0..bits-1;
set of int: Domain = 0..pow(2, bits) - 1;

var Domain: x;
var Domain: y;

% pre-calculate powers of 2: 1, 2, 4, ...
array[Bits] of Domain: twopow = array1d(Bits, [pow(2, i) | i in Bits]);

% test bit in int
function var int: bit_of(var int: num, Bits: idx) =
    (num div twopow[idx]) mod 2;

% function to calculate the bitwise XOR of two ints
function var int: bitxor(var int: x, var int: y) =
    sum([twopow[i] * ((bit_of(x, i) + bit_of(y, i)) mod 2) | i in Bits]);

constraint y = 0x05;
constraint bitxor(x, y) = 0xA5;

solve satisfy;

output ["\(x) \(y)"];
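The arithmetic behind bit_of and bitxor can be sanity-checked outside the solver. Here is a quick Python sketch of the same div/mod decomposition on plain integers (not part of the model, just a check of the encoding):

```python
BITS = 15  # same width as the MiniZinc model

def bit_of(num, idx):
    # extract bit idx via integer division and mod, as the model's bit_of does
    return (num // 2**idx) % 2

def bitxor(x, y):
    # recombine the XOR of each bit pair, weighted by powers of two
    return sum(2**i * ((bit_of(x, i) + bit_of(y, i)) % 2)
               for i in range(BITS))
```

Since XOR is its own inverse, the x satisfying bitxor(x, 0x05) = 0xA5 is simply bitxor(0xA5, 0x05) = 0xA0, which is the solution the solver should report.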

Related

Signed float to hexadecimal number

How to convert float to a specific format in hexadecimal:
1 bit for the sign, 15 bits for the integer part, and the remaining 16 bits for the fractional part.
Example output should be ffff587a for -0.6543861, fff31a35 for -12.897631, 006bde10 for 107.8674316, 003bd030 for 59.8132324
I have written a program that can do the unsigned conversion, I am stuck at the signed part. Could anyone guide me on how I can achieve this in a very compact way?
def convert(num):
    binary2 = ""
    Int = int(num)
    fract = num - Int
    binary = '{:16b}'.format(Int & 0b1111111111111111)
    for i in range(16):
        fract *= 2
        fract_bit = int(fract)
        if fract_bit == 1:
            fract -= fract_bit
            binary2 += '1'
        else:
            binary2 += '0'
    return int(binary + binary2, 2)
value = 107.867431640625
x = convert(value)
hex(x)
output: 0x6bde10
This is simply the Q16.16 fixed-point format. To convert a floating-point number to this format, simply multiply it by 2^16 (in Python, 1<<16 or 65536) and convert the product to an integer:
y = int(x * (1<<16))
To show its 32-bit two's complement representation, add 2^32 if it is negative and then convert it to hexadecimal:
y = hex(y + (1<<32 if y < 0 else 0))
For example, the following prints “0xfff31a35”:
#!/usr/bin/python
x=-12.897631
y = int(x * (1<<16))
y = hex(y + (1<<32 if y < 0 else 0))
print(y)
This conversion truncates. If you want rounding, you can add .5 inside the int or you can add additional code for other types of rounding. You may also want to add code to handle overflows.
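Equivalently, the conditional add of 2^32 can be replaced by masking with 0xFFFFFFFF, which yields the 32-bit two's-complement bit pattern directly. A small sketch of this variant:

```python
def to_q16_16_hex(x):
    # scale by 2^16, truncating toward zero like int(), then mask to
    # 32 bits to get the two's-complement hex representation
    y = int(x * (1 << 16))
    return '{:08x}'.format(y & 0xFFFFFFFF)
```

For instance, to_q16_16_hex(-12.897631) reproduces the 'fff31a35' from the question's examples.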

Why does this two-line change break this minizinc set-cover program?

The program below (adapted from http://www.hakank.org/minizinc/set_covering4b.mzn ) is a solution to the set-cover problem (example data provided at end of question). This runs correctly.
int: num_alternatives;
int: num_objects;
par set of int: ALTERNATIVES = 1..num_alternatives;

% costs for the alternatives
array[ALTERNATIVES] of int: costs;

% objects covered by the alternatives
array[ALTERNATIVES] of var set of 1..num_objects: a;

% decision variable: which alternative to choose
array[ALTERNATIVES] of var bool: x;

% the objective to minimize
var int: z = sum(i in 1..num_alternatives) (x[i]*costs[i]);

solve minimize z;

constraint
    forall(j in 1..num_objects) (
        sum(i in 1..num_alternatives) (x[i] * bool2int(j in a[i])) >= 1
    );

output [
    "x: " ++ show(x) ++ "\n" ++
    "a: " ++ show(a) ++ "\n"
];
However, if I replace the a definition above:
array[ALTERNATIVES] of var set of 1..num_objects: a;
with these two lines that seem to me to be equivalent:
var set of int: OBJECTS = 1..num_objects;
array[ALTERNATIVES] of OBJECTS: a;
...suddenly I get the following error:
MiniZinc: type error: type-inst must be par set but is `var set of int'
This confuses me. What did I even change? In each case a is an array of sets of ints. The type-instance is a var set of int in each case, but the second one throws an error and the first one doesn't for some reason?
Here's some data which can be dropped in the bottom of the .mzn code file to produce a self-contained, runnable example:
% data
num_alternatives = 10;
costs = [ 19, 16, 18, 13, 15, 19, 15, 17, 16, 15];
num_objects = 8;
% the alternatives and the objects they contain
a = [
{1,6},
{2,6,8},
{1,4,7},
{2,3,5},
{2,5},
{2,3},
{2,3,4},
{4,5,8},
{3,6,8},
{1,6,7}
];
You could write it as follows:
int: num_alternatives;
int: num_objects;
set of int: ALTERNATIVES = 1..num_alternatives;
set of int: OBJECTS = 1..num_objects;

% costs for the alternatives
array[ALTERNATIVES] of int: costs;

% objects covered by the alternatives
array[ALTERNATIVES] of var set of OBJECTS: a;

% decision variable: which alternative to choose
array[ALTERNATIVES] of var bool: x;

% the objective to minimize
var int: z = sum(i in ALTERNATIVES) (x[i]*costs[i]);

solve minimize z;

constraint
    forall(j in OBJECTS) (
        sum(i in ALTERNATIVES) (x[i] * (j in a[i])) >= 1
    );

output [
    "x: " ++ show(x) ++ "\n" ++
    "a: " ++ show(a) ++ "\n"
];
In your experiment

var set of int: OBJECTS = 1..num_objects;
array[ALTERNATIVES] of OBJECTS: a;

two things changed at once. A set used as a type in a declaration must be a par set (fixed at compile time); declaring OBJECTS as a var set of int is what triggers the "type-inst must be par set" error, so the var has to go. And even with par, array[ALTERNATIVES] of OBJECTS: a declares a as an array of integers drawn from 1..num_objects, not the array of (decision-variable) sets of those integers that you intended; for that you need var set of OBJECTS, as in the model above.
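As a cross-check on either model: the instance in the question is small enough (2^10 candidate subsets) that the optimum can be brute-forced independently of MiniZinc:

```python
from itertools import combinations

# data from the question
costs = [19, 16, 18, 13, 15, 19, 15, 17, 16, 15]
a = [{1,6}, {2,6,8}, {1,4,7}, {2,3,5}, {2,5},
     {2,3}, {2,3,4}, {4,5,8}, {3,6,8}, {1,6,7}]
objects = set(range(1, 9))

best_cost, best_pick = None, None
for r in range(1, len(a) + 1):
    for pick in combinations(range(len(a)), r):
        # keep the pick only if every object is covered
        if set().union(*(a[i] for i in pick)) >= objects:
            cost = sum(costs[i] for i in pick)
            if best_cost is None or cost < best_cost:
                best_cost, best_pick = cost, pick
```

On this data it finds a minimum cost of 45, using alternatives 4, 8 and 10 (1-based), which the MiniZinc model should reproduce.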

Efficient predicate for palindrome in Minizinc

To help me learn MiniZinc, I am trying to solve an easy problem. My code finds an answer, but I am surprised that it takes about 10 seconds to run for such an easy problem.
The problem is: "What is the smallest palindromic integer > 10 such that the sum of its digits is > 10 and palindromic too?"
And I want the code to make only mild assumptions: the answer has at most 8 digits.
My code is (the toNum predicate comes from hakank website):
predicate toNum(array[int] of var int: a, var int: n, int: base) =
    let { int: len = length(a) } in
    n = sum(i in 1..len) (
        ceil(pow(int2float(base), int2float(len-i))) * a[i]
    )
    /\ forall(i in 1..len) (a[i] >= 0 /\ a[i] < base);

predicate toNum10(array[int] of var 0..9: a, var int: n) = toNum(a, n, 10);

predicate palindrome_array(array[int] of var int: t) =
    let { int: l = length(t), var 1..l: d } in (
        forall(j in 1..d-1) (t[j] = 0) /\
        t[d] != 0 /\
        forall(j in d..(l+d-1) div 2) (t[j] = t[l+d-j])
    );

predicate palindrome_int(var int: n) =
    let { int: size = ceil(log10(int2float(ub(n))))+1,
          array[1..size] of var 0..9: digits } in (
        toNum10(digits, n) /\
        palindrome_array(digits)
    );

var int: n;
array[1..8] of var 0..9: t;
constraint toNum10(t, n);
constraint palindrome_int(n);
constraint n > 10;

var int: s = sum(t);
constraint palindrome_int(s);
constraint s > 10;
constraint alldifferent([n, s]);

solve minimize n;
The complete version has the following additional constraints:
var int: s2 = sum(i in 1..8) (t[i]*t[i]);
constraint palindrome_int(s2);
constraint s2 > 10;
var int: s3 = sum(i in 1..8) (t[i]*t[i]*t[i]);
constraint palindrome_int(s3);
constraint s3 > 10;
constraint alldifferent([n, s, s2, s3]);
What's wrong/slow with my code?
Try to replace "solve minimize n;" with the following labeling strategy:
solve :: int_search(t, first_fail, indomain_min, complete) minimize n;
On my machine, it then takes < 0.1s.
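For reference, the basic version of the puzzle is small enough to brute-force in Python, which makes a handy check on the model's answer (this ignores the s2/s3 constraints of the complete version):

```python
def is_pal(n):
    # a number is palindromic iff its decimal string reads the same reversed
    s = str(n)
    return s == s[::-1]

def smallest():
    n = 11  # the answer must be > 10
    while True:
        s = sum(int(c) for c in str(n))
        # n palindromic; digit sum > 10, palindromic, and distinct from n
        if is_pal(n) and s > 10 and is_pal(s) and s != n:
            return n
        n += 1
```

smallest() returns 191 (digit sum 11, itself a palindrome), which is what the minimized model should find.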

Converting decimal to specific scientific notation format

I'm trying to convert decimal to a specific scientific notation format:
-2.802479940 ==> -.2802479940E+01
3.796137791 ==> 0.3796137791E+01
And so on. Basically, no leading zero on negative numbers and a leading zero on positive numbers. I've been able to print in big-E scientific notation, but I have no idea how to force the leading character to be a '-' or a '0' only.
This is kind of ugly (there aren't any built-in string-formatting options for it as far as I know), but it should do what you want it to do:
import re

nums = [-2.802479940, 3.796137791, -0.012, 0.035, 0, 100, -200]
decimal_digits = 10
for n in nums:
    num = ('{:.' + str(decimal_digits - 1) + 'E}').format(n)
    # increment exponent
    exp_search = re.search(r'E(\+|-)\d+', num).group()
    sign = exp_search[1]
    exp = int(exp_search[1:])
    new_exp = exp + 1
    exponent = 'E' + sign + '{:02}'.format(abs(new_exp))
    # move decimal place over
    sig_search = re.search(r'-?\d\.\d+', num).group()
    stripped = sig_search.replace('.', '')
    if stripped[0] == '-':
        significand = stripped.replace('-', '-.')
    else:
        significand = '0.' + stripped
    # glue significand and exponent back together
    new_num = significand + exponent
    print('{:>20}{:>20}'.format(num, new_num))
    assert float(num) == float(new_num)
Output:
-2.802479940E+00 -.2802479940E+01
3.796137791E+00 0.3796137791E+01
-1.200000000E-02 -.1200000000E-01
3.500000000E-02 0.3500000000E-01
0.000000000E+00 0.0000000000E+01
1.000000000E+02 0.1000000000E+03
-2.000000000E+02 -.2000000000E+03
This doesn't perform any arithmetic, just string operations, so it shouldn't introduce any issues with floating-point representations or whatever.
Here's C code to do what you want:
void numfmt(double d, char* buf) {
    int exp = 0;
    int cont = 1;
    char sign = '0';
    if (d < 0) {
        sign = '-';
        d = -d;
    }
    if (d == 0.0)     /* zero never reaches [0.1, 1): avoid an infinite loop */
        cont = 0;
    while (cont) {
        if (d >= 1.0) {
            exp++;
            d /= 10.0;
        } else if (d < 0.1) {
            exp--;
            d *= 10.0;
        } else
            cont = 0;
    }
    sprintf(buf, "%12.10fE%+03d", d, exp);
    *buf = sign;   /* overwrite the leading '0' with '-' if negative */
}
To convert any number into scientific notation you need to know two things: the exponent and the decimal (mantissa) part, so that Number = Dec E exponent (or Number = Dec * 10^exponent).
The log10() function is the key here, because it gives you the exponent of Number. To get the exponent you have this relation:
exponent = log10(Number / Dec)
but, not knowing the Dec part yet, we simplify it to:
exponent = log10(Number)
This does not give the integer exponent, only the real value for which 10^exponent = Number. To get the right integer exponent you need to round down to the nearest integer with the floor() function.
That's why you'll need to change the relation to:
exponent = int(floor(log10(Number)))  # int() converts the floating value into an integer exponent
Then, using this exponent, you can easily find the decimal part with the relation:
Dec = Number / 10^exponent
Now you have a way to put any number* into scientific notation; if you need a leading zero, you just have to decrement the exponent.
* log(x) is only defined for x > 0, so if you have a negative number you must convert it into a positive one before passing it to log10(), and if the number is 0 you should handle it as a special case (return Dec = 0 and exponent = 0).
This is an example of the code in Python:

from math import floor, log10

def Decompose(originalvalue):
    calcul = originalvalue
    if originalvalue != 0.0:
        if originalvalue < 0:
            calcul = -calcul
        exponent = int(floor(log10(calcul)))    # get the integer exponent
        value = originalvalue / 10.0**exponent  # get the scientific decimal part
        # to get a leading zero, check whether |value| >= 1 and, if so,
        # shift one digit from the decimal part into the exponent
        if abs(value) >= 1:
            value *= 10**-1
            exponent += 1
    else:
        exponent = 0
        value = 0.0
    return [value, exponent]
Here is a solution I came up with, having the same problem.
def converter(number):
    number_str = '{:.14e}'.format(number)
    number_split = number_str.split('.')
    int_part = number_split[0]
    exp_part = number_split[1][-3:]
    dec_part = number_split[1][:-6]
    int_part_numeric = int(int_part)
    if number == 0:
        return '0.000000000000E+00'
    if int_part_numeric == 0:
        sign = '0'
        new_exp_part = exp_part
        new_dec_part = dec_part
    elif int_part_numeric < 0:
        sign = '-'
        new_exp_part = '{:+03d}'.format(int(exp_part) + 1)
        new_dec_part = int_part.strip('-') + dec_part[:-1]
    elif int_part_numeric > 0:
        sign = '0'
        new_exp_part = '{:+03d}'.format(int(exp_part) + 1)
        new_dec_part = int_part + dec_part[:-1]
    return sign + '.' + new_dec_part + 'E' + new_exp_part
It could probably be simplified a bit though...
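The same idea can be condensed into a short helper (a hypothetical sketch, not taken from any of the answers above): normalize the mantissa into [0.1, 1) via log10, then patch the leading character. Note it does not guard against the rare case where rounding carries the mantissa up to 1.0 at the requested precision:

```python
from math import floor, log10

def sci(x, digits=10):
    if x == 0:
        # log10 is undefined at zero; emit a fixed zero form
        return '0.' + '0' * digits + 'E+00'
    exp = floor(log10(abs(x))) + 1         # exponent putting the mantissa in [0.1, 1)
    mant = abs(x) / 10.0**exp
    body = '{:.{}f}'.format(mant, digits)  # e.g. '0.2802479940'
    lead = '-' if x < 0 else '0'           # '-' or '0' as the leading character
    return lead + body[1:] + 'E{:+03d}'.format(exp)
```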

Audio Delay - wrapping variables?

I'm probably missing something simple, but I can't seem to get this to work. I have a simple audio delay with a fixed array size of [65536], and it works perfectly. The problem is, if I try to make the array size either shorter or longer, it bombs out!
Here's the working code:
float b[65536]; // initialisation
int p, r0, r1;
float len, d, feedback; // d = decimal part of length
int leni; // integer value of length
if (len > 65534) len = 65534;
else if (len < 1) len = 1;
leni = (int) len; // float to INT conversions are cpu intensive.
d = len - leni;
r0 = p - leni; // loop
r1 = r0 - 1;
if (r0 < 0) r0 += 65536;
if (r1 < 0) r1 += 65536;
output = b[r0] + d * (b[r1] - b[r0]);
b[p] = input + output * feedback;
p++;
if (p > 65535) p = 0;
Now here's the non-working code:
float b[16384]; // initialisation
int p, r0, r1;
float len, d, feedback; // d = decimal part of length
int leni; // integer value of length
if (len > 16382) len = 16382;
else if (len < 1) len = 1;
leni = (int) len; // float to INT conversions are cpu intensive.
d = len - leni;
r0 = p - leni; // loop
r1 = r0 - 1;
if (r0 < 0) r0 += 16384;
if (r1 < 0) r1 += 16384;
output = b[r0] + d * (b[r1] - b[r0]);
b[p] = input + output * feedback;
p++;
if (p > 16383) p = 0;
Can't figure out what to do. Am I wrapping it wrong, or is 65536 a special int case?
Thanks in advance
Andrew
Your code is a bit hard to follow -- e.g. I'm not clear why you have floating-point numbers that need to be changed to integers, what the initial values of variables like len are, or where the loop occurs. I'm not even sure what language it is (Python?), and maybe those things are obvious to someone who knows the language, in which case I apologize for my ignorance.
I can, however, speculate that the fact that it works for 65536 probably has something to do with 65536 being a power of two, and somehow that property works around whatever other bugs you have.
I'm not going to try to fix your code because, as I said, I can't follow it -- again, sorry. Instead, see my answer here, which tells you how to write a delay and an echo. It's in C-like pseudo-code which should be easy to port, and it's very few lines of code:
how to add echo effect on audio file using objective-c
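For what it's worth, the wrapping logic itself generalizes to any buffer length if every index is reduced modulo the length; the usual bug when shrinking the buffer is leaving p or a read index wrapped against the old size. Here is a Python sketch of one tick of the same fractional delay (names mirror the snippet; length and feedback are assumed inputs), which works for any buffer size:

```python
def delay_tick(buf, p, length, feedback, x):
    # one sample: read `length` samples back with linear interpolation,
    # then write input + feedback at the current position
    n = len(buf)            # any size works, not just 65536
    leni = int(length)      # integer part of the delay length
    d = length - leni       # fractional part, for interpolation
    r0 = (p - leni) % n     # wrapped read positions
    r1 = (r0 - 1) % n
    y = buf[r0] + d * (buf[r1] - buf[r0])
    buf[p] = x + y * feedback
    return (p + 1) % n, y   # advance and wrap the write position
```

Feeding a single impulse into a zeroed buffer makes it reappear exactly `length` samples later, regardless of the buffer size.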
