ECLiPSe CLP produces a variable with an unexpected range - constraint-programming

I have a question regarding the following code:
:- lib(ic).
buggy_pred(Result, In0, In1, In2, In3, In4, In5, In6, In7) :-
    Args = [In0, In1, In2, In3, In4, In5, In6, In7],
    Args :: [0..255],
    Result :: [0..18446744073709551615],   % 64 bits wide
    % put 8 bytes together to form a 64-bit value
    Result #= (In0 + (In1 * 256) + (In2 * 65536) + (In3 * 16777216) +
               (In4 * 4294967296) + (In5 * 1099511627776) +
               (In6 * 281474976710656) + (In7 * 72057594037927936)).
buggy_pred_test :-
    buggy_pred(Result, 56, 8, 0, 0, 16, 0, 0, 1),
    get_bounds(Result, Lo, Hi),
    write(Lo), nl,
    write(Hi).
Shouldn't the above code (predicate buggy_pred_test) print the same number twice? Instead it yields two different numbers (Lo and Hi respectively):
72057662757406720
72057662757406800
I cannot figure out the cause of this behaviour. I am using ECLiPSe 6.1 #194, x86_64 for Linux. Your help is greatly appreciated.

ECLiPSe's lib(ic) constraint solver is designed for handling mixtures of real- and integer-valued variables/constraints. All computations are performed using double floats to represent upper and lower bounds, even integer operations (integrality is simply treated as an additional constraint).
Because double floats have 53 bits of precision, only integers in the range -9007199254740991..9007199254740991 can be represented precisely. Larger integers are approximated by a floating point interval that encloses the true value. This is why you get a non-zero-width interval as a result.
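To see that 53-bit limit in isolation, here is a small C sketch (an illustration of IEEE double precision, not ECLiPSe code); the second value is the exact 64-bit result of the sum in the question and cannot be represented exactly as a double:
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    uint64_t exact = 9007199254740991ULL;   /* 2^53 - 1: still exactly representable */
    uint64_t big   = 72057662757406776ULL;  /* the true value computed by buggy_pred_test */

    /* round-tripping through double keeps the first value intact ... */
    printf("%llu -> %.0f\n", (unsigned long long)exact, (double)exact);
    /* ... but the second is rounded to the nearest representable double,
       which is why lib(ic) can only enclose it in an interval */
    printf("%llu -> %.0f\n", (unsigned long long)big, (double)big);
    return 0;
}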
This may sound unsatisfactory, but in practice models involving huge integer domains are rarely efficiently solvable, and are therefore less useful than they may seem. So the advice would be to model the problem differently; see here for an example of modeling a problem in two ways.

Related

System Verilog using mask

I can't get the meaning of this code. I know VHDL and need to learn SystemVerilog. I do not know the meaning of bits [num] = '{4, 4}) or of (output logic [width-1:0] mask [num]).
Please explain.
module works
  #(parameter int num = 4,
    parameter int width = 8,
    parameter int bits [num] = '{4, 4})
   (output logic [width-1:0] mask [num]);
A module is like a VHDL entity, so we have a block called works:
module works
A parameter is like a VHDL generic. Instead of saying generic, in SystemVerilog we just say #. So, we have a block with three parameters (generics), an int (32-bit signed integer like a VHDL integer) with a default value of 4:
#(parameter int num = 4,
an int with a default value of 8:
parameter int width = 8,
and an array of ints of size equal to the value of the parameter num, which will be numbered 0 to num-1:
parameter int bits [num] = '{4, 4})
'{4,4} is an assignment pattern and is the (rough) equivalent of a VHDL aggregate. So, this code is trying to initialise two of the values of this array to the integer 4. The trouble is that this code is probably illegal. The array bits can be of any size (depending on the value of the parameter num), and this array is what is called an unpacked array. In SystemVerilog (and in Verilog), both the size and shape of assignments to unpacked arrays must match (just like in VHDL). The sizes of the two sides of this assignment will not match unless the value of num is 2. If you want to initialise all the elements of an unpacked array to the same thing, you can use a key (rather like VHDL others):
parameter int bits [num] = '{default:4})
https://www.edaplayground.com/x/5w8y
This is a port:
(output logic [width-1:0] mask [num]);
whose size is defined by the two parameters, width and num. The output is an array of num elements (a so-called unpacked dimension), each a word that is width bits wide (a so-called packed dimension). logic is a type. Variables of type logic can take one of four values: 0, 1, X or Z.
output logic [width-1:0] mask [num]
[width-1:0] mask is a vector of width bits. With a width of 8 this would be an 8-bit vector: [7:0] mask.
The vector is followed by [num], which means it is an array of num such vectors. The total is a two-dimensional array of width x num bits.
That syntax is very common and you will see it often.
I had to look up the '{4,4} pattern (I could not find it in my little SystemVerilog booklet) and, as Matthew says, it is an assignment of values to an array. So my initial interpretation was wrong.
The problem with the existing code is that my Verilog simulator throws an error message when using the default values: num is 4 and '{4,4} has only two elements. Thus upon start-up I get an error:
ERROR: [VRFC 10-666] expression has 2 elements; expected 4 [...
If I set num to 2 #(.num(2)) the simulator is happy.

Verilog operation unexpected result

I am studying the Verilog language and ran into a problem.
integer intA;
...
intA = - 4'd12 / 3; // expression result is 1431655761.
                    // -4'd12 is effectively a 32-bit reg data type
This snippet is from the standard and it blew our minds. The standard says that 4'd12 is the 4-bit number 1100.
Then -4'd12 = 0100. That is okay so far.
To perform the division, we need to bring the numbers to the same size, 4 to 32 bits. The number -4'd12 is unsigned, so it should be equal to 32'b0000...0100, but it is equal to 32'b1111...10100. Not OK, but on to the next step.
My version of the division: -4'd12 / 3 = 32'b0000...0100 / 32'b0000...0011 = 1
The standard's version: - 4'd12 / 3 = 1431655761
Can anyone tell me why? Why does the 4-bit number keep the extra bits?
You need to read section 11.8.2 Steps for evaluating an expression of the 1800-2012 LRM. The key piece you are missing is that the operand is 4'd12 and that it is sized to 32 bits as an unsigned value before the unary - operator is applied.
If you want the 4-bit value treated as a signed -4, then you need to write
intA = - 4'sd12 / 3 // result is 1
Here the parser interprets 'd12 as a 32-bit number which is unsigned, and the minus sign then applies two's-complement negation to that 32-bit value:
-('d12) = 2^32 - 12 = 4294967284 = 32'hFFFFFFF4 (binary: 28 ones followed by 0100).
If you divide this number (4294967284) by 3 using unsigned integer division, the answer is 1431655761.
keep smiling :)
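The same arithmetic can be reproduced in plain C (a small illustration, not Verilog), because unsigned 32-bit operands wrap modulo 2^32 in exactly the same way:
#include <stdio.h>

int main(void)
{
    unsigned int x = 12;          /* assumes the usual 32-bit unsigned int */
    unsigned int negated = -x;    /* wraps to 2^32 - 12 = 4294967284 (0xFFFFFFF4) */
    printf("%u\n", negated);      /* 4294967284 */
    printf("%u\n", negated / 3);  /* 1431655761, the same result as the Verilog expression */
    return 0;
}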

Using " * " for multiplication of binary numbers, only gives me addition, why? (Code here)

I'm learning operations with "+", "-" and "*". Addition and subtraction work well, but multiplication only gives me additions. Link to the code:
http://www.edaplayground.com/x/NvT
I checked the code but can't understand what's going on. I gave the result variable enough space (bits).
BTW, it's code intended for fixed-point operations including fractional numbers, but everything is calculated as integers.
Your select signal is only 1 bit wide.
So when you set select = 2, only the lowest bit of 2 (2'b10) is assigned, i.e. 0.
You should change select declaration by :
input [1:0] select; // In the module
reg [1:0] select; // In the testbench
To avoid such errors I would advise you to use the complete notation of values:
x'tnnn...nnn
where x is the width of the signal, t is the base (d for decimal, h for hexadecimal, b for binary, ...) and nnn...nnn is the value written in that base.
For example, for the decimal value 2 you have several notations that make sense in different situations:
2'd2  // 2-bit decimal
2'h2  // 2-bit hexadecimal
2'b10 // 2-bit binary
For more information about these notations you can read this PDF.

fixed point integer division ("fractional division") algorithm

The Honeywell DPS8 computer (and others) have/had a "divide fractional" instruction:
"This instruction divides a 71-bit fractional dividend (including sign) by a 36-bit
fractional divisor (including sign) to form a 36-bit fractional quotient (including
sign) and a 36-bit fractional remainder (including sign). Bit 35 of the remainder
corresponds to bit 70 of the dividend. The remainder sign is equal to the dividend
sign unless the remainder is zero."
So, as I understand it, this is integer division with the decimal point way over on the left.
.qqqqq / .ddddd
(I did scaled integer math in FORTH back in the day, but my memories of the techniques are lost in the fog of time.)
To implement this instruction in a DPS8 emulator, I believe I need to start by creating two 70-bit numbers: the 71-bit dividend less its sign bit, and the 36-bit divisor less its sign bit and shifted 35 bits to the left so that the decimal points line up.
I think I can then form the remainder and quotient (in C) with '%' and '/', but I am unsure whether those results need to be normalized (i.e. shifted).
I found an example of a "shift and subtract" algorithm ("Computer Arithmetic", slide 10), but I would prefer a more straightforward implementation.
Am I on the right track, or is the solution more nuanced? (Fixing up the signs and detecting errors have been elided here; those stages are well documented. The actual division is the issue.) Any pointers to C implementations of this kind of hardware emulation would be particularly helpful.
I do not have the definitive answer, but as a division is a division, you might find it helpful to look at some basic division routines.
Imagine that you have a 32-bit variable and you want an 8-bit fractional part.
You then have an integer part between 0 and 16777215, and a fractional part which is between 0 and 255.
0xiiiiiiff (where i is the integer part, f is the fractional part).
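In C terms (my own sketch of this layout, with made-up helper names), packing and unpacking such a 24.8 value looks like this:
#include <stdint.h>

/* 24.8 fixed point: upper 24 bits hold the integer part, lower 8 bits the fraction */
static inline uint32_t fx_pack(uint32_t ip, uint32_t fp) { return (ip << 8) | (fp & 0xFF); }
static inline uint32_t fx_int(uint32_t fx)               { return fx >> 8; }
static inline uint32_t fx_frac(uint32_t fx)              { return fx & 0xFF; }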
Imagine you have a 24-bit dividend (numerator), say the value 3, and a 24-bit divisor (denominator), say the value 13.
As we quickly will see, 3/13 is greater than zero and less than one. That means our fractional part is nonzero, but our integer part is filled completely with zeros.
So to do the above division using a standard divide function, we'll just bit-shift the dividend by N, thus we will get N bits of precision in our fractional part.
quotient_fp = (dividend_ip << 8) / divisor_ip
So far, so good.
But what if we want the divisor to have a fractional part, then?
If we just shift the divisor up by 8, then we'll have a problem:
(dividend_ip << 8) / (divisor_ip << 8)
- because we'll obviously lose our fractional part of the quotient (result).
Instead, we'll need to shift the dividend up by as many extra bits as we shift the divisor up...
((dividend_ip << 8) << 8) / (divisor_ip << 8)
...That makes it...
(dividend_ip << (dividend_precision + divisor_precision)) / (divisor_ip << divisor_precision)
Now, let's put our fractional part math into the picture...
(((dividend_ip << dividend_precision) | dividend_fp) << divisor_precision) / ((divisor_ip << divisor_precision) | divisor_fp)
Our quotient's precision will be the same as dividend_precision, which is 8 bits.
Unfortunately, this eats a lot of bits.
Fortunately, in your case, the integer part is not important, so you'll have a lot of room for the fractional part.
Let's increase the precision to 15 bits; this can be tested using normal 32-bit integers...
(((dividend_ip << 15) | dividend_fp) << 15) / ((divisor_ip << 15) | divisor_fp)
Our quotient will now have a 15-bit precision.
OK, but since you're supplying only the fractional parts and the integer part is always zero anyway, you should be able to just toss the integer part (and, with the integer bits freed up, use 16 bits of precision). That makes it....
(((dividend_ip << 16) | dividend_fp) << 16) / ((divisor_ip << 16) | divisor_fp)
... reduced to ...
(dividend_fp << 16) / divisor_fp
... now let's use a 64-bit integer instead, we can get 32 bits of precision in the quotient...
(dividend_fp << 32) / divisor_fp
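As a concrete sketch of that last expression (my own C illustration, with a made-up helper name): a 64-bit intermediate lets the 32-bit fractional divide be written directly.
#include <stdint.h>

/* Divide two 32-bit fractions; dividend_fp < divisor_fp is assumed,
   so the 32-bit fractional quotient cannot overflow. */
uint32_t frac_div32(uint32_t dividend_fp, uint32_t divisor_fp)
{
    return (uint32_t)(((uint64_t)dividend_fp << 32) / divisor_fp);
}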
Some compilers have support for a 128-bit integer type (for GCC and Clang it is __int128, available on some platforms), so you might be able to use that type in order to get 128 bits easily. I have not tried it, but I've come across info on the Web earlier; search for __int128 or int128_t, and you might find out how.
If you get the int128_t to work, you could make the dividend 128 bit, the divisor 64 bit and the quotient 64 bit...
quotient_fp = ((dividend_fp << 36) / divisor) >> (64 - 36)
... in order to get 36 bits precision.
Notice that since the result is in the top 36 bits of the quotient, the quotient needs to be shifted down (64 - 36) = 28 bits.
You could even go as high as (128 - 36) = 92 bits precision:
(dividend_fp << 92) / divisor
Now that you probably (hopefully) have a solution, I would like to recommend that you get familiar with low-level binary division (again, since you've been there a while ago).
The best sources seem to be descriptions of how hardware divides binary numbers, such as microcontrollers, CPUs and the like. Assembly-language dividers are also good for getting to know the inner workings. Often 32-bit divide routines that use bit-shifting are very good sources.
Over time, I've come across a very clever implementation for ARM in ARM assembly language. Normally I wouldn't post references or assembly-language examples, but considering that the code is very small, I think it will be all right.
Taken from A Fast Hi Precision Fixed Point Divide
r0 is the numerator (dividend)
r2 is the denominator (divisor)
mov r1,#0
adds r0,r0,r0
.rept 32
adcs r1,r2,r1,lsl#1
subcc r1,r1,r2
adcs r0,r0,r0
.endr
r0 is the quotient (result)
r1 is the remainder (rest, modulo result)
The above routine contains the basics for an unsigned divide.
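For comparison with the ARM routine, here is a minimal C sketch of the same restoring shift-and-subtract idea (my own illustration, not tested against the DPS8; the divisor must be non-zero):
#include <stdint.h>

/* Restoring shift-and-subtract division: bring one dividend bit into the
   partial remainder per step and subtract the divisor whenever it fits. */
void shift_sub_div64(uint64_t dividend, uint64_t divisor,
                     uint64_t *quotient, uint64_t *remainder)
{
    uint64_t q = 0, r = 0;
    for (int i = 63; i >= 0; i--) {
        r = (r << 1) | ((dividend >> i) & 1);  /* bring down the next dividend bit */
        q <<= 1;
        if (r >= divisor) {                    /* divisor fits: subtract and set quotient bit */
            r -= divisor;
            q |= 1;
        }
    }
    *quotient  = q;
    *remainder = r;
}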
I hope this information will be useful. It may contain errors, as I have not tested any code or example mentioned. I'm confident, though, that it's not all wrong. ;)

How to implement an n-bit adder whose input vectors are represented in octal?

I'm somewhat stumped on this problem:
"Write a verilog module for full addition of n-bit integers. Let the parameter, the number of bits, equal 3. Call this module from a test bench, and in the test bench specify the numbers to be added in the arrays. Assign octal values to the X and Y arrays. The carryin is 0."
And yes, this is homework.
I was able to write the module for the n-bit adder:
module addern(carryin, X, Y, S, carryout, overflow);
  parameter n = 3;
  input carryin;
  input [n-1:0] X, Y;
  output reg [n-1:0] S;
  output reg carryout, overflow;
  always @(X, Y, carryin)
  begin
    {carryout, S} = X + Y + carryin;
    overflow = (X[n-1] & Y[n-1] & ~S[n-1]) | (~X[n-1] & ~Y[n-1] & S[n-1]);
  end
endmodule
I understand this component of the problem. However, I'm not sure how to implement the octal number addition. Is there a way in Verilog to indicate that the arrays are holding octal values rather than binary?
Is there anything like a typecast in Verilog? For instance, input (octal) [n-1:0] X, Y, and something likewise in the test bench.
Any constructive input is appreciated.
I'm pretty sure I'm in the same class as you. I think what you need to do is create a hierarchical Verilog module and then assign your values there. That would be your testbench. For example, if you want to make X you write input [n-1:0] X = 3'o013, or maybe it's X = 9'o013 if Oli is correct. You don't change n, but it's kind of like BCD where the digits are in groups and you have a certain number of bits you can represent before it overflows.
To help solve the problem, think about the question:
Q) How are numbers stored in digital hardware?
A) In binary. In digital logic we can only represent two values, 1 and 0, but with these we can represent integer, fixed-point or floating-point numbers.
Therefore digital numbers are base 2 (two possible values) while still being able to represent any number. Other bases such as octal (base 8), hex (base 16) and decimal (base 10) exist, but these are just ways of writing numbers down, similar to the way binary just represents a number.
A decimal 1 is represented by 1 in all of these bases, and when stored they are all the same binary value. Here are some example values in Verilog and their binary equivalents:
Octal Decimal Hex Binary
3'O7 => 3'd7 => 3'h7 => 3'b111
6'O10 => 6'd8 => 6'h8 => 6'b001000
Octal, decimal and hex in Verilog are just representations of a binary format, a way of viewing the data, since the low-level electronics has no way of representing anything other than 0 and 1.
The interesting thing about octal and hex is that each digit covers a power-of-2 number of values, so each digit occupies an exact number of bits. A value such as 9'O123 is therefore the same as treating each octal digit separately and concatenating them together: 9'O123 == {3'O1, 3'O2, 3'O3}. This is also true for hexadecimal values, but not for decimal (base 10) values, as 10 is not a power of 2 and a decimal digit does not fully occupy its number space.
This does allow 'octal' ports to be created, which are just 3-bit binary ports:
module octal_concat (
  input  [2:0] octal_2,
  input  [2:0] octal_1,
  input  [2:0] octal_0,
  output [8:0] concat
);
  assign concat = {octal_2, octal_1, octal_0};
endmodule

octal_concat octal_concat_0 (
  .octal_2(3'O1),
  .octal_1(3'O2),
  .octal_0(3'O3),
  .concat() // drives 9'O123, which is also 9'b001_010_011
);
