We have the following line of code, and we know that regF is 16 bits wide, regD is 8 bits wide, regE is 8 bits wide, and regC is 3 bits wide and assumed unsigned:
regF <= regF + ((regD << regC) & {16{regE[regC]}});
My question is: will the shift regD << regC assume that the result is 8 bits, or will it be extended to 16 bits because of the bitwise & with the 16-bit vector?
The shift sub-expression itself has a width of 8 bits; the bit width of a shift is always the bit width of the left operand (see table 5-22 in the 2005 LRM).
However, things get more complicated after that. The shift sub-expression appears as an operand of the & operator. The bit length of the & expression is the bit length of the larger of its two operands; in this case, 16 bits.
This sub-expression now appears as an operand of the + expression; the result width of this expression is again the maximum width of the two operands of the +, which is again 16.
We now have an assignment. This is not technically an operand, but the same rules are used; in this case, the LHS is also 16 bits, so the size of the RHS is unaffected.
We now know that the overall expression size is 16 bits; this size is propagated back down to the operands, except the 'self-determined' operands. The only self-determined operand here is the RHS of the shift expression (regC), which isn't extended.
The signedness of the expressions is now determined. Propagation happens in the same way. The overall effect here, since we have at least one unsigned operand, is that the expression is unsigned, and all operands are coerced to unsigned. So, all (non-self-determined) operands are coerced to unsigned 16-bit before any operation is actually carried out.
So, in other words, the shift sub-expression actually ends up as a 16-bit shift, even though it appears to be 8-bit at first sight. Note that it's not 16-bit because the RHS of the & is 16-bit, but because the entire sizing process - the width propagation up the expression - came up with an answer of 16. If you'd assigned to an 18-bit reg, instead of the 16-bit regF, then your shift would have been extended to 18 bits.
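A quick way to see this context-dependent sizing for yourself is a minimal, illustrative sketch like the following (the signal names are mine, not from the question):

module width_demo;
  reg [7:0]  d;
  reg [15:0] wide;
  reg [7:0]  narrow;
  initial begin
    d      = 8'hFF;
    wide   = d << 4;   // RHS sized to 16 bits by the assignment context: 16'h0FF0
    narrow = d << 4;   // RHS stays 8 bits: 8'hF0, the shifted-out bits are lost
    $display("wide = %h, narrow = %h", wide, narrow);
  end
endmodule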
This is all very complicated and non-intuitive, at least if you have any experience of mainstream languages. It's explained (more or less) in sections 5.4 and 5.5 of the 2005 LRM. If you want any advice, then never write expressions like this. Write defensively - break everything down to individual sub-expressions, and then combine the sub-expressions.
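For example, the original assignment could be broken down along these lines (a sketch only: the intermediate names are mine, and I'm assuming regF is updated in a clocked always block). Each sub-expression now has an explicit 16-bit width, so nothing relies on context-determined sizing:

wire        mask_bit = regE[regC];               // 1-bit select
wire [15:0] shifted  = {8'b0, regD} << regC;     // shift performed explicitly at 16 bits
wire [15:0] masked   = shifted & {16{mask_bit}}; // 16-bit mask
always @(posedge clk)
  regF <= regF + masked;                         // plain 16-bit add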
My Verilog testbench code defines a module with these parameters:
parameter PHASE_BITS = 32;
parameter real MAX_PHASE = 1 << PHASE_BITS;
I cannot get MAX_PHASE to have the expected value 4294967296 (or an approximation of it); ModelSim shows me 0 instead, despite the fact that MAX_PHASE is declared real.
I guess there's some integer overflow involved, because it works fine if PHASE_BITS is lowered to 31.
How do I make this parameter be equal to 2 to the power of another parameter?
The problem lies in the right-hand expression itself:
1 << PHASE_BITS
It is evaluated before considering the type of the variable it is stored into. Because 1 is an integer literal and integers in Verilog are signed 32-bit values, the << (left shift) operator produces an integer of the same type, and it overflows if PHASE_BITS is higher than 31.
We could force 1 to be a real literal instead:
1.0 << PHASE_BITS
But this causes a compile time error, as << is not defined for real values.
Let's use a plain 2-to-the-power-of-N instead:
2.0 ** PHASE_BITS
This will yield the desired result, 4.29497e+09.
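Put back into the declarations from the question, the fix might look like this (untested sketch):

parameter PHASE_BITS = 32;
parameter real MAX_PHASE = 2.0 ** PHASE_BITS;  // evaluated in real arithmetic: 4294967296.0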
How many bits does $realtime return in Verilog and Systemverilog?
$realtime does not return bits; it returns a double-precision floating-point number, which requires 1 bit for the sign, 11 bits for the exponent, and 52 bits for the mantissa. You cannot access individual bits of a real number, so the total number of bits is irrelevant.
From the Sutherland HDL quick reference (page 40 in the document, 44 in your PDF viewer):
http://www.sutherland-hdl.com/pdfs/verilog_2001_ref_guide.pdf
$time
$stime
$realtime
Returns the current simulation time as a 64-bit vector, a 32-bit integer or a real number, respectively.
The value returned depends on your timescale; i.e. if the timescale is 1ns/1ps and the simulation has run for 1us, $realtime will return 1000.0.
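As a small illustration (an untested sketch, not taken from the reference guide), with a 1ns/1ps timescale $realtime reports time in nanoseconds with picosecond resolution:

`timescale 1ns/1ps
module tb;
  initial begin
    #1000.5;                              // advance 1000.5 ns
    $display("realtime = %f", $realtime); // prints 1000.500000
  end
endmodule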
I am studying the Verilog language and have run into a problem.
integer intA;
...
intA = - 4'd12 / 3; // expression result is 1431655761.
// -4'd12 is effectively a 32-bit reg data type
This snippet is from the standard, and it blew our minds. The standard says that 4'd12 is the 4-bit number 1100.
Then -4'd12 = 0100. Okay so far.
To perform the division, the numbers need to be brought to the same size, from 4 bits to 32 bits. Since -4'd12 is unsigned, it should become 32'b0000...0100, but it is actually equal to 32'b1111...10100. Not OK, but on to the next step.
My version of the division: -4'd12 / 3 = 32'b0000...0100 / 32'b0000...0011 = 1
Standard version: - 4'd12 / 3 = 1431655761
Can anyone tell me why? Why does the 4-bit number keep the extra bits?
You need to read section 11.8.2 Steps for evaluating an expression of the 1800-2012 LRM. The key piece you are missing is that the operand is 4'd12 and that it is sized to 32 bits as an unsigned value before the unary - operator is applied.
If you want the 4-bit value 1100 treated as the signed value -4, then you need to write
intA = - 4'sd12 / 3; // result is 1
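A small sketch (untested) that shows the two evaluations side by side:

module t;
  integer intA;
  initial begin
    intA = - 4'd12 / 3;   // unsigned operand: (2**32 - 12) / 3 = 1431655761
    $display("%0d", intA);
    intA = - 4'sd12 / 3;  // signed operand: -(-4) / 3 = 1
    $display("%0d", intA);
  end
endmodule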
Here the parser interprets -'d12 as a 32-bit number which is unsigned to begin with; the minus sign then negates it modulo 2^32 (two's complement). So the result is
-(32'd12) = -(28 zeros + 1100) = 28 ones + 0100 =
11111111111111111111111111110100, which is 4294967284. If you divide this number (4294967284) by 3 using integer division, the answer is 1,431,655,761.
keep smiling :)
The Honeywell DPS8 computer (and others) have/had a "divide fractional" instruction:
"This instruction divides a 71-bit fractional dividend (including sign) by a 36-bit
fractional divisor (including sign) to form a 36-bit fractional quotient (including
sign) and a 36-bit fractional remainder (including sign). Bit 35 of the remainder
corresponds to bit 70 of the dividend. The remainder sign is equal to the dividend
sign unless the remainder is zero."
So, as I understand it, this is integer division with the decimal point way over on the left.
.qqqqq / .ddddd
(I did scaled integer math in FORTH back in the day, but my memories of the techniques are lost in fog of time.)
To implement this instruction in a DPS8 emulator, I believe I need to start by creating two 70-bit numbers: the 71-bit dividend less its sign bit, and the 36-bit divisor less its sign bit, shifted 35 bits to the left so that the decimal points line up.
I think I can then form the remainder and quotient (in C) with '%' and '/', but I am unsure if those results need to be normalized (i.e. shifted).
I found an example of a "shift and subtract" algorithm ("Computer Arithmetic", slide 10), but I would prefer a more straightforward implementation.
Am I on the right track, or is the solution more nuanced? (Fixing up the signs and detecting errors have been elided here; those stages are well documented. The actual division is the issue.) Any pointers to C implementations of this kind of hardware emulation would be particularly helpful.
I do not have the definitive answer, but as a division is a division, you might find it helpful to look at some basic division routines.
Imagine that you have a 32-bit variable and you want an 8-bit fractional part.
You then have an integer part between 0 and 16777215, and a fractional part which is between 0 and 255.
0xiiiiiiff (where i is the integer part, f is the fractional part).
Imagine you have a 24-bit dividend (numerator), say the value 3, and a 24-bit divisor (denominator), say the value 13.
As we quickly will see, 3/13 is greater than zero and less than one. That means our fractional part is nonzero, but our integer part is filled completely with zeros.
So to do the above division using a standard divide function, we'll just bit-shift the dividend by N, thus we will get N bits of precision in our fractional part.
quotient_fp = (dividend_ip << 8) / divisor_ip
So far, so good.
But what if we want the divisor to have a fractional part, then ?
If we just shift the divisor up by 8, then we'll have a problem:
(dividend_ip << 8) / (divisor_ip << 8)
- because we'll obviously lose our fractional part of the quotient (result).
Instead, we'll need to shift the dividend up by as many additional bits as we shift the divisor up...
((dividend_ip << 8) << 8) / (divisor_ip << 8)
...That makes it...
(dividend_ip << (dividend_precision + divisor_precision)) / (divisor_ip << divisor_precision)
Now, let's put our fractional part math into the picture...
(((dividend_ip << dividend_precision) | dividend_fp) << divisor_precision) / ((divisor_ip << divisor_precision) | divisor_fp)
Our quotient's precision will be the same as dividend_precision, which is 8 bits.
Unfortunately, this eats a lot of bits.
Fortunately, in your case, the integer part is not important, so you'll have a lot of room for the fractional part.
Let's increase the precision to 15 bits; this can be tested using normal 32-bit integers...
(((dividend_ip << 15) | dividend_fp) << 15) / ((divisor_ip << 15) | divisor_fp)
Our quotient will now have a 15-bit precision.
OK, but since you're supplying only the fractional parts and the integer part is always zero anyway, you should be able to just toss the integer part. That makes it....
(((dividend_ip << 16) | dividend_fp) << 16) / ((divisor_ip << 16) | divisor_fp)
... reduced to ...
(dividend_fp << 16) / divisor_fp
... now let's use a 64-bit integer instead, we can get 32 bits of precision in the quotient...
(dividend_fp << 32) / divisor_fp
... some compilers have support for an int128_t (it can be enabled on some platforms for GCC), so you might be able to use that type in order to get 128 bits easily. I have not tried it, but I've come across info on the Web before; search for int128_t, and you might find out how.
If you get the int128_t to work, you could make the dividend 128 bit, the divisor 64 bit and the quotient 64 bit...
quotient_fp = ((dividend_fp << 36) / divisor) >> (64 - 36)
... in order to get 36 bits precision.
Notice that since the result is in the top 36 bits of the quotient, the quotient needs to be shifted down (64 - 36) = 28 bits.
You could even go as high as (128 - 36) = 92 bits precision:
(dividend_fp << 92) / divisor
Now that you probably (hopefully) have a solution, I would like to recommend that you get familiar with low-level binary division (again, since you've been there a while ago).
The best sources seem to be how hardware divides binary numbers; such as microcontrollers, CPUs and the like. Assembly language dividers are also good for getting to know the inner workings. Often 32-bit divide routines that use bit-shifting are very good sources.
Over time, I've come across a very clever implementation for ARM in ARM assembly language. Normally I wouldn't post references or assembly language examples, but considering that the code is very small, I think it will be alright.
Taken from A Fast Hi Precision Fixed Point Divide
r0 is the numerator (dividend)
r2 is the denominator (divisor)
mov r1,#0
adds r0,r0,r0
.rept 32
adcs r1,r2,r1,lsl#1
subcc r1,r1,r2
adcs r0,r0,r0
.endr
r0 is the quotient (result)
r1 is the remainder (rest, modulo result)
The above routine contains the basics for an unsigned divide.
I hope this information will be useful. It may contain errors, as I have not tested any code or example mentioned. I'm confident, though, that it's not all wrong. ;)
Going through Elisabeth Hendrickson's test heuristics cheat sheet, I see the following recommendations:
Numbers: 32768 (2^15), 32769 (2^15 + 1), 65536 (2^16), 65537 (2^16 + 1), 2147483648 (2^31), 2147483649 (2^31 + 1), 4294967296 (2^32), 4294967297 (2^32 + 1)
Does someone know the reason for testing all these cases? My gut feeling is that it relates to the data type the developer may have used (integer, long, double...).
Similarly, with strings:
Long (255, 256, 257, 1000, 1024, 2000, 2048 or more characters)
These represent boundaries
Integers
2^15 is at the bounds of signed 16-bit integers
2^16 is at the bounds of unsigned 16-bit integers
2^31 is at the bounds of signed 32-bit integers
2^32 is at the bounds of unsigned 32-bit integers
Testing for values close to common boundaries tests whether overflow is correctly handled (either arithmetic overflow in the case of various integer types, or buffer overflow in the case of long strings that might potentially overflow a buffer).
Strings
255/256 is at the bounds of numbers that can be represented in 8 bits
1024 is at the bounds of numbers that can be represented in 10 bits
2048 is at the bounds of numbers that can be represented in 11 bits
I suspect that the recommendations such as 255, 256, 1000, 1024, 2000, 2048 are based on experience/observation that some developers may allocate a fixed-size buffer that they feel is "big enough no matter what" and fail to check input. That attitude leads to buffer overflow attacks.
These are boundary values close to maximum signed short, maximum unsigned short and same for int. The reason to test them is to find bugs that occur close to the border values of typical data types.
E.g. your code uses signed short and you have a test that exercises something just below and just above the maximum value of such type. If the first test passes and the second one fails, you can easily tell that overflow/truncation on short was the reason.
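As a toy illustration of the failure mode such a test is looking for (sketched in Verilog, the language used elsewhere on this page, but the same wrap-around happens with a 16-bit integer type in any language):

module boundary_demo;
  reg signed [15:0] count;
  initial begin
    count = 16'sd32767;     // maximum value a signed 16-bit variable can hold
    count = count + 1;      // wraps around instead of reaching 32768
    $display("%0d", count); // prints -32768
  end
endmodule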
Those numbers are border cases on either side of the fence (+1, 0, and -1) for "whole and round" computer numbers, which are always powers of 2. Those powers of 2 are also not random: they represent standard choices for integer precision, being 8, 16, 32, and so on bits wide.