Basic questions about Verilog

Why does adding the concatenation operator yield a positive value from 0 to 59?
reg [23:0] rand;
rand = {$random} % 60;
Why does the concatenation operator {} make such a difference?

from IEEE-1364
17.9.1 $random function. The system function $random provides a mechanism for generating random numbers. The function returns a new 32-bit random number each time it is called. The random number is a signed integer; it can be positive or negative.
The result of a concatenation {} is always unsigned. This is not spelled out explicitly in the standard, but even the example (from the standard) that you cited implies it.
There is no difference between signed and unsigned numbers in their bit representations. Only certain operations care about the signedness of their operands, including %.
A 32-bit signed integer can represent values between -2,147,483,648 and 2,147,483,647, therefore the results of $random % 60 will be between -59 and 59.
A 32-bit unsigned integer can represent values between 0 and 4,294,967,295, therefore the result of {$random} % 60 will be between 0 and 59.
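The same signed-versus-unsigned behavior of % can be demonstrated in C, where a cast reinterprets the same 32 bits (a minimal sketch for illustration, not Verilog semantics; the value of s is arbitrary):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    int32_t  s = -1234567890;   /* stand-in for a negative $random value */
    uint32_t u = (uint32_t)s;   /* same 32 bits, viewed as unsigned */

    printf("%d\n", s % 60);     /* signed %: result in -59..59 (here -30) */
    printf("%u\n", u % 60);     /* unsigned %: result in 0..59 (here 46) */
    return 0;
}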

Float to Int type conversion in Python for large integers/numbers

I need some help with the piece of code below. Why is the original number in "a" different from "c" after the type conversion? Is there any way to make "a" and "c" the same through the float -> int type conversion?
a = '46700000000987654321'
b = float(a) => 4.670000000098765e+19
c = int(b) => 46700000000987652096
a == c => False
Please read this document about Floating Point Arithmetic: Issues and Limitations:
https://docs.python.org/3/tutorial/floatingpoint.html
For your example:
from decimal import Decimal
a = '46700000000987654321'
b = Decimal(a)
print(b)  # 46700000000987654321
c = int(b)
print(c)  # 46700000000987654321
Modified version of my answer to another question (reasonably) duped to this one:
This happens because 46700000000987654321 exceeds the integer representational limits of a C double (which is what a Python float is implemented in terms of).
Typically, C doubles are IEEE 754 64-bit binary floating point values, which means they have 53 bits of integer precision (the last consecutive integer values float can represent are 2 ** 53 - 1 followed by 2 ** 53; it can't represent 2 ** 53 + 1). Problem is, 46700000000987654321 requires 66 bits of integer precision to store ((46700000000987654321).bit_length() will provide this information).
When a value is too large for the significand (the integer component) alone, the exponent component of the floating point value is used to scale a smaller integer value by powers of 2 to be roughly in the ballpark of the original value, but this means that the representable integers start to skip, first by 2 (for >53 bits), then by 4 (for >54 bits), then 8 (for >55 bits), then 16 (for >56 bits), etc., skipping twice as far between representable values for each additional bit of magnitude you have beyond 53 bits.
In your case, 46700000000987654321, converted to float, has an integer value of 46700000000987652096 (as you noted), having lost precision in the low digits.
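The same rounding can be reproduced directly in C, since a Python float is a C double underneath (a minimal sketch; the compiler rounds the decimal literal to the nearest representable double):

#include <stdio.h>
#include <float.h>

int main(void) {
    double d = 46700000000987654321.0;            /* rounded by the compiler */
    printf("DBL_MANT_DIG = %d\n", DBL_MANT_DIG);  /* 53 significand bits */
    printf("%.0f\n", d);                          /* 46700000000987652096 */
    return 0;
}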
If you need arbitrarily precise base-10 floating point math, replace your use of float with decimal.Decimal (conveniently, your initial value is already a string, so you don't risk loss of precision between how you type a float and the actual value stored); the default precision will handle these values, and you can increase it if you need larger values. If you do that (and convert a to an int for the comparison, since a str is never equal to any numeric type), you get the behavior you expected:
from decimal import Decimal as Dec, getcontext
a = "46700000000987654321"
b = Dec(a); print(b) # => 46700000000987654321
c = int(b); print(c) # => 46700000000987654321
print(int(a) == c) # => True
If you echo the Decimals in an interactive interpreter instead of using print, you'd see Decimal('46700000000987654321') instead, which is the repr form of Decimals, but it's numerically 46700000000987654321, and if converted to int, or stringified via any method that doesn't use the repr, e.g. print, it displays as just 46700000000987654321.

In PBKDF2 is INT (i) signed?

Page 11 of RFC 2898 states that for U_1 = PRF (P, S || INT (i)), INT (i) is a four-octet encoding of the integer i, most significant octet first.
Does that mean that i is a signed value and if so what happens on overflow?
Nothing says that it would be signed. The fact that dkLen is capped at (2^32 - 1) * hLen suggests that it's an unsigned integer, and that it cannot roll over from 0xFFFFFFFF (2^32 - 1) to 0x00000000.
Of course, PBKDF2(MD5) wouldn't hit 2^31 until you've asked for 34,359,738,368 bytes. That's an awful lot of bytes.
SHA-1: 42,949,672,960
SHA-2-256 / SHA-3-256: 68,719,476,736
SHA-2-384 / SHA-3-384: 103,079,215,104
SHA-2-512 / SHA-3-512: 137,438,953,472
Since the .NET implementation (in Rfc2898DeriveBytes) is an iterative stream it could be polled for 32GB via a (long) series of calls. Most platforms expose PBKDF2 as a one-shot, so you'd need to give them a memory range of 32GB (or more) to identify if they had an error that far out. So even if most platforms get the sign bit wrong... it doesn't really matter.
PBKDF2 is a KDF (key derivation function), so used for deriving keys. AES-256 is 32 bytes, or 48 if you use the same PBKDF2 to generate an IV (which you really shouldn't). Generating a private key for the ECC curve with a 34,093 digit prime is (if I did my math right) 14,157 bytes. Well below the 32GB mark.
i ranges from 1 to l = CEIL (dkLen / hLen), and dkLen and hLen are positive integers. Therefore, i is strictly positive.
You can, however, store i in a signed 32-bit integer type without any special handling. If i rolls over (increments from 0x7FFFFFFF to 0x80000000), it will continue to be encoded correctly, and continue to increment correctly. With two's complement encoding, the bitwise results of addition, subtraction, and multiplication are the same whether the values are treated as signed or unsigned.
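Since INT (i) just extracts the four octets, a C implementation behaves identically whether i lives in a signed or an unsigned 32-bit variable; here is a minimal sketch (the helper name int_be is illustrative, not from the RFC):

#include <stdio.h>
#include <stdint.h>

/* INT(i): four-octet big-endian encoding of i, per RFC 2898. */
static void int_be(uint32_t i, unsigned char out[4]) {
    out[0] = (unsigned char)(i >> 24);
    out[1] = (unsigned char)(i >> 16);
    out[2] = (unsigned char)(i >> 8);
    out[3] = (unsigned char)(i);
}

int main(void) {
    unsigned char buf[4];
    int_be(0x80000000u, buf);  /* one past the signed 32-bit maximum */
    printf("%02x %02x %02x %02x\n", buf[0], buf[1], buf[2], buf[3]);  /* 80 00 00 00 */
    return 0;
}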

How many bits does $realtime return in Verilog and SystemVerilog?

$realtime does not return bits; it returns a double precision floating point number, which requires 1 bit for the sign, 11 bits for the exponent, and 52 bits for the mantissa. You cannot directly access individual bits of a real number, so the total number of bits is irrelevant.
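For reference, the 1/11/52 layout can be inspected in C by copying a double's bits into a 64-bit integer (a minimal sketch, unrelated to any Verilog API):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    double t = 1000.0;               /* e.g. a value $realtime might yield */
    uint64_t bits;
    memcpy(&bits, &t, sizeof bits);  /* view the IEEE 754 bit layout */
    printf("sign=%llu exponent=%llu mantissa=0x%llx\n",
           (unsigned long long)(bits >> 63),                 /*  1 bit  */
           (unsigned long long)((bits >> 52) & 0x7FF),       /* 11 bits */
           (unsigned long long)(bits & 0xFFFFFFFFFFFFFULL)); /* 52 bits */
    return 0;
}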
From the Sutherland HDL quick reference, page 40 in the document (page 44 in your PDF viewer):
http://www.sutherland-hdl.com/pdfs/verilog_2001_ref_guide.pdf
$time
$stime
$realtime
Returns the current simulation time as a 64-bit vector, a
32-bit integer or a real number, respectively.
The value returned will depend on your timescale, i.e. if the timescale is 1ns/1ps and you have run for 1us, $realtime will return 1000.0.

Verilog operation unexpected result

I am studying the Verilog language and ran into a problem.
integer intA;
...
intA = - 4'd12 / 3; // expression result is 1431655761.
// -4'd12 is effectively a 32-bit reg data type
This snippet is from the standard, and it blew our minds. The standard says that 4'd12 is the 4-bit number 1100.
Then -4'd12 = 0100. That's fine so far.
To perform the division, we need to extend the numbers to a common size, from 4 to 32 bits. The number -4'd12 is unsigned, so it should be equal to 32'b0000...0100, but it is equal to 32'b1111...10100. Not OK, but on to the next step.
My version of the division: -4'd12 / 3 = 32'b0000...0100 / 32'b0000...0011 = 1
The standard's version: - 4'd12 / 3 = 1431655761
Can anyone tell me why? Why does the 4-bit number keep the extra bits?
You need to read section 11.8.2 Steps for evaluating an expression of the 1800-2012 LRM. The key piece you are missing is that the operand is 4'd12, and that it is sized to 32 bits as an unsigned value before the unary - operator is applied.
If you want the 4-bit value treated as a signed -4, then you need to write
intA = - 4'sd12 / 3; // result is 1
Here the parser interprets - 'd12 as a 32-bit number that is unsigned initially; the unary minus then takes its two's complement (invert the bits and add 1). So the result is
-('d12) = -(28 zeros + 1100) = 28 ones + 0100 =
11111111111111111111111111110100, which is 4294967284 as an unsigned value. If you divide this number (4294967284) by 3, the answer is 1,431,655,761 (with remainder 1).
keep smiling :)
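The same promote-then-wrap arithmetic can be reproduced in C, whose unsigned operations also wrap modulo 2^32 (a minimal sketch, not Verilog itself):

#include <stdio.h>

int main(void) {
    unsigned int u = 12;      /* like 4'd12: zero-extended, unsigned */
    printf("%u\n", -u / 3);   /* -u wraps to 4294967284; prints 1431655761 */
    printf("%d\n", -12 / 3);  /* ordinary signed division; prints -4 */
    return 0;
}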

fixed point integer division ("fractional division") algorithm

The Honeywell DPS8 computer (and others) have/had a "divide fractional" instruction:
"This instruction divides a 71-bit fractional dividend (including sign) by a 36-bit
fractional divisor (including sign) to form a 36-bit fractional quotient (including
sign) and a 36-bit fractional remainder (including sign). Bit 35 of the remainder
corresponds to bit 70 of the dividend. The remainder sign is equal to the dividend
sign unless the remainder is zero."
So, as I understand it, this is integer division with the decimal point way over on the left.
.qqqqq / .ddddd
(I did scaled integer math in FORTH back in the day, but my memories of the techniques are lost in fog of time.)
To implement this instruction in a DPS8 emulator, I believe I need to start by creating two 70-bit numbers: the 71-bit dividend less its sign bit, and the 36-bit divisor less its sign bit, shifted 35 bits to the left so that the decimal points line up.
I think I can then form the remainder and quotient (in C) with '%' and '/', but I am unsure if those results need to be normalized (i.e. shifted).
I found an example of a "shift and subtract" algorithm ("Computer Arithmetic", slide 10), but I would prefer a more straightforward implementation.
Am I on the right track, or is the solution more nuanced? (Fixing up the signs and detecting errors have been elided here; those stages are well documented. The actual division is the issue.) Any pointers to C implementations of this kind of hardware emulation would be particularly helpful.
I do not have the definitive answer, but as a division is a division, you might find it helpful to look at some basic division routines.
Imagine that you have a 32-bit variable and you want an 8-bit fractional part.
You then have an integer part between 0 and 16777215, and a fractional part which is between 0 and 255.
0xiiiiiiff (where i is the integer part, f is the fractional part).
Imagine you have a 24-bit dividend (numerator), say the value 3, and a 24-bit divisor (denominator), say the value 13.
As we quickly will see, 3/13 is greater than zero and less than one. That means our fractional part is nonzero, but our integer part is filled completely with zeros.
So to do the above division using a standard divide function, we'll just bit-shift the dividend by N, thus we will get N bits of precision in our fractional part.
quotient_fp = (dividend_ip << 8) / divisor_ip
So far, so good.
But what if we want the divisor to have a fractional part, then ?
If we just shift the divisor up by 8, then we'll have a problem:
(dividend_ip << 8) / (divisor_ip << 8)
- because we'll obviously lose our fractional part of the quotient (result).
Instead, we'll need to shift the dividend up by as many bits as we shift the fractional part up...
((dividend_ip << 8) << 8) / (divisor_ip << 8)
...That makes it...
(dividend_ip << (dividend_precision + divisor_precision)) / (divisor_ip << divisor_precision)
Now, let's put our fractional part math into the picture...
(((dividend_ip << dividend_precision) | dividend_fp) << divisor_precision) / ((divisor_ip << divisor_precision) | divisor_fp)
Our quotient's precision will be the same as dividend_precision, which is 8 bits.
Unfortunately, this eats a lot of bits.
Fortunately, in your case, the integer part is not important, so you'll have a lot of room for the fractional part.
Let's increase the precision to 15 bits; this can be tested using normal 32-bit integers...
(((dividend_ip << 15) | dividend_fp) << 15) / ((divisor_ip << 15) | divisor_fp)
Our quotient will now have a 15-bit precision.
OK, but since you're supplying only the fractional parts and the integer part is always zero anyway, you should be able to just toss the integer part (let's also round the precision up to 16 bits). That makes it....
(((dividend_ip << 16) | dividend_fp) << 16) / ((divisor_ip << 16) | divisor_fp)
... reduced to ...
(dividend_fp << 16) / divisor_fp
... now, if we use a 64-bit integer instead, we can get 32 bits of precision in the quotient...
(dividend_fp << 32) / divisor_fp
... some compilers have support for an int128_t (it can be enabled on some platforms for GCC), so you might be able to use that type in order to get 128 bits easily. I have not tried it, but I've come across info on the Web earlier; search for int128_t, and you might find out how.
If you get the int128_t to work, you could make the dividend 128 bit, the divisor 64 bit and the quotient 64 bit...
quotient_fp = ((dividend_fp << 36) / divisor) >> (64 - 36)
... in order to get 36 bits precision.
Notice that since the result is in the top 36 bits of the quotient, the quotient needs to be shifted down (64 - 36) = 28 bits.
You could even go as high as (128 - 36) = 92 bits precision:
(dividend_fp << 92) / divisor
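Putting the shifting idea into code: a minimal C sketch assuming GCC/Clang's unsigned __int128 is available; FRAC_BITS and div_frac are illustrative names, and the 35-bit scaling only loosely mirrors the DPS8's fraction width, not the machine's exact semantics:

#include <stdio.h>
#include <stdint.h>

#define FRAC_BITS 35

/* dividend_fp and divisor_fp are fractions scaled by 2^FRAC_BITS. */
static uint64_t div_frac(uint64_t dividend_fp, uint64_t divisor_fp)
{
    /* Widen first: the shifted dividend needs more than 64 bits. */
    unsigned __int128 scaled = (unsigned __int128)dividend_fp << FRAC_BITS;
    return (uint64_t)(scaled / divisor_fp);
}

int main(void)
{
    /* 0.25 / 0.5 = 0.5: at 35 fraction bits, 0.25 is 1<<33 and 0.5 is 1<<34. */
    uint64_t q = div_frac((uint64_t)1 << 33, (uint64_t)1 << 34);
    printf("got 0x%llx, expected 0x%llx\n",
           (unsigned long long)q, (unsigned long long)((uint64_t)1 << 34));
    return 0;
}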
Now that you probably (hopefully) have a solution, I would like to recommend that you get familiar with low-level binary divide (again; since you've been there a while ago).
The best sources seem to be how hardware divides binary numbers; such as microcontrollers, CPUs and the like. Assembly language dividers are also good for getting to know the inner workings. Often 32-bit divide routines that use bit-shifting are very good sources.
Over time, I've come across a very clever implementation for ARM, in ARM assembly language. Normally I wouldn't post references or assembly language examples, but considering that the code is very small, I think it will be alright.
Taken from A Fast Hi Precision Fixed Point Divide
r0 is the numerator (dividend)
r2 is the denominator (divisor)
mov r1,#0
adds r0,r0,r0
.rept 32
adcs r1,r2,r1,lsl#1
subcc r1,r1,r2
adcs r0,r0,r0
.endr
r0 is the quotient (result)
r1 is the remainder (rest, modulo result)
The above routine contains the basics for an unsigned divide.
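For comparison, here is the same shift-and-subtract idea as a plain C restoring divider (a sketch; udiv32 is an illustrative name, producing one quotient bit per iteration):

#include <stdio.h>
#include <stdint.h>

static void udiv32(uint32_t n, uint32_t d, uint32_t *q, uint32_t *r)
{
    uint32_t rem = 0, quo = 0;
    for (int i = 31; i >= 0; i--) {
        rem = (rem << 1) | ((n >> i) & 1);  /* bring down the next dividend bit */
        if (rem >= d) {                     /* trial subtract: does the divisor fit? */
            rem -= d;
            quo |= (uint32_t)1 << i;        /* record a quotient bit */
        }
    }
    *q = quo;
    *r = rem;
}

int main(void)
{
    uint32_t q, r;
    udiv32(100, 7, &q, &r);
    printf("%u r %u\n", q, r);  /* 14 r 2 */
    return 0;
}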
I hope this information will be useful. It may contain errors, as I have not tested any code or example mentioned. I'm confident, though, that it's not all wrong. ;)
