Computer Architecture + Verilog

I am building a divider circuit in Verilog using the non-restoring division algorithm.
I am having trouble representing the remainder as a fractional binary number.
For example, if I compute 0111/0011 (7/3), I get the quotient 0010 and remainder 0001, which is correct, but I want to represent the result as 0010.0101.
Can someone help?

Suppose, as in your example, you are dividing 4 bit numbers, but want an extra 4 bits of fractional precision in the result.
One approach is to simply multiply the numerator by 2^4 before doing the division.
i.e.
instead of
0111/0011 = 0010 (+remainder)
do
01110000/0011 = 00100101 (+remainder)
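The same scaling trick can be sketched in Python (the function name and constants here are illustrative, not from the original question):

```python
# Shift the numerator left by the number of fractional bits you want,
# then do an ordinary integer division.
FRAC_BITS = 4

def fixed_point_divide(num, den, frac_bits=FRAC_BITS):
    """Integer division whose result carries frac_bits fractional bits."""
    return (num << frac_bits) // den

q = fixed_point_divide(0b0111, 0b0011)   # 7 / 3 with 4 fractional bits
print(format(q, '08b'))                  # 00100101, i.e. 0010.0101
```

The quotient's low 4 bits are the fractional part; the remainder of the scaled division is the residue below the last fractional bit.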

Hi, just do the mathematics!
You have already got Q (the quotient) and R (the remainder). Now multiply the remainder by the base, 10 in decimal (1010 in binary), and divide again; each division produces the next fractional digit.
For example, 7/3 gives Q = 2 and R = 1. Multiply that 1 by 10 to get 10, then apply your division logic again: 10/3 gives Q = 3, so your answer is
2 (Q of the first division) . 3 (Q of the second division), i.e. 2.3, and you can repeat for as many digits as you need. (Multiply the remainder by 2 instead of 10 if you want binary fractional digits.)
Try it, it works, and it is very easy to implement in Verilog.
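The digit-by-digit idea above can be sketched in Python, generalized to any base (the function name is made up for illustration):

```python
def fractional_digits(num, den, base=10, digits=4):
    """Generate fractional digits of num/den by repeatedly scaling the
    remainder by the base and dividing again, as described above."""
    q, r = divmod(num, den)
    frac = []
    for _ in range(digits):
        r *= base                 # scale the remainder by the base
        d, r = divmod(r, den)     # next digit and the new remainder
        frac.append(d)
    return q, frac

print(fractional_digits(7, 3))           # (2, [3, 3, 3, 3]) -> 2.3333
print(fractional_digits(7, 3, base=2))   # (2, [0, 1, 0, 1]) -> 10.0101 binary
```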

Related

What is the bitwise negation for an integer?

I have a homework assignment with a part that computes the bitwise negation of an integer value. It says 512 goes to -513.
I have a solution that does x = 512, y = 512*(-1) + (-1).
Is that the correct way?
To negate a number in two's complement you complement it and add 1:
-x = ~x + 1
and consequently
~x = -x - 1
This property is based on the way negative numbers are represented in two's complement. To represent a negative number A on n bits, one uses the complement of |A| with respect to 2^n, i.e. the number 2^n - |A|.
It is easy to see that A + ~A = 111...11, as the paired bits in the addition are always a 0 and a 1, and 111...11 is the number just before 2^n, i.e. 2^n - 1.
As -|A| is coded by 2^n - |A|, and A + ~A = 2^n - 1, we can say that -A = ~A + 1, or equivalently ~A = -A - 1.
This is true for any number, positive or negative. And ~512 = -512 - 1 = -513.
val = 512
print (~val)
output:
-513
~ is the bitwise complement.
It sets the 1 bits to 0 and the 0 bits to 1.
For example, ~2 results in -3.
Using 8 bits, 2 is 0000 0010 in two's complement. Complementing every bit gives 1111 1101.
Read as a two's-complement number, 1111 1101 is -3: to check, negate it (complement and add 1) and you get 0000 0010 + 1 = 0000 0011, which is 3.
So ~2 is -3.
Equivalently:
y = -(512+1)
print (y)
output:
-513
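A quick Python sanity check of the identity ~x = -x - 1 across positive and negative values:

```python
# The identity holds for every integer in two's complement
# (and Python's ~ behaves as if integers had infinite sign extension).
for x in (-513, -1, 0, 1, 2, 512, 2**31):
    assert ~x == -x - 1

print(~512)   # -513
```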

Maximum bit-width to store a summation of M n-bit binary numbers

I am trying to find the formula to calculate the maximum bit-width required to contain a sum of M n-bit unsigned binary numbers. Thanks!
The maximum bit-width needed should be ceil(log_2(M * (2^n - 1))).
Edit: Thanks to @MBurnham I realize now that it should be floor(log_2(M * (2^n - 1))) + 1 instead.
Assuming positive integers, you need floor(log2(x)) + 1 bits to store x, and the largest value the sum of m n-bit numbers can produce is m * (2^n - 1).
So I believe the formula should be
floor(log2(m * (2^n - 1))) + 1
bits.
If I add 2 numbers, then I need 1 bit more than the wider of the 2 numbers to store the result. So, if I add 2 n-bit numbers, I need n+1 bits to store the result.
if I add another n-bit number, I need (n+1)+1 bits to store the result (that's 3 n-bit numbers added so far)
if I add another n-bit number, I need ((n+1)+1)+1 bits to store the result (that's 4 n-bit numbers added so far)
if I add another n-bit number, I need (((n+1)+1)+1)+1 bits to store the result (that's 5 n-bit numbers added so far)
So, by this argument the formula is
n + M - 1
which is always sufficient, but it overestimates: each extra bit doubles the representable range, so n + ceil(log2(M)) bits are actually enough.
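A brute-force Python check of the closed-form formula from the first answer (the helper name is illustrative):

```python
from math import floor, log2

def width_for_sum(M, n):
    """Bit-width needed for the sum of M n-bit unsigned numbers."""
    max_sum = M * (2**n - 1)        # M copies of the largest n-bit value
    return floor(log2(max_sum)) + 1

# Check that the formula matches the actual bit length of the maximum sum.
for n in range(1, 9):
    for M in range(1, 33):
        assert width_for_sum(M, n) == (M * (2**n - 1)).bit_length()

print(width_for_sum(4, 8))   # 10: four 8-bit numbers sum to at most 1020
```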

Verilog code to compute cosx using Taylor series approximation

I'm trying to implement COS X function in Verilog using Taylor series. The problem statement presented to me is as below
"Write a Verilog code to compute cosX using Taylor series approximation. Please attach the source and test bench code of the 8-bit outputs in signed decimal radix format for X = 0° to 360° at the increment of 10° "
I need to understand a couple of things before I proceed. Please correct me if I am wrong somewhere.
Resolution calculation: 10° increments covering 0° to 360° means 36 positions.
36 needs 6 bits, and since 6 bits can address 64 words, the resolution can be slightly better by using all 64 words. The 64 words represent 0° to 360°, so each word represents 360°/64 = 5.625°, i.e. all values of cos from 0° to 360° in increments of 5.625°. Thus the resolution is 5.625°.
Taylor series calculation
Taylor series for cos is given by Cos x approximation by Taylor series
COS X = 1 − (X^2/2!) + (X^4/4!) − (X^6/6!) ..... (using only 3~4 terms)
I have a couple of queries
1) While it is easy to generate the X*X (X squared) or X-cubed terms using a multiplier, I am not sure how to deal with the extra bits generated during the calculation of the X-squared or X-cubed terms, since the output is 8 bits only.
e.g. X = 6 bits; X squared = 12 bits; X cubed = 18 bits.
Do I generate them anyway and later discard them by keeping just the 8 MSBs of the entire result? ... such a cos wave would suck, right?
2) I am not sure how to handle the +1 addition at the start of the Taylor series, COS X = 1 − (X^2/2!) + (X^4/4!) .... Do I add binary 1 directly, or do I have to scale the 1 as 2^8 = 256 or 2^6 = 64, since I am using 6 bits at the input and 8 bits at the output?
This series normally gives a number in the range +1 to -1, so you have to decide how you are going to use your 8 bits.
With a signed number using 1 integer bit and 7 fractional bits you will not be able to represent 1 exactly, but you can get very close.
I have a previous answer explaining how to use fixed point with Verilog. Once you're comfortable with that, you need to look at how bit growth occurs during a multiply.
Just because you are outputting 1 integer bit and 7 fractional bits does not mean you must compute at that width: internally you could (and should) use more bits to compute the answer.
With 7 fractional bits, a 1 in the integer place would look like 9'b0_1_0000000, i.e. 1*2**7.
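To make the bit-growth point concrete, here is a Python model (not Verilog) of the four-term Taylor cosine in integer fixed point, assuming a wide internal format that is truncated to signed 1.7 at the end. All names and widths are illustrative choices, not from the original question:

```python
import math

INT_FRAC = 20   # internal fractional bits (deliberately wider than the output)
OUT_FRAC = 7    # output format: signed, 1 integer bit + 7 fractional bits

def to_fixed(v, frac=INT_FRAC):
    return round(v * (1 << frac))

def fx_mul(a, b, frac=INT_FRAC):
    # fixed-point multiply: the raw product has 2*frac fractional bits,
    # so shift the extra frac bits away
    return (a * b) >> frac

def cos_taylor_fixed(deg):
    """Four-term Taylor cosine evaluated in integer fixed point.
    Accuracy degrades far from 0, as it would in hardware without
    range reduction."""
    x = to_fixed(math.radians(deg))
    x2 = fx_mul(x, x)
    result = (to_fixed(1.0)
              - x2 // 2
              + fx_mul(x2, x2) // 24
              - fx_mul(fx_mul(x2, x2), x2) // 720)
    # truncate down to the 8-bit output width
    return result >> (INT_FRAC - OUT_FRAC)

print(cos_taylor_fixed(0))    # 128: one LSB above the 1.7 maximum of 127,
                              # so real hardware would saturate it to 127
print(cos_taylor_fixed(60) / (1 << OUT_FRAC))   # close to 0.5
```

The point of the wide `INT_FRAC` is exactly the answer's advice: keep the extra bits through the multiplies and only truncate once, at the output.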

How to extract dyadic fraction from float

Floating-point and double-precision numbers, although they can approximate any sort of number (the same could be said of integers; floats are just more precise), are represented as binary fractions internally. For example, one tenth would be approximated as
0.00011001100110011... (the ... only goes to the computer's precision, not infinity)
Now, any number in binary with finitely many bits has something called a dyadic fraction representation in mathematics (which has nothing to do with p-adic numbers). This means you represent it as a fraction where the denominator is a power of 2. For example, let's say our computer approximates one tenth as 0.00011. The dyadic fraction for that is 3/32, or 3/(2^5), which is close to one tenth. Now for my technical question: what would be the simplest way to extract the dyadic fraction from a floating-point number?
Irrelevant note: if you are wondering why I would want to do this, it is because I am creating a surreal number library in Haskell. Dyadic fractions are easily translated into surreal numbers, which is why it is convenient that binary is easily translated into dyadic. (I'm sure I'll have trouble with the rational numbers, though.)
The decodeFloat function seems useful for this. Technically, you should also check that floatRadix is 2, but as far I can see this is always the case in GHC.
Just be careful since it does not simplify mantissa and exponent. Here, if I evaluate decodeFloat (1.0 :: Double) I get an exponent of -52 and a mantissa of 2^52 which is not what I expected.
Also, toRational seems to generate a dyadic fraction. I am not sure this is always the case, though.
Hold your numbers in binary and convert to decimal only for display.
Binary numbers are all dyadic: the count of digits after the binary point gives the power of two in the denominator, and the digits read without the point give the numerator. That's binary numbers for you.
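In Python, rather than the question's Haskell, the standard library exposes exactly this extraction: `float.as_integer_ratio()` returns the dyadic fraction the binary float actually stores.

```python
from fractions import Fraction

# as_integer_ratio() gives the exact stored value: a numerator over a
# power-of-two denominator.
num, den = (0.1).as_integer_ratio()
print(num, den)               # 3602879701896397 36028797018963968 (den = 2**55)
assert den & (den - 1) == 0   # the denominator is a power of two

# Fraction(float) gives the same value reduced to lowest terms.
print(Fraction(0.09375))      # 3/32, the dyadic fraction for binary 0.00011
```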
There is an ideal representation for surreal numbers in binary. I call them "sinary". It's this:
0s is Not a number
1s is zero
10s is neg one
11s is one
100s is neg two
101s is neg half
110s is half
111s is two
... etc...
so you see that the standard binary count matches the surreal birth order of numeric values when evaluated in sinary. The way to determine the numeric value of sinary is that the 1's are rights and the 0's are lefts. We start with +/-1's and then 1/2, 1/4, 1/8, etc. With sign equal to + for 1 and - for 0.
ex: evaluating sinary
1011011s
-> is the 91st surreal number (because 64+16+8+2+1 = 91)
-> with a value of −0.28125, because...
1011011
NLRRLRR
+-++-++
+ 0 − 1 + 1/2 + 1/4 − 1/8 + 1/16 + 1/32
= 0 − 32/32 + 16/32 + 8/32 − 4/32 + 2/32 + 1/32
= − 9/32
The surreal numbers form a binary tree, so there is an ideal binary format matching their location on the tree according to the Left/Right pattern to reach the number. Assign 1 to right and 0 to left. Then the birth order of surreal number is equal to the binary count of this representation. ie: the 15th surreal number value represented in sinary is the 15th number representation in the standard binary count. The value of a sinary is the surreal label value. Strip the leading bit from the representation, and start adding +1's or -1's depending on if the number starts with 1 or 0 after the first one. Then once the bit flips, begin adding and subtracting halved values (1/2, 1/4, 1/8, etc) using + or - values according to the bit value 1/0.
I have tested this format and it seems to work well. And there are some other secrets... such as the left and right parents of any sinary representation being the same binary string with the tail clipped back to the last 0 or the last 1, respectively. Conversion to a decimal or a dyadic is NOT required in order to perform the recursive functions requested by Conway.
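The evaluation rule described above can be sketched in Python. This is my reading of the rule (the function name is invented), checked against the worked example:

```python
from fractions import Fraction

def sinary_value(s):
    """Evaluate a 'sinary' bit string (without the trailing 's'):
    the leading 1 is the root (value 0); each following bit moves
    +step (for 1) or -step (for 0); the step stays 1 until the bit
    value first flips, after which it halves on every move."""
    assert s and s[0] == '1'
    value = Fraction(0)
    step = Fraction(1)
    flipped = False
    prev = None
    for bit in s[1:]:
        if prev is not None and bit != prev:
            flipped = True          # first direction change reached
        if flipped:
            step /= 2               # halve the step from then on
        value += step if bit == '1' else -step
        prev = bit
    return value

print(sinary_value('1011011'))   # -9/32, matching the worked example
print(int('1011011', 2))         # 91, its birth order
```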

Python 3 - What is ">>"

This is the confusing line: x_next = (x_next + (a // x_prev)) >> 1
It is a bitwise shift. The following should give you some intuition:
>>> 16 >> 1
8
>>> 16 >> 2
4
>>> 16 >> 3
2
>>> bin(16)
'0b10000'
>>> bin(16 >> 1)
'0b1000'
>>> bin(16 >> 2)
'0b100'
The >> operator is the same operator as in C and many other languages.
It is a bitshift to the right: if your number is 0100 in binary, then it will be 0010 after >> 1, and 0001 after >> 2.
So basically it's a nice way to divide your number by 2, flooring the result ;)
It is the right shift operator.
Here it is being used to divide by 2. It would be far more clear to write this as
x_next = (x_next + (a // x_prev)) // 2
Sadly a lot of people try to be clever and use shift operators in place of multiplication and division. Typically this just leads to lots of confusion for the poor individuals who have to read the code at a later date.
Most newer/younger programmers do not worry about efficiency because the computers are so fast.
But if you are working on a 8-bit or 16-bit processor that may or may not have a hardware multiply and rarely has a hardware divide, then shifting integers takes one machine cycle while a multiply may take 16 or more and a divide may take 50-200 machine cycles. When your processor clock is in the GHz range you do not notice the difference, but if your instruction rate is 8 MHz or less it adds up very quickly.
So for efficiency people shift to multiply or divide for powers of two, especially in C which is the most common language for small processors and controllers.
I see it so often that I do not even think about it anymore.
Some of the things I do in C:
x = y >> 3;        // the same as the floor of a divide by 8
if (y & 0x04) x++; // add the rounding, so the answer is rounded to nearest
For most microcontrollers, the compiler lets you see the resulting machine code generated and you can see what different statements generate. With that type of feedback, after awhile you just start writing more efficient code.
It means "right shift". It works the same as floor division by 2:
>>> a = 7
>>> a >> 1
3
>>> a // 2
3
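For context, the quoted line looks like one update step of Heron's (Newton's) method for an integer square root. A minimal sketch of a loop it might sit in (assumed, not taken from the original code):

```python
def isqrt(a):
    """Integer square root by Heron's/Newton's method; the update step
    matches the quoted line, with >> 1 halving the sum."""
    if a < 2:
        return a
    x_prev = a
    x_next = (x_prev + a // x_prev) >> 1
    while x_next < x_prev:
        x_prev = x_next
        x_next = (x_next + a // x_prev) >> 1
    return x_prev

print(isqrt(16))   # 4
print(isqrt(17))   # 4
```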
