Understanding a simple round-robin arbiter verilog code - verilog

Have a look at the following arbiter.v code:
Someone told me to think of the rr_arbiter as a simplified ripple-borrow circuit that wraps around.
'base' is a one-hot signal indicating the first request that should be
considered for a grant.
Huh? Do you guys understand how to generate the 'base' input signal?
Notice the subtraction. The borrow logic causes it to search for
the next set bit.
Why ~(double_req-base)?
module arbiter (
  req, grant, base
);
  parameter WIDTH = 16;

  input  [WIDTH-1:0] req;
  output [WIDTH-1:0] grant;
  input  [WIDTH-1:0] base;

  wire [2*WIDTH-1:0] double_req = {req,req};
  wire [2*WIDTH-1:0] double_grant = double_req & ~(double_req-base);
  assign grant = double_grant[WIDTH-1:0] | double_grant[2*WIDTH-1:WIDTH];
endmodule

Edit: as someone noted, I initially only explained how to set the input signals and gave two special cases as examples. Let me try to explain how it works. A good starting point is your question:
why ~(double_req-base) ?
As someone pointed out to you, this is based on the principle of the ripple borrow subtractor. When you subtract one number from another, regardless of the numeral system you are using, you start from the lowest order and try to subtract two numbers from the same order. In a binary example, this would look like this:
1011 = 11
0010 - = 2 -
────── ────
1001 = 9
As you can see, 1 - 1 is valid and yields 0. However, if that's not possible you can borrow from a higher-order position, just as you would in decimal subtraction. An example with borrowing in the binary system could be:
1001 = 01(10)1 = 9
0010 - = 00 1 0 - = 2 -
────── ───────── ───
0111 = 01 1 1 = 7
Since 0 - 1 is not possible in the second position, we take the 1 from the fourth position, set the third position to 1 and set the second position to 10 (so, 2 in the decimal system). This is exactly the borrowing you know from decimal subtraction, carried out in binary.
Important for the arbiter: the next 1 of the original number (req), seen from the position of base, will be set to zero by the subtraction. All positions between base and that bit will be set to 1. After inverting the result of the subtraction, only that position will be 1, as seen from the base.
However, positions with a lower order than the base could still be 1 with this technique. Therefore, we AND the original number with the calculated one (double_req & ~(double_req-base)). This makes sure that any 1s at positions lower than base are eliminated.
Finally, the fact that the request is doubled up makes sure that the subtraction never runs out of positions to borrow from. If it needs to borrow from this second, doubled-up block, the disjunction (double_grant[WIDTH-1:0] | double_grant[2*WIDTH-1:WIDTH]) makes sure that the right index is returned. I added an example for this case to the examples below.
Original post
You can interpret base as your starting index in req. This is the first bit the code will consider for arbitration. You should set it to the one-hot value of last_arbitrated_position + 1 (the previously granted position moved up by one, wrapping around).
Take a look at the 4 bit (pseudocode) example I created below. Let's take some arbitrary numbers:
req = 4'b1101 // Your vector from which one position should be arbitrated
base = 4'b0010 // The second position is the first position to consider
Now, from arbiter.v, the following follows:
double_req = 1101 1101
double_grant = 1101 1101 & ~(1101 1011) = 1101 1101 & 0010 0100 = 0000 0100
In the last steps, arbiter.v then actually assigns the position that should be granted:
grant = 0100 | 0000 = 0100
This is correct, because we set the second position as base, and the next valid position was the third. Another example, where the base is a position which is also valid in req, is:
req = 4'b1111
base = 4'b0010
double_req = 1111 1111
double_grant = 1111 1111 & ~(1111 1101) = 1111 1111 & 0000 0010 = 0000 0010
grant = 0010 | 0000 = 0010
Which is again correct, because in this case we defined that the first position that may be arbitrated is the second position, and this position is indeed valid.
The code example you posted also takes care of wrapping around the most-significant bit. This means that if you set a base, but there is no valid position greater than that base, it will wrap around and start arbitration from the least-significant bit. An example for this case would be:
req = 4'b0010
base = 4'b0100
double_req = 0010 0010
double_grant = 0010 0010 & ~(0001 1110) = 0010 0010 & 1110 0001 = 0010 0000
grant = 0000 | 0010 = 0010
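If you want to check these numbers yourself, here is a minimal Python model of the same trick (just a sketch of the arithmetic, not the Verilog module; WIDTH and the function name are my own):
WIDTH = 4

def rr_arbiter(req, base):
    # Python model of the doubled-request trick: req and base are WIDTH-bit
    # integers, base is one-hot.
    mask = (1 << WIDTH) - 1
    double_req = (req << WIDTH) | req                 # {req, req}
    double_grant = double_req & ~(double_req - base)  # borrow finds the next set bit
    return (double_grant & mask) | (double_grant >> WIDTH)

print(format(rr_arbiter(0b1101, 0b0010), "04b"))  # 0100, first example
print(format(rr_arbiter(0b1111, 0b0010), "04b"))  # 0010, second example
print(format(rr_arbiter(0b0010, 0b0100), "04b"))  # 0010, wrap-around example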

The purpose of the arbiter is to find out which request to grant while avoiding granting the same source repeatedly.
Now presume that we have several req bits set, and the base, which is the one-hot state of the previously granted request shifted left by 1 (wrapping around).
So, your task is to find the first set bit at or to the left of the base bit and grant that request. The subtraction flips all bits starting from the base bit and ending at the first set bit, e.g.
 1100 << req
-0010 << base
=====
 1010
 -^^-
  ^
The req[2] bit is the one to which we want to grant the request. It was flipped to '0'. All bits to the left of it, and the bits to the right of the base bit, were not changed. We need to pick out the bit that was flipped to '0'.
The way to do it is to AND the value of the request with the inversion of the result of the subtraction. The changed bits always have a single pattern: the leftmost is '0' and the rest are '1'. That leftmost '0' is exactly in the place where there was a '1' in the request. So the inversion turns it into a '1' and also inverts all unchanged bits to its left and right. ANDing with the original request effectively gets rid of the unchanged bits and guarantees that our newly found '1' is preserved.
1010 << result of subtraction
~0101 << ~(req-base)
&1100 << req
=0100
Now, the problem appears if we are heading toward overflow:
0010
-1000
====
1010
~0101
&0010
=0000 << oops
But we want to get bit[1] from the request.
The way to solve it is to concatenate another copy of req in front of this one and continue the subtraction, so that the borrow hits the lowest bit in the top portion:
0010 0010
-0000 1000
====
0001 1010
~1110 0101
&0010 0010
=0010 0000
now we only need to choose between upper and lower parts:
0010 | 0000 = 0010
here you are, you got your result.
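As for how 'base' itself is generated each cycle (the other part of the question): following the note above that it is the previously granted one-hot shifted left by one, a common way is to rotate the last grant left by one position so it wraps around. A small Python sketch of that idea (names and WIDTH are illustrative, not from the original code):
WIDTH = 4

def next_base(last_grant):
    # Rotate the one-hot grant left by one, wrapping the MSB back to bit 0.
    rotated = ((last_grant << 1) | (last_grant >> (WIDTH - 1))) & ((1 << WIDTH) - 1)
    return rotated if rotated else 0b0001   # start at bit 0 before any grant

print(format(next_base(0b0100), "04b"))  # 1000
print(format(next_base(0b1000), "04b"))  # 0001 (wrapped around)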

Related

How do I order strings of length N with 2 bits?

I have a string of length N with 2 bits. I am trying to find a function to order these strings. For example:
F(110) = 1
F(101) = 2
F(011) = 3
The strategy I adopted was labeling the bits by their position, so that for the first case K=1 and L=2 and hence
F(1,2) = 1
F(1,3) = 2
F(2,3) = 3
Does anyone have an idea of what this function might be?
If you are dealing with actual strings, sort them alphabetically ascending. If you are dealing with integers, there are some workarounds:
Convert the integer into bit-strings and sort alphabetically ascending.
or
Reverse the bits in the integer (011 becomes 110) and sort numerically ascending.
However, these workarounds might be slow. The function F described by you turns out to be pretty simple (assuming you are given the positions of the 1-bits) and is therefore a good solution.
To come up with an implementation of F we first look at the sequence of all bit-strings with exactly two 1-bits. Here we don't care about the length of the bit-string. We simply increment the bit-strings from left to right (opposed to the usual Arabic interpretation of numbers where you increment from right to left).
Next to the actual bit-string I replaced all 0 by ., the left 1 by l, and the right 1 by r. This makes it easier to see the pattern.
1: 11 lr
2: 101 l.r
3: 011 .lr
4: 1001 l..r
5: 0101 .l.r
6: 0011 ..lr
7: 10001 l...r
8: 01001 .l..r
9: 00101 ..l.r
10: 00011 ...lr
11: 100001 l....r
… … …
The function F is supposed to count the steps needed to increment to a given bit-string.
In the following, L is the index of the left 1-bit and R is the index of the right 1-bit. As in your question, we use 1-based indices. That is, the leftmost character in a string has index 1.
For the right 1-bit to move one position to the right, the left 1-bit has to "catch up". If the left 1-bit starts at L=1 then catching up takes R-1 steps (when counting the start step L=1 too). For the right 1-bit to reach a high position, the left 1-bit has to catch up multiple times, as it is returned to the start each time the right 1-bit moves one to the right. Each time, catching up takes a little bit longer as the right 1-bit is further away from the start. The first time it takes 1 step, then 2, then 3, and so on. Thus, for the right 1-bit to reach position R it takes 1+2+3+…+(R-2) = (R-1)·(R-2)/2 steps. After that, we only have to move the left 1-bit to its position, which takes L more steps. Therefore the function is
F(L,R) := (R-1)·(R-2) / 2 + L
Note that this function only is easy to implement if you know L and R. If you have an integer and would need to determine L and R first, it might be easier and faster to reverse the integer instead and sort numerically ascending. Determining L and R might be slower than reversing the bits in the integer.
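For completeness, here is the formula as a small Python function (a direct transcription, using the 1-based, left-to-right indices L and R defined above):
def F(L, R):
    # Steps needed to reach the string with 1-bits at positions L and R.
    return (R - 1) * (R - 2) // 2 + L

print(F(1, 2), F(1, 3), F(2, 3))   # 1 2 3, matching the question
print(F(1, 4), F(2, 4), F(3, 4))   # 4 5 6, matching the table above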

bitwise operations in python (or,and, |) [duplicate]

Consider this code:
x = 1 # 0001
x << 2 # Shift left 2 bits: 0100
# Result: 4
x | 2 # Bitwise OR: 0011
# Result: 3
x & 1 # Bitwise AND: 0001
# Result: 1
I can understand the arithmetic operators in Python (and other languages), but I never understood 'bitwise' operators quite well. In the above example (from a Python book), I understand the left-shift but not the other two.
Also, what are bitwise operators actually used for? I'd appreciate some examples.
Bitwise operators are operators that work on multi-bit values, but conceptually one bit at a time.
AND is 1 only if both of its inputs are 1, otherwise it's 0.
OR is 1 if one or both of its inputs are 1, otherwise it's 0.
XOR is 1 only if exactly one of its inputs are 1, otherwise it's 0.
NOT is 1 only if its input is 0, otherwise it's 0.
These can often be best shown as truth tables. Input possibilities are on the top and left, the resultant bit is one of the four (two in the case of NOT since it only has one input) values shown at the intersection of the inputs.
AND | 0 1     OR | 0 1     XOR | 0 1     NOT | 0 1
----+-----    ---+-----    ----+-----    ----+-----
 0  | 0 0      0 | 0 1      0  | 0 1         | 1 0
 1  | 0 1      1 | 1 1      1  | 1 0
One example is if you only want the lower 4 bits of an integer, you AND it with 15 (binary 1111) so:
201: 1100 1001
AND 15: 0000 1111
------------------
IS 9 0000 1001
The zero bits in 15 in that case effectively act as a filter, forcing the bits in the result to be zero as well.
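The same masking example in Python, just to see it run:
print(201 & 0b1111)        # 9
print(bin(201), bin(15))   # 0b11001001 0b1111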
In addition, >> and << are often included as bitwise operators, and they "shift" a value respectively right and left by a certain number of bits, throwing away bits that roll off the end you're shifting towards, and feeding in zero bits at the other end.
So, for example:
1001 0101 >> 2 gives 0010 0101
1111 1111 << 4 gives 1111 0000
Note that the left shift in Python is unusual in that it does not use a fixed width where bits are discarded - while many languages use a fixed width based on the data type, Python simply expands the width to accommodate the extra bits. In order to get the discarding behaviour in Python, you can follow a left shift with a bitwise AND, as in this example of an 8-bit value shifting left four bits:
bits8 = (bits8 << 4) & 255
With that in mind, another example of bitwise operators is if you have two 4-bit values that you want to pack into an 8-bit one, you can use all three of your operators (left shift, AND, and OR):
packed_val = ((val1 & 15) << 4) | (val2 & 15)
The & 15 operation will make sure that both values only have the lower 4 bits.
The << 4 is a 4-bit shift left to move val1 into the top 4 bits of an 8-bit value.
The | simply combines these two together.
If val1 is 7 and val2 is 4:
              val1             val2
              ====             ====
 & 15 (and)   xxxx-0111        xxxx-0100  & 15
 << 4 (left)  0111-0000            |
                  |                |
                  +--------+-------+
                           |
                   | (or)  0111-0100
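In Python, packing and then unpacking the example values looks like this (val1 = 7, val2 = 4):
val1, val2 = 7, 4
packed_val = ((val1 & 15) << 4) | (val2 & 15)   # 0111 0100
print(bin(packed_val))                          # 0b1110100
print(packed_val >> 4, packed_val & 15)         # 7 4 - the two values recovered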
One typical usage:
| is used to set a certain bit to 1
& is used to test or clear a certain bit
Set a bit (where n is the bit number, and 0 is the least significant bit):
a |= (1 << n)
Clear a bit:
b &= ~(1 << n)
Toggle a bit:
c ^= (1 << n)
Test a bit:
e = d & (1 << n)
Take the case of your list for example:
x | 2 is used to set bit 1 of x to 1
x & 1 is used to test if bit 0 of x is 1 or 0
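The same four bit tricks work on plain Python ints, for example (n is the bit number, 0 being the least significant bit):
x, n = 0b0101, 1
x |= (1 << n)            # set bit n      -> 0b0111
x &= ~(1 << n)           # clear bit n    -> 0b0101
x ^= (1 << n)            # toggle bit n   -> 0b0111
bit = (x >> n) & 1       # test bit n     -> 1
print(bin(x), bit)       # 0b111 1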
what are bitwise operators actually used for? I'd appreciate some examples.
One of the most common uses of bitwise operations is for parsing hexadecimal colours.
For example, here's a Python function that accepts a String like #FF09BE and returns a tuple of its Red, Green and Blue values.
def hexToRgb(value):
    # Convert string to hexadecimal number (base 16)
    num = int(value.lstrip("#"), 16)
    # Shift 16 bits to the right, and then binary AND to obtain 8 bits representing red
    r = (num >> 16) & 0xFF
    # Shift 8 bits to the right, and then binary AND to obtain 8 bits representing green
    g = (num >> 8) & 0xFF
    # Simply binary AND to obtain 8 bits representing blue
    b = num & 0xFF
    return (r, g, b)
I know that there are more efficient ways to achieve this, but I believe that this is a really concise example illustrating both shifts and bitwise boolean operations.
I think that the second part of the question:
Also, what are bitwise operators actually used for? I'd appreciate some examples.
Has been only partially addressed. These are my two cents on that matter.
Bitwise operations in programming languages play a fundamental role in a lot of applications. Almost all low-level computing must be done using operations of this kind.
In all applications that need to send data between two nodes, such as:
computer networks;
telecommunication applications (cellular phones, satellite communications, etc).
In the lower layers of communication, the data is usually sent in what are called frames. Frames are just strings of bytes that are sent through a physical channel. These frames usually contain the actual data plus some other fields (coded in bytes) that are part of what is called the header. The header usually contains bytes that encode information related to the status of the communication (e.g. flags (bits)), frame counters, correction and error detection codes, etc. To extract the transmitted data from a frame, and to build the frames to send data, you will certainly need bitwise operations.
In general, when dealing with that kind of applications, an API is available so you don't have to deal with all those details. For example, all modern programming languages provide libraries for socket connections, so you don't actually need to build the TCP/IP communication frames. But think about the good people that programmed those APIs for you, they had to deal with frame construction for sure; using all kinds of bitwise operations to go back and forth from the low-level to the higher-level communication.
As a concrete example, imagine someone gives you a file that contains raw data that was captured directly by telecommunication hardware. In this case, in order to find the frames, you will need to read the raw bytes in the file and try to find some kind of synchronization words, by scanning the data bit by bit. After identifying the synchronization words, you will need to extract the actual frames, and SHIFT them if necessary (and that is just the start of the story) to get the actual data that is being transmitted.
Another, very different, low-level family of applications is controlling hardware through some (kind of ancient) ports, such as parallel and serial ports. These ports are controlled by setting some bytes, and each bit of those bytes has a specific meaning, in terms of instructions, for the port (see for instance http://en.wikipedia.org/wiki/Parallel_port). If you want to build software that does something with that hardware, you will need bitwise operations to translate the instructions you want to execute into the bytes that the port understands.
For example, if you have some physical buttons connected to the parallel port to control some other device, this is a line of code that you can find in the soft application:
read = ((read ^ 0x80) >> 4) & 0x0f;
Hope this contributes.
I didn't see it mentioned above but you will also see some people use left and right shift for arithmetic operations. A left shift by x is equivalent to multiplying by 2^x (as long as it doesn't overflow) and a right shift is equivalent to dividing by 2^x.
Recently I've seen people using x << 1 and x >> 1 for doubling and halving, although I'm not sure if they are just trying to be clever or if there really is a distinct advantage over the normal operators.
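For example, in Python (for non-negative integers):
x = 10
print(x << 1, x * 2)    # 20 20
print(x >> 1, x // 2)   # 5 5
print(x << 3, x * 8)    # 80 80 - shifting by 3 multiplies by 2**3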
I hope this clarifies those two:
x | 2
0001 //x
0010 //2
0011 //result = 3
x & 1
0001 //x
0001 //1
0001 //result = 1
Think of 0 as false and 1 as true. Then bitwise and(&) and or(|) work just like regular and and or except they do all of the bits in the value at once. Typically you will see them used for flags if you have 30 options that can be set (say as draw styles on a window) you don't want to have to pass in 30 separate boolean values to set or unset each one so you use | to combine options into a single value and then you use & to check if each option is set. This style of flag passing is heavily used by OpenGL. Since each bit is a separate flag you get flag values on powers of two(aka numbers that have only one bit set) 1(2^0) 2(2^1) 4(2^2) 8(2^3) the power of two tells you which bit is set if the flag is on.
Also note that 2 = 10 in binary, so x | 2 is 11 (3). When none of the bits overlap (which is true in this case), | acts just like addition.
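Here is a small sketch of that flag idiom in Python (the flag names are made up for illustration, not from any real API):
BOLD, ITALIC, UNDERLINE = 1, 2, 4        # one bit per option (powers of two)

style = BOLD | UNDERLINE                 # combine options into a single value
print(bool(style & BOLD))                # True  - bold is set
print(bool(style & ITALIC))              # False - italic is not set
print(bool(style & UNDERLINE))           # True  - underline is set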
Sets
Sets can be combined using mathematical operations.
The union operator | combines two sets to form a new one containing items in either.
The intersection operator & gets items only in both.
The difference operator - gets items in the first set but not in the second.
The symmetric difference operator ^ gets items in either set, but not both.
Try It Yourself:
first = {1, 2, 3, 4, 5, 6}
second = {4, 5, 6, 7, 8, 9}
print(first | second)
print(first & second)
print(first - second)
print(second - first)
print(first ^ second)
Result:
{1, 2, 3, 4, 5, 6, 7, 8, 9}
{4, 5, 6}
{1, 2, 3}
{8, 9, 7}
{1, 2, 3, 7, 8, 9}
This example shows the OR and AND operations on two 4-bit values:
10 | 12
1010 #decimal 10
1100 #decimal 12
1110 #result = 14
10 & 12
1010 #decimal 10
1100 #decimal 12
1000 #result = 8
Here is one example of usage:
x = int(raw_input('Enter a number:'))
print 'x is %s.' % ('even', 'odd')[x & 1]
Another common use-case is manipulating/testing file permissions. See the Python stat module: http://docs.python.org/library/stat.html.
For example, to compare a file's permissions to a desired permission set, you could do something like:
import os
import stat
#Get the actual mode of a file
mode = os.stat('file.txt').st_mode
#File should be a regular file, readable and writable by its owner
#Each permission value has a single 'on' bit. Use bitwise or to combine
#them.
desired_mode = stat.S_IFREG|stat.S_IRUSR|stat.S_IWUSR
#check for exact match:
mode == desired_mode
#check for at least one bit matching:
bool(mode & desired_mode)
#check for at least one bit 'on' in one, and not in the other:
bool(mode ^ desired_mode)
#check that all bits from desired_mode are set in mode, but I don't care about
# other bits.
not bool((mode^desired_mode)&desired_mode)
I cast the results as booleans, because I only care about the truth or falsehood, but it would be a worthwhile exercise to print out the bin() values for each one.
Bit representations of integers are often used in scientific computing to represent arrays of true-false information because a bitwise operation is much faster than iterating through an array of booleans. (Higher level languages may use the idea of a bit array.)
A nice and fairly simple example of this is the general solution to the game of Nim. Take a look at the Python code on the Wikipedia page. It makes heavy use of bitwise exclusive or, ^.
There may be a better way to find where an array element is between two values, but as this example shows, the & works here, whereas and does not.
import numpy as np
a=np.array([1.2, 2.3, 3.4])
np.where((a>2) and (a<3))
#Result: Value Error
np.where((a>2) & (a<3))
#Result: (array([1]),)
I didn't see it mentioned, but this example shows the subtraction-like operation on bit values: A - B (only if A contains B).
This operation is needed when we hold a value in our program that represents bits. Sometimes we need to add bits (as above) and sometimes we need to remove bits (if the value contains them):
111 #decimal 7
-
100 #decimal 4
--------------
011 #decimal 3
with python:
7 & ~4 = 3 (remove from 7 the bits that represent 4)
001 #decimal 1
-
100 #decimal 4
--------------
001 #decimal 1
with python:
1 & ~4 = 1 (remove from 1 the bits that represent 4 - in this case 1 does not 'contain' 4).
Whilst manipulating the bits of an integer is useful, often for network protocols which may be specified down to the bit, one can also need to manipulate longer byte sequences (which aren't easily converted into one integer). In this case it is useful to employ the bitstring library, which allows bitwise operations on data - e.g. one can import the string 'ABCDEFGHIJKLMNOPQ' as a string or as hex and bit shift it (or perform other bitwise operations):
>>> import bitstring
>>> bitstring.BitArray(bytes='ABCDEFGHIJKLMNOPQ') << 4
BitArray('0x142434445464748494a4b4c4d4e4f50510')
>>> bitstring.BitArray(hex='0x4142434445464748494a4b4c4d4e4f5051') << 4
BitArray('0x142434445464748494a4b4c4d4e4f50510')
The following bitwise operators (&, |, ^, and ~) return values, based on their inputs, in the same way logic gates affect signals. You could use them to emulate circuits.
To flip bits (i.e. 1's complement/invert) you can do the following:
Since a value XORed with all 1s results in inversion,
for a given bit width you can use XOR to invert the bits.
In binary:
a = 1010 --> this is 0xA or decimal 10
then
c = 1111 ^ a = 0101 --> this is 0x5 or decimal 5
-----------------
In Python
a=10
b=15
c = a ^ b --> 0101
print(bin(c)) # gives '0b101'
You can use bit masking to print the binary representation of a number;
int a = 1 << 7;
int c = 55;
for (int i = 0; i < 8; i++) {
    System.out.print((a & c) >> 7);
    c = c << 1;
}
This is for 8 bits; you can extend it to more.

How can I implement a sequence generator using a universal shift register?

How can I implement a sequence generator that generates the following sequence
0000
1000
0001
0011
0110
1101
1110
1111
using a universal shift register? The shift register I need to use is the 74LS194 model shown below where the inputs S1, S0 controls the shift mode.
If (S1,S0) = (0,0), the current value is held for the next state.
If (1,0), it shifts right; if (0,1), it shifts left; and (1,1) means parallel loading from the parallel data inputs.
I think this would be a simple question if the requirement were just to use flip-flops of my choice and not the shift register, but with this requirement I don't know where to start. Even though I drew 4 Karnaugh maps, one for each bit, I can't seem to get a clue. Any help would be appreciated. Thanks in advance!
Universal shift register 74LS194:
Edit: I took the advice and wrote the next state table with the input signals.
I noticed that there were too many input variables to consider when I drew the next state table. You have to consider the Qd Qc Qb Qa, CLR, S1, S0, RIn, LIn signals for each next state Qd, Qc, Qb, Qa, which means a 9 variable Karnaugh map for each Q, which I know is ridiculous. Am I missing something here?
The first thing to do is to figure out the shift mode (S1,S0). In all cases only one shift can work:
0000, 1101, 1110 => shift right
1000, 0001, 0011, 0110 => shift left
1111 => load all zeros
Since not all combinations of Q0-Q3 are used in the sequence, there are many valid functions from Q0-Q3 to S0,S1. I notice that you can make the decision based on the number of 1 bits.
Now that you know how each code shifts, you can calculate the input bit (LSI/RSI)
0000, 1101, 1110 => LSI=1
1000, 0001, 0110 => RSI=1
0011 => RSI=0
Looks like LSI can always be 1.
There are lots of functions that are valid for RSI. RSI=NOT(Q0&Q1) works.
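If you want to convince yourself that this assignment really produces the sequence, here is a quick Python sanity check (a behavioural sketch, not the 74LS194 itself; I assume the state is written Q3 Q2 Q1 Q0 with Q3 leftmost, "shift right" moves bits toward Q0 and inserts the serial input at Q3, and "shift left" does the opposite):
def next_state(q):
    ones = bin(q).count("1")
    if ones == 4:                       # 1111 -> parallel load 0000
        return 0b0000
    if ones in (0, 3):                  # 0000, 1101, 1110 -> shift right, serial in 1
        return (q >> 1) | 0b1000
    q0, q1 = q & 1, (q >> 1) & 1        # 1 or 2 ones -> shift left,
    serial = 0 if (q0 and q1) else 1    # serial in = NOT(Q0 & Q1)
    return ((q << 1) & 0b1111) | serial

q = 0b0000
for _ in range(8):
    print(format(q, "04b"))             # prints the eight states in order
    q = next_state(q)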

What is >>>symbol in verilog?

May I know what this symbol >>> is in Verilog? When should I use it? Thanks!
e.g.
always @(posedge Clock) begin
    if (Clear) begin
        a <= c >>> 8;
        b <= d >>> 16;
    end
end
It is the arithmetic right shift operator. Note that this is the reverse of Java, where >> is the arithmetic right shift and >>> is the logical right shift.
Arithmetic right shift handles the case where the right-shifted number is signed (positive or negative), with this behavior:
Shift right the specified number of bits, fill with the value of the sign bit if the
expression is signed, otherwise fill with zero.
To illustrate, if you have a signed expression with a value like this:
1000 1100
--------- >>> 2
1110 0011 //note the left most bits are 1
But for unsigned:
1000 1100
--------- >>> 2
0010 0011
The leftmost bits will be filled with 0.
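A quick way to see the difference is to emulate both shifts on an 8-bit value in Python (just a sketch; Python's own >> on a negative int is already arithmetic, so the signed case needs a two's-complement reinterpretation first):
def lsr8(x, n):
    # Logical shift right: always fill with zeros.
    return (x & 0xFF) >> n

def asr8(x, n):
    # Arithmetic shift right: replicate the sign bit (bit 7).
    x &= 0xFF
    if x & 0x80:
        x -= 0x100                      # reinterpret as a signed 8-bit value
    return (x >> n) & 0xFF

print(format(lsr8(0b10001100, 2), "08b"))   # 00100011
print(format(asr8(0b10001100, 2), "08b"))   # 11100011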

Computer_Architecture + Verilog

I am doing a divider circuit in verilog and using the non-restoring division algorithm.
I am having trouble representing the remainder as a fractional binary number.
For example, if I do 0111/0011 (7/3) I get the quotient as 0010 and the remainder as 0001, which is correct, but I want to represent it as 0010.0101.
Can someone help?
Suppose, as in your example, you are dividing 4 bit numbers, but want an extra 4 bits of fractional precision in the result.
One approach is to simply multiply the numerator by 2^4 before doing the division.
i.e.
instead of
0111/0011 = 0010 (+remainder)
do
01110000/0011 = 00100101 (+remainder)
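You can sanity-check this scaling idea in Python before writing the Verilog (a sketch of the arithmetic, not the non-restoring divider itself):
num, den, FRAC = 7, 3, 4
q = (num << FRAC) // den                # 0111_0000 / 0011 = 0010_0101
print(format(q >> FRAC, "04b") + "." + format(q & 0xF, "04b"))   # 0010.0101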
Hi, just do the mathematics!
You have already got the Q (quotient) and R (remainder). Now take the remainder and multiply it by 10 (decimal), which is 1010 in binary. For example:
7/3 gives 2 as Q and 1 as remainder. Multiply this 1 by 10 and apply your division logic again: 10/3 gives 3 as Q. So your answer will be
2 (Q of the first division) . 3 (Q of the second division), i.e. 2.3.
Try it, it works, and it is very easy to implement in Verilog.
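The same idea in Python, if you want to see the digits come out (note this produces decimal digits after the point, unlike the binary fraction in the other answer):
num, den = 7, 3
q, r = divmod(num, den)                 # 2, 1
digits = []
for _ in range(2):                      # two decimal places, for example
    r *= 10
    d, r = divmod(r, den)
    digits.append(str(d))
print(str(q) + "." + "".join(digits))   # 2.33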
