How can I implement a sequence generator using a universal shift register?

How can I implement a sequence generator that generates the following sequence
0000
1000
0001
0011
0110
1101
1110
1111
using a universal shift register? The shift register I need to use is the 74LS194 model shown below, where the inputs S1, S0 control the shift mode.
If (S1,S0) = (0,0), the register holds its current value (the next state equals the present state).
If (S1,S0) = (1,0), it shifts right; if (0,1), it shifts left; and (1,1) means parallel load from the parallel data inputs.
I think this would be a simple question if the requirement were just to use flip-flops of my choice rather than the shift register, but with this requirement I don't know where to start. Even though I drew 4 Karnaugh maps, one for each digit, I can't seem to find a clue. Any help would be appreciated. Thanks in advance!
Universal shift register 74LS194:
Edit: I took the advice and wrote the next state table with the input signals.
I noticed that there are too many input variables to consider when drawing the next-state table. You have to consider the Qd Qc Qb Qa, CLR, S1, S0, RIn, LIn signals for each next state Qd, Qc, Qb, Qa, which means a 9-variable Karnaugh map for each Q. I know that's ridiculous, so am I missing something here?

The first thing to do is to figure out the shift mode (S1,S0). In all cases only one shift can work:
0000, 1101, 1110 => shift right
1000, 0001, 0011, 0110 => shift left
1111 => load all zeros
Since not all combinations of Q0-Q3 are used in the sequence, there are many valid functions from Q0-Q3 to S0,S1. I notice that you can make the decision based on the number of 1 bits.
Now that you know how each code shifts, you can calculate the serial input bit (LSI/RSI):
0000, 1101, 1110 => LSI=1
1000, 0001, 0110 => RSI=1
0011 => RSI=0
Looks like LSI can always be 1.
There are lots of functions that are valid for RSI. RSI=NOT(Q0&Q1) works.
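To sanity-check this scheme, here is a minimal Python sketch of the next-state logic (not the hardware itself; `step`, the tuple order Q3..Q0 with the code's leftmost bit first, and the popcount-based mode choice are my own framing of the answer above):

```python
def step(q):
    """One clock of the register wired as described; q = (Q3, Q2, Q1, Q0), MSB first."""
    ones = sum(q)
    if ones == 4:                      # 1111: parallel-load all zeros
        return (0, 0, 0, 0)
    if ones in (0, 3):                 # 0000, 1101, 1110: shift right, serial input LSI = 1
        return (1, q[0], q[1], q[2])
    # ones in (1, 2): shift left, serial input RSI = NOT(Q0 AND Q1)
    rsi = 1 - (q[3] & q[2])            # Q0 = q[3], Q1 = q[2]
    return (q[1], q[2], q[3], rsi)

state = (0, 0, 0, 0)
seq = []
for _ in range(8):
    seq.append("".join(map(str, state)))
    state = step(state)
# seq walks 0000 -> 1000 -> 0001 -> 0011 -> 0110 -> 1101 -> 1110 -> 1111 and wraps
```

Running it reproduces the required sequence and returns to 0000 after 1111, confirming the mode and serial-input choices.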

Related

How do I order strings of length N with two 1-bits?

I have strings of length N containing exactly two 1-bits, and I am trying to find a function that orders them. For example:
F(110) = 1
F(101) = 2
F(011) = 3
The strategy I adopted was labeling the bits by their position, so that for the first case K=1 and L=2 and hence
F(1,2) = 1
F(1,3) = 2
F(2,3) = 3
Does anyone have an idea of what this function might be?
If you are dealing with actual strings, sort them alphabetically ascending. If you are dealing with integers, there are some workarounds:
Convert the integer into bit-strings and sort alphabetically ascending.
or
Reverse the bits in the integer (011 becomes 110) and sort numerically ascending.
However, these workarounds might be slow. The function F described by you turns out to be pretty simple (assuming you are given the positions of the 1-bits) and is therefore a good solution.
To come up with an implementation of F we first look at the sequence of all bit-strings with exactly two 1-bits. Here we don't care about the length of the bit-string. We simply increment the bit-strings from left to right (opposed to the usual Arabic interpretation of numbers where you increment from right to left).
Next to the actual bit-string I replaced all 0 by ., the left 1 by l, and the right 1 by r. This makes it easier to see the pattern.
1: 11 lr
2: 101 l.r
3: 011 .lr
4: 1001 l..r
5: 0101 .l.r
6: 0011 ..lr
7: 10001 l...r
8: 01001 .l..r
9: 00101 ..l.r
10: 00011 ...lr
11: 100001 l....r
… … …
The function F is supposed to count the steps needed to increment to a given bit-string.
In the following, L is the index of the left 1-bit and R is the index of the right 1-bit. As in your question, we use 1-based indices; that is, the leftmost character in a string has index 1.
For the right 1-bit to move one position to the right, the left 1-bit has to "catch up". If the left 1-bit starts at L=1, then catching up takes R-1 steps (when counting the start step L=1 too). For the right 1-bit to reach a high position, the left 1-bit has to catch up multiple times, as it is returned to the start each time the right 1-bit moves one to the right. Each catch-up takes a little longer, as the right 1-bit is further from the start: the first takes 1 step, then 2, then 3, and so on. Thus, for the right 1-bit to reach position R, it takes 1+2+…+(R-2) = (R-1)·(R-2)/2 steps. After that, we only have to move the left 1-bit to its position, which contributes the remaining L steps. Therefore the function is
F(L,R) := (R-1)·(R-2) / 2 + L
Note that this function only is easy to implement if you know L and R. If you have an integer and would need to determine L and R first, it might be easier and faster to reverse the integer instead and sort numerically ascending. Determining L and R might be slower than reversing the bits in the integer.
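As a quick check, the formula is a one-liner (Python sketch; `f` is a hypothetical name and assumes it is given the 1-based positions l < r of the two 1-bits):

```python
def f(l, r):
    """1-based index of the two-1-bit string whose 1s sit at positions l < r,
    in the left-to-right increment order shown in the table above."""
    return (r - 1) * (r - 2) // 2 + l

# f(1, 2) -> 1 (11), f(1, 3) -> 2 (101), f(2, 3) -> 3 (011), f(1, 4) -> 4 (1001), ...
```

The values match the enumerated table row by row.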

Understanding a simple round-robin arbiter verilog code

Have a look at the following arbiter.v code :
Someone told me to think of rr_arbiter as a simplified ripple-borrow circuit that wraps around.
'base' is a one hot signal indicating the first request that should be
considered for a grant.
huh ? Do you guys understand how to generate 'base' input signal ?
Notice the subtraction. The borrow logic causes it to search to find
the next set bit.
why ~(double_req-base) ?
module arbiter (
  req, grant, base
);
  parameter WIDTH = 16;
  input  [WIDTH-1:0] req;
  output [WIDTH-1:0] grant;
  input  [WIDTH-1:0] base;

  wire [2*WIDTH-1:0] double_req   = {req, req};
  wire [2*WIDTH-1:0] double_grant = double_req & ~(double_req - base);
  assign grant = double_grant[WIDTH-1:0] | double_grant[2*WIDTH-1:WIDTH];
endmodule
Edit: as someone pointed out, my original post only explained how to set the input signals and gave two special cases as examples. Let me try to explain how it actually works. A good starting point is your question:
why ~(double_req-base) ?
As someone pointed out to you, this is based on the principle of the ripple-borrow subtractor. When you subtract one number from another, regardless of the numeral system, you work from the lowest-order digit upward, subtracting digits of the same order. In binary, this looks like this:
1011 = 11
0010 - = 2 -
────── ────
1001 = 9
As you can see, 1 - 1 is valid and yields 0. When a digit subtraction is not possible, you borrow from a higher order, just as in decimal subtraction. Another binary example with borrowing:
1001 = 01(10)1 = 9
0010 - = 00 1 0 - = 2 -
────── ───────── ───
0111 = 01 1 1 = 7
Since 0 - 1 is not possible in the second position, we take the 1 from the fourth position, set the third position to 1, and set the second position to 10 (that is, 2 in decimal). This is exactly the borrowing you know from decimal subtraction.
Important for the arbiter: the next 1 of the original number (req), seen from the position of base, will be set to 0 by the subtraction, and all bits between the base position and that bit will be set to 1. After inverting the result of the subtraction, only that position is 1 as seen from the base.
However, bits of lower order than the base could still be 1 with this technique. Therefore, we AND the original number with the calculated one (double_req & ~(double_req-base)). This makes sure that any 1s at positions lower than base are eliminated.
Finally, doubling up req makes sure the subtraction never runs out of positions to borrow from. If it has to borrow from the "second" doubled-up block, the disjunction (double_grant[WIDTH-1:0] | double_grant[2*WIDTH-1:WIDTH]) makes sure the right index is returned. I added an example for this case to the examples below.
Original post
You can interpret base as your starting index in req. This is the first bit the code will consider to arbitrate. You should set this value to last_arbitrated_position + 1.
Take a look at the 4 bit (pseudocode) example I created below. Let's take some arbitrary numbers:
req = 4'b1101 // Your vector from which one position should be arbitrated
base = 4'b0010 // The second position is the first position to consider
Now, from arbiter.v, the following follows:
double_req = 1101 1101
double_grant = 1101 1101 & ~(1101 1011) = 1101 1101 & 0010 0100 = 0000 0100
In the last steps, arbiter.v then actually assigns the position that should be granted:
grant = 0100 | 0000 = 0100
This is correct, because we set the second position as base, and the next valid position was the third. Another example, where the base is a position which is also valid in req, is:
req = 4'b1111
base = 4'b0010
double_req = 1111 1111
double_grant = 1111 1111 & ~(1111 1101) = 1111 1111 & 0000 0010 = 0000 0010
grant = 0010 | 0000 = 0010
Which is again correct, because in this case we defined that the first position that may be arbitrated is the second position, and this position is indeed valid.
The code example you posted also takes care of wrapping around the most-significant bit. This means that if you set a base, but there is no valid position greater than that base, it will wrap around and start arbitration from the least-significant bit. An example for this case would be:
req = 4'b0010
base = 4'b0100
double_req = 0010 0010
double_grant = 0010 0010 & ~(0001 1110) = 0010 0010 & 1110 0001 = 0010 0000
grant = 0000 | 0010 = 0010
The purpose of the arbiter is to decide which request to grant while avoiding granting the same source repeatedly.
Now presume that we have several req bits set and the base, which is the state of the previously granted request shifted left by 1.
So, your task is to find the first set bit at or to the left of the base bit in order to grant that request. The subtraction flips all bits from the base bit leftward, up to and including the first set bit, e.g.
1100 << req
-0010 << base
====
1010
-^^-
^
the req[2] bit is the one that should be granted; it was flipped to '0'. All bits left of it, and all bits right of the base bit, were unchanged. We need to extract the bit that was changed last, i.e. the one flipped to '0'.
The way to do it is to AND the value of the request with the inversion of the result of the subtraction. The changed bits always form a single pattern: the leftmost is '0' and the rest are '1'. That leftmost '0' sits exactly where the request had a '1'. So the inversion turns it into a '1' and inverts all the unchanged bits to its left and right; ANDing with the original request then efficiently discards the unchanged bits while guaranteeing that our newly found '1' is preserved.
1010 << result of subtraction
~0101 << ~(req-base)
&1100 << req
=0100
Now, the problem appears if we are heading toward overflow:
0010
-1000
====
1010
~0101
&0010 << req
=0000 << oops
But we want to get bit[1] from the request.
The way to solve it is to concatenate another copy of req in front of this one and let the subtraction continue borrowing into the lowest bit of the top portion:
0010 0010
-0000 1000
====
0001 1010
~1110 0101
&0010 0010
=0010 0000
now we only need to choose between upper and lower parts:
0010 | 0000 = 0010
here you are, you got your result.
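The whole mechanism can be modeled in a few lines of Python (a sketch, not RTL; `rr_arbiter` and the `width` parameter are names I chose, and Python integers stand in for the Verilog vectors):

```python
def rr_arbiter(req, base, width=4):
    """Model of: double_grant = double_req & ~(double_req - base);
    grant = lower half | upper half.  base must be one-hot within width bits."""
    mask = (1 << (2 * width)) - 1
    double_req = ((req << width) | req) & mask      # {req, req}
    double_grant = double_req & (~(double_req - base) & mask)
    # fold the doubled vector back onto width bits (handles the wrap-around case)
    return (double_grant | (double_grant >> width)) & ((1 << width) - 1)
```

Feeding it the three worked examples above reproduces the same grants, including the wrap-around case where no request sits at or above base.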

Generate all "without-replacement" subsets series

I'm looking for a way to generate all possible subcombinations of a set, where each element can be used at most one time.
For example, the set {1,2,3} would yield
{{1},{2},{3}}
{{1},{2,3}}
{{1,2},{3}}
{{2},{1,3}}
{{1,2,3}}
A pseudocode hint would be great. Also, if there is a term for this, or a terminology that applies, I would love to learn it.
First, a few pointers.
The separation of a set into disjoint subsets is called a set partition (Wikipedia, MathWorld).
A common way to encode a set partition is a restricted growth string.
The number of set partitions is a Bell number, and they grow fast: for a set of 20 elements, there are 51,724,158,235,372 set partitions.
Here is how encoding works.
Look at the elements in increasing order: 1, 2, 3, 4, ... .
Let c be the current number of subsets, initially 0.
Whenever the current element is the lowest element of its subset, we assign this set the number c, and then increase c by 1.
Regardless, we write down the number of the subset which contains the current element.
It follows from the procedure that the first element of the string will be 0, and each next element is no greater than the maximum so far plus one. Hence the name, "restricted growth strings".
For example, consider the partition {1,3},{2,5},{4}.
Element 1 is the lowest in its subset, so subset {1,3} is labeled by 0.
Element 2 is the lowest in its subset, so subset {2,5} is labeled by 1.
Element 3 is in the subset already labeled by 0.
Element 4 is the lowest in its subset, so subset {4} is labeled by 2.
Element 5 is in the subset already labeled by 1.
Thus we get the string 01021.
The string tells us:
Element 1 is in subset 0.
Element 2 is in subset 1.
Element 3 is in subset 0.
Element 4 is in subset 2.
Element 5 is in subset 1.
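The labeling procedure above can be sketched in Python (a hypothetical helper; `rgs_encode` assumes the partition is given as a list of blocks over {1..n}):

```python
def rgs_encode(partition, n):
    """Encode a set partition of {1..n} (list of lists) as a restricted growth string."""
    label = {}                # block index -> subset number, assigned in order of minima
    c = 0                     # current number of subsets seen so far
    out = []
    for x in range(1, n + 1):
        block = next(i for i, b in enumerate(partition) if x in b)
        if block not in label:          # x is the lowest element of its subset
            label[block] = c
            c += 1
        out.append(label[block])
    return "".join(map(str, out))

# rgs_encode([[1, 3], [2, 5], [4]], 5) gives "01021", as in the walkthrough above
```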
To get a feel of it from a different angle, here are all partitions of a four-element set, along with the respective restricted growth strings:
0000 {1,2,3,4}
0001 {1,2,3},{4}
0010 {1,2,4},{3}
0011 {1,2},{3,4}
0012 {1,2},{3},{4}
0100 {1,3,4},{2}
0101 {1,3},{2,4}
0102 {1,3},{2},{4}
0110 {1,4},{2,3}
0111 {1},{2,3,4}
0112 {1},{2,3},{4}
0120 {1,4},{2},{3}
0121 {1},{2,4},{3}
0122 {1},{2},{3,4}
0123 {1},{2},{3},{4}
As for pseudocode, it's relatively straightforward to generate all such strings.
We do it recursively.
Maintain the value c, assign every number from 0 to c inclusive to the current position, and for each such choice, recursively construct the rest of the string.
Also it is possible to generate them lazily, starting with a string with all zeroes and repeatedly finding the lexicographically next such string, akin to how next_permutation is used to list all permutations.
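The recursive scheme can be sketched like this (a minimal Python version; `all_rgs` is my own name, and it assumes n ≥ 1):

```python
def all_rgs(n):
    """All restricted growth strings of length n (n >= 1), in lexicographic order."""
    out = []
    def rec(prefix, c):            # c = (max value so far) + 1
        if len(prefix) == n:
            out.append("".join(map(str, prefix)))
            return
        for v in range(c + 1):     # next digit may be at most the max so far plus one
            rec(prefix + [v], max(c, v + 1))
    rec([0], 1)                    # the first digit is always 0
    return out
```

For n = 4 this produces exactly the 15 strings tabulated above, from "0000" to "0123", and the counts follow the Bell numbers (52 strings for n = 5).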
Lastly, if you'd like to see more than that (along with the mentioned next function), here's a bit of self-promotion.
Recently, we did a learning project at my university, which required the students to implement various functions for combinatorial objects with reasonable efficiency.
Here is the part we got for restricted growth strings; I linked the header part which describes the functions in English.

What is the >>> symbol in Verilog?

May I know what this symbol >>> is in Verilog, and when should I use it? Thanks!
e.g.
always @(posedge Clock) begin
  if (Clear) begin
    a <= c >>> 8;
    b <= d >>> 16;
  end
end
It is the arithmetic right-shift operator (see pages 19-20 of the link). This is the reverse of Java, where >> is arithmetic right shift and >>> is logical right shift.
Arithmetic right shift handles both positive and negative right-shifted numbers with this behavior:
Shift right the specified number of bits, filling with the value of the sign bit if the expression is signed, otherwise filling with zero.
To illustrate: if you have a signed expression with, say, this value:
1000 1100
--------- >>> 2
1110 0011 //note the left most bits are 1
But for unsigned:
1000 1100
--------- >>> 2
0010 0011
The leftmost bits will be filled with 0.
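If you want to play with the two behaviors outside a simulator, here is a small Python sketch of 8-bit arithmetic vs. logical right shift (`sra8`/`srl8` are hypothetical names; in Verilog the behavior is picked by the operator and the signedness of the expression):

```python
def sra8(x, n):
    """Arithmetic shift right of an 8-bit value x, given as an unsigned bit pattern."""
    if x & 0x80:                              # sign bit set: fill vacated bits with 1s
        return ((x >> n) | (0xFF << (8 - n))) & 0xFF
    return x >> n                             # sign bit clear: same as logical shift

def srl8(x, n):
    """Logical shift right: always fill vacated bits with 0s."""
    return x >> n

# sra8(0b10001100, 2) -> 0b11100011, matching the signed example above
# srl8(0b10001100, 2) -> 0b00100011, matching the unsigned example above
```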

Computer_Architecture + Verilog

I am doing a divider circuit in verilog and using the non-restoring division algorithm.
I am having trouble representing the remainder as a fractional binary number.
For example if I do 0111/0011 (7/3) I get the quotient as 0010 and remainder as 0001 which is correct but I want to represent it as 0010.0101.
Can someone help?
Suppose, as in your example, you are dividing 4 bit numbers, but want an extra 4 bits of fractional precision in the result.
One approach is to simply multiply the numerator by 2^4 before doing the division.
i.e.
instead of
0111/0011 = 0010 (+remainder)
do
01110000/0011 = 00100101 (+remainder)
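In software terms this is just fixed-point division (a sketch with a hypothetical helper name `div_q4`; Verilog would do the same with a widened numerator):

```python
def div_q4(num, den):
    """Quotient with 4 fractional bits: scale the numerator by 2**4 before dividing."""
    return (num << 4) // den

# div_q4(7, 3) -> 0b00100101, read as 0010.0101 = 2.3125, approximating 7/3
```

The top four bits of the result are the integer quotient 0010 and the bottom four are the binary fraction .0101, exactly the format asked for.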
Just do the math! You already have Q (the quotient) and R (the remainder). Multiply the remainder by 10 (1010 in binary) and apply your division logic again. For example, 7/3 gives Q=2 and R=1; multiplying that 1 by 10 gives 10, and 10/3 gives Q=3, so your answer is 2.3 (the first division's Q, then the second division's Q as the next decimal digit). Repeat for more digits. Try it, it works, and it is very easy to implement in Verilog.
