What is this command in Verilog?

I am new to Verilog and I was reading some code online. I came across the following line and didn't understand exactly what it means:
wr_ptr_reg <= {ADDR_WIDTH + 1{1'b0}};
I would appreciate it if someone could explain it.

1'b0 describes a 1-bit wide binary zero value. <n>{<value>} gives a bit vector formed by concatenating n copies of the bit vector value. In this case, it creates a bit vector containing ADDR_WIDTH + 1 copies of the 0 bit. ADDR_WIDTH will be a previously declared parameter representing some constant value (probably stored as an integer, which is basically a 32-bit bit vector). So the statement stores zero to wr_ptr_reg.

<= indicates a non-blocking assignment. This basically means that the value of wr_ptr_reg will not be updated until the rest of the current block has finished. You can treat all non-blocking assignments in a block as if they happen at the same time, when the block finishes.
It would be much clearer to add parentheses:
wr_ptr_reg <= {(ADDR_WIDTH + 1){1'b0}};

{..} is a concatenation operator. { count { vector } } means concatenate the vector count times.
In this case the vector is a single bit which is repeated ADDR_WIDTH + 1 times.
Thus you get a vector consisting of (ADDR_WIDTH + 1) zeros.
Another example: { 4 { 3'b101 } } is equal to 12'b101101101101.
Thus you set wr_ptr_reg to all zeros (assuming wr_ptr_reg consists of ADDR_WIDTH + 1 bits).
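To see what replication computes numerically, here is a minimal Python sketch of the operator's arithmetic (an illustration only: values are modelled as plain integers, and Verilog's X/Z states are ignored):

def replicate(value, width, n):
    # concatenate n copies of a width-bit value, most significant copy first
    out = 0
    for _ in range(n):
        out = (out << width) | (value & ((1 << width) - 1))
    return out

print(bin(replicate(0b101, 3, 4)))  # 0b101101101101, i.e. {4{3'b101}}
print(replicate(0, 1, 8))           # 0, i.e. {8{1'b0}} when ADDR_WIDTH + 1 = 8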

Circular Buffer: Selecting Range of Indices that Include the Wraparound Point

I think this question is best understood with an example. So here we go:
Imagine the following are defined:
parameter number_of_points_before_point_of_interest = 4;
logic [15:0] test_data = 16'b0000111100001111;
logic [3: 0] point_of_interest;
logic [7: 0] output_data;
If the value assigned to point_of_interest is 1 and number_of_points_before_point_of_interest is 4, I want my output_data to be {test_data[E:F], test_data[5:0]}, i.e. 8'b00111100.
So in essence, I want to take 8 bits starting from (point_of_interest - number_of_points_before_point_of_interest) and ending at (point_of_interest - number_of_points_before_point_of_interest + 7).
Since point_of_interest is a variable number, the following two indexing methods are invalid:
To make the code more concise: point_of_interest --> pot
number_of_points_before_point_of_interest --> num_pt_before_pot
buffer[pot - num_pt_before_pot: 4'hF] // Invalid since pot not constant
buffer[pot -: num_pt_before_pot] // Part-select doesn't work either
Note: the variability of pot is not an issue in the second case, since the starting point of a part-select can be variable. Regardless, part-select does not give the desired result in this example.
Your help is very much appreciated. Thanks in advance
A simple trick you can do is to replicate your test_data and then take a slice of it:
output_data = {2{test_data}}[16+pot-before_pot-:2*before_pot];
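Numerically, the trick looks like this in Python (a hedged illustration using the example values above; 16 is the width of test_data):

test_data = 0b0000111100001111
doubled = (test_data << 16) | test_data   # models {2{test_data}}
pot, before_pot = 1, 4
msb = 16 + pot - before_pot               # 13: start of the [msb -: width] select
width = 2 * before_pot                    # 8 bits
out = (doubled >> (msb - width + 1)) & ((1 << width) - 1)
print(format(out, "08b"))                 # 00111100, as requested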

Solving math with integers larger than any available integer data type

In some programming competitions where the numbers are larger than any available integer data type, we often use strings instead.
Question 1:
Given these large numbers, how do we calculate e and f in the expression below?
(a/b) + (c/d) = e/f
note: GCD(e,f) = 1, i.e. they must be in minimised form. For example {e,f} = {1,2} rather than {2,4}.
Also, all a,b,c,d are large numbers known to us.
Question 2:
Can someone also suggest a way to find the GCD of two big numbers (bigger than any available integer type)?
I would suggest using full bytes or words rather than strings.
It is relatively easy to think in base 256 instead of base 10, and it is a lot more efficient because the processor does not have to multiply and divide by 10 all the time. Ideally, choose a word size that is half the processor's natural word size, as that makes carries easy to implement. Thinking in base 64K or 4G is slightly more complex, of course, but even better than base 256.
The only downside is generating the initial big numbers from the ASCII input, which you get for free in base 10. With a larger word size you can make this more efficient by processing several decimal digits into a single word at a time (e.g. 9 digits at a time into a base-4G word), then performing a long multiply of that single word into the correct offset in your large-integer format.
A compromise might be to run your engine in base 1 billion: this will still be 9 or 81 times more efficient than base 10!
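For instance, here is a minimal Python sketch of that packing step, assuming base-1-billion (10^9) limbs stored least-significant first:

def to_limbs(decimal_str, digits_per_limb=9):
    # pack a decimal string into base-10**digits_per_limb limbs,
    # least-significant limb first
    limbs = []
    s = decimal_str
    while s:
        limbs.append(int(s[-digits_per_limb:]))
        s = s[:-digits_per_limb]
    return limbs

print(to_limbs("12345678901234567890"))  # [234567890, 345678901, 12]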
The simplest way to solve this equation is to multiply a/b by d/d and c/d by b/b, so that both fractions have the common denominator b*d.
I think you will then need to prime factorise your big numbers e and f to find any common factors. Remember to search again for the same factor squared.
Of course, that means you have to write a prime generating sieve. You only need to generate factors up to the square root, or half the digits of the min value of e and f.
You could prime factorise b and d to get a lower initial denominator, but you will need to do it again anyway after the addition.
I think that the way to solve this is to separate the problem:
Process the input numbers as arrays of characters (i.e. std::string)
Make a class where each object can store an std::list (or similar) that represents one of the large numbers, and can do the needed arithmetic with your data
You can then solve your problems normally, without having to worry about your large inputs causing overflow.
Here's a webpage that explains how you can have such an arithmetic class (with sample code in C++ showing addition).
Once you have such an arithmetic class, you no longer need to worry about how to store the data or any overflow.
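To give a flavour of what such a class does internally, here is a minimal Python sketch of schoolbook addition on little-endian digit lists (an illustration of the idea, not the linked page's code):

def add_digits(a, b):
    # schoolbook addition of two little-endian base-10 digit lists
    out, carry = [], 0
    for i in range(max(len(a), len(b))):
        s = carry
        if i < len(a): s += a[i]
        if i < len(b): s += b[i]
        out.append(s % 10)
        carry = s // 10
    if carry:
        out.append(carry)
    return out

print(add_digits([9, 9, 9], [2, 1]))  # 999 + 12 = 1011 -> [1, 1, 0, 1]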
I get the impression that you already know how to find the GCD when you don't have overflow issues, but just in case, here's an explanation of finding the GCD (with C++ sample code).
As for the specific math problem:
// given formula: a/b + c/d = e/f
// = ( ( a*d + b*c ) / ( b*d ) )
// Define some variables here to save on copying
// (I assume that your class that holds the
// large numbers is called "ARITHMETIC")
ARITHMETIC numerator = a*d + b*c;
ARITHMETIC denominator = b*d;
ARITHMETIC gcd = GCD( numerator , denominator );
// because we know that GCD(e,f) is 1, this implies:
ARITHMETIC e = numerator / gcd;
ARITHMETIC f = denominator / gcd;
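The same computation is easy to check in Python, whose integers are unbounded; the Euclidean algorithm shown here also answers Question 2 without any factorising:

def gcd(x, y):
    # Euclidean algorithm; works on arbitrarily large integers
    while y:
        x, y = y, x % y
    return x

def add_fractions(a, b, c, d):
    # a/b + c/d = (a*d + b*c) / (b*d), reduced so that GCD(e, f) = 1
    num, den = a * d + b * c, b * d
    g = gcd(num, den)
    return num // g, den // g

print(add_fractions(1, 2, 1, 3))  # (5, 6)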

Using " * " for multiplication of binary numbers, only gives me addition, why? (Code here)

I'm learning operations with "+", "-" and "*"; addition and subtraction work well, but multiplication gives me only additions. Link to the code:
http://www.edaplayground.com/x/NvT
I checked the code and can't understand what's going on. I gave enough space (bits) to the result variable.
BTW, it's code intended for fixed-point operations including fractional numbers, but everything is calculated as integers.
Your select signal is only 1 bit wide.
So when you set select = 2, only the lower bit of 2 (2'b10) gets assigned, i.e. 0.
You should change the declaration of select to:
input [1:0] select; // In the module
reg [1:0] select; // In the testbench
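As an aside, a tiny Python sketch of what this silent truncation does (Verilog keeps only as many low bits as the declared width):

def assign(value, width):
    # model Verilog's implicit truncation to the target's declared width
    return value & ((1 << width) - 1)

print(assign(2, 1))  # 0 -> a 1-bit select can never hold 2, so that case is never reached
print(assign(2, 2))  # 2 -> with a 2-bit select the multiply case is reachable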
To avoid such errors I would advise you to use the complete notation for values:
x'tnnn...nnn
where x is the width of the signal, t is the base (d for decimal, h for hexadecimal, b for binary, ...) and nnn...nnn is the value written in that base.
For example, for the decimal value 2 there are several notations that make sense in different situations:
2'd2  // 2-bit decimal
2'h2  // 2-bit hexadecimal
2'b10 // 2-bit binary
For more information about these notations you can read this PDF.

Lua: Working with Bit32 Library to Change States of I/O's

I am trying to understand exactly how programming in Lua can change the state of I/Os with a Modbus I/O module. I have read the Modbus protocol and understand the registers, coils, and how a read/write string should look. But right now, I am trying to grasp how I can manipulate the read/write bit(s) and how functions can perform these actions. I know I may be very vague right now, but hopefully the following functions, along with some questions throughout them, will help me better convey where I am having the disconnect. It has been a very long time since I first learned about bit/byte manipulation.
local funcCodes = { --[[I understand this part]]
  readCoil = 1,
  readInput = 2,
  readHoldingReg = 3,
  readInputReg = 4,
  writeCoil = 5,
  presetSingleReg = 6,
  writeMultipleCoils = 15,
  presetMultipleReg = 16
}

local function toTwoByte(value)
  return string.char(value / 255, value % 255) --[[why do both of these to the same value??]]
end

local function readInputs(s)
  local s = mperia.net.connect(host, port)
  s:set_timeout(0.1)
  local req = string.char(0,0,0,0,0,6,unitId,2,0,0,0,6)
  local req = toTwoByte(0) .. toTwoByte(0) .. toTwoByte(6) ..
    string.char(unitId, funcCodes.readInput) .. toTwoByte(0) .. toTwoByte(8)
  s:write(req)
  local res = s:read(10)
  s:close()
  if res:byte(10) then
    local out = {}
    for i = 1,8 do
      local statusBit = bit32.rshift(res:byte(10), i - 1) --[[What is bit32.rshift actually doing to the string? and the same is true for the next line with bit32.band.]]
      out[#out + 1] = bit32.band(statusBit, 1)
    end
    for i = 1,5 do
      tDT.value["return_low"] = tostring(out[1])
      tDT.value["return_high"] = tostring(out[2])
      tDT.value["sensor1_on"] = tostring(out[3])
      tDT.value["sensor2_on"] = tostring(out[4])
      tDT.value["sensor3_on"] = tostring(out[5])
      tDT.value["sensor4_on"] = tostring(out[6])
      tDT.value["sensor5_on"] = tostring(out[7])
      tDT.value[""] = tostring(out[8])
    end
  end
  return tDT
end
If I need to be more specific with my questions, I'll certainly try. But right now I'm having a hard time connecting the dots with what is actually going on in the bit/byte manipulation here. I've read both books and online sources on the bit32 library, but still don't know what these functions are really doing. I hope that with these examples, I can get some clarification.
Cheers!
--[[why do both of these to the same value??]]
There are two different values here: value / 255 and value % 255. The "/" operator represents division, and the "%" operator represents (basically) taking the remainder of a division.
Before proceeding, I'm going to point out that 255 here should almost certainly be 256, so let's make that correction now. The reason for it should become clear soon.
Let's look at an example.
value = 1000
print(value / 256) -- 3.90625
print(value % 256) -- 232
Whoops! There was another problem. string.char wants integers (in the range 0 to 255 -- which has 256 distinct values, counting 0), and we may be giving it a non-integer. Let's fix that problem:
value = 1000
print(math.floor(value / 256)) -- 3
-- in Lua 5.3, you could also use value // 256 to mean the same thing
print(value % 256) -- 232
What have we done here? Let's look at 1000 in binary. Since we are working with two-byte values, and each byte is 8 bits, I'll include 16 bits: 0b0000001111101000. (0b is a prefix that is sometimes used to indicate that the following number should be interpreted as binary.) If we split this into the first 8 bits and the second 8 bits, we get 0b00000011 and 0b11101000. What are these numbers?
print(tonumber("00000011",2)) -- 3
print(tonumber("11101000",2)) -- 232
So what we have done is split a 2-byte number into two 1-byte numbers. Why does this work? Let's go back to base 10 for a moment. Suppose we have a four-digit number, say 1234, and we want to split it into two two-digit numbers. Well, the quotient 1234 / 100 is 12, and the remainder of that division is 34. In Lua, that's:
print(math.floor(1234 / 100)) -- 12
print(1234 % 100) -- 34
Hopefully, you can understand what's happening in base 10 pretty well. (More math here is outside the scope of this answer.) Well, what about 256? 256 is 2 to the power of 8. And there are 8 bits in a byte. In binary, 256 is 0b100000000 -- a 1 followed by a bunch of zeros. That means it has the same ability to split binary numbers apart as 100 did in base 10.
Another thing to note here is the concept of endianness. Which should come first, the 3 or the 232? It turns out that different computers (and different protocols) have different answers for this question. I don't know what is correct in your case, you'll have to refer to your documentation. The way you are currently set up is called "big endian" because the big part of the number comes first.
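To make the endianness point concrete, here is the same split in Python (an aside; the Lua code above is big-endian because it emits the quotient byte first):

value = 1000
hi, lo = divmod(value, 256)   # (3, 232): quotient and remainder in one step
print(bytes([hi, lo]))        # b'\x03\xe8' -> big-endian byte order
print(bytes([lo, hi]))        # b'\xe8\x03' -> little-endian byte order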
--[[What is bit32.rshift actually doing to the string? and the same is true for the next line with bit32.band.]]
Let's look at this whole loop:
local out = {}
for i = 1,8 do
  local statusBit = bit32.rshift(res:byte(10), i - 1)
  out[#out + 1] = bit32.band(statusBit, 1)
end
And let's pick a concrete number for the sake of example, say 0b01100111. First let's look at the band (which is short for "bitwise and"). What does this mean? It means: line up the two numbers and see where 1's occur in the same place in both.
     01100111
band 00000001
-------------
     00000001
Notice first that I've put a bunch of 0's in front of the 1. Preceding zeros don't change the value of the number, but I want all 8 bits for both numbers so that I can check each digit (bit) of the first number against each digit of the second number. In each place where both numbers had a 1 (the top number had a 1 "and" the bottom number had a 1), I put a 1 in the result; otherwise I put 0. That's bitwise and.
When we bitwise and with 0b00000001 as we did here, you should be able to see that we will only get a 1 (0b00000001) or a 0 (0b00000000) as the result. Which we get depends on the last bit of the other number. We have basically separated out the last bit of that number from the rest (which is often called "masking") and stored it in our out array.
Now what about the rshift ("right shift")? To shift right by one, we discard the rightmost digit and move everything else over one space to the right. (At the left, we usually add a 0 so we still have 8 bits; as usual, adding a bit in front of a number doesn't change it.)
right shift 01100111
            \\\\\\\\
            0110011 ... 1 <-- discarded
(Forgive my horrible ASCII art.) So shifting right by 1 changes our 0b01100111 to 0b00110011. (You can also think of this as chopping off the last bit.)
Now what does it mean to shift right by a different number? Well, shifting by zero does not change the number. To shift by more than one, we just repeat this operation however many times we are shifting by. (To shift by two, shift by one twice, etc.) (If you prefer to think in terms of chopping, a right shift by x chops off the last x bits.)
So on the first iteration through the loop, the number will not be shifted, and we will store the rightmost bit.
On the second iteration through the loop, the number will be shifted by 1, and the new rightmost bit will be what was previously the second from the right, so the bitwise and will mask out that bit and we will store it.
On the next iteration, we will shift by 2, so the rightmost bit will be the one that was originally third from the right, so the bitwise and will mask out that bit and store it.
On each iteration, we store the next bit.
Since we are working with a byte, there are only 8 bits, so after 8 iterations through the loop, we will have stored the value of each bit into our table. This is what the table should look like in our example:
out = {1,1,1,0,0,1,1,0}
Notice that the bits are reversed from how we wrote them 0b01100111 because we started looking from the right side of the binary number, but things are added to the table starting on the left.
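The whole loop fits in one line of Python, which is a handy way to check the table above (an illustration only; the Lua code is doing exactly this, one bit per iteration):

byte = 0b01100111
out = [(byte >> i) & 1 for i in range(8)]  # rshift, then mask with 1; LSB first
print(out)                                 # [1, 1, 1, 0, 0, 1, 1, 0]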
In your case, it looks like each bit has a distinct meaning. For example, a 1 in the third bit could mean that sensor1 was on and a 0 in the third bit could mean that sensor1 was off. Eight different pieces of information like this were packed together to make it more efficient to transmit them over some channel. The loop separates them again into a form that is easy for you to use.

Optimize this comparator for better synthesis

I have a module which is basically a LUT whose input is 64 bits wide. The LUT's always block consists of a case statement that compares the input to over 200 different integers. The default case checks whether the input is greater than 100 before assigning the output a default value.
My problem is that when I synthesize this, it results in a 65-bit comparator, and I was wondering if there are better ways of doing it so that a large comparator isn't synthesized.
Here's my code snippet:
always @(in)
begin
  case (in)
    -100: out <= 495050;
    -99: out <= 500000;
    ...
    99: out <= 99500000;
    100: out <= 99504950;
    default:
      begin
        if (in > 100)
          out <= 99504950;
        else
          out <= 495050;
      end
  endcase
end
Thanks,
Faisal
Assuming that in is a 64-bit number, what you can do is chop it down so that you only have to 'compare' the lowest few bits, and then do quick checks to see whether the number is outside the range you need.
For example, let's chop in off at 8 bits and assign it to an 8-bit signed register. This should allow you to represent values between -128 and 127.
You can test if the full number is larger than 127 by: !in[63] && (|in[62:7]) (check that the MSB is not set, and at least one bit from bit 7 upward is 1).
You can test if the full number is less than -128 by: in[63] && !(&in[62:7]) (check that the MSB is set, and at least one bit from bit 7 upward is 0).
Now you know three things:
if the number is larger than 127
if the number is between 127 and -128
and if the number is less than -128.
You should be able to use a small 8-bit LUT for the in-between case, or use your default values if it's in either of the outer ranges.
Note: I might expect a good synthesizer to do this automatically for you, but if you look at the generated netlist and it's too large, you can try this to see if it gives a better result.
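Here is a hedged Python model of this scheme, just to make the bit tests concrete (small_lut stands for a hypothetical 256-entry table for the in-between case; n is the raw 64-bit two's-complement pattern of in):

def lut_out(n, small_lut):
    # n: the 64-bit two's-complement bit pattern, as an unsigned Python int
    sign = (n >> 63) & 1
    upper = (n >> 7) & ((1 << 56) - 1)   # bits 62..7
    if not sign and upper != 0:          # value > 127: high default
        return 99504950
    if sign and upper != (1 << 56) - 1:  # value < -128: low default
        return 495050
    return small_lut[n & 0xFF]           # 8-bit index for the in-between case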
It seems like you have calculated a table of some function's values over the input range x = [-100, 100]. If so, it would be better to store them in memory one after another, starting from some base address. To read them, you can put base + x + 100 on the address bus and obtain the value you need.
In case you need a gigantic multiplexer, you may want to try using a "parallel" case directive.
As for the comparator in "default": I have the same problem, so I am waiting for an answer.
I wanted to write this as a comment, but I have no such privilege.
