I have read that use of nonblocking assignments is not allowed in Verilog functions. Can anyone suggest a plausible explanation for this?
The IEEE Std for Verilog (1364-2001), section "10.3.4 Function rules" states:
A function shall not have any nonblocking assignments.
The 1800-2009 IEEE Std elaborates more on this:
Functions shall execute with no delay. Thus, a process calling a
function shall return immediately. Statements that do not block shall
be allowed inside a function; specifically, nonblocking assignments,
event triggers, clocking drives, and fork-join_none constructs shall
be allowed inside a function.
The intention was for functions to be simple to evaluate in the Verilog event queue. If you need to advance time, use a task instead of a function.
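As a rough sketch of that distinction (parity_of, drive_bus, and bus_reg are invented names, and bus_reg is assumed to be a reg declared in the enclosing module), a function describes an immediate computation, while a task may contain delays and nonblocking assignments:

// Hypothetical illustration only.
function parity_of;                  // evaluates and returns immediately
    input [7:0] data;
    parity_of = ^data;               // blocking assignment to the function name
endfunction

task drive_bus;                      // may consume time, so delays and NBAs are fine here
    input [7:0] data;
    #5 bus_reg <= data;              // schedules a nonblocking update after 5 time units
endtask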
Try not to think about functions in Verilog like functions in C:
Functions in Verilog are designed to be a developer-friendly way to instantiate identical combinational logic in multiple places at once rather than having to write it over again / make a module for it. A lot of "newbies" to Verilog try to rationalize functions like they are C functions, and while they are "returning" a value, it is easier (and more correct) in the end to conceptualize them as blocks of combinational gates.
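For example (a made-up sketch; add_sat and the signal names are invented), one function can be called from an assign statement and from an always block, and each call effectively drops its own copy of the combinational logic into the design:

function [7:0] add_sat;              // 8-bit saturating adder, written once
    input [7:0] a, b;
    reg   [8:0] sum;
    begin
        sum     = a + b;
        add_sat = sum[8] ? 8'hFF : sum[7:0];
    end
endfunction

assign y = add_sat(p, q);            // one copy of the adder gates

always @(posedge clk)
    r <= add_sat(s, t);              // a second, independent copy feeding a register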
Note that this is different from a "task"; tasks are more generally used for executing things "in order", which makes them more useful than functions in a testbench situation.
As you learn Verilog try not to rationalize the HDL you write as "code", because it is a different style of thinking.
As we know, there are two types of flip-flops with respect to reset, namely asynchronous-reset and synchronous-reset.
Similarly, do we have latches of asynchronous and synchronous types?
If yes, how do we model them using a Verilog code?
The terms asynchronous and synchronous are relative to a clock or some other synchronizing signal. A latch only has an enable or load signal, so there is nothing for it to be synchronized to, and those terms do not apply.
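For reference, a plain level-sensitive D latch is usually modeled like the sketch below (en, d, and q are placeholder names); note that there is no clock anywhere, which is why the asynchronous/synchronous distinction does not come up:

reg q;
always @(en or d)
    if (en)
        q <= d;                      // transparent while en is high, holds its value otherwise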
By the way, this question is more suited to https://electronics.stackexchange.com/.
I know that incompletely assigning a value infers a latch. But why do I get a latch in the example below? I think there is no need for a latch on the F output, because it is assigned for all values of SEL.
Verilog code:
always @ (ENB or D or A or B or SEL)
    if (ENB)
    begin
        Q = D;
        if (SEL)
            F = A;
        else
            F = B;
    end
Inferred logic:
Although F is defined for all values of SEL, it is not defined for all values of ENB. If ENB = 0, your code says that both Q and F should hold their previous values. This is also what is inferred in the image you are linking: Q and F are only updated if ENB = 1.
If you want Q to be a latch and F not, you can do this:
always @ (ENB or D or A or B or SEL)
begin
    if (ENB)
        Q = D;
    if (SEL)
        F = A;
    else
        F = B;
end
Edit: additional information
As pointed out in the comments, I only showed how you could realize combinational logic and a latch, without modifying your code too much. There are, however, some things which could be done better. So, a non-TL;DR version:
Although it is possible to put combinational logic and latches in one procedural block, it is better to split them into two blocks. You are designing two different kinds of hardware, so it is also better to separate them in Verilog.
Use nonblocking assignments instead of blocking assignments when modeling latches. Clifford E. Cummings wrote an excellent paper on the difference between blocking and nonblocking assignments and why it is important to know the difference. I am also going to use this paper as source here: Nonblocking Assignments in Verilog Synthesis, Coding Styles That Kill!
First, it is important to understand what a race condition in Verilog is (Cummings):
A Verilog race condition occurs when two or more statements that are scheduled to execute in the same simulation time-step, would give different results when the order of statement execution is changed, as permitted by the IEEE Verilog Standard.
Simply put: always blocks may be executed in an arbitrary order, which could cause race conditions and thus unexpected behaviour.
To understand how to prevent this, it is important to understand the difference between blocking and nonblocking assignments. When you use a blocking assignment (=), the evaluation of the right-hand side (in your code A, B, and D) and assignment of the left-hand side (in your code Q and F) is done without interruption from any other Verilog statement (i.e., "it happens immediately"). When using a nonblocking assignment (<=), however, the left-hand side is only updated at the end of a timestep.
As you can imagine, the latter assignment type helps to prevent race conditions, because you know for sure at what moment the left-hand side of your assignment will be updated.
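The classic illustration from Cummings' paper (signal names adapted here) is two clocked always blocks passing a value along. With blocking assignments, the result depends on which block the simulator happens to run first; with nonblocking assignments, both blocks sample the old values and you reliably get a two-stage pipeline:

// Race-prone: q2 may see either the old or the new q1, depending on block ordering.
always @(posedge clk) q1 = d;
always @(posedge clk) q2 = q1;

// Race-free: right-hand sides are sampled first, left-hand sides update at the end
// of the time step, so q2 always gets the previous value of q1.
always @(posedge clk) q1 <= d;
always @(posedge clk) q2 <= q1;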
After an analysis of the matter, Cummings concludes, among other things, the following:
Guideline #1: When modeling sequential logic, use nonblocking assignments.
Guideline #2: When modeling latches, use nonblocking assignments.
Guideline #3: When modeling combinational logic with an always block, use blocking assignments.
A last point which I want to highlight from the aforementioned paper is the "why". Apart from the fact that you can be sure the right hardware is inferred, it also helps when correlating pre-synthesis simulations with the behaviour of your actual hardware:
But why? In general, the answer is simulation related. Ignoring the above guidelines [about using blocking or nonblocking assignments on page 2 of the paper] can still infer the correct synthesized logic, but the pre-synthesis simulation might not match the behavior of the synthesized circuit.
This last point is not possible if you want to strictly adhere to Verilog-2001, but if you are free to choose your (System)Verilog version, try to use always_comb for combinational logic and always_latch for latches. Both keywords automatically infer the sensitivity list, and they make it easier for tools to check whether you actually coded up the logic you intended to design.
Quoting from the SystemVerilog LRM:
The always_latch construct is identical to the always_comb construct except that software tools should perform additional checks and warn if the behavior in an always_latch construct does not represent latched logic, whereas in an always_comb construct, tools should check and warn if the behavior does not represent combinational logic.
With these tips, your logic would look like this:
always_latch
begin
    if (ENB)
        Q <= D;
end

always_comb
begin
    if (SEL)
        F = A;
    else
        F = B;
end
Part 1:
I was always told to use functions in Verilog to avoid code duplication. But can't I do that with a module? If my understanding is correct, every function can be rewritten as a module in Verilog, except that modules cannot be instantiated from inside an always block. Apart from that case, I can always stick with modules. Am I correct?
Part 2:
If I am correct, why can't the Verilog compiler be written in such a way that modules get the same treatment as functions? I mean, why can't the compiler allow the programmer to instantiate a module inside an always block and stop supporting functions?
module != function. Their usage in Verilog is completely different.
Functions are essentially extended expressions and are used within expressions. A function call can appear in the right-hand side of an assign statement or in expressions inside any procedural block.
Functions cannot consume time.
Functions must return a value.
Modules are used to express hardware hierarchy and contain concurrent procedural blocks (which might contain functions).
Modules may consume time.
Modules cannot return a value (output ports are not return values).
Potentially, you can take a function that replaces the internals of a single always block and wrap it in an equivalent module whose always block just calls that function; a sketch of the difference is below. But that is about it.
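A rough sketch of that point (invert, inverter, and the signal names are all invented): the function version can sit inside an expression, while the module version can only be instantiated and wired up at the module level:

// Function: usable wherever an expression is allowed.
function [7:0] invert;
    input [7:0] a;
    invert = ~a;
endfunction

assign y = invert(x) + 8'd1;         // called inside a larger expression

// Equivalent module (defined elsewhere): its "result" is just an output port.
module inverter (input [7:0] a, output [7:0] y);
    assign y = ~a;
endmodule

// ...and it can only be used by instantiating it, never inside an expression:
inverter u_inv (.a(x), .y(y_inv));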
You are not correct :). A Verilog compiler cannot be written in such a way, because there is a Verilog standard which every Verilog compiler must follow. Otherwise it would not be Verilog.
I'm by no means a Verilog expert, and I was wondering if someone knew which of these ways to increment a value was better. Sorry if this is too simple a question.
Way A:
In a combinational logic block, probably in a state machine:
//some condition
count_next = count + 1;
And then somewhere in a sequential block:
count <= count_next;
Or Way B:
Combinational block:
//some condition
count_en = 1;
Sequential block:
if (count_en == 1)
    count <= count + 1;
I have seen Way A more often. One potential benefit of Way B is that if you are incrementing the same variable in many places in your state machine, perhaps it would use only one adder instead of many; or is that false?
Which method is preferred and why? Do either have a significant drawback?
Thank you.
One potential benefit of Way B is that if you are incrementing the same variable in many places in your state machine, perhaps it would use only one adder instead of many; or is that false?
Any synthesis tool will attempt automatic resource sharing. How well they do so depends on the tool and code written. Here is a document that describes some features of Design Compiler. Notice that in some cases, less area means worse timing.
Which method is preferred and why? Do either have a significant drawback?
It depends. Verilog (for synthesis) is a means to implement some logic circuit, but the spec does not specify exactly how this is done. Way A may synthesize to the same thing as Way B on an FPGA, but Way A is not consistent with low-power design on an ASIC due to the unconditional sequential assignment. Using reset nets is almost a requirement on an ASIC, but since many FPGAs start in a known state, you can save quite a bit of resources by not having them.
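To make the reset point concrete, here is a hedged sketch (clk, rst_n, d, q, and q2 are placeholder names): the first flop is what an ASIC flow typically wants, while the second relies on the FPGA's known power-up state and saves the reset routing and logic:

// Flop with an asynchronous active-low reset, as commonly required on an ASIC.
always @(posedge clk or negedge rst_n)
    if (!rst_n)
        q <= 1'b0;
    else
        q <= d;

// Flop without a reset; on many FPGAs the register simply powers up to a known value.
always @(posedge clk)
    q2 <= d;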
I use Way A in my Verilog code. My sequential blocks have almost no logic in them; they just assign registers based on the values of the "wire regs" computed in the combinational always blocks. There is just less to go wrong this way. And with Verilog we need all the help we can get.
What is your definition of "better" ?
It can be better performance (faster maximum frequency of the synthesized circuit), smaller area (less logic gates), or faster simulation execution.
Let's consider the smaller-area case for Xilinx and Altera FPGAs. Registers in those FPGA families have an enable input. In your "Way B", count_en will be mapped directly onto that register enable input, which will result in fewer logic gates. Essentially, "Way B" gives the synthesis tool more "hints" on how to synthesize the circuit well. That said, it is quite possible that most FPGA synthesis tools (I'm talking about Xilinx XST, Altera MAP, Mentor Precision, and Synopsys Synplify) will correctly infer a register enable input from "Way A" as well.
If count_en is synthesized as a register enable input, that will also result in better performance of the circuit, because your counter increment logic will have fewer logic levels.
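To make the mapping concrete, here is a fuller sketch of "Way B" (state and RUN are made-up names): the combinational block only computes the enable, and the sequential block increments when the enable is set, which is exactly the pattern that maps onto a register's enable pin:

always @* begin
    count_en = 1'b0;
    if (state == RUN)                // "some condition"
        count_en = 1'b1;
end

always @(posedge clk)
    if (count_en)
        count <= count + 1'b1;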
Can you say what is the meaning of
always @ *
Are there any possible side effects of using that statement? Thanks.
It's just a shortcut for listing all of the wires that the always block depends on. Those wires are the "sensitivity list". One advantage of using it is that synthesized code is unlikely to care what you put in the sensitivity list (other than posedge and negedge) because the wires will be "physically" connected together. A simulator might rely on the list to choose which events should cause the block to execute. If you change the block and forget to update the list your simulation might diverge from the actual synthesized behavior.
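A small sketch of that divergence (y1, y2, a, and b are invented names): a hand-written list that is missing a signal only affects simulation, because synthesis builds the gates from the assignment itself:

always @(a)          // stale list: simulation only re-evaluates when 'a' changes,
    y1 = a | b;      // but synthesis still builds (a | b), so sim and gates disagree

always @*            // equivalent to always @(a or b); simulation matches the gates
    y2 = a | b;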
In SystemVerilog, we would prefer that you use always_comb begin...end instead of always @*.
The big drawback with always @* is that when some of your combinational logic only involves constants, the always @* may not trigger at time 0; it needs to see a signal change to trigger. always_comb is guaranteed to trigger at least once at time 0.
Another benefit of always_comb is that it in-lines function calls. If you call a function, and the body of the function references a signal not passed as an argument, always @* will not be sensitive to that signal.
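A short sketch of that last point (pick, mode, data, y1, and y2 are invented names): the function reads mode, which is not an argument, so an always @* block calling it will not wake up when only mode changes, whereas always_comb will:

function logic [7:0] pick(input logic [7:0] a);
    return mode ? a : ~a;            // reads 'mode', which is not passed as an argument
endfunction

always @*   y1 = pick(data);         // not re-evaluated when only 'mode' changes
always_comb y2 = pick(data);         // re-evaluated on 'mode' too, because the call is in-lined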
@Ben Jackson answered correctly. The answer to the second part is that there are no possible side effects; I consider this a recommended practice for combinational logic.