Do we have Asynchronous and Synchronous Latches in Verilog? - verilog

As we know, there are two types of 'flops', namely those with an asynchronous reset and those with a synchronous reset.
Similarly, do we have 'latches' of asynchronous and synchronous types?
If yes, how do we model them in Verilog code?

The terms asynchronous and synchronous are relative to a clock or some other synchronizing signal. A latch only has an enable or load signal, so there is nothing for it to be synchronized to, and those terms do not apply.
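For reference, the latch itself is straightforward to model. Here is a minimal sketch (module and signal names are illustrative): the first latch is transparent while en is high, and the second adds a clear input, which, in the absence of a clock, is simply another level-sensitive condition rather than an "asynchronous" one.
module d_latch (
  input  wire d,
  input  wire en,
  output reg  q
);
  always @* begin
    if (en)
      q = d;      // transparent: q follows d while en is high
    // no else branch: q holds its value, which is what infers a latch
  end
endmodule

module d_latch_clr (
  input  wire d,
  input  wire en,
  input  wire clr_n,
  output reg  q
);
  always @* begin
    if (!clr_n)
      q = 1'b0;   // clear dominates, but it is still just a level-sensitive input
    else if (en)
      q = d;
  end
endmodule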
By the way, this question is more suited to https://electronics.stackexchange.com/.

Related

What is the difference between structural Verilog and behavioural Verilog?

As in the title, what are the main differences between structural and behavioural Verilog?
There is no strict definition of these terms, according to the IEEE Std. However, customarily, structural refers to describing a design using module instances (especially for the lower-level building blocks such as AND gates and flip-flops), whereas behavioral refers to describing a design using always blocks.
Gate netlists are always structural, and RTL code is typically behavioral. It is common for RTL to have instances of clock gates and synchronizer cells.
Structural
Here, functions are defined using basic components such as an inverter, a MUX, an adder, a decoder, basic digital logic gates, etc. It is just like connecting and arranging the available circuit parts to implement a function.
Behavioral
The behavioral description in Verilog is used to describe the function of a design in an algorithmic manner. Behavioral modeling in Verilog uses constructs similar to C language constructs. Further, it is divided into two sub-categories.
(a) Continuous
The assignment of data to outputs is continuous. This is implemented using explicit "assign" statements or by assigning a value to a wire at its declaration. With assign, any change on the inputs immediately affects the output; hence the output must be declared as a wire (see the combined sketch after (b) below).
(b) Procedural
Here the data assignments are not carried out continuously; instead they happen on the specific events given in the sensitivity list. This modeling scheme is implemented using procedural blocks such as "always" or "initial". Output variables must be declared as reg because they need to hold their previous value until a new assignment occurs after a change in the sensitivity list.
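For illustration, here is a minimal sketch of the same 2-to-1 mux written in both styles (names are illustrative): the continuous version uses assign driving a wire, and the procedural version uses an always block driving a reg.
module mux2 (
  input  wire a, b, sel,
  output wire y_cont,
  output reg  y_proc
);
  // Continuous: assign drives a wire, re-evaluated whenever any input changes
  assign y_cont = sel ? b : a;

  // Procedural: an always block drives a reg, triggered by its sensitivity list
  always @(a or b or sel) begin
    if (sel)
      y_proc = b;
    else
      y_proc = a;
  end
endmodule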
Hope this helps :)
Structural Verilog usually refers to Verilog code that is synthesizable (it has an accurate and meaningful hardware realization) and is typically written at the Register Transfer Level (RTL).
Behavioral Verilog, on the other hand, is usually a higher-level description of hardware or of functionality. Behavioral code does not have to be synthesizable: for example, when you define a delay in your Verilog code scaled by the timescale, the synthesizer does not consider it when translating your code into logic and hardware; it exists for simulation purposes.
The same goes for structural and behavioral VHDL.
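As a small example of the latter point, a testbench fragment like the following (names are illustrative) simulates fine but has no hardware meaning; the #-delays are simulation-only and are ignored or rejected by synthesis:
`timescale 1ns/1ps
module tb;
  reg clk = 0;
  reg rst_n;

  always #5 clk = ~clk;     // 10 ns clock period: simulation-only timing

  initial begin
    rst_n = 0;
    #20 rst_n = 1;          // release reset after 20 ns
    #200 $finish;           // delays like these exist only for simulation
  end
endmodule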
Behavioral modeling does not describe individual logic gates; you use the operators that are already defined in Verilog (& / | / ~),
while structural modeling does describe the gates: you instantiate a module or primitive (and/or/not) that performs the corresponding operation (& / | / ~).
In simple words, structural Verilog deals with primitives such as and, or, not, etc.
The primitives are instantiated (inferred from libraries) and connected through input and output ports.
Example
module structural (y, a, b);
  input  a, b;
  output y;
  and a1 (y, a, b); // "and" is the primitive being instantiated and a1 is the instance name
endmodule
Behavioral Verilog deals with the logic or behavior of a system. It handles complex logic implementation, which is why in industry everyone implements the behavioral model of the system, known as the RTL. Once the behavioral RTL has been validated by front-end engineers using SV/UVM, it is synthesized into a gate-level (i.e. structural) netlist.
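For comparison, a behavioral version of the same AND function could be sketched as follows (illustrative only); synthesis would later map this back onto gate-level primitives:
module behavioral (y, a, b);
  input  a, b;
  output reg y;
  always @(a or b)
    y = a & b;   // the behavior is described; the gates are inferred by synthesis
endmodule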
Please refer to the Verilog book by Samir Palnitkar for more details.
Verilog is both a behavioral and a structural language. Internals of each module can be defined at four levels of abstraction, depending on the needs of the design.
Structural Verilog describes how a module is composed of simpler modules or of basic primitives such as gates or transistors. Behavioral Verilog describes how the outputs are computed as functions of the inputs.
Behavioral level
-> This is the highest level of abstraction provided by Verilog HDL. It is mainly constructed using "always" and "initial" blocks.
Dataflow level
-> At this level, the module is designed by specifying the data flow; conditions are described using the "assign" keyword.
Gate level
-> The module is implemented in terms of logic gates and the interconnections between these gates.
Switch level
-> This is the lowest level of abstraction provided by Verilog. A module can be implemented in terms of switches, storage nodes, and the interconnections between them.
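As a small illustration of the lowest level, here is a sketch of an inverter built from Verilog's MOS switch primitives (a classic textbook example; names are illustrative):
module inv_switch (output y, input a);
  supply1 vdd;            // constant logic 1 (power rail)
  supply0 gnd;            // constant logic 0 (ground rail)
  pmos p1 (y, vdd, a);    // conducts when a is 0, pulling y high
  nmos n1 (y, gnd, a);    // conducts when a is 1, pulling y low
endmodule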

Why are nonblocking assignments not allowed in Verilog functions?

I have read that use of nonblocking assignments is not allowed in Verilog functions. Can anyone suggest a plausible explanation for this?
The IEEE Std for Verilog (1364-2001), section "10.3.4 Function rules" states:
A function shall not have any nonblocking assignments.
The 1800-2009 IEEE Std elaborates more on this:
Functions shall execute with no delay. Thus, a process calling a
function shall return immediately. Statements that do not block shall
be allowed inside a function; specifically, nonblocking assignments,
event triggers, clocking drives, and fork-join_none constructs shall
be allowed inside a function.
The intention was for functions to be simple to evaluate in the Verilog event queue. If you need to advance time, use a task instead of a function.
Try not to think about functions in Verilog like functions in C:
Functions in Verilog are designed to be a developer-friendly way to instantiate identical combinational logic in multiple places at once rather than having to write it over again / make a module for it. A lot of "newbies" to Verilog try to rationalize functions like they are C functions, and while they are "returning" a value, it is easier (and more correct) in the end to conceptualize them as blocks of combinational gates.
Note that this is different from a "task", which is more generally used for executing things "in order" and would probably be more useful in a testbench situation than a function.
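To make the contrast concrete, here is a small hedged sketch (names are illustrative): the function behaves like a block of combinational gates and is legal in a continuous assignment, while the task can consume simulation time and suits testbench sequencing.
module func_vs_task_demo;
  reg  [7:0] data;
  wire       parity;

  // Function: zero simulation time, like a block of combinational gates
  function automatic calc_parity (input [7:0] d);
    calc_parity = ^d;        // reduction XOR
  endfunction

  assign parity = calc_parity(data);   // legal in a continuous assignment

  // Task: may consume time, so it fits testbench-style sequencing
  task drive_data (input [7:0] value);
    begin
      data = value;
      #10;                   // delays are allowed in tasks, not in functions
    end
  endtask

  initial begin
    drive_data(8'hA5);
    drive_data(8'h3C);
    $finish;
  end
endmodule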
As you learn Verilog try not to rationalize the HDL you write as "code", because it is a different style of thinking.
EDIT: Took out some bad explanation on my part

"Wait-free" data in Haskell

I've been led to believe that the GHC implementation of TVars is lock-free, but not wait-free. Are there any implementations that are wait-free (e.g. a package on Hackage)?
Wait-freedom is a term from distributed computing. An algorithm is wait-free if a thread (or distributed node) is able to terminate correctly even if all input from other threads is delayed/lost at any time.
If you care about consistency, then you cannot guarantee wait-freedom (assuming that you always want to terminate correctly, i.e. guarantee availability). This follows from the CAP theorem [1], since wait-freedom essentially implies partition-tolerance.
[1] http://en.wikipedia.org/wiki/CAP_theorem
Your question "Are there any implementations that are wait-free?" is a bit incomplete. STM (and thus TVar) is rather complex and has support built into the compiler - you can't build it properly with Haskell primitives.
If you're looking for any data container that allows mutation and can be non-blocking, then you want IORef or MVar (though MVar operations can block when no value is available).

Verilog Best Practice - Incrementing a variable

I'm by no means a Verilog expert, and I was wondering if someone knew which of these ways to increment a value was better. Sorry if this is too simple a question.
Way A:
In a combinational logic block, probably in a state machine:
//some condition
count_next = count + 1;
And then somewhere in a sequential block:
count <= count_next;
Or Way B:
Combinational block:
//some condition
count_en = 1;
Sequential block:
if (count_en == 1)
count <= count + 1;
I have seen Way A more often. One potential benefit of Way B is that if you are incrementing the same variable in many places in your state machine, perhaps it would use only one adder instead of many; or is that false?
Which method is preferred and why? Do either have a significant drawback?
Thank you.
One potential benefit of Way B is that if you are incrementing the same variable in many places in your state machine, perhaps it would use only one adder instead of many; or is that false?
Any synthesis tool will attempt automatic resource sharing. How well it does so depends on the tool and the code written. Here is a document that describes some features of Design Compiler. Notice that in some cases, less area means worse timing.
Which method is preferred and why? Do either have a significant drawback?
It depends. Verilog (for synthesis) is a means to implement some logic circuit, but the spec does not specify exactly how this is done. Way A may be the same as Way B on an FPGA, but Way A is not consistent with low-power design on an ASIC due to the unconditional sequential assignment. Using reset nets is almost a requirement on an ASIC, but since many FPGAs start up in a known state, you can save quite a bit of resources by not having them.
I use Way A in my Verilog code. My sequential blocks have almost no logic in them; they just assign registers based on the values of the "wire regs" computed in the combinational always blocks. There is just less to go wrong this way. And with Verilog we need all the help we can get.
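For what it's worth, a minimal sketch of Way A might look like this (the state names, widths, clk, and state register are illustrative assumptions, not from the question):
reg [7:0] count, count_next;

// Combinational block: compute the next value
always @* begin
  count_next = count;               // default: hold the current value
  case (state)
    COUNTING: count_next = count + 1;
    CLEARING: count_next = 8'd0;
    default:  ;                     // hold in every other state
  endcase
end

// Sequential block: nothing but the register update
always @(posedge clk)
  count <= count_next;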
What is your definition of "better"?
It can be better performance (a faster maximum frequency of the synthesized circuit), smaller area (fewer logic gates), or faster simulation execution.
Let's consider the smaller-area case for Xilinx and Altera FPGAs. Registers in those FPGA families have an enable input. In your "Way B", count_en will be mapped directly onto that register enable input, which will result in fewer logic gates. Essentially, "Way B" provides more "hints" to the synthesis tool about how to better synthesize the circuit. That said, it's also possible that most FPGA synthesis tools (I'm talking about Xilinx XST, Altera MAP, Mentor Precision, and Synopsys Synplify) will correctly infer the register enable input from "Way A".
If count_en is synthesized as the register enable input, that will result in better performance of the circuit, because your counter increment logic will have fewer logic levels.
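A matching sketch of Way B, again with illustrative names, where count_en is intended to land on the flip-flops' enable pins:
reg [7:0] count;
reg       count_en;

// Combinational block: only decide whether to count
always @* begin
  count_en = (state == COUNTING);   // "some condition"
end

// Sequential block: the enable gates the increment
always @(posedge clk)
  if (count_en)
    count <= count + 1;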
Thanks

using always@* | meaning and drawbacks

Can you say what is the meaning of
always @*
Are there any possible side effects of using that statement?
It's just a shortcut for listing all of the wires that the always block depends on. Those wires are the "sensitivity list". One advantage of using it is that synthesized code is unlikely to care what you put in the sensitivity list (other than posedge and negedge) because the wires will be "physically" connected together. A simulator might rely on the list to choose which events should cause the block to execute. If you change the block and forget to update the list your simulation might diverge from the actual synthesized behavior.
In SystemVerilog, we would prefer that you use always_comb begin...end instead of always @*.
The big drawback with always @* is that when some of your combinatorial logic involves constants, the always @* may not trigger at time 0; it needs to see a signal change to trigger. always_comb is guaranteed to trigger at time 0 at least once.
Another benefit of always_comb is that it effectively in-lines function calls for sensitivity purposes. If you call a function, and the body of the function references a signal not passed as an argument, always @* will not be sensitive to that signal, but always_comb will.
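A small sketch of that pitfall (illustrative names; SystemVerilog for the always_comb part):
reg mode;              // read inside the function but not passed as an argument
reg a, y1, y2;

function use_mode (input x);
  use_mode = mode ? ~x : x;
endfunction

always @*              // implicit sensitivity covers 'a' only; a change on
  y1 = use_mode(a);    // 'mode' alone will not re-trigger this block

always_comb            // also sensitive to 'mode' referenced inside the function
  y2 = use_mode(a);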
@Ben Jackson answered correctly. The answer to the second part is that there are no possible side effects; I consider this a recommended practice for combinatorial logic.
