What is the benefit of using the DesignWare multiplier library versus a custom multiplier? - verilog

As far as I know, the DesignWare library includes multiplier cells, but I don't know what the benefit of using the DesignWare multiplier library is compared to a custom multiplier (e.g. one based on the Booth algorithm).
Does anyone know how they differ, and the pros and cons of each?

Generally, using an IP block from a vendor- or device-specific library is beneficial in the sense that it is implemented with knowledge of the device's special function cells, which might include a dedicated multiplier, divider, ROM, or whatever else is not a standard logic cell. So, basically, complex logic can be implemented without "wasting" standard cells, preserving them for other things.
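As a concrete illustration, often you don't even need to instantiate the library cell by hand: writing the * operator lets a tool such as Design Compiler map the operation onto a DesignWare component (e.g. DW02_mult) and pick an architecture that meets your constraints. A minimal sketch, with illustrative module and parameter names:

module signed_mult #(parameter W = 16) (
    input  wire signed [W-1:0]   a,
    input  wire signed [W-1:0]   b,
    output wire signed [2*W-1:0] p
);
    // The synthesis tool can map this operator to a library multiplier
    // and choose an architecture (carry-save, Booth, ...) for the timing
    // and area constraints, instead of a hand-coded structure.
    assign p = a * b;
endmodule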

Related

How do Verilog Compilers Interpret Addition

I know that Verilog has an arithmetic add operator. If I'm building an adder, should I make my own or use that? Which will perform better in my processor?
For simulation, the add operator will behave according to the standard and should be fine to use unless you have a reason to simulate a specific adder implementation.
For synthesis, what you get depends on your synthesis tool and final hardware platform. For example, FPGAs usually have dedicated logic for adds and using the Verilog add operator should take advantage of that automatically.
If you need extreme performance on your hardware, it's possible you could do better by using the available primitives directly. Though add is a very common operation and synthesis should be able to handle it well for most use cases.
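For example, here is a minimal sketch of inferring an adder with the + operator (names are illustrative); on an FPGA the tool will typically map this onto the dedicated carry chains automatically:

module adder #(parameter W = 32) (
    input  wire [W-1:0] a,
    input  wire [W-1:0] b,
    output wire [W:0]   sum  // one extra bit to hold the carry out
);
    // Synthesis infers a device-appropriate adder from the operator.
    assign sum = a + b;
endmodule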

Pass parameter during instantiation of IP core in Vivado

Although it seems impossible, based on my research:
Passing parameter to xci core
I am designing a custom core which uses an instance of a Xilinx FIFO. However, the top module has parameters which are exposed in the IP Packager and should also configure the included FIFO core.
module top ();
    parameter C_FIFO_DEPTH = 256;

    xilinx_fifo_core #(
        .FIFO_DEPTH(C_FIFO_DEPTH)
    ) my_fifo_instance (...);
endmodule
This way, when someone instantiates my module, by overriding parameter C_FIFO_DEPTH, they also change the embedded FIFO's depth.
Although this would work for user-written modules, it doesn't work for instances of IP cores (.xci), which seem to be configurable only through the "Customize IP" GUI.
I have disabled Out-of-context generation, but still no dice.
I am currently working on a (very messy) solution using Tcl scripts in the packaged core; however, an elegant solution is desperately needed.
You can do this with the XPM_FIFO_xxx cores. Look in the UG953 Libraries Guide for docs and examples. You can also do it for RAM with XPM_MEMORY_xxx.
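As a hedged sketch, an XPM FIFO can take its depth straight from the wrapper's parameter, so no GUI customization is involved. Parameter and port names below follow UG953 (only a subset is shown; the widths and module name are illustrative):

module my_core #(
    parameter C_FIFO_DEPTH = 256,
    parameter C_DATA_W     = 32
) (
    input  wire                clk,
    input  wire                rst,
    input  wire                wr_en,
    input  wire [C_DATA_W-1:0] din,
    output wire                full,
    input  wire                rd_en,
    output wire [C_DATA_W-1:0] dout,
    output wire                empty
);
    // The XPM macro is configured directly from the top-level parameter,
    // so overriding C_FIFO_DEPTH resizes the embedded FIFO.
    xpm_fifo_sync #(
        .FIFO_WRITE_DEPTH (C_FIFO_DEPTH),
        .WRITE_DATA_WIDTH (C_DATA_W),
        .READ_DATA_WIDTH  (C_DATA_W)
    ) fifo_i (
        .wr_clk        (clk),
        .rst           (rst),
        .wr_en         (wr_en),
        .din           (din),
        .full          (full),
        .rd_en         (rd_en),
        .dout          (dout),
        .empty         (empty),
        .sleep         (1'b0),
        .injectsbiterr (1'b0),
        .injectdbiterr (1'b0)
        // remaining status outputs left unconnected; see UG953
    );
endmodule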
I can't think of any elegant solution, but here are three more messy ones:
(1) just use the largest FIFO you'll ever need. (Clearly likely to be a waste of area.)
(2) create a range of FIFOs of different sizes and use a generate case to choose the right one, as in the sketch after this list. (Only any good if the range of useful sizes is reasonably limited.)
(3) don't use an IP block - design your own FIFO. (You probably thought of that.)
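A hedged sketch of option (2): fifo_256, fifo_1024, and fifo_4096 stand for hypothetical XCI cores customized to fixed depths in the GUI, and the port list is illustrative. Since C_FIFO_DEPTH is a parameter, only the selected branch is elaborated:

generate
    case (C_FIFO_DEPTH)
        256: begin : g_fifo
            fifo_256  u_fifo (.clk(clk), .din(din), .dout(dout));
        end
        1024: begin : g_fifo
            fifo_1024 u_fifo (.clk(clk), .din(din), .dout(dout));
        end
        default: begin : g_fifo
            fifo_4096 u_fifo (.clk(clk), .din(din), .dout(dout));
        end
    endcase
endgenerate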

What SystemVerilog features should be avoided in synthesis?

SystemVerilog introduced some very useful constructs to improve coding style. However, as one of my coworkers always says, "You are not writing software, you are describing hardware." With that in mind, what features of the language should be avoided when the end result needs to be synthesized? This paper shows what features are currently synthesizable by the Synopsys tools, but to be safe I think one should only use the features that are synthesizable by all of the major vendors. Also, what constructs will produce strange results in the netlist which will be difficult to follow in an ECO?
In summary: I like compact and easy to maintain code, but not if it causes issues in the back end. What should I avoid?
Edit: In response to the close vote I want to try to make this a bit more specific. This question was inspired by this answer. I am a big fan of using the 'sugar' as Dave calls it to reduce the code complexity, but not if some synthesis tools are going to mangle signal names and make the result difficult to deal with. I am looking for more examples like this.
Theoretically, if you can write software that is compiled into machine code to run on a piece of hardware, that software can be synthesized into hardware. Conversely, there are hardware constructs in Verilog-1995 that are not considered synthesizable simply because none of the major vendors ever got around to supporting them (e.g. assign/deassign). We still have people using // synopsys translate_off/on because it took the tools so long to support `ifdef SYNOPSYS.
Most of what I consider to be safe for synthesis in SystemVerilog is what I call syntactic sugar for Verilog. This is just more convenient ways of writing the same Verilog code with a lot less typing. Examples would be:
data types: typedef, struct, enum, int, byte
use of those types as ports, arguments and function return values
assignment operators: ++ -- +=
type casting and bit-streaming
packages
interfaces
port connection shortcuts
defaults for function/task/macro arguments, and port connections
Most of the constructs that fall into this category are taken from C and don't really change how the code gets synthesized. It's just more convenient to define and reference signals.
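For instance, here is a minimal sketch of this kind of sugar (types and names are illustrative); each construct maps onto plain Verilog the tool already knows how to synthesize:

typedef enum logic [1:0] {IDLE, RUN, DONE} state_t;  // named states, not bare parameters

typedef struct packed {
    logic       valid;
    logic [7:0] data;
} beat_t;

module sugar_demo (
    input  beat_t      in_beat,  // struct used directly as a port
    output logic [7:0] sum
);
    always_comb begin
        sum = '0;
        if (in_beat.valid)
            sum += in_beat.data;  // C-style assignment operator
    end
endmodule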
Where it gets difficult to synthesize is dynamically allocated storage: class objects, queues, dynamic arrays, and strings, as well as dynamically created processes with fork/join.
I think some people have the misconception that SystemVerilog is only for verification, when in fact the first version of the standard was the synthesizable subset, and Intel was one of the first users of it as a design language.
SystemVerilog (SV) can be used both as an HDL (Hardware Description Language) and an HVL (Hardware Verification Language), which is why it is often termed an "HDVL".
There are several interesting design constructs in SV which are synthesizable and can be used instead of older Verilog constructs; they are helpful in optimizing code and achieving faster results.
enum in SV vs. parameter in Verilog when modelling FSMs.
Use of logic instead of reg and wire.
Use of always_ff, always_comb, and always_latch in place of plain always blocks in Verilog.
Use of unique and priority case statements instead of Verilog's full_case and parallel_case pragmas.
The wide range of data types available in SV.
What I have discussed above are the SystemVerilog constructs used in RTL design; a small FSM sketch combining several of them follows.
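A minimal sketch (module and signal names are illustrative):

module fsm_demo (
    input  logic clk,
    input  logic rst,
    input  logic go,
    output logic busy
);
    typedef enum logic [1:0] {IDLE, WORK, DONE} state_t;  // enum instead of parameters
    state_t state, next;

    // always_ff documents (and lets tools check) sequential intent.
    always_ff @(posedge clk or posedge rst)
        if (rst) state <= IDLE;
        else     state <= next;

    // always_comb documents combinational intent.
    always_comb begin
        next = state;
        unique case (state)  // unique replaces the parallel_case pragma
            IDLE: if (go) next = WORK;
            WORK: next = DONE;
            DONE: next = IDLE;
        endcase
    end

    assign busy = (state != IDLE);
endmodule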
However, the constructs used in the verification environment are non-synthesizable. They are as follows:
Dynamic arrays and associative arrays.
Program Blocks and Clocking blocks.
Mailboxes
Semaphores
Classes and all their related features.
Tasks containing delays or event controls.
The chandle data type.
Queues.
Constrained random features.
Delay, wait, and event control statements.

Any benefits from implementing a CSA versus just using the multiplication operator when synthesizing?

I am synthesizing some multiplication units in Verilog and I was wondering whether you generally get better results, in terms of area/power savings, if you implement your own CSA using Booth encoding for multiplying, or if you just use the * operator and let the synthesis tool take care of the problem for you?
Thank you!
Generally, I tend to trust the compiler tools I use and don't fret so much about the results as long as they meet my timing and area budgets.
That said, with multipliers that need to run at high speed, I find I get better results (in DC, at least) if I create a Verilog module containing the multiply (*) and a retiming register or two, and push down into this module to synthesise it before popping up to top-level synthesis. It seems as if the compiler gets 'distracted' by other timing paths if you try to do everything at once, so making it focus on a multiplier that you know is going to be tricky seems to help.
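A minimal sketch of that structure, assuming the tool is allowed to retime (the parameter and register names are illustrative):

module mult_retimed #(parameter W = 16) (
    input  wire           clk,
    input  wire [W-1:0]   a,
    input  wire [W-1:0]   b,
    output reg  [2*W-1:0] p
);
    reg [2*W-1:0] p_raw;

    // Registers after the multiply give the tool slack to retime the
    // multiplier logic across the pipeline stages.
    always @(posedge clk) begin
        p_raw <= a * b;
        p     <= p_raw;
    end
endmodule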
You have this question tagged with "FPGA." If your target device is an FPGA, then it may be advisable to use the FPGA's multiplier megafunction (I don't remember what Xilinx calls it these days).
This way, you can be sure the tool utilizes whatever internal hardware structure you intend to use, irrespective of the synthesizer tool, and you will get an optimal solution that is also predictable from a timing and latency standpoint.
Additionally, you don't have to test it for all the corner cases, which is especially important if you are doing signed multiplication, whatever coding guidelines you follow.
I agree with @Marty in that I would use *. I have previously built my own low-power adder structures, which then ran into problems when the design shifted process or had to be run at a higher frequency. Hard-coded architectures like this remove quite a bit of portability from the code.
Using the directives is nice in trials to see the different sizes (area) of the architectures, but I leave the decision to the synthesis tool to make the best call based on the timing constraints and available area. I am not sure how power-aware the tools are by default; previously we ended up getting an extra license which added a lot of power-aware knowledge to the synthesis.

Why are there not more control structures in most programming languages?

Why do most languages seem to exhibit only fairly basic control structures from a logic point of view? Stuff like if ... then, else ..., loops, for each, switch statements, etc. The standard list seems fairly limited.
Why is there not much more in the way of logical syntactic sugar? Perhaps something like a proposition engine, where you could feed in an array of premises or functions that return complicated self-referential, interdependent functions and results. Something where you could chain together a complex array of conditions, but represented in a way that is easy and clear to read in the code.
Premise 1
Premise 2 if and only if Premise 1
Premise 3
Premise 4 if Premise 2 and Premise 3
Premise 5 if and only if Premise 4
etc...
Conclusion
I realize that this kind of logic can be constructed with functions and/or nested conditional statements. But why are there not generally more syntax options for structuring these kinds of logical propositions without ending up with hairy-looking conditional statements that are hard to read and debug?
Is there an explanation for the kinds of control structures we typically see in mainstream programming languages? Are there specific control structures you would like to see directly supported by a language's syntax? Does this just add unnecessary complexity to the language?
Have you looked at Prolog? A Prolog program is basically a set of rules that is turned into one big evaluation engine.
From my personal experience, Prolog is a bit too weird, and I actually prefer ifs, whiles, and so on, but YMMV.
Boolean algebra is not difficult, and provides a solution for any conditionals you can think of, plus an infinite number of other variants.
You might as well ask for special syntax for "commonly-used" arithmetic expressions. Who is to say what qualifies as commonly-used? And where do you stop adding special-case syntax?
Adding to the complexity of a language parser is not preferable to using constructive expression syntax, combined with extensibility through defining functions.
It's been a long time since my logic class in college, but I would guess it's a mixture of the difficulty of building them into the language versus the frequency with which they'd be used. I can't say I've ever had the need for them (not that I can recall). For those times when you would require something of that ilk, the language designers probably figure you can work out the logic yourself using just the basic structures.
Just my wild guess though.
Because most programming languages don't provide sufficient tools for users to implement such structures themselves, they are not seen as an important enough feature for the implementer to provide as an extension, and they aren't demanded or used enough to be added to the standard.
If you really want it, use a language that provides it, or provides the tools to implement it (for instance, lisp macros).
It sounds as though you are describing a rules engine.
The basic control constructs we use mirror what the processor can do efficiently. Basically, this boils down to simple test-and-branch instructions.
It may seem limiting to you, but many people don't like the idea of writing a simple-looking line of code that requires hundreds or thousands (or millions) of processor cycles to complete. Among these people are systems software folks, who write things like Operating Systems and compilers. Naturally most compilers are going to reflect their own writer's concerns.
It relates to the concern regarding atomicity. If you can express A, B, C, D in terms of simpler structures Y and Z, why supply A, B, C, D at all instead of just Y and Z?
The existing languages reflect 60 years of the tension between atomicity and usability. The modern approach is "small language, large libraries". (C#, Java, C++, etc).
Because computers are binary, all decisions must come down to a 1/0, yes/no, true/false, etc.
To be efficient, the language constructs must reflect this.
Eventually all your code comes down to microcode that is executed one instruction at a time. Until the microcode and accompanying CPU can describe something more colorful, we are stuck with a very plain language.
