I would like to generate some combinatorial logic using recursive functions (with predefined preprocessor arguments, of course).
Simple example: Factorial function
I have a reg [10:0] number; and I want the logic for computing its factorial, but I want some predefined variable msb to determine the MSB, so that number[msb:0] is the starting number, and go on from there.
The module would receive the number and call fact_func(number), which would calculate the factorial, but only of the shortened number.
Is something like this possible in Verilog? Have functions generate logic?
I'm not totally sure what you're asking, but building logic recursively is very much possible in Verilog. You'll need to build a parameterized module that contains a generate block instantiating the module itself with a progressively smaller (or larger) parameter value, until you reach the base case.
Here's an example of it in action. You could easily modify this to generate the Fibonacci sequence.
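A minimal sketch of that approach, assuming a parameterized factorial module that instantiates itself inside a generate block (the module name, parameter, and 32-bit result width are all illustrative):

// Recursive factorial: the module instantiates itself with N-1
// until the base case N <= 1 is reached.
module fact #(parameter N = 4) (output [31:0] result);
  generate
    if (N <= 1) begin : base_case
      assign result = 32'd1;
    end else begin : recurse
      wire [31:0] sub_result;
      fact #(.N(N - 1)) sub_fact (.result(sub_result));
      assign result = N * sub_result;
    end
  endgenerate
endmodule

Instantiating it as fact #(.N(5)) u_fact (.result(f)); should then optimize down to the constant 120.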
Keep in mind, however, that generating blocks this way can synthesize to very, very large circuits. You will need to make sure that the tools you're using succeed in optimizing it down to a reasonable size.
Say I have a Verilog module that's parameterizable like the below example:
// Crunches numbers using lots of parallel cores
module number_cruncher
#(parameter NUMBER_OF_PARALLEL_CORES = 4)
(input clock, ..., input [31:0] data, ... etc);
// Math happens here
endmodule
Using Verilog 1364-2005, I want to write a testbench that runs tests on this module with many different values of NUMBER_OF_PARALLEL_CORES.
One option that I know will work is to use a generate block to create a bunch of different number_crunchers with different values for NUMBER_OF_PARALLEL_CORES. This isn't very flexible, though - the values need to be chosen at compile time.
Of course, I could also explicitly instantiate a lot of different modules, but that is time consuming and won't work for the sort of "fuzz" testing I want to do.
My questions:
Is there a way to do this by using a plusarg passed in from the command line using $value$plusargs? (I strongly suspect the answer is 'no' for Verilog 1364-2005).
Is there another way to "fuzz" module parameterizations in a testbench, or is using a generate block the only way?
Since $value$plusargs is evaluated at runtime, it cannot be used to set parameter values, which must be set at compile time.
However, if you use generate to instantiate multiple instances of the design with different parameter settings, you might be able to use $value$plusargs to selectively activate or enable one instance at a time. For example, in the testbench, you could use the runtime argument to only drive the inputs of a specific instance.
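A minimal sketch of that approach (the plusarg name, instance count, and port list are illustrative, and the DUT's other ports are elided): every parameterization is elaborated at compile time, and a runtime plusarg selects which instance the testbench exercises and checks.

module tb;
  reg        clock = 0;
  reg [31:0] data;
  integer    cores_sel;

  always #5 clock = ~clock;

  // One DUT instance per parameter value: 1, 2, 4, 8 cores
  genvar i;
  generate
    for (i = 0; i < 4; i = i + 1) begin : duts
      number_cruncher #(.NUMBER_OF_PARALLEL_CORES(1 << i))
        dut (.clock(clock), .data(data) /* , ... */);
    end
  endgenerate

  initial begin
    // +CORES_SEL=<n> picks which instance to check at runtime
    if (!$value$plusargs("CORES_SEL=%d", cores_sel))
      cores_sel = 0;
    // Drive stimulus as usual, then compare against the outputs of the
    // selected instance (e.g. duts[0].dut ... duts[3].dut via a case).
  end
endmodule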
I need to generate random 1023-bit vectors in Verilog. I need something that I can put in the testbench to generate such vectors and feed them to the design under test.
As mentioned in the comments, concatenated calls to $random might serve your purpose, unless you really need to avoid the 32-bit periodicity (which might still be possible with $random, but I'm not sure of the best way to do it).
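A minimal sketch of that concatenation approach (the vector name and loop structure are illustrative): shift in 32 fresh bits from $random per iteration until all 1023 bits have been replaced.

reg [1022:0] vec;
integer i;
initial begin
  for (i = 0; i < 32; i = i + 1)
    vec = {vec[990:0], $random};  // keep 991 old bits, append 32 new random bits
end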
Better would be to use some SystemVerilog constructs, namely random classes:
class myVector;
  rand bit [1022:0] vec; // 1023 bits
endclass
...
myVector mv = new;          // mv is the class handle; mv.vec is the random payload
mv.randomize();
getRandomVector <= mv.vec;
mv.randomize();
getAnotherRandomVector <= mv.vec;
If you want to ensure no repeats across multiple calls to .randomize(), you can declare vec as randc instead (you can read more on the exact differences in IEEE 1800-2012 Ch. 18). You can also put constraints on the randomization to only get vectors of a certain form if you need that (like limiting the number of 1s, or anything like that).
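If you go the constraint route, a minimal sketch might look like this (the class name, constraint name, and the limit of 100 ones are illustrative):

class myConstrainedVector;
  rand bit [1022:0] vec;
  // Only generate vectors with at most 100 bits set
  constraint ones_limit { $countones(vec) <= 100; }
endclass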
Part 1:
I was always told to use functions in Verilog to avoid code duplication. But can't I do that with a module? If my understanding is correct, every function can be rewritten as a module in Verilog, except that modules cannot be instantiated from inside an always block. Apart from that case, I could always stick with modules. Am I correct?
Part 2:
If I am correct, why can't the Verilog compiler be written in such a way that modules get the same treatment as functions? I mean, why can't the compiler allow the programmer to instantiate a module inside an always block and stop supporting functions?
module != function. Their usage in Verilog is completely different.
Functions are essentially extended expressions and are used within expressions. A function call can appear in the right-hand-side expression of an assign statement or in expressions inside any procedural block.
Functions cannot consume time.
Functions must return a value.
Modules are used to express hardware hierarchy and contain concurrent procedural blocks (which might contain functions).
Modules may consume time.
Modules cannot return a value (output ports are not return values).
Potentially you can write a function that replaces the internals of a single always block and then write an equivalent module whose always block simply calls that function, as in the sketch below. But that is as far as the equivalence goes.
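A minimal sketch of that equivalence (module, function, and signal names are illustrative): the same combinational add written once as a function called from an always block and once as a stand-alone module.

// As a function: returns a value, consumes no time, callable inside an always block
module uses_function (input [7:0] a, b, output reg [8:0] y);
  function [8:0] sum9;
    input [7:0] x, z;
    sum9 = x + z;
  endfunction

  always @* y = sum9(a, b);
endmodule

// As a module: instantiated in the hierarchy, drives an output port instead of returning a value
module adder9 (input [7:0] a, b, output [8:0] y);
  assign y = a + b;
endmodule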
You are not correct :). A Verilog compiler cannot be written in such a way, because there is a Verilog standard which every Verilog compiler must follow. Otherwise it would not be Verilog.
I'm trying out the random number generation from the new <random> library in C++11 for a simple dice class. I'm not really grasping what actually happens, but the reference shows an easy example:
std::default_random_engine generator;
std::uniform_int_distribution<int> distribution(1,6);
int dice_roll = distribution(generator);
I read somewhere that with the "old" way, ideally you should only seed once (e.g. in the main function) in your application. However, I'd like an easily reusable dice class. So would it be okay to use this code block in a dice::roll() method, even though multiple dice objects are instantiated and destroyed multiple times in an application?
Currently I have made the generator a class member, and the last two lines are in the dice::roll() method. It looks okay, but before I compute statistics I thought I'd ask here...
Think of instantiating a pseudo-random number generator (PRNG) as digging a well - it's the overhead you have to go through to be able to get access to water. Generating instances of a pseudo-random number is like dipping into the well. Most people wouldn't dig a new well every time they want a drink of water, why invoke the unnecessary overhead of multiple instantiations to get additional pseudo-random numbers?
Beyond the unnecessary overhead, there's a statistical risk. The underlying implementations of PRNGs are deterministic functions that update some internally maintained state to generate the next value. The functions are very carefully crafted to give a sequence of uncorrelated (but not independent!) values. However, if the state of two or more PRNGs is initialized identically via seeding, they will produce the exact same sequences. If the seeding is based on the clock (a common default), PRNGs initialized within the same tick of the clock will produce identical results. If your statistical results have independence as a requirement then you're hosed.
Unless you really know what you're doing and are trying to use correlation induction strategies for variance reduction, best practice is to use a single instantiation of a PRNG and keep going back to it for additional values.
I'm by no means a Verilog expert, and I was wondering if someone knew which of these ways to increment a value was better. Sorry if this is too simple a question.
Way A:
In a combinational logic block, probably in a state machine:
//some condition
count_next = count + 1;
And then somewhere in a sequential block:
count <= count_next;
Or Way B:
Combinational block:
//some condition
count_en = 1;
Sequential block:
if (count_en == 1)
count <= count + 1;
I have seen Way A more often. One potential benefit of Way B is that if you are incrementing the same variable in many places in your state machine, perhaps it would use only one adder instead of many; or is that false?
Which method is preferred and why? Do either have a significant drawback?
Thank you.
One potential benefit of Way B is that if you are incrementing the same variable in many places in your state machine, perhaps it would use only one adder instead of many; or is that false?
Any synthesis tool will attempt automatic resource sharing. How well they do so depends on the tool and code written. Here is a document that describes some features of Design Compiler. Notice that in some cases, less area means worse timing.
Which method is preferred and why? Do either have a significant drawback?
It depends. Verilog (for synthesis) is a means to implement some logic circuit, but the spec does not specify exactly how this is done. Way A may be the same as Way B on an FPGA, but Way A is not consistent with low-power design on an ASIC due to the unconditional sequential assignment. Using reset nets is almost a requirement on an ASIC, but since many FPGAs start in a known state, you can save quite a bit of resources by not having them.
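To make the reset point concrete, here is a minimal sketch (names are illustrative), assuming an active-low asynchronous reset for the ASIC-style register and a power-up initial value for the FPGA-style one:

module reset_styles (input clock, input rst_n,
                     output reg [7:0] count_asic,
                     output reg [7:0] count_fpga);
  // ASIC style: an explicit reset net on the register
  always @(posedge clock or negedge rst_n)
    if (!rst_n) count_asic <= 8'd0;
    else        count_asic <= count_asic + 1'b1;

  // FPGA style: no reset logic; rely on the device's known start-up state
  initial count_fpga = 8'd0;
  always @(posedge clock)
    count_fpga <= count_fpga + 1'b1;
endmodule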
I use Way A in my Verilog code. My sequential blocks have almost no logic in them; they just assign registers based on the values of the "wire regs" computed in the combinational always blocks. There is just less to go wrong this way. And with Verilog we need all the help we can get.
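A minimal sketch of that split (names are illustrative): all decision logic lives in the combinational block, and the sequential block only registers the precomputed next value.

module counter_split (input clock, input some_condition,
                      output reg [7:0] count);
  reg [7:0] count_next;

  // Combinational block: compute the next value
  always @* begin
    count_next = count;              // default: hold the current value
    if (some_condition)
      count_next = count + 1'b1;     // increment only under the condition
  end

  // Sequential block: nothing but the register assignment
  always @(posedge clock)
    count <= count_next;
endmodule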
What is your definition of "better"?
It can be better performance (a faster maximum frequency for the synthesized circuit), smaller area (fewer logic gates), or faster simulation execution.
Let's consider the smaller-area case for Xilinx and Altera FPGAs. Registers in those FPGA families have an enable input. In your "Way B", count_en will be mapped directly onto that register enable input, which will result in fewer logic gates. Essentially, "Way B" gives the synthesis tool more "hints" about how to better synthesize that circuit. It's also possible that most FPGA synthesis tools (I'm talking about Xilinx XST, Altera MAP, Mentor Precision, and Synopsys Synplify) will correctly infer the register enable input from "Way A" as well.
If count_en is synthesized as a register enable input, that will result in better performance of the circuit, because your counter increment logic will have fewer logic levels.
Thanks