How to print to a single file from different modules in Verilog?

As you know, $fdisplay can print information to a file. But if you instantiate a module (such as a BFM, a bus functional model) several times in the testbench and each instance uses $fdisplay, a problem may occur: simultaneous access to a file.
In my experience that issue produces a messy, untidy output file.
So how can I achieve my goal?
The Python equivalent of my question is here.
P.S. The simulator's console has a limit on how much output it can hold and my logs are fairly long, so I need to print them to a file. Also, merging all the Verilog code into one file is not possible at all (think of how BFM models are structured).

If you want all the output from your BFMs to go into a single file, your problem is not with $fdisplay but with $fopen. You need to create a top level function that calls $fopen only if it has not been called before.
integer file = 0;

function integer bfm_fopen;
  begin
    if (file)
      bfm_fopen = file;
    else begin
      file = $fopen("logfile");
      bfm_fopen = file;
    end
  end
endfunction
Then call top_level.bfm_fopen from your BFMs
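For illustration, here is a minimal sketch of what a BFM might do with it. The module name my_bfm and the log message are made up, and the zero-argument call syntax assumes a simulator that accepts SystemVerilog-style function calls:

module my_bfm;                       // hypothetical BFM module
  integer log_fd;
  initial begin
    log_fd = top_level.bfm_fopen();  // every instance gets the same file descriptor
    $fdisplay(log_fd, "[%m] BFM started at time %0t", $time);
  end
endmodule

Because the function reuses the descriptor after the first call, all instances write into the same logfile.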

Related

Is there a Verilog function to iterate over multiple input files for validation?

I can open a file with
integer fd;
fd = $fopen ("file_to_open.txt", "r");
But if I have multiple files in a directory such as
f1.txt
f2.txt
f3.txt
How can I iterate over these files without explicitly defining their names in my Verilog testbench?
In bash, this would be done with:
for f in ./*.txt; do
...
done
Verilog does not have a way of directly interacting with the Operating System a simulation is running on, so it cannot expand file names.
Two alternatives you have in Verilog are:
writing some C code that does this and linking that code through the VPI (difficult)
creating a bash script that builds your list of files into a text file, then reading that list one line at a time with $fgets or $fscanf (cumbersome; a sketch follows below)
This would be much easier if you were using SystemVerilog, because it has string data types and a direct interface to C (the DPI).
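A rough sketch of the second alternative, staying in plain Verilog. The file names, the 64-character limit, and the pre-simulation step (for example, running ls *.txt > file_list.txt in bash before the simulation) are all illustrative, and passing a reg vector to $fopen relies on the simulator ignoring the unused upper bits of fname, which most do:

integer list_fd, data_fd, code;
reg [8*64-1:0] fname;                 // holds one file name, up to 64 characters

initial begin
  list_fd = $fopen("file_list.txt", "r");
  if (!list_fd) $display("could not open file_list.txt");
  while (!$feof(list_fd)) begin
    code = $fscanf(list_fd, "%s", fname);   // read one file name per line
    if (code == 1) begin
      data_fd = $fopen(fname, "r");
      // ... read and validate the contents of data_fd here ...
      $fclose(data_fd);
    end
  end
  $fclose(list_fd);
end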

Attempting to append all content into a file, but only the last iteration fills the text document

I'm trying to create a file and append all of the content being calculated to that file, but when I run the script only the very last iteration is written to the file and nothing else.
My code is on Pastebin; it's too long to post here, and I feel you would have to see exactly how the iteration is happening.
To summarize it: go through an array of model numbers; if the model number matches, call the function that calculates that MAC_ADDRESS; when the calculation is done, store all the content in the file.
I have tried two possible routes and both have failed, giving the same result. There is no error in the code (it runs), but it just doesn't store the content in the file properly: there should be 97 different APs and it's storing only 1.
The difference between the first and second attempt:
Attempt 1) I open/create the file at the beginning of the script and close it at the very end.
Attempt 2) I open/create and close the file per iteration.
First Attempt:
https://pastebin.com/jCpLGMCK
#Beginning of code
File = open("All_Possibilities.txt", "a+")
#End of code
File.close()
Second Attempt:
https://pastebin.com/cVrXQaAT
#Per function
File = open("All_Possibilities.txt", "a+")
#per function
File.close()
If I'm not supposed to reference other websites, please let me know and I'll just paste the code in this post.
Rather than close(), please use with:
with open('All_Possibilities.txt', 'a') as file_out:
file_out.write('some text\n')
The documentation explains that you don't need + to append writes to a file.
You may want to add some debugging console print() statements, or use a debugger like pdb, to verify that the write() statement actually ran, and that the variable you were writing actually contained the text you thought it did.
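Putting those pieces together, here is a minimal sketch of the pattern being suggested, with the file opened once and the write happening inside the loop. The model_numbers list and calc_mac function are hypothetical stand-ins for the asker's data and calculation:

# Hypothetical stand-ins for the real model list and MAC calculation.
model_numbers = ["AP-100", "AP-200", "AP-300"]

def calc_mac(model):
    return f"{model}: 00:11:22:33:44:55"

# Open the file once, write on every iteration, and let the context
# manager close it when the block ends.
with open("All_Possibilities.txt", "a") as file_out:
    for model in model_numbers:
        line = calc_mac(model)
        print("writing:", line)   # debug print to confirm each write runs
        file_out.write(line + "\n")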
You have several loops that could be a one-liner using readlines().
Please do this:
$ pip install flake8
$ flake8 *.py
That is, please run the flake8 lint utility against your source code, and follow the advice that it offers you.
In particular, it would be much better to name your identifier file than to name it File.
The initial capital letter means something to humans reading your code -- it is
used when naming classes, rather than local variables. Good luck!

Reading binary values stored in a text file using Verilog

I am working on a project in which I have to read 5-bit binary values from a text file. I have to read each 5-bit binary number and then assign its bits to 5 different 1-bit registers, one by one. Moreover, I can't use $readmemb because my text file is huge (almost 2 MB) and I think $readmemb can't deal with such a huge file; it's not working in my case. So can anyone please tell me how to use the $fopen and $fread functions to solve my problem? I have not worked with file handling in Verilog until now. And can anyone provide me an example similar to my problem?
Thanks,
Sami
You can use $fscanf to read your file one line at a time.
integer status, fd;
reg [4:0] value;

initial begin
  fd = $fopen("data_file.dat", "r");
  if (!fd) $error("could not read file");
  while (!$feof(fd)) begin
    status = $fscanf(fd, "%b", value);
    // check status, then do what you need to do with value
  end
end
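To get from the 5-bit value to the five 1-bit registers the question asks about, one option is to unpack it with a concatenation inside the same loop. This is only a sketch; the register names b0 through b4 and the MSB-first ordering are assumptions:

integer status, fd;
reg [4:0] value;
reg b0, b1, b2, b3, b4;              // five illustrative 1-bit registers

initial begin
  fd = $fopen("data_file.dat", "r");
  while (!$feof(fd)) begin
    status = $fscanf(fd, "%b", value);
    if (status == 1)
      {b4, b3, b2, b1, b0} = value;  // one bit per register, MSB first
  end
end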

Read nth line in Node.js without reading entire file

I'm trying to use Node.js to get a specific line for a binary search in a 48-million-line file, but I don't want to read the entire file into memory. Is there some function that will let me read, say, line 30 million? I'm looking for something like Python's linecache module.
Update on how this is different: I would like not to read the entire file into memory. The question this was marked as a duplicate of reads the entire file into memory.
You should use the readline module from Node's standard library. I deal with files of 30-40 million rows in my project, and this works great.
If you want to do that in a less verbose manner and don't mind using a third-party dependency, use the nthline package:
const nthline = require('nthline')
const filePath = '/path/to/100-million-rows-file'
const rowNumber = 42

nthline(rowNumber, filePath)
  .then(line => console.log(line))
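For the readline approach mentioned above, here is a rough sketch that streams the file and stops as soon as the requested line is reached. The path and row number are illustrative, and getLine is just a name chosen here:

const fs = require('fs')
const readline = require('readline')

function getLine(filePath, rowNumber) {
  return new Promise((resolve) => {
    const rl = readline.createInterface({ input: fs.createReadStream(filePath) })
    let index = 0
    rl.on('line', (line) => {
      if (index++ === rowNumber) {
        rl.close()                      // stop reading once the line is found
        resolve(line)
      }
    })
    rl.on('close', () => resolve(undefined))  // file had fewer lines than expected
  })
}

getLine('/path/to/100-million-rows-file', 42).then(line => console.log(line))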
According to the documentation, you can use fs.createReadStream(path[, options]), where:
options can include start and end values to read a range of bytes from the file instead of the entire file.
Unfortunately, you have to approximate the desired position/line; there seems to be no seek-like function in Node.js.
EDIT
The above solution works well with lines that have a fixed length.
A newline character is nothing more than a character like all the others, so looking for newlines is like looking for lines that start with the character a.
Because of that, if you have lines of variable length, the only viable approach is to load them one at a time into memory and discard those you are not interested in.
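As a sketch of the fixed-length case: if every line, newline included, is exactly lineLength bytes, the byte range for line n can be computed directly. The values below are made up for illustration:

const fs = require('fs')

const lineLength = 16                 // assumed size of every line, newline included
const rowNumber = 30000000            // zero-based index of the wanted line

const start = rowNumber * lineLength
const stream = fs.createReadStream('/path/to/file', {
  start,
  end: start + lineLength - 1,        // end offset is inclusive
})

const chunks = []
stream.on('data', (chunk) => chunks.push(chunk))
stream.on('end', () => console.log(Buffer.concat(chunks).toString().trimEnd()))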

Perl program structure for parsing

I've got a question about program architecture.
Say you've got 100 different log files with different formats and you need to parse and put that info into an SQL database.
My view of it is like:
use general config file like:
program1->name1("apache",/var/log/apache.log) (modulename,path to logfile1)
program2->name2("exim",/var/log/exim.log) (modulename,path to logfile2)
....
sqldb->configuration
use something like a module (1 file per program) type1.module (regexp, logstructure(somevariables), sql(tables and functions))
fork or thread processes (don't know what is better on Linux now) for different programs.
So the question is: is my view of this correct? Should I use one module per program (web/MTA/iptables), or is there some better way? I think some regexps would be the same, like date/time/IP/URL. What should I do with those? Or what have I missed?
example: mta exim4 mainlog
2011-04-28 13:16:24 1QFOGm-0005nQ-Ig <= exim@mydomain.org.ua H=localhost (exim.mydomain.org.ua) [127.0.0.1]:51127 I=[127.0.0.1]:465 P=esmtpsa X=TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32 CV=no A=plain_server:spam S=763 id=1303985784.4db93e788cb5c@mydomain.org.ua T="test" from <exim@exim.mydomain.org.ua> for test@domain.ua
Everything that is bold is already parsed and will be put into the sqldb.incoming table. Right now I have a structure in Perl to hold every parsed variable, like $exim->{timestamp} or $exim->{host}->{ip}.
My program will do something like tail -f /file and parse it line by line.
Flexibility: let's say I want to add support for the Apache server (just the timestamp, user IP, and file downloaded). All I need to know is which logfile to parse, what the regexp should be, and what the SQL structure should be. So I'm planning to have this as a module: just fork or thread the main process with parameters (logfile, filetype). Maybe later I would add some options for what not to parse (maybe some log level is low and you just don't see much there).
I would do it like this:
Create a config file that is formatted like this: appname:logpath:logformatname
Create a collection of Perl classes that inherit from a base parser class.
Write a script which loads the config file and then loops over its contents, passing each iteration to its appropriate handler object.
If you want an example of steps 1 and 2, we have one on our project. See MT::FileMgr and MT::FileMgr::* here.
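As a minimal sketch of steps 1 through 3 (the MT::FileMgr classes linked above are the fuller, real example), with made-up package names and a toy base class whose parse_line hook a real parser would override:

#!/usr/bin/perl
use strict;
use warnings;

# Toy base class: real parsers would subclass this and override parse_line()
# with the per-format regexp and SQL insert.
package My::Parser::Base;
sub new { my ($class, %args) = @_; return bless { %args }, $class }
sub run {
    my ($self) = @_;
    open my $fh, '<', $self->{logfile} or die "cannot open $self->{logfile}: $!";
    while (my $line = <$fh>) {
        $self->parse_line($line);   # subclass hook: extract fields, write to SQL
    }
    close $fh;
}
sub parse_line { }                  # overridden by subclasses

package main;

# Map log format names from the config file to handler classes (illustrative).
my %handler_for = ( apache => 'My::Parser::Base', exim => 'My::Parser::Base' );

# Config lines look like: appname:logpath:logformatname
open my $cfg, '<', 'parsers.conf' or die "cannot open config: $!";
while (my $line = <$cfg>) {
    chomp $line;
    my ($app, $logpath, $format) = split /:/, $line, 3;
    next unless defined $format and $handler_for{$format};
    $handler_for{$format}->new(logfile => $logpath)->run;
}
close $cfg;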
The log-monitoring tool wots could do a lot of the heavy lifting for you here. It runs as a daemon, watching as many log files as you could want, running any combination of perl regexes over them and executing something when matches are found.
I would be inclined to modify wots itself (which its licence freely allows) to support a database write method - have a look at its existing handle_* methods.
Most of the hard work has already been done for you, and you can tackle the interesting bits.
I think File::Tail is a nice fit.
You can make an array of File::Tail objects and poll them with select like this:
while (1) {
    ($nfound, $timeleft, @pending) =
        File::Tail::select(undef, undef, undef, $timeout, @files);
    unless ($nfound) {
        # timeout - do something else here, if you need to
    } else {
        foreach (@pending) {
            # here you can handle log messages depending on filename
            print $_->{"input"}." (".localtime(time).") ".$_->read;
        }
    }
}
(from the Perl File::Tail documentation)
