How do I apply command line overrides to SystemVerilog ovm_sequence objects?

I'd like to apply a command line override to an ovm_sequence object like this:
+ovm_set_config_int=*,max_timeout,100000
The max_timeout field is declared inside ovm_sequence_utils macro.
Is there any way to do it? My understanding is that ovm sequences are not part of the ovm hierarchy, so perhaps they can't be modified from the command line.

I'm not aware of a mechanism that lets you set up config space like that from the command line. A quick grep of the OVM source doesn't show anything either.
A quick comment on "ovm sequences are not part of the ovm hierarchy": they're not constructed at build time, that's correct. They are created just before they start running on a sequencer, but any ovm_object-based class can interrogate a config integer via get_config_int().
Normally I'd use a plusarg for things like this, and then set the config int in my base test class based on that plusarg. For example, the command line would have:
+max_timeout=100000
...and then, in my base test class, which all my tests inherit from:
function void build();
  int timeout;
  [....]
  if ($value$plusargs("max_timeout=%d", timeout)) begin
    `ovm_info(get_type_name(), "Setting timeout", OVM_MEDIUM)
    set_config_int("*", "max_timeout", timeout);
  end
  [....]
endfunction
Normally my uses are not quite so literal as that, with flags that set up multiple values, but that's the basics of it.

I got it working (following instructions from http://www.testbench.in/OT_10_OVM_SEQUENCE_5.html) by adding the following to my ovm_sequence in task body():
if (!p_sequencer.get_config_int("max_timeout", max_timeout))
    max_timeout = ...; // some default value
The key here is that the command line config needs to be set for the sequencer, and the sequence can pick up that config using the above-mentioned code.
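Putting the two answers together, here is a minimal sketch of both sides; the instance path "*.sequencer" and the default value are assumptions, so adjust them to your testbench:
// Test side: scope the config to the sequencer instance(s) so that
// the sequence can find it via p_sequencer.
function void build();
  int timeout;
  super.build();
  if ($value$plusargs("max_timeout=%d", timeout))
    set_config_int("*.sequencer", "max_timeout", timeout);
endfunction

// Sequence side: p_sequencer must be typed (e.g. by naming the
// sequencer type in ovm_sequence_utils) for get_config_int to work.
task body();
  if (!p_sequencer.get_config_int("max_timeout", max_timeout))
    max_timeout = 100000; // assumed default
  // ... rest of the sequence
endtask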

Related

Snakemake: Parameter as wildcard used in parallel script runs

I'm fairly new to Snakemake and inherited a rather huge workflow that consists of a sequence of 17 rules that run serially.
Each rule takes outputs from the previous rules and uses them to run a Python script. Everything has worked great so far, except that now I'm trying to improve the workflow, since some of the rules can be run in parallel.
Here's a rough example of what I'm trying to achieve; my understanding is that wildcards should allow me to solve this.
grid = [10, 20]

rule all:
    input:
        expand("path/to/C/{grid}/file_C", grid=grid)

rule process_A:
    input:
        path_A = "path/to/A/file_A"
        path_B = "path/to/B/{grid}/file_B" # A rule further along in the workflow could need a file from a previous rule saved with this structure
    params:
        grid = lambda wc: wc.get(grid)
    output:
        path_C = "path/to/C/{grid}/file_C"
    script:
        "script_A.py"
And inside the script I retrieve the grid size parameter:
grid = snakemake.params.grid
In the end, the whole rule process_A should be rerun with grid = 10 and with grid = 20, and each result saved to a folder whose path also depends on grid.
I know there are several things wrong with this, but I can't seem to find where to start to figure this out. The error I'm getting now is:
name 'params' is not defined
Any help as to where to start?
It would be useful to post the error stack trace of name 'params' is not defined to know exactly what is causing it. For now...
And inside the script I retrieve the grid size parameter:
grid = snakemake.params.grid
I suspect you are mixing the script directive with the shell directive. Probably you want something like:
rule process_A:
    input: ...
    output: ...
    params: ...
    script:
        "script_A.py"
Inside script_A.py, Snakemake will make the actual param value available as snakemake.params.grid.
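For reference, a hedged sketch of the question's rule with the two likely slips fixed: a comma between the named inputs, and wc.grid (attribute access) instead of wc.get(grid):
rule process_A:
    input:
        path_A = "path/to/A/file_A",
        path_B = "path/to/B/{grid}/file_B",
    params:
        grid = lambda wc: wc.grid
    output:
        path_C = "path/to/C/{grid}/file_C"
    script:
        "script_A.py"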
Alternatively, write a standalone Python script that parses command line arguments, and execute it like any other program using the shell directive. (I tend to prefer this solution as it makes things more explicit and easier to debug, but it also means more boilerplate code to write a standalone script.)
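A minimal sketch of that alternative (the flag names are illustrative; the paths come from the question):
# script_A.py as a standalone program, driven from the Snakefile with:
# shell:
#     "python script_A.py --in-a {input.path_A} --in-b {input.path_B} "
#     "--grid {wildcards.grid} --out {output.path_C}"
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--in-a", required=True)
parser.add_argument("--in-b", required=True)
parser.add_argument("--grid", type=int, required=True)
parser.add_argument("--out", required=True)
args = parser.parse_args()

# ... process args.in_a and args.in_b for grid size args.grid,
# then write the result to args.out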

TaskWarrior automatically modify UDA

I have a question. Let's say that I have created a User Defined Attribute attr with values A, B, C.
How do I configure taskwarrior to automatically change the attr value from A to B when I enter
task x start
and change attr from B to C when
task x done
Disadvantages of the suggested solution:
You continuously need to have a script running in the background.
A small delay can occur between your task x start command and the change of UDA attr.
It is a bit of a tedious method; perhaps you can also accomplish your goal using solely taskwarrior commands/settings.
It is made for fun and I currently cannot offer any security or proper-functioning guarantees. I tested and use it on WSL Ubuntu 16.04.
Assumptions:
If you enter task x start, the attribute Start is set to a valid date.
Solution:
You can have a script running in the background that reads the properties of all tasks, and as soon as it detects a valid date in the Start attribute of a task and a value of B in the UDA attr, it sets the UDA attr to C by executing the command task x modify attr:C.
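A minimal sketch of that loop in Python, extended to the question's full mapping (A to B on start, B to C on done); task export and task <uuid> modify are standard taskwarrior commands, while the UDA name, the values, and the poll interval are taken from the question:
#!/usr/bin/env python3
# Poll taskwarrior and promote the UDA "attr" as tasks progress.
import json, subprocess, time

while True:
    tasks = json.loads(subprocess.check_output(["task", "export"]))
    for t in tasks:
        if "start" in t and t.get("attr") == "A":
            subprocess.run(["task", t["uuid"], "modify", "attr:B"])
        elif t.get("status") == "completed" and t.get("attr") == "B":
            # taskwarrior may ask for confirmation when modifying
            # completed tasks, depending on your configuration
            subprocess.run(["task", t["uuid"], "modify", "attr:C"])
    time.sleep(5)  # the small delay mentioned above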
I made a script/small project that sorts on a custom setting of project and urgency, and it contains the functionality of:
Running in the background from startup automatically,
Scanning the task properties and automatically applying the changes that are programmed in the script.
So in effect, you should modify/add the UDA attr here, and duplicate and change, for example, the method private static void setCustomSort(ArrayList<Task> taskList) on line 88 of the main class.
(For the 2nd step, between //get uuid and //create command, you should add the condition that checks the task for a valid id. Then, if it has one, change the command that is generated to task modify attr:C.)
The instructions to compile the Java code and set up automation are listed here.

How to run a string from an input file as python code?

I am creating something along the lines of a text adventure game. I have a .yaml file that is my input. This file looks something like this:
node_type:
  action
title:
  Do some stuff
info:
  This does some stuff and things
script:
  'print("hello world")
  print(ret_val)
  foo.bar(True)
  ret_val = (foo.bar() == True)
  if (thing):
    print(thing)
  print(ret_val)
  '
My end goal is to have my Python program run the script portion of the yaml file exactly as if it had been copy-pasted into the main code. (I know there are about ten bazillion security reasons I should not be running user input like this, but I am the only one writing these nodes, and the only one using this program, so I'm mostly just ignoring this fact...)
Currently my attempt goes like this: I load my yaml file as a dict using pyyaml
node = yaml.safe_load(open("file.yaml"))
Then I try to use exec to run my code, and I'm hitting a lot of problems: I can't run if statements (I simply get a syntax error), and I can't get any sort of return value from my code. I've tried this as a workaround:
def main():
    ret_val = "test"
    thing = exec(node['script'], globals(), locals())
    print(ret_val)
which when run with the above .yaml file prints
>> hello world
>> test
>> True
>> test
for some reason not actually modifying any of my main variables, even though I fed them to exec.
Is there any way for me to work around these issues, or is there an altogether better way to be doing this?
One way of doing this would be to parse the code out and save it to a .py file, from which it can be imported dynamically, for example by importlib.
You might want to encapsulate parsed code into a function, which you can then easily call to invoke your action. Also, it would make sense to specify some default imports there.
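A hedged sketch of that idea (the module path, the function name run, and its parameter list are all illustrative assumptions):
# Wrap the YAML "script" text in a function, write it to a module
# file, and import it dynamically with importlib.
import importlib.util
import textwrap

def load_action(node, path="action_module.py"):
    with open(path, "w") as f:
        f.write("def run(foo, thing, ret_val):\n")
        f.write(textwrap.indent(node["script"], "    "))
        f.write("\n    return ret_val\n")
    spec = importlib.util.spec_from_file_location("action_module", path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    return mod.run

# Usage: ret_val = load_action(node)(foo, thing, "test")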

Throws error when passing argument with space in JAVA_OPTS in Linux

I am passing command line parameters to gatling script.
This works and executes my test on Windows:
set JAVA_OPTS="-DuserCount=2 -DflowRepeatCount=3 -DdefinitionId=102168 -DtestServerUrl=https://someURL -DenvAuthenticationHeaderFromPostman="Basic UWRZm9aGwsxFsB1V7RXK0OlB5cmZvcm1hbmNldGVzdDE="
It works and picks up the input that is passed:
**********************INPUT*************************************
User Count ====>> 2
Repeat Count ====>> 3
Definition ID ====>> 102168
Environment URL ====>> https://someURL
Authentication Header ====>> Basic UWRZm9aGwsxFsB1V7RXK0OlB5cmZvcm1hbmNldGVzdDE=
***********************************************************
I want to do the same thing on a Linux system.
But if I use this command on Linux, it throws an error or takes null or binary values as input
(passing arguments with ./gatling.sh):
JAVA_OPTS="-DuserCount=2 -DflowRepeatCount=3 -DdefinitionId=102168 -DtestServerUrl='https://someURL' -DenvAuthenticationHeaderFromPostman='Basic UWRZm9aGwsxFsB1V7RXK0OlB5cmZvcm1hbmNldGVzdDE='" ./gatling.sh
It gives this error:
GATLING_HOME is set to /opt/gatling-charts-highcharts-2.0.3 Error:
Could not find or load main class
UWRZm9aGwsxFsB1V7RXK0OlB5cmZvcm1hbmNldGVzdDE='
The problem here is the space in the argument -DenvAuthenticationHeaderFromPostman='Basic UWRZm9aGwsxFsB1V7RXK0OlB5cmZvcm1hbm='.
What is the solution?
The problem is that the $JAVA_OPTS variable is probably not surrounded by quotes. See this question: Passing a space-separated System Property via a shell script doesn't work.
The Gatling guys clearly forgot to do that.
I would file a bug and/or just edit gatling.sh.
Ideally, though, you might want to check whether Gatling accepts a properties file or some other way to configure these values.
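To see what goes wrong and one way to fix it when editing the script, here is a small bash sketch (option values shortened; the java invocation is illustrative, not Gatling's actual launcher line):
# Word splitting happens on the value of the variable; the single quotes
# inside it are literal characters, not shell quoting, so 'Basic UWRZ...'
# is split into two words and the second is taken as the main class.
JAVA_OPTS="-DuserCount=2 -Dauth='Basic UWRZ...'"
java $JAVA_OPTS -jar app.jar        # broken: splits on the inner space

# A robust fix inside an edited launcher: build the options as an array,
# so each element survives as exactly one argument.
java_opts=(-DuserCount=2 "-Dauth=Basic UWRZ...")
java "${java_opts[@]}" -jar app.jar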

Perl program structure for parsing

I've got a question about program architecture.
Say you've got 100 different log files with different formats, and you need to parse that info and put it into an SQL database.
My view of it is like this:
Use a general config file like:
program1->name1("apache",/var/log/apache.log) (modulename, path to logfile1)
program2->name2("exim",/var/log/exim.log) (modulename, path to logfile2)
....
sqldb->configuration
Use something like a module (one file per program), type1.module (regexp, logstructure(somevariables), sql(tables and functions)).
Fork or thread processes (don't know which is better on Linux now) for the different programs.
So the question is: is my view of this correct? Should I use one module per program (web/MTA/iptables), or is there some better way? I think some regexps would be the same, like date/time/IP/URL. What should I do with those? Or what else have I missed?
Example: exim4 MTA mainlog
2011-04-28 13:16:24 1QFOGm-0005nQ-Ig <= exim@mydomain.org.ua H=localhost (exim.mydomain.org.ua) [127.0.0.1]:51127 I=[127.0.0.1]:465 P=esmtpsa X=TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32 CV=no A=plain_server:spam S=763 id=1303985784.4db93e788cb5c@mydomain.org.ua T="test" from <exim@exim.mydomain.org.ua> for test@domain.ua
Everything that is bold is already parsed and will be put into the sqldb.incoming table. For now I have a structure in Perl to hold every parsed variable, like $exim->{timestamp} or $exim->{host}->{ip}.
My program will do something like tail -f /file and parse it line by line.
Flexibility: let's say I want to add support for an Apache server (just timestamp, user IP, and file downloaded). All I need to know is which logfile to parse, what the regexp should be, and what the SQL structure should be. So I'm planning to have this as a module: just fork or thread the main process with parameters (logfile, filetype). Maybe later I would add some options for what not to parse (maybe some log level is low and you just don't see much there).
I would do it like this:
Create a config file that is formatted like this: appname:logpath:logformatname
Create a collection of Perl classes that inherit from a base parser class.
Write a script which loads the config file and then loops over its contents, passing each iteration to its appropriate handler object.
If you want an example of steps 1 and 2, we have one in our project. See MT::FileMgr and MT::FileMgr::* here.
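A minimal sketch of steps 1-3 (the package names, the regex, and the config details are illustrative assumptions):
#!/usr/bin/perl
use strict;
use warnings;

# Base parser class: subclasses override parse_line with their regex.
package Parser::Base;
sub new {
    my ($class, %args) = @_;
    return bless { logpath => $args{logpath} }, $class;
}
sub parse_line { die "parse_line must be overridden" }

# One subclass per log format, e.g. for the exim mainlog above.
package Parser::Exim;
our @ISA = ('Parser::Base');
sub parse_line {
    my ($self, $line) = @_;
    if ($line =~ /^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) (\S+)/) {
        return { timestamp => $1, msgid => $2 };
    }
    return undef;
}

# Driver: load "appname:logpath:logformatname" lines and dispatch.
package main;
while (my $cfg = <DATA>) {
    chomp $cfg;
    my ($app, $logpath, $format) = split /:/, $cfg, 3;
    my $parser = "Parser::$format"->new(logpath => $logpath);
    # ... tail $logpath and feed each line to $parser->parse_line($line)
}

__DATA__
exim:/var/log/exim.log:Exim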
The log-monitoring tool wots could do a lot of the heavy lifting for you here. It runs as a daemon, watching as many log files as you could want, running any combination of perl regexes over them and executing something when matches are found.
I would be inclined to modify wots itself (which its licence freely allows) to support a database write method; have a look at its existing handle_* methods.
Most of the hard work has already been done for you, and you can tackle the interesting bits.
I think File::Tail is a nice fit.
You can make an array of File::Tail objects and poll them with select like this:
while (1) {
    ($nfound, $timeleft, @pending) =
        File::Tail::select(undef, undef, undef, $timeout, @files);
    unless ($nfound) {
        # timeout - do something else here, if you need to
    } else {
        foreach (@pending) {
            # here you can handle log messages depending on filename
            print $_->{"input"} . " (" . localtime(time) . ") " . $_->read;
        }
    }
}
(from the Perl File::Tail documentation)
