In BACnet VTS scripting, is there a way to implement functions/macros?

In my VTS scripts, almost all the SEND and EXPECT commands have similar parameters (like DEST ADDRESS, DEST NETWORK, etc.). So is there a way to avoid the duplication using a function or macro? I have not seen any functions/macros in the example scripts of VTS.

Unfortunately, VTS scripting is limited in its possibilities; something like functions or macros does not exist. But instead of hard-coded values you can use parameters such as IUT_ADDR for an address.
Alternatively, you can generate the VTS scripts with an additional tool. A Python script is conceivable that acts as a kind of preprocessor and creates the VTS scripts in a pre-build step.
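Picking up the preprocessor idea, here is a minimal sketch of such a pre-build step in Python. The parameter names (IUT_ADDR, IUT_NET), the template text, and the function names are invented for illustration; real VTS SEND/EXPECT blocks will look different.

```python
# Hypothetical pre-build step: expand shared parameters into a VTS script.
# COMMON holds the values that would otherwise be duplicated in every block.
COMMON = {"DEST_ADDRESS": "IUT_ADDR", "DEST_NETWORK": "IUT_NET"}

# Invented template; substitute your actual SEND/EXPECT block syntax here.
TEMPLATE = """SEND (
    DESTINATION ADDRESS = {DEST_ADDRESS},
    DESTINATION NETWORK = {DEST_NETWORK},
    SERVICE = {service}
)"""

def send_block(service, **overrides):
    """Render one SEND block, merging the shared defaults with overrides."""
    params = {**COMMON, **overrides}
    return TEMPLATE.format(service=service, **params)

def build_script(services):
    """Concatenate the rendered blocks into one generated VTS script."""
    return "\n\n".join(send_block(s) for s in services)
```

A build step would then write `build_script([...])` to a `.vts` file before running VTS, so the shared parameters live in exactly one place.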

Related

How to remove a data source from an RRD db programmatically?

I'm trying to remove a data source from an RRD db.
I found I can do something like
"rrdtool tune mydb.rrd DEL:source_name"
and it works, but I want to do it from C/C++ code.
I could use the system function in Linux, but I don't
like the overhead.
I looked in https://oss.oetiker.ch/rrdtool/doc/librrd.en.html
to see if there is something I could use, but I didn't find anything.
I also looked in the rrd source code from https://github.com/oetiker/rrdtool-1.x/tree/master/src
and I found they call rrd_modify_r2() to remove sources, but this function is static, so it's not exported (as opposed to rrdc_create_r2)
So, how can I remove a source from C/C++ code?
thanks,
Catalin
You can of course use rrdtool tune filename.rrd DEL:ds-name to do this from the command line, as you note.
However, the RRDtool C bindings in librrd are not as comprehensive and do not appear to expose this functionality. I'm not sure why - the modify function is clearly useful - but that's how it seems to be.
One option would be to simply call the external rrdtool binary with fork/exec, passing the appropriate command line. This is not a particularly pretty way to do it, but it is more portable and compatible with the published interface.
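As a sketch of that approach (in Python for brevity; a C version would build the same argv and use fork/execvp/waitpid or posix_spawn): separating the argv construction from the actual invocation also makes the command buildable and testable without an rrdtool binary installed. The function names are made up, and rrdtool is assumed to be on PATH.

```python
import subprocess

def tune_del_argv(rrd_path, ds_name):
    """Build the argv for `rrdtool tune <file> DEL:<ds-name>`."""
    return ["rrdtool", "tune", rrd_path, f"DEL:{ds_name}"]

def rrd_delete_ds(rrd_path, ds_name):
    """Remove a data source by invoking the external rrdtool binary
    (assumed to be on PATH); raises CalledProcessError on failure."""
    subprocess.run(tune_del_argv(rrd_path, ds_name), check=True)
```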

Unit testing of shell commands called in a python script

I am writing unit tests (using Python 3.7, Pytest 3.6 and tox 3.0) for a function that compiles a series of shell commands as a list of strings, and then executes them using the subprocess module. The shell commands do the following:
creates a filename from the function arguments.
changes (cd) into a given directory.
checks for a file with a certain name and removes it if it exists.
creates a new file with the same name.
executes a program and pipes the output into the new file.
My test right now is mocking the subprocess module, and then asserting that it was called with a list of strings that contains all the expected commands given some test arguments.
Is there a way to test that the commands do what they are supposed to? Right now my test is only checking whether the list of commands I feed to the subprocess module is the same as the one I have asserted it to be. This does not tell me whether the commands are the right ones for what I am trying to achieve. Rather, it only serves as a test of whether I can write down the same string in two different source files.
Can I simulate the side effects that I expect the shell commands to have?
Your question combines two interesting topics: a) Testing code generated from a code generator and b) testing shell code.
For testing code from a generator, you in principle have to do the following: i) test that the generator creates the code that is expected - which you have already done; ii) test that the code snippets/pieces which the generator glues together actually behave (independently and in combination) as intended - in your case these are the shell code pieces that together will form a valid shell program; this is the part about testing shell code that is addressed below; and iii) test that the inputs that control the generator are correct.
It is comparable to a compiler: i) is the compiler code, ii) are the assembly snippets that the compiler combines to produce the resulting assembly program, and iii) is the source code that is given to the compiler. Once i), ii) and iii) are tested, there is only seldom a need to also test at the assembly-code level. In particular, the source code of iii) is ideally tested by test frameworks in the same programming language.
In your case it is not so clear how part iii) looks and how it can be tested, though.
Regarding the testing of shell code / shell code snippets: shell code is dominated by interactions with other executables and the operating system. The typical questions are: am I calling the right executables, in the right order, with the arguments in the right order and with properly formatted argument values, and are the outputs in the form I expect them to be? To test all this, you should not apply unit testing, but integration testing instead.
In your case this means that the shell code snippets should be integration-tested on the different target operating systems (that is, not isolated from the operating system). And, the various ways in which the generator puts these snippets together should also be integration tested to see if they together operate nicely on the operating system.
However, there can be shell code that is suitable for unit testing. This is, for example, code performing computations within the shell, or string manipulations. I would even consider shell code with calls to certain fundamental tools like basename as suitable for unit testing (interpreting such tools as part of the 'standard library', if you like). In your case, as you describe it, the generated shell code
creates a filename from the function arguments.
This sounds like one example of a good candidate for 'unit-testing' shell code: This filename creation functionality could be put into a shell function and then be tested in isolation.
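For instance, if the filename rule is reimplemented as (or mirrored by) a plain function, it can be unit-tested without touching the file system. The rule and names below are invented purely to illustrate the idea.

```python
def make_filename(sample, run_id, ext="txt"):
    """Hypothetical filename rule pulled out of the generated shell code
    so it can be unit-tested in isolation (the rule itself is made up)."""
    return f"{sample}_{run_id:03d}.{ext}"

def test_make_filename():
    # Pure-function test: no shell, no subprocess, no temp files needed.
    assert make_filename("exp", 7) == "exp_007.txt"
    assert make_filename("exp", 7, ext="log") == "exp_007.log"
```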
With pytest-mock you can request the mocker fixture and then spy on subprocess functions:
import subprocess

def test_xxx(mocker):
    mocker.spy(subprocess, 'call')
    subprocess.call(...)
    assert subprocess.call.call_count == 1
P.S. Tests with side effects are generally bad practice, so I would recommend running all the shell commands in a tmpdir (a pytest fixture that creates a temporary directory).
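To make the side-effect point concrete, here is a sketch of integration-testing the commands by actually executing them in a temporary directory and asserting on the resulting file. The commands are simplified stand-ins (echo replaces the real program), and in a pytest test the directory would come from the tmp_path or tmpdir fixture.

```python
import subprocess
import tempfile
from pathlib import Path

def run_commands(workdir, fname):
    """Simplified stand-ins for the generated commands: remove any
    existing file of that name, create a new one, and pipe a program's
    output into it. `echo` stands in for the real program here."""
    target = Path(workdir) / fname
    if target.exists():                  # rm -f equivalent
        target.unlink()
    with open(target, "w") as out:       # create the file and pipe into it
        subprocess.run(["echo", "hello"], stdout=out, check=True)
    return target

# tempfile keeps the sketch standalone; pytest's tmp_path does the same job.
with tempfile.TemporaryDirectory() as d:
    result = run_commands(d, "out.txt")
    assert result.read_text().strip() == "hello"
```

The assertion inspects the real side effect (the file's contents) rather than the command strings, which is exactly what the mock-based test cannot check.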

Sending data between two programs on the same computer in TCL?

I'm looking for a way for one program to send a string to another program (both in TCL). I've been looking into "threading"; however, I haven't been able to understand how it works or how to do what I want with it.
I would suggest you look at the comm package in tcllib. This package provides remote script execution between Tcl interpreters using sockets as the communications mechanism. Since both sides are in Tcl, this is an easy way to go.

Groovy DSL scripts

I wrote a Global AST transformation that should be applied to DSL scripts, and am now in the process of selecting the best way to identify specific groovy scripts as these DSL scripts.
I considered the following options:
A custom file extension; the biggest disadvantage here is IDE support: many IDEs barely support compilation/editing of files that have non-Groovy extensions (you can configure an editor, but it requires some tweaking).
A special file name suffix (or prefix); but in this case the suffix should be really unique (and thus relatively long) to avoid accidental transformation of regular Groovy files (my current choice).
A local AST transformation applied to a script class; the disadvantage is that one would need to write some boilerplate code for each script.
Having some unique first statement in the scripts that will identify the DSL.
What would in your opinion be the best option to choose and why? Are there any other options at my disposal that I haven't thought about?
If you compile your DSL scripts using GroovyShell you can use CompilerConfiguration.addCompilationCustomizer(ASTTransformationCustomizer(YourGlobalASTTransformation)) to apply the transformation on them.

Securing a workspace variable

Maybe you have run into the following situation: you're working, you run one script after another, and then suddenly realize you've changed the value of a variable you are interested in. Apart from making a backup of the workspace, is there no other way to protect the variables?
Is there a way to select individual variables in the workspace that you're going to protect?
Apart from seeing the command history register, is there a history register of the different values that have been given to one particular variable?
Running scripts in sequence is a recipe for disaster. If possible, try turning those scripts into functions. This will naturally do away with the problems of overwriting variables you are running into, since variables inside functions are local to those functions whereas variables in scripts are local to the workspace -- and thus easily accessed/overwritten by separate scripts (often unintentionally, especially if you use variable names like "result").
I also agree that writing functions can be helpful in this situation. If however you are manipulating very large data sets then you need to be careful to write your code in a form which doesn't make multiple copies of variables within your functions or you may run into memory shortage problems.
No, there is no workspace history. I would say, if you run into that problem that you described, you should consider changing your programming style.
I would suggest you:
Put enough code and information in your script that you can start from an empty workspace and fulfill the task. For that reason I always put clear all at the start of my main file.
If it's getting too complex, consider calling functions. If you need values that are generated by another script or function, rewrite that script as a function and call it from your main file, or save the variables. Loading variables is absolutely okay, but running scripts in sequence leads to disaster, as mentioned by marciovm.
