Pass Parameters to Kiba run Method - kiba-etl

I'm trying to use something similar to the code that's used for the Kiba CLI, programmatically, as ...
filename = './path/to/script.rb'
script_content = IO.read(filename)
job_definition = Kiba.parse(script_content, filename)
Kiba.run(job_definition) # <= I want to pass additional parameters here
I'd like to be able to pass additional parameters via the .run call besides the job_definition. It doesn't look like run supports this, but I figured I'd check.
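Kiba.run itself only takes the job definition, but since the script is parsed and run in the same process, one common workaround (a sketch, not an official Kiba API; the variable names below are made up for illustration) is to pass parameters through environment variables, which the ETL script reads back:

```ruby
# Set parameters before parsing/running the job; ENV is process-global,
# so code evaluated by Kiba.parse can read them back.
ENV['INPUT_FILE'] = './data/input.csv'
ENV['BATCH_SIZE'] = '500'

# Inside ./path/to/script.rb you would then read them back, e.g.:
input_file = ENV.fetch('INPUT_FILE')
batch_size = Integer(ENV.fetch('BATCH_SIZE'))
```

The Kiba.parse / Kiba.run calls then proceed exactly as in the snippet above, with the script pulling its configuration from ENV.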

Related

Snakemake: Parameter as wildcard used in parallel script runs

I'm fairly new to Snakemake and inherited a rather large workflow that consists of a sequence of 17 rules that run serially.
Each rule takes outputs from the previous rules and uses them to run a Python script. Everything has worked great so far, except that now I'm trying to improve the workflow, since some of the rules can be run in parallel.
Below is a rough example of what I'm trying to achieve; my understanding is that wildcards should allow me to solve this.
grid = [10, 20]

rule all:
    input:
        expand("path/to/C/{grid}/file_C", grid=grid)

rule process_A:
    input:
        path_A = "path/to/A/file_A",
        path_B = "path/to/B/{grid}/file_B" # a rule further along in the workflow could need a file from a previous rule saved with this structure
    params:
        grid = lambda wc: wc.get(grid)
    output:
        path_C = "path/to/C/{grid}/file_C"
    script:
        "script_A.py"
And inside the script I retrieve the grid size parameter:
grid = snakemake.params.grid
In the end the whole rule process_A should be rerun with grid = 10 and with grid = 20 and save each result to a folder whose path depends on grid also.
I know there are several things wrong with this, but I can't seem to figure out where to start. The error I'm getting now is:
name 'params' is not defined
Any help as to where to start from?
It would be useful to post the error stack trace of name 'params' is not defined to know exactly what is causing it. For now...
And inside the script I retrieve the grid size parameter:
grid = snakemake.params.grid
I suspect you are mixing the script directive with the shell directive. Probably you want something like:
rule process_A:
    input: ...
    output: ...
    params: ...
    script:
        "script_A.py"
Inside script_A.py, Snakemake will make the actual param value available as snakemake.params.grid.
Alternatively, write a standalone Python script that parses command-line arguments, and execute it like any other program using the shell directive. (I tend to prefer this solution as it makes things more explicit and easier to debug, but it also means more boilerplate, since you have to write a standalone script.)
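A minimal sketch of that standalone-script alternative (the argument names and the rule wiring are illustrative, not from the original workflow):

```python
# script_A.py -- standalone version driven by command-line arguments
import argparse

def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="Process one grid size")
    parser.add_argument("--grid", type=int, required=True)
    parser.add_argument("--in", dest="input_path", required=True)
    parser.add_argument("--out", dest="output_path", required=True)
    return parser.parse_args(argv)

def main(argv=None):
    args = parse_args(argv)
    # ... the real processing would go here, using args.grid etc. ...
    return f"grid={args.grid}: {args.input_path} -> {args.output_path}"

# As a script you would add:  if __name__ == "__main__": print(main())
```

The rule would then use something like shell: "python script_A.py --grid {params.grid} --in {input} --out {output}" instead of the script directive.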

Pass runtime arguments in Gatling

I am trying to pass runtime arguments for a couple of fields when running Gatling tests. For example, I am trying to pass the number of users dynamically when running the test. How can I do that?
This is documented in the official documentation:
This can be done very easily with additional JAVA_OPTS in the launch script:
JAVA_OPTS="-Dusers=500 -Dramp=3600"
val nbUsers = Integer.getInteger("users", 1)
val myRamp = java.lang.Long.getLong("ramp", 0L)
setUp(scn.inject(rampUsers(nbUsers).during(myRamp.seconds)))
// Of course, passing a String is just as easy as:
JAVA_OPTS="-Dfoo=bar"
val foo = System.getProperty("foo")
The easiest way is going for Java system properties, for example if you have workload model defined as:
setUp(scn.inject(atOnceUsers(1)).protocols(httpProtocol))
if you change it from hard-coded to dynamic reading of the system property like:
setUp(scn.inject(atOnceUsers(Integer.parseInt(System.getProperty("userCount")))).protocols(httpProtocol))
you will be able to pass the desired number of users dynamically via -D command-line argument like:
gatling.bat -DuserCount=5 -s computerdatabase.BasicSimulation
More information: Gatling Installation, Verification and Configuration - the Ultimate Guide

SoapUI: Transfer groovy script results using Property Transfer

I am an absolute noob in SoapUI. I am looking out for answer on this but somehow could not really find it.
I am in a situation where I would like to transfer the results of two Groovy scripts to another Groovy script. Unfortunately, when using Property Transfer, the destination Groovy script gets fully overridden by the return value of the source script. How shall I approach this?
Please find below the example for the same:
As you can see, I would like to pass the transferred results of generateCreated and generateNonce to the generatePassword script via testRunner.testCase.getPropertyValue("Nonce") and testRunner.testCase.getPropertyValue("Created").
But this just doesn't seem to work for me.
You don't need a Property Transfer teststep for that.
You just let your first two scripts run - as you already are doing.
Then in your third Groovy Script, you just pull the results into variables.
This can be done using something like
def result = context.expand( '${Groovy Script#result}' )
In your case above, I suspect you would adjust that to something like
def created = context.expand( '${generateCreated#result}' )
def nonce = context.expand( '${generateNonce#result}' )
Insert those lines in your script wherever you need those variables, and then you have the variables "created" and "nonce" holding the results.

How to run a string from an input file as python code?

I am creating something along the lines of a text adventure game. I have a .yaml file as my input. This file looks something like this:
node_type:
  action
title:
  Do some stuff
info:
  This does some stuff and things
script:
  'print("hello world")
  print(ret_val)
  foo.bar(True)
  ret_val = (foo.bar() == True)
  if (thing):
      print(thing)
  print(ret_val)
  '
My end goal is to have my python program run the script portion of the yaml file exactly as if it had been copy pasted into the main code. (I know there are about ten bazillion security reasons I should not be running user input like this, but I am the only one writing these nodes, and the only one using this program so I'm mostly just ignoring this fact...)
Currently my attempt goes like this: I load my yaml file as a dict using pyyaml
node = yaml.safe_load(file.yaml)
Then I'm trying to use exec to run my code, and I'm hitting a lot of problems: I can't run if statements (I simply get a syntax error), and I can't get any sort of return value from my code. I've tried this as a workaround:
def main():
    ret_val = "test"
    thing = exec(node['script'], globals(), locals())
    print(ret_val)
which when run with the above .yaml file prints
>> hello world
>> test
>> True
>> test
for some reason, it doesn't actually modify any of my main function's variables, even though I fed them to exec.
Is there any way for me to work around these issues or is there an all together better way to be doing this?
One way of doing this would be to parse the code out and save it to a .py file, from which it can be imported dynamically, for example with importlib.
You might want to encapsulate parsed code into a function, which you can then easily call to invoke your action. Also, it would make sense to specify some default imports there.
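Another sketch that addresses the two symptoms directly (the YAML snippet and names below are illustrative, not from the original file): exec cannot rebind a function's local variables, which is why ret_val stayed "test", and a single-quoted YAML scalar folds line breaks into spaces, which is why multi-line statements raise a syntax error. Passing one explicit dict as the namespace, and storing the script with a literal block scalar (script: |), avoids both:

```python
# Stand-in for node['script'] as loaded from YAML. In the real file, use a
# literal block scalar:
#
#   script: |
#     print("hello world")
#     ...
#
# so newlines are preserved; a single-quoted scalar folds line breaks into
# spaces, which is what makes multi-line statements fail to parse.
script = """
print("hello world")
ret_val = 42
if ret_val == 42:
    ret_val = "it worked"
"""

# exec() cannot rebind a function's locals, so give it one explicit dict
# instead; every name the executed code assigns lands in this dict.
ns = {"ret_val": "test"}
exec(script, ns)
print(ns["ret_val"])  # -> "it worked"
```

Reading results back out of the namespace dict after the exec call gives you the "return values" the question asks about.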

getting name of previous test step in SoapUI groovy script

I'm new to groovy scripting in SoapUI and a bit confused regarding the amount of information available, so I may just have overlooked the answer to this.
There is a method context.getCurrentStep() available in scripts which loaded the GroovyUtils. But in a script step this, of course, returns the name of the script step itself.
Now I want to access the name (more precisely, the response) of the previous step without using its name explicitly. Is there an easy method to achieve this?
You could do something like:
def currentStepInd = context.currentStepIndex
def previousStep = testRunner.testCase.getTestStepAt(currentStepInd - 1)
log.info previousStep.name
More information is available in the API JavaDocs.
You would want to do the following in your script:
def response = context.expand( '${previous_step_name#Response#}' )
