Is it possible to run multiple commands using karate.fork()? I tried separating the commands with ; or && but the second command doesn't seem to get executed.
I am trying to cd to a particular directory before executing bash on a shell script.
* def command =
  """
  function(line) {
    var proc = karate.fork({ redirectErrorStream: false, useShell: true, line: line });
    proc.waitSync();
    karate.set('sysOut', proc.sysOut);
    karate.set('sysErr', proc.sysErr);
    karate.set('exitCode', proc.exitCode);
  }
  """
* call command('cd ../testDirectory ; bash example.sh')
Note that instead of line, args (an array of command-line arguments) is also supported, so try that as well - e.g. something like:
karate.fork({ args: ['cd', 'foo;', 'bash', 'example.sh'] })
But yes, this may need some investigation. You can always put all the commands in a single batch (or shell) script, which should work.
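For what it's worth, the root cause is that only a shell interprets ; or &&; passed as separate argument tokens they are just literal strings. A rough plain-Groovy sketch of the idea (not Karate-specific code, just the underlying JVM process behaviour):

// ';' and '&&' are shell syntax. As separate argument tokens they are passed
// through literally, so the second command never runs:
// ['bash', 'example.sh', ';', 'echo', 'done'].execute()

// Handing the whole line to a shell as a single string (or putting both
// commands into one wrapper script and invoking that) does work:
def proc = ['sh', '-c', 'cd ../testDirectory && bash example.sh'].execute()
proc.waitFor()
println proc.text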
It would be good if you could try the 1.0 RC, since some improvements may have been added: https://github.com/intuit/karate/wiki/1.0-upgrade-guide
I've got a Nextflow process that looks like:
process my_app {
    publishDir "${outdir}/my_app", mode: params.publish_dir_mode

    input:
    path input_bam
    path input_bai
    val output_bam
    val max_mem
    val threads
    val container_home
    val outdir

    output:
    tuple env(output_prefix), path("${output_bam}"), path("${output_bam}.bai"), emit: tuple_ch

    shell:
    '''
    my_script.sh \
        !{input_bam} \
        !{output_bam} \
        !{max_mem} \
        !{threads}

    output_prefix=$(echo !{output_bam} | sed "s#.bam##")
    '''
}
This process only outputs the two .bam and .bai files, but my_script.sh also creates other .vcf files that are not being published in the output directory.
I tried the following in order to retrieve the files created by the script, but without success:
output:
tuple env(output_prefix), path("${output_bam}"), path("${output_bam}.bai"), path("${output_prefix}.*.vcf"), emit: mt_validation_simulation_tuple_ch
but in logs I can see:
Error executing process caused by:
Missing output file(s) `null.*.vcf` expected by process `my_app_wf:my_app`
What am I missing? Could you help me? Thank you in advance!
The problem is that output_prefix is only defined inside the shell block. If all you need for your output prefix is the file's basename (without extension), you can just use a regular script block to check file attributes. Note that variables defined in the script block (but outside the command string) are global (within the process scope) unless they're defined using the def keyword:
process my_app {
    ...

    output:
    tuple val(output_prefix), path("${output_bam}{,.bai}"), path("${output_prefix}.*.vcf")

    script:
    output_prefix = output_bam.baseName

    """
    my_script.sh \\
        "${input_bam}" \\
        "${output_bam}" \\
        "${max_mem}" \\
        "${threads}"
    """
}
If the process creates the BAM (and index) it might even be possible to refactor away the multiple input channels if an output prefix can be supplied up front. Usually this makes more sense, but I don't have enough details to say one way or the other. The following might suffice as an example; you may need/prefer to combine/change the output declaration(s) to suit, but hopefully you get the idea:
params.publish_dir = './results'
params.publish_mode = 'copy'

process my_app {
    publishDir "${params.publish_dir}/my_app", mode: params.publish_mode

    cpus 1
    memory 1.GB

    input:
    tuple val(prefix), path(indexed_bam)

    output:
    tuple val(prefix), path("${prefix}.bam{,.bai}"), emit: bam_files
    tuple val(prefix), path("${prefix}.*.vcf"), emit: vcf_files

    """
    my_script.sh \\
        "${indexed_bam.first()}" \\
        "${prefix}.bam" \\
        "${task.memory.toGiga()}G" \\
        "${task.cpus}"
    """
}
Note that the indexed_bam expects a tuple in the form: tuple(bam, bai)
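For instance (hypothetical file locations and naming), the input channel could be wired up with a map that pairs each BAM with its index and derives a prefix from the base name:

workflow {
    // Hypothetical wiring: each emitted item looks like
    //   [ 'sample1.out', [ /data/sample1.bam, /data/sample1.bam.bai ] ]
    // A suffix is appended to the prefix so that the BAM written by my_script.sh
    // ("${prefix}.bam") does not clash with the staged input file name.
    indexed_bam_ch = Channel
        .fromPath('data/*.bam')
        .map { bam -> [ "${bam.baseName}.out", [ bam, file("${bam}.bai") ] ] }

    my_app(indexed_bam_ch)
}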
I have a process that generates a value. I want to forward this value into a value output channel, but I cannot seem to get it working in one "go" - I always have to write the value to an output file and then define a new channel from the first:
process calculate {
    input:
    file div from json_ch.collect()
    path "metadata.csv" from meta_ch

    output:
    file "dir/file.txt" into inter_ch

    script:
    """
    echo ${div} > alljsons.txt
    mkdir dir
    python3 $baseDir/scripts/calculate.py alljsons.txt metadata.csv dir/
    """
}
ch = inter_ch.map{file(it).text}
ch.view()
How do I fix this?
Thanks!
Best, t.
If your script performs a non-trivial calculation, writing the result to a file like you've done is absolutely fine - there's nothing really wrong with this approach. However, since the inter_ch channel already emits files (or paths), you could simply use:
ch = inter_ch.map { it.text }
It's not entirely clear what the objective is here. If the desire is to reduce the number of channels created, consider instead switching to the new DSL 2. This won't let you avoid writing your calculated result to a file, but it might mean you can avoid an intermediary channel, potentially.
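For illustration, here's a rough DSL 2 sketch of the above (the input file locations are hypothetical). The value is still written to a file, but map can be applied directly to the process output, so no separately named intermediary channel is required:

nextflow.enable.dsl = 2

process calculate {
    input:
    path div
    path 'metadata.csv'

    output:
    path 'dir/file.txt'

    script:
    """
    echo ${div} > alljsons.txt
    mkdir dir
    python3 $baseDir/scripts/calculate.py alljsons.txt metadata.csv dir/
    """
}

workflow {
    json_ch = Channel.fromPath('data/*.json')        // hypothetical inputs
    meta_ch = Channel.fromPath('data/metadata.csv')  // hypothetical inputs

    calculate( json_ch.collect(), meta_ch )

    // map straight off the process output - no intermediary channel needed
    calculate.out.map { it.text }.view()
}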
On the other hand, if your Python script actually does something rather trivial and can be refactored away, it might be possible to assign a (global) variable (below the script: keyword) such that it can be referenced in your output declaration, like the line x = ... in the example below:
Valid output values are value literals, input value identifiers, variables accessible in the process scope and value expressions. For example:
process foo {
    input:
    file fasta from 'dummy'

    output:
    val x into var_channel
    val 'BB11' into str_channel
    val "${fasta.baseName}.out" into exp_channel

    script:
    x = fasta.name

    """
    cat $x > file
    """
}
Other than that, your options are limited. You might have considered using the env output qualifier, but this just adds some syntactic sugar to your shell script at runtime, such that an output file is still created:
Contents of test.nf:
process test {
    output:
    env myval into out_ch

    script:
    '''
    myval=$(calc.py)
    '''
}
out_ch.view()
Contents of bin/calc.py (chmod +x):
#!/usr/bin/env python
print('foobarbaz')
Run with:
$ nextflow run test.nf
N E X T F L O W ~ version 21.04.3
Launching `test.nf` [magical_bassi] - revision: ba61633d9d
executor > local (1)
[bf/48815a] process > test [100%] 1 of 1 ✔
foobarbaz
$ cat work/bf/48815aeefecdac110ef464928f0471/.command.sh
#!/bin/bash -ue
myval=$(calc.py)
# capture process environment
set +u
echo myval=$myval > .command.env
Based on:
Groovy executing shell commands
I have this groovy script:
def proc = "some bash command".execute()
//proc.out.close() // hm does not seem to be needed...
proc.waitFor()

if (proc.exitValue()) {
    def errorMsg = proc.getErrorStream().text
    println "[ERROR] $errorMsg"
} else {
    println proc.text
}
I use this to execute various Linux bash commands. Currently it works fine even without the proc.out.close() statement.
What is the purpose of proc.out.close(), and why is it (not?) needed?
proc.text is actually proc.getText().
From the Groovy API doc: "Read the text of the output stream of the Process. Closes all the streams associated with the process after retrieving the text."
http://docs.groovy-lang.org/docs/latest/html/groovy-jdk/java/lang/Process.html#getText()
So, when using proc.text you don't need to call proc.out.close().
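For completeness, proc.out is the standard input of the launched process (as seen from Groovy), and closing it is what signals end-of-input. It only matters when the command actually reads from stdin; a small illustration, assuming a Unix-like cat is on the PATH:

// 'cat' copies its stdin to stdout, so it only exits once stdin is closed.
def proc = "cat".execute()
proc.out << "hello from groovy\n"  // proc.out writes to the process's stdin
proc.out.close()                   // signal EOF; without this, cat (and waitFor) would block
proc.waitFor()
println proc.text                  // prints: hello from groovy

For a command that never reads its stdin, like the one in the question, the close is harmless but unnecessary, which matches the behaviour observed.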
I am beginning to think my search skills are lacking.
I am trying to find any articles on how, with Groovy, to open an interactive process, read its output, and then write to the process depending on the output text. All I can find is about printing, reading, and writing files; nothing about how to write to an interactive process.
The process asks for a password
Write the password to the process
Something like this if possible:
def process = "some-command.sh".execute()
process.in.eachLine { line ->
    if (line.contains("enter password")) {
        process.out.write("myPassword")
    }
}
This works for reading from the process output:
def process = "some-command.sh".execute()
process.in.eachLine { line ->
    println line
}
Though it stops when the process asks for input, and it does not print out the line with the question.
Edit: I found out why it did not print the line asking for the password: it was not a complete line. The question is written with a simple print (not println). How do I read when there is not yet a newline?
I have been told expect can be used, but I am looking for a solution which does not require a dependency.
1.bat
#echo off
echo gogogo
set /P V=input me:
echo V=%V%
This script waits for input just after:
gogogo
input me:
This means that eachLine is not triggered for input me because there is no newline after it; however, the previous line gogogo can be caught.
So the following script works for gogogo but does not work for input me:
groovy
def process = "1.bat".execute()
process.in.eachLine { line ->
    if (line.contains("gogogo")) {
        process.out.write("myPassword\n".getBytes("UTF-8"))
        process.out.flush()
    }
}
groovy2
Probably this could be optimized, but the following script works without a newline:
def process = "1.bat".execute()
def pout = new ByteArrayOutputStream()
def perr = new ByteArrayOutputStream()
process.consumeProcessOutput(pout, perr) // starts listening threads and returns immediately

while (process.isAlive()) {
    Thread.sleep(1234)
    if (pout.toString("UTF-8").endsWith("input me:")) {
        process.out.write("myPassword\n".getBytes("UTF-8"))
        process.out.flush()
    }
}
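If the polling loop feels clunky, a possible refinement (an untested sketch, not from the original answer) is to read standard output byte by byte, so the prompt is detected as soon as it appears even without a trailing newline:

def process = "1.bat".execute()
// Drain stderr in the background so a full pipe buffer cannot block the process.
process.consumeProcessErrorStream(new ByteArrayOutputStream())

def seen = new StringBuilder()
int c
while ((c = process.inputStream.read()) != -1) {  // blocks until a byte arrives or EOF
    print((char) c)
    seen.append((char) c)
    if (seen.toString().endsWith("input me:")) {
        process.out.write("myPassword\n".getBytes("UTF-8"))
        process.out.flush()
        seen.setLength(0)  // avoid matching the same prompt twice
    }
}
process.waitFor()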
I am using spawn to execute latexmk with child_process.spawn command, args, options, where command = 'latexmk' and the args array is:
["-interaction=nonstopmode", "-f", "-cd", "-pdf", "-synctex=1", "-file-line-error", "-shell-escape", "-xelatex", "-outdir=\"/Users/user/Documents/path with space/.auxdir\"", "\"/Users/user/Documents/path with space/main.tex\""]
The options just set some environment variables. I get the following from stderr:
stderr: mkdir ": Permission denied at /usr/texbin/latexmk line 1889.
latexmk seems to be searching for a path that contains the literal " character. The quotes seemed necessary in case of whitespace in the path. What can I do to solve this?
EDIT:
To be clear, I need to populate outdir and the filePath like so:
args.push("-outdir=\"#{outdir}\"")
args.push("\"#{filePath}\"")
The problem is that escaping the quotes this way leaves the quote characters as literal text in the argument strings.
For a given program, I needed to handle two cases: one where it returns a buffer and another where it returns a stream. From the Node.js documentation, exec is used for the former while spawn is used for the latter. I wanted to share the same arguments, except for maybe an extra arg_i thrown in there for spawn. The issue I faced was finding a solution that handled whitespace. The following resolves the issue:
args= ["arg1", "arg2", "...", "arg_i", "...", "arg_n", "path /with /space"]
For exec:
escapedArgs = (item.replace(/ /g, '\\ ') for item in args)  # escape every space, not just the first
command = "program #{escapedArgs.join(' ')}"
proc = child_process.exec command, options, (error, stdout, stderr) ->
For spawn, args does not need to be modified, since spawn passes each argument directly to the program without going through a shell:
proc = child_process.spawn 'program', args, options