Spawn Argument Array Element with Quotes - node.js

I am using spawn to execute latexmk, calling child_process.spawn command, args, options where command = 'latexmk' and the args array is:
["-interaction=nonstopmode", "-f", "-cd", "-pdf", "-synctex=1", "-file-line-error", "-shell-escape", "-xelatex", "-outdir="/Users/user/Documents/path with space/.auxdir"", ""/Users/user/Documents/path with space/main.tex""]
The options just set some environment variables. I get the following from stderr:
stderr: mkdir ": Permission denied at /usr/texbin/latexmk line 1889.
latexmk seems to be treating the " as part of the path when it creates the output directory. The quotes are there to handle the whitespace in the path. What can I do to solve this?
EDIT:
To be clear, I need to populate outdir and the filePath like so:
args.push("-outdir=\"#{outdir}\"")
args.push("\"#{filePath}\"")
The problem is that escaping the quotes like this just leaves the literal quote characters in the argument strings.

For a given program, I needed to handle two cases: one where it returns a buffer and one where it returns a stream. From the Node.js documentation, exec is used for the former and spawn for the latter. I wanted to share the same arguments, except for maybe an extra arg_i thrown in for spawn. The issue I faced was finding a solution that handled whitespace. The following resolves the issue:
args= ["arg1", "arg2", "...", "arg_i", "...", "arg_n", "path /with /space"]
For exec:
# escape every space, since exec hands the whole command string to a shell
escapedArgs = (item.replace(/ /g, '\\ ') for item in args)
command = "program #{escapedArgs.join(' ')}"
proc = child_process.exec command, options, (error, stdout, stderr) ->
For spawn, args does not need to be modified:
proc = child_process.spawn command, args, options
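Putting it together for the original latexmk case: spawn passes each element of args to the program verbatim, with no shell in between, so the embedded quotes can simply be dropped. A minimal sketch in plain Node.js, using the paths from the question:

const { spawn } = require('child_process');

const outdir = '/Users/user/Documents/path with space/.auxdir';
const filePath = '/Users/user/Documents/path with space/main.tex';

const args = [
  '-interaction=nonstopmode', '-f', '-cd', '-pdf', '-synctex=1',
  '-file-line-error', '-shell-escape', '-xelatex',
  '-outdir=' + outdir,  // no escaped quotes around the path
  filePath              // likewise, just the bare path
];

// no shell is involved, so spaces in the paths are harmless
const proc = spawn('latexmk', args, { env: process.env });
proc.stderr.on('data', (chunk) => console.error(chunk.toString()));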

Related

Nextflow capture output file by partial pattern

I've got a Nextflow process that looks like:
process my_app {
    publishDir "${outdir}/my_app", mode: params.publish_dir_mode

    input:
    path input_bam
    path input_bai
    val output_bam
    val max_mem
    val threads
    val container_home
    val outdir

    output:
    tuple env(output_prefix), path("${output_bam}"), path("${output_bam}.bai"), emit: tuple_ch

    shell:
    '''
    my_script.sh \
        !{input_bam} \
        !{output_bam} \
        !{max_mem} \
        !{threads}

    output_prefix=$(echo !{output_bam} | sed "s#.bam##")
    '''
}
This process only captures the two .bam and .bai files, but my_script.sh also creates other .vcf files that are not being published to the output directory.
I tried the following in order to retrieve the extra files created by the script, but without success:
output:
tuple env(output_prefix), path("${output_bam}"), path("${output_bam}.bai"), path("${output_prefix}.*.vcf"), emit: mt_validation_simulation_tuple_ch
but in logs I can see:
Error executing process caused by:
Missing output file(s) `null.*.vcf` expected by process `my_app_wf:my_app`
What am I missing? Could you help me? Thank you in advance!
The problem is that the output_prefix has only been defined inside of the shell block. If all you need for your output prefix is the file's basename (without extension), you can just use a regular script block to check file attributes. Note that variables defined in the script block (but outside the command string) are global (within the process scope) unless they're defined using the def keyword:
process my_app {
    ...

    output:
    tuple val(output_prefix), path("${output_bam}{,.bai}"), path("${output_prefix}.*.vcf")

    script:
    output_prefix = output_bam.baseName

    """
    my_script.sh \\
        "${input_bam}" \\
        "${output_bam}" \\
        "${max_mem}" \\
        "${threads}"
    """
}
If the process creates the BAM (and index) it might even be possible to refactor away the multiple input channels if an output prefix can be supplied up front. Usually this makes more sense, but I don't have enough details to say one way or the other. The following might suffice as an example; you may need/prefer to combine/change the output declaration(s) to suit, but hopefully you get the idea:
params.publish_dir = './results'
params.publish_mode = 'copy'

process my_app {
    publishDir "${params.publish_dir}/my_app", mode: params.publish_mode

    cpus 1
    memory 1.GB

    input:
    tuple val(prefix), path(indexed_bam)

    output:
    tuple val(prefix), path("${prefix}.bam{,.bai}"), emit: bam_files
    tuple val(prefix), path("${prefix}.*.vcf"), emit: vcf_files

    """
    my_script.sh \\
        "${indexed_bam.first()}" \\
        "${prefix}.bam" \\
        "${task.memory.toGiga()}G" \\
        "${task.cpus}"
    """
}
Note that the indexed_bam expects a tuple in the form: tuple(bam, bai)
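As a sketch of how that input could be wired up, Channel.fromFilePairs can build the (prefix, [bam, bai]) tuples directly; the ./data glob below is only illustrative:

// minimal sketch, assuming the indexed BAMs live under ./data
workflow {
    indexed_bam_ch = Channel.fromFilePairs( './data/*.bam{,.bai}', checkIfExists: true )
    // emits items like: [ 'sample', [ 'sample.bam', 'sample.bam.bai' ] ]
    my_app( indexed_bam_ch )
}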

Executing multiple linux commands using karate.fork()

Is it possible to add multiple commands using karate.fork()? I tried adding the commands using ; or && separation but the second command doesn't seem to be getting executed.
I am trying to cd to a particular directory before executing bash on a shell script.
* def command =
  """
  function(line) {
    var proc = karate.fork({ redirectErrorStream: false, useShell: true, line: line });
    proc.waitSync();
    karate.set('sysOut', proc.sysOut);
    karate.set('sysErr', proc.sysErr);
    karate.set('exitCode', proc.exitCode);
  }
  """
* call command('cd ../testDirectory ; bash example.sh')
Note that instead of line, passing args as an array of command-line arguments is also supported, so try that as well, e.g. something like:
karate.fork({ args: ['cd', 'foo;', 'bash', 'example.sh'] })
But yes, this may need some investigation. You can always put all the commands in a single batch file (or shell script), which should work.
It would also be good if you can try the 1.0 RC, since some improvements may have been added: https://github.com/intuit/karate/wiki/1.0-upgrade-guide
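If you go the single-script route, a hedged sketch would be to wrap both commands in one file and fork only that (run.sh is an illustrative name; it would contain cd ../testDirectory followed by bash example.sh):

# run.sh (hypothetical) wraps: cd ../testDirectory && bash example.sh
* def proc = karate.fork({ useShell: true, line: 'bash run.sh' })
* proc.waitSync()
* def sysOut = proc.sysOut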

How do I run an external file in soapui and take the output and set it as header

I would like to run an external .bat file using a Groovy script in SoapUI. I would also like to use the output generated by the external file as the value of a request header.
Here is the script that I am using to run the bat file:
String line
def p = "cmd /c C:\\Script\\S1.bat".execute()
def bri = new BufferedReader (new InputStreamReader(p.getInputStream()))
while ((line = bri.readLine()) != null) {log.info line}
Here is the content of the bat file:
java -jar SignatureGen.jar -pRESOURCE -nRandomString -mGET -d/api/discussion-streams/metadata -teyJ0eXAiOiJKV1QiLCJhbGciOiJIUzUxMiJ9.eyJjbGllbnQiOiIxIiwicm9sZSI6IllGQURNSU4iLCJleHAiOjI3NTgzMjU2MDIsInRpIjo3MjAwNiwiaWF0IjoxNTU4MzI1NjAyLCJwZXJzb24iOiI1In0.bbci7ZBWmPsANN34Ris9H0-mosKF2JLTZ-530Rex2ut1kjCwprZr_196N-K1alFBH_A9pbG0MPspaDOnvOKOjA
The following code:
def p = "ls -la".execute()
def err = new StringBuffer()
def out = new StringBuffer()
p.waitForProcessOutput(out, err)
p.waitForOrKill(5000)
int ret = p.exitValue()
// optionally check the exit value and err for errors
println "ERR: $err"
println "OUT: $out"
// if you want to do something line based with the output
out.readLines().each { line ->
    println "LINE: $line"
}
is based on Linux, but translates to Windows by simply replacing ls -la with your bat file invocation, cmd /c C:\\Script\\S1.bat.
This executes the process, calls waitForProcessOutput to make sure the process doesn't block and that we are saving away the stdout and stderr streams of the process, and then waits for the process to finish using waitForOrKill.
After the waitForOrKill the process has either been terminated because it took too long, or it has completed normally. Whatever the case, the out variable will contain the output of the command. To figure out whether or not there was an error during bat file execution, you can inspect the ret and err variables.
I chose the waitForOrKill timeout at random; adjust it to fit your needs. You can also use waitFor without a timeout, which waits until the process completes, but it is generally better to set some timeout to make sure your command doesn't execute indefinitely.
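The question also asks how to use that output as a header value. One way, sketched under the assumption that this runs in a SoapUI Groovy test step (where testRunner is available) and that the property name signature is illustrative, is to stash the trimmed output in a test case property and reference it from the header definition:

// hedged sketch: store the captured output in a test case property,
// then use ${#TestCase#signature} as the header value in the request step
testRunner.testCase.setPropertyValue('signature', out.toString().trim())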

How to run a process that has an argument containing newlines?

I have a command with the following structure:
xrdcp "root://server/file?authz=ENVELOPE&Param1=Val1" local_file_path
The problem is that ENVELOPE is text that should be passed unquoted on the command line,
and it contains a lot of newlines.
I cannot use repr, as it would replace each newline with \n.
Moreover, subprocess seems to automatically apply repr to the items in the argument list.
In bash this command is usually run with
xrdcp "root://server/file?authz=$(<ENVELOPE)&Param1=Val1" local_file
So, is there a way to run a command while keeping the newlines in the arguments?
Thank you!
Later Edit:
My actual code is:
envelope = server['envelope']
complete_url = "\"" + server['url'] + "?" + "authz=" + "{}".format(server['envelope']) + xrdcp_args + "\""
xrd_copy_list = []
xrd_copy_list.extend(xrdcp_cmd_list)
xrd_copy_list.append(complete_url)
xrd_copy_list.append(dst_final_path_str)
xrd_job = subprocess.Popen(xrd_copy_list, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = xrd_job.communicate()
print(stdout)
print(stderr)
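For what it's worth, subprocess does not apply repr to list items: when you pass a list (and shell=False, the default), each element is handed to the program verbatim, newlines included, so the surrounding \" characters can simply be dropped. A hedged sketch reusing the variables from the snippet above:

import subprocess

# minimal sketch: list argv, no shell, so each element (newlines and all)
# reaches xrdcp exactly as built; no quote characters are needed
complete_url = server['url'] + '?authz=' + server['envelope'] + xrdcp_args
xrd_copy_list = list(xrdcp_cmd_list)  # e.g. ['xrdcp', ...] as before
xrd_copy_list += [complete_url, dst_final_path_str]
xrd_job = subprocess.Popen(xrd_copy_list, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = xrd_job.communicate()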

How do I get the output (STDOUT) from Cucumber::CLI::Main.execute into a variable

I am running a Cucumber script in JRuby 9.1.7.0. The output goes to STDOUT. How can I get it saved into a local variable?
require 'cucumber'
require 'stringio'
#output = StringIO.new
features = 'features/first.feature'
args = features.split.concat %w(-f html)
# Run cucumber
begin
  # output goes to STDOUT
  Cucumber::Cli::Main.new(args).execute!
rescue SystemExit
  puts "Cucumber calls #kernel.exit(), killing your script unless you rescue"
end
If you run cucumber --help on the command line, you'll see:
-o, --out [FILE|DIR] Write output to a file/directory instead of STDOUT. This option
applies to the previously specified --format, or the
default format if no format is specified. Check the specific
formatter's docs to see whether to pass a file or a dir.
You can modify your code with
args = features.split.concat %w(-f html -o test.html)
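With the -o approach, getting the output into a local variable is then just a matter of reading the file back once Cucumber has finished (test.html matches the -o argument above):

begin
  Cucumber::Cli::Main.new(args).execute!
rescue SystemExit
  # Cucumber may call Kernel#exit; rescue so the script can continue
end
output = File.read('test.html')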
You can also write the output to a tempfile and read the value back from the file:
require 'cucumber'
require 'tempfile'
require 'securerandom'
filename = "#{SecureRandom.urlsafe_base64}"
file = Tempfile.new(filename)
filepath = "#{file.path}"
features = "cucumber/ars/features/ars_additional.feature"
args = features.split.concat %w(-f html -o)
args << filepath
Cucumber::Cli::Main.new(args).execute!
#output = file.read
file.close
file.unlink
