Lua script unable to detect/catch error while executing invalid Linux command

I have the following function that works fine as long as I give it a valid command to execute. As soon as I give it a non-existent command, the script is interrupted with an error message.
#!/usr/bin/lua
function exec_com(com)
    local ok,res=pcall(function() return io.popen(com) end)
    if ok then
        local tmp=res:read('*a')
        res:close()
        return ok,tmp
    else
        return ok,res
    end
end
local st,val=exec_com('uptime')
print('Executed "uptime" with status:'..tostring(st)..' and value:'..val)
st,val=exec_com('zzzz')
print('Executed "zzzz" with status:'..tostring(st)..' and value:'..val)
When I run the script above I get the following output:
Executed "uptime" with status:true and value: 18:07:38 up 1 day, 23:00, 3 users, load average: 0.37, 0.20, 0.20
sh: zzzz: command not found
Executed "zzzz" with status:true and value:
You can clearly see above that pcall() still reported success when executing "zzzz", which is odd.
Can someone help me devise a way to catch an exception when executing a non-existent or ill-formed Linux command using Lua script? Thanks.
Edit: Restated my request after getting the clarification that pcall() works as expected, and the problem is due to io.popen() not raising an error.

I use a method which is similar to your "temporary workaround" but which gives you more information:
local cmd = "uptime"
local f = io.popen(cmd .. " 2>&1 || echo ::ERROR::", "r")
local text = f:read "*a"
if text:find "::ERROR::" then
    -- something went wrong
    print("error: " .. text)
else
    -- all is fine!!
    print(text)
end

If you look at io.popen(), you'll see that it'll always return a file handle.
Starts program prog in a separated process and returns a file handle
that you can use to read data from this program (if mode is "r", the
default) or to write data to this program (if mode is "w").
Since the returned file handle is a valid Lua value, the anonymous function inside your pcall() returns normally and no error is raised or propagated; you therefore get a true status and no output.
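pcall() is therefore not the right tool here: the failure is only visible in the command's exit status. A minimal sketch of checking that status, assuming Lua 5.2 or later, where f:close() on a handle created by io.popen() returns the same values as os.execute() (a success flag, "exit" or "signal", and the exit code):

local f = io.popen('zzzz 2>/dev/null')
local out = f:read('*a')
local ok, how, code = f:close()   -- Lua 5.2+: exit status of the piped command
if not ok then
    print(('command failed: %s %d'):format(how, code))   -- e.g. "command failed: exit 127"
else
    print(out)
end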

I have come up with my own temporary workaround that redirects error output to /dev/null and determines the success/failure of the executed command based on the text received from io.popen():read('*a').
Here is my new code:
#!/usr/bin/lua
function exec_com(com)
    local res=io.popen(com..' 2>/dev/null')
    local tmp=res:read('*a')
    res:close()
    if string.len(tmp)>0 then
        return true,tmp
    else
        return false,'Error executing command: '..com
    end
end
local st,val=exec_com('uptime')
print('Executed "uptime" with status:'..tostring(st)..' and value:'..val)
st,val=exec_com('cat /etc/shadow')
print('Executed "cat /etc/shadow" with status:'..tostring(st)..' and value:'..val)
And the corresponding output is now correct:
Executed "uptime" with status:true and value: 00:10:11 up 2 days, 5:02, 3 users, load average: 0.01, 0.05, 0.19
Executed "cat /etc/shadow" with status:false and value:Error executing command: cat /etc/shadow
In my example above I am creating a "generic" error description. This is an intermediate fix and I am still interested in seeing alternative solutions that can return a more meaningful error message describing why the command failed to execute.
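One possible refinement, sketched under the assumption that io.popen() runs the command through the shell (it uses /bin/sh): redirect stderr into the captured output and append the shell's exit status as a sentinel line, so the shell's own message ("sh: zzzz: command not found") becomes the error description. The ::RC= marker below is an arbitrary sentinel invented for this sketch; the approach does not rely on Lua 5.2's close() status, so it should also work on Lua 5.1.

function exec_com(com)
    local f = io.popen(com .. ' 2>&1; echo "::RC=$?"')
    local out = f:read('*a')
    f:close()
    local rc = tonumber(out:match('::RC=(%d+)%s*$'))     -- exit status echoed by the shell
    out = out:gsub('::RC=%d+%s*$', '')                   -- strip the sentinel line
    if rc == 0 then
        return true, out
    else
        return false, 'Error executing command: '..com..' (exit code '..rc..'): '..out
    end
end

With this, exec_com('zzzz') returns false plus the shell's own diagnostic instead of a generic message.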

Rather than taking the time to read the whole file into a variable, why not just check whether the file is empty with f:read(0)?
local f = io.popen("NotExist")
if f:read(0) then
    for l in f:lines() do
        print(l)
    end
else
    error("Command Does Not Exist")
end
From the Lua manual:
As a special case, io.read(0) works as a test for end of file: it returns an empty string if there is more to be read, or nil otherwise.
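As a follow-up usage sketch, the same test can be combined with the stderr redirect used earlier in the thread, so the shell's "command not found" message does not leak to the terminal:

local f = io.popen("NotExist 2>/dev/null")
if f:read(0) then
    for l in f:lines() do
        print(l)
    end
else
    print("command produced no output (missing or failed?)")
end
f:close()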


How to forward the value of a variable created in the script in Nextflow to a value output channel?

I have a process that generates a value. I want to forward this value into a value output channel, but I cannot seem to get it working in one "go": I always have to write a file to the output and then define a new channel from the first:
process calculate {
    input:
    file div from json_ch.collect()
    path "metadata.csv" from meta_ch

    output:
    file "dir/file.txt" into inter_ch

    script:
    """
    echo ${div} > alljsons.txt
    mkdir dir
    python3 $baseDir/scripts/calculate.py alljsons.txt metadata.csv dir/
    """
}

ch = inter_ch.map{file(it).text}
ch.view()
How do I fix this?
Thanks!
Best, t.
If your script performs a non-trivial calculation, writing the result to a file like you've done is absolutely fine - there's nothing really wrong with this approach. However, since the 'inter_ch' channel already emits files (or paths), you could simply use:
ch = inter_ch.map { it.text }
It's not entirely clear what the objective is here. If the desire is to reduce the number of channels created, consider instead switching to the new DSL 2. This won't let you avoid writing your calculated result to a file, but it might mean you can avoid an intermediary channel.
On the other hand, if your Python script actually does something rather trivial and can be refactored away, it might be possible to assign a (global) variable (below the script: keyword) such that it can be referenced in your output declaration, like the line x = ... in the example below:
Valid output values are value literals, input value identifiers, variables accessible in the process scope and value expressions. For example:
process foo {
    input:
    file fasta from 'dummy'

    output:
    val x into var_channel
    val 'BB11' into str_channel
    val "${fasta.baseName}.out" into exp_channel

    script:
    x = fasta.name
    """
    cat $x > file
    """
}
Other than that, your options are limited. You might have considered using the env output qualifier, but this just adds some syntactic sugar to your shell script at runtime, such that an output file is still created:
Contents of test.nf:
process test {
    output:
    env myval into out_ch

    script:
    '''
    myval=$(calc.py)
    '''
}

out_ch.view()
Contents of bin/calc.py (chmod +x):
#!/usr/bin/env python
print('foobarbaz')
Run with:
$ nextflow run test.nf
N E X T F L O W ~ version 21.04.3
Launching `test.nf` [magical_bassi] - revision: ba61633d9d
executor > local (1)
[bf/48815a] process > test [100%] 1 of 1 ✔
foobarbaz
$ cat work/bf/48815aeefecdac110ef464928f0471/.command.sh
#!/bin/bash -ue
myval=$(calc.py)
# capture process environment
set +u
echo myval=$myval > .command.env

subprocess.Popen does not return complete output when run through crontab

I am calling a Java binary in a Unix environment, wrapped inside a Python script.
When I call the script from bash, the output comes out clean and is stored in the desired variable. However, when I run the same script from cron, the output stored in the variable is incomplete.
My code:
import subprocess

command = '/opt/HP/BSM/PMDB/bin/abcAdminUtil -abort -streamId ETL_' \
          'SystemManagement_PA#Fact_SCOPE_OVPAGlobal'
proc = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
(output, err) = proc.communicate()  # storing output in the output variable
Value of output variable when running from shell:
Abort cmd output:PID:8717
Executing abort function
hibernateConfigurationFile = /OBRHA/HPE-OBR/PMDB/lib/hibernate-core-4.3.8.Final.jar
Starting to Abort Stream ETL_SystemManagement_PA#Fact_SCOPE_OVPAGlobal
Aborting StreamETL_SystemManagement_PA#Fact_SCOPE_OVPAGlobal
Value of output variable when running from cron:
PID:830
It seems the output produced after creating the new process is not being stored in the variable; I don't know why.
Kintul.
You question seems to be very similar to this one: Capture stdout stderr of python subprocess, when it runs from cron or rc.local
See if that helps you.
This happened because the Java utility was throwing an exception which was not being caught by subprocess.Popen.
However, the exception is caught by subprocess.check_output.
Updated code:
try:
    output = subprocess.check_output(command, shell=True, stderr=subprocess.STDOUT,
                                     stdin=subprocess.PIPE)
except subprocess.CalledProcessError as exc:
    print("Status : FAIL", exc.returncode, exc.output)
else:
    print("Output of Resume cmd: \n{}\n".format(output))
    file.write("Output of Resume cmd: \n{}\n".format(output) + "\n")
Output of code:
('Status : FAIL', -11, 'PID:37319\n')
('Status : FAIL', -11, 'PID:37320\n')
Hence, the exception the command throws is caught by subprocess.check_output but not by subprocess.Popen.
Extract from the official documentation of subprocess.check_output:
If the return code was non-zero it raises a CalledProcessError. The CalledProcessError object will have the return code in the returncode attribute and any output in the output attribute.

Docker API ContainerExecInspect cannot get correct exit code

I am using the Docker engine-api (github.com/docker/engine-api) to execute some commands.
I use client.ContainerExecCreate and then client.ContainerExecInspect to run my command and get its exit code (I run multiple commands in the same container, so the exit code obtained from ContainerInspect is useless for me).
This is the function I use to execute a command in a container:
http://pastebin.com/rTNVuv9T
But ContainerExecInspect sometimes returns wrong values, because it is sometimes called before the command has exited, and then it reports an exit code of zero, which is wrong.
I wrote a test case for it:
http://pastebin.com/PED1Rf4k
The result will not be 233, it will be 0.
I have set ExecConfig.Detach = true and ExecStartCheck.Detach = true, but it does not help.
Is there any way to wait until the command exit then get the exit code?
Addition: some of the commands I run are shell scripts rather than executables, so I think I need to prefix them with /bin/bash. And waiting for the container to exit is not what I want; I want to wait for the command to exit while the container keeps running.
I think I can now solve my problem.
The main point is that ContainerExecAttach exposes the hijacked connection, so I can tell whether the command has exited by reading from the connection until EOF.
There are a few things to set:
Set ExecConfig.AttachStdout to true.
Then read from the hijacked conn.
Here is some sample code:
atinfo, err := cli.ContainerExecAttach(ctx, execID, ec)
// error handling
defer atinfo.Close()
c := atinfo.Conn
one := make([]byte, 1)
_, err = c.Read(one)
if err == io.EOF {
    println("Connection closed")
}
This will wait until the command has finished executing.
ExecConfig is set to:
ec.Detach = false
ec.Tty = false
ec.AttachStdout = true

How to save a table to a file from Lua

I'm having trouble printing a table to a file with Lua (and I'm new to Lua).
Here's some code I found here to print the table:
function print_r ( t )
    local print_r_cache={}
    local function sub_print_r(t,indent)
        if (print_r_cache[tostring(t)]) then
            print(indent.."*"..tostring(t))
        else
            print_r_cache[tostring(t)]=true
            if (type(t)=="table") then
                for pos,val in pairs(t) do
                    if (type(val)=="table") then
                        print(indent.."["..pos.."] => "..tostring(t).." {")
                        sub_print_r(val,indent..string.rep(" ",string.len(pos)+8))
                        print(indent..string.rep(" ",string.len(pos)+6).."}")
                    elseif (type(val)=="string") then
                        print(indent.."["..pos..'] => "'..val..'"')
                    else
                        print(indent.."["..pos.."] => "..tostring(val))
                    end
                end
            else
                print(indent..tostring(t))
            end
        end
    end
    if (type(t)=="table") then
        print(tostring(t).." {")
        sub_print_r(t," ")
        print("}")
    else
        sub_print_r(t," ")
    end
    print()
end
I have no idea where the 'print' command goes to; I'm running this Lua code from within another program. What I would like to do is save the table to a .txt file. Here's what I've tried:
function savetxt ( t )
    local file = assert(io.open("C:\temp\test.txt", "w"))
    file:write(t)
    file:close()
end
Then in the print_r function I've changed every 'print' to 'savetxt'. This doesn't work; it doesn't seem to access the text file in any way. Can anyone suggest an alternative method?
I have a suspicion that this line is the problem;
local file = assert(io.open("C:\temp\test.txt", "w"))
Update:
I have tried the edit suggested by Diego Pino but still no success. I run this Lua script from another program (for which I don't have the source), so I'm not sure where the default directory of the output file might be (is there a method to get this programmatically?). Is it possible that, since this is called from another program, there's something blocking the output?
Update #2:
It seems like the problem is with this line:
local file = assert(io.open("C:\test\test2.txt", "w"))
I've tried changing it to "C:\temp\test2.text", but that didn't work. I'm pretty confident this line is the source of the error at this point. If I comment out any line after this (but leave this line in) then it still fails; if I comment out this line (and any following 'file' lines) then the code runs. What could be causing this error?
I have no idea where the 'print' command goes to,
print() writes to the program's standard output, while io.write() goes to the default output file, which you can change with io.output([file]); see the Lua manual for details on querying and changing the default output.
where do files get created if I don't specify the directory
Typically they will be created in the current working directory.
Your print_r function prints out a table to stdout. What you want is to print out the output of print_r to a file. Change the print_r function so instead of printing to stdout, it prints out to a file descriptor. Perhaps the easiest way to do that is to pass a file descriptor to print_r and overwrite the print function:
function print_r (t, fd)
    fd = fd or io.stdout
    local function print(str)
        str = str or ""
        fd:write(str.."\n")
    end
    ...
end
The rest of the print_r doesn't need any change.
Later, in savetxt, call print_r to print the table to a file:
function savetxt (t)
    local file = assert(io.open("C:\\temp\\test.txt", "w"))  -- backslashes must be escaped: "\t" is a tab character
    print_r(t, file)
    file:close()
end
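That escaping issue is also the likely cause of the error in Update #2 above: in a Lua string literal, \t is a tab character, so "C:\temp\test.txt" is not the path it looks like, io.open() fails, and assert() raises an error. A quick check (forward slashes are an alternative the Windows C runtime also accepts):

print("C:\temp\test.txt")    -- the two "\t" escapes become tabs: the path is mangled
print("C:\\temp\\test.txt")  -- prints C:\temp\test.txt
print("C:/temp/test.txt")    -- forward slashes also work with io.open on Windows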
require("json")
result = {
["ip"]="192.168.0.177",
["date"]="2018-1-21",
}
local test = assert(io.open("/tmp/abc.txt", "w"))
result = json.encode(result)
test:write(result)
test:close()
local test = io.open("/tmp/abc.txt", "r")
local readjson= test:read("*a")
local table =json.decode(readjson)
test:close()
print("ip: " .. table["ip"])
2. Another way: http://lua-users.org/wiki/SaveTableToFile
Save table to file: function table.save( tbl, filename )
Load table from file: function table.load( sfile )
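For reference, a minimal sketch of what such a save/load pair boils down to for a flat table (save_table/load_table are illustrative names, not the wiki's functions; the wiki version handles nested tables and cycles, while this one writes every key and value with %q, so everything comes back as a string):

-- Serialize a flat table as a Lua chunk and load it back with loadfile().
local function save_table(tbl, filename)
    local f = assert(io.open(filename, "w"))
    f:write("return {\n")
    for k, v in pairs(tbl) do
        f:write(string.format("  [%q] = %q,\n", tostring(k), tostring(v)))
    end
    f:write("}\n")
    f:close()
end

local function load_table(filename)
    local chunk = assert(loadfile(filename))  -- compiles the file as a chunk
    return chunk()                            -- running it returns the table
end

save_table({ ip = "192.168.0.177", date = "2018-1-21" }, "/tmp/abc.lua")
local t = load_table("/tmp/abc.lua")
print(t.ip)  --> 192.168.0.177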

ClearCase.ClearTool returns No view context available error

I am trying to run the following code, but I get #error 1 at the startview command and #error 2 at the desc command.
use Win32::OLE;
$ct = Win32::OLE->new('ClearCase.ClearTool') or die "Could not create ClearTool object\n";
$view = "ccadm01_UARK_DEV";
$output = $ct->CmdExec("pwv") or die("Cleartool returned error: ", Win32::OLE->LastError(), "\n");
print ("pwv \$output = $output\n");
# error 1 : cleartool return error 0
$output = $ct->CmdExec("startview ccadm01_UARK_DEV") or die("Cleartool returned error: ", Win32::OLE->LastError(), "\n");
$CWD = $view_dir;
print( "Current directory: $CWD\n");
# error 2: No view context available
$output = $ct->CmdExec("describe -fmt \"%[versions]Cp\" activity:USR0200004985\#\\Unix_PVOB") or die("Cleartool returned error: ", Win32::OLE->LastError(), "\n");
print ("desc \$output = $output\n");
For #error 1, I tried the same command from DOS and it works.
You need to make sure your $view is a valid dynamic view tag for cleartool startview to work.
(make sure to not use cleartool setview, as it spawns a subshell)
Also if it returns error 0, you can assume it has worked: CAL might return an "error", but status 0 should mean the command has been executed.
An error different from 0, though, means something went wrong.
And you need to cd into that view (/view/<viewTag> or m:\<viewTag>) for a cleartool descr to work.
That one, executed in the wrong folder, is supposed to fail, hence "error 2".
The OP Jirong Hu points in the comments to Using Perl with Rational ClearCase Automation Library (CAL) and this script as an example.
