I am calling a Java binary in a Unix environment, wrapped inside a Python script.
When I call the script from bash, the output comes out clean and is stored in the desired variable. However, when I run the same script from cron, the output stored in the variable is incomplete.
My code:
command = '/opt/HP/BSM/PMDB/bin/abcAdminUtil -abort -streamId ETL_' \
          'SystemManagement_PA#Fact_SCOPE_OVPAGlobal'
proc = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
(output, err) = proc.communicate()  # storing the command's stdout in `output`
Value of the output variable when run from a shell:
Abort cmd output:PID:8717
Executing abort function
hibernateConfigurationFile = /OBRHA/HPE-OBR/PMDB/lib/hibernate-core-4.3.8.Final.jar
Starting to Abort Stream ETL_SystemManagement_PA#Fact_SCOPE_OVPAGlobal
Aborting StreamETL_SystemManagement_PA#Fact_SCOPE_OVPAGlobal
Value of the output variable when run from cron:
PID:830
It seems the output produced after the new process is created is not being stored in the variable. I don't know why.
Kintul.
Your question seems to be very similar to this one: Capture stdout stderr of python subprocess, when it runs from cron or rc.local
See if that helps you.
This happened because the Java utility was throwing an exception, which is not surfaced by subprocess.Popen.
The exception is, however, reported by subprocess.check_output.
Updated code:
try:
    output = subprocess.check_output(command, shell=True, stderr=subprocess.STDOUT, stdin=subprocess.PIPE)
except subprocess.CalledProcessError as exc:
    print("Status : FAIL", exc.returncode, exc.output)
else:
    print("Output of Resume cmd: \n{}\n".format(output))
    file.write("Output of Resume cmd: \n{}\n".format(output) + "\n")
Output of code:
('Status : FAIL', -11, 'PID:37319\n')
('Status : FAIL', -11, 'PID:37320\n')
Hence, the command is failing with an error that is reported by subprocess.check_output but not by subprocess.Popen.
Extract from the official documentation of subprocess.check_output:
If the return code was non-zero it raises a CalledProcessError. The CalledProcessError object will have the return code in the returncode attribute and any output in the output attribute.
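An aside, not from the original answer but consistent with the subprocess docs: a negative returncode such as the -11 above means the child was killed by a signal (signal 11 is SIGSEGV on Linux), which would also explain the truncated output. A minimal sketch of how to report that, reusing the command variable from above:
import signal
import subprocess

try:
    output = subprocess.check_output(command, shell=True, stderr=subprocess.STDOUT)
except subprocess.CalledProcessError as exc:
    if exc.returncode < 0:
        # Negative return codes mean the child was terminated by a signal.
        print("Killed by signal:", signal.Signals(-exc.returncode).name)  # SIGSEGV for -11
    else:
        print("Exited with status:", exc.returncode)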
Related
While trying to integrate sqlmap with my automation tool, I run the command and save its output into a file. The line at which the user's input is required is not printed on the console until after the arguments have been passed. I need the console output to be printed in both places (the terminal and the output file). Since sqlmap requires the user's input during execution, I cannot use subprocess.check_output().
Code snippet:
try:
    cmd = cmd.rstrip()
    process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        out = process.stdout.readline()[0:]
        if out == '' and process.poll() is not None:
            break
        if out:
            output += out
            print(out.strip())
except Exception as e:
    exception_message = str(e)
    output += exception_message
    if 'exit status 1' not in exception_message:
        self.utilities.print_color("[!] Error executing the command: " + cmd, 'red')
output += '\r\n'
print(output)
return output
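A minimal sketch of one possible workaround, assuming the missing line is an interactive prompt with no trailing newline: readline() blocks until it sees '\n', so reading unbuffered single bytes shows the prompt as soon as it arrives (sqlmap may still buffer its own output when attached to a pipe; cmd is the sqlmap command line from above):
import subprocess
import sys

process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
                           stderr=subprocess.STDOUT, bufsize=0)
output = b''
while True:
    chunk = process.stdout.read(1)      # one byte at a time, no line buffering
    if not chunk and process.poll() is not None:
        break
    if chunk:
        output += chunk                 # keep a copy for the output file
        sys.stdout.buffer.write(chunk)  # echo to the terminal immediately
        sys.stdout.buffer.flush()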
I have two files in Heroku: app.py and notify.py.
When I run app.py, in some cases I'd like to run notify.py, so I use subprocess.Popen to call notify.py. I want the notify.py process to keep running in a loop even after the main process ends.
How can I do that?
By the way, I get an error like this:
/bin/sh: 1: ./notify.py: Permission denied
I used some methods like the ones below:
child = subprocess.Popen(['./notify.py'], shell=True, cwd='/app', stdin=None, stdout=subprocess.PIPE)
child = subprocess.call(['./notify.py'], shell=True, cwd='/app', stdin=None, stdout=subprocess.PIPE)
but neither works.
So I tried another approach: I made a function notify() in app.py and changed the code like below:
child = subprocess.Popen([notify(text)], shell=True)
It works, but it runs notify(text) first, then prints "test" and returns.
Please tell me why, and how I can solve this problem.
if event.message.text[0:6] == 'push**':
    t = event.message.text
    text = t.split('**')
    child = subprocess.Popen(['./notify.py'], shell=True, cwd='/app', stdin=None, stdout=subprocess.PIPE)
    print('test')
    return 0
/bin/sh: 1: ./notify.py: Permission denied
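A minimal sketch of one likely fix, assuming the Permission denied comes from notify.py lacking the execute bit: launching it through the interpreter avoids needing ./notify.py to be executable, and dropping shell=True removes the extra /bin/sh layer that produced the error message:
import subprocess
import sys

# Run notify.py via the Python interpreter, so no execute permission is needed.
child = subprocess.Popen([sys.executable, 'notify.py'], cwd='/app')
# No child.wait() here: the parent may exit while notify.py keeps looping,
# though on Heroku every process in the dyno ends when the dyno shuts down.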
#!/usr/bin/env python3
import subprocess
import os
if False:
    # create log file.
    kfd = os.open('kk.log', os.O_WRONLY)
    # redirect stdout & err to a log file.
    os.close(1)
    os.dup(kfd)
    os.close(2)
    os.dup(kfd)

subprocess.run(["echo", "hello world"], check=True)
% ./kk.py
hello world
%
The above works fine, but if you edit the file and replace False with True:
% ./kk.py
% more kk.log
Traceback (most recent call last):
File "./kk.py", line 16, in <module>
subprocess.run([ "echo", "hello world"], check=True )
File "/usr/lib/python3.6/subprocess.py", line 418, in run
output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['echo', 'hello world']' returned
non-zero exit status 1.
%
We don't get the output, and the process exits with an error...
I would have expected it to just work, writing to kk.log.
You probably want to say something like this instead:
#!/usr/bin/env python3
import subprocess
import os
if True:
    # create log file.
    kfd = os.open('kk.log', os.O_WRONLY | os.O_CREAT)
    # redirect stdout & err to a log file.
    os.dup2(kfd, 1)
    os.dup2(kfd, 2)

subprocess.run(["echo", "hello world"], check=True)
Notice the use of os.dup2. There are two reasons for that. First and foremost, the resulting file descriptors are inheritable. Your echo actually had no open stdout/stderr and hence failed. (You can run the following in a shell to invoke echo with stdout closed and observe the behaviour: /bin/echo hello world 1>&-.) Also note that it does not always hold that if you closed stdout (1), the lowest free descriptor (and hence the result of os.dup) is 1: someone could have closed your stdin (0) before running the script (the same goes for stderr).
The background story on file descriptor inheritance is in PEP 446.
I've also added os.O_CREAT, since my first failure when trying to reproduce your problem was a non-existent kk.log.
Needless to say, unless you are deliberately playing with the interfaces into the OS (perhaps as an exercise), you should normally stick to subprocess itself.
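For completeness, a minimal sketch of that idiomatic route (same effect as above, assuming kk.log may be truncated): subprocess can wire up the redirection itself, with no manual descriptor juggling:
#!/usr/bin/env python3
import subprocess

# Let subprocess connect the child's stdout/stderr straight to the log file.
with open('kk.log', 'w') as log:
    subprocess.run(["echo", "hello world"], check=True,
                   stdout=log, stderr=subprocess.STDOUT)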
I have a Python program which should execute a command line (the command line is a psexec command that calls a batch file on a remote server).
I used Popen to call the command line. The batch file on the remote server produces a return code of 0.
Now I have to wait for this return code, and on the basis of the return code I should continue my program execution.
I tried to use .wait() and .check_output(), but for some reason they did not work for me.
cmd = """psexec -u CORPORATE\user1 -p force \\\sgeinteg27 -s cmd /c "C:\\Planview\\Interfaces\\ProjectPlace_Sree\\PP_Run.bat" """
p = subprocess.Popen(cmd, bufsize=2048, shell=True,
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p.wait()
print(p.returncode)

## The below block should wait until the above command runs completely,
## and depending on the return code being ZERO I should continue the rest
## of the execution.
if p.returncode == 0:
    result = tr.test_readXid.readQuery(cert, planning_code)
    print("This is printed depending if the return code is zero")
Here is the end of the batch file execution and the return code:
Can anybody help me with this?
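A hedged guess at the failure mode, since the subprocess docs warn that wait() with stdout=PIPE can deadlock once the pipe buffer fills: communicate() drains the pipes and waits for the process to exit in one call. A minimal sketch, reusing cmd from above:
import subprocess

p = subprocess.Popen(cmd, shell=True,
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE)
stdout, _ = p.communicate()   # reads all output, then waits: no pipe deadlock
print(p.returncode)           # psexec forwards the remote command's exit code
if p.returncode == 0:
    print("Safe to continue with the rest of the execution")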
Through a command line (/bin/sh) on an Ubuntu system, I executed a Python 3 script that uses multiprocessing.Process() to start another Python 3 script. I got the error message below:
collier@Nacho-Laptop:/media/collier/BETABOTS/Neobot$ ./Betabot  # THE SECOND SCRIPT NEVER EXECUTES
/bin/sh: 1: Syntax error: "(" unexpected (expecting "}")
Traceback (most recent call last):
File "./Betabot", line 26, in <module>
JOB_CONFIG = multiprocessing.Process(os.system('./conf/set_data.py3'))
File "/usr/lib/python3.3/multiprocessing/process.py", line 72, in __init__
assert group is None, 'group argument must be None for now'
AssertionError: group argument must be None for now
# TESTING THE SECOND SCRIPT BY ITSELF IN TWO WAYS (both work)
collier@Nacho-Laptop:/media/collier/BETABOTS/Neobot$ python3 -c "import os; os.system('./conf/set_data.py3')"  # WORKS
collier@Nacho-Laptop:/media/collier/BETABOTS/Neobot$ ./conf/set_data.py3  # WORKS
The question is: why is this not working? It should start the second script, and both should continue executing without issues.
I made edits to the code trying to solve the issue. The error is now on line 13. The same error occurs on line 12, "JOB_CONFIG = multiprocessing.Process(os.system('date')); JOB_CONFIG.start()", which I used as a testing line. I changed line 12 to just "os.system('date')" and that works, so the error lies in the multiprocessing call.
#!/usr/bin/env python3
import os, subprocess, multiprocessing
def write2file(openfile, WRITE):
    with open(openfile, 'w') as file:
        file.write(str(WRITE))
writetofile = writefile = filewrite = writer = filewriter = write2file

global BOTNAME, BOTINIT
BOTNAME = subprocess.getoutput('cat ./conf/startup.xml | grep -E -i -e \'<property name=\"botname\" value\' | ssed -r -e "s|<property name=\"botname\" value=\"(.*)\"/>|\1|gI"')
BOTINIT = os.getpid()

###Setup science information under ./mem/###
JOB_CONFIG = multiprocessing.Process(os.system('date')); JOB_CONFIG.start()
JOB_CONFIG = multiprocessing.Process(os.system('./conf/set_data.py3')); JOB_CONFIG.start()

###START###
write2file('./mem/BOTINIT_PID', BOTINIT); write2file('./mem/tty', os.ctermid()); write2file('./mem/SERVER_PID', BOTINIT)
JOB_EMOTION = multiprocessing.Process(os.system('./lib/emoterm -T Emotion -e ./lib/Emotion_System')); JOB_EMOTION.start()
JOB_SENSORY = multiprocessing.Process(os.system('./lib/Sensory_System')); JOB_SENSORY.start()
print(BOTNAME + ' is starting'); JOB_CONFIG.join()
try:
    os.system('./lib/neoterm -T' + BOTNAME + ' -e ./lib/beta_engine')
except:
    print('There seems to be an error.'); JOB_EMOTION.join(); JOB_SENSORY.join(); exit()
JOB_EMOTION.join(); JOB_SENSORY.join(); exit()
To start a Python 3 script from another Python 3 script so that it runs while the main script continues, a command like this must be used:
JOB_CONFIG = subprocess.Popen([sys.executable, './conf/set_data.py3'])
The filename string is the script to run. It is saved to a variable so that the process can be manipulated later; for instance, I could call JOB_CONFIG.wait() when the main script should wait for the other script to finish.
As for the /bin/sh syntax error in the first line of the error output, that is due to a syntax error in the first subprocess command used (the shell pipeline passed to subprocess.getoutput).
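A short sketch tying this together. The diagnosis follows from the traceback: os.system(...) runs immediately and its integer exit status becomes Process's first positional argument group, hence the "group argument must be None" assertion. The multiprocessing form below is a hypothetical fix for the original intent; the subprocess form is the one recommended above:
#!/usr/bin/env python3
import multiprocessing
import os
import subprocess
import sys

# Broken original form: os.system() executes right here, and its integer
# exit status is passed to Process as the `group` argument:
# JOB_CONFIG = multiprocessing.Process(os.system('./conf/set_data.py3'))

# Working multiprocessing form (hypothetical fix): pass the callable and
# its arguments separately, so the child process runs them itself.
JOB_CONFIG = multiprocessing.Process(target=os.system, args=('./conf/set_data.py3',))
JOB_CONFIG.start()
JOB_CONFIG.join()

# Working subprocess form, as recommended above.
JOB_CONFIG = subprocess.Popen([sys.executable, './conf/set_data.py3'])
JOB_CONFIG.wait()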