Starting a Python script from within another - odd behavior - multithreading

From a command line (/bin/sh) on an Ubuntu system, I executed a Python3 script that uses multiprocessing.Process() to start another Python3 script. I got the error message below:
collier@Nacho-Laptop:/media/collier/BETABOTS/Neobot$ ./Betabot #THE SECOND SCRIPT NEVER EXECUTES
/bin/sh: 1: Syntax error: "(" unexpected (expecting "}")
Traceback (most recent call last):
  File "./Betabot", line 26, in <module>
    JOB_CONFIG = multiprocessing.Process(os.system('./conf/set_data.py3'))
  File "/usr/lib/python3.3/multiprocessing/process.py", line 72, in __init__
    assert group is None, 'group argument must be None for now'
AssertionError: group argument must be None for now

#TESTING THE SECOND SCRIPT BY ITSELF IN TWO WAYS (both work)
collier@Nacho-Laptop:/media/collier/BETABOTS/Neobot$ python3 -c "import os; os.system('./conf/set_data.py3')" #WORKS
collier@Nacho-Laptop:/media/collier/BETABOTS/Neobot$ ./conf/set_data.py3 #WORKS
The question is: why is this not working? It should start the second script, and both should continue executing without issues.
I made edits to the code trying to solve the issue. The error is now on line 13. The same error occurs on line 12 ("JOB_CONFIG = multiprocessing.Process(os.system('date')); JOB_CONFIG.start()"), which I used as a testing line. I changed line 12 to just "os.system('date')" and that works, so the error lies in the multiprocessing command.
#!/usr/bin/env python3
import os, subprocess, multiprocessing
def write2file(openfile, WRITE):
    with open(openfile, 'w') as file:
        file.write(str(WRITE))
writetofile = writefile = filewrite = writer = filewriter = write2file
global BOTNAME, BOTINIT
BOTNAME = subprocess.getoutput('cat ./conf/startup.xml | grep -E -i -e \'<property name=\"botname\" value\' | ssed -r -e "s|<property name=\"botname\" value=\"(.*)\"/>|\1|gI"')
BOTINIT = os.getpid()

###Setup science information under ./mem/###
JOB_CONFIG = multiprocessing.Process(os.system('date')); JOB_CONFIG.start()
JOB_CONFIG = multiprocessing.Process(os.system('./conf/set_data.py3')); JOB_CONFIG.start()
###START###
write2file('./mem/BOTINIT_PID', BOTINIT); write2file('./mem/tty', os.ctermid()); write2file('./mem/SERVER_PID', BOTINIT)
JOB_EMOTION = multiprocessing.Process(os.system('./lib/emoterm -T Emotion -e ./lib/Emotion_System')); JOB_EMOTION.start()
JOB_SENSORY = multiprocessing.Process(os.system('./lib/Sensory_System')); JOB_SENSORY.start()
print(BOTNAME + ' is starting'); JOB_CONFIG.join()
try:
    os.system('./lib/neoterm -T' + BOTNAME + ' -e ./lib/beta_engine')
except:
    print('There seems to be an error.'); JOB_EMOTION.join(); JOB_SENSORY.join(); exit()
JOB_EMOTION.join(); JOB_SENSORY.join(); exit()

When starting a Python3 script from another Python3 script, so that both continue executing, a command like this must be used:
JOB_CONFIG = subprocess.Popen([sys.executable, './conf/set_data.py3'])
The filename string is the script to run. The Popen object is saved to a variable so the process can be manipulated later; for instance, JOB_CONFIG.wait() makes the main script wait for the other script to finish.
The original code fails because multiprocessing.Process(os.system('...')) calls os.system() immediately and passes its integer return value as Process's first positional argument (group), which triggers the AssertionError. As for the /bin/sh syntax error in the first line of the error output, that is due to a syntax error in the first subprocess command used (the shell string passed to subprocess.getoutput()).
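For completeness, here is a minimal sketch of that approach (paths taken from the question; the commented-out multiprocessing variant is an alternative, not what the answer recommends):
#!/usr/bin/env python3
import subprocess
import sys

# Launch the second script without blocking this one.
JOB_CONFIG = subprocess.Popen([sys.executable, './conf/set_data.py3'])

# ... the main script keeps executing here ...

JOB_CONFIG.wait()  # block only where the child must have finished

# If multiprocessing is really wanted, pass a callable via target=
# instead of calling os.system() in the argument list:
#   import multiprocessing, os
#   JOB_CONFIG = multiprocessing.Process(target=os.system, args=('./conf/set_data.py3',))
#   JOB_CONFIG.start()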

Related

Is it possible to have an output feedback to an input in a single nextflow process?

I am trying to make a simple feedback loop in my Nextflow script. I am getting a weird error message that I do not know how to debug. My attempt is modeled after the Nextflow design pattern described here. I need a value calculated by a python3 script that operates on an image, and that value must be passed on to subsequent executions of the script. At this stage I just want to get the structure right by adding numbers, but I cannot get even that to work yet.
My script:
feedback_ch = Channel.create()
input_ch = Channel.from(feedback_ch)

process test {
    echo true

    input:
    val chan_a from Channel.from(1,2,3)
    val feedback_val from input_ch

    output:
    stdout output_val into feedback_ch

    shell:
    '''
    #!/usr/bin/python3
    new_val = !{chan_a} + !{feedback_val}
    print(new_val)
    '''
}
The error message I get
executor > local (1)
[cd/67768e] process > test (1) [100%] 1 of 1, failed: 1 ✘
Error executing process > 'test (1)'

Caused by:
  Process `test (1)` terminated with an error exit status (1)

Command executed:
  #!/usr/bin/python3
  new_val = 1 + DataflowQueue(queue=[])
  print(new_val)

Command exit status:
  1

Command output:
  (empty)

Command error:
  Traceback (most recent call last):
    File ".command.sh", line 3, in <module>
      new_val = 1 + DataflowQueue(queue=[])
  NameError: name 'DataflowQueue' is not defined

Work dir:
  /home/cv_proj/work/cd/67768e706f50d7675ae93645a0ce6e
Tip: you can replicate the issue by changing to the process work dir and entering the command `bash .command.run`
Anyone have any ideas?
The problem is that you are passing an empty DataflowQueue object through input_ch. Nextflow tries to execute it, substituting the variables into your Python code, resulting in:
#!/usr/bin/python3
new_val = 1 + DataflowQueue(queue=[])
print(new_val)
which is nonsense: you want a number there instead of DataflowQueue(queue=[]), don't you?
The second problem is that you don't have the channels mixed, which seems to be important in this pattern. Anyway, I fixed it to have a proof-of-concept working solution:
condition = { it.trim().toInteger() > 10 } // As your output is stdout, you need to trim() to get rid of newline. Then cast to Integer to compare.

feedback_ch = Channel.create()
input_ch = Channel.from(1,2,3).mix( feedback_ch.until(condition) ) // Mixing channel, so we have feedback

process test {
    input:
    val chan_a from input_ch

    output:
    stdout output_val into feedback_ch

    shell:
    var output_val_trimmed = chan_a.toString().trim()
    // I am using double quotes, so nextflow interpolates variable above.
    """
    #!/usr/bin/python3
    new_val = ${output_val_trimmed} + ${output_val_trimmed}
    print(new_val)
    """
}
I hope that at least sets you on the right track :)

subprocess.Popen does not return complete output when run through crontab

I am calling a Java binary in a Unix environment, wrapped inside a Python script.
When I call the script from bash, the output comes out clean and is stored in the desired variable. However, when I run the same script from cron, the output stored in the variable is incomplete.
My code:
command = '/opt/HP/BSM/PMDB/bin/abcAdminUtil -abort -streamId ETL_' \
          'SystemManagement_PA#Fact_SCOPE_OVPAGlobal'
proc = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
(output, err) = proc.communicate()  # Storing output in the output variable
Value of output variable when running from shell:
Abort cmd output:PID:8717
Executing abort function
hibernateConfigurationFile = /OBRHA/HPE-OBR/PMDB/lib/hibernate-core-4.3.8.Final.jar
Starting to Abort Stream ETL_SystemManagement_PA#Fact_SCOPE_OVPAGlobal
Aborting StreamETL_SystemManagement_PA#Fact_SCOPE_OVPAGlobal
Value of output variable when running from cron:
PID:830
It seems the output after creating the new process is not being stored in the variable; I don't know why.
Kintul.
Your question seems very similar to this one: Capture stdout stderr of python subprocess, when it runs from cron or rc.local
See if that helps you.
This happened because the Java utility was throwing an exception, which is not caught by subprocess.Popen.
The exception is, however, caught by subprocess.check_output.
Updated code:
try:
    output = subprocess.check_output(command, shell=True, stderr=subprocess.STDOUT, stdin=subprocess.PIPE)
except subprocess.CalledProcessError as exc:
    print("Status : FAIL", exc.returncode, exc.output)
else:
    print("Output of Resume cmd: \n{}\n".format(output))
    file.write("Output of Resume cmd: \n{}\n".format(output) + "\n")
Output of code:
('Status : FAIL', -11, 'PID:37319\n')
('Status : FAIL', -11, 'PID:37320\n')
Hence, the command is throwing an exception that is caught by subprocess.check_output but not by subprocess.Popen. (A return code of -11 means the child process was killed by signal 11, SIGSEGV.)
Extract from the official documentation of subprocess.check_output:
If the return code was non-zero it raises a CalledProcessError. The CalledProcessError object will have the return code in the returncode attribute and any output in the output attribute.
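To illustrate the difference, here is a minimal sketch using the standard Unix false command rather than the original utility:
import subprocess

# Popen.communicate() hands back whatever output there was and does
# NOT raise on a non-zero exit; you must inspect returncode yourself.
proc = subprocess.Popen(['false'], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
print(proc.returncode)  # -> 1, no exception raised

# check_output() raises CalledProcessError on a non-zero exit.
try:
    subprocess.check_output(['false'])
except subprocess.CalledProcessError as exc:
    print(exc.returncode, exc.output)  # -> 1 b''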

How to execute an exe file, supplying an input text file, and get the output in a variable in Python 3 on Windows?

I have an exe file mycode.exe at "D:\projFolder\mycode.exe" and an input text file in.txt at "D:\projFolder\in.txt".
I am writing a Python3 script which will execute this exe file with the supplied input text and compare the output.
To achieve this, I am just trying to execute the Windows command:
cmd> "D:\projFolder\mycode.exe" < "D:\projFolder\in.txt"
and want to save the result of the above command in a variable say, resultstdout, and later use it to compare with an expected output file out.txt.
My problem: how do I execute the Windows command "D:\projFolder\mycode.exe" < "D:\projFolder\in.txt" in a Python3 script?
I was previously working in Python2, where I achieved it as follows:
baseDirectory = "D:/projFolder"
(stat, consoleoutput) = subprocess.getstatusoutput(baseDirectory + "/mycode.exe" + " < " + contestDirectory + "/in.txt")
if stat == 0:
    pass  # Perform result comparison
else:
    pass  # Some exception while executing the command
However, I am not sure how to refactor the above code for Python3.
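For what it's worth, a minimal sketch of one Python3 way to do this (using subprocess.run, available since Python 3.5, with the paths from the question):
import subprocess

base_dir = r"D:\projFolder"

# Feed in.txt to mycode.exe's stdin and capture its console output.
with open(base_dir + r"\in.txt", "rb") as infile:
    proc = subprocess.run([base_dir + r"\mycode.exe"],
                          stdin=infile,
                          stdout=subprocess.PIPE,
                          stderr=subprocess.PIPE)

resultstdout = proc.stdout.decode()  # captured output as a string
if proc.returncode == 0:
    pass  # compare resultstdout with the expected out.txt here
Note that subprocess.getstatusoutput() still exists in Python3 as well; the list-of-arguments form above simply avoids going through the shell and its redirection syntax.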

python subprocess.run: if you redirect stdout/stderr, it error-exits instead of working

#!/usr/bin/env python3
import subprocess
import os

if False:
    # create log file.
    kfd = os.open('kk.log', os.O_WRONLY)
    # redirect stdout & err to a log file.
    os.close(1)
    os.dup(kfd)
    os.close(2)
    os.dup(kfd)

subprocess.run(["echo", "hello world"], check=True)
% ./kk.py
hello world
%
The above works fine, but if you edit the file and replace False with True:
% ./kk.py
% more kk.log
Traceback (most recent call last):
  File "./kk.py", line 16, in <module>
    subprocess.run([ "echo", "hello world"], check=True )
  File "/usr/lib/python3.6/subprocess.py", line 418, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['echo', 'hello world']' returned non-zero exit status 1.
%
We don't get the output, and the process error exits...
I would have expected it to just work, writing to kk.log.
You probably want to say something like this instead:
#!/usr/bin/env python3
import subprocess
import os

if True:
    # create log file.
    kfd = os.open('kk.log', os.O_WRONLY | os.O_CREAT)
    # redirect stdout & err to the log file.
    os.dup2(kfd, 1)
    os.dup2(kfd, 2)

subprocess.run(["echo", "hello world"], check=True)
Notice the use of os.dup2. There are two reasons for that. First and foremost, the resulting file descriptors are inheritable. Your echo actually had no open stdout/stderr and hence failed (you can run the following in a shell to invoke echo with stdout closed and observe the behavior: /bin/echo hello world 1>&-). Also note that it does not always hold that if you close stdout (1), the lowest free descriptor (and hence the result of os.dup) is 1; someone could have closed your stdin (0) before running the script (the same goes for stderr).
The background story on file descriptor inheritance is in PEP 446.
I've also added os.O_CREAT, since my first failure when trying to reproduce your problem was a non-existing kk.log.
Needless to say, unless you are playing with interfaces into the OS (perhaps as an exercise), you should normally stick to subprocess itself.
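For instance, a minimal sketch of the same redirection done with subprocess alone (no raw file-descriptor juggling), assuming the same kk.log:
#!/usr/bin/env python3
import subprocess

# Let subprocess wire the child's stdout/stderr to the log file.
with open('kk.log', 'w') as log:
    subprocess.run(["echo", "hello world"], check=True,
                   stdout=log, stderr=subprocess.STDOUT)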

problems with optparse and python 3.4

Since upgrading to Python 3.4.3, optparse doesn't appear to recognise command-line options. As a simple test I run this (from the optparse examples):
# test_optparse.py
def main():
    from optparse import OptionParser
    parser = OptionParser()
    parser.add_option("-f", "--file", dest="filename",
                      help="write report to FILE", metavar="FILE")
    parser.add_option("-q", "--quiet",
                      action="store_false", dest="verbose", default=True,
                      help="don't print status messages to stdout")
    (options, args) = parser.parse_args()
    print(options)

if __name__ == '__main__':
    main()
When I run test_optparse.py -f test I get
{'verbose': True, 'filename': None}
But running it within my IDE I get
{'filename': 'test', 'verbose': True}
I first noted this in a script where I concatenated a run command, for example:
run_cmd = 'python.exe ' + '<path to script>' + ' -q ' + '<query_name>'
res = os.system(run_cmd)
But when I displayed the run_cmd string, the interpreter showed it over two lines:
print(run_cmd)
'python.exe <path to script> -q '
' <query_name>'
So it may be that the command line is being fragmented by something and only the first section is being passed (hence no query name), so the invoked Python script fails with 'no query specified'.
I've changed all this to use subprocess.call to get around it, but it would be useful to have the run_query script available for command-line use as before. Any ideas or suggestions?
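For reference, a minimal sketch of the subprocess.call workaround (the angle-bracketed names are the question's own placeholders):
import subprocess
import sys

# Passing the arguments as a list sidesteps the splitting/quoting
# surprises that can creep in when concatenating one command string.
run_cmd = [sys.executable, '<path to script>', '-q', '<query_name>']
res = subprocess.call(run_cmd)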
