problems with optparse and python 3.4 - python-3.x

Since upgrading to Python 3.4.3, optparse doesn't appear to recognise command line options. As a simple test I run this (from the optparse examples):
# test_optparse.py
def main():
    from optparse import OptionParser
    parser = OptionParser()
    parser.add_option("-f", "--file", dest="filename",
                      help="write report to FILE", metavar="FILE")
    parser.add_option("-q", "--quiet",
                      action="store_false", dest="verbose", default=True,
                      help="don't print status messages to stdout")
    (options, args) = parser.parse_args()
    print(options)

if __name__ == '__main__':
    main()
When I run test_optparse.py -f test I get
{'verbose': True, 'filename': None}
But running within my IDE I get
{'filename': 'test', 'verbose': True}
I first noted this in a script where I concatenated a run command, for example:
run_cmd = 'python.exe ' + '<path to script>' + ' -q ' + '<query_name>'
res = os.system(run_cmd)
But when I displayed the run_cmd string, it was shown in the interpreter over 2 lines:
print(run_cmd)
'python.exe <path to script> -q '
' <query_name>'
So it may be that the command line is being fragmented by something and only the first section is being passed (hence no query name), so the called Python script fails with 'no query specified'.
I've changed all this to use subprocess.call to get around it, but it would be useful to have the run_query script available for command line use as it was. Any ideas or suggestions?
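For reference, a minimal sketch of the subprocess.call approach mentioned above; the script path and query name are placeholders from the question, not real values:
import subprocess

# Passing the arguments as a list avoids any shell quoting/splitting of the command string.
run_cmd = ['python.exe', '<path to script>', '-q', '<query_name>']
res = subprocess.call(run_cmd)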

Related

Python subprocess Popen not outputting sqlplus error strings

When I run sqlplus from Python using subprocess, I get no output when there are SQL errors, or when update or insert statements return the number of rows updated or inserted. When I run select statements with no errors, I do get the output.
Here is my code:
This creates a string with newlines that is then written via process.stdin.write():
def write_sql_string(process, args):
    sql_commands = ''
    sql_commands += "WHENEVER SQLERROR EXIT SQL.SQLCODE;\n"
    sql_line = '@' + args.sql_file_name
    if crs_debug:
        print('DEBUG: ' + 'sys.argv', ' '.join(sys.argv))
    if len(args.sql_args) > 0:
        len_argv = len(args.sql_args)
        sql_commands += "SET VERIFY OFF\n"
        for i in range(0, len_argv):
            sql_line += ' "' + args.sql_args[i] + '"'
    sql_commands += sql_line + "\n"
    # if prod_env:
    sql_commands += "exit;\n"
    if crs_debug:
        print('DEBUG: ' + 'sql_line: ' + sql_line)
    process.stdin.write(sql_commands)
This code executes the SQL commands
def execute_sql_file(username, dbpass, args):
    db_conn_str = username + '/' + dbpass + '@' + args.dbname
    # '-S' - silent
    sqlplus_cmd = ['sqlplus', '-S', '-L', db_conn_str]
    if crs_debug:
        print('DEBUG: ' + ' '.join(sqlplus_cmd))
    process = subprocess.Popen(sqlplus_cmd,
                               stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE, text=True)
    write_sql_string(process, args)
    stdout, stderr = process.communicate()
    # Get return code of sql query
    stdout_lines = stdout.split("\n")
    print('STDOUT')
    for line in stdout_lines:
        line = line.rstrip()
        print(line)
    stderr_lines = stderr.split("\n")
    print('STDERR')
    for line in stderr_lines:
        line = line.rstrip()
        print(line)
    sqlplus_rc = process.poll()
    # Check if sqlplus returned an error
    if sqlplus_rc != 0:
        print("FAILURE in " + script_name + " in connecting to Oracle, exit code: " + str(sqlplus_rc))
        print(stderr)  # was stderr_data, which is undefined here
        sys.exit(sqlplus_rc)
When I run my code for a SQL file that requires parameters and some are missing, I get no output. If I run it with the parameters supplied, I get the correct output.
Here is an example SQL file sel_dual.sql:
SELECT 'THIS IS TEXT &1 &2' FROM dual;
As an example command line:
run_sql_file.py dbname sql_file [arg1]...[argn]
If I run the script with
run_sql_file.py dbname sel_dual.sql
I get no output, even though it should ask for a parameter and give other error output.
If I run the script with
run_sql_file.py dbname sel_dual.sql Seth F
I get the proper output:
'THISISTEXTSETHF'
----------------------------------------------------------------------------
THIS IS TEXT Seth F
The args referred to are the result of processing the command line with the argparse module:
parser = argparse.ArgumentParser(description='Run a SQL file with optional arguments using SQLPlus')
parser.add_argument('dbname', help='db (environment) name')
parser.add_argument('sql_file_name', help='sql file')
parser.add_argument('sql_args', nargs='*', help='arguments for sql file')
args = parser.parse_args()
Does anybody know what could be causing this? I've omitted the rest of the script since it basically gets command arguments and validates that the SQL file exists.
I am running sqlplus Release 12.1.0.2.0 Production and Python 3.7.6, on Linux (not sure what distribution; the kernel release is 4.1.12-124.28.5.el7uek.x86_64).

subprocess.Popen does not return complete output when run through crontab

I am calling a Java binary in a Unix environment, wrapped inside a Python script.
When I call the script from bash, the output comes through clean and is stored in the desired variable. However, when I run the same script from cron, the output stored in the variable is incomplete.
my code:
import subprocess

command = '/opt/HP/BSM/PMDB/bin/abcAdminUtil -abort -streamId ETL_' \
          'SystemManagement_PA#Fact_SCOPE_OVPAGlobal'
proc = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE)
(output, err) = proc.communicate()  # storing output in the output variable
Value of output variable when running from shell:
Abort cmd output:PID:8717
Executing abort function
hibernateConfigurationFile = /OBRHA/HPE-OBR/PMDB/lib/hibernate-core-4.3.8.Final.jar
Starting to Abort Stream ETL_SystemManagement_PA#Fact_SCOPE_OVPAGlobal
Aborting StreamETL_SystemManagement_PA#Fact_SCOPE_OVPAGlobal
Value of output variable when running from cron:
PID:830
It seems the output after creating the new process is not being stored in the variable, and I don't know why.
Kintul.
Your question seems to be very similar to this one: Capture stdout stderr of python subprocess, when it runs from cron or rc.local
See if that helps you.
This happened because the Java utility was throwing an exception which is not caught by subprocess.Popen.
However, the exception is caught by subprocess.check_output.
Updated Code :
try:
    output = subprocess.check_output(command, shell=True, stderr=subprocess.STDOUT, stdin=subprocess.PIPE)
except subprocess.CalledProcessError as exc:
    print("Status : FAIL", exc.returncode, exc.output)
else:
    print("Output of Resume cmd: \n{}\n".format(output))
    file.write("Output of Resume cmd: \n{}\n".format(output) + "\n")
Output of code:
('Status : FAIL', -11, 'PID:37319\n')
('Status : FAIL', -11, 'PID:37320\n')
Hence, the command is throwing an exception that is caught by subprocess.check_output but not by subprocess.Popen.
Extract from the official documentation of subprocess.check_output:
If the return code was non-zero it raises a CalledProcessError. The CalledProcessError object will have the return code in the returncode attribute and any output in the output attribute.
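For comparison, a sketch of the equivalent check with Popen, which never raises on a non-zero exit status; the caller has to inspect returncode itself (the command string here is a placeholder):
import subprocess

proc = subprocess.Popen('some_command', shell=True,
                        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
output, _ = proc.communicate()
if proc.returncode != 0:  # Popen leaves the exit-status check to the caller
    print("Status : FAIL", proc.returncode, output)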

Setting timeout when using os.system function

Firstly, I'd like to say I have just begun to learn Python, and I want to execute a Maven command inside my Python script (see the partial code below):
os.system("mvn surefire:test")
But unfortunately, sometimes this command times out, so I want to know how to set a timeout threshold to control it.
That is to say, if the execution time exceeds X seconds, the program should skip the command.
Are there other useful solutions to my problem? Thanks in advance!
Use the subprocess module instead. By passing a list and sticking with the default shell=False, we can just kill the process when the timeout hits:
import subprocess

p = subprocess.Popen(['mvn', 'surefire:test'])
try:
    p.wait(my_timeout)
except subprocess.TimeoutExpired:
    p.kill()
Alternatively, you can use the shell's timeout command, like this:
import os
os.system('timeout 5s [Type Command Here]')
You can use the s, m, h, d suffixes for seconds, minutes, hours, days.
You can also choose which signal is sent to the command. If you want to learn more, see:
https://linuxize.com/post/timeout-command-in-linux/
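For example, to send SIGKILL instead of the default SIGTERM (assuming GNU coreutils timeout, whose -s option selects the signal):
import os

# -s KILL makes timeout send SIGKILL when the 5-second limit expires
os.system('timeout -s KILL 5s mvn surefire:test')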
Simple answer: os.system does not support a timeout.
You can use Python 3's subprocess instead, which supports a timeout parameter, for example:
import subprocess

yourCommand = "mvn surefire:test"
timeoutSeconds = 5
subprocess.check_output(yourCommand, shell=True, timeout=timeoutSeconds)
Detailed explanation
Going further, I have encapsulated this in a function, getCommandOutput, for you:
import subprocess

def getCommandOutput(consoleCommand, consoleOutputEncoding="utf-8", timeout=2):
    """get command output from terminal

    Args:
        consoleCommand (str): console/terminal command string
        consoleOutputEncoding (str): console output encoding, default is utf-8
        timeout (int): max seconds to wait for the console command
    Returns:
        (isRunCmdOk (bool), console output (str))
    Raises:
        subprocess.TimeoutExpired: propagated if the command exceeds the timeout
    """
    # print("getCommandOutput: consoleCommand=%s" % consoleCommand)
    isRunCmdOk = False
    consoleOutput = ""
    try:
        consoleOutputByte = subprocess.check_output(consoleCommand, shell=True, timeout=timeout)
        # print("type(consoleOutputByte)=%s" % type(consoleOutputByte))  # <class 'bytes'>
        # print("consoleOutputByte=%s" % consoleOutputByte)  # b'640x360\n'
        consoleOutput = consoleOutputByte.decode(consoleOutputEncoding)  # '640x360\n'
        consoleOutput = consoleOutput.strip()  # '640x360'
        isRunCmdOk = True
    except subprocess.CalledProcessError as callProcessErr:
        cmdErrStr = str(callProcessErr)
        print("Error %s for run command %s" % (cmdErrStr, consoleCommand))
    # print("isRunCmdOk=%s, consoleOutput=%s" % (isRunCmdOk, consoleOutput))
    return isRunCmdOk, consoleOutput
Demo:
isRunOk, cmdOutputStr = getCommandOutput("mvn surefire:test", timeout=5)

Execute python scripts from another python script opening another shell

I'm using Python 3. I need one script to call the other and run it in a different shell, without passing arguments. I'm using Mac OS X, but I need it to be cross-platform.
I tried with:
os.system('script2.py')
subprocess.Popen('script2.py', shell=True)
os.execl(sys.executable, 'python3', 'script2.py')
But none of them accomplish what I need.
I use the second script to get inputs, while the first one handles the outputs...
EDIT
This is the code of my second script:
import sys
import os
import datetime

os.remove('Logs/consoleLog.txt')
try:
    os.remove('Temp/commands.txt')
except:
    ...
stopSim = False
command = ''
okFile = open('ok.txt', 'w')
okFile.write('True')
consoleLog = open('Logs/consoleLog.txt', 'w')
okFile.close()
while not stopSim:
    try:
        sysTime = datetime.datetime.now()
        stringT = str(sysTime)
        split1 = stringT.split(" ")
        split2 = split1[0].split("-")
        split3 = split1[1].split(":")
        for i in range(3):
            split2.append(split3[i])
        timeString = "{0}-{1}-{2} {3}:{4}".format(split2[2], split2[1], split2[0], split2[3], split2[4])
    except:
        timeString = "Time"  # was "timestring", which left timeString undefined
    commandFile = open('Temp/commands.txt', 'w')
    command = input(timeString + ": ")
    command = command.lower()
    consoleLog.write(timeString + ': ' + command + "\n")
    commandFile.write(command)
    commandFile.close()
    if command == 'stop simulation' or command == 'stop sim':
        stopSim = True
consoleLog.close()
os.remove('Temp/commands.txt')
and this is where I call, and wait for, the other script to become operative in script 1:
# Open console
consoleOpen = False
while not consoleOpen:
    try:
        okFile = open('ok.txt', 'r')
        c = okFile.read()
        if c == 'True':
            consoleOpen = True
    except:
        ...
Sorry for the long question. Any suggestions to improve the code are welcome.
Probably the easiest solution is to make the contents of your second script a function in the first script, and execute it as a multiprocessing Process. Note that you can use e.g. multiprocessing.Pipe or multiprocessing.Queue to exchange data between the different processes. You can also share values and arrays via multiprocessing.sharedctypes. A minimal sketch of that idea follows.
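The roles are reversed here so that input() stays in the parent process (multiprocessing redirects a child's stdin to os.devnull); the function name simulation and the command strings are illustrative assumptions, not the asker's actual code:
import multiprocessing

def simulation(queue):
    # The simulation loop: consume commands sent from the console side.
    while True:
        command = queue.get()
        if command in ('stop simulation', 'stop sim'):
            break
        # ... handle other commands and produce output here ...

if __name__ == '__main__':
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=simulation, args=(queue,))
    proc.start()
    while True:
        command = input(': ').lower()  # console input stays in the parent
        queue.put(command)
        if command in ('stop simulation', 'stop sim'):
            break
    proc.join()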
This will be platform-dependent. Here is a solution for Mac OS X.
Create new file run_script2 with this content:
/full/path/to/python /full/path/to/script2.py
Make it executable: chmod +x run_script2
Run from Python with:
os.system('open -a Terminal run_script2')
Alternatively you can use subprocess.call:
subprocess.call(['open -a Terminal run_script2'], shell=True)
On Windows you can do something similar with (untested):
os.system('start cmd /D /C "python script2.py && pause"')

Starting a Python script from within another - odd behavior

Through a command line (/bin/sh) on an Ubuntu system, I executed a Python3 script that uses multiprocessing.Process() to start another Python3 script. I got the error message below:
collier@Nacho-Laptop:/media/collier/BETABOTS/Neobot$ ./Betabot #THE SECOND SCRIPT NEVER EXECUTES
/bin/sh: 1: Syntax error: "(" unexpected (expecting "}")
Traceback (most recent call last):
  File "./Betabot", line 26, in <module>
    JOB_CONFIG = multiprocessing.Process(os.system('./conf/set_data.py3'))
  File "/usr/lib/python3.3/multiprocessing/process.py", line 72, in __init__
    assert group is None, 'group argument must be None for now'
AssertionError: group argument must be None for now
#TESTING THE SECOND SCRIPT BY ITSELF IN TWO WAYS (both work)
collier@Nacho-Laptop:/media/collier/BETABOTS/Neobot$ python3 -c "import os; os.system('./conf/set_data.py3')" #WORKS
collier@Nacho-Laptop:/media/collier/BETABOTS/Neobot$ ./conf/set_data.py3 #WORKS
The question is - Why is this not working? It should start the second script and both continue executing with out issues.
I made edits to the code trying to solve the issue. The error is now on line 13. The same error occurs on line 12, "JOB_CONFIG = multiprocessing.Process(os.system('date')); JOB_CONFIG.start()", which I used as a testing line. I changed line 12 to just "os.system('date')" and that works, so the error lies in the multiprocessing call.
#!/usr/bin/env python3
import os, subprocess, multiprocessing

def write2file(openfile, WRITE):
    with open(openfile, 'w') as file:
        file.write(str(WRITE))
writetofile = writefile = filewrite = writer = filewriter = write2file

global BOTNAME, BOTINIT
BOTNAME = subprocess.getoutput('cat ./conf/startup.xml | grep -E -i -e \'<property name=\"botname\" value\' | ssed -r -e "s|<property name=\"botname\" value=\"(.*)\"/>|\1|gI"')
BOTINIT = os.getpid()
###Setup science information under ./mem/###
JOB_CONFIG = multiprocessing.Process(os.system('date')); JOB_CONFIG.start()
JOB_CONFIG = multiprocessing.Process(os.system('./conf/set_data.py3')); JOB_CONFIG.start()
###START###
write2file('./mem/BOTINIT_PID', BOTINIT); write2file('./mem/tty', os.ctermid()); write2file('./mem/SERVER_PID', BOTINIT)
JOB_EMOTION = multiprocessing.Process(os.system('./lib/emoterm -T Emotion -e ./lib/Emotion_System')); JOB_EMOTION.start()
JOB_SENSORY = multiprocessing.Process(os.system('./lib/Sensory_System')); JOB_SENSORY.start()
print(BOTNAME + ' is starting'); JOB_CONFIG.join()
try:
    os.system('./lib/neoterm -T' + BOTNAME + ' -e ./lib/beta_engine')
except:
    print('There seems to be an error.'); JOB_EMOTION.join(); JOB_SENSORY.join(); exit()
JOB_EMOTION.join(); JOB_SENSORY.join(); exit()
When starting a Python3 script from a Python3 script that should run while the main script continues, a command like this must be used:
JOB_CONFIG = subprocess.Popen([sys.executable, './conf/set_data.py3'])
The filename string is the script. This is saved to a variable so the process can be manipulated later. For instance, I could call "JOB_CONFIG.wait()" when the main script should wait for the other script.
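As a self-contained sketch of that pattern (the script path is taken from the question above):
import sys
import subprocess

# Start the second script without blocking the main one.
JOB_CONFIG = subprocess.Popen([sys.executable, './conf/set_data.py3'])
# ... main script continues here ...
JOB_CONFIG.wait()  # block until the child script finishes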
As for the /bin/sh syntax error in the first line of the error message, that is due to a syntax error in the first subprocess command used.
