Python subprocess Popen not outputting sqlplus error strings - python-3.x

When I run sqlplus from Python using subprocess, I get no output when there are SQL errors, or when update or insert statements report the number of rows updated or inserted. When I run select statements with no errors, I do get the output.
Here is my code:
This function builds a string of newline-separated SQL*Plus commands that is then written with process.stdin.write():
def write_sql_string(process, args):
    sql_commands = ''
    sql_commands += "WHENEVER SQLERROR EXIT SQL.SQLCODE;\n"
    sql_line = '@' + args.sql_file_name
    if crs_debug:
        print('DEBUG: ' + 'sys.argv', ' '.join(sys.argv))
    if len(args.sql_args) > 0:
        len_argv = len(args.sql_args)
        sql_commands += "SET VERIFY OFF\n"
        for i in range(0, len_argv):
            sql_line += ' "' + args.sql_args[i] + '"'
    sql_commands += sql_line + "\n"
    # if prod_env:
    sql_commands += "exit;\n"
    if crs_debug:
        print('DEBUG: ' + 'sql_line: ' + sql_line)
    process.stdin.write(sql_commands)
This code executes the SQL commands
def execute_sql_file(username, dbpass, args):
    db_conn_str = username + '/' + dbpass + '@' + args.dbname
    # '-S' - silent mode
    sqlplus_cmd = ['sqlplus', '-S', '-L', db_conn_str]
    if crs_debug:
        print('DEBUG: ' + ' '.join(sqlplus_cmd))
    process = subprocess.Popen(sqlplus_cmd,
                               stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE, text=True)
    write_sql_string(process, args)
    stdout, stderr = process.communicate()
    # Get return code of sql query
    stdout_lines = stdout.split("\n")
    print('STDOUT')
    for line in stdout_lines:
        line = line.rstrip()
        print(line)
    stderr_lines = stderr.split("\n")
    print('STDERR')
    for line in stderr_lines:
        line = line.rstrip()
        print(line)
    sqlplus_rc = process.poll()
    # Check if sqlplus returned an error
    if sqlplus_rc != 0:
        print("FAILURE in " + script_name + " in connecting to Oracle, exit code: " + str(sqlplus_rc))
        print(stderr)
        sys.exit(sqlplus_rc)
When I run my code for a SQL file that requires parameters and some are missing, I get no output. If I run it with all parameters I get the correct output.
Here is an example SQL file sel_dual.sql:
SELECT 'THIS IS TEXT &1 &2' FROM dual;
As an example command line:
run_sql_file.py dbname sql_file [arg1]...[argn]
If I run the script with
run_sql_file.py dbname sel_dual.sql
I get no output, even though it should ask for a parameter and give other error output.
If I run the script with
run_sql_file.py dbname sel_dual.sql Seth F
I get the proper output:
'THISISTEXTSETHF'
----------------------------------------------------------------------------
THIS IS TEXT Seth F
The args referred to is the result of processing args with the argparse module:
parser = argparse.ArgumentParser(description='Run a SQL file with optional arguments using SQLPlus')
parser.add_argument('dbname', help='db (environment) name')
parser.add_argument('sql_file_name', help='sql file')
parser.add_argument('sql_args', nargs='*', help='arguments for sql file')
args = parser.parse_args()
Does anybody know what could be causing this? I've omitted the rest of the script since it basically gets command arguments and validates that the SQL file exists.
I am running SQL*Plus Release 12.1.0.2.0 Production and Python 3.7.6, on Linux (not sure which distribution); the kernel release is 4.1.12-124.28.5.el7uek.x86_64.
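One pattern worth double-checking above (a sketch, not the original script): calling process.stdin.write() and then communicate() is a documented deadlock risk with piped output, and it gives no visibility when sqlplus stops to prompt, e.g. for a missing &1 substitution value. Passing the commands through communicate(input=...) with a timeout at least turns a silent hang into a detectable one:

import subprocess

def run_sqlplus(db_conn_str, sql_commands, timeout=60):
    # Sketch: feed all commands via communicate() instead of a separate
    # stdin.write(), and use a timeout so a sqlplus prompt waiting for
    # input surfaces as TimeoutExpired rather than producing no output.
    process = subprocess.Popen(['sqlplus', '-S', '-L', db_conn_str],
                               stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE, text=True)
    try:
        stdout, stderr = process.communicate(input=sql_commands, timeout=timeout)
    except subprocess.TimeoutExpired:
        process.kill()  # sqlplus is stuck, likely waiting at a prompt
        stdout, stderr = process.communicate()
    return process.returncode, stdout, stderr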

Related

Python subprocess Error 22 - Works while debugging but not when running

I am trying to call a python script from another using subprocess.Popen(). It works perfectly when I debug the program line-by-line, however when I run the program normally I get the following error:
C:\Python38\python.exe: can't open file '"<filepath>\thumbs.py"': [Errno 22] Invalid argument
I'm stumped as to what the issue is as it works without fault when the program is being debugged line-by-line so not sure what changes exactly when the program is run normally.
I am trying to pass a set of arguments into the subprocesses, and am then parsing these using argparse in the child process.
This is how I am calling the process:
cmd = ['python', "\"" + pythonPath + "thumbs.py\"", '-d', "\"" + outputdb + "\"", '-c', "\"" + path + cache + "\""]
subprocess.Popen(cmd).wait()
And here is how I am parsing the arguments in the child process:
if __name__ == '__main__':
    global session
    args = None
    parser = argparse.ArgumentParser()
    parser.add_argument("-d", "--database", action="store", dest="dbname", help="Database to output results to", required=True)
    parser.add_argument("-c", "--cache", action="store", dest="cachefile", help="Cache file to scrape.", required=True)
    while args is None:
        args = parser.parse_args()
Can anyone spot what I'm missing?
I've managed to fix this issue by creating my command and arguments as a string first and then using shlex.split() to split it into an arguments list.
cmdStr = "python \"" + pythonPath + "thumbs.py\" -d \"" + outputdb + "\" -c \"" + path + cache + "\""
cmd = shlex.split(cmdStr)
subprocess.Popen(cmd).wait()
Further info here: https://docs.python.org/3.4/library/subprocess.html#popen-constructor
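The underlying problem is that when Popen receives a list, each element is delivered to the child verbatim, so the hand-embedded \" characters become part of the file path itself (hence the quotes visible in the Errno 22 message). A sketch of the list form without manual quoting, using the same pythonPath/outputdb/path/cache variables as above:

cmd = ['python', pythonPath + 'thumbs.py',
       '-d', outputdb,
       '-c', path + cache]
# Each list element arrives as exactly one argument, even if it
# contains spaces, so no quote characters are needed.
subprocess.Popen(cmd).wait()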

Last line of STDOUT is not getting printed if there's an STDIN after it

While trying to integrate sqlmap with my automation tool, I run the command and save the output into a file. The line after which the user's input is required is not printed to the console; it only appears after the arguments are passed. The console output needs to be printed in both places (terminal and output file). Since sqlmap requires the user's input during execution, I cannot use subprocess.check_output().
Code snippet:
try:
    cmd = cmd.rstrip()
    process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    while True:
        out = process.stdout.readline()[0:]
        if out == '' and process.poll() is not None:
            break
        if out:
            output += out
            print(out.strip())
except Exception as e:
    exception_message = str(e)
    output += exception_message
    if 'exit status 1' not in exception_message:
        self.utilities.print_color("[!] Error executing the command: " + cmd, 'red')
output += '\r\n'
print(output)
return output
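readline() only returns once it sees a newline (or EOF), and interactive prompts usually end without one, which is why the prompt line shows up only after the input is given. A sketch of a prompt-friendly loop, assuming the process is opened in text mode (text=True or universal_newlines=True):

buffer = ''
while True:
    # Read one character at a time so a prompt with no trailing
    # newline is echoed immediately instead of being held back.
    ch = process.stdout.read(1)
    if ch == '' and process.poll() is not None:
        break
    if ch:
        buffer += ch
        print(ch, end='', flush=True)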

Linecache getline does not work after my application was installed

I am creating a tool that gives an overview of hundreds of test results. This tool accesses a log file and checks for Pass and Fail verdicts. When it is a fail, I need to go back to previous lines of the log to capture the cause of failure.
The linecache.getline call works in my workspace (Python run via Eclipse). But after I created a Windows installer (.exe file) and installed the application on my computer, linecache.getline returns nothing. Is there something I need to add to my setup.py file to fix this, or is it an issue with my code?
Tool Code
precon: from a wx.FileDialog, access the log file
self.result_path = dlg.GetPath()
try:
    with open(self.result_path, 'r') as file:
        self.checkLog(self.result_path, file)
def checkLog(self, path, f):
    line_no = 1
    index = 0
    for line in f:
        n = re.search("FAIL", line, re.IGNORECASE) or re.search("PASS", line, re.IGNORECASE)
        if n:
            currentline = re.sub(r'\s+', ' ', line.rstrip())
            finalresult = currentline
            self.list_ctrl.InsertStringItem(index, finaltestname)
            self.list_ctrl.SetStringItem(index, 1, finalresult)
            if currentline == "FAIL":
                fail_line1 = linecache.getline(path, int(line_no - 3))  # Get reason of failure
                fail_line2 = linecache.getline(path, int(line_no - 2))  # Get reason of failure
                cause = fail_line1.strip() + " " + fail_line2.strip()
                self.list_ctrl.SetStringItem(index, 2, cause)
            index += 1
        line_no += 1
The issue was resolved by implementing the get_line function from this link:
Python: linecache not working as expected?
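A minimal sketch of such a helper, reading the target file directly instead of going through linecache (which is designed for Python source files and can come up empty inside a frozen executable):

def get_line(path, lineno):
    # Plain file read as a linecache.getline replacement; returns ''
    # for out-of-range line numbers, the same way linecache does.
    if lineno < 1:
        return ''
    with open(path, 'r') as f:
        for i, line in enumerate(f, start=1):
            if i == lineno:
                return line
    return ''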

Execute python scripts from another python script opening another shell

I'm using Python 3, and I need one script to call the other and run it in a different shell, without passing arguments. I'm on Mac OS X, but I need it to be cross-platform.
I tried with
os.system('script2.py')
subprocess.Popen('script2.py', shell=True)
os.execl(sys.executable, "python3", 'script2.py')
But none of them accomplish what I need.
I use the second script to get inputs, while the first one handles the outputs...
EDIT
This is the code on my second script:
import sys
import os
import datetime

os.remove('Logs/consoleLog.txt')
try:
    os.remove('Temp/commands.txt')
except:
    ...
stopSim = False
command = ''
okFile = open('ok.txt', 'w')
okFile.write('True')
consoleLog = open('Logs/consoleLog.txt', 'w')
okFile.close()
while not stopSim:
    try:
        sysTime = datetime.datetime.now()
        stringT = str(sysTime)
        split1 = stringT.split(" ")
        split2 = split1[0].split("-")
        split3 = split1[1].split(":")
        for i in range(3):
            split2.append(split3[i])
        timeString = "{0}-{1}-{2} {3}:{4}".format(split2[2], split2[1], split2[0], split2[3], split2[4])
    except:
        timeString = "Time"
    commandFile = open('Temp/commands.txt', 'w')
    command = input(timeString + ": ")
    command = command.lower()
    consoleLog.write(timeString + ': ' + command + "\n")
    commandFile.write(command)
    commandFile.close()
    if command == 'stop simulation' or command == 'stop sim':
        stopSim = True
consoleLog.close()
os.remove('Temp/commands.txt')
and this is where I call and wait for the other script to be operative in script 1:
# Open console
while not consoleOpent:
    try:
        okFile = open('ok.txt', 'r')
        c = okFile.read()
        if c == 'True':
            consoleOpent = True
    except:
        ...
Sorry for the long question...
Any suggestion to improve the code is welcomed.
Probably the easiest solution is to make the contents of your second script a function in the first script and execute it as a multiprocessing Process. Note that you can use e.g. multiprocessing.Pipe or multiprocessing.Queue to exchange data between the different processes. You can also share values and arrays via multiprocessing.sharedctypes.
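A generic sketch of the Queue exchange, with one caveat: a multiprocessing child gets its stdin redirected, so the input() call has to stay in the parent while the worker runs in the child:

import multiprocessing

def worker(queue):
    while True:
        command = queue.get()  # blocks until the parent sends a command
        if command in ('stop simulation', 'stop sim'):
            break
        # ... handle the command / write output here ...

if __name__ == '__main__':
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=worker, args=(queue,))
    proc.start()
    while True:
        command = input(': ').lower()
        queue.put(command)
        if command in ('stop simulation', 'stop sim'):
            break
    proc.join()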
This will be platform-dependent. Here is a solution for Mac OS X.
Create a new file run_script2 with this content:
/full/path/to/python /full/path/to/script2.py
Make it executable: chmod +x run_script2
Run it from Python with:
os.system('open -a Terminal run_script2')
Alternatively you can use subprocess.call:
subprocess.call(['open -a Terminal run_script2'], shell=True)
On Windows you can do something similar with (untested):
os.system('start cmd /D /C "python script2.py && pause"')

problems with optparse and python 3.4

Since upgrading to Python 3.4.3, optparse doesn't appear to recognise command line options. As a simple test I run this (from the optparse examples):
# test_optparse.py
def main():
from optparse import OptionParser
parser = OptionParser()
parser.add_option("-f", "--file", dest="filename",
help="write report to FILE", metavar="FILE")
parser.add_option("-q", "--quiet",
action="store_false", dest="verbose", default=True,
help="don't print status messages to stdout")
(options, args) = parser.parse_args()
print(options)
if __name__ == '__main__':
main()
When I run test_optparse.py -f test I get
{'verbose': True, 'filename': None}
But running it within my IDE I get
{'filename': 'test', 'verbose': True}
I first noted this in a script where I concatenated a run command, for example;
run_cmd = 'python.exe ' + '<path to script>' + ' -q ' + '<query_name>'
res = os.system(run_cmd)
But when I displayed the run_cmd string, it was shown in the interpreter over two lines:
print(run_cmd)
'python.exe <path to script> -q '
' <query_name>'
So it may be that the command line is being fragmented by something and only the first section is being passed (hence no query name), so the run python script fails with 'no query specified'.
I've changed all this to use subprocess.call to get around it, but it's useful to have the run_query script available for command line use as before. Any ideas or suggestions?
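For reference, a sketch of the subprocess.call form mentioned above (script_path and query_name are illustrative placeholders), passing the arguments as a list so nothing can be re-split by the shell:

import subprocess

# Each list element is delivered to the child as one argument,
# regardless of embedded spaces or line wrapping in the source.
res = subprocess.call(['python.exe', script_path, '-q', query_name])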
