My Python script executes a .bat file which in turn runs a Microsoft (MS) Access program. The MS Access app populates the DB and then quits. Sometimes Access fails due to a corrupt DB or some other issue that causes it to hang. I want to guard against this and kill the process if it runs longer than a configurable number of seconds. Testing my script, I set timeout=5 (seconds) for a process that takes at least 10 seconds. When the timeout fires, the script doesn't kill the process; instead it waits until the MS Access process completes (as per the subprocess.run Python docs). How can I kill the MS Access app? The code works as expected for .bat files that don't call Access.
if args.verbose:
    print("Starting timer for " + str(alarmTime) + " minutes")
try:
    proc = subprocess.run([batchFile],
                          stdout=subprocess.PIPE,
                          stderr=subprocess.STDOUT,
                          universal_newlines=True,
                          shell=False,
                          check=True,
                          timeout=5)
except subprocess.TimeoutExpired:
    if args.verbose:
        print("FAIL: Ran batch: ", batchFile, " and it timed out!")
    if args.email:
        email("FAIL: A batch script process is running longer than 15 minutes\nSomething is wrong\nquitting", mailTo)
    returnStatus = returnStatus + " The following batch job timed out: " + batchFile + "\n"
    dependancy[batchFile][1] = None
    continue
dependancy[batchFile][1] = RunStatus(True, (True if proc.returncode == 0 else False))
if proc.returncode != 0:
    returnStatus = returnStatus + " The following batch job failed: " + batchFile + "\n"
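One workaround, sketched below, is to launch the batch file with subprocess.Popen and, on timeout, kill the whole process tree with Windows' taskkill, since killing only the spawned cmd.exe can leave the child MSACCESS.EXE running. This is a sketch, not the original poster's code: it assumes Windows and that force-killing the tree is acceptable.
import subprocess

def run_batch_with_timeout(batchFile, timeout=5):
    # Launch the batch file directly (no shell wrapper).
    proc = subprocess.Popen([batchFile],
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT,
                            universal_newlines=True)
    try:
        output, _ = proc.communicate(timeout=timeout)
    except subprocess.TimeoutExpired:
        # /T kills child processes (including Access), /F forces termination;
        # proc.kill() alone would only terminate cmd.exe.
        subprocess.run(['taskkill', '/T', '/F', '/PID', str(proc.pid)])
        output, _ = proc.communicate()
        raise
    return proc.returncode, output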
I want to wait until the given command has finished executing on the remote machine. In this case the script just sends the command and returns, without waiting until the command has completed.
import paramiko
import re
import time

def scp_switch(host, username, PasswdValue):
    ssh = paramiko.SSHClient()
    try:
        # Log into the remote host with my credentials
        ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        ssh.connect(host, username=username, password=PasswdValue, timeout=30)
        try:
            # Switch to powerbroker/root mode
            command = "pbrun xyz -u root\n"
            channel = ssh.invoke_shell()
            channel.send(command)
            time.sleep(3)
            while not re.search('Password', str(channel.recv(9999), 'utf-8')):
                time.sleep(1)
                print('Waiting...')
            channel.send("%s\n" % PasswdValue)
            time.sleep(3)
            # Execute the command on the remote host as root (once logged in as root).
            # I don't have any specific keyword to search for in the output,
            # hence I am not using a while loop here.
            cmd = "/tmp/slp.sh cool >/tmp/slp_log.txt \n"
            print('Executing %s' % cmd)
            channel.send(cmd)  # it is not waiting here till the process is completed
            time.sleep(3)
            res = str(channel.recv(1024), 'utf-8')
            print(res)
            print('process completed')
        except Exception as e:
            print('Error while switching:', str(e))
    except Exception as e:
        print('Error while SSH : %s' % (str(e)))
    ssh.close()

""" Provide the host and credentials here """
HOST = 'abcd.us.domain.com'
username = 'heyboy'
password = 'passcode'
scp_switch(HOST, username, password)
As per my research, the interactive shell channel will not return any status code. Is there any way to get the return code and wait until the process has completed?
I know this is an old post, but leaving this here in case someone has the same problem.
You can append an echo that runs only if your command executes successfully, for example scp ... && echo 'transfer done', and then catch that marker in the output with a loop:
while True:
    s = chan.recv(4096)
    s = s.decode()
    if 'transfer done' in s:
        break
    time.sleep(1)
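To also get a return code, one option (a sketch, assuming the command can run outside the interactive pbrun shell) is paramiko's exec_command, which exposes the remote exit status directly:
# Sketch: run the command on a fresh exec channel and block until it exits.
stdin, stdout, stderr = ssh.exec_command('/tmp/slp.sh cool >/tmp/slp_log.txt')
rc = stdout.channel.recv_exit_status()  # blocks until the remote command finishes
print('remote command finished with exit code', rc)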
When I run sqlplus from Python using subprocess, I get no output when there are SQL errors, or when update or insert statements return the number of rows updated or inserted. When I run select statements with no errors, I do get the output.
Here is my code:
This creates a string of newline-separated commands that is then written to the process via process.stdin.write():
def write_sql_string(process, args):
    sql_commands = ''
    sql_commands += "WHENEVER SQLERROR EXIT SQL.SQLCODE;\n"
    sql_line = '@' + args.sql_file_name
    if crs_debug:
        print('DEBUG: ' + 'sys.argv', ' '.join(sys.argv))
    if len(args.sql_args) > 0:
        len_argv = len(args.sql_args)
        sql_commands += "SET VERIFY OFF\n"
        for i in range(0, len_argv):
            sql_line += ' "' + args.sql_args[i] + '"'
    sql_commands += sql_line + "\n"
    # if prod_env:
    sql_commands += "exit;\n"
    if crs_debug:
        print('DEBUG: ' + 'sql_line: ' + sql_line)
    process.stdin.write(sql_commands)
This code executes the SQL commands
def execute_sql_file(username, dbpass, args):
    db_conn_str = username + '/' + dbpass + '@' + args.dbname
    # '-S' - silent
    sqlplus_cmd = ['sqlplus', '-S', '-L', db_conn_str]
    if crs_debug:
        print('DEBUG: ' + ' '.join(sqlplus_cmd))
    process = subprocess.Popen(sqlplus_cmd,
                               stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                               stderr=subprocess.PIPE, text=True)
    write_sql_string(process, args)
    stdout, stderr = process.communicate()
    # Get return code of sql query
    stdout_lines = stdout.split("\n")
    print('STDOUT')
    for line in stdout_lines:
        line = line.rstrip()
        print(line)
    stderr_lines = stderr.split("\n")
    print('STDERR')
    for line in stderr_lines:
        line = line.rstrip()
        print(line)
    sqlplus_rc = process.poll()
    # Check if sqlplus returned an error
    if sqlplus_rc != 0:
        print("FAILURE in " + script_name + " in connecting to Oracle, exit code: " + str(sqlplus_rc))
        print(stderr)
        sys.exit(sqlplus_rc)
When I run my code for a SQL file that requires parameters and some parameters are missing, I get no output. If I run it with all parameters supplied, I get the correct output.
Here is an example SQL file sel_dual.sql:
SELECT 'THIS IS TEXT &1 &2' FROM dual;
As an example command line:
run_sql_file.py dbname sql_file [arg1]...[argn]
If I run the script with
run_sql_file.py dbname sel_dual.sql
I get no output, even though it should ask for a parameter and give other error output.
If I run the script with
run_sql_file.py dbname sel_dual.sql Seth F
I get the proper output:
'THISISTEXTSETHF'
----------------------------------------------------------------------------
THIS IS TEXT Seth F
The args referred to are the result of processing the command line with the argparse module:
parser = argparse.ArgumentParser(description='Run a SQL file with optional arguments using SQLPlus')
parser.add_argument('dbname', help='db (environment) name')
parser.add_argument('sql_file_name', help='sql file')
parser.add_argument('sql_args', nargs='*', help='arguments for sql file')
args = parser.parse_args()
Does anybody know what could be causing this? I've omitted the rest of the script since it basically gets command arguments and validates that the SQL file exists.
I am running sqlplus version Release 12.1.0.2.0 Production. I am running Python version 3.7.6. I am running on Linux (not sure what version). The kernel release is 4.1.12-124.28.5.el7uek.x86_64.
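No answer is recorded here, but one thing worth checking (my assumption, not from the original post) is whether sqlplus is blocking on a substitution-variable prompt once stdin is exhausted. Merging stderr into stdout and putting a timeout on communicate() can make such a hang visible:
# Hypothetical debugging variant of the Popen call above.
process = subprocess.Popen(sqlplus_cmd,
                           stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.STDOUT,  # interleave errors with normal output
                           text=True)
write_sql_string(process, args)
try:
    stdout, _ = process.communicate(timeout=60)
except subprocess.TimeoutExpired:
    process.kill()
    stdout, _ = process.communicate()
    print('sqlplus hung; partial output follows')
print(stdout)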
I have a simple function that is meant to iterate over IPs and execute a whois lookup on a Windows machine, printing the output to several txt files:
for ip in unique_ip:
    os.system('c:\\whois64 -v ' + ip + ' > ' + 'C:\\ipwhois\\' + ip + '.txt')
It happens that this os.system call gets stuck, and the entire process freezes.
Question: is it possible to set a maximum execution time on an os.system command?
EDIT
This works:
def timeout_test():
    command_line = 'whois64 -v xx.xx.xx.xx'
    args = shlex.split(command_line)
    print(args)
    try:
        with open('c:\\test\\iptest.txt', 'w') as fp:
            subprocess.run(args, stdout=fp, timeout=5)
    except subprocess.TimeoutExpired:
        print('process ran too long')
        return True

test = timeout_test()
You can add a timeout argument to subprocess.call, which works almost the same way as os.system. The timeout is measured in seconds from when the child process starts.
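For example, a minimal sketch (the whois64 path and the IP address are placeholders):
import subprocess

try:
    # subprocess.call kills the child and raises TimeoutExpired
    # if it runs longer than the given timeout.
    rc = subprocess.call(['c:\\whois64', '-v', '8.8.8.8'], timeout=5)
except subprocess.TimeoutExpired:
    print('whois64 ran too long and was killed')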
I would like to do a packet capture using tshark, a command-line flavor of Wireshark, while connected to a remote host device over telnet. I would like to invoke the function I wrote for the capture:
import subprocess
import sys

def wire_cap(ip1, ip2, op_fold, file_name, duration):  # invoke tshark to capture traffic during session
    if duration == 0:
        cmd = '"tshark" -i 1 -P -w ' + op_fold + file_name + '.pcap src ' + str(ip1) + ' or src ' + str(ip2)
    else:
        cmd = '"tshark" -i 1 -a duration:' + str(duration) + ' -P -w ' + op_fold + file_name + '.pcap src ' + str(ip1) + ' or src ' + str(ip2)
    p = subprocess.Popen(cmd, shell=True, stderr=subprocess.PIPE)
    while True:
        out = p.stderr.read(1)
        if out == '' and p.poll() is not None:
            break
        if out != '':
            sys.stdout.write(out)
            sys.stdout.flush()
For debugging purpose, I would like to run this function in the background by calling it as and when required and stopping it when I've got the capture. Something like:
Start a thread or a background process called wire_capture
//Do something here
Stop the thread or the background process wire_capture
By reading a bit, I realized that thread.start_new_thread() and threading.Thread() seem to be suitable only when I know the duration of the capture (an exit condition). I tried using thread.exit(), but it acted like sys.exit() and stopped the execution of the program completely. I also tried threading.Event() as follows:
if cap_flg:
    print "Starting a packet capture thread...."
    th_capture = threading.Thread(target=wire_cap, name='Thread_Packet_Capture', args=(IP1, IP2, output, 'wire_capture', 0, ))
    th_capture.setDaemon(True)
    th_capture.start()
.
.
.
.
.
if cap_flg:
    thread_kill = threading.Event()
    print "Exiting the packet capture thread...."
    thread_kill.set()
    th_capture.join()
I would like to know how I can make the process stop whenever I choose to (like an exit condition I can set so the thread terminates). The above code I tried doesn't seem to work.
The threading.Event() approach is on the right track, but you need the event to be visible in both threads, so you need to create it before you start the second thread and pass it in:
if cap_flg:
    print "Starting a packet capture thread...."
    thread_kill = threading.Event()
    th_capture = threading.Thread(target=wire_cap, name='Thread_Packet_Capture', args=(IP1, IP2, output, 'wire_capture', 0, thread_kill))
    th_capture.setDaemon(True)
    th_capture.start()
In that while loop, have the watching thread check the event on every iteration, and stop the loop (and also probably kill the tshark it started) if the event is set. You also need to make sure the thread doesn't sit waiting forever for output from the process, ignoring the termination event, by only reading from the pipe when data is available:
import select
import subprocess
import sys

def wire_cap(ip1, ip2, op_fold, file_name, duration, event):  # invoke tshark to capture traffic during session
    if duration == 0:
        cmd = '"tshark" -i 1 -P -w ' + op_fold + file_name + '.pcap src ' + str(ip1) + ' or src ' + str(ip2)
    else:
        cmd = '"tshark" -i 1 -a duration:' + str(duration) + ' -P -w ' + op_fold + file_name + '.pcap src ' + str(ip1) + ' or src ' + str(ip2)
    p = subprocess.Popen(cmd, shell=True, stderr=subprocess.PIPE)
    while not event.is_set():
        # Make sure not to block forever waiting for the process to say
        # things, so we can see if the event gets set. Only read if data
        # is available.
        readable, _, _ = select.select([p.stderr], [], [], 0.1)
        if len(readable) > 0:
            out = p.stderr.read(1)
            if out == '' and p.poll() is not None:
                break
            if out != '':
                sys.stdout.write(out)
                sys.stdout.flush()
    p.kill()
And then to actually tell the thread to stop you just set the event:
if cap_flg:
    print "Exiting the packet capture thread...."
    thread_kill.set()
    th_capture.join()
def generate_Dump_File(type_name, server_name):
    #print 'Server Name:' + server_name
    server = '/Server:' + server_name
    # Set the Node ID
    serverID = AdminConfig.getid(server)
    #print 'Server ID:' + serverID
    if serverID == "":
        print "Server Name you have entered does not exist"
    else:
        jvm = AdminControl.queryNames('type='+type_name+',process='+server_name+',*')
        print "####################################"
        print "Generating Heap Dump..................\n"
        AdminControl.invoke(jvm, 'generateHeapDump')
        print "Generating Java Core Dump..................\n"
        AdminControl.invoke(jvm, 'dumpThreads')
        print "Generating System Core Dump..................\n"
        AdminControl.invoke(jvm, 'generateSystemDump')

generate_Dump_File(type_name, server_name)
This is the code I am executing in WAS (WebSphere Application Server). I run the script from /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin/. But I need to execute this script every 120 seconds, and the script gets its input from the user, so cron is not an option.
You could loop within the script after you've done a one-time lookup of the server ID.
import time
while True:
    time.sleep(120)
    # ... existing code...
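A minimal sketch of that structure (the prompts and the call to generate_Dump_File are assumptions based on the question's script):
import time

# Hypothetical: read the inputs once, then dump every 120 seconds.
type_name = raw_input('Enter the JVM type name: ')
server_name = raw_input('Enter the server name: ')
while True:
    generate_Dump_File(type_name, server_name)
    time.sleep(120)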
I have added the code below to execute the Jython script every 120 seconds to generate dumps in IBM WebSphere:
import thread
import time

def print_time(threadName, delay):
    while 1:
        time.sleep(delay)
        print "%s: %s" % (threadName, time.ctime(time.time()))
        # execute code here

try:
    thread.start_new_thread(print_time, ("Thread", 120, ))
except:
    print "Error: unable to start thread"

while 1:
    pass
The above code works fine for me.