Run Linux shell commands in Python 3 - python-3.x

I am creating a service manager to manage services such as Apache, Tomcat, etc.
I can enable/disable services with srvmanage.sh enable <service_name> in a shell.
I want to do this from a Python script. How can I do it?
service_info = ServiceDB.query.filter_by(service_id=service_id).first()
service_name = service_info.service
subprocess.run(['/home/service_manager/bin/srvmanage.sh enable', service_name], shell=True)
What is the problem with this code?

I'm guessing that if you want to do this in Python you may want more functionality. If not, #programandoconro's answer would do. However, you could also use the subprocess module to get more functionality. It will allow you to run a command with arguments and return a CompletedProcess instance. For example:
import subprocess
# call to the shell script; replace 'arg' with the script's options/variables
process = subprocess.run(['/path/to/script', 'arg'], capture_output=True, text=True)
You can add additional functionality by capturing stderr/stdout and the return code. For example:
# call to the shell script
process = subprocess.run(['/path/to/script', 'arg'], capture_output=True, text=True)
# check the return code; a successful run should return 0
if process.returncode != 0:
    print('There was a problem')
    exit(1)
The docs for subprocess are here: https://docs.python.org/3/library/subprocess.html
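Applied to the question's script, a minimal sketch might look like the following (the service name is a placeholder; note that the script path and the enable argument are separate list items):
import subprocess

service_name = 'apache'  # placeholder service name
process = subprocess.run(
    ['/home/service_manager/bin/srvmanage.sh', 'enable', service_name],
    capture_output=True, text=True)
if process.returncode != 0:
    print('There was a problem:', process.stderr)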

You can use the os module to access system commands.
import os
os.system("srvmanage.sh enable <service_name>")

I fixed the issue:
operation = 'enable'
service_operation_script = '/home/service_manager/bin/srvmanage.sh'
service_status = subprocess.check_output(
    "sudo /bin/bash " + service_operation_script + " " + operation + " " + service_name,
    shell=True)
response = service_status.decode("utf-8")
print(response)
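For reference, the same call can be made without shell=True by passing the pieces as a list; a sketch assuming the same variables as above:
service_status = subprocess.check_output(
    ['sudo', '/bin/bash', service_operation_script, operation, service_name])
print(service_status.decode("utf-8"))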

Related

gsutil without -m multithreading / parallel default behavior

I am trying to find out what the defaults are if gsutil mv is called without the -m option. I see in the config.py source code that even without -m the default seems to be to calculate the number of CPU cores and set that along with 5 threads. So by default, if you had a 4-core machine you would get 4 processes and 5 threads, basically multi-threaded out of the box. How would we find out what -m does? I think I saw in some documentation that -m defaults to 10 threads, but how many processes are spawned? I know you can override these settings, but what's the default with -m?
should_prohibit_multiprocessing, unused_os = ShouldProhibitMultiprocessing()
if should_prohibit_multiprocessing:
    DEFAULT_PARALLEL_PROCESS_COUNT = 1
    DEFAULT_PARALLEL_THREAD_COUNT = 24
else:
    DEFAULT_PARALLEL_PROCESS_COUNT = min(multiprocessing.cpu_count(), 32)
    DEFAULT_PARALLEL_THREAD_COUNT = 5
Also, would a mv command in a for loop take advantage of -m, or will it just feed the gsutil commands one at a time, rendering the parallelism useless? I ask because the loop below took 24 hours to complete with 50,000 files, and I wanted to know whether the -m option would have helped. I'm not sure if calling the gsutil command on each iteration would allow full threading, or whether it would just run with 10 processes and 10 threads, making it twice as fast.
#!/bin/bash
for files in $(cat listing2.txt) ; do
    echo "Renaming: $files --> ${files#removeprefix-}"
    gsutil mv gs://testbucket/$files gs://testbucket/${files#removeprefix-}
done
Thanks to the commenter #guillaume blaquiere, I engineered a Python program that multiprocesses the API calls to move the files in the cloud with 25 concurrent processes. I will share the code here to hopefully help others.
import time
import subprocess
import multiprocessing


class GsRenamer:
    def __init__(self):
        self.gs_cmd = '~/google-cloud-sdk/bin/gsutil'

    def execute_jobs(self, cmd):
        try:
            print('RUNNING PARALLEL RENAME: [{0}]'.format(cmd))
            print(cmd)
            subprocess.run(cmd, check=True, shell=True)
        except subprocess.CalledProcessError as e:
            print('[{0}] FATAL: Command failed with error [{1}]'.format(cmd, e))

    def get_filenames_from_gs(self):
        self.file_list = []
        cmd = [self.gs_cmd, 'ls', 'gs://gs-bucket/jason_testing']
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
        output = p.stdout.readlines()
        for files in output:
            files = files.decode('utf-8').strip()
            tokens = files.split('/')[-1]
            self.file_list.append(tokens)
        self.file_list = list(filter(None, self.file_list))

    def rename_files(self, string_original, string_replace):
        final_rename_list = []
        for files in self.file_list:
            renamed_files = files.replace(string_original, string_replace)
            rename_command = "{0} mv gs://gs-bucket/jason_testing/{1} " \
                             "gs://gs-bucket/jason_testing/{2}".format(
                                 self.gs_cmd, files, renamed_files)
            final_rename_list.append(rename_command)
        final_rename_list.sort()
        pool = multiprocessing.Pool(processes=25)
        pool.map(self.execute_jobs, final_rename_list)


def main():
    gsr = GsRenamer()
    gsr.get_filenames_from_gs()
    # gsr.rename_files('sample', 'jason')
    gsr.rename_files('jason', 'sample')


if __name__ == "__main__":
    main()

Python os.system - set max time execution

I have a simple function which is meant to parse IPs and execute ipwhois on a Windows machine, printing the output to several txt files:
for ip in unique_ip:
    os.system('c:\whois64 -v ' + ip + ' > ' + 'C:\\ipwhois\\' + ip + '.txt')
It happens that this os.system call gets stuck and the entire process freezes.
Question: is it possible to set a maximum execution time on an os.system command?
EDIT
This works:
import shlex
import subprocess

def timeout_test():
    command_line = 'whois64 -v xx.xx.xx.xx'
    args = shlex.split(command_line)
    print(args)
    try:
        with open('c:\\test\\iptest.txt', 'w') as fp:
            subprocess.run(args, stdout=fp, timeout=5)
    except subprocess.TimeoutExpired:
        print('process ran too long')
        return True

test = timeout_test()
You can add a timeout argument to subprocess.call, which works almost the same way as os.system. The timeout is measured in seconds from when the process starts executing.
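A minimal sketch of that, reusing the question's whois64 command (the path and IP are placeholders):
import subprocess

try:
    # subprocess.call returns the command's exit code, like os.system,
    # but also accepts a timeout in seconds.
    return_code = subprocess.call('c:\\whois64 -v xx.xx.xx.xx', shell=True, timeout=5)
except subprocess.TimeoutExpired:
    print('whois64 ran too long')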

How do I make my Python program wait for the subprocess to be completed

I have a Python program which should execute a command line (the command line is a psexec command that calls a batch file on the remote server).
I used Popen to call the command line. The batch on the remote server produces a return code of 0.
Now I have to wait for this return code, and based on the return code I should continue my program execution.
I tried to use .wait() or .check_output(), but for some reason they did not work for me.
cmd = """psexec -u CORPORATE\user1 -p force \\\sgeinteg27 -s cmd /c "C:\\Planview\\Interfaces\\ProjectPlace_Sree\\PP_Run.bat" """
p = subprocess.Popen(cmd, bufsize=2048, shell=True,
stdin=subprocess.PIPE, stdout=subprocess.PIPE)
p.wait()
print(p.returncode)
##The below block should wait until the above command runs completely.
##And depending on the return code being ZERO i should continue the rest of
##the execution.
if p.returncode ==0:
result = tr.test_readXid.readQuery(cert,planning_code)
print("This is printed depending if the return code is zero")
Here is the end of the batch file execution and the return code.
Can anybody help me with this?
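For what it's worth, a common reason p.wait() appears to hang when stdout=subprocess.PIPE is set is that the pipe buffer fills up. Here is a minimal sketch using communicate(), which both waits for the process and drains the pipe (same command line as in the question):
import subprocess

cmd = """psexec -u CORPORATE\user1 -p force \\\sgeinteg27 -s cmd /c "C:\\Planview\\Interfaces\\ProjectPlace_Sree\\PP_Run.bat" """
p = subprocess.Popen(cmd, bufsize=2048, shell=True,
                     stdin=subprocess.PIPE, stdout=subprocess.PIPE)
stdout, _ = p.communicate()  # waits for psexec to exit and reads all of its output
print(p.returncode)
if p.returncode == 0:
    print("psexec finished with return code 0")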

Execute Python scripts from another Python script, opening another shell

I'm using Python 3. I need one script to call the other and run it in a different shell, without passing arguments. I'm using Mac OS X, but I need it to be cross-platform.
I tried with
os.system('script2.py')
subprocess.Popen('script2.py', shell=True)
os.execl(sys.executable, "python3", 'script2.py')
But none of them accomplish what I need.
I use the second script to get inputs, while the first one handles the outputs...
EDIT
This is the code on my second script:
import sys
import os
import datetime
os.remove('Logs/consoleLog.txt')
try:
    os.remove('Temp/commands.txt')
except:
    ...
stopSim = False
command = ''
okFile = open('ok.txt', 'w')
okFile.write('True')
consoleLog = open('Logs/consoleLog.txt', 'w')
okFile.close()
while not stopSim:
    try:
        sysTime = datetime.datetime.now()
        stringT = str(sysTime)
        split1 = stringT.split(" ")
        split2 = split1[0].split("-")
        split3 = split1[1].split(":")
        for i in range(3):
            split2.append(split3[i])
        timeString = "{0}-{1}-{2} {3}:{4}".format(split2[2], split2[1], split2[0], split2[3], split2[4])
    except:
        timeString = "Time"
    commandFile = open('Temp/commands.txt', 'w')
    command = input(timeString + ": ")
    command = command.lower()
    consoleLog.write(timeString + ': ' + command + "\n")
    commandFile.write(command)
    commandFile.close()
    if command == 'stop simulation' or command == 'stop sim':
        stopSim = True
consoleLog.close()
os.remove('Temp/commands.txt')
and this is where I call and wait for the other script to be operative in script 1:
# Open console
while not consoleOpent:
    try:
        okFile = open('ok.txt', 'r')
        c = okFile.read()
        if c == 'True':
            consoleOpent = True
    except:
        ...
Sorry for the long question...
Any suggestions to improve the code are welcome.
Probably the easiest solution is to make the contents of your second script a function in the first script, and execute it as a multiprocessing Process. Note that you can use e.g. multiprocessing.Pipe or multiprocessing.Queue to exchange data between the different processes. You can also share values and arrays via multiprocessing.sharedctypes.
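A minimal sketch of that suggestion, with script2_main standing in as a hypothetical function holding the second script's logic and a Queue carrying its result back:
import multiprocessing

def script2_main(queue):
    # stand-in for the second script's work; it hands a command back to the parent
    queue.put('stop sim')

if __name__ == '__main__':
    queue = multiprocessing.Queue()
    p = multiprocessing.Process(target=script2_main, args=(queue,))
    p.start()
    command = queue.get()  # blocks until the child sends something
    p.join()
    print('received:', command)
Note that a child Process has no terminal of its own, so anything that needs interactive input() would still have to run in a real console window.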
This will be platform-dependent. Here is a solution for Mac OS X.
Create a new file run_script2 with this content:
/full/path/to/python /full/path/to/script2.py
Make it executable: chmod +x run_script2
Run from Python with:
os.system('open -a Terminal run_script2')
Alternatively you can use subprocess.call:
subprocess.call('open -a Terminal run_script2', shell=True)
On Windows you can do something similar with (untested):
os.system('start cmd /D /C "python script2.py && pause"')

How to execute a Python or bash script through an ssh connection and get the return code

I have a Python file at the location /tmp/. This file prints something and returns with exit code 22. I'm able to run this script perfectly with PuTTY, but not with the paramiko module.
This is my execution code:
import paramiko

def main():
    remote_ip = '172.xxx.xxx.xxx'
    remote_username = 'root'
    remote_password = 'xxxxxxx'
    remote_path = '/tmp/ab.py'
    sub_type = 'py'
    commands = ['echo $?']
    ssh_client = paramiko.SSHClient()
    ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh_client.connect(remote_ip, username=remote_username, password=remote_password)
    i, o, e = ssh_client.exec_command('/usr/bin/python /tmp/ab.py')
    print o.read(), e.read()
    i, o, e = ssh_client.exec_command('echo $?')
    print o.read(), e.read()

main()
This is my Python script to be executed on the remote machine:
#!/usr/bin/python
import sys
print "hello world"
sys.exit(20)
I'm not able to understand what is actually wrong with my logic. Also, when I do cd /tmp and then ls, I'm still in the root folder.
The following example runs a command via ssh and then gets the command's stdout, stderr and return code:
import paramiko
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(hostname='hostname', username='username', password='password')
channel = client.get_transport().open_session()
command = "import sys; sys.stdout.write('stdout message'); sys.stderr.write(\'stderr message\'); sys.exit(22)"
channel.exec_command('/usr/bin/python -c "%s"' % command)
channel.shutdown_write()
stdout = channel.makefile().read()
stderr = channel.makefile_stderr().read()
exit_code = channel.recv_exit_status()
channel.close()
client.close()
print 'stdout:', stdout
print 'stderr:', stderr
print 'exit_code:', exit_code
Hope it helps.
Each time you run exec_command, a new bash subprocess is initiated.
That's why when you run something like:
exec_command("cd /tmp");
exec_command("mkdir hello");
the directory "hello" is created in the home directory, not inside /tmp.
Try to run several commands in the same exec_command call.
A different way is to use python's os.chdir()
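A minimal sketch combining both answers: chain the commands in a single exec_command call and read the exit status from the stdout channel (host and credentials are placeholders):
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(hostname='172.xxx.xxx.xxx', username='root', password='xxxxxxx')

# both commands run in the same shell, so the cd affects the mkdir
stdin, stdout, stderr = client.exec_command('cd /tmp && mkdir hello && ls')
print(stdout.read().decode())
exit_code = stdout.channel.recv_exit_status()  # waits for the command to finish
print('exit_code:', exit_code)
client.close()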
