Disconnect subprocess from main process - linux

I have a Python 3 script which, among other things, launches Chrome with certain command-line parameters. The relevant portion of the code looks like this:
import multiprocessing as mp
from subprocess import call
import time
import logging

def launch_tab(datadir, url):
    # Constructs the command line and launches Chrome via subprocess.call()
    ...

def open_browser(urls):
    '''Arranges for Chrome to be launched in a subprocess with the specified
    URLs, using the already-configured profile directory.

    urls: a list of URLs to be loaded one at a time.'''
    first_run = True
    for tab in urls:
        logging.debug('open_browser: {}'.format(tab))
        proc = mp.Process(name=tab, target=launch_tab, args=(config.chromedir, tab))
        proc.start()
        if first_run:
            first_run = False
            time.sleep(10)
        else:
            time.sleep(0.5)
What Happens
When I run the script with Chrome already running as launched by the script:
Chrome launches as expected, sees that it is already running, follows the instructions provided on the command line, then terminates the process started on the command line.
Since the process is now terminated, my script also terminates. This is the behavior I want.
When I run the script while Chrome is not running:
Chrome sees that it is not already running, and so doesn't terminate the process started by the command line.
Because the process hasn't terminated, my script doesn't exit until Chrome does, despite its having nothing to do. I have to remember to place the script in the background.
What I Want
I want the subprocess which launches Chrome to be completely independent of the main process, such that I get my command prompt back immediately after Chrome is launched. After launching Chrome, my script has completed its job. I thought of using the daemonize module, but it apparently doesn't work with Python 3.
I'm not wedded to the multiprocessing module. Any reasonable approach which produces the desired end result is acceptable. Switching to Python 2 so I can try daemonize would be too difficult.
The script will only ever run on Linux, so portability isn't an issue.
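One stdlib-only approach that should produce this behavior is to launch the child in its own session with `start_new_session=True` and never wait on it; the command line below is illustrative, not the question's exact one:

```python
import subprocess

def launch_detached(cmd):
    """Launch cmd in its own session so the script can exit immediately.

    A sketch: nothing waits on the child and no pipes tie it to us, so the
    prompt comes back as soon as the child is spawned.
    """
    return subprocess.Popen(
        cmd,
        stdin=subprocess.DEVNULL,
        stdout=subprocess.DEVNULL,   # no pipes held open to the child
        stderr=subprocess.DEVNULL,
        start_new_session=True,      # setsid(): leave our session and process group
    )

# Illustrative only; the real flags come from the question's launch_tab():
# launch_detached(['google-chrome', '--user-data-dir=' + config.chromedir, url])
```

Because the script never calls wait() and holds no pipes, it can exit right after the loop even when Chrome keeps running; the orphaned Chrome process is reparented to init.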

Related

Terminate subprocess created with shell=True and its children on Windows

A subprocess is being created that requires use of the internal cmd.exe command start, so a shell is required.
>>> p = subprocess.Popen(['start', 'MCC', '/wait'], shell=True)
The 'MCC' takes the place of the title parameter of the start command, and tells the Windows Console Host to load the settings for the console from the registry at HKCU\Console\MCC to open the customized console. Once opened via the Popen interface, the console remains after a call to terminate:
>>> p.poll()
>>> p.terminate()
>>> p.poll()
1
My initial diagnosis is that the call to terminate is terminating the subprocess correctly, but it's "invisible", and the console that is showing is another process created by start that is not directly accessible from the variable p. I have been successful in terminating the program and closing the console, but only by running a secondary subprocess:
>>> pp = subprocess.Popen(['taskkill', '/f', '/pid', f'{p.pid}', '/t'])
While this works and is documented, I'd ideally like to avoid invoking another process that could lead to zombie issues, which leaves me wondering how to terminate a subprocess p and any children it may have created.
For reference I have reviewed the following:
How to terminate a python subprocess launched with shell=True: Linux resolution, gave me the taskkill idea
Environment details:
Windows 10
CPython 3.6.0
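For comparison, the Linux resolution referenced above signals a whole process group rather than a single PID. A minimal sketch of that pattern (POSIX only, so not a direct answer for Windows) looks like this:

```python
import os
import signal
import subprocess

# Start the shell in a new session so it and any children it spawns share
# one process group, whose ID equals the shell's PID.
p = subprocess.Popen('sleep 30 & sleep 30', shell=True,
                     start_new_session=True)

# Signal the entire group; the shell's children go down with it.
os.killpg(os.getpgid(p.pid), signal.SIGTERM)
p.wait()
print(p.returncode)   # negative signal number: -15 for SIGTERM
```

On Windows there is no direct killpg equivalent, which is why answers there typically reach for taskkill /t or a process-tree walk instead.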

How can I stop and start Windows services using subprocess with admin permissions?

I am building a Python tool to update an application. To do so I have to stop the Apache service, do some update-related work, and then start it again after the update is finished.
I'm currently using Python 3.7.2 on Windows 10.
I have tried to somehow build a working process using these questions as a reference:
Run process as admin with subprocess.run in python
Windows can't find the file on subprocess.call()
Python subprocess call returns "command not found", Terminal executes correctly
def stopApache():
    processName = config["apacheProcess"]
    #stopstr = f'stop-service "{processName}"'
    # the line above should be used once I'm finished, but for testing
    # purposes I'm going with the one below
    stopstr = 'stop-service "Apache2.4(x64)"'
    print(stopstr)
    try:
        subprocess.run(stopstr, shell=True)
        #subprocess.run(stopstr)
        # the commented line here is for explanatory purposes, and also to
        # show where I started.
    except:
        print('subprocess failed', sys.exc_info())
        stopExecution()
From what I have gathered so far, the shell=True option is a must, since Python does not check PATH otherwise.
Given the nature of what I'm trying to do, I expected the service to get stopped. In reality the console error looks like this:
stopstr = get-service "Apache2.4(x64)"
Der Befehl "get-service" ist entweder falsch geschrieben oder
konnte nicht gefunden werden.
Which roughly translates to : the command "stop-service" is either incorrectly spelled or could not be found.
If I run the same command directly in PowerShell I get a similar error. If I open the shell as admin and run it again, everything works fine.
But if I use that very same admin shell to run the Python code, I am back to square one.
What am I missing here? Apparently there is some issue with permissions, but I cannot wrap my head around it.
The command for stopping a service in MS Windows is net stop [servicename]
So change stopstr to 'net stop "Apache2.4(x64)"'
You will need to run your script as admin.
shell=True runs commands against cmd, not powershell.
Powershell is different to the regular command line. So to get stop-service to work you'd have to pass it to an instance of powershell.
stopstr = 'stop-service \\"Apache2.4(x64)\\"'
subprocess.run(['powershell', stopstr], shell=True)
As you noted in a comment, it's necessary to escape the " around the service name, as first Python will unescape them, then PowerShell will use them.
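One way to sidestep the manual escaping is to pass a real argument list and let Python build the Windows command line itself; `subprocess.list2cmdline` (the routine Popen uses on Windows) shows the quoting it would apply. The service name here is just the question's example:

```python
import subprocess

args = ['powershell', '-Command', 'Stop-Service -Name "Apache2.4(x64)"']

# list2cmdline is what Popen uses on Windows to join an argument list into
# a single command line, escaping embedded quotes for us.
print(subprocess.list2cmdline(args))
```

The printed line shows the inner quotes already escaped as \" for you, which is exactly what the manual \\" trick in the answer was reproducing by hand.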

How to run two shell scripts at startup?

I am working with Ubuntu 16.04 and I have two shell scripts:
run_roscore.sh : This one fires up a roscore in one terminal.
run_detection_node.sh : This one starts an object detection node in another terminal and should start up once run_roscore.sh has initialized the roscore.
I need both the scripts to execute as soon as the system boots up.
I made both scripts executable and then added the following command to cron:
@reboot /path/to/run_roscore.sh; /path/to/run_detection_node.sh, but it is not running.
I have also tried adding both scripts to the Startup Applications using this command for roscore: sh /path/to/run_roscore.sh and following command for detection node: sh /path/to/run_detection_node.sh. And it still does not work.
How do I get these scripts to run?
EDIT: I used the following command to see the system log for the CRON process: grep CRON /var/log/syslog and got the following output:
CRON[570]: (CRON) info (No MTA installed, discarding output).
So I installed MTA and then systemlog shows:
CRON[597]: (nvidia) CMD (/path/to/run_roscore.sh; /path/to/run_detection_node.sh)
I am still not able to see the output (which is supposed to be a camera stream with detections, as I see it when I run the scripts directly in a terminal). How should I proceed?
Since I got this working eventually, I am gonna answer my own question here.
I did the following steps to get the script running from startup:
Changed the type of the script from shell to bash (extension .bash).
Changed the shebang statement to be #!/bin/bash.
In Startup Applications, give the command bash path/to/script to run the script.
Basically, when I changed the shell type from sh to bash, the script started running as soon as the system boots up.
Note, in case this helps someone: My intention to have run_roscore.bash as a separate script was to run roscore as a background process. One can run it directly from a single script (which is also running the detection node) by having roscore& as a command before the rosnode starts. This command will fire up the master as a background process and leave the same terminal open for following commands to be executed.
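The single-script pattern described in the note can be sketched like this; stand-in commands replace roscore and the detection node, so substitute the real ones:

```shell
#!/bin/bash
# Sketch of the combined startup script; stand-ins replace the real commands.
sleep 2 &                  # 'roscore' stand-in, fired up as a background process
master_pid=$!
echo "master running with PID $master_pid"
sleep 1                    # give the master a moment to initialize
echo "detection node starts here"   # stand-in for run_detection_node.sh
```

The & backgrounds the master while $! captures its PID, and the same shell then moves on to the follow-up commands, which matches the roscore& behavior the note describes.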
If you can install immortal, you can use the require option to start your services in sequence. For example, this could be the run config for /etc/immortal/script1.yml:
cmd: /path/to/script1
log:
  file: /var/log/script1.log
wait: 1
require:
  - script2
And for /etc/immortal/script2.yml
cmd: /path/to/script2
log:
  file: /var/log/script2.log
This will try to start both scripts at boot time; script1 will wait 1 second before starting and will also wait for script2 to be up and running. See more about the wait and require options here: https://immortal.run/post/immortal/
Depending on your operating system you will need to configure/set up immortaldir; here is how to do it for Linux: https://immortal.run/post/how-to-install/
Going more deep in the topic of supervisors there are more alternatives here you could find some: https://en.wikipedia.org/wiki/Process_supervision
If you want to make sure that "Roscore" (whatever it is) gets started when your Ubuntu starts up then you should start it as a service (not via cron).
See this question/answer.

Execute Long running jobs from bottle web server

What I am trying to do
I have a front end system that is generating output. I am accessing this data (JSON) with a POST request using bottle. My POST handler receives the JSON without issue. I need to execute a backend Python program (Blender automation) and pass this JSON data to that program.
What I have tried to do
Subprocess - Using subprocess, call the program and pass the input. It appears to execute, but when I check System Monitor the program has not started, though my server continues to run as it should. This subprocess command runs perfectly fine when executed independently from the server.
blender, script, and json are all string objects with absolute file paths
sub = subprocess.Popen([blender + " -b -P " + script + " -- " + json], stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=False)
C-style os.fork() - Same result as above; reading the docs I found that subprocess operates using these methods under the hood.
Double fork - From a posting on here, I tried forking from the server, calling subprocess from that fork, and terminating the parent of the subprocess to create an orphan. My subprocess command still does not execute and never shows up in System Monitor.
What I need
I need a solution that will run from the bottle server in its own process. It will handle multiple requests so the subprocess cannot block in the server. The process being called is fully automated and just requires sending the JSON data in the execution command. The result of the subprocess program will be string path to a file created on the server.
The above subprocess works perfectly fine when called from my test driver program. I just need to connect the execution to the webservice so my front end can trigger its execution.
My bottle post method - prints json when called without issue.
@post('/getData')
def getData():
    json_text = request.json
    print(json_text)
I am not sure where to go from here. From what I have read thus far, subprocess should work. Any help or suggestions would be very much appreciated. If additional information is needed please let me know. I will edit with more details. Thank you.
Relevant Information:
OS: Ubuntu 16.04 LTS,
Python 3.x
*EDIT
This isn't an elegant solution but my subprocess call works now.
cmd = blender
cmd += " -b -P "
cmd += script
cmd += " -- "
cmd += str(json)
sub = subprocess.Popen([cmd], shell=True)
It seems that setting shell=True and removing stdout=PIPE and stderr=PIPE let me see the output, where I was throwing an unhandled exception because my json data was a list and not a string.
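A variant that avoids shell=True entirely is to pass a real argument list, so neither the paths nor the JSON payload need any shell quoting. The values below are placeholders standing in for the question's blender/script/json variables:

```python
import subprocess

# Placeholder values standing in for the question's variables.
blender = '/usr/bin/blender'
script = '/path/to/automation.py'
payload = '{"frames": [1, 2, 3]}'

# A real argument list: no shell involved, so nothing needs quoting,
# and the JSON string arrives as a single argv entry.
cmd = [blender, '-b', '-P', script, '--', payload]
print(cmd)
# sub = subprocess.Popen(cmd)   # uncomment to actually launch
```

This keeps each argument intact regardless of spaces or quotes inside it, which sidesteps the list-vs-string confusion that caused the original exception.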
When using Python to execute your scripts, a process created by subprocess.Popen unintentionally inherits and keeps open the parent's file descriptors.
You need to close those so the process can run independently (close_fds=True).
subprocess.Popen(['python', "-u", Constant.WEBAPPS_FOLDER + 'convert_file.py', src, username], shell=False, bufsize=-1, close_fds=True)
Also, you don't have to use the shell to create another process. It might have unintended consequences.
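Putting this answer's points together, a minimal sketch of launching the worker so it runs independently of the server might look like the following (the command line is a stand-in, not the question's Blender invocation):

```python
import subprocess
import sys

def launch_worker(args):
    """Sketch: start a worker that runs independently of the server."""
    return subprocess.Popen(
        args,
        close_fds=True,              # don't let the worker inherit server descriptors
        stdout=subprocess.DEVNULL,   # an unread PIPE can fill up and stall the worker
        stderr=subprocess.DEVNULL,
        start_new_session=True,      # detach from the server's process group
    )

# Stand-in for the blender command line from the question.
proc = launch_worker([sys.executable, '-c', 'print("worker ran")'])
```

Sending output to DEVNULL instead of PIPE matters for long-running jobs: with PIPE, a worker that prints enough to fill the pipe buffer will block until someone reads it.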
I had the exact same problem where bottle was not returning/hangs. It works now.

After running java app in linux script then return console

There is a Linux script that contains a statement used to run a Java application.
Script (runServer.sh) is like:
java ServerApp &
Since the Java application is a server, it keeps running until it is stopped. Therefore, after running runServer.sh, the console does not return automatically and keeps waiting for the return key to be pressed.
The same problem causes a remote script call via the Runtime API to wait forever.
proc = rt.exec(runScript);
exitVal = proc.waitFor();
Even when running the remote script via ssh, say from machine1, Ctrl+C has to be used to exit from the remote script execution.
When I insert the following statement into runServer.sh, the problem is resolved. But in that case I could not write the process id into a file via "echo $? >pid":
exec > /tmp/outlog.txt 2>&1
Is there a way to return the console automatically by modifying the Linux script?
Change the script to:
nohup java ServerApp &
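A runServer.sh along those lines, which also records the background job's PID, might look like the sketch below; a short sleep stands in for java ServerApp so it can run anywhere:

```shell
#!/bin/bash
# Sketch of runServer.sh; 'sleep 1' stands in for 'java ServerApp'.
nohup sleep 1 > /tmp/outlog.txt 2>&1 &   # immune to hangup, console returns at once
echo $! > /tmp/server.pid                # $! is the PID of the background job
echo "server pid: $(cat /tmp/server.pid)"
```

Note the use of $! rather than $?: $! holds the PID of the most recent background job, while $? is only the exit status of the last command.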
