I have a short Python script (main.py):
#!/usr/bin/env python3
import os
import subprocess

print(os.getpid())
os.execvp("ls", ["ls", "-a"])
print("hello")
When I run it, I can see the terminal output of os.getpid() and of the command run by os.execvp, but not of print("hello").
However, when I have another file (another.py) with the content:
#!/usr/bin/env python3
print("hello")
And then change main.py to be:
#!/usr/bin/env python3
import os
import subprocess

print(os.getpid())
os.execvp("python3", ["python3", "another.py"])
Then I can see the output of both os.getpid() and print("hello").
What is the idea behind execvp?
The idea is that the exec* family replaces the current process image with the new program, so nothing after a successful exec* call ever runs; the classic pattern is to fork first, exec in the child, and wait in the parent. A very simple script that illustrates fork, exec and wait:
import os

print('this will run once')

pid = os.fork()
# fork duplicates the current process; from this point on, two processes run this code

if pid < 0:
    print('error forking')
    exit()

print('this will run twice')

if pid == 0:
    # we are inside the child process
    print('hello from child')
    os.execvp("echo", ["echo", "hello from echo"])
    print('this will not run because the child process has been completely replaced by the echo process')
else:
    os.wait()  # wait for the child process to exit
    print('hello from parent')
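This also explains the original question: in main.py, print("hello") never runs because os.execvp("ls", ...) replaces the Python process with ls, while the second version prints "hello" because the replacement python3 process runs another.py, which prints it. If the goal is simply to run ls and then continue, a minimal sketch using the subprocess module (already imported in the question) would be:

import subprocess

print("before ls")
subprocess.run(["ls", "-a"])  # runs ls as a child process and waits for it to finish
print("hello")                # still runs: the current process was never replaced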
I wrote a Python 3 script (shown below, repo here https://gitlab.com/papiris/gcode-motor-stutter-generator)
After I execute it on Linux (Raspberry Pi OS bullseye 32-bit) and either exit by ctrl+c or let it finish, I can't see what I type in that terminal tab anymore. The terminal (KDE Konsole) responds to commands; the text just isn't visible. I can open a new terminal tab and keep working, but the terminal tabs I run this script in never show the text I input again.
Why is this, and how can I fix it?
I tried searching for this topic, but couldn't find anything similar.
#!/usr/bin/env python3
import sys
from sys import stdin
from curtsies import Input
from threading import Thread
from queue import Queue, Empty

### non-blocking read of stdin
def enqueue_input(stdin, queue):
    try:
        with Input(keynames='curses') as input_generator:
            for _input in iter(input_generator):
                queue.put(_input)
    except KeyboardInterrupt:
        sys.exit(1)

q = Queue()
t = Thread(target=enqueue_input, args=(stdin, q))
t.daemon = True  # thread dies with the program
t.start()

def main():
    while True:
        try:
            input_key = q.get(timeout=2)
        except Empty:
            print('printing continuously')
        else:
            if input_key == 'n':
                print('extrusion loop stopped, moving on')
                break

if __name__ == "__main__":
    main()
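A likely cause: curtsies' Input context manager switches the terminal out of canonical mode and disables echo, and because it runs in a daemon thread, the with block's cleanup never executes when the program exits, leaving the terminal in that state (running stty sane or reset in the affected tab restores it). A minimal sketch of a defensive fix, saving and restoring the terminal attributes around the program, assuming a POSIX terminal:

import sys
import termios

fd = sys.stdin.fileno()
saved_attrs = termios.tcgetattr(fd)  # snapshot the terminal state before the input thread starts
try:
    main()
finally:
    # restore echo and canonical mode however the program exits
    termios.tcsetattr(fd, termios.TCSADRAIN, saved_attrs)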
I have 2 scripts - one is running, and when I press a button, a second script is launched.
That works, but I would like to kill the second script when I release the button again.
The killing of the second script does not seem to be working.
Below is the code of script 1:
import RPi.GPIO as GPIO
import time
from time import sleep
import subprocess, os
from subprocess import check_call

buttonPin = 5

GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)  # suppress warning messages
GPIO.setup(buttonPin, GPIO.IN)  # initialize button pin; we have a physical 10k resistor, so no internal pull-up is required

try:
    run = 0
    while True:
        if GPIO.input(buttonPin) == 0 and run == 0:
            print("Button pressed - start second script")
            subprocess.call(['python3', "2ndPython.py"])
            run = 1
            while GPIO.input(buttonPin) == 0:
                time.sleep(0.01)
        if GPIO.input(buttonPin) == 1 and run == 1:
            run = 0
            print("Button NOT pressed - kill second script")
            check_call(["pkill", "-9", "-f", "python3", "2ndPython.py"])  # stop script
            while GPIO.input(buttonPin) == 1:
                time.sleep(0.01)
except KeyboardInterrupt:
    GPIO.cleanup()
Below is the code of my second script, the one I would like to kill when I release the button:
import time
from time import sleep

def main():
    count = 0
    while True:
        count = count + 1
        # do whatever the script does
        print("You have started this program X " + str(count) + " times now")
        time.sleep(2)

if __name__ == "__main__":
    main()
Can't seem to find why the second script is not killed.
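Two things stand out here. First, subprocess.call blocks until the child exits, so script 1 never gets past the launch line while 2ndPython.py is running, and the release branch is never reached. Second, pkill takes a single pattern, so passing "python3" and "2ndPython.py" as separate arguments does not do what is intended. A minimal sketch of the usual fix, keeping a handle to the child with subprocess.Popen so the loop keeps running and the parent can terminate the child directly (GPIO setup as in the question):

import subprocess
import time

proc = None
while True:
    if GPIO.input(buttonPin) == 0 and proc is None:
        print("Button pressed - start second script")
        proc = subprocess.Popen(['python3', '2ndPython.py'])  # returns immediately, does not block
    elif GPIO.input(buttonPin) == 1 and proc is not None:
        print("Button released - kill second script")
        proc.terminate()  # or proc.kill() for SIGKILL
        proc.wait()       # reap the child so no zombie is left behind
        proc = None
    time.sleep(0.01)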
I have an interesting set of requirements that I am trying to meet using the Python subprocess module and docker-compose. This whole setup is possible in one docker-compose, but due to the requirements this is what I would like to set up:
1. Call docker-compose using the Python subprocess module to start the test servers.
2. Print all the stdout of the running docker-compose command.
3. As soon as the test server is up and running via docker-compose, call the testing scripts for that server.
This is what my docker-compose.py looks like:
import subprocess
from subprocess import PIPE
import os
from datetime import datetime

class MyLog:
    def my_log(self, message):
        date_now = datetime.today().strftime('%d-%m-%Y %H:%M:%S')
        print("{0} || {1}".format(date_now, message))

class DockercomposeRun:
    log = MyLog()

    def __init__(self):
        dir_name, _ = os.path.split(os.path.abspath(__file__))
        self.dirname = dir_name

    def run_docker_compose(self, filename):
        command_name = ["docker-compose", "-f", self.dirname + filename, "up"]
        popen = subprocess.Popen(command_name, stdin=PIPE, stdout=PIPE, stderr=PIPE, universal_newlines=True)
        return popen
Now, in my test.py, as soon as stdout goes blank I would like to break the printing loop and run the rest of the tests in test.py:
docker_compose_run = DockercomposeRun()
rc = docker_compose_run.run_docker_compose('/docker-compose.yml.sas-viya-1')

for line in iter(rc.stdout.readline, ''):
    print(line, end='')
    if line == '':
        break

rc.stdout.close()

# start here actual test cases
.......
But for me the loop is never broken, even though the stdout of docker-compose goes blank after the server is up and running, and the test cases are never executed.
Is this the right approach, or how can I achieve this?
I think the issue here is that you are not running docker-compose in detached mode, so it blocks the application. Can you try adding "-d" to command_name?
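Building on that suggestion: iter(rc.stdout.readline, '') only stops at end-of-file, which happens when docker-compose exits, not when its output pauses, so the loop never breaks while the servers keep running. Note also that docker-compose writes much of its status output to stderr, and since stderr=PIPE is never read, the pipe buffer can fill up and stall the child. A hedged sketch of running compose detached and then polling for readiness (the compose file name, URL, and port are assumptions to adapt to the real service):

import subprocess
import time
import urllib.request

# start the services in the background; "up -d" returns once the containers are started
subprocess.run(["docker-compose", "-f", "docker-compose.yml", "up", "-d"], check=True)

# poll a hypothetical health endpoint until the server answers
for _ in range(60):
    try:
        with urllib.request.urlopen("http://localhost:8080/health", timeout=2):
            break  # server is up, start the tests
    except OSError:  # connection refused, timeout, etc.
        time.sleep(2)
else:
    raise RuntimeError("test server did not become ready in time")

# start here actual test cases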
I want to capture all the output that a subprocess prints into variables. Here is my code:
#!/usr/bin/env python3

import subprocess  # Subprocess management
import sys         # System-specific parameters and functions

try:
    args = ["svn", "info", "/directory/that/does/not/exist"]
    output = subprocess.check_output(args).decode("utf-8")
except subprocess.CalledProcessError as e:
    error = "CalledProcessError: %s" % str(e)
except:
    error = "except: %s" % str(sys.exc_info()[1])
else:
    pass
This script still prints this into the terminal:
svn: E155007: '/directory/that/does/not/exist' is not a working copy
How can I capture this into a variable?
check_output only captures stdout, NOT stderr (see https://docs.python.org/3.6/library/subprocess.html#subprocess.check_output).
In order to capture stderr as well, redirect it into stdout with stderr=subprocess.STDOUT, as in the example from the docs:

>>> subprocess.check_output(
...     "ls non_existent_file; exit 0",
...     stderr=subprocess.STDOUT,
...     shell=True)

I recommend reading the docs prior to asking here, by the way.
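Applied to the svn example from the question, a minimal sketch: with the streams merged, a failing command raises CalledProcessError, and the exception's output attribute holds everything the command printed:

import subprocess

args = ["svn", "info", "/directory/that/does/not/exist"]
try:
    output = subprocess.check_output(args, stderr=subprocess.STDOUT).decode("utf-8")
except subprocess.CalledProcessError as e:
    error = e.output.decode("utf-8")  # contains the svn: E155007 message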
I'm curious why the code below freezes. When I kill the python3 interpreter, the "cat" process remains as a zombie. I expected the subprocess to be terminated before the main process finished.
When I manually send SIGTERM to cat /dev/zero, the process is finished correctly (almost immediately).
#!/usr/bin/env python3
import logging
import subprocess
import sys
import time
from PyQt4 import QtCore

class Command(QtCore.QThread):
    # stateChanged = QtCore.pyqtSignal([bool])

    def __init__(self):
        QtCore.QThread.__init__(self)
        self.__runned = False
        self.__cmd = None
        print("initialize")

    def run(self):
        self.__runned = True
        self.__cmd = subprocess.Popen(["cat /dev/zero"], shell=True, stdout=subprocess.PIPE)
        try:
            while self.__runned:
                print("reading via pipe")
                buf = self.__cmd.stdout.readline()
                print("Buffer:{}".format(buf))
        except:
            logging.warning("Can't read from subprocess (cat /dev/zero) via pipe")
        finally:
            print("terminating")
            self.__cmd.terminate()
            self.__cmd.kill()

    def stop(self):
        print("Command::stop stopping")
        self.__runned = False
        if self.__cmd:
            self.__cmd.terminate()
            self.__cmd.kill()
        print("Command::stop stopped")

def exitApp():
    command.stop()
    time.sleep(1)
    sys.exit(0)

if __name__ == "__main__":
    app = QtCore.QCoreApplication(sys.argv)
    command = Command()
    # command.daemon = True
    command.start()

    timer = QtCore.QTimer()
    QtCore.QObject.connect(timer, QtCore.SIGNAL("timeout()"), exitApp)
    timer.start(2 * 1000)

    sys.exit(app.exec_())
As you noted yourself, the reason for the zombie is that the signal is caught by the shell and doesn't affect the process created by it. However, there is a way to kill the shell and all the processes it created: you have to use the process group feature. See How to terminate a python subprocess launched with shell=True. Having said that, if you can manage without shell=True, that's always preferable - see my answer here.
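A minimal sketch of that process-group approach, using only the standard library and the command from the question:

import os
import signal
import subprocess

# start the shell in its own process group so the group can be signalled as a whole
proc = subprocess.Popen("cat /dev/zero", shell=True,
                        stdout=subprocess.PIPE,
                        preexec_fn=os.setsid)

# later: send SIGTERM to the whole group (the shell and cat)
os.killpg(os.getpgid(proc.pid), signal.SIGTERM)
proc.wait()  # reap the child so no zombie is left behind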
I solved this problem in a different way, so here's the result:
I had to call subprocess.Popen with shell=False, because otherwise it creates 2 processes (the shell and the process), and __cmd.kill() sends the signal to the shell while the process remains as a zombie.
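In the code above that means passing the command as a list instead of a string; a one-line sketch:

self.__cmd = subprocess.Popen(["cat", "/dev/zero"], stdout=subprocess.PIPE)
# no intermediate shell is created, so terminate()/kill() signal cat directly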