subprocess.popen-process stops running while using it with SMACH - python-3.x

I'm simply trying to start a rosbag-command from python in a SMACH. I figured out that one way to do so is to use subprocesses. My goal is that as soon as the rosbag starts, the state machine transitions to state T2 (and stays there).
However, when I start a rosbag via subprocess.Popen inside a SMACH state and then run rostopic echo 'topic', the rosbag at first publishes data properly, then suddenly stops publishing. Only when I end the SMACH with Ctrl+C does the rosbag publish some more data before it stops as well.
Is there a reasonable explanation for this (did I maybe miss a parameter, or is it just not possible to keep the node running that way)? Or is there a better way to start the rosbag and let it run in the background?
(By the way, some other commands, such as certain roslaunch commands, also appear to stop working after being started via subprocess.Popen!)
My code looks as follows:
#!/usr/bin/env python3
import os
import signal
import subprocess
import smach
import smach_ros
import rospy
import time
from gnss_navigation.srv import *
class t1(smach.State):
    def __init__(self, outcomes=['successful', 'failed', 'preempted']):
        smach.State.__init__(self, outcomes)

    def execute(self, userdata):
        if self.preempt_requested():
            self.service_preempt()
            return 'preempted'
        try:
            process1 = subprocess.Popen('rosbag play /home/faps/bags/2020-05-07-11-18-18.bag',
                                        stdout=subprocess.PIPE, shell=True, preexec_fn=os.setsid)
        except Exception:
            return 'failed'
        return 'successful'

class t2(smach.State):
    def __init__(self, outcomes=['successful', 'failed', 'preempted']):
        smach.State.__init__(self, outcomes)

    def execute(self, userdata):
        # time.sleep(2)
        if self.preempt_requested():
            self.service_preempt()
            return 'preempted'
        return 'successful'

if __name__ == "__main__":
    rospy.init_node('test_state_machine')
    sm_1 = smach.StateMachine(outcomes=['success', 'error', 'preempted'])
    with sm_1:
        smach.StateMachine.add('T1', t1(), transitions={'successful': 'T2', 'failed': 'error'})
        smach.StateMachine.add('T2', t2(), transitions={'successful': 'T2', 'failed': 'error', 'preempted': 'preempted'})

    # Execute SMACH plan
    outcome = sm_1.execute()
    print('exit-outcome:' + outcome)

    # Wait for ctrl-c to stop the application
    rospy.spin()

As explained in the comment section of the accepted answer in this thread, the problem occurs when using subprocess.PIPE as stdout: once the OS pipe buffer fills up and nothing reads from it, the child process blocks on its next write.
Therefore, the two possible solutions I used to solve the problem are:
If you don't care about the print-outs, use devnull as output:
FNULL = open(os.devnull, 'w')
process = subprocess.Popen('your command', stdout=FNULL, stderr=subprocess.STDOUT,
                           shell=True, preexec_fn=os.setsid)
If you do need the print-outs, create a log file and use it as output:
log_file = open('path_to_log/log.txt', 'w')
process = subprocess.Popen('your command', stdout=log_file, stderr=subprocess.STDOUT,
                           shell=True, preexec_fn=os.setsid)
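A third option, if you want to consume the output in the parent process, is to keep stdout=subprocess.PIPE but drain the pipe from a background thread so the buffer never fills. This is a sketch, not from the original post; the child command here is a stand-in for the rosbag that simply writes a lot of output:

```python
import subprocess
import sys
import threading

def drain(pipe, sink):
    # Read until EOF so the child never blocks on a full pipe buffer.
    for line in iter(pipe.readline, b''):
        sink.append(line)
    pipe.close()

# Stand-in for the long-running command: a child producing ~1 MB of output,
# far more than a typical 64 KiB pipe buffer can hold.
proc = subprocess.Popen([sys.executable, '-c', "print('x' * 1000000)"],
                        stdout=subprocess.PIPE)

lines = []
reader = threading.Thread(target=drain, args=(proc.stdout, lines), daemon=True)
reader.start()

proc.wait()     # safe: the reader thread keeps emptying the pipe
reader.join()
output = b''.join(lines)
```

Without the reader thread, proc.wait() on this child would deadlock once the pipe filled, which is exactly the stall described above.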

Related

Why can't I see terminal input (stdout) in Linux after executing this Python3 script?

I wrote a Python3 script (shown below; repo here: https://gitlab.com/papiris/gcode-motor-stutter-generator).
After I run it on Linux (Raspberry Pi OS Bullseye, 32-bit) and either exit with Ctrl+C or let it finish, I can no longer see what I type in that terminal tab. The terminal (KDE Konsole) still responds to commands; the text just isn't visible. I can open a new terminal tab and keep working, but the tabs I ran this script in never show my input again.
Why is this, and how can I fix it?
I tried searching for this topic, but couldn't find anything similar.
#!/usr/bin/env python3
import sys  # needed for sys.exit below
from sys import stdin
from curtsies import Input
from threading import Thread
from queue import Queue, Empty

### non-blocking read of stdin
def enqueue_input(stdin, queue):
    try:
        with Input(keynames='curses') as input_generator:
            for _input in iter(input_generator):
                queue.put(_input)
    except KeyboardInterrupt:  # was: keyboardInterrupt (a NameError)
        sys.exit(1)

q = Queue()
t = Thread(target=enqueue_input, args=(stdin, q))
t.daemon = True  # thread dies with the program
t.start()

def main():
    while True:
        try:
            input_key = q.get(timeout=2)
        except Empty:
            print('printing continuously')
        else:
            if input_key == 'n':
                print('extrusion loop stopped, moving on')
                break

if __name__ == "__main__":
    main()

Killing all processes and threads in python3.X

I'm writing a UI wrapper for reading some info using esptool.py.
I have two active threads: the UI and a processing thread, SerialReader.
The UI class has a reference to the SerialReader and should stop it when it gets the exit command.
The problem is that I call an esptool command which gets stuck trying to read data over the serial connection.
class SerialReaderProcess(threading.Thread):
    def __init__(self, window):
        super().__init__()
        self.window = window
        self.logger = window.logger
        self.window.set_thread(self)
        self._stop_event = threading.Event()

    def run(self):
        ...
        # read chip id
        esptool.main(['chip_id'])
        ...

    def stop(self):
        self._stop_event.set()

    def stopped(self):
        return self._stop_event.is_set()
What I want is to kill all active processes of this program. When I close the UI and call serialReaderProcess.stop(), it doesn't stop the process; I can still see the output of esptool on the console.
I don't care if I interrupt anything, no data can be corrupted.
I've tried sys.exit(0) to no avail.
I've researched the problem but couldn't find a solution.
The OS is Ubuntu. I don't care about cross-platform features, but they would be nice.
First, import the os library:
import os
Then you can write the following code in your closeEvent method:
def closeEvent(self, event):
    output, errors = p1.communicate()
    bashCommand = "killall python3"
    sudoPassword = 'your password'
    p = os.system('echo %s|sudo -S %s' % (sudoPassword, bashCommand))
As stated in comments, setting the thread as Daemon solved the problem:
super().__init__(daemon=True)
Daemon threads are automatically killed when the program quits.
More about daemons:
Daemon Threads Explanation
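The daemon flag can be seen in a minimal stdlib-only sketch (the worker function here is illustrative, not part of the original program):

```python
import threading
import time

def background_work():
    # Simulates a worker that never exits on its own,
    # like a thread blocked on serial I/O.
    while True:
        time.sleep(0.1)

t = threading.Thread(target=background_work, daemon=True)
t.start()

# A daemon thread does not block interpreter shutdown: when the main
# thread returns, the process exits even though t is still running.
print(t.daemon)     # → True
print(t.is_alive())  # → True
```

This is why `super().__init__(daemon=True)` in the thread subclass above lets the program quit cleanly even while esptool is stuck reading.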

Python subprocess for docker-compose

I have an interesting set of requirements that I'm trying to meet using the Python subprocess module and docker-compose. The whole setup is possible in one docker-compose file, but due to a requirement this is what I would like to set up:
call docker-compose using a Python subprocess to activate the test servers
print all the stdout of the docker-compose run
as soon as the test server is up and running via docker-compose, call the testing scripts for that server
This is what my docker-compose.py looks like:
import subprocess
from subprocess import PIPE
import os
from datetime import datetime

class MyLog:
    def my_log(self, message):
        date_now = datetime.today().strftime('%d-%m-%Y %H:%M:%S')
        print("{0} || {1}".format(date_now, message))

class DockercomposeRun:
    log = MyLog()

    def __init__(self):
        dir_name, _ = os.path.split(os.path.abspath(__file__))
        self.dirname = dir_name

    def run_docker_compose(self, filename):
        command_name = ["docker-compose", "-f", self.dirname + filename, "up"]
        popen = subprocess.Popen(command_name, stdin=PIPE, stdout=PIPE, stderr=PIPE, universal_newlines=True)
        return popen
Now, in my test.py, as soon as my stdout is blank I would like to break the printing loop and run the rest of the tests in test.py:
docker_compose_run = DockercomposeRun()
rc = docker_compose_run.run_docker_compose('/docker-compose.yml.sas-viya-1')
for line in iter(rc.stdout.readline, ''):
    print(line, end='')
    if line == '':
        break
rc.stdout.close()  # was: popen.stdout.close() (popen is undefined here)
# start here actual test cases
.......
But for me the loop is never broken, even though the stdout of docker-compose goes blank after the server is up and running, so the test cases are never executed.
Is this the right approach, or how can I achieve this?
I think the issue here is that you are not running docker-compose in detached mode, so it blocks the application. Can you try adding "-d" to command_name?
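Alternatively, if you keep `up` in the foreground, the loop condition needs rethinking: `iter(rc.stdout.readline, '')` only stops at EOF, which never arrives while the server is running, so "stdout goes blank" can't be detected that way. Waiting for a readiness marker line works instead. A sketch with a stand-in command in place of docker-compose; the 'ready' marker is an assumption, not real compose output:

```python
import subprocess
import sys

# Stand-in for `docker-compose -f ... up`: prints startup logs
# unbuffered (-u), then keeps running like a server would.
cmd = [sys.executable, '-u', '-c',
       "import time; print('starting'); print('server ready'); time.sleep(30)"]
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, universal_newlines=True)

ready = False
for line in proc.stdout:
    print(line, end='')
    if 'ready' in line:   # readiness marker: an assumption for this sketch
        ready = True
        break             # stop reading; the server keeps running

# ... run the actual test cases here, then shut the server down ...
proc.terminate()
proc.wait()
```

With real docker-compose the marker would be whatever log line the test server emits once it is listening.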

Why python asyncio process in a thread seems unstable on Linux?

I'm trying to run an asynchronous external command from a Qt application using python3. Previously I used a multiprocessing thread to do this without freezing the Qt application. Now I would like to do it with a QThread, to be able to pickle and pass a Qt window as an argument to some other functions (not presented here). It works when I test it on Windows, but when I try the application on Linux I get the following error: RuntimeError: Cannot add child handler, the child watcher does not have a loop attached
From that point I tried to isolate the problem, and I obtain the minimal (as possible as I could) example below that replicates the problem.
Of course, as mentioned above, if I replace the QThreadPool with a list of multiprocessing threads, this example works fine. I also noticed something that astonished me: if I uncomment the line rc = subp([sys.executable,"./HelloWorld.py"]) in the last part of the example, it also works, and I can't explain why.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

## IMPORTS ##
from functools import partial
from PyQt5 import QtCore
from PyQt5.QtCore import QThreadPool, QRunnable, QCoreApplication
import sys
import asyncio.subprocess

# Global variables
Qpool = QtCore.QThreadPool()

def subp(cmd_list):
    """ """
    if sys.platform.startswith('linux'):
        new_loop = asyncio.new_event_loop()
        asyncio.set_event_loop(new_loop)
    elif sys.platform.startswith('win'):
        new_loop = asyncio.ProactorEventLoop()  # for subprocess' pipes on Windows
        asyncio.set_event_loop(new_loop)
    else:
        print('[ERROR] OS not available for encodage... EXIT')
        sys.exit(2)
    rc, stdout, stderr = new_loop.run_until_complete(get_subp(cmd_list))
    new_loop.close()
    if rc != 0:
        print('Exit not zero ({}): {}'.format(rc, sys.exc_info()[0]))
    return rc, stdout, stderr

async def get_subp(cmd_list):
    """ """
    print('subp: ' + ' '.join(cmd_list))
    # Create the subprocess, redirect the standard output into a pipe
    create = asyncio.create_subprocess_exec(*cmd_list, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE)
    proc = await create
    # read child's stdout/stderr concurrently (capture and display)
    try:
        stdout, stderr = await asyncio.gather(
            read_stream_and_display(proc.stdout),
            read_stream_and_display(proc.stderr))
    except Exception:
        proc.kill()
        raise
    finally:
        rc = await proc.wait()
    print(" [Exit {}] ".format(rc) + ' '.join(cmd_list))
    return rc, stdout, stderr

async def read_stream_and_display(stream):
    """ """
    async for line in stream:
        print(line, flush=True)

class Qrun_from_job(QtCore.QRunnable):
    def __init__(self, job, arg):
        super(Qrun_from_job, self).__init__()
        self.job = job
        self.arg = arg

    def run(self):
        code = partial(self.job)
        code()

def ThdSomething(job, arg):
    testRunnable = Qrun_from_job(job, arg)
    Qpool.start(testRunnable)

def testThatThing():
    rc = subp([sys.executable, "./HelloWorld.py"])

if __name__ == '__main__':
    app = QCoreApplication([])
    # rc = subp([sys.executable,"./HelloWorld.py"])
    ThdSomething(testThatThing, 'tests')
    sys.exit(app.exec_())
with the HelloWorld.py file:
#!/usr/bin/env python3
import sys

if __name__ == '__main__':
    print('HelloWorld')
    sys.exit(0)
Therefore I have two questions: how can I make this example work properly with QThread? And why does a previous call of an asynchronous task (via the subp function) change the stability of the example on Linux?
EDIT
Following the advice of @user4815162342, I tried run_coroutine_threadsafe with the code below. But it does not work and returns the same error, i.e. RuntimeError: Cannot add child handler, the child watcher does not have a loop attached. I also tried replacing the threading call with its equivalent from the multiprocessing module; with that one, the subp command is never launched.
The code :
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

## IMPORTS ##
import sys
import asyncio.subprocess
import threading
import multiprocessing

# at top-level
loop = asyncio.new_event_loop()

def spin_loop():
    asyncio.set_event_loop(loop)
    loop.run_forever()

def subp(cmd_list):
    # submit the task to asyncio
    fut = asyncio.run_coroutine_threadsafe(get_subp(cmd_list), loop)
    # wait for the task to finish
    rc, stdout, stderr = fut.result()
    return rc, stdout, stderr

async def get_subp(cmd_list):
    """ """
    print('subp: ' + ' '.join(cmd_list))
    # Create the subprocess, redirect the standard output into a pipe
    proc = await asyncio.create_subprocess_exec(*cmd_list, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE)
    # read child's stdout/stderr concurrently (capture and display)
    try:
        stdout, stderr = await asyncio.gather(
            read_stream_and_display(proc.stdout),
            read_stream_and_display(proc.stderr))
    except Exception:
        proc.kill()
        raise
    finally:
        rc = await proc.wait()
    print(" [Exit {}] ".format(rc) + ' '.join(cmd_list))
    return rc, stdout, stderr

async def read_stream_and_display(stream):
    """ """
    async for line in stream:
        print(line, flush=True)

if __name__ == '__main__':
    threading.Thread(target=spin_loop, daemon=True).start()
    # multiprocessing.Process(target=spin_loop, daemon=True).start()
    print('thread passed')
    rc = subp([sys.executable, "./HelloWorld.py"])
    print('end')
    sys.exit(0)
As a general design principle, it's unnecessary and wasteful to create new event loops only to run a single subroutine. Instead, create an event loop, run it in a separate thread, and use it for all your asyncio needs by submitting tasks to it using asyncio.run_coroutine_threadsafe.
For example:
# at top-level
loop = asyncio.new_event_loop()

def spin_loop():
    asyncio.set_event_loop(loop)
    loop.run_forever()

asyncio.get_child_watcher().attach_loop(loop)
threading.Thread(target=spin_loop, daemon=True).start()

# ... the rest of your code ...
# ... the rest of your code ...
With this in place, you can easily execute any asyncio code from any thread whatsoever using the following:
def subp(cmd_list):
    # submit the task to asyncio
    fut = asyncio.run_coroutine_threadsafe(get_subp(cmd_list), loop)
    # wait for the task to finish
    rc, stdout, stderr = fut.result()
    return rc, stdout, stderr
Note that you can use add_done_callback to be notified when the future returned by asyncio.run_coroutine_threadsafe finishes, so you might not need a thread in the first place.
Note that all interaction with the event loop should go either through the afore-mentioned run_coroutine_threadsafe (when submitting coroutines) or through loop.call_soon_threadsafe when you need the event loop to call an ordinary function. For example, to stop the event loop, you would invoke loop.call_soon_threadsafe(loop.stop).
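This pattern is easy to exercise without Qt or subprocesses at all; a minimal self-contained sketch of the spin-the-loop-in-a-thread approach (the coroutine here is a trivial stand-in):

```python
import asyncio
import threading

# One long-lived event loop for the whole program, spun in a daemon thread.
loop = asyncio.new_event_loop()

def spin_loop():
    asyncio.set_event_loop(loop)
    loop.run_forever()

threading.Thread(target=spin_loop, daemon=True).start()

async def add(a, b):
    return a + b

# Submit a coroutine from the main thread and block on its result.
fut = asyncio.run_coroutine_threadsafe(add(2, 3), loop)
result = fut.result(timeout=5)
print(result)   # → 5

# Shut the loop down from outside via a threadsafe call.
loop.call_soon_threadsafe(loop.stop)
```

run_coroutine_threadsafe is safe to call even before run_forever has actually started, because it schedules the coroutine through the loop's threadsafe callback queue.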
I suspect that what you are doing is simply unsupported - according to the documentation:
To handle signals and to execute subprocesses, the event loop must be run in the main thread.
As you are trying to execute a subprocess, I do not think running a new event loop in another thread works.
Thing is, Qt already has an event loop, and what you really need is to convince asyncio to use it. That means that you need an event loop implementation that provides the "event loop interface for asyncio" implemented on top of "Qt's event loop".
I believe that asyncqt provides such an implementation. You may want to try to use QEventLoop(app) in place of asyncio.new_event_loop().

Terminate subprocess

I'm curious why the code below freezes. When I kill the python3 interpreter, the "cat" process remains as a zombie. I expected the subprocess to be terminated before the main process finished.
When I manually send SIGTERM to cat /dev/zero, the process finishes correctly (almost immediately).
#!/usr/bin/env python3
import subprocess
import re
import os
import sys
import time
import logging  # needed for logging.warning below
from PyQt4 import QtCore

class Command(QtCore.QThread):
    # stateChanged = QtCore.pyqtSignal([bool])
    def __init__(self):
        QtCore.QThread.__init__(self)
        self.__runned = False
        self.__cmd = None
        print("initialize")

    def run(self):
        self.__runned = True
        self.__cmd = subprocess.Popen(["cat /dev/zero"], shell=True, stdout=subprocess.PIPE)
        try:
            while self.__runned:
                print("reading via pipe")
                buf = self.__cmd.stdout.readline()
                print("Buffer:{}".format(buf))
        except Exception:
            logging.warning("Can't read from subprocess (cat /dev/zero) via pipe")
        finally:
            print("terminating")
            self.__cmd.terminate()
            self.__cmd.kill()

    def stop(self):
        print("Command::stop stopping")
        self.__runned = False
        if self.__cmd:
            self.__cmd.terminate()
            self.__cmd.kill()
        print("Command::stop stopped")

def exitApp():
    command.stop()
    time.sleep(1)
    sys.exit(0)

if __name__ == "__main__":
    app = QtCore.QCoreApplication(sys.argv)
    command = Command()
    # command.daemon = True
    command.start()
    timer = QtCore.QTimer()
    QtCore.QObject.connect(timer, QtCore.SIGNAL("timeout()"), exitApp)
    timer.start(2 * 1000)
    sys.exit(app.exec_())
As you noted yourself, the reason for the zombie is that the signal is caught by the shell and doesn't affect the process created by it. However, there is a way to kill the shell and all processes created by it: you have to use the process-group feature. See How to terminate a python subprocess launched with shell=True. Having said that, if you can manage without shell=True, that's always preferable; see my answer here.
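A POSIX-only sketch of that process-group approach (with `sleep 30` standing in for `cat /dev/zero`):

```python
import os
import signal
import subprocess

# preexec_fn=os.setsid puts the shell and everything it spawns
# into a new process group (whose id equals the shell's pid).
proc = subprocess.Popen('sleep 30', shell=True, preexec_fn=os.setsid)

pgid = os.getpgid(proc.pid)
os.killpg(pgid, signal.SIGTERM)  # signals the shell *and* its children
proc.wait()                      # reap, so no zombie is left behind
print(proc.returncode)
```

Because the signal goes to the whole group, the child started by the shell dies along with it, which plain proc.terminate() cannot guarantee under shell=True.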
I solved this problem in a different way, so here's the result:
I have to call subprocess.Popen with shell=False, because otherwise it creates two processes (the shell and the process), and __cmd.kill() sends the signal to the shell while the process remains as a zombie.
