Python subprocess for docker-compose

I have an interesting set of requirements that I am trying to fulfil using the Python subprocess module and docker-compose. The whole setup would be possible in a single docker-compose file, but due to a requirement this is what I would like to set up:
call docker-compose using a Python subprocess to activate the test servers
print all the stdout of the docker-compose run
as soon as the test server is up and running via docker-compose, call the testing scripts for that server
This is what my docker-compose.py looks like:
import subprocess
from subprocess import PIPE
import os
from datetime import datetime

class MyLog:
    def my_log(self, message):
        date_now = datetime.today().strftime('%d-%m-%Y %H:%M:%S')
        print("{0} || {1}".format(date_now, message))

class DockercomposeRun:
    log = MyLog()

    def __init__(self):
        dir_name, _ = os.path.split(os.path.abspath(__file__))
        self.dirname = dir_name

    def run_docker_compose(self, filename):
        command_name = ["docker-compose", "-f", self.dirname + filename, "up"]
        popen = subprocess.Popen(command_name, stdin=PIPE, stdout=PIPE, stderr=PIPE, universal_newlines=True)
        return popen
Now in my test.py, as soon as the stdout goes blank I would like to break the printing loop and run the rest of the tests in test.py:
docker_compose_run = DockercomposeRun()
rc = docker_compose_run.run_docker_compose('/docker-compose.yml.sas-viya-1')
for line in iter(rc.stdout.readline, ''):
    print(line, end='')
    if line == '':
        break
rc.stdout.close()

# start here actual test cases
.......
But for me the loop never breaks, even though the stdout of docker-compose goes blank after the server is up and running, and the test cases are never executed.
Is this the right approach, and how can I achieve this?

I think the issue here is that you are not running docker-compose in detached mode, so it blocks the application run: docker-compose up without -d never exits, which means its stdout never reaches EOF and readline() keeps blocking instead of returning an empty string. Can you try adding "-d" to command_name?
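For illustration, a minimal sketch of that suggestion; the compose file path is a placeholder and the error handling is an assumption on my part:

import subprocess

def run_docker_compose_detached(compose_file):
    # "up -d" starts the services in the background and returns once the
    # containers are running, instead of streaming their logs forever
    result = subprocess.run(
        ["docker-compose", "-f", compose_file, "up", "-d"],
        stdout=subprocess.PIPE, stderr=subprocess.STDOUT,
        universal_newlines=True,
    )
    print(result.stdout)
    result.check_returncode()  # raises CalledProcessError if docker-compose failed

run_docker_compose_detached('/path/to/docker-compose.yml')  # hypothetical path
# ... the actual test cases can start here ...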

Related

Missing output after execvp on Ubuntu 22?

I have a short Python script (main.py):
#!/usr/bin/env python3
import os

print(os.getpid())
os.execvp("ls", ["ls", "-a"])
print("hello")
When I run it, I can see the terminal output of os.getpid() and of the ls command started by os.execvp, but no print("hello").
However, when I have another file (another.py) with the content:
#!/usr/bin/env python3
print("hello")
and then change main.py to:
#!/usr/bin/env python3
import os

print(os.getpid())
os.execvp("python3", ["python3", "another.py"])
then I can see the output of os.getpid() and of print("hello").
What is the idea behind execvp?
os.execvp replaces the current process image with the program you exec, so nothing after the call ever runs in your script; the "hello" you see in the second case is printed by another.py, the program that replaced main.py. Here is a very simple script that illustrates fork, exec, and wait:
import os

print('this will run once')

# fork duplicates the current process; both copies continue from this point
pid = os.fork()
if pid < 0:
    print('error forking')
    exit()

print('this will run twice')

if pid == 0:
    # we are inside the child process
    print('hello from child')
    os.execvp("echo", ["echo", "hello from echo"])
    print('this will not run because the child process has been completely replaced by the echo process')
else:
    # wait for the child process to exit
    os.wait()
    print('hello from parent')

Why can't I see terminal input (stdout) in Linux after executing this Python3 script?

I wrote a Python3 script (shown below; repo here: https://gitlab.com/papiris/gcode-motor-stutter-generator).
After I execute it on Linux (Raspberry Pi OS Bullseye 32-bit) and either exit with Ctrl+C or let it finish, I can't see what I type in that terminal tab anymore. The terminal (KDE Konsole) still responds to commands; the text I type just isn't visible. I can open a new terminal tab and keep working, but the tabs I ran this script in never show my typed input again.
Why is this, and how can I fix it?
I tried searching for this topic, but couldn't find anything similar.
#!/usr/bin/env python3
import sys
from sys import stdin
from curtsies import Input
from threading import Thread
from queue import Queue, Empty

### non-blocking read of stdin
def enqueue_input(stdin, queue):
    try:
        with Input(keynames='curses') as input_generator:
            for _input in iter(input_generator):
                queue.put(_input)
    except KeyboardInterrupt:
        sys.exit(1)

q = Queue()
t = Thread(target=enqueue_input, args=(stdin, q))
t.daemon = True  # thread dies with the program
t.start()

def main():
    while True:
        try:
            input_key = q.get(timeout=2)
        except Empty:
            print('printing continuously')
        else:
            if input_key == 'n':
                print('extrusion loop stopped, moving on')
                break

if __name__ == "__main__":
    main()

How to run and cancel a Linux command using a Flask Python API?

I am working on a Flask-based Python API. It has two endpoints, run_cmd and stop_cmd. run_cmd executes a command in the terminal; this command keeps running until someone manually cancels it, so to cancel it we have the stop_cmd endpoint. Below is the code:
from flask import Flask, jsonify, request
from threading import Thread
from subprocess import call

app = Flask(__name__)

def RunCmd():
    call('while true; do echo "hello"; sleep 2s; done', shell=True)

@app.route('/run_cmd', methods=['GET'])
def run_cmd():
    Thread(target=RunCmd).start()
    return jsonify({"status": "ok"}), 200

@app.route('/stop_cmd', methods=['GET'])
def stop_cmd():
    # This endpoint should stop the command started by RunCmd
    ...
As you can see in the above code, if we hit /run_cmd, the command starts and keeps printing hello in the terminal. I wanted to know how we can cancel this ongoing command, so that we can implement it in the stop_cmd endpoint. Is this possible?
This is how I solved it
from flask import Flask, jsonify, request
from threading import Thread
import subprocess
import psutil

app = Flask(__name__)
proc = None

def kill(proc_pid):
    # kill the shell and every child process it spawned
    process = psutil.Process(proc_pid)
    for child in process.children(recursive=True):
        child.kill()
    process.kill()

def RunCmd():
    global proc
    proc = subprocess.Popen(['while true; do echo "hello"; sleep 2s; done'], shell=True)

@app.route('/run_cmd', methods=['GET'])
def run_cmd():
    Thread(target=RunCmd).start()
    return jsonify({"status": "ok"}), 200

@app.route('/stop_cmd', methods=['GET'])
def stop_cmd():
    global proc
    kill(proc.pid)
    return jsonify({"status": True}), 200

if __name__ == '__main__':
    app.run(host='127.0.0.1', port=5000)
subprocess.call is part of the older high-level API; it is not deprecated, but since Python 3.5 subprocess.run() is the recommended replacement, and for a command you need to cancel later you should use subprocess.Popen() directly. Then you could start your command by running
proc = subprocess.Popen(["somecommand", "-somearg", "somethingelse"])
This returns a Popen object, which you can terminate by sending it a signal, for example with proc.terminate() or proc.kill().
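A minimal sketch of that approach; the ping command is just a stand-in for any long-running process:

import subprocess

proc = subprocess.Popen(["ping", "127.0.0.1"])  # placeholder long-running command
# ... later, e.g. when /stop_cmd is hit:
proc.terminate()               # sends SIGTERM
try:
    proc.wait(timeout=5)       # reap the child so it doesn't linger as a zombie
except subprocess.TimeoutExpired:
    proc.kill()                # escalate to SIGKILL if SIGTERM is ignored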

subprocess.Popen process stops running while using it with SMACH

I'm simply trying to start a rosbag command from Python in a SMACH. I figured out that one way to do so is to use subprocesses. My goal is that as soon as the rosbag starts, the state machine transitions to state T2 (and stays there).
However, when I start a rosbag using subprocess.Popen inside a SMACH state and then run rostopic echo 'topic', the rosbag at first publishes data properly, then suddenly stops publishing; only when I end the SMACH with Ctrl+C does the rosbag publish some more data before it stops as well.
Is there a reasonable explanation for this (did I maybe miss a parameter, or is it just not possible to keep the node running that way)? Or is there a better way to start the rosbag and let it run in the background?
(Btw, some other commands like certain roslaunch commands also appear to stop working after they're started via subprocess.Popen!)
My code looks as follows:
#!/usr/bin/env python3
import os
import signal
import subprocess
import smach
import smach_ros
import rospy
import time
from gnss_navigation.srv import *

class t1(smach.State):
    def __init__(self, outcomes=['successful', 'failed', 'preempted']):
        smach.State.__init__(self, outcomes)

    def execute(self, userdata):
        if self.preempt_requested():
            self.service_preempt()
            return 'preempted'
        try:
            process1 = subprocess.Popen('rosbag play /home/faps/bags/2020-05-07-11-18-18.bag',
                                        stdout=subprocess.PIPE,
                                        shell=True, preexec_fn=os.setsid)
        except Exception:
            return 'failed'
        return 'successful'

class t2(smach.State):
    def __init__(self, outcomes=['successful', 'failed', 'preempted']):
        smach.State.__init__(self, outcomes)

    def execute(self, userdata):
        #time.sleep(2)
        if self.preempt_requested():
            self.service_preempt()
            return 'preempted'
        return 'successful'

if __name__ == "__main__":
    rospy.init_node('test_state_machine')
    sm_1 = smach.StateMachine(outcomes=['success', 'error', 'preempted'])
    with sm_1:
        smach.StateMachine.add('T1', t1(), transitions={'successful': 'T2', 'failed': 'error'})
        smach.StateMachine.add('T2', t2(), transitions={'successful': 'T2', 'failed': 'error', 'preempted': 'preempted'})

    # Execute SMACH plan
    outcome = sm_1.execute()
    print('exit-outcome:' + outcome)

    # Wait for ctrl-c to stop the application
    rospy.spin()
As explained in the comment section of the answer to this thread, the problem appears when using subprocess.PIPE as stdout: nothing ever reads from the pipe, so once its buffer fills up the child process blocks on its next write.
Therefore, the two possible solutions I used to solve the problem are:
If you don't care about print-outs and stuff -> use devnull as output:
FNULL = open(os.devnull, 'w')
process = subprocess.Popen('your command', stdout=FNULL, stderr=subprocess.STDOUT,
                           shell=True, preexec_fn=os.setsid)
If you do need the print-outs and stuff -> create a log file and use it as output:
log_file = open('path_to_log/log.txt', 'w')
process = subprocess.Popen('your command', stdout=log_file, stderr=subprocess.STDOUT,
                           shell=True, preexec_fn=os.setsid)
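Since both variants start the command with preexec_fn=os.setsid, the shell and everything it spawned live in their own process group, which can later be stopped as a whole. A small clean-up sketch building on the process variable above (my addition, not part of the original answer):

import os
import signal

# stop the whole process group created by os.setsid
# (the shell plus everything it started, e.g. the rosbag)
os.killpg(os.getpgid(process.pid), signal.SIGTERM)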

Terminate subprocess

I'm curious why the code below freezes. When I kill the python3 interpreter, the "cat" process remains as a zombie. I expected the subprocess to be terminated before the main process finishes.
When I manually send SIGTERM to cat /dev/zero, the process finishes correctly (almost immediately).
#!/usr/bin/env python3
import logging
import subprocess
import sys
import time
from PyQt4 import QtCore

class Command(QtCore.QThread):
    # stateChanged = QtCore.pyqtSignal([bool])
    def __init__(self):
        QtCore.QThread.__init__(self)
        self.__runned = False
        self.__cmd = None
        print("initialize")

    def run(self):
        self.__runned = True
        self.__cmd = subprocess.Popen(["cat /dev/zero"], shell=True, stdout=subprocess.PIPE)
        try:
            while self.__runned:
                print("reading via pipe")
                buf = self.__cmd.stdout.readline()
                print("Buffer:{}".format(buf))
        except:
            logging.warning("Can't read from subprocess (cat /dev/zero) via pipe")
        finally:
            print("terminating")
            self.__cmd.terminate()
            self.__cmd.kill()

    def stop(self):
        print("Command::stop stopping")
        self.__runned = False
        if self.__cmd:
            self.__cmd.terminate()
            self.__cmd.kill()
        print("Command::stop stopped")

def exitApp():
    command.stop()
    time.sleep(1)
    sys.exit(0)

if __name__ == "__main__":
    app = QtCore.QCoreApplication(sys.argv)
    command = Command()
    # command.daemon = True
    command.start()
    timer = QtCore.QTimer()
    QtCore.QObject.connect(timer, QtCore.SIGNAL("timeout()"), exitApp)
    timer.start(2 * 1000)
    sys.exit(app.exec_())
As you noted yourself, the reason for the zombie is that the signal is caught by the shell and doesn't affect the process created by it. However, there is a way to kill the shell and all processes created by it: you have to use the process group feature. See How to terminate a python subprocess launched with shell=True. Having said that, if you can manage without shell=True, that's always preferable; see my answer here.
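A minimal sketch of that process-group approach for the cat /dev/zero case, assuming shell=True has to stay:

import os
import signal
import subprocess

# os.setsid puts the shell and its children into a new process group
cmd = subprocess.Popen("cat /dev/zero", shell=True,
                       stdout=subprocess.PIPE, preexec_fn=os.setsid)
# signal the whole group, so cat dies together with the shell
os.killpg(os.getpgid(cmd.pid), signal.SIGTERM)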
I solved this problem in a different way, so here's the result:
I had to call subprocess.Popen with shell=False, because otherwise it creates two processes (the shell and the process), and __cmd.kill() sends its signal to the shell while the process remains as a zombie.
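For example, the Popen call from the code above rewritten with an argument list instead of a shell string (a sketch of that change):

self.__cmd = subprocess.Popen(["cat", "/dev/zero"], stdout=subprocess.PIPE)
# terminate()/kill() now signals cat directly, with no shell in between
self.__cmd.terminate()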
