psutil hangs with "psutil.NoSuchProcess: process no longer exists" - python-3.x

I've been messing with this but keep getting an error; I'm assuming the PID is changing right before I ask for cpu_percent.
Here's my little test program: it opens a file, waits for the program to finish loading, closes the program, and repeats. After a few loads I'll get "psutil.NoSuchProcess: process no longer exists (pid=10144)". Any guidance on this would be great.
import psutil
from time import gmtime, strftime
import time
import os

def Monitor():
    i = "True"
    while i == "True":
        process = [proc for proc in psutil.process_iter()]
        for object in process:
            if 'MSACCESS.EXE' in object.name():
                if object.name() == 'MSACCESS.EXE':
                    a = object.cpu_percent(interval=1)
                    time.sleep(10)
                    print(a)
                    if a > 0:
                        i = "True"
                    else:
                        i = "False"
    print("Finished")

repeat = "repeat"
while repeat == "repeat":
    os.startfile('\\\\revvedupoffice\\Revved Up\\Collective 10.0.accdb')
    Monitor()
    os.system("TASKKILL /F /IM MSACCESS.EXE")
    time.sleep(10)
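The NoSuchProcess error is a race: MSACCESS.EXE can exit between the moment process_iter() lists it and the moment cpu_percent() queries it. One way to guard against that race is to wrap the per-process calls in a try/except; a minimal sketch (the function name is illustrative, the process name is carried over from the question):

```python
import psutil

def cpu_percent_of(name):
    """Return CPU usage of the first process matching `name`,
    or None if no such process survives the query."""
    for proc in psutil.process_iter(attrs=["name"]):
        try:
            if proc.info["name"] == name:
                return proc.cpu_percent(interval=1)
        except (psutil.NoSuchProcess, psutil.ZombieProcess, psutil.AccessDenied):
            continue  # the process vanished (or became unreadable) mid-query
    return None

# e.g. usage in the loop above: a = cpu_percent_of('MSACCESS.EXE')
```

If cpu_percent_of returns None, the monitored program has exited and the loop can stop instead of crashing.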

Related

torch.distributed.barrier() added on all processes not working

import torch
import os

torch.distributed.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])
if local_rank > 0:
    torch.distributed.barrier()
print(f"Entered process {local_rank}")
if local_rank == 0:
    torch.distributed.barrier()
The above code hangs forever, but if I remove both torch.distributed.barrier() calls, then both print statements execute. Am I missing something here?
On the command line I launch it with torchrun --nnodes=1 --nproc_per_node 2 test.py, where test.py is the name of the script.
I tried the above code with and without the torch.distributed.barrier() calls:
With the barrier() statements, I expected the statement to print for one GPU and then exit -- not as expected.
Without the barrier() statements, I expected both to print -- as expected.
It is better to put your multiprocessing initialization code inside the if __name__ == "__main__": guard to avoid endless process generation, and to re-design the control flow to fit your purpose:
if __name__ == "__main__":
    import torch
    import os

    torch.distributed.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    if local_rank > 0:
        torch.distributed.barrier()
    else:
        print(f"Entered process {local_rank}")
        torch.distributed.barrier()
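The control flow this answer relies on (non-zero ranks block at the barrier until rank 0 has done its work, then rank 0 joins the barrier and releases everyone) can be illustrated without GPUs or torch using threading.Barrier from the standard library. This is only an analogy for the barrier pattern, not a torch API; all names here are illustrative:

```python
import threading

def run_ranks(world_size=2):
    """Mimic the rank-0-first pattern: 'ranks' > 0 wait at a barrier
    until 'rank' 0 has recorded its work, then everyone proceeds."""
    barrier = threading.Barrier(world_size)
    order = []
    lock = threading.Lock()

    def worker(rank):
        if rank > 0:
            barrier.wait()       # non-zero ranks block here first
        with lock:
            order.append(rank)   # rank 0 always records itself first
        if rank == 0:
            barrier.wait()       # rank 0 releases the waiting ranks

    threads = [threading.Thread(target=worker, args=(r,)) for r in range(world_size)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return order
```

Whatever the thread scheduling, order[0] is always 0, which is exactly the guarantee the barrier pair provides in the torch version.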

Why can't I see terminal input (stdout) in Linux after executing this Python3 script?

I wrote a Python3 script (shown below; repo here: https://gitlab.com/papiris/gcode-motor-stutter-generator).
After I execute it on Linux (Raspberry Pi OS Bullseye, 32-bit) and either exit with Ctrl+C or let it finish, I can't see what I type in that terminal tab anymore. The terminal (KDE Konsole) responds to commands; the text just isn't visible. I can open a new terminal tab and keep working, but the tabs I run this script in never show my input again.
Why is this, and how can I fix it?
I tried searching for this topic, but couldn't find anything similar.
#!/usr/bin/env python3
import sys
from sys import stdin
from curtsies import Input
from threading import Thread
from queue import Queue, Empty

### non-blocking read of stdin
def enqueue_input(stdin, queue):
    try:
        with Input(keynames='curses') as input_generator:
            for _input in iter(input_generator):
                queue.put(_input)
    except KeyboardInterrupt:  # was misspelled "keyboardInterrupt" in the original
        sys.exit(1)

q = Queue()
t = Thread(target=enqueue_input, args=(stdin, q))
t.daemon = True  # thread dies with the program
t.start()

def main():
    while True:
        try:
            input_key = q.get(timeout=2)
        except Empty:
            print('printing continuously')
        else:
            if input_key == 'n':
                print('extrusion loop stopped, moving on')
                break

if __name__ == "__main__":
    main()
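What's likely happening: curtsies' Input puts the terminal into a raw, no-echo mode, and because the reader runs in a daemon thread, the program can exit (or be interrupted) before curtsies restores the terminal, so your typing stops being echoed. Typing stty sane blind into the broken tab should bring it back. A defensive fix is to snapshot the terminal attributes before starting the thread and restore them at exit; a sketch using the stdlib termios and atexit modules (Unix-only, function name illustrative):

```python
import atexit
import sys
import termios

def restore_terminal_on_exit(stream=sys.stdin):
    """Snapshot the terminal attributes now and restore them when the
    program exits, however it exits. Returns False when not on a tty."""
    if not stream.isatty():  # nothing to restore when stdin isn't a terminal
        return False
    saved = termios.tcgetattr(stream.fileno())
    atexit.register(termios.tcsetattr, stream.fileno(), termios.TCSADRAIN, saved)
    return True
```

Calling restore_terminal_on_exit() once at the top of the script, before the curtsies thread starts, makes sure echo mode comes back even if the daemon thread is killed mid-read.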

Stop a running script in python3 from another script

I have two scripts: one is running, and when I press a button a second script is launched.
That works, but I would like to kill the second script when I release the button again.
The killing of the second script does not seem to work.
Below is the code of script 1:
import RPi.GPIO as GPIO
import time
from time import sleep
import subprocess, os
from subprocess import check_call

buttonPin = 5
GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)  # Display no error messages
GPIO.setup(buttonPin, GPIO.IN)  # initialize button pin; we have a physical 10k resistor so no pull-up is required

try:
    run = 0
    while True:
        if GPIO.input(buttonPin) == 0 and run == 0:
            print("Button pressed start second script")
            subprocess.call(['python3', "2ndPython.py"])
            run = 1
            while GPIO.input(buttonPin) == 0:
                time.sleep(0.01)
        if GPIO.input(buttonPin) == 1 and run == 1:
            run = 0
            print("Button NOT pressed - kill second script")
            check_call(["pkill", "-9", "-f", "python3", "2ndPython.py"])  # Stop script
            while GPIO.input(buttonPin) == 1:
                time.sleep(0.01)
except KeyboardInterrupt:
    GPIO.cleanup()
Code of my second script, which I would like to kill when I release the button:
import time
from time import sleep

def main():
    count = 0
    while True:
        count = count + 1
        # do whatever the script does
        print("You have started this program X " + str(count) + " times now")
        time.sleep(2)

if __name__ == "__main__":
    main()
Can't seem to find why the second script is not killed.

How to stop a function from outside of it in Python

I want to know how to stop a running function from outside of it. Here is how it should work:
def smth():
    time.sleep(5)  # Just an example

smth.stop()
Thanks for your help
Here's an example using the multiprocessing library:
from multiprocessing import Process
import time

def foo():
    print('Starting...')
    time.sleep(5)
    print('Done')

p = Process(target=foo)  # make process
p.start()                # start function
time.sleep(2)            # wait 2 secs
p.terminate()            # kill it
print('Killed')
Output:
Starting...
Killed
Basically, what this code does is:
Create a process p which runs the function foo when started
Wait 2 seconds to simulate doing other stuff
End the process p with p.terminate()
Since p never passes time.sleep(5) in foo, it doesn't print 'Done'
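One caveat: terminate() kills the process abruptly, so cleanup code (finally blocks, exit handlers) in the function never runs. When the function can cooperate, a gentler variant is to poll a multiprocessing.Event and let the worker finish on its own; this is a sketch of that alternative, not part of the original answer:

```python
from multiprocessing import Event, Process
import time

def foo(stop_event):
    print('Starting...')
    while not stop_event.is_set():  # check the flag instead of sleeping blindly
        time.sleep(0.1)
    print('Done')                   # cleanup here actually runs

def run_for(seconds):
    stop = Event()
    p = Process(target=foo, args=(stop,))
    p.start()
    time.sleep(seconds)  # simulate doing other stuff
    stop.set()           # ask the worker to finish
    p.join(timeout=5)
    return p.exitcode    # 0 on a clean, cooperative exit
```

Unlike the terminate() version, the worker gets to print 'Done' and exits with code 0 instead of being killed by a signal.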

Terminate subprocess

I'm curious why the code below freezes. When I kill the python3 interpreter, the "cat" process remains as a zombie. I expected the subprocess to be terminated before the main process finishes.
When I manually send SIGTERM to cat /dev/zero, the process finishes correctly (almost immediately).
#!/usr/bin/env python3
import logging  # was missing in the original, but used in the except branch
import subprocess
import re
import os
import sys
import time
from PyQt4 import QtCore

class Command(QtCore.QThread):
    # stateChanged = QtCore.pyqtSignal([bool])
    def __init__(self):
        QtCore.QThread.__init__(self)
        self.__runned = False
        self.__cmd = None
        print("initialize")

    def run(self):
        self.__runned = True
        self.__cmd = subprocess.Popen(["cat /dev/zero"], shell=True, stdout=subprocess.PIPE)
        try:
            while self.__runned:
                print("reading via pipe")
                buf = self.__cmd.stdout.readline()
                print("Buffer:{}".format(buf))
        except:
            logging.warning("Can't read from subprocess (cat /dev/zero) via pipe")
        finally:
            print("terminating")
            self.__cmd.terminate()
            self.__cmd.kill()

    def stop(self):
        print("Command::stop stopping")
        self.__runned = False
        if self.__cmd:
            self.__cmd.terminate()
            self.__cmd.kill()
        print("Command::stop stopped")

def exitApp():
    command.stop()
    time.sleep(1)
    sys.exit(0)

if __name__ == "__main__":
    app = QtCore.QCoreApplication(sys.argv)
    command = Command()
    # command.daemon = True
    command.start()
    timer = QtCore.QTimer()
    QtCore.QObject.connect(timer, QtCore.SIGNAL("timeout()"), exitApp)
    timer.start(2 * 1000)
    sys.exit(app.exec_())
As you noted yourself, the reason for the zombie is that the signal is caught by the shell and doesn't affect the process it created. However, there is a way to kill the shell and all processes created by it: you have to use the process-group feature. See "How to terminate a python subprocess launched with shell=True". Having said that, if you can manage without shell=True, that's always preferable - see my answer here.
I solved this problem in a different way, so here's the result:
I have to call subprocess.Popen with shell=False, because otherwise it creates two processes (the shell and the process), and __cmd.kill() sends the signal to the shell while the process remains as a zombie.
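The process-group approach mentioned above can be sketched as follows: start the shell as the leader of a new session with start_new_session=True, then signal the whole group so both the shell and its child (cat /dev/zero in the question) receive the signal. Unix-only; helper names are illustrative:

```python
import os
import signal
import subprocess

def run_in_group(shell_cmd):
    """Start a shell command as the leader of a new process group."""
    return subprocess.Popen(shell_cmd, shell=True, start_new_session=True,
                            stdout=subprocess.DEVNULL)

def kill_group(proc):
    """Signal the whole group, so the shell AND its children terminate."""
    os.killpg(os.getpgid(proc.pid), signal.SIGTERM)
    proc.wait()  # reap the shell so it doesn't linger as a zombie
    return proc.returncode
```

Because the SIGTERM goes to the group rather than just the shell's PID, cat /dev/zero dies along with the shell and nothing is left as a zombie.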
