Jython: How to stop a script from a thread? - multithreading

I'm looking for some exit code that will be run from a thread but will be able to kill the main script. It's in Jython but I can't use java.lang.System.exit() because I still want the Java app I'm in to run, and sys.exit() isn't working. Ideally I would like to output a message then exit.
My code uses threading.Timer to run a function after a set delay. Here I'm using it to end a for loop that runs for longer than one second. Here is my code:
import threading

def exitFunct():
    # exit code here
    pass

t = threading.Timer(1.0, exitFunct)
t.start()

for i in range(1, 2000):
    print i

Well, if you had to, you could call mainThread.stop(). But you shouldn't.
This article explains why what you're trying to do is considered a bad idea.

If you want to kill the current process and you don't care about flushing IO buffers or resetting the terminal, you can use os._exit().
I don't know why they made this so hard.
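As a concrete illustration, here is a minimal sketch of the os._exit() approach applied to the question's Timer code. The message text is an assumption, and note that os._exit() skips cleanup handlers and buffer flushing; in Jython the interpreter process is the JVM itself, so this would also take down the hosting Java app.

import os
import sys
import threading

def exitFunct():
    print "Time limit reached, exiting."  # assumed message
    sys.stdout.flush()  # os._exit() will not flush buffers for us
    os._exit(1)         # terminate the interpreter process immediately

t = threading.Timer(1.0, exitFunct)
t.start()

for i in range(1, 2000):
    print i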

Related

Proper way to end continuous Python process

What is proper practice for stopping a continuously running Python process? Consider the following code:
from multiprocessing import Process

run_process = Process(target=self.run)

def start_running():
    run_process.start()

def run():
    while True:
        # Do stuff
        pass

def stop_running():
    # ???
    run_process.join()
I would expect that the ideal situation would be to have the run() process end on its own when stop_running() is called. One idea is to signal a semaphore in stop_running(), which is checked in the run() loop, so it knows to break. But I would like to know what common practice is.
There is no "proper" way of doing much of anything in Python. As you are running a process instead of a thread, you have more options, which is good.
If your process is not at risk of getting stuck completely, or of blocking indefinitely on IO while waiting for input (for example, from a queue), I would use a semaphore or a shared variable to signal the process that it should exit.
If there is a risk of the process being stuck in a wait, you can get rid of it with run_process.kill() or run_process.terminate(). kill() is the equivalent of kill -9 in the shell and is guaranteed to get the job done.
The drawback of killing/terminating a process is that if the process holds any shared objects (queues, shared variables, etc.), those can become corrupted in the other processes that share them as well. It is safe to discard them, but if you keep reading from them you may occasionally encounter obscure exceptions that are hard to debug.
So as always it depends. The variable/semaphore method has its strengths but if there is a risk of the subprocess being stuck in sleep or wait and not checking the condition, you do not achieve anything. If your subprocess does not share any resources with other processes, kill may be simpler and a guaranteed way of getting rid of your process.
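To make the variable/semaphore option concrete, here is a minimal sketch using multiprocessing.Event, assuming the loop body returns to the check frequently; the sleep interval and join timeout are placeholder values.

import time
from multiprocessing import Event, Process

def run(stop_event):
    while not stop_event.is_set():
        # Do stuff, returning to this check frequently
        time.sleep(0.1)

def start_running(stop_event):
    process = Process(target=run, args=(stop_event,))
    process.start()
    return process

def stop_running(process, stop_event):
    stop_event.set()        # ask the loop to exit at its next check
    process.join(timeout=5)
    if process.is_alive():  # fall back to force if it is stuck
        process.terminate()
        process.join()

if __name__ == "__main__":
    stop_event = Event()
    p = start_running(stop_event)
    time.sleep(1)
    stop_running(p, stop_event)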

Can I do work in another thread while waiting for subprocess Popen

I have a Python 3.7 project
It is using a library which uses subprocess.Popen to call out to a shell script.
I am wondering: if I were to put the library calls in a separate thread, would I be able to do work in the main thread while waiting for the result of Popen in the other thread?
There is an answer here https://stackoverflow.com/a/33352871/202168 which says:
The way Python threads work with the GIL is with a simple counter.
With every 100 byte codes executed the GIL is supposed to be released
by the thread currently executing in order to give other threads a
chance to execute code. This behavior is essentially broken in Python
2.7 because of the thread release/acquire mechanism. It has been fixed in Python 3.
Either way, that does not sound particularly hopeful for what I want to do. It sounds like, if the "library calls" thread has not hit the 100-bytecode trigger point when the call to Popen.wait is made, it probably will not switch to my other thread and the whole app will wait for the subprocess?
Maybe this info is wrong, however.
Here is another answer https://stackoverflow.com/a/16262657/202168 which says:
...the interpreter can always release the GIL; it will give it to some
other thread after it has interpreted enough instructions, or
automatically if it does some I/O. Note that since recent Python 3.x,
the criteria is no longer based on the number of executed
instructions, but on whether enough time has elapsed.
This sounds more hopeful, since presumably communicating with the subprocess would involve I/O and might therefore allow a context switch for my main thread to be able to proceed in the meantime. (or perhaps just elapsed time waiting on the wait would cause a context switch)
I am aware of https://docs.python.org/3/library/asyncio-subprocess.html which explicitly solves this problem, but I am calling a 3rd-party library which just uses plain subprocess.Popen.
Can anyone confirm if the "subprocess calls in a separate thread" idea is likely to be useful to me, in Python 3.7 specifically?
I had time to make an experiment, so I will answer my own question...
I set up two files:
mainthread.py
#!/usr/bin/env python
import subprocess
import threading
import time

def run_busyproc():
    print(f'{time.time()} Starting busyprocess...')
    subprocess.run(["python", "busyprocess.py"])
    print(f'{time.time()} busyprocess done.')

if __name__ == "__main__":
    thread = threading.Thread(target=run_busyproc)
    print("Starting thread...")
    thread.start()
    while thread.is_alive():
        print(f"{time.time()} Main thread doing its thing...")
        time.sleep(0.5)
    print("Thread is done (?)")
    print("Exit main.")
and busyprocess.py:
#!/usr/bin/env python
from time import sleep

if __name__ == "__main__":
    for _ in range(100):
        print("Busy...")
        sleep(0.5)
    print("Done")
Running mainthread.py from the command line, I can see the context switch you would hope for: the main thread is able to do work while waiting on the result of the subprocess:
Starting thread...
1555970578.20475 Main thread doing its thing...
1555970578.204679 Starting busyprocess...
Busy...
1555970578.710308 Main thread doing its thing...
Busy...
1555970579.2153869 Main thread doing its thing...
Busy...
1555970579.718168 Main thread doing its thing...
Busy...
1555970580.2231748 Main thread doing its thing...
Busy...
1555970580.726122 Main thread doing its thing...
Busy...
1555970628.009814 Main thread doing its thing...
Done
1555970628.512945 Main thread doing its thing...
1555970628.518155 busyprocess done.
Thread is done (?)
Exit main.
Good news everybody, python threading works :)

How to idiomatically end an Asyncio Operation in Python

I'm working on code where I have a long running shell command whose output is sent to disk. This command will generate hundreds of GBs per file. I have successfully written code that calls this command asynchronously and successfully yields control (awaits) for it to complete.
I also have code that can asynchronously read that file as it is being written to so that I can process the data contained therein. The problem I'm running into is that I can't find a way to stop the file reader once the shell command completes.
I guess I'm looking for some sort of interrupt I can send to my reader function once the shell command ends, telling it to close the file and wrap up the event loop.
Here is my reader function. Right now, it runs forever, waiting for new data to be written to the file.
import asyncio

PERIOD = 0.5

async def readline(f):
    while True:
        data = f.readline()
        if data:
            return data
        await asyncio.sleep(PERIOD)

async def read_zmap_file():
    with open('/data/largefile.json', mode='r+t', encoding='utf-8') as f:
        i = 0
        while True:
            line = await readline(f)
            print('{:>10}: {!s}'.format(str(i), line.strip()))
            i += 1

loop = asyncio.get_event_loop()
loop.run_until_complete(read_zmap_file())
loop.close()
If my approach is off, please let me know. I'm relatively new to asynchronous programming. Any help would be appreciated.
So, I'd do something like
reader = loop.create_task(read_zmap_file())
(note that create_task needs the coroutine object, so read_zmap_file must be called). Then, in the code that manages the shell process, once the shell process exits you can do
reader.cancel()
followed by
loop.run_until_complete(reader)
to let the cancellation propagate; the task finishes with asyncio.CancelledError, which you can catch and ignore.
Alternatively, you could simply set a flag somewhere and use that flag in your while statement. You don't need to use asyncio primitives when something simpler works.
That said, I'd look into ways for your reader to avoid the periodic sleep. If your reader can keep up with the shell command, I'd recommend a pipe, because pipes can be used with select (and thus added to an event loop); your reader can then also write to a file if you need a permanent log. I realize the discussion of avoiding the periodic sleep is beyond the scope of this question, and I don't want to go into more detail than I have, but you did ask for hints on how best to approach async programming.
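Putting the cancellation idea together, here is a minimal, self-contained sketch; read_file_forever and run_shell_command are hypothetical stand-ins for the question's reader coroutine and for the code that manages the shell process.

import asyncio

async def read_file_forever():
    # stand-in for the question's read_zmap_file coroutine
    while True:
        await asyncio.sleep(0.5)
        print('reading...')

async def run_shell_command():
    # stand-in for awaiting the real shell command; here it just sleeps
    await asyncio.sleep(2)

async def main():
    reader = asyncio.ensure_future(read_file_forever())
    await run_shell_command()  # returns when the shell command exits
    reader.cancel()            # ask the reader task to stop
    try:
        await reader           # let the cancellation propagate
    except asyncio.CancelledError:
        pass                   # expected once the task is cancelled

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
loop.close()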

Python: escaping an infinite loop in PyQt

I'm using PyQt as an infinite loop, but I don't know how to escape from it programmatically or pythonically. Below is my code.
from PyQt4.QtGui import QApplication
loop = QApplication([])
main()
loop.exec_()
I want to write something in my main() function like: if some condition is satisfied, then escape.
I'm absolutely new to programming. I've been trying to find any clue on Google, like close() or something, but nothing works.
Any help or hint would be appreciated. Thank you.
Before I give you my solution, can I ask why you are intentionally using an infinite loop?
An infinite loop is exactly what it states: it continues infinitely. Unless your loop has a conditional check that detects when some number or value is hit and then closes out, it will continue indefinitely.
Now for a solution:
Pressing Ctrl-C in your terminal (or wherever you're running this loop) will stop the program. This is a universal command as well.
Program-wise, using break will break your loop. I hope this answers your question.
Here is a code snippet that might help with what you're doing:
def main():
    while some_condition:
        # Things you want to do in your loop
        if exit_condition:
            break
The best course of action for your issue would be multithreading; see the sketch after these links. Here are two links that address what you're wanting to do:
Stopping a thread after a certain amount of time
Is there any way to kill a Thread in Python?
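For example, the flag-based thread-stopping idea from those links might look like this minimal sketch; the loop body and timing are placeholders, and the example is independent of Qt.

import threading
import time

stop_event = threading.Event()

def worker():
    while not stop_event.is_set():
        # things you want to do in your loop
        time.sleep(0.1)

t = threading.Thread(target=worker)
t.start()
time.sleep(1)     # let the worker run for a while
stop_event.set()  # some condition is satisfied: ask the worker to stop
t.join()
print("worker stopped")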

Lua Script coroutine

Hi, I need some help with my Lua script. I have a script here that runs a server-like application (an infinite loop). The problem is that it doesn't execute the second coroutine.
Could you tell me what's wrong? Thank you.
function startServer()
    print( "...Running server" )
    -- run a server-like application (infinite loop)
    os.execute( "server.exe" )
end

function continue()
    print("continue")
end

co = coroutine.create( startServer() )
co1 = coroutine.create( continue() )
Lua has cooperative multithreading. Threads are not switched automatically but must yield to each other. When one thread is running, every other thread waits for it to finish or yield. Your first thread in this example runs server.exe which, I assume, never finishes until interrupted; thus the second thread never gets its turn to run.
You are also creating the coroutines incorrectly. In your example you're not running any threads at all: you call the function and then try to create a coroutine from its return value, which naturally fails. But since you never get back from server.exe, you haven't noticed this problem yet. Remove the parentheses after startServer and continue to fix it.
As already noted, there are several issues with the script that prevent you from getting what you want:
os.execute("...") blocks until the command completes, and in your case it never completes (it runs an infinite loop). Solution: detach that process from yours by using something like io.popen() instead of os.execute().
co = coroutine.create( startServer() ) doesn't create a coroutine in your case. coroutine.create accepts a function reference, but you pass it the result of calling startServer, which is nil. Solution: use co = coroutine.create( startServer ) (note that the parentheses are dropped, so it's no longer a function call).
You are not yielding from your coroutines; if you want several coroutines to work together, they need to cooperate by giving control to each other when appropriate. That's what the yield command is for, and that's why this is called non-preemptive multithreading. Solution: use a combination of resume and yield calls after you create your coroutine.
startServer doesn't need to be a coroutine as you are not giving control back to it; its only purpose is to start the server.
In your case, the solution may not even need coroutines as all you need to do is: (1) start the server and let it detach from your process (for example, using popen) and (2) work with your process using whatever communication protocol it requires (pipes, sockets, etc.).
There are more complex and complete solutions (like LuaLanes) and also several good descriptions on creating simple coroutine dispatchers.
Your coroutine is not yielding
