Well, recently I encountered some freezing in my application during long runs.
My program uses an infinite while loop to constantly check for new jobs in a Redis db, and if there is any job to work on it spawns a new process to run it in the background.
The program would freeze after about 20 minutes, sometimes 10. It took me a week to figure out that the problem came from the lack of this line before my while loop:
multiprocessing.set_start_method('spawn')
It looks like Python does not do that on Windows, and since Windows does not support fork, it gets stuck.
Anyway, it seems this will solve my problem, but I have another question.
In order to make an exe file for this program with something like PyInstaller, I need to add another line, shown below, to make sure it doesn't freeze when run as an exe:
multiprocessing.freeze_support()
I want to know: does freeze_support() automatically set the start method to 'spawn' too? Should I use both of these lines, or is running just one of them OK? If so, which one should I use from now on?
On Windows, 'spawn' is already the default start method, so the set_start_method('spawn') line is not strictly necessary.
freeze_support() is a different thing that does not affect the choice of start method at all. You must call it in this scenario to generate a working .exe.
I use time.sleep for lots of reasons
But how do I apply a time.sleep to only one variable?
Is there something like:

import time
time.sleep(j(10))  # <- Focus Here

that I don't know about? Do I have to use another command, or is it not available in Python at all?
time.sleep has Python pause for the amount of time that you specify. It pauses the entire program, so time.sleep won't work on just one variable. I'm assuming you want to stop working on one variable while continuing to work with another? Since Python goes line by line through your code, you would simply stop writing code that affects that variable. Then, once you want to start working on it again, you write code that touches it again.
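If the goal is really to delay an update to just one variable while the rest of the program keeps running, one workaround (a sketch using a background thread; the names here are made up for illustration) is:

```python
import threading
import time

results = {"a": None, "b": None}

def set_later(key, value, delay):
    # Set results[key] after `delay` seconds without pausing
    # the rest of the program.
    def work():
        time.sleep(delay)
        results[key] = value
    t = threading.Thread(target=work)
    t.start()
    return t

t = set_later("a", 10, 0.2)  # "a" will be filled in 0.2 s from now
results["b"] = 99            # meanwhile the main thread keeps going
t.join()                     # block only when we actually need "a"
print(results)               # -> {'a': 10, 'b': 99}
```

The sleep happens inside a worker thread, so only the code that depends on that one variable has to wait.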
I'm trying to create a GUI for some of my Python scripts at work using PyQt5.
I'm interested in running a series of tasks on separate processes (not threads). I've been using the concurrent.futures ProcessPoolExecutor to execute the jobs, and I've tried using the iterator from concurrent.futures.as_completed() to update the value of my QProgressBar.
def join(self):
    for fut in concurrent.futures.as_completed(self._tasks):
        try:
            self.results.put(fut.result())
            self.dialogBox.setValue(self.results.qsize())
        except concurrent.futures.CancelledError:
            break
However, my method seems to block the GUI even though the work is running in other processes.
Is it possible to update the progress bar without blocking the GUI?
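The loop blocks because as_completed waits for results on whatever thread calls it. One common fix is to move that loop into a worker thread and only send progress updates back to the GUI (in Qt, by emitting a signal). Below is a self-contained sketch of that pattern; it uses a ThreadPoolExecutor and a plain callback in place of the Qt pieces, so the names are illustrative rather than taken from the question:

```python
import concurrent.futures
import queue
import threading

def task(n):
    return n * n

def join_in_background(tasks, results, on_progress):
    # Run the blocking as_completed loop in a worker thread so the
    # caller (e.g. the Qt event loop) is never blocked.
    def drain():
        for fut in concurrent.futures.as_completed(tasks):
            try:
                results.put(fut.result())
                on_progress(results.qsize())  # in Qt: emit a signal here
            except concurrent.futures.CancelledError:
                break
    t = threading.Thread(target=drain)
    t.start()
    return t

progress = []
results = queue.Queue()
with concurrent.futures.ThreadPoolExecutor() as pool:
    tasks = [pool.submit(task, n) for n in range(4)]
    t = join_in_background(tasks, results, progress.append)
    t.join()  # a GUI would keep processing events instead of joining
print(sorted(results.queue))  # -> [0, 1, 4, 9]
print(progress)               # -> [1, 2, 3, 4]
```

The same shape works with a ProcessPoolExecutor; the important part is that the GUI thread never calls as_completed itself, and that widget updates like setValue only happen on the GUI thread (via a signal/slot connection).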
I'm learning Python, so I'm no expert.
I have 3 different scripts that basically do the same thing.
Each script attaches a consumer to a RabbitMQ queue and processes the queue.
I would like to build a wrapper to run these 3 scripts, as a daemon that starts automatically with the system.
The wrapper should also handle errors, start a new child process if one of the subprocesses dies, and collect the output of each subprocess.
The structure is something like that:
main.py
|-->consumer_one.py
|-->consumer_two.py
|-->consumer_three.py
Could you suggest a package that manages process forking in a simple way?
Thank you so much.
You may want to use the concurrent.futures standard library module.
It is quite simple to use and very easy to manage.
Here is a quick and dirty example:
from concurrent.futures import ProcessPoolExecutor
import time

# import consumer_one
# import consumer_two
import a                        # stand-in for the real consumer modules
consumer_one, consumer_two = a, a

if __name__ == '__main__':
    pool = ProcessPoolExecutor()
    jobs = [pool.submit(module.start) for module in (consumer_one, consumer_two)]
    print(jobs)
    j1, j2 = jobs
    print(j1.running())
    while all(j.running() for j in jobs):
        time.sleep(1)
        print("all is well...")
    print("someone has died!")  # I guess now you can do something much more clever :)
    pool.shutdown()
    exit(1)
Read the docs for more info:
https://docs.python.org/3/library/concurrent.futures.html
After a few tests, I think the best solution is to install the package http://supervisord.org/
In my scenario it makes it much easier to restart the services when they die, and I can keep separate logs per process and attach specific event listeners.
Supervisor has a lot of good features for managing asynchronous services.
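For reference, a minimal supervisord program section for one of the consumers might look like this (the paths and program name are placeholders, not from the actual project):

```ini
[program:consumer_one]
command=python /path/to/consumer_one.py
autostart=true               ; start when supervisord starts
autorestart=true             ; respawn the process if it dies
stdout_logfile=/var/log/consumer_one.out.log
stderr_logfile=/var/log/consumer_one.err.log
```

One such section per consumer gives each script its own restart policy and log files, which covers the wrapper's requirements without writing any forking code.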
OK, so I am writing an app, which plays music with the pyGST bindings.
This requires the use of threads to handle playback. The bindings library handles most of the thread control for me, which is nice(and what I was looking for in them).
Now, I don't have a full grasp of this concept, so I would be grateful for some references. The way I understand it, I basically have to inform the app that it can use multiple threads.
I gathered this from the examples on the gstreamer site, where they use this call:
gtk.gdk.threads_init()
gtk.main()
According to here, this tells the app it can use multiple threads (more or less), which is where my assumption above came from.
That is the background. Now get this: I have placed those lines in my code, and they work fine. My app plays music rather than crashing whenever it tries. But something doesn't feel right.
In the examples I got those lines from, they use GTK for the whole GUI, but I want to use wxWidgets, so it feels wrong to call this GTK function to do this.
Is there a wx equivalent? Or is it OK to use this, and will it still work cross-platform?
Also, I have to figure out how to kill all these threads on exit (which it does not do right now). I see how they do it in the example using a GTK method again, so again, I'm looking for a wx equivalent.
PS: I think this (or the solution) may be related to the wx.App.MainLoop() function, but I am lost trying to understand how this loop works, so good references about that would be appreciated too, though not strictly necessary as long as I have a good solution.
Try using this instead:
import gobject
gobject.threads_init()
I wonder why this is not written in large print at the start of every piece of Python GStreamer plugin documentation: it only took me several hours to find it.
A bit more details here.
I have no experience with pyGST, but the general advice for using threads and wxPython is to only update the GUI from the main thread (i.e. the thread that starts the MainLoop). See http://wiki.wxpython.org/LongRunningTasks for more information.
I have no experience with the Python bindings, but I have had success using wxWidgets and GStreamer together on Windows. The problem is that wxWidgets runs a Windows event loop while GStreamer uses a GLib event loop. If you don't care about any of the GStreamer events, you shouldn't need to do anything. However, if you want to receive any of the GStreamer events, you will have to run your own GLib event loop (GMainLoop) in a separate thread with a separate GMainContext. Use gst_bus_create_watch to create a GST event source, add a callback to the source with g_source_set_callback, and then attach it to the main context of your GLib event loop with g_source_attach. You can then handle the GST events in the callback, for example by forwarding them to the wx main event loop.