As a user, I want a status bar (or similar) to notify me that a job is running when using a wxPython GUI app - multithreading

Can someone recommend a straightforward way of adding some type of graphical notification (status bar, spinning clocks, etc.) to my wxPython GUI application? Currently, it searches logs on a server for unique strings, and often takes upwards of 3-4 minutes to complete. It would be convenient to have some type of display letting a user know the status of the job as it works toward completion. I'm afraid that adding a feature like this may mean I have to look at using threads, and I'm a complete newbie to Python. Any help and direction will be appreciated.

Yes, you'd need to use threads or queues or something similar. Fortunately, there are some excellent examples here: http://wiki.wxpython.org/LongRunningTasks and this tutorial I wrote is pretty straightforward too: http://www.blog.pythonlibrary.org/2010/05/22/wxpython-and-threads/
Using threads isn't that hard. Basically, you put the long-running part in the thread and, every so often, send a status update to your GUI using a thread-safe method like wx.PostEvent or wx.CallAfter. Then you can update your status bar or a progress bar or whatever you're using.
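For example, here is a minimal wxPython sketch of that pattern; the long-running search is faked with time.sleep, and the names beyond the wx API are just illustrative:

import threading
import time

import wx

class MainFrame(wx.Frame):
    def __init__(self):
        super().__init__(None, title="Log search")
        self.CreateStatusBar()
        self.SetStatusText("Idle")
        button = wx.Button(self, label="Start search")
        button.Bind(wx.EVT_BUTTON, self.on_start)

    def on_start(self, event):
        # Run the long job in a worker thread so the GUI stays responsive.
        threading.Thread(target=self.long_search, daemon=True).start()

    def long_search(self):
        # Stand-in for the real 3-4 minute log search.
        for step in range(1, 6):
            time.sleep(1)
            # wx.CallAfter marshals the update back onto the GUI thread.
            wx.CallAfter(self.SetStatusText, "Searching... step %d of 5" % step)
        wx.CallAfter(self.SetStatusText, "Done")

if __name__ == "__main__":
    app = wx.App(False)
    MainFrame().Show()
    app.MainLoop()

A wx.Gauge (progress bar) works the same way: keep a reference to it and update it from the worker via wx.CallAfter.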

Related

Sleep() Methods and OS - Scheduler (Camunda/Groovy)

I have a question for you, and it's not as specific as usual, which could make it a little annoying to answer.
The tool I'm working with is Camunda in combination with Groovy scripts, and the goal is to reduce the maximum CPU load (peak load). I'm doing this by "stretching" the workload over a certain time frame, since the platform seems unhappy with large workloads arriving in a short amount of time. The resulting problem is that Camunda won't react smoothly when someone tries to operate it at the UI level.
So I wrote a small script which basically lets each individual process determine its own "time to sleep" before running, if a certain threshold is exceeded. This is based on how many processes are trying to run at the same time as that process.
It looks like:
Process wants to start -> Process asks how many other processes are running ->
waitingTime = numberOfProcesses * timeToSleep * iterationOfMeasures
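In Python-flavoured pseudocode (the real script is Groovy; count_running_processes is a hypothetical helper and the constants are made-up values), the idea is roughly:

import time

TIME_TO_SLEEP = 2   # base delay in seconds (illustrative)
THRESHOLD = 5       # only delay when more processes than this are running

def staggered_start(count_running_processes, iteration_of_measures, run_job):
    # Each process delays itself in proportion to current platform load.
    number_of_processes = count_running_processes()  # hypothetical helper
    if number_of_processes > THRESHOLD:
        waiting_time = number_of_processes * TIME_TO_SLEEP * iteration_of_measures
        time.sleep(waiting_time)  # block this worker, spreading out the peak
    run_job()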
[Figure: CPU-usage curves 1 and 3 without the script; curves 2 and 4 with the script]
Testing it, I saw that I could stretch the workload and smooth out the UI behaviour. But now I need to describe exactly why this works.
The questions are:
What does a sleep method do, exactly?
What does the sleep method do at the CPU level?
How does an OS scheduler react to a sleep method? Namely: does it reschedule the process, or simply "wait" for the given time?
How can I recreate and test the questions above?
The main goal is not for you to answer these, but could you give me a hint for finding the right literature to answer them? Maybe you remember a book that helped you understand this kind of thing, or something a professor recommended to you. (Mine won't answer, and I can't blame him.)
I'm grateful for hints and/or recommendations!
I'm sure you could use a timer event:
https://docs.camunda.org/manual/7.15/reference/bpmn20/events/timer-events/
It allows you to postpone the next task trigger for some time defined by an expression.
About sleep in Java/Groovy: https://www.javamex.com/tutorials/threads/sleep.shtml
Calling sleep blocks the current thread in Groovy/Java/Camunda, so instead of doing something useful the thread is simply blocked.

Best way to implement background “timer” functionality in Python/Django

I am trying to implement a Django web application (on Python 3.8.5) which allows a user to create “activities” where they define an activity duration and then set the activity status to “In progress”.
The POST action to the View writes the new status, the duration, and the start time (the end time, based on start time and duration, could of course also be added here).
The back-end should then keep track of the duration and automatically change the status to “Finished”.
User actions can also change the status to “Finished” before the calculated end time (i.e. the timer no longer needs to be tracked).
I am fairly new to Python, so I need some advice on the smartest way to implement such a concept.
It needs to be efficient and scalable – I’m currently using a Heroku Free account so have limited system resources, but efficiency would also be important for future production implementations of course.
I have looked at the Python threading Timer, and this seems to work on a basic level, but I’ve not been able to determine what kind of constraints this places on the system – e.g. whether the spawned Timer thread might prevent the main thread from finishing and releasing resources (i.e. Heroku Dyno threads), etc.
I have read that persistence might be a problem (if the server goes down), and I haven’t found a way to cancel the timer from another process (the .cancel() method seems to rely on having the original object to cancel, and I’m not sure if this is achievable from another process).
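For reference, the basic threading.Timer pattern in question looks roughly like this (the activity update is just a placeholder):

import threading

def finish_activity(activity_id):
    # Placeholder: in the real app this would set the Activity's status to "Finished".
    print("Activity %s finished" % activity_id)

duration = 600  # seconds, illustrative
timer = threading.Timer(duration, finish_activity, args=(42,))
timer.start()

# Cancelling requires a reference to this exact Timer object, which is the
# problem: another process (or request handler) has no way to reach it.
timer.cancel()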
I was also wondering about a more “background” approach, i.e. a single process which is constantly checking the database looking for activity records which have reached their end time and swapping the status.
But what would be the best way of implementing such a server?
Is it practical to read the database every second to find records with an end time of “now”? I need the status to change in real-time when the end time is reached.
Is something like Celery a good option, or is it overkill for a single process like this?
As I said I’m fairly new to these technologies, so I may be missing other obvious solutions – please feel free to enlighten me!
Thanks in advance.
To achieve this you need some kind of task-scheduling functionality. For a quick, simple implementation, the Timer object from the threading module is a good solution.
A more complete solution is to use Celery. If you are new to it, digging in will pay off: Celery works as a queue manager, distributing your work easily across several threads or processes.
You mentioned that you want it to be efficient and scalable, so I guess you will eventually want similar functionality that requires multiprocessing and scheduling; for that reason my recommendation is to use Celery.
You can integrate it into your Django application easily by following the documentation: Integrate Django with Celery.
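A minimal sketch of the Celery route, assuming a hypothetical Activity model with status and duration fields and using Celery's countdown scheduling (all names are illustrative):

# tasks.py
from celery import shared_task

from .models import Activity  # hypothetical model

@shared_task
def finish_activity(activity_id):
    # Only flip the status if the user hasn't already finished it manually.
    Activity.objects.filter(id=activity_id, status="In progress").update(status="Finished")

# In the view that starts the activity:
# result = finish_activity.apply_async(args=[activity.id], countdown=activity.duration_seconds)
# activity.finish_task_id = result.id  # store the id if you want to revoke it later

Because the task re-checks the status before updating, an early manual "Finished" simply makes the scheduled task a no-op; storing the task id and calling revoke() is an optional optimisation.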

Simple Qt threading mechanism with progress?

I want to look for files with given extensions recursively from a given root directory and to display the number of files currently found in my GUI.
Since this kind of processing may be long, the GUI may be blocked.
I could just wait for the end of the processing and get the file count, but I am learning Qt (PyQt), so I see this as a training exercise.
So I have read Qt doc:
When to Use Alternatives to Threads, and I don't think it's for me.
Then I read:
Choosing an Appropriate Approach, and I think my solution is the first one:
Run a new linear function within another thread, optionally with progress updates during the run
But in this case you have 3 choices; Qt provides different solutions:
Place the function in a reimplementation of QThread::run() and start the QThread. Emit signals to update progress. OR
Place the function in a reimplementation of QRunnable::run() and add the QRunnable to a QThreadPool. Write to a thread-safe variable to update progress. OR
Run the function using QtConcurrent::run(). Write to a thread-safe variable to update progress.
Could you tell me how to choose the best one?
I have read some "solutions" but I'd like to understand why you should use one methodology instead of another one.
And also since I am looking for files, I may have a directory in which many files would match the search criteria. So it would mean lots of interruptions. Is there something special to keep in mind regarding this?
Thank you!
From what I know (hopefully others can chime in):
QThread offers signal support. For example, you'd be able to stop your concurrent function with a signal. I'm not sure how you'd do that with the other options, if at all.
Things to keep in mind: widgets all have to live in the main thread, but they can communicate with other threads via signals & slots.
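A minimal PyQt5 sketch of the first approach from the docs (subclass QThread, reimplement run(), emit a signal for progress); the root path and extensions are just placeholders:

import os
import sys

from PyQt5.QtCore import QThread, pyqtSignal
from PyQt5.QtWidgets import QApplication, QLabel

class SearchWorker(QThread):
    # Emitted with the running count of matching files.
    progress = pyqtSignal(int)

    def __init__(self, root, extensions):
        super().__init__()
        self.root = root
        self.extensions = extensions

    def run(self):
        count = 0
        for dirpath, dirnames, filenames in os.walk(self.root):
            for name in filenames:
                if name.endswith(self.extensions):
                    count += 1
                    # Cross-thread signal: delivered to the GUI via the event loop.
                    # In practice you might emit only every N files to avoid
                    # flooding the event loop when many files match.
                    self.progress.emit(count)

if __name__ == "__main__":
    app = QApplication(sys.argv)
    label = QLabel("0 files found")
    label.show()
    worker = SearchWorker("/tmp", (".log", ".txt"))
    worker.progress.connect(lambda n: label.setText("%d files found" % n))
    worker.start()
    sys.exit(app.exec_())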
Another quick thread on the topic w/ some decent bullet-points.
https://qt-project.org/forums/viewthread/50165/
Best of luck on your project, and welcome to Qt!

What are the benefits of coroutines?

I've been learning some Lua for game development. I had heard about coroutines in other languages, but really only came across them in Lua. I just don't really understand how useful they are. I've heard a lot of talk about how they can be a way to do multi-threaded things, but aren't they run in order? So what benefit would there be over normal functions, which also run in order? I'm just not getting how different they are from functions except that they can pause and let another run for a second. The use-case scenarios don't seem that huge to me.
Anyone care to shed some light as to why someone would benefit from them?
Especially insight from a game programming perspective would be nice^^
OK, think in terms of game development.
Let's say you're doing a cutscene or perhaps a tutorial. Either way, what you have are an ordered sequence of commands sent to some number of entities. An entity moves to a location, talks to a guy, then walks elsewhere. And so forth. Some commands cannot start until others have finished.
Now look back at how your game works. Every frame, it must process AI, collision tests, animation, rendering, and sound, among possibly other things. You can only think every frame. So how do you put this kind of code in, where you have to wait for some action to complete before doing the next one?
If you built a system in C++, what you would have is something that ran before the AI. It would have a sequence of commands to process. Some of those commands would be instantaneous, like "tell entity X to go here" or "spawn entity Y here." Others would have to wait, such as "tell entity Z to go here and don't process any more commands until it has gone here." The command processor would have to be called every frame, and it would have to understand complex conditions like "entity is at location" and so forth.
In Lua, it would look like this:
local entityX = game:GetEntity("entityX");
entityX:GoToLocation(locX);
local entityY = game:SpawnEntity("entityY", locY);
local entityZ = game:GetEntity("entityZ");
entityZ:GoToLocation(locZ);
repeat
    coroutine.yield();
until (entityZ:isAtLocation(locZ));
return;
On the C++ side, you would resume this script once per frame until it is done. Once it returns, you know that the cutscene is over, so you can return control to the user.
Look at how simple that Lua logic is. It does exactly what it says it does. It's clear, obvious, and therefore very difficult to get wrong.
The power of coroutines is in being able to partially accomplish some task, wait for a condition to become true, then move on to the next task.
Coroutines in a game:
Easy to use, easy to screw up when used in many places.
Just be careful not to use them in many places.
Don't make your entire AI code dependent on coroutines.
Coroutines are good for making a quick fix when a state is introduced that did not exist before.
This is exactly what Java does with sleep() and wait().
Both functions are the best ways to make it impossible to debug your game.
If I were you, I would completely avoid any code that has to use a wait() function the way a coroutine does.
The OpenGL API is something to take note of: it never uses a wait() function, but instead uses a clean state machine that knows exactly what state each object is in.
If you use coroutines, you end up with so many stateless pieces of code that it will almost surely be overwhelming to debug.
Coroutines are good when you are making an application like a text editor, banking application, server, or database (not a game).
They are bad when you are making a game where anything can happen at any point in time; there you need states.
So, in my view, coroutines are a bad way of programming and an excuse to write small stateless code.
But that's just me.
It's more like a religion: some people believe in coroutines, some don't. The use case, the implementation, and the environment all together determine whether there is a benefit or not.
Don't trust benchmarks which try to prove that coroutines on a multicore CPU are faster than a loop in a single thread: it would be a shame if they were slower!
If the same code later runs on hardware where all cores are always under load, it will turn out to be slower - oops...
So there is no benefit per se.
Sometimes it's convenient to use. But if you end up with tons of coroutines yielding and state that has gone out of scope, you'll curse coroutines. But at least it isn't the coroutine framework's fault; it's still you.
We use them on a project I am working on. The main benefit for us is that sometimes with asynchronous code, there are points where it is important that certain parts are run in order because of some dependencies. If you use coroutines, you can force one process to wait for another process to complete. They aren't the only way to do this, but they can be a lot simpler than some other methods.
I'm just not getting how different they are from functions except that
they can pause and let another run for a second.
That's a pretty important property. I worked on a game engine which used them for timing. For example, we had an engine that ran at 10 ticks a second, and you could WaitTicks(x) to wait x number of ticks, and in the user layer, you could run WaitFrames(x) to wait x frames.
Even professional native concurrency libraries use the same kind of yielding behaviour.
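Not Lua, but a rough Python analogue of that WaitTicks idea, using generators as coroutines (the scheduler and script here are toy code, made up for illustration):

def cutscene():
    # A script that "waits" by yielding how many ticks to pause.
    print("entity X: go to location")
    yield 10   # WaitTicks(10)
    print("entity Y: spawn")
    yield 5    # WaitTicks(5)
    print("cutscene finished")

def run(scripts, total_ticks):
    # Toy per-tick scheduler: resumes each script when its wait expires.
    waiting = {script: 0 for script in scripts}
    for tick in range(total_ticks):
        for script in list(waiting):
            if waiting[script] > 0:
                waiting[script] -= 1
            else:
                try:
                    waiting[script] = next(script)  # resume until the next yield
                except StopIteration:
                    del waiting[script]             # script is done

run([cutscene()], total_ticks=30)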
Lots of good examples for game developers. I'll give another in the application-extension space. Consider the scenario where the application has an engine that can run a user's routines in Lua while doing the core functionality in C. If the user needs to wait for the engine to get to a specific state (e.g. waiting for data to be received), you either have to:
multi-thread the C program to run Lua in a separate thread and add in locking and synchronization methods,
abend the Lua routine and retry from the beginning, passing a state to the function so it can skip work already done, lest you rerun some code that should only be run once, or
yield the Lua routine and resume it once the state has been reached in C
The third option is the easiest for me to implement, avoiding the need to handle multi-threading on multiple platforms. It also allows the user's code to run unmodified, appearing as if the function they called took a long time.

Outputting console data from a process to gui in wxwidgets

I'm running a long process in the background. I've managed to output the console data to the GUI, but the problem is that the data is returned only after the process has finished. I need to display the data in real time, i.e., every time the process produces some output on the console. I'm running the process within my GUI from a separate thread.
It would be like building a GUI for the ping command, where output is displayed on the console after each packet is sent, i.e., in real time. I just need to redirect that to the GUI, in real time. I'm implementing the GUI in wxWidgets. Any help would be greatly appreciated.
Thank you,
Jvc
Is the output you wish to display generated in a separate process from the process running the GUI? Or in a separate thread in the same process?
I ask because most people, when they ask this question, mean a separate thread. Since you have tagged your question with "process", I will assume that is what you mean.
You need some inter-process communication. There is a bewildering variety of techniques to do this. Personally, I always use sockets.
wxWidgets has simple, easy-to-use socket classes: wxSocketClient and wxSocketServer.
The background process is probably not running wxWidgets, so you will need something else there. I recommend boost::asio. I know it looks intimidating, but in fact the tutorial code can be used as is.
There is a lot more to be said, but I risk straying from the point, since there are so few details in your question.
You can have an output queue protected by a wxMutex. The thread doing the computation writes to the queue, then signals the GUI thread using wxQueueEvent with a custom event to let it know that the queue is not empty. The GUI thread then reads the queue and outputs the data.
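For a wxPython sketch of that queue-plus-custom-event pattern (a Python queue.Queue is already thread-safe, so it stands in for the mutex-protected queue, and the ping command is just an illustrative child process):

import queue
import subprocess
import threading

import wx
import wx.lib.newevent

# Custom event used to tell the GUI thread that the queue has new lines.
OutputEvent, EVT_OUTPUT = wx.lib.newevent.NewEvent()

class ConsoleFrame(wx.Frame):
    def __init__(self):
        super().__init__(None, title="Live output")
        self.text = wx.TextCtrl(self, style=wx.TE_MULTILINE | wx.TE_READONLY)
        self.lines = queue.Queue()
        self.Bind(EVT_OUTPUT, self.on_output)
        threading.Thread(target=self.read_process, daemon=True).start()

    def read_process(self):
        # Read the child process line by line as output appears.
        proc = subprocess.Popen(["ping", "-c", "4", "example.com"],
                                stdout=subprocess.PIPE, text=True)
        for line in proc.stdout:
            self.lines.put(line)
            wx.PostEvent(self, OutputEvent())  # thread-safe wake-up for the GUI

    def on_output(self, event):
        while not self.lines.empty():
            self.text.AppendText(self.lines.get())

if __name__ == "__main__":
    app = wx.App(False)
    ConsoleFrame().Show()
    app.MainLoop()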
