I am trying to come up with a synchronization model for the following scenario:
I have a GUI thread that is responsible for CPU-intensive animation plus blocking I/O. The GUI thread retrieves images from the network (and puts them in a shared buffer); these images are processed (a CPU-intensive operation, done by a worker thread) and then animated (again CPU-intensive, done by the GUI thread).
The processing of images is done by a worker thread: it retrieves images from the shared buffer, processes them, and puts them in an output buffer.
There is only one CPU, and the GUI thread should not get scheduled out while it is animating the images (the animation has to be really smooth). This means that the worker thread should get the CPU only when the GUI thread is waiting for an I/O operation to complete.
How do I go about achieving this? This looks like a classic producer-consumer problem... but I am not quite sure how I can guarantee that the animation will be as smooth as possible (I am open to using more threads).
I would like to use QThreads (Qt framework) for platform independence, but I can consider pthreads for more control (currently we are only aiming for Linux).
Any ideas?
EDIT:
I guess the problem boils down to one thing: how do I ensure that the animation thread is not interrupted while it is animating the images? (The animation runs when the user goes from one page to the other; all the images in the new page are animated before being shown in their proper place. This is a small operation, but it must be really smooth.) The worker thread can only run when the animation is over.
Just thinking out loud here, but it sounds like you have two compute-intensive tasks, animation and processing, and you want animation to always have priority over processing. If that is correct then maybe instead of having these tasks in separate threads you could have a single thread that handles both animation and processing.
For instance, the thread could have two task-queues, one for animation jobs and one for processing jobs, and it only starts a job from the processing queue when the animation queue is empty. But, this will only work well if each individual processing job is relatively small and/or interruptible at arbitrary positions (otherwise animation jobs will get delayed, which is not what you want).
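A minimal sketch of that two-queue idea in plain C++ (the JobRunner name and the job representation are mine, just for illustration):

#include <deque>
#include <functional>
#include <utility>

class JobRunner {
public:
    void addAnimationJob(std::function<void()> job) { animJobs.push_back(std::move(job)); }
    void addProcessingJob(std::function<void()> job) { procJobs.push_back(std::move(job)); }

    // Called repeatedly from the single thread's loop. Animation jobs
    // always drain first; processing runs only when no animation is pending.
    void runOne() {
        if (!animJobs.empty()) {
            auto job = std::move(animJobs.front());
            animJobs.pop_front();
            job();
        } else if (!procJobs.empty()) {
            auto job = std::move(procJobs.front());
            procJobs.pop_front();
            job();
        }
    }

private:
    std::deque<std::function<void()>> animJobs; // always has priority
    std::deque<std::function<void()>> procJobs; // runs only when idle
};

The key point is that each processing job must be small (e.g. one image, or one slice of an image), so an incoming animation job never waits long.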
The first big question is: do I really need threads? Qt's event system and network objects make it easy to avoid the technical burden of threads and all the snags that come with them.
Have a look at alternative ways to address the issues here and here. These techniques work great if you are sticking to pure Qt code and do not depend on a 3rd party library. If you must use a 3rd party lib that makes blocking calls then sure, you can use threads.
Here is an example of a producer-consumer.
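For instance, a minimal sketch along the lines of the classic Qt semaphore-based producer-consumer; the buffer size, the QByteArray payload, and the helper bodies are placeholders, not real network/processing code:

#include <QCoreApplication>
#include <QSemaphore>
#include <QThread>

const int BufferSize = 32;
QByteArray buffer[BufferSize];    // shared ring buffer of images
QSemaphore freeSlots(BufferSize); // slots the producer may fill
QSemaphore usedSlots;             // slots the consumer may read (starts at 0)

class Producer : public QThread {
protected:
    void run() override {
        for (int i = 0; ; ++i) {
            freeSlots.acquire();                   // block until a slot is free
            buffer[i % BufferSize] = QByteArray(); // placeholder: fetch image here
            usedSlots.release();                   // hand the slot to the consumer
        }
    }
};

class Consumer : public QThread {
protected:
    void run() override {
        for (int i = 0; ; ++i) {
            usedSlots.acquire();                   // block until data is available
            // placeholder: CPU-intensive processing of buffer[i % BufferSize]
            freeSlots.release();                   // recycle the slot
        }
    }
};

int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);
    Producer producer;
    Consumer consumer;
    producer.start();
    consumer.start();
    return app.exec();
}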
Also have a look at Advanced Qt Programming: Creating Great Software with C++ and Qt 4
My advice is to start without threads and see how it fares. You can always refactor to threads afterwards. So it's best to design your objects/architecture without too much coupling.
If you want you can post some code to give more context.
The SDL documentation for threading states:
NOTE: You should not expect to be able to create a window, render, or receive events on any thread other than the main one.
The glfw documentation for glfwCreateWindow states:
Thread safety: This function must only be called from the main thread.
I have read about issues regarding the glut library from people who have tried to run the windowing functions on a second thread.
I could go on with these examples, but I think you get the point I'm trying to make. A lot of cross-platform libraries don't allow you to create a window on a background thread.
Now, two of the libraries I mentioned are designed with OpenGL in mind, and I get that OpenGL is not designed for multithreading and you shouldn't do rendering on multiple threads. That's fine. The thing that I don't understand is why the rendering thread (the single thread that does all the rendering) has to be the main one of the application.
As far as I know, neither Windows nor Linux nor macOS imposes any restrictions on which threads can create windows. I do know that windows have affinity to the thread that creates them (only that thread can receive input for them, etc.); but still, that thread does not need to be the main one.
So, I have three questions:
Why do these libraries impose such restrictions? Is it because there is some obscure operating system that mandates that all windows be created on the main thread, and so all operating systems have to pay the price? (Or did I get it wrong?)
Why do we have this imposition that you should not do UI on a background thread? What do threads have to do with windowing, anyways? Is it not a bad abstraction to tie your logic to a specific thread?
If this is what we have and can't get rid of it, how do I overcome this limitation? Do I make a ThreadManager class and yield the main thread to it so it can schedule what needs to be done in the main thread and what can be done in a background thread?
It would be amazing if someone could shed some light on this topic. All the advice I see thrown around is to just do input and UI both on the main thread. But that's just an arbitrary restriction if there isn't a technical reason why it isn't possible to do otherwise.
PS: Please note that I am looking for a cross platform solution. If it can't be found, I'll stick to doing UI on the main thread.
While I'm not quite up to date on the latest releases of MacOS/iOS, as of 2020 Apple UIKit and AppKit were not thread safe. Only one thread can safely change UI objects, and unless you go to a lot of trouble that's going to be the main thread. Even if you do go to all the trouble of closing the window manager connection etc etc you're still going to end up with one thread only doing UI. So the limitation still applies on at least one major system.
While it's possibly unsafe to directly modify the contents of a window from any other thread, you can do software rendering to an offscreen bitmap image from any thread you like, taking as long as you like. Then hand the finished image over to the main thread for rendering. (That "possibly" is why cross-platform toolkits disallow it or tell you not to do it. Sometimes it might work, but you can't say why, or even be sure that it will keep working.)
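A rough sketch of that hand-over in plain C++ (the actual blit is toolkit-specific, so it is left as a comment):

#include <atomic>
#include <cstdint>
#include <mutex>
#include <thread>
#include <vector>

std::mutex frameMutex;
std::vector<uint32_t> readyFrame;        // finished pixels, guarded by frameMutex
std::atomic<bool> frameAvailable{false};

// Runs on a background thread, started once with std::thread(renderWorker, w, h).
void renderWorker(int w, int h) {
    for (;;) {
        std::vector<uint32_t> pixels(w * h);
        // ... expensive software rendering into 'pixels', taking as long as it takes ...
        {
            std::lock_guard<std::mutex> lock(frameMutex);
            readyFrame.swap(pixels);     // hand the finished image over
        }
        frameAvailable = true;
    }
}

// Called on the main/UI thread, e.g. once per frame or vsync.
void presentIfReady() {
    if (!frameAvailable.exchange(false))
        return;                          // nothing new to show
    std::lock_guard<std::mutex> lock(frameMutex);
    // blit 'readyFrame' into the window using the toolkit's drawing API
}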
With Vulkan and DirectX 12 (and I think but am not sure Metal) you can render from multiple threads. Woohoo! Of course now you have to figure out how to do all the coordination and locking and cross-synching without making the whole thing slower than single threaded, but at least you have the option to try.
Adding to the excellent answer by Matt, with Qt programs you can use invokeMethod and postEvent to have background threads update the UI safely.
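A small sketch of the invokeMethod variant; 'label' is just an example widget assumed to live on the GUI thread (the functor overload needs Qt 5.10 or later):

#include <QLabel>
#include <QMetaObject>
#include <QString>

// Called from a worker thread. The lambda is queued onto the thread
// that owns 'label' (the GUI thread) and executed there, so touching
// the widget is safe.
void reportResult(QLabel *label, const QString &text) {
    QMetaObject::invokeMethod(label, [label, text] {
        label->setText(text);
    }, Qt::QueuedConnection);
}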
It's highly unlikely that any of these frameworks actually care about which thread is the 'main thread', i.e., the one that called the entry point to your code. The real restriction is that you have to do all your UI work on the thread that initialized the framework, i.e., the one that called SDL_Init in your case. You will usually do this in your main thread. Why not?
Multithreaded code is difficult to write and difficult to understand, and in UI work, introducing multithreading makes it difficult to reason about when things happen. A UI is a very stateful thing, and when you're writing UI code, you usually need to have a very good idea about what has happened already and what will happen next -- those things are often undefined when multithreading is involved. Also, users are slow, so multithreading the UI is not really necessary for performance in normal cases. Because of all this, making a UI framework thread-safe isn't usually considered beneficial. (multithreading compute-intensive parts of your rendering pipeline is a different thing)
Single-threaded UI frameworks have a dispatcher of some sort that you can use to enqueue activities that should happen on the main thread when it next has time. In SDL, you use SDL_PushEvent for this. You can call that from any thread.
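For example, a sketch of that pattern in SDL (the event name and the handler body are made up for illustration):

#include <SDL.h>

static Uint32 WORK_DONE; // custom event type, registered once at startup

void initCustomEvents() {
    WORK_DONE = SDL_RegisterEvents(1);
}

// Safe to call from any thread: SDL_PushEvent is thread-safe.
void notifyWorkDone(void *result) {
    SDL_Event e;
    SDL_zero(e);
    e.type = WORK_DONE;
    e.user.data1 = result;
    SDL_PushEvent(&e);
}

// Main-thread event loop: all UI work happens here.
void runEventLoop() {
    SDL_Event e;
    while (SDL_WaitEvent(&e)) {
        if (e.type == WORK_DONE) {
            // pick up e.user.data1 and update the window/UI
        } else if (e.type == SDL_QUIT) {
            break;
        }
    }
}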
I have a wp7 app with two background threads:
1. Planning of time
2. Playing different sound samples at the planned times (possibly several samples at the same time).
How do I replicate this logic with the Unity3D engine? Is it possible?
Unity will not allow you to access its APIs from any thread other than the main one; you can't use locking primitives to get around it.
You can use the standard .NET threading APIs to start threads that do not interact directly with the Unity API, though. You could calculate samples and buffers on an extra thread, but your main thread would have to call AudioClip.SetData to submit the calculated samples to Unity.
Note that since Unity 2018.1, the Job System has been introduced which allows certain kinds of computation tasks to be performed on background threads (for example, setting transform positions). The tasks that can be performed are being gradually opened up over time.
The fact that the API is not threadsafe does not mean that you cannot use it with additional thread safety. You only need to ensure that no two threads modify the common data at the same time. You can use a simple lock variable to ensure no one reads the samples list while it is being updated.
However, instead of threads I'd recommend using coroutines, because they make things a lot easier. No thread safety is needed, the benefits are similar and the execution order is way clearer.
A simpler way to achieve a similar solution would be to update the samples list inside Update, and read it in a LateUpdate method.
No way =( Unity API is not threadsafe: link
I have a Delphi 6 application that receives and processes an image stream from an external camera. I have the code on a background thread since it is CPU heavy and I don't want it interfering with the user interface code that runs on the main thread. I want to update a rectangular area on a form or frame with the TBitmaps I create from the camera's JPEG frames, which are received at a rate of 25 frames per second.
I want to know what method will give me the best performance and what Windows API calls or Delphi calls to use to do it. I would guess I should not use a TImage or TPicture or similar VCL component because they run on the main thread and I'm pretty sure trying to get anything done via a Synchronize() call is going to be inefficient and has the potential to slow down the threads involved. I would also want a technique that provides a smooth video display like double buffered controls do without any "striping" effects. Also, any tips on proper Canvas locking or device context management, etc. would be appreciated, especially tips on avoiding common mistakes in freeing resources.
Of course, a link to a good code sample that does what I need would be great.
AFAIK TBitmap is thread-safe if you work only on its canvas. Synchronize is needed if you send GDI messages and need to refresh the screen, but from my experiments, TBitmap.Canvas is just a wrapper around the thread-safe Windows API. If you process the bitmap with pixel arithmetic (using e.g. Scanline), one unique bitmap per thread, you can do it in the background.
But I suspect using TBitmap is not the most efficient way. Give a try to http://graphics32.org or http://aggpas.org, which are very fast ways to work on bitmaps.
If you can, as imajoosy proposed, the best way to process your input stream is to use DirectX's streaming processing abilities.
For thread-safe processing, if each thread is about to consume 100% of its core (which is very likely for image processing), the usual advice is to create NumberOfCPU-1 threads for the processing. For instance, you could create a pool of threads, then let those consume the bitmaps from the input stream.
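A language-agnostic sketch of that pool idea (shown in C++ rather than Delphi; Frame and the processing body are placeholders):

#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Frame { /* pixel data for one camera frame */ };

std::queue<Frame> input;     // filled by the thread receiving the stream
std::mutex m;
std::condition_variable cv;

void worker() {
    for (;;) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return !input.empty(); });
        Frame f = std::move(input.front());
        input.pop();
        lock.unlock();
        // placeholder: JPEG decode / pixel processing of 'f', outside the lock
    }
}

int main() {
    unsigned cores = std::thread::hardware_concurrency();
    unsigned n = cores > 1 ? cores - 1 : 1;   // NumberOfCPU-1 workers
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < n; ++i)
        pool.emplace_back(worker);
    // ... the receiving thread pushes Frames into 'input' under the
    // mutex and calls cv.notify_one() for each ...
    for (auto &t : pool)
        t.join();             // never reached in this sketch
}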
If anybody has had a lot of experience timing code running on the main VCL thread vs a background thread, I'd like to get an opinion. I have some code that does some heavy string processing running in my Delphi 6 application on the main thread. Each time I run an operation, the time for each operation hovers around 50 ms on a single thread on my i5 Quad core. What makes me really suspicious is that the same code running on an old Pentium 4 that I have, shows the same time for the operation when usually I see code running about 4 times slower on the Pentium 4 than the Quad Core. I am beginning to wonder if the code might be consuming significantly less time than 50 ms but that there's something about the main VCL thread, perhaps Windows message handling or executing Windows API calls, that is creating an artificial "floor" for the operation. Note, an operation is triggered by an incoming request on a socket if that matters, but the time measurement does not take place until the data is fully received.
Before I undertake the work of moving all the code on to a background thread for testing, I am wondering if anyone has any general knowledge in this area? What have your experiences been with code running on and off the main VCL thread? Note, the timing measurements are being done when there is absolutely no user triggered activity going on during the tests.
I'm also wondering if raising the priority of the thread to just below real-time would do any good. I've never seen much improvement in my run times when experimenting with those flags.
-- roschler
Given all threads have the same priority, as they normally do, there can't be a difference, for the following reasons. If you're seeing a difference, re-evaluate the code (make sure you run the same thing in both VCL and background threads) and make sure you time it properly:
The compiler generates the exact same code, it doesn't care if the code is going to run in the main thread or in a background thread. In fact you can put the whole code in a procedure and call that from both your worker thread's Execute() and from the main VCL thread.
For the CPU all cores, and all threads, are equal. Unless it's actually a Hyper Threading CPU, where not all cores are real, but then see the next bullet.
Even if not all CPU cores are equal, it is very unlikely that your thread will always run on the same core; the operating system is free to move it around at will (and does actually schedule your thread to run on different cores at different times).
Messaging overhead doesn't matter for the main VCL thread, because unless you're calling Application.ProcessMessages() manually, the message pump is simply stopped while your procedure does its work. The message pump is passive: your thread needs to request messages from the queue, but since the thread is busy doing your work, it's not requesting any messages, so there's no overhead there.
There's just one place where threads are not equal, and this can change the perceived speed of execution: It's the operating system that schedules threads to execution units (cores), and for the operating system threads have different priorities. You can tell the OS a certain thread needs to be treated differently using the SetThreadPriority() API (which is used by the TThread.Priority property).
Without simple source code to reproduce the issue, and how you are timing your threads, it will be difficult to understand what occurs in your software.
Sounds definitely like one of:
An Architecture issue - how are your threads defined?
A measurement issue - how are you timing your threads?
A typical scaling issue of both the memory manager and the RTL string-related implementation.
About the last point, consider this:
The current memory manager (FastMM4) does not scale well on multi-core CPUs; try a per-thread memory manager, like our experimental SynScaleMM. Note e.g. that the Free Pascal Compiler team has recently written a new scaling MM from scratch to avoid this issue;
Try changing the string processing implementation to avoid memory allocation (use static buffers) and string reference-counting (every reference-counted string access produces a LOCK DEC/INC, which does not scale well on multi-core CPUs): use per-thread char-level processing, e.g. PChar on static buffers instead of string.
I'm sure that without string operations, you'll find that all threads are equivalent.
In short: neither the current Delphi MM nor the current string implementation scales well on multi-core CPUs. You have just found a known issue in the current RTL. Read this SO question.
When your code has control of the VCL thread, for instance if it is in one method and doesn't call out to any VCL controls or call Application.ProcessMessages, then the run time will not be affected just because it's in the main VCL thread.
There is no overhead, since you "own" the whole processing power of the thread when you are in your own code.
I would suggest that you use a profiling tool to find where the actual bottleneck is.
Performance can't be assessed statically. For that you need to get AQTime, or some other performance profiler for Delphi. I use AQtime, and I love it, but I'm aware it's considered expensive.
Your code will not magically get faster just because you moved it to a background thread. If anything, your all-inclusive-time until you see results in your UI might get a little slower, if you have to send a lot of data from the background thread to the foreground thread via some synchronization mechanisms.
If however you could execute parts of your algorithm in parallel, that is, split your work so that you have 2 or more worker threads processing your data, and you have a quad core processor, then your total time to do a fixed load of work, could decrease. That doesn't mean the code would run any faster, but depending on a lot of factors, you might achieve a slight benefit from multithreading, up to the number of cores in your computer. It's never ever going to be a 2x performance boost, to use two threads instead of one, but you might get 20%-40% better performance, in your more-than-one-threaded parallel solutions, depending on how scalable your heap is under multithreaded loads, and how IO/memory/cache bound your workload is.
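To make that concrete, here is a generic sketch (in C++, not Delphi) of splitting a fixed workload across worker threads; the per-item doubling is a stand-in for the real work:

#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

void processSlice(std::vector<int> &data, size_t begin, size_t end) {
    for (size_t i = begin; i < end; ++i)
        data[i] *= 2;   // stand-in for the real per-item work
}

void processParallel(std::vector<int> &data, unsigned nThreads) {
    std::vector<std::thread> workers;
    size_t chunk = data.size() / nThreads;
    for (unsigned t = 0; t < nThreads; ++t) {
        size_t begin = t * chunk;
        size_t end = (t + 1 == nThreads) ? data.size() : begin + chunk;
        workers.emplace_back(processSlice, std::ref(data), begin, end);
    }
    for (auto &w : workers)
        w.join();  // total wall time drops; per-item cost does not
}

Each thread owns a disjoint slice, so no locking is needed; the speedup then depends mostly on how heap/memory bound the per-item work is, as described above.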
As for raising thread priorities, generally all you will do there is upset the delicate balance of your Windows system's performance. By raising the priorities you will achieve (sometimes) a nominal, but unrepeatable and non-guaranteeable increase in performance. Depending on the other things you do in your code, and your data sources, playing with priorities of threads can introduce subtle problems. See Dining Philosophers problem for more.
Your best bet for optimizing the speed of string operations is to first test it and find out exactly where it is spending most of its time. Is it heap operations? Memory copy and move operations? Without a profiler, even with advice from other people, you will still be committing a cardinal sin of programming: premature optimization. Be results oriented. Be science based. Measure. Understand. Then decide.
Having said that, I've seen a lot of horrible code in my time, and there is one killer thing that people do that totally kills their threaded app performance; Using TThread.Synchronize too much.
Here's a pathological (Extreme) case, that sadly, occurs in the wild fairly frequently:
procedure TMyThread.Execute;
begin
while not Terminated do
Synchronize(DoWork);
end;
The problem here is that 100% of the work is really done in the foreground, other than the "if terminated" check, which executes in the thread context. To make the above code even worse, add a non-interruptible sleep.
For fast background thread code, use Synchronize sparingly or not at all, and make sure the code it calls is simple and executes quickly, or better yet, use TThread.Queue or PostMessage if you could really live with queueing main thread activity.
Are you asking if a background thread would be faster? If your background thread would run the same code as the main thread and there's nothing else going on in the main thread, you don't stand to gain anything with a background thread. Threads should be used to split and distribute processing loads that would otherwise contend with one another and/or block one another when running in the main thread. Since you seem to be dealing with a case where your main thread is otherwise idle, simply spawning a thread to run slow code will not help.
Threads aren't magic, they can't speed up slow code or eliminate processing bottlenecks in a particular segment not related to contention on the main thread. Make sure your code isn't doing something you don't know about and that your timing methodology is correct.
My first hunch would be that your interaction with the socket is affecting your timing in a way you haven't detected... (I know you said you're sure that's not involved - but maybe check again...)
So I have built a small application that has a physics engine and a display. The display is attached to a controller which handles the physics engine (well, actually a view model that handles the controller, but details).
Currently the controller is a delegate that gets activated by a BeginInvoke and deactivated by a cancellation token, and then reaped by an EndInvoke. Inside the lambda it raises PropertyChanged (hooked into INotifyPropertyChanged), which keeps the UI up to date.
From what I understand, the BeginInvoke method activates a task rather than another thread (on my computers it does activate another thread, but from the reading I have done this isn't a guarantee; it's up to the thread pool how it wants to get the task completed), which is fine from all the testing I have done. The lambda doesn't complete until a CancellationToken is killed. It has a sleep and an update (so it is sort of simulating a real-time physics engine... it's crude, but I don't need real precision on the real time, just enough to get a feel).
The question I have is: will this work on other computers, or should I switch over to explicit threads that I start and cancel? The scenario I am thinking of is a 1-core processor: is it possible the second task will get massively less processor time and thereby turn my acceptably inaccurate model into something unacceptably inaccurate (i.e. waiting for milliseconds before switching rather than microseconds)? Or is there some better way of doing this that I haven't come up with?
In my experience, using the threadpool in the way you described will pretty much guarantee reasonably optimal performance on most computers, without you having to go to the trouble to figure out how to divvy up the threads.
A thread is not the same thing as a core; you will still get multiple threads on a single-core machine, and those threads will each take part of the processing load. You won't get the "deadlock" condition you describe, unless you do something unusual with the threads, like give one of them real-time priority.
That said, microseconds is not a lot of time for context switching between threads, so YMMV. You'll have to try it, and see how well it works; there may be some tweaking required.