I have a vector of entities. On each update cycle I iterate through the vector and update each entity: read its position, calculate its current speed, and write the updated position. During the update I may also change some other objects in other parts of the program, but each such object is related only to the current entity, and other entities will not touch it.
So, I want to run this code in threads. I split the vector into a few chunks and update each chunk in a different thread. As I see it, the threads are fully independent: on each iteration, every thread works with its own memory region and does not affect the work of the other threads.
Do I need any locks here? I assume everything should work without mutexes, etc. Am I right?
Short answer
No, you do not need any lock or synchronization mechanism, as your problem appears to be an embarrassingly parallel task.
Longer answer
A race condition can only appear if two threads access the same memory at the same time and at least one of the accesses is a write operation. If your program has this characteristic, then you need to make sure that threads access the memory in an ordered fashion. One way to do that is by using locks (it is not the only one, though). Otherwise the result is undefined behavior.
It seems that you have found a way to split the work among your threads such that each thread can work independently of the others. This is the best-case scenario for concurrent programming, as it does not require any synchronization. The complexity of the code decreases dramatically and the speedup is usually substantial.
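A minimal Java sketch of this chunked, lock-free scheme (the position/velocity arrays and the update rule are illustrative assumptions, not taken from the question):

```java
import java.util.Arrays;

public class ChunkedUpdate {
    // Splits the entity arrays into disjoint chunks and updates each chunk on its own thread.
    static double[] update(double[] position, double[] velocity, double dt, int nThreads) {
        Thread[] threads = new Thread[nThreads];
        int chunk = position.length / nThreads;
        for (int t = 0; t < nThreads; t++) {
            final int start = t * chunk;
            final int end = (t == nThreads - 1) ? position.length : start + chunk;
            // Each thread writes only indices [start, end): disjoint memory, so no locks are needed.
            threads[t] = new Thread(() -> {
                for (int i = start; i < end; i++) {
                    position[i] += velocity[i] * dt;
                }
            });
            threads[t].start();
        }
        try {
            for (Thread th : threads) th.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return position;
    }

    public static void main(String[] args) {
        double[] pos = new double[8];
        double[] vel = new double[8];
        Arrays.fill(vel, 1.5);
        System.out.println(Arrays.toString(update(pos, vel, 2.0, 4)));
    }
}
```

The Thread.join() at the end is the only synchronization point; it also establishes the happens-before edge that makes the workers' writes visible to the main thread.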
Please note that, as #acelent pointed out in the comment section, if you need changes made by one thread to be visible to another thread, you might need some form of synchronization: depending on the memory model and the hardware, changes made in one thread might not be immediately visible in another.
This means that you might write to a variable from Thread 1, read the same memory from Thread 2 some time later, and still not see the write made by Thread 1.
"I separate vector into few chunks and update each chunk in different threads": in this case you do not need any lock or synchronization mechanism. However, system performance might degrade considerably due to false sharing, depending on how the chunks are assigned to threads. Note that the compiler may eliminate false sharing by using thread-private temporary variables.
You can find plenty of information in books and wikis. Here is some info: https://software.intel.com/en-us/articles/avoiding-and-identifying-false-sharing-among-threads
There is also a Stack Overflow post on this: "does false sharing occur when data is read in openmp?"
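To make the false-sharing point concrete, here is a hedged Java sketch of the usual mitigation: each worker accumulates into a thread-private local variable and writes to the shared results array only once, instead of hammering adjacent array slots (which often sit on the same cache line) on every iteration. All names here are invented for illustration:

```java
public class FalseSharingDemo {
    // Each worker sums its slice of the data. Writing to results[idx] inside the
    // inner loop would put hot writes from different threads on the same cache
    // line (false sharing); accumulating in a local variable and writing once
    // at the end avoids that.
    static long sum(long[] data, int nThreads) {
        long[] results = new long[nThreads];
        Thread[] threads = new Thread[nThreads];
        int chunk = data.length / nThreads;
        for (int t = 0; t < nThreads; t++) {
            final int idx = t;
            final int start = t * chunk;
            final int end = (t == nThreads - 1) ? data.length : start + chunk;
            threads[t] = new Thread(() -> {
                long local = 0;                  // thread-private accumulator
                for (int i = start; i < end; i++) local += data[i];
                results[idx] = local;            // single write to the shared array
            });
            threads[t].start();
        }
        try {
            for (Thread th : threads) th.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        long total = 0;
        for (long r : results) total += r;
        return total;
    }
}
```

Both versions compute the same answer; the difference false sharing makes is in throughput, not correctness.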
In Vulkan, it is recommended to break the API calls into separate threads for better throughput. I am unsure which categories of calls are the computationally expensive ones that would cause a thread to block, and thus should be used asynchronously.
As I see it, these are the potential calls/family-of-calls that could take a long time to execute.
vkAcquireImageKHR()
vkQueueSubmit()
vkQueuePresentKHR()
memcpy into mapped memory
vkBegin/EndCommandBuffer
vkCmd* calls for drawing and compute
But the more I think about them, the more it seems that most would be fairly cheap to call. I'll explain my rationale, which is probably flawed.
vkAcquireImageKHR()
This could block, if you choose a timeout. But, it's likely that a sufficiently optimized app would call this function with a 0 timeout, and just do other work if the image is not yet available. So, this function can be made instant. There's no need to wait, if the app is smart enough.
vkQueueSubmit()
This function takes a fence, which will be signaled when the GPU has finished executing the command buffers, so it doesn't actually wait around while the GPU performs the work. I assume this is the function that starts the physical movement of the command buffer data to the GPU, but that it merely tells the hardware to read from some memory location and then returns as quickly as possible. So it wouldn't wait around while the command buffers get sent to the GPU.
vkQueuePresentKHR()
Signal to the GPU to send some image to the window/monitor. It doesn't have to wait for much, does it?
memcpy into mapped memory
This is probably slow.
vkCmd* calls
This family of calls is the one I'm most unsure about. When I read about threads and Vulkan, it's usually these calls that get put onto the threads. But, what are these calls doing, really? Are they building some opcode buffer, made up of some ints and pointers, to be sent to the GPU? If so, that should be extremely fast. The actual work would be carrying out the operations described by those opcodes.
Define "block". The traditional definition of "blocking" is waiting on some internal synchronization, thereby taking longer than would strictly be necessary for the operation. Doing a memcpy is not doing any synchronization; it's just copying data.
So you don't seem to be concerned about "blocking"; you're merely asking which operations are expensive.
vkQueueSubmit does not block. But that doesn't mean it's not expensive. It is not "tell[ing] the hardware to read from some memory location". Just look at its interface. It doesn't take a single command buffer; it takes an arbitrary number of them, grouped into batches, with each batch waiting on semaphores before execution, signaling semaphores after execution, and the whole operation signaling a fence.
You cannot reasonably expect an implementation of such a thing to merely copy some pointers around.
And that doesn't even get into the issues around different types of command buffers. Submitting SIMULTANEOUS_USE command buffers may require creating temporary copies of their buffered data, so that different batches can contain the same command buffer.
Now obviously, vkQueueSubmit is going to return well before any of the work it submits actually gets executed. But don't make the mistake of thinking that it's free to ship work off to the GPU. The Vulkan specification takes time out in a note to directly tell you not to call the function any more frequently than you can get away with:
Submission can be a high overhead operation, and applications should attempt to batch work together into as few calls to vkQueueSubmit as possible.
The reason to present on the same thread that submitted the CBs that generate the image being presented is not that any of those operations are necessarily slow. It's simple pragmatism: these three operations (acquire, submit, present) must happen in order, and the simplest and easiest way to ensure that is to do them on the same thread.
You cannot submit work that renders to a swapchain image until you have acquired it. Therefore, either you do it on the same thread, or you have to have some inter-thread communication pipe to tell the thread waiting to build the primary CB what the acquired image is. The two processes cannot overlap.
Unlike acquire, present is a queue operation. And both vkQueueSubmit and vkQueuePresentKHR require that access to their VkQueue parameters be "externally synchronized". That of course means that you cannot call them from different threads on the same VkQueue at the same time. So if you tried to do these in parallel, you'd need a mutex or something to synchronize CPU access to the VkQueue.
Whereas if you do them on the same thread, there's no need.
Additionally, in order to present an image, you must provide a semaphore that the present will wait on. This semaphore will get signaled by the batch that generates data for the image. Vulkan requires semaphore signal/wait pairs to be ordered; you cannot perform a queue operation that waits on a semaphore until the operation that signals that semaphore has been submitted. Therefore, either you do it on the same thread in sequence, or you use some inter-thread communication pipe to tell whatever thread is waiting to present the image that the submit operation that renders to it has been issued.
So what is to be gained by splitting these operations up onto different threads? They have to happen in sequence, so you may as well do them in sequence the easiest way that exists: on the same thread.
While timeline semaphores now allow you to call the present function before submitting the work that increments the semaphore counter, you still can't call them on separate threads (without synchronization) because they affect the same queue. So you may as well issue them on the same thread (though not necessarily in acquire, submit, present order).
Ultimately, it's not clear what the point of this exercise is. Yes, an individual vkCmd* call will be pretty fast. So what? In a real scene, you will be calling these functions thousands of times per frame. Spreading them evenly across 4 cores gains you roughly 4x on that recording work.
repository.data
.subscribeOn(Schedulers.io())
.map { data -> /* do some computations */ ... }
.subscribe()
Is it better in this case to switch to the computation scheduler before doing the map operation (.observeOn(Schedulers.computation()))?
What if we are observing multiple sources that depend on each other? Like getting data1, mapping it, then getting data2 based on data1, then again mapping it. In this case we'd have to change threads between every computational operation and data request.
There is no straight answer to this question; you always have to consider the specific case. There are, however, some rules you can follow, based on this knowledge:
The computation thread pool has a maximum number of threads, and its size is based on the device you use. Most commonly it is a pool of 4 threads.
The IO thread pool is basically unlimited, meaning that if you start 100 operations at the same time, 100 threads may be created, so be careful with its usage.
Switching threads always creates some drop in performance, because it is an additional operator and work can be forced to wait in a queue.
The real question is: is this task so heavy that I have to switch threads? In most cases a network call or database call takes the most time, and the other operators are very quick. A simple mapping, or iterating through an array of 1000 elements, is done basically in an instant.
Another question is: am I performing so many tasks in the background that I have to free this thread? Will it really help something? Is someone waiting for this Scheduler to free some thread?
Those are the rules off the top of my head. There may be something I have forgotten, but generally these steps help you decide whether you have to switch threads. Hope it helps :)
I was going through the Netflix open-source library Hystrix...
I saw a statement
"Today tens of billions of thread-isolated, and hundreds of billions of semaphore-isolated calls are executed via Hystrix every day at Netflix"
I would like to know the difference between these different types of calls.
First we need to see the difference between thread and semaphore isolation. In general, calling through a separate thread is more expensive than a semaphore because of the overhead, so for a large number of requests per second, semaphore isolation is something you can consider.
Secondly, with semaphore isolation the command is executed within the thread of the caller. This means that concurrent calls are not fully isolated from each other (unlike when you use a thread).
Lastly, with semaphore isolation, when there is a timeout the command can't be terminated (unless you specifically set that up). If you don't know what the client's behaviour will be, this would not be a nice thing to have.
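The gist of semaphore isolation can be sketched with java.util.concurrent.Semaphore. This is not Hystrix's actual implementation or API (the class and method names here are invented); it just illustrates that the command runs on the caller's own thread and that excess callers are rejected rather than queued:

```java
import java.util.concurrent.Semaphore;
import java.util.function.Supplier;

public class SemaphoreIsolation {
    private final Semaphore permits;

    SemaphoreIsolation(int maxConcurrent) {
        this.permits = new Semaphore(maxConcurrent);
    }

    <T> T execute(Supplier<T> command, Supplier<T> fallback) {
        // tryAcquire() never blocks: if all permits are taken, the call is
        // rejected immediately and the fallback runs instead.
        if (!permits.tryAcquire()) {
            return fallback.get();
        }
        try {
            return command.get();   // runs on the caller's thread: no isolation, no timeout
        } finally {
            permits.release();
        }
    }
}
```

Because there is no separate worker thread, there is nothing to interrupt when the command hangs, which is exactly the timeout limitation described above.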
I am applying my new found knowledge of threading everywhere and getting lots of surprises.
Example:
I used threads to add numbers in an array, and the outcome was different every time. The problem was that all of my threads were updating the same variable and were not synchronized.
What are some known thread issues?
What care should be taken while using threads?
What are good multithreading resources?
Please provide examples.
sidenote:(I renamed my program thread_add.java to thread_random_number_generator.java:-)
In a multithreading environment you have to take care of synchronization so that two threads don't clobber the state by performing modifications simultaneously. Otherwise you can have race conditions in your code (for an example, see the infamous Therac-25 accident). You also have to schedule the threads to perform various tasks, and then you have to make sure that your synchronization and scheduling don't cause a deadlock, where multiple threads wait for each other indefinitely.
Synchronization
Something as simple as increasing a counter requires synchronization:
counter += 1;
Assume this sequence of events:
counter is initialized to 0
thread A retrieves counter from memory to cpu (0)
context switch
thread B retrieves counter from memory to cpu (0)
thread B increases counter on cpu
thread B writes back counter from cpu to memory (1)
context switch
thread A increases counter on cpu
thread A writes back counter from cpu to memory (1)
At this point the counter is 1, but both threads tried to increase it. Access to the counter has to be synchronized by some kind of locking mechanism:
lock (myLock) {
    counter += 1;
}
Only one thread is allowed to execute the code inside the locked block. Two threads executing this code might result in this sequence of events:
counter is initialized to 0
thread A acquires myLock
context switch
thread B tries to acquire myLock but has to wait
context switch
thread A retrieves counter from memory to cpu (0)
thread A increases counter on cpu
thread A writes back counter from cpu to memory (1)
thread A releases myLock
context switch
thread B acquires myLock
thread B retrieves counter from memory to cpu (1)
thread B increases counter on cpu
thread B writes back counter from cpu to memory (2)
thread B releases myLock
At this point counter is 2.
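For a plain counter, an atomic read-modify-write can replace the lock entirely. A Java sketch using AtomicInteger (the rough .NET analogue would be Interlocked.Increment):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {
    // incrementAndGet() performs the read-increment-write as one indivisible
    // step, so the lost-update interleaving shown above cannot occur.
    static int countTo(int nThreads, int perThread) {
        AtomicInteger counter = new AtomicInteger(0);
        Thread[] threads = new Thread[nThreads];
        for (int t = 0; t < nThreads; t++) {
            threads[t] = new Thread(() -> {
                for (int i = 0; i < perThread; i++) counter.incrementAndGet();
            });
            threads[t].start();
        }
        try {
            for (Thread th : threads) th.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return counter.get();
    }
}
```

Every increment survives regardless of how the threads interleave, with no explicit lock.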
Scheduling
Scheduling is another form of synchronization: you have to use thread synchronization mechanisms like events, semaphores, message passing, etc. to start and stop threads. Here is a simplified example in C#:
AutoResetEvent taskEvent = new AutoResetEvent(false);
Task task;

// Called by the main thread.
public void StartTask(Task task) {
    this.task = task;
    // Signal the worker thread to perform the task.
    this.taskEvent.Set();
    // Return and let the task execute on another thread.
}

// Called by the worker thread.
void ThreadProc() {
    while (true) {
        // Wait for the event to become signaled.
        this.taskEvent.WaitOne();
        // Perform the task.
    }
}
You will notice that access to this.task probably isn't synchronized correctly, that the worker thread isn't able to return results back to the main thread, and that there is no way to signal the worker thread to terminate. All this can be corrected in a more elaborate example.
Deadlock
A common example of deadlock is when you have two locks and you are not careful how you acquire them. At one point you acquire lock1 before lock2:
public void f() {
    lock (lock1) {
        lock (lock2) {
            // Do something
        }
    }
}
At another point you acquire lock2 before lock1:
public void g() {
    lock (lock2) {
        lock (lock1) {
            // Do something else
        }
    }
}
Let's see how this might deadlock:
thread A calls f
thread A acquires lock1
context switch
thread B calls g
thread B acquires lock2
thread B tries to acquire lock1 but has to wait
context switch
thread A tries to acquire lock2 but has to wait
context switch
At this point thread A and B are waiting for each other and are deadlocked.
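The standard fix is to impose a single global lock order. A sketch in Java (the answer's examples are C#, but the idea is identical), where g has been reordered to take lock1 before lock2 so the wait-for cycle can never form:

```java
public class LockOrdering {
    private final Object lock1 = new Object();
    private final Object lock2 = new Object();
    int shared = 0;

    // Both methods now take lock1 before lock2. With one global lock order,
    // no thread can hold lock2 while waiting for lock1, so the f/g deadlock
    // above cannot happen.
    public void f() {
        synchronized (lock1) {
            synchronized (lock2) {
                shared += 1;
            }
        }
    }

    public void g() {
        synchronized (lock1) {      // was lock2 first; reordered to match f
            synchronized (lock2) {
                shared += 2;
            }
        }
    }
}
```

Any two threads calling f and g concurrently now always terminate, because whichever thread gets lock1 first is guaranteed to be able to take lock2 as well.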
There are two kinds of people that do not use multi threading.
1) Those that do not understand the concept and have no clue how to program it.
2) Those that completely understand the concept and know how difficult it is to get it right.
I'd make a very blatant statement:
DON'T use shared memory.
DO use message passing.
As a general advice, try to limit the amount of shared state and prefer more event-driven architectures.
I can't give you examples besides pointing you at Google. Search for threading basics, thread synchronisation and you'll get more hits than you know.
The basic problem with threading is that threads don't know about each other, so they will happily tread on each other's toes, like two people trying to get through one door: sometimes they will pass through one after the other, but sometimes they will both try to get through at the same time and get stuck. This is difficult to reproduce and difficult to debug, and it causes intermittent problems. If you have threads and see "random" failures, this is probably the problem.
So care needs to be taken with shared resources. If you and your friend want a coffee but there's only one spoon, you cannot both use it at the same time; one of you will have to wait for the other. The technique used to 'synchronise' this access to the shared spoon is locking. You make sure you get a lock on the shared resource before you use it, and let go of it afterwards. If someone else has the lock, you wait until they release it.
The next problem comes with those locks. Sometimes a program is complex enough that you take a lock, do something else, then access another resource and try to take a lock on it too; but some other thread holds that second resource, so you sit and wait... and if that second thread is waiting for the lock you hold on the first resource, it will sit and wait as well. Your app just sits there. This is called deadlock: two threads both waiting for each other.
Those two are the vast majority of thread issues. The answer is generally to lock for as short a time as possible, and to hold only one lock at a time.
I notice you are writing in Java and that nobody else has mentioned books, so Java Concurrency in Practice should be your multithreading bible.
-- What are some known thread issues? --
Race conditions.
Deadlocks.
Livelocks.
Thread starvation.
-- What care should be taken while using threads? --
Using multithreading on a single-processor machine to process multiple tasks, where each task takes approximately the same time, isn't always very effective. For example, you might decide to spawn ten threads within your program in order to process ten separate tasks. If each task takes approximately 1 minute to process and you use ten threads to do this processing, you won't have access to any of the task results for the whole 10 minutes. If instead you processed the same tasks using just a single thread, you would see the first result in 1 minute, the next result 1 minute later, and so on. If you can make use of each result without having to rely on all of the results being ready simultaneously, the single thread might be the better way of implementing the program.
If you launch a large number of threads within a process, the overhead of thread housekeeping and context switching can become significant. The processor will spend considerable time in switching between threads, and many of the threads won’t be able to make progress. In addition, a single process with a large number of threads means that threads in other processes will be scheduled less frequently and won’t receive a reasonable share of processor time.
If multiple threads have to share many of the same resources, you’re unlikely to see performance benefits from multi-threading your application. Many developers see multi-threading as some sort of magic wand that gives automatic performance benefits. Unfortunately multi-threading isn’t the magic wand that it’s sometimes perceived to be. If you’re using multi-threading for performance reasons, you should measure your application’s performance very closely in several different situations, rather than just relying on some non-existent magic.
Coordinating thread access to common data can be a big performance killer. Achieving good performance with multiple threads isn’t easy when using a coarse locking plan, because this leads to low concurrency and threads waiting for access. Alternatively, a fine-grained locking strategy increases the complexity and can also slow down performance unless you perform some sophisticated tuning.
Using multiple threads to exploit a machine with multiple processors sounds like a good idea in theory, but in practice you need to be careful. To gain any significant performance benefits, you might need to get to grips with thread balancing.
-- Please provide examples. --
For example, imagine an application that receives incoming price information from the network, aggregates and sorts that information, and then displays the results on the screen for the end user.
With a dual-core machine, it makes sense to split the task into, say, three threads. The first thread deals with storing the incoming price information, the second thread processes the prices, and the final thread handles the display of the results.
After implementing this solution, suppose you find that the price processing is by far the longest stage, so you decide to rewrite that thread’s code to improve its performance by a factor of three. Unfortunately, this performance benefit in a single thread may not be reflected across your whole application. This is because the other two threads may not be able to keep pace with the improved thread. If the user interface thread is unable to keep up with the faster flow of processed information, the other threads now have to wait around for the new bottleneck in the system.
And yes, this example comes directly from my own experience :-)
DON'T use global variables
DON'T use many locks (at best none at all, though that is practically impossible)
DON'T try to be a hero by implementing sophisticated, difficult MT protocols
DO use simple paradigms, e.g. share the processing of an array across n slices of the same size, where n should equal the number of processors
DO test your code on different machines (using one, two, many processors)
DO use atomic operations (such as InterlockedIncrement() and the like)
YAGNI
The most important thing to remember is: do you really need multithreading?
I agree with pretty much all the answers so far.
A good coding strategy is to minimise or eliminate the amount of data that is shared between threads as much as humanly possible. You can do this by:
Using thread-static variables (although don't go overboard on this, it will eat more memory per thread, depending on your O/S).
Packaging up all state used by each thread into a class, then guaranteeing that each thread gets exactly one state class instance to itself. Think of this as "roll your own thread-static", but with more control over the process.
Marshalling data by value between threads instead of sharing the same data. Either make your data transfer classes immutable, or guarantee that all cross-thread calls are synchronous, or both.
Try not to have multiple threads competing for the exact same I/O "resource", whether it's a disk file, a database table, a web service call, or whatever. This will cause contention as multiple threads fight over the same resource.
Here's an extremely contrived OTT example. In a real app you would cap the number of threads to reduce scheduling overhead:
All UI - one thread.
Background calcs - one thread.
Logging errors to a disk file - one thread.
Calling a web service - one thread per unique physical host.
Querying the database - one thread per independent group of tables that need updating.
Rather than guessing how to divvy up the tasks, profile your app and isolate the bits that are (a) very slow, and (b) could be done asynchronously. Those are good candidates for a separate thread.
And here's what you should avoid:
Calcs, database hits, service calls, etc - all in one thread, but spun up multiple times "to improve performance".
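One way to follow the "marshal data by value" advice above is message passing over a queue of immutable objects. A hedged Java sketch (the PriceUpdate class and its fields are invented for illustration, loosely echoing the price-feed example):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class MessagePassing {
    // An immutable message: once constructed it can be handed to another thread
    // without any locking, because nobody can modify it.
    static final class PriceUpdate {
        final String symbol;
        final double price;
        PriceUpdate(String symbol, double price) { this.symbol = symbol; this.price = price; }
    }

    // The queue is the only shared object; BlockingQueue does its own internal
    // synchronization, so producer and consumer never touch shared mutable state.
    static double consumeTotal(BlockingQueue<PriceUpdate> queue, int expected) {
        double total = 0;
        try {
            for (int i = 0; i < expected; i++) {
                total += queue.take().price;   // blocks until a message arrives
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<PriceUpdate> queue = new ArrayBlockingQueue<>(16);
        Thread producer = new Thread(() -> {
            try {
                queue.put(new PriceUpdate("ABC", 10.0));
                queue.put(new PriceUpdate("XYZ", 32.5));
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        System.out.println(consumeTotal(queue, 2));
        producer.join();
    }
}
```

Because the messages are immutable and the queue is thread-safe, there is nothing here for the programmer to lock.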
Don't start new threads unless you really need to. Starting threads is not cheap, and for short-running tasks starting the thread may actually take more time than executing the task itself. If you're on .NET, take a look at the built-in thread pool, which is useful in a lot of (but not all) cases. By reusing threads, the cost of starting them is reduced.
EDIT: A few notes on creating threads vs. using thread pool (.NET specific)
Generally try to use the thread pool. Exceptions:
Long-running CPU-bound tasks and blocking tasks are not ideal to run on the thread pool, because they will force the pool to create additional threads.
All thread pool threads are background threads, so if you need your thread to be foreground, you have to start it yourself.
If you need a thread with different priority.
If your thread needs more (or less) than the standard 1 MB stack space.
If you need to be able to control the life time of the thread.
If you need different behavior for creating threads than that offered by the thread pool (e.g. the pool will throttle creating of new threads, which may or may not be what you want).
There are probably more exceptions, and I am not claiming that this is the definitive answer. It is just what I could think of at the moment.
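The same pooling advice applies on the JVM; a minimal Java sketch using ExecutorService, where four pooled threads are created once and reused across many short tasks (the task itself is a toy):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PoolExample {
    // Submits many short tasks to a small fixed pool instead of starting one
    // thread per task: the pool's threads are created once and reused.
    static int sumOfSquares(int n) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            List<Future<Integer>> futures = new ArrayList<>();
            for (int i = 1; i <= n; i++) {
                final int k = i;
                futures.add(pool.submit(() -> k * k));   // cheap task; thread reuse pays off
            }
            int total = 0;
            for (Future<Integer> f : futures) total += f.get();
            return total;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```

Compare this with spawning n raw Threads: for tiny tasks the per-thread startup cost would dominate, which is exactly the point made above.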
I am applying my new found knowledge of threading everywhere
[Emphasis added]
DO remember that a little knowledge is dangerous. Knowing the threading API of your platform is the easy bit. Knowing why and when you need to use synchronisation is the hard part. Reading up on "deadlocks", "race-conditions", "priority inversion" will start you in understanding why.
The details of when to use synchronisation are both simple (shared data needs synchronisation) and complex (atomic data types used in the right way don't need synchronisation; knowing which data is really shared is the hard part): a lifetime of learning, and very solution-specific.
An important thing to take care of (with multiple cores and CPUs) is cache coherency.
I am surprised that no one has pointed out Herb Sutter's Effective Concurrency columns yet. In my opinion, this is a must read if you want to go anywhere near threads.
a) Always make only one thread responsible for a resource's lifetime. That way thread A won't delete a resource that thread B needs, as long as B has ownership of the resource.
b) Expect the unexpected
DO think about how you will test your code and set aside plenty of time for this. Unit tests become more complicated. You may not be able to manually test your code - at least not reliably.
DO think about thread lifetime and how threads will exit. Don't kill threads. Provide a mechanism so that they exit gracefully.
DO add some kind of debug logging to your code - so that you can see that your threads are behaving correctly both in development and in production when things break down.
DO use a good library for handling threading rather than rolling your own solution (if you can). E.g. java.util.concurrency
DON'T assume a shared resource is thread safe.
DON'T DO IT. E.g. use an application container that can take care of threading issues for you. Use messaging.
In .Net one thing that surprised me when I started trying to get into multi-threading is that you cannot straightforwardly update the UI controls from any thread other than the thread that the UI controls were created on.
There is a way around this, which is to use the Control.Invoke method to update the control on the other thread, but it is not 100% obvious the first time around!
Don't be fooled into thinking you understand the difficulties of concurrency until you've banged your head against them in a real project.
All the examples of deadlocks, livelocks, synchronization, etc, seem simple, and they are. But they will mislead you, because the "difficulty" in implementing concurrency that everyone is talking about is when it is used in a real project, where you don't control everything.
While your initial differences in sums of numbers are, as several respondents have pointed out, likely the result of a lack of synchronisation, be aware that, if you get deeper into the topic, you will in general not be able to exactly reproduce the numeric results of a serial program with a parallel version of the same program. Floating-point arithmetic is not strictly associative or distributive; heck, it's not even closed.
And I'd beg to differ with what, I think, is the majority opinion here. If you are writing multi-threaded programs for a desktop with one or more multi-core CPUs, then you are working on a shared-memory computer and should tackle shared-memory programming. Java has all the features to do this.
Without knowing a lot more about the type of problem you are tackling, I'd hesitate to write that 'you should do this' or 'you should not do that'.