Is there ever any reason to add blocks to a serial dispatch queue asynchronously as opposed to synchronously?
As I understand it, a serial dispatch queue only starts executing the next task in the queue once the preceding task has completed. If that is the case, I can't see what you would gain by submitting some blocks asynchronously: the act of submission may not block the thread (since it returns straight away), but the task still won't be executed until the previous task finishes, so it seems to me that you don't really gain anything.
This question has been prompted by the following code - taken from a book chapter on design patterns. To prevent the underlying data array from being modified simultaneously by two separate threads, all modification tasks are added to a serial dispatch queue. But note that returnToPool adds tasks to this queue asynchronously, whereas getFromPool adds its tasks synchronously.
class Pool<T> {
    private var data = [T]();
    // Create a serial dispatch queue
    private let arrayQ = dispatch_queue_create("arrayQ", DISPATCH_QUEUE_SERIAL);
    private let semaphore:dispatch_semaphore_t;

    init(items:[T]) {
        data.reserveCapacity(items.count);
        for item in items {
            data.append(item);
        }
        semaphore = dispatch_semaphore_create(items.count);
    }

    func getFromPool() -> T? {
        var result:T?;
        if (dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER) == 0) {
            dispatch_sync(arrayQ, {() in
                result = self.data.removeAtIndex(0);
            })
        }
        return result;
    }

    func returnToPool(item:T) {
        dispatch_async(arrayQ, {() in
            self.data.append(item);
            dispatch_semaphore_signal(self.semaphore);
        });
    }
}
Because there's no need to make the caller of returnToPool() block. It could perhaps continue on doing other useful work.
The thread which called returnToPool() is presumably not just working with this pool. It presumably has other stuff it could be doing. That stuff could be done simultaneously with the work in the asynchronously-submitted task.
Typical modern computers have multiple CPU cores, so a design like this improves the chances that CPU cores are utilized efficiently and useful work is completed sooner. The question isn't whether tasks submitted to the serial queue operate simultaneously — they can't because of the nature of serial queues — it's whether other work can be done simultaneously.
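To illustrate, here is a minimal sketch (Swift 3 syntax; expensiveUpdate() and doOtherUsefulWork() are hypothetical stand-ins, not from the question): the caller returns from the async submission immediately and can keep working while the serial queue drains on its own time.

import Dispatch

// Hypothetical stand-ins for real work.
func expensiveUpdate() { /* e.g. mutate some shared state */ }
func doOtherUsefulWork() { /* e.g. keep processing */ }

let serialQ = DispatchQueue(label: "serialQ")

serialQ.async {
    expensiveUpdate()   // runs later, serialized with the queue's other tasks
}
doOtherUsefulWork()     // runs right away, possibly on another core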
Yes, there are reasons why you'd add tasks to a serial queue asynchronously. It's actually extremely common.
The most common example would be when you're doing something in the background and want to update the UI. You'll often dispatch that UI update asynchronously back to the main queue (which is a serial queue). That way the background thread doesn't have to wait for the main thread to perform its UI update, but rather it can carry on processing in the background.
Another common example is, as you've demonstrated, when using a GCD queue to synchronize interaction with some object. If you're dealing with mutable state, you can dispatch updates asynchronously to this synchronization queue (i.e. why make the current thread wait? let it carry on instead). You'll do reads synchronously (because you're obviously going to wait until you get the synchronized value back), but writes can be done asynchronously.
(You actually see this latter example frequently implemented with the "reader-writer" pattern and a custom concurrent queue, where reads are performed synchronously on the concurrent queue with dispatch_sync, but writes are performed asynchronously with a barrier using dispatch_barrier_async. But the idea is equally applicable to serial queues, too.)
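As a sketch of that reader-writer idea (Swift 3 syntax; the class name, queue label, and API are illustrative, not from any particular library):

import Dispatch

// Reads run concurrently and synchronously; writes run asynchronously
// behind a barrier, so they are exclusive with respect to everything else.
class SynchronizedBox<T> {
    private let queue = DispatchQueue(label: "sync.box", attributes: .concurrent)
    private var _value: T

    init(_ value: T) {
        _value = value
    }

    var value: T {
        // Synchronous read: the caller waits for the answer.
        return queue.sync { _value }
    }

    func update(_ newValue: T) {
        // Asynchronous barrier write: the caller carries on immediately;
        // the barrier waits for in-flight reads and blocks later ones.
        queue.async(flags: .barrier) {
            self._value = newValue
        }
    }
}

The same shape works with a plain serial queue; the barrier flag only matters on a concurrent one.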
The choice of synchronous vs. asynchronous dispatch has nothing to do with whether the destination queue is serial or concurrent. It's simply a question of whether you have to block the current thread until that other queue finishes its task or not.
Regarding your sample code, it is correct. The getFromPool should dispatch synchronously (because you have to wait for the synchronization queue to actually return the value), but returnToPool can safely dispatch asynchronously. Obviously, I'm wary of seeing code wait on semaphores if it might be called from the main thread (so make sure you don't call getFromPool from the main thread!), but with that one caveat, this code should achieve the desired purpose: reasonably efficient synchronization of this pool object, with a getFromPool that will block when the pool is empty until something is added to the pool.
Related
I am doing an I/O wait operation inside a for loop. When all of the operations terminate, I want to send the response to the server. Now I was just wondering: suppose two I/O operations terminate at exactly the same time; can they execute code at the same time (in parallel), or will they execute serially?
As far as I know, Node is concurrent but not parallel, so I don't think they will execute at the same time.
node.js runs Javascript with a single thread. That means that two pieces of Javascript can never be running at the exact same moment.
node.js processes I/O completion using an event queue. That means when an I/O operation completes, it places an event in the event queue and when that event gets to the front of the event queue and the JS interpreter has finished whatever else it was doing, then it will pull that event from the event queue and call the callback associated with it.
Because of this process, even if two I/O operations finish at essentially the same moment, one of them will place its completion event into the event queue before the other (internal access to the event queue is likely controlled by a mutex, so one will get the mutex before the other), and that one's completion callback will be called before the other's. The two completion callbacks will never run at the exact same time.
Keep in mind that more than one piece of Javascript can be "in flight" or "in process" at the same time if it contains non-blocking I/O operations or other asynchronous operations. This is because when you "wait" for an asynchronous operation to complete in Javascript, you return control back to the system, and you resume processing only when your completion callback is called. While the JS interpreter is waiting for an asynchronous I/O operation to complete and the associated callback to be called, other Javascript can run. But there's still only one piece of Javascript actually running at any given time.
As far as I know, Node is concurrent but not parallel, so I don't think they will execute at the same time.
Yes, that's correct. That's not exactly how I'd describe it since "concurrent" and "parallel" don't have strict technical definitions, but based on what I think you mean by them, that is correct.
You can use Promise.all:
let promises = [];
for (...) {
    promises.push(somePromise); // somePromise represents your IO operation
}
Promise.all(promises).then((results) => {
    // here you send the response
});
You don't have to worry about the execution order.
Node.js is designed to be single-threaded. So basically there is no way that "two IO operations terminate exactly at the same time" could be observed by your code. Their callbacks will just run one by one.
Vert.x has many thread pools: eventLoopGroup, acceptorEventLoopGroup, internalBlockingPool, and workerPool.
Why does it need so many?
Reading a file with FileSystem will use the internalBlockingPool, but executeBlocking, as in the code below, will use the workerPool.
And in this code, why does the resultHandler execute on an event loop thread and not on the worker pool?
vertx.executeBlocking(future -> {
    System.out.println(Thread.currentThread().getName());
    future.complete();
}, r -> {
    System.out.println(Thread.currentThread().getName());
});
In my understanding, an event loop is just a single thread in an endless loop over a channel. If a task has nothing to do with the network, there should be no need to use the eventLoopGroup.
How should I understand events in Vert.x? Can you give some Vert.x code, not Netty code?
Event loops: there can be more than one event loop thread, and there typically will be (it depends on your number of cores). For example, if you start N instances of a verticle, you will want them spread across multiple cores using multiple event loops. In the docs, look up the multi-reactor pattern.
Vert.x works differently here. Instead of a single event loop, each
Vertx instance maintains several event loops. By default we choose the
number based on the number of available cores on the machine, but this
can be overridden.
http://vertx.io/docs/vertx-core/java/#_reactor_and_multi_reactor
Regarding your question about the result handler: the executeBlocking function will run on a worker thread, but once it is done, the result handler is pushed over to the event loop thread to finish. This behavior helps keep certain logic on the event loop thread.
Regarding the other thread groups, they just handle specific functionality in Vert.x. If you are concerned about the number of threads in Vert.x, I would not worry about it. Vert.x does a good job of keeping the OS threads to a minimum while maintaining high functionality and throughput.
I sort of understand threads, correct me if I'm wrong.
Is a single thread allocated to a piece of code until that code has completed?
Are the threads prioritised to whichever piece of code is run first?
What is the difference between main queue and thread?
My most important question:
Can threads run at the same time? If so, how can I specify which parts of my code should run on a selected thread?
Let me start this way. Unless you are writing a special kind of application (and you will know if you are), forget about threads. Working with threads is complex and tricky. Use dispatch queues… it's simpler and easier.
Dispatch queues run tasks. Tasks are closures (blocks) or functions. When you need to run a task off the main dispatch queue, you call one of the dispatch_ functions, the primary one being dispatch_async(). When you call dispatch_async(), you need to specify which queue to run the task on. To get a queue, you call dispatch_queue_create() or one of the dispatch_get_ functions, the primary one being dispatch_get_global_queue().
NOTE: Swift 3 changed this from a function model to an object model. The dispatch_ functions are instance methods of DispatchQueue, and the dispatch_get_ functions become class methods/properties of DispatchQueue.
// Swift 3
DispatchQueue.global(qos: .background).async {
    let calculation = arc4random()
}

// Swift 2
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0)) {
    let calculation = arc4random()
}
The trouble here is that any and all tasks which update the UI must run on the main thread. This is usually done by calling dispatch_async() on the main queue (dispatch_get_main_queue()).
// Swift 3
DispatchQueue.global(qos: .background).async {
    let calculation = arc4random()
    DispatchQueue.main.async {
        print("\(calculation)")
    }
}

// Swift 2
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0)) {
    let calculation = arc4random()
    dispatch_async(dispatch_get_main_queue()) {
        print("\(calculation)")
    }
}
The gory details are messy. To keep it simple, dispatch queues manage thread pools. It is up to the dispatch queue to create, run, and eventually dispose of threads. The main queue is a special queue which has only 1 thread. The operating system is tasked with assigning threads to a processor and executing the task running on the thread.
With all that out of the way, now I will answer your questions.
Is a single thread allocated to a piece of code until that code has completed?
A task will run in a single thread.
Are the threads prioritised to whichever piece of code is run first?
Tasks are assigned to a thread. A task will not change which thread it runs on. If a task needs to run in another thread, then it creates a new task and assigns that new task to the other thread.
What is the difference between main queue and thread?
The main queue is a dispatch queue which has 1 thread. This single thread is also known as the main thread.
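A quick way to observe this in an app (a sketch; Thread.isMainThread is Foundation API):

import Foundation

// Work on the main queue always runs on the single main thread.
DispatchQueue.main.async {
    print(Thread.isMainThread)   // true
}
DispatchQueue.global().async {
    print(Thread.isMainThread)   // false
}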
Can threads run at the same time?
Threads are assigned to execute on processors by the operating system. If your device has multiple processors (they all do nowadays), then multiple threads can be executing at the same time.
If so, how can I specify which parts of my code should run on a selected thread?
Break your code into tasks. Dispatch the tasks on a dispatch queue, as in the sketch below.
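For example, a minimal sketch (Swift 3 syntax; loadData() and updateUI(with:) are hypothetical stand-ins for your own code):

import Dispatch

// Hypothetical stand-ins for real application code.
func loadData() -> [Int] { return Array(1...100) }
func updateUI(with data: [Int]) { print("loaded \(data.count) items") }

let workerQueue = DispatchQueue(label: "com.example.worker")

workerQueue.async {
    // Task 1: long-running work off the main thread.
    let data = loadData()
    DispatchQueue.main.async {
        // Task 2: UI work back on the main thread.
        updateUI(with: data)
    }
}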
You can find here a very good explanation of what a race condition is.
I have seen recently many people making confusing statements about race conditions and threads.
I have learned that race conditions can only occur between threads. But then I saw code that looked like race conditions in event-based and asynchronous languages, even though the program was single-threaded, as in Node.js, GTK+, etc.
Can we have a race condition in a single thread program?
All examples are in a fictional language very close to Javascript.
In short:
A race condition can only occur between two or more threads or with external state (one of which can be the OS). We cannot have race conditions inside a single-threaded process that does no I/O.
But a single-threaded program can in many cases:
- give rise to situations which look similar to race conditions, as in event-based programs with an event loop, but which are not real race conditions;
- trigger a race condition between or with other thread(s), because the execution of some parts of the program depends on external state, for example:
  - other programs, like clients
  - library threads or servers
  - the system clock
I) Race conditions can only occur with two or more threads
A race condition can only occur when two or more threads try to access a shared resource without knowing that it may be modified at the same time by unknown instructions from the other thread(s). This gives an undetermined result. (This is really important.)
A single-threaded process is nothing more than a sequence of known instructions, which therefore produces a determined result, even if the execution order of the instructions is not easy to read in the code.
II) But we are not safe
II.1) Situations similar to race conditions
Many programming languages implement asynchronous programming features through events or signals, handled by a main loop or event loop which checks the event queue and triggers the listeners. Examples of this are Javascript, libevent, ReactPHP, GNOME GLib... Sometimes we can find situations which seem to be race conditions, but they are not.
The way the event loop is called is always known, so the result is determined, even if the execution order of the instructions is not easy to read (or even cannot be read, if we do not know the library).
Example:
setTimeout(
    function() { console.log("EVENT LOOP CALLED"); },
    1
); // We want to print EVENT LOOP CALLED after 1 millisecond

var now = new Date();
while (new Date() - now < 10) {} // We busy-wait for 10 milliseconds
console.log("EVENT LOOP NOT CALLED");
In Javascript the output is always (you can test it in node.js):
EVENT LOOP NOT CALLED
EVENT LOOP CALLED
because the event loop is called only when the stack is empty (all functions have returned).
Be aware that this is just an example and that in languages that implements events in a different way, the result might be different, but it would still be determined by the implementation.
II.2) Race condition between other threads, for example:
II.2.i) With other programs like clients
If other processes send requests to our process, our program does not handle requests in an atomic way, and our process shares some resources between the requests, there might be a race condition between clients.
Example:
var step;

on('requestOpen')(
    function() {
        step = 0;
    }
);

on('requestData')(
    function() {
        step = step + 1;
    }
);

on('requestEnd')(
    function() {
        step = step + 1; // step should be 2 after that
        sendResponse(step);
    }
);
Here we have a classical race condition setup. If a request is opened just before another ends, step will be reset to 0. If two requestData events are triggered before the requestEnd because of two concurrent requests, step will reach 3. But this is only because we treat the sequence of events as undetermined, and with undetermined input we expect the result of a program to be undetermined most of the time.
In fact, if our program is single-threaded, then given a sequence of events the result is still always determined. The race condition is between the clients.
There are two ways to look at this:
We can consider clients as part of our program (why not?), and in this case our program is multi-threaded. End of story.
More commonly, we consider that clients are not part of our program. In this case they are just input, and when we ask whether a program has a determined result, we ask it for a given input. Otherwise even the simplest program, return input;, would have an undetermined result.
Note that:
- if our process treats each request in an atomic way, it is as if there were a mutex between clients, and there is no race condition;
- if we can identify requests and attach the variable to a request object that is the same at every step of the request, there is no resource shared between clients and no race condition.
II.2.ii) With library thread(s)
In our programs, we often use libraries which spawn other processes or threads, or which simply do I/O with other processes (and I/O is always undetermined).
Example:
databaseClient.sendRequest('add Me to the database');
databaseClient.sendRequest('remove Me from the database');
This can trigger a race condition with an asynchronous library. That is the case if sendRequest() returns after sending the request to the database, but before the request is actually executed. We immediately send another request, and we cannot know whether the first will be executed before the second is evaluated, because the database works in another thread. There is a race condition between the program and the database process.
But if the database were on the same thread as the program (which in real life does not happen often), it would be impossible for sendRequest to return before the request is processed. (Unless the request is queued, but in that case the result is still determined, since we know exactly how and when the queue is read.)
II.2.iii) With the system clock
@Mingwei Samuel's answer gives an example of a race condition in a single-threaded JS program, between two setTimeout callbacks. Actually, once both setTimeouts have been called, the execution order is already determined. That order depends on the state of the system clock (so, an external thread) at the time of the setTimeout calls.
Conclusion
In short, single-threaded programs are not free from triggering race conditions. But these can only occur with, or between, other threads or external programs. The result of our program might then be undetermined, because the input our program receives from those other programs is undetermined.
Race conditions can occur with any system that has concurrently executing processes that create state changes in external processes, examples of which include :
multithreading,
event loops,
multiprocessing,
instruction level parallelism where out-of-order execution of instructions has to take care to avoid race conditions,
circuit design,
dating (romance),
real races in e.g. the olympic games.
Yes.
A "race condition" is a situation when the result of a program can change depending on the order operations are run (threads, async tasks, individual instructions, etc).
For example, in Javascript:
setTimeout(() => console.log("Hello"), 10);
setTimeout(() => setTimeout(() => console.log("World"), 4), 4);
// One run prints:
// Hello
// World

setTimeout(() => console.log("Hello"), 10);
setTimeout(() => setTimeout(() => console.log("World"), 4), 4);
// Another run prints:
// World
// Hello
So clearly this code depends on how the JS event loop works, how tasks are ordered/chosen, what other events occurred during execution, and even how your operating system chose to schedule the JS runtime process.
This is contrived, but a real program could have a situation where "Hello" needs to be run before "World", which could result in some nasty non-deterministic bugs. How people could consider this not a "real" race condition, I'm not sure.
Data Races
It is not possible to have data races in single-threaded code.
A "data race" is multiple threads accessing a shared resource at the same time in an inconstant way, or specifically for memory: multiple threads accessing the same memory, where one (or more) is writing. Of course, with a single thread this is not possible.
This seems to be what @jillro's answer is talking about.
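For illustration, here is a minimal sketch of a data race in a multi-threaded setting (Swift/GCD, as used elsewhere on this page; the numbers are arbitrary):

import Dispatch

// Two concurrent tasks perform an unsynchronized read-modify-write
// on the same variable: a textbook data race.
var counter = 0
let group = DispatchGroup()

for _ in 0..<2 {
    DispatchQueue.global().async(group: group) {
        for _ in 0..<100_000 {
            counter += 1   // racy: load, add, store with no synchronization
        }
    }
}

group.wait()
print(counter)   // frequently less than 200000: some increments were lost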
Note: the exact definitions of "race condition" and "data race" are not agreed upon. But if it looks like a race condition, acts like a race condition, and causes nasty non-deterministic bugs like a race condition, then I think it should be called a race condition.
I am trying to model a system where there are multiple threads producing data, and a single thread consuming the data. The trick is that I don't want a dedicated thread to consume the data because all of the threads live in a pool. Instead, I want one of the producers to empty the queue when there is work, and yield if another producer is already clearing the queue.
The basic idea is that there is a queue of work, and a lock around the processing. Each producer pushes its payload onto the queue, and then attempts to enter the lock. The attempt is non-blocking and returns either true (the lock was acquired), or false (the lock is held by someone else).
If the lock is acquired, then that thread then processes all of the data in the queue until it is empty (including any new payloads introduced by other producers during processing). Once all of the work has been processed, the thread releases the lock and quits out.
The following is C++ code for the algorithm:
void Process(ITask *task) {
    // queue is a thread safe implementation of a regular queue
    queue.push(task);

    // crit_sec is some handle to a critical section like object.
    // try_scoped_lock uses RAII to attempt to acquire the lock in the
    // constructor; if the lock was acquired, it will release the lock
    // in the destructor.
    try_scoped_lock lock(crit_sec);

    // See if this thread won the lottery. Prize is doing all of the dishes
    if (!lock.Acquired())
        return;

    // This thread got the lock, so it needs to do the work
    ITask *currTask;
    while (queue.try_pop(currTask)) {
        ... execute task ...
    }
}
In general this code works fine, and I have never actually witnessed the behavior I am about to describe below, but that implementation makes me feel uneasy. It stands to reason that a race condition is introduced between when the thread exits the while loop and when it releases the critical section.
The whole algorithm relies on the assumption that if the lock is being held, then a thread is servicing the queue.
I am essentially looking for enlightenment on 2 questions:
Am I correct that there is a race condition as described (bonus for other races)
Is there a standard pattern for implementing this mechanism that is performant and doesn't introduce race conditions?
Yes, there is a race condition.
Thread A adds a task, gets the lock, processes its own task, then asks for another task from the queue. It is rejected, since the queue is now empty.
Thread B at this point adds a task to the queue. It then attempts to get the lock, and fails, because thread A has the lock. Thread B exits.
Thread A then exits, with the queue non-empty, and nobody processing the task on it.
This will be difficult to observe, because that window is relatively narrow. To make it more likely to show up, after the while loop introduce a "sleep for 10 seconds". In the calling code, insert a task, wait 5 seconds, then insert a second task. After 10 more seconds, check that both insert calls have finished and that there is still an unprocessed task on the queue.
One way to fix this would be to change try_pop to try_pop_or_unlock, and pass your lock into it. try_pop_or_unlock then atomically checks for an empty queue and, if it is empty, unlocks the lock and returns false.
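Here is a minimal sketch of that idea in Swift rather than C++ (NSLock stands in for the critical section; all names are illustrative, not from the question's codebase). The key point is that the emptiness check and the unlock happen under the queue's own mutex, so no producer can slip a task in between them:

import Foundation

final class WorkQueue {
    private let mutex = NSLock()        // protects `tasks`
    private let processing = NSLock()   // the "crit_sec" from the question
    private var tasks: [() -> Void] = []

    // Atomically: pop a task if one is available; otherwise release the
    // processing lock while still holding the queue mutex, so no producer
    // can enqueue between the emptiness check and the unlock.
    private func popOrUnlock() -> (() -> Void)? {
        mutex.lock()
        defer { mutex.unlock() }
        if tasks.isEmpty {
            processing.unlock()
            return nil
        }
        return tasks.removeFirst()
    }

    func process(_ task: @escaping () -> Void) {
        mutex.lock()
        tasks.append(task)
        mutex.unlock()

        // Non-blocking attempt, like try_scoped_lock in the question.
        guard processing.try() else { return }

        // Drain until popOrUnlock releases the processing lock for us.
        while let next = popOrUnlock() {
            next()
        }
    }
}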
Another approach is to improve the thread pool. Add a counting semaphore based "consume" task launcher to it.
semaphore_bool bTaskActive;
counting_semaphore counter;

when (counter || !bTaskActive)
    if (bTaskActive)
        return
    bTaskActive = true
    --counter
    launch_task( process_one_off_queue, when_done( [&]{ bTaskActive = false; } ) );
When the counting semaphore is active, or when poked by the finished consume task, it launches a consume task if there is no consume task active.
But that is just off the top of my head.