Trying to understand the difference between the TPL and async/await when it comes to thread creation.
I believe the TPL (TaskFactory.StartNew) works similarly to ThreadPool.QueueUserWorkItem in that it queues up work on a thread pool thread. That is, of course, unless you use TaskCreationOptions.LongRunning, which creates a new thread.
I thought async/await would work similarly so essentially:
TPL:
Factory.StartNew(() => DoSomeAsyncWork())
    .ContinueWith(
        (antecedent) => {
            DoSomeWorkAfter();
        }, TaskScheduler.FromCurrentSynchronizationContext());
Async/Await:
await DoSomeAsyncWork();
DoSomeWorkAfter();
would be identical. From what I've been reading, it seems like async/await only "sometimes" creates a new thread. So when does it create a new thread, and when doesn't it? If you were dealing with IO completion ports I can see it not having to create a new thread, but otherwise I would think it would have to. I guess my understanding of FromCurrentSynchronizationContext was always a bit fuzzy too. I always thought it was, in essence, the UI thread.
I believe the TPL (TaskFactory.StartNew) works similarly to ThreadPool.QueueUserWorkItem in that it queues up work on a thread pool thread.
Pretty much.
From what I've been reading, it seems like async/await only "sometimes" creates a new thread.
Actually, it never does. If you want multithreading, you have to implement it yourself. There's a new Task.Run method that is just shorthand for Task.Factory.StartNew, and it's probably the most common way of starting a task on the thread pool.
If you were dealing with IO completion ports I can see it not having to create a new thread, but otherwise I would think it would have to.
Bingo. So methods like Stream.ReadAsync will actually create a Task wrapper around an IOCP (if the Stream has an IOCP).
You can also create some non-I/O, non-CPU "tasks". A simple example is Task.Delay, which returns a task that completes after some time period.
The cool thing about async/await is that you can queue some work to the thread pool (e.g., Task.Run), do some I/O-bound operation (e.g., Stream.ReadAsync), and do some other operation (e.g., Task.Delay)... and they're all tasks! They can be awaited or used in combinations like Task.WhenAll.
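For example, here's a quick sketch of how those compose (DoSomeCpuWork is a hypothetical CPU-bound method, not from the original question):

async Task CombinedAsync(Stream stream, byte[] buffer)
{
    Task cpu = Task.Run(() => DoSomeCpuWork());                // thread pool work
    Task<int> io = stream.ReadAsync(buffer, 0, buffer.Length); // IOCP-based I/O
    Task timer = Task.Delay(1000);                             // no thread at all
    await Task.WhenAll(cpu, io, timer);                        // compose them all
}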
Any method that returns Task can be awaited - it doesn't have to be an async method. So Task.Delay and I/O-bound operations just use TaskCompletionSource to create and complete a task - the only thing being done on the thread pool is the actual task completion when the event occurs (timeout, I/O completion, etc).
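Here's a minimal sketch of that pattern: a Task.Delay-like method built on TaskCompletionSource and a timer. No thread is consumed while waiting; the timer callback (which runs only briefly) just completes the task. The real Task.Delay also handles cancellation and edge cases this sketch ignores.

static Task MyDelay(int milliseconds)
{
    var tcs = new TaskCompletionSource<object>();
    var timer = new System.Threading.Timer(
        _ => tcs.TrySetResult(null),                 // complete the task when the timer fires
        null, milliseconds, System.Threading.Timeout.Infinite);
    tcs.Task.ContinueWith(_ => timer.Dispose());     // clean up the timer afterwards
    return tcs.Task;
}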
I guess my understanding of FromCurrentSynchronizationContext was always a bit fuzzy too. I always thought it was, in essence, the UI thread.
I wrote an article on SynchronizationContext. Most of the time, SynchronizationContext.Current:
is a UI context if the current thread is a UI thread.
is an ASP.NET request context if the current thread is servicing an ASP.NET request.
is a thread pool context otherwise.
Any thread can set its own SynchronizationContext, so there are exceptions to the rules above.
Note that the default Task awaiter will schedule the remainder of the async method on the current SynchronizationContext if it is not null; otherwise it goes on the current TaskScheduler. This isn't so important today, but in the near future it will be an important distinction.
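A small illustration of that scheduling rule (a sketch for a UI app, not code from the question):

async Task ClickHandlerAsync()
{
    await Task.Delay(1000);
    // Default: resumes on the captured SynchronizationContext (the UI thread),
    // so it's safe to touch controls here.

    await Task.Delay(1000).ConfigureAwait(false);
    // Opted out: resumes on a thread pool thread; don't touch the UI here.
}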
I wrote my own async/await intro on my blog, and Stephen Toub recently posted an excellent async/await FAQ.
Regarding "concurrency" vs "multithreading", see this related SO question. I would say async enables concurrency, which may or may not be multithreaded. It's easy to use await Task.WhenAll or await Task.WhenAny to do concurrent processing, and unless you explicitly use the thread pool (e.g., Task.Run or ConfigureAwait(false)), then you can have multiple concurrent operations in progress at the same time (e.g., multiple I/O or other types like Delay) - and there is no thread needed for them. I use the term "single-threaded concurrency" for this kind of scenario, though in an ASP.NET host, you can actually end up with "zero-threaded concurrency". Which is pretty sweet.
async/await basically simplifies the ContinueWith methods (continuations in continuation-passing style).
It does not introduce concurrency - you still have to do that yourself (or use the Async version of a framework method).
So, the C# 5 version would be:
await Task.Run(() => DoSomeAsyncWork());
DoSomeWorkAfter();
Is there ever any reason to add blocks to a serial dispatch queue asynchronously as opposed to synchronously?
As I understand it, a serial dispatch queue only starts executing the next task in the queue once the preceding task has finished executing. If this is the case, I can't see what you would gain by submitting some blocks asynchronously - the act of submission may not block the thread (since it returns straight away), but the task won't be executed until the last task finishes, so it seems to me that you don't really gain anything.
This question has been prompted by the following code - taken from a book chapter on design patterns. To prevent the underlying data array from being modified simultaneously by two separate threads, all modification tasks are added to a serial dispatch queue. But note that returnToPool adds tasks to this queue asynchronously, whereas getFromPool adds its tasks synchronously.
class Pool<T> {
    private var data = [T]();
    // Create a serial dispatch queue
    private let arrayQ = dispatch_queue_create("arrayQ", DISPATCH_QUEUE_SERIAL);
    private let semaphore: dispatch_semaphore_t;

    init(items: [T]) {
        data.reserveCapacity(items.count);
        for item in items {
            data.append(item);
        }
        semaphore = dispatch_semaphore_create(items.count);
    }

    func getFromPool() -> T? {
        var result: T?;
        if (dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER) == 0) {
            dispatch_sync(arrayQ, {() in
                result = self.data.removeAtIndex(0);
            })
        }
        return result;
    }

    func returnToPool(item: T) {
        dispatch_async(arrayQ, {() in
            self.data.append(item);
            dispatch_semaphore_signal(self.semaphore);
        });
    }
}
Because there's no need to make the caller of returnToPool() block. It could perhaps continue on doing other useful work.
The thread which called returnToPool() is presumably not just working with this pool. It presumably has other stuff it could be doing. That stuff could be done simultaneously with the work in the asynchronously-submitted task.
Typical modern computers have multiple CPU cores, so a design like this improves the chances that CPU cores are utilized efficiently and useful work is completed sooner. The question isn't whether tasks submitted to the serial queue operate simultaneously — they can't because of the nature of serial queues — it's whether other work can be done simultaneously.
Yes, there are reasons why you'd add tasks to a serial queue asynchronously. It's actually extremely common.
The most common example would be when you're doing something in the background and want to update the UI. You'll often dispatch that UI update asynchronously back to the main queue (which is a serial queue). That way the background thread doesn't have to wait for the main thread to perform its UI update, but rather it can carry on processing in the background.
Another common example is, as you've demonstrated, when using a GCD queue to synchronize interaction with some object. If you're dealing with a mutable object, you can dispatch updates asynchronously to this synchronization queue (i.e., rather than making the current thread wait, let it carry on). You'll do reads synchronously (because you're obviously going to wait until you get the synchronized value back), but writes can be done asynchronously.
(You actually see this latter example frequently implemented with the "reader-writer" pattern and a custom concurrent queue, where reads are performed synchronously on concurrent queue with dispatch_sync, but writes are performed asynchronously with barrier with dispatch_barrier_async. But the idea is equally applicable to serial queues, too.)
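For illustration, here is a minimal sketch of that reader-writer idea, using the same pre-Swift-3 GCD API as the code above (the type and its members are made up for the example):

class SynchronizedDictionary<K: Hashable, V> {
    private var storage = [K: V]();
    // A concurrent queue: reads run in parallel, writes take a barrier
    private let queue = dispatch_queue_create("syncQ", DISPATCH_QUEUE_CONCURRENT);

    func valueForKey(key: K) -> V? {
        var result: V?;
        dispatch_sync(queue, {() in            // concurrent, blocking read
            result = self.storage[key];
        })
        return result;
    }

    func setValue(value: V, forKey key: K) {
        dispatch_barrier_async(queue, {() in   // exclusive, non-blocking write
            self.storage[key] = value;
        });
    }
}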
The choice of synchronous vs. asynchronous dispatch has nothing to do with whether the destination queue is serial or concurrent. It's simply a question of whether you have to block the current thread until the other task finishes or not.
Regarding your sample code, that is correct. getFromPool should dispatch synchronously (because you have to wait for the synchronization queue to actually return the value), but returnToPool can safely dispatch asynchronously. Obviously, I'm wary of seeing code wait on semaphores if it might be called from the main thread (so make sure you don't call getFromPool from the main thread!), but with that one caveat, this code should achieve the desired purpose, offering reasonably efficient synchronization of this pool object, while letting getFromPool block when the pool is empty until something is returned to it.
I am implementing a REST service for financial calculations. Each request is a CPU-intensive task, and I think that the best place to create threads is in the following function:
exports.execute = function(data, params, f, callback) {
    var queriesList = [];
    var resultList = [];

    for (var i = 0; i < data.lista.length; i++) {
        var query = (function(cod) {
            return function(callbackFlow) {
                params.paramcodneg = cod;

                doCdaQuery(params, function(err, result) {
                    if (err) {
                        return callback({ERROR: err}, null);
                    }

                    f(data, result, function(ret) {
                        resultList.push(ret);
                        callbackFlow();
                    });
                });
            };
        })(data.lista[i]);

        queriesList.push(query);
    }

    flow.parallel(queriesList, function() {
        callback(null, resultList);
    });
};
I don't know what is best: run flow.parallel in a separate thread, or run each function of the queriesList in its own thread. Which is better? And how do I use the threads-a-gogo module for that?
I've tried but couldn't write the right code for that.
Thanks in advance.
Kleyson Rios.
I'll admit that I'm relatively new to node.js and I haven't yet used threads a gogo, but I have had some experience with multi-threaded programming, so I'll take a crack at answering this question.
Creating a thread for every single query (I'm assuming these queries are CPU-bound calculations rather than IO-bound calls to a database) is not a good idea. Creating and destroying threads is an expensive operation, so creating and destroying a group of threads for every request that requires calculation is going to be a huge drag on performance. Too many threads also cause more overhead as the processor switches between them. There isn't any advantage to having more worker threads than processor cores.
Also, if each query doesn't take that much processing time, there will be more time spent creating and destroying the thread than running the query. Most of the time would be spent on threading overhead. In this case, you would be much better off using a single-threaded solution using flow or async, which distributes the processing over multiple ticks to allow the node.js event loop to run.
Single-threaded solutions are the easiest to understand and debug, but if the queries are preventing the main thread from getting other stuff done, then a multi-threaded solution is necessary.
The multi-threaded solution you propose is pretty good. Running all the queries in a separate thread prevents the main thread from bogging down. However, there isn't any point in using flow or async in this case. These modules simulate multi-threading by distributing the processing over multiple Node.js ticks, and tasks run "in parallel" this way don't execute in any particular order - but they still run on a single thread. Since you're processing the queries in their own thread, and they're no longer interfering with the Node.js event loop, just run them one after another in a loop. Since all the action is happening in a thread without a Node.js event loop, using flow or async there just introduces more overhead for no additional benefit.
A more efficient solution is to have a thread pool hanging out in the background and throw tasks at it. The thread pool would ideally have the same number of threads as processor cores, and would be created when the application starts up and destroyed when the application shuts down, so the expensive creating and destroying of threads only happens once. I see that Threads a Gogo has a thread pool that you can use, although I'm afraid I'm not yet familiar enough with it to give you all the details of using it.
I'm drifting into territory I'm not familiar with here, but I believe you could do it by pushing each query individually onto the global thread pool and when all the callbacks have completed, you'll be done.
The Node.flow module would be handy here, not because it would make processing any faster, but because it would help you manage all the query tasks and their callbacks. You would use a loop to push a bunch of parallel tasks on the flow stack using flow.parallel(...), where each task would send a query to the global threadpool using threadpool.any.eval(), and then call ready() in the threadpool callback to tell flow that the task is complete. After the parallel tasks have been queued up, use flow.join() to run all the tasks. That should run the queries on the thread pool, with the thread pool running as many tasks as it can at once, using all the cores and avoiding creating or destroying threads, and all the queries will have been processed.
Other requests would also be tossing their tasks onto the thread pool as well, but you wouldn't notice that because the request being processed would only get callbacks for the tasks that the request gave to the thread pool. Note that this would all be done on the main thread. The thread pool would do all the non-main-thread processing.
You'll need to do some threads a gogo and node.flow documentation reading and figure out some of the details, but that should give you a head start. Using a separate thread is more complex than using the main thread, and making use of a thread pool is even more complex, so you'll have to choose which one is best for you. The extra complexity might or might not be worth it.
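To give you a more concrete head start, here's a rough, untested sketch of the thread-pool approach. It manages completion with a plain counter instead of Node.flow; createPool and pool.any.eval are from the threads_a_gogo README, and heavyCalculation is a hypothetical self-contained function that the workers would have been given beforehand via pool.load():

var threads = require('threads_a_gogo');
var pool = threads.createPool(require('os').cpus().length); // one thread per core
// pool.load('./calculation.js'); // makes heavyCalculation available in the workers

exports.execute = function(data, params, f, callback) {
    var resultList = [];
    var pending = data.lista.length;

    data.lista.forEach(function(cod) {
        // Hand the CPU-bound work to any free pooled thread
        pool.any.eval('heavyCalculation(' + JSON.stringify(cod) + ')', function(err, result) {
            if (err) {
                return callback({ERROR: err}, null);
            }
            resultList.push(result);
            if (--pending === 0) {
                callback(null, resultList); // all queries done
            }
        });
    });
};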
Let's say I'm getting a (potentially big) list of images to download from some URLs. I'm using Scala, so what I would do is:
import scala.actors.Futures._
// Retrieve URLs from somewhere
val urls: List[String] = ...
// Download image (blocking operation)
val fimages: List[Future[...]] = urls.map (url => future { download url })
// Do something (display) when complete
fimages.foreach (_.foreach (display _))
I'm a bit new to Scala, so this still looks a little like magic to me:
Is this the right way to do it? Any alternatives if it is not?
If I have 100 images to download, will this create 100 threads at once, or will it use a thread pool?
Will the last instruction (display _) be executed on the main thread, and if not, how can I make sure it is?
Thanks for your advice!
Use Futures in Scala 2.10. They were joint work between the Scala team, the Akka team, and Twitter to reach a more standardized future API and implementation for use across frameworks. We just published a guide at: http://docs.scala-lang.org/overviews/core/futures.html
Beyond being completely non-blocking (by default, though we provide the ability to do managed blocking operations) and composable, Scala's 2.10 futures come with an implicit thread pool to execute your tasks on, as well as some utilities to manage time outs.
import scala.concurrent.{ future, blocking, Future, Await }
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
// Retrieve URLs from somewhere
val urls: List[String] = ...
// Download image (blocking operation)
val imagesFuts: List[Future[...]] = urls.map {
  url => future { blocking { download url } }
}
// Do something (display) when complete
val futImages: Future[List[...]] = Future.sequence(imagesFuts)
Await.result(futImages, 10 seconds).foreach(display)
Above, we first import a number of things:
future: API for creating a future.
blocking: API for managed blocking.
Future: Future companion object which contains a number of useful methods for collections of futures.
Await: singleton object used for blocking on a future (transferring its result to the current thread).
ExecutionContext.Implicits.global: the default global thread pool, a ForkJoin pool.
duration._: utilities for managing durations for time outs.
imagesFuts remains largely the same as what you originally did; the only difference here is that we use managed blocking, via blocking. It notifies the thread pool that the block of code passed to it contains long-running or blocking operations. This allows the pool to temporarily spawn new workers to make sure that it never happens that all of the workers are blocked. This is done to prevent starvation (locking up the thread pool) in blocking applications. Note that the thread pool also knows when the code in a managed blocking block is complete, so it will remove the spare worker thread at that point, which means the pool will shrink back down to its expected size.
(If you want to absolutely prevent additional threads from ever being created, then you ought to use an AsyncIO library, such as Java's NIO library.)
Then we use the collection methods of the Future companion object to convert imagesFuts from List[Future[...]] to a Future[List[...]].
The Await object is how we can ensure that display is executed on the calling thread: Await.result simply forces the current thread to wait until the future it is passed is completed. (This uses managed blocking internally.)
val all = Future.traverse(urls) { url =>
  val f = future(download url) /*(downloadContext)*/
  f.onComplete(display)(displayContext)
  f
}
Await.result(all, ...)
Use scala.concurrent.Future in 2.10 (which is RC now); it uses an implicit ExecutionContext.
The new Future doc is explicit that onComplete (and foreach) may evaluate immediately if the value is available. The old actors Future does the same thing. Depending on what your requirement is for display, you can supply a suitable ExecutionContext (for instance, a single thread executor). If you just want the main thread to wait for loading to complete, traverse gives you a future to await on.
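For instance, a single-threaded displayContext could be built like this (a sketch using plain java.util.concurrent executors; the name displayContext is just the one used above):

import java.util.concurrent.Executors
import scala.concurrent.ExecutionContext

// All callbacks scheduled on this context run, in order, on one dedicated thread
val displayContext: ExecutionContext =
  ExecutionContext.fromExecutor(Executors.newSingleThreadExecutor())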
Yes, seems fine to me, but you may want to investigate more powerful twitter-util or Akka Future APIs (Scala 2.10 will have a new Future library in this style).
It uses a thread pool.
No, it won't. You need to use the standard mechanism of your GUI toolkit for this (SwingUtilities.invokeLater for Swing or Display.asyncExec for SWT). E.g.
fimages.foreach (_.foreach (im => SwingUtilities.invokeLater(new Runnable { def run() = display(im) })))
I retrieve data from the Bloomberg API, and am quite surprised by the slowness.
My computation is IO-bound because of this.
Therefore I decided to use an async monad builder to unthrottle it.
Upon running it, the results are not much better, which was to be expected, as I make a call to a function, NextEvent, which is thread-blocking.
let outerloop args dic =
    ...
    let rec innerloop continuetoloop =
        let eventObj = session.NextEvent() // This blocks
        ...

let seqtable = reader.ReadFile(@"C:\homeware\sector.csv", ";".[0], true)
let dic = ConcurrentDictionary<_,_>()
let wf = seqtable |> Seq.mapi (fun i item -> async { outerloop item dic })

wf |> Async.Parallel
   |> Async.RunSynchronously
   |> ignore

printfn "%A" ret
Is there a good way to wrap that blocking call into a non-blocking call?
Also, why is the async framework not creating as many threads as I have requests (like 200)? When I inspect the threads from which I receive values, I see only 4-5 being used.
UPDATE
I found a compelling reason why it will never be possible.
An async operation takes whatever follows the async instruction and schedules it somewhere in the thread pool.
For all practical purposes, as long as async functions are used correctly, that is, always returning to the thread pool they originated from, we can consider that we are executing on a single thread.
Being on a single thread means all that scheduled work will always be executed somewhere later, and a blocking instruction has no way to avoid the fact that, once it eventually runs, it will block the workflow at some point.
Is there a good way to wrap that blocking call into a non-blocking call?
No. You can never wrap blocking calls to make them non-blocking. If you could, async would just be a design pattern rather than a fundamental paradigm shift.
Also, why is the async framework not creating as many threads as I have requests (like 200)?
Async is built upon the thread pool which is designed not to create threads aggressively because they are so expensive. The whole point of (real) async is that it does not require as many threads as there are connections. You can handle 10,000 simultaneous connections with ~30 threads.
You seem to have a complete misunderstanding of what async is and what it is for. I'd suggest buying any of the F# books that cover this topic and reading up on it. In particular, your solution is not asynchronous because you just call your blocking StartGetFieldsValue member from inside your async workflow, defeating the purpose of async. You might as well just do Array.Parallel.map getFieldsValue instead.
Also, you want to use a purely functional API when doing things in parallel rather than mutating a ConcurrentDictionary in-place. So replace req.StartGetFieldsValue ret with
let! result = req.StartGetFieldsValue()
...
return result
and replace ignore with dict.
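Putting those suggestions together, the purely functional shape would look roughly like this (a sketch; getFieldsValueAsync stands in for an async version of the author's request call):

let results =
    seqtable
    |> Seq.map (fun item -> async {
        let! value = getFieldsValueAsync item // hypothetical async call
        return item, value })
    |> Async.Parallel
    |> Async.RunSynchronously
    |> dict // build the dictionary from the results instead of mutating one in place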
Here is a solution I made that seems to work.
Of course it does not use only async, but Async as well.
I define a simple type that has one event, Finished, and one method, AsyncStart, taking the method to run as an argument. It launches the method, then fires the Finished event at the appropriate place (in my case I had to capture the synchronization context, etc.).
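The wrapper type itself isn't shown, but based on that description it presumably looks something like this (a rough sketch with guessed names, omitting the synchronization-context capture; the blocking call still occupies a thread, it's just not the consumer's):

open System.Threading

type BlockingWrapper<'T>() =
    let finished = new Event<'T>()
    member x.Finished = finished.Publish
    member x.AsyncStart (work : unit -> 'T) =
        ThreadPool.QueueUserWorkItem(fun _ ->
            let result = work ()        // the blocking call happens here
            finished.Trigger result     // signal completion to any awaiter
        ) |> ignore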
Then, on the consumer side, I simply use
let! completion = Async.AwaitEvent wrapper.Finished |> Async.StartChild
let! completed = completion
While running this code, on the consumer side I use only async calls, making my code non-blocking. Of course, there has to be some thread somewhere that is blocked, but this happens outside of my main serving loop, which remains reactive and fit.
I am learning F# and am very interested in this language.
I am trying to create async expressions to run asynchronously.
For example:
let prop1 = async {
    for i = 0 to 1000000 do ()
    MessageBox.Show("Done") |> ignore
}

let prop2 = async {
    for i = 0 to 1000000 do ()
    MessageBox.Show("Done2") |> ignore
}
Async.Start(prop1)
Async.Start(prop2)
When I run the program, the thread count of the process increases from 6 to 8. After I close both message boxes, the process does not seem to destroy those created threads; the count is still 8. What happened, or have I misunderstood F# asynchronous workflows?
Thanks for your help.
The threads are taken from a thread pool (which is why there are more threads than actions, incidentally).
The pool exists until the application terminates.
Nothing to worry about.
Edit: For a nice in-depth article on F#, async and the ThreadPool, see http://www.voyce.com/index.php/2011/05/27/fsharp-async-plays-well-with-others/
The runtime might use a thread pool; that is, threads are not destroyed, but wait for further asynchronous tasks. This technique helps the runtime reduce the time needed to start a new async operation, because creating a new thread can consume significant time and resources.
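A tiny experiment (a sketch) makes this visible: run many workflows and print their thread ids. The same few pool-thread ids show up again and again, and those threads linger after the work completes.

open System.Threading

let work n = async {
    // report which pool thread actually ran this workflow
    printfn "task %d ran on thread %d" n Thread.CurrentThread.ManagedThreadId
}

[ for n in 1 .. 20 -> work n ]
|> Async.Parallel
|> Async.RunSynchronously
|> ignore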