Why is ExecutorService much faster than Coroutines in this example? [Solved]

Update:
I made two silly mistakes:
1. I submitted only one task in the executor service example.
2. I forgot to wait for the tasks to finish.
Fixing the test led to all 3 examples having around 190-200 ms/op latency.
I created a benchmark comparison using kotlinx-benchmark (which uses JMH) to compare coroutines and a thread pool when making a blocking call.
My rationale behind this benchmark is:
Coroutines will block the underlying thread when making a blocking call.
A network call is generally blocking.
In an average service, I need to make a million network calls.
In such a scenario, will I get any benefit if I use coroutines?
The benchmark I created simulates the blocking call using Thread.sleep(10) (a 10 ms block), and I need to run 1000 of them. I created 3 examples, with the following results:
Dispatchers.IO
I used Dispatchers.IO, which is the recommended way to handle IO operations.
@Benchmark
fun withCoroutines() {
    runBlocking {
        val coroutines = (0 until 1000).map {
            CoroutineScope(Dispatchers.IO).async {
                sleep(10) // the simulated 10 ms blocking call
            }
        }
        coroutines.joinAll()
    }
}
Avg time: 188.418 ms/op
Fixed Threadpool
Dispatchers.IO created 64 threads (the exact number is not statically deterministic), so I kept 60 threads for a comparable scenario.
@Benchmark
fun withExecutorService() {
    val executors = Executors.newFixedThreadPool(60)
    executors.submit { sleep(10) } // bug (see update): only one task is submitted
    executors.shutdown() // bug (see update): shutdown() does not wait for tasks to finish
}
Avg time: 0.054 ms/op
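For reference, a corrected version matching the fixes described in the update (submit all 1000 tasks, then wait for them) would look roughly like the sketch below; the author's exact fixed code is not shown, and the same imports as above plus java.util.concurrent.TimeUnit are assumed.
@Benchmark
fun withExecutorServiceFixed() {
    val executor = Executors.newFixedThreadPool(60)
    repeat(1000) {
        executor.submit { sleep(10) } // same simulated 10 ms blocking call
    }
    executor.shutdown()
    // Wait for completion so the benchmark measures the full workload.
    executor.awaitTermination(1, TimeUnit.MINUTES)
}
With 1000 blocking tasks of 10 ms each spread over 60 threads, the theoretical floor is ceil(1000/60) x 10 ms, roughly 170 ms, which lines up with the ~190-200 ms/op that all three fixed examples report in the update.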
Threadpool Dispatcher
Since the results were shocking, I decided to use the same thread pool as above for the dispatcher:
Executors.newFixedThreadPool(60).asCoroutineDispatcher()
Avg time: 206.260 ms/op
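The post only shows the dispatcher line, so presumably the benchmark body was otherwise identical to the Dispatchers.IO version with the dispatcher swapped in; a sketch under that assumption:
@Benchmark
fun withThreadPoolDispatcher() {
    val dispatcher = Executors.newFixedThreadPool(60).asCoroutineDispatcher()
    runBlocking {
        (0 until 1000).map {
            CoroutineScope(dispatcher).async { sleep(10) }
        }.joinAll()
    }
    dispatcher.close() // also shuts down the underlying pool
}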
Questions
Why are coroutines performing exceptionally badly here?
With limitedParallelism(10), coroutines performed much better, at 30 ms/op. The default number of threads used by Dispatchers.IO is 64. Does that mean the coroutine scheduler is causing too many context switches, leading to poor performance? Even so, the performance is not close to that of thread pools. (A sketch of the limitedParallelism variant follows these questions.)
Am I correct to assume that network calls are always blocking? Both the executor service and coroutines schedule execution over underlying threads without blocking the main thread, so they are direct competitors.
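For reference, the limitedParallelism variant mentioned above would look something like this (a sketch; limitedParallelism is available from kotlinx-coroutines 1.6 on):
@Benchmark
fun withLimitedParallelism() {
    val limited = Dispatchers.IO.limitedParallelism(10)
    runBlocking {
        (0 until 1000).map {
            async(limited) { sleep(10) }
        }.joinAll()
    }
}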
Notes:
I am running JMH with:
@State(Scope.Benchmark)
@Fork(1)
@Warmup(iterations = 50)
@Measurement(iterations = 5, time = 1000, timeUnit = TimeUnit.MILLISECONDS)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@BenchmarkMode(Mode.AverageTime)
The code can be found here

Related

What would be the right way to go for my scenario: thread array, thread pool, or tasks?

I am working on a small microfinance application that processes financial transactions; the frequency of these transactions is quite high, which is why I am planning to make it a multi-threaded application that can process multiple transactions in parallel.
I have already designed all the workers to be thread safe.
What I need help with is how to manage these threads. Here are some of my options:
1. Make a specified number of thread pool threads at startup and keep them running in an infinite loop, where they keep looking for new transactions and start processing whenever any are found.
example code:
void Start_Job()
{
    for (int l_ThreadId = 0; l_ThreadId < PaymentNoOfWorkerThread; l_ThreadId++)
    {
        // pass the worker's id as its tracking id
        ThreadPool.QueueUserWorkItem(Execute, (object)l_ThreadId);
    }
}
void Execute(object l_TrackingId)
{
    while (true)
    {
        var new_txns = Get_New_Txns(); // get new txns if any; returns a queue
        while (new_txns.Count > 0)
        {
            process_txn(new_txns.Dequeue());
        }
        Thread.Sleep(some_time);
    }
}
2. Look for new transactions and assign a thread pool thread to each transaction (my understanding is that these threads would be reused for new txns after their execution completes).
example code:
void Start_Job()
{
    while (true)
    {
        var new_txns = Get_New_Txns(); // get new txns if any; returns a queue
        for (int l_ThreadId = 0; l_ThreadId < new_txns.Count; l_ThreadId++)
        {
            ThreadPool.QueueUserWorkItem(Execute, (object)new_txns.Dequeue());
        }
        Thread.Sleep(some_time); // moved inside the loop so it is actually reached
    }
}
void Execute(object txn)
{
    process_txn(txn);
}
3. Do the above, but with tasks.
Which option would be most efficient and well suited for my application?
Thanks in advance :)
ThreadPool.QueueUserWorkItem is an older API and you shouldn't be using it directly anymore. Tasks are the way to go, and the thread pool is managed automatically for you.
What may suit your application depends on what happens in process_txn and is subjective, so this is a very generic guideline:
If process_txn is a compute-bound operation, for example it performs only CPU-bound calculations, then you may look at the Task Parallel Library. It will help you use the CPU cores more efficiently.
If process_txn is less CPU-bound and more IO-bound, meaning it may read/write files or databases or connect to some other remote service, then what you should look at is asynchronous programming: make sure your IO operations are all asynchronous, which means your threads are never blocked on IO. This will help your service be more scalable. Also, depending on what your queue is, see if you can await the queue asynchronously, so that none of your application threads are blocked just waiting on the queue. A sketch of that pattern follows.
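To make "await the queue asynchronously" concrete, here is a minimal sketch of the pattern in Kotlin coroutines (the language of the main question on this page; .NET offers the same idea through async/await). The names txns and processTxn are hypothetical stand-ins for the question's queue and process_txn:
import kotlinx.coroutines.*
import kotlinx.coroutines.channels.Channel

fun processTxn(txn: String) = println("processed $txn") // hypothetical handler

fun main() = runBlocking {
    val txns = Channel<String>(capacity = 100) // stands in for the txn queue
    val workers = List(4) {
        launch(Dispatchers.Default) {
            // Suspends while the queue is empty instead of polling with sleep,
            // so no thread is blocked just waiting on the queue.
            for (txn in txns) processTxn(txn)
        }
    }
    listOf("t1", "t2", "t3").forEach { txns.send(it) }
    txns.close()      // no more txns; lets the worker loops finish
    workers.joinAll()
}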

Node.js multithreading using threads-a-gogo

I am implementing a REST service for financial calculations, so each request is supposed to be a CPU-intensive task, and I think that the best place to create threads is in the following function:
exports.execute = function(data, params, f, callback) {
    var queriesList = [];
    var resultList = [];
    for (var i = 0; i < data.lista.length; i++) {
        var query = (function(cod) {
            return function(callbackFlow) {
                params.paramcodneg = cod;
                doCdaQuery(params, function(err, result) {
                    if (err) {
                        return callback({ERROR: err}, null);
                    }
                    f(data, result, function(ret) {
                        resultList.push(ret);
                        callbackFlow();
                    });
                });
            };
        })(data.lista[i]);
        queriesList.push(query);
    }
    flow.parallel(queriesList, function() {
        callback(null, resultList);
    });
};
I don't know which is best: running flow.parallel in a separate thread, or running each function of the queriesList in its own thread. Which is best? And how do I use the threads-a-gogo module for that?
I've tried, but couldn't write the right code for it.
Thanks in advance.
Kleyson Rios.
I'll admit that I'm relatively new to node.js and I haven't yet used threads a gogo, but I have had some experience with multi-threaded programming, so I'll take a crack at answering this question.
Creating a thread for every single query (I'm assuming these queries are CPU-bound calculations rather than IO-bound calls to a database) is not a good idea. Creating and destroying threads is an expensive operation, so creating and destroying a group of threads for every request that requires calculation is going to be a huge drag on performance. Too many threads also cause more overhead as the processor switches between them, and there isn't any advantage to having more worker threads than processor cores.
Also, if each query doesn't take much processing time, more time will be spent creating and destroying the thread than running the query; most of the time would go to threading overhead. In this case, you would be much better off with a single-threaded solution using flow or async, which distributes the processing over multiple ticks to allow the node.js event loop to run.
Single-threaded solutions are the easiest to understand and debug, but if the queries are preventing the main thread from getting other stuff done, then a multi-threaded solution is necessary.
The multi-threaded solution you propose is pretty good. Running all the queries in a separate thread prevents the main thread from bogging down. However, there isn't any point in using flow or async in this case. These modules simulate multi-threading by distributing the processing over multiple node.js ticks, and tasks run "in parallel" don't execute in any particular order, but they still run in a single thread. Since you're processing the queries in their own thread, and they're no longer interfering with the node.js event loop, just run them one after another in a loop. Since all the action is happening in a thread without a node.js event loop, using flow or async there just introduces more overhead for no additional benefit.
A more efficient solution is to have a thread pool hanging out in the background and throw tasks at it. The thread pool would ideally have the same number of threads as processor cores, and would be created when the application starts up and destroyed when the application shuts down, so the expensive creating and destroying of threads only happens once. I see that Threads a Gogo has a thread pool that you can use, although I'm afraid I'm not yet familiar enough with it to give you all the details of using it.
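The shape of that idea, in JVM terms (a Kotlin sketch, matching the language of the main question on this page; threads a gogo's own API differs):
import java.util.concurrent.Executors

// Created once at startup: one worker per core for CPU-bound tasks.
val pool = Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors())

fun submitQuery(work: Runnable) {
    pool.submit(work) // threads are reused; none are created or destroyed per task
}

// Call pool.shutdown() once, when the application exits.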
I'm drifting into territory I'm not familiar with here, but I believe you could do it by pushing each query individually onto the global thread pool and when all the callbacks have completed, you'll be done.
The Node.flow module would be handy here, not because it would make processing any faster, but because it would help you manage all the query tasks and their callbacks. You would use a loop to push a bunch of parallel tasks on the flow stack using flow.parallel(...), where each task would send a query to the global threadpool using threadpool.any.eval(), and then call ready() in the threadpool callback to tell flow that the task is complete. After the parallel tasks have been queued up, use flow.join() to run all the tasks. That should run the queries on the thread pool, with the thread pool running as many tasks as it can at once, using all the cores and avoiding creating or destroying threads, and all the queries will have been processed.
Other requests would also be tossing their tasks onto the thread pool as well, but you wouldn't notice that because the request being processed would only get callbacks for the tasks that the request gave to the thread pool. Note that this would all be done on the main thread. The thread pool would do all the non-main-thread processing.
You'll need to do some threads a gogo and node.flow documentation reading and figure out some of the details, but that should give you a head start. Using a separate thread is more complex than using the main thread, and making use of a thread pool is even more complex, so you'll have to choose which one is best for you. The extra complexity might or might not be worth it.

Play Framework: Async vs Sync performance

I have the following code:
def sync = Action {
  val t0 = System.nanoTime()
  Thread.sleep(100)
  val t1 = System.nanoTime()
  Ok("Elapsed time: " + (t1 - t0) / 1000000.0 + "ms")
}

def async = Action {
  val t0 = System.nanoTime()
  Async {
    Future {
      Thread.sleep(100)
      val t1 = System.nanoTime()
      Ok("Elapsed time: " + (t1 - t0) / 1000000.0 + "ms")
    }
  }
}
The difference between them is that sync sleeps on the thread that received the request, while async sleeps on a separate thread so that the thread in charge of receiving requests can keep receiving them without blocking. When I profile threads, I see a sudden increase in the number of threads created for async requests, as expected. However, both methods, at 4000 concurrent connections with a 20-second ramp, result in the same throughput and latency. I expected async to perform better. Why would this be?
The short answer is that both methods are essentially the same.
Actions themselves are always asynchronous (see documentation on handling asynchronous results).
In both cases, the sleep call occurs in the action's thread pool (which is not optimal).
As stated in Understanding Play thread pools:
Play framework is, from the bottom up, an asynchronous web framework. Streams are handled asynchronously using iteratees. Thread pools in Play are tuned to use fewer threads than in traditional web frameworks, since IO in play-core never blocks.
Because of this, if you plan to write blocking IO code, or code that could potentially do a lot of CPU intensive work, you need to know exactly which thread pool is bearing that workload, and you need to tune it accordingly.
For instance, this code fragment uses a separate thread pool:
Future {
  // Some blocking or expensive code here
}(Contexts.myExecutionContext)
As additional resources, see this answer and this video for more information on handling asynchronous actions, and these forum messages for extensive discussions on the subject.

Play Framework: thread-pool-executor vs fork-join-executor

Let's say we have the action below in our controller. On each request, performLogin will be called by many users.
def performLogin() = {
  Async {
    // API call to datasource1
    val id = databaseService1.getIdForUser();
    // API call to another data source; this call depends on the id returned above
    val user = databaseService2.getUserGivenId(id);
    // Very CPU intensive task
    val token = performProcess(user)
    // Very CPU intensive calculations
    val hash = encrypt(user)
    Future.successful(hash)
  }
}
I kind of know what the fork-join executor does. Basically, from the main thread which receives a request, it spawns multiple worker threads which in turn divide the work into a few chunks. Eventually the main thread joins those results and returns from the function.
On the other hand, if I were to choose the thread-pool executor, my understanding is that a thread is chosen from the thread pool, the selected thread does the work and then goes back to the pool to listen for more work. So no subdividing of the task happens here.
In the above code, parallelism by the fork-join executor is not possible, in my opinion: each call to the different methods/functions requires something from the previous step. If I were to choose the fork-join executor for the threading, how would that benefit me? How would the execution of the above code differ between the fork-join and thread-pool executors?
Thanks
This isn't parallel code; everything inside of your Async call will run in one thread. In fact, Play! never spawns new threads in response to requests - it's event-based, with an underlying thread pool that handles whatever work needs to be done.
The executor handles scheduling the work from Akka actors and from most Futures (not those created with Future.successful or Future.failed). In this case, each request will be a separate task that the executor has to schedule onto a thread.
The fork-join-executor replaced the thread-pool-executor because it allows work stealing, which improves efficiency. There is no difference in what can be parallelized with the two executors.
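In plain JVM terms (a Kotlin sketch; Play and Akka configure their executors via settings rather than constructing them in code like this), both pools accept the same independent tasks, and the difference is only in how idle workers find work:
import java.util.concurrent.Executors
import java.util.concurrent.ForkJoinPool

fun main() {
    // Work stealing: idle workers steal queued tasks from busy workers' queues.
    val forkJoin = ForkJoinPool(4)
    // Single shared queue: each worker takes the next task when it becomes free.
    val fixed = Executors.newFixedThreadPool(4)
    repeat(8) { i ->
        forkJoin.submit { println("fork-join task $i on ${Thread.currentThread().name}") }
        fixed.submit { println("fixed-pool task $i on ${Thread.currentThread().name}") }
    }
    forkJoin.shutdown()
    fixed.shutdown()
}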

F# Asynch thread problem

I am learning F# and am very interested in this language.
I am trying to create async expressions that run asynchronously, for example:
let prop1 = async {
    for i = 0 to 1000000 do ()
    MessageBox.Show("Done") |> ignore  // ignore the DialogResult so this is Async<unit>
}
let prop2 = async {
    for i = 0 to 1000000 do ()
    MessageBox.Show("Done2") |> ignore
}
Async.Start(prop1)
Async.Start(prop2)
When I run the program, the thread count of the process increases from 6 to 8. When I am done and close the 2 message boxes, the process does not seem to destroy those created threads; the count is still 8. What happened, or have I misunderstood F# asynchronous?
Thanks for your help.
The threads are taken from a thread pool (which is why there are more threads than actions, incidentally).
The pool exists until the application terminates.
Nothing to worry about.
Edit: For a nice in-depth article on F#, async and the ThreadPool, see http://www.voyce.com/index.php/2011/05/27/fsharp-async-plays-well-with-others/
The runtime might use a thread pool; that is, threads are not destroyed but wait for further asynchronous tasks. This technique helps the runtime reduce the time needed to start a new async operation, because creating a new thread can consume time and resources.
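The same reuse is easy to observe on the JVM as well (a Kotlin sketch, matching the language of the main question on this page; the .NET ThreadPool works on the same principle):
import java.util.concurrent.Executors

fun main() {
    val pool = Executors.newFixedThreadPool(2)
    // All four tasks run on the same two pool threads: the threads are parked
    // between tasks rather than destroyed, which is why the process's thread
    // count stays elevated after the work completes.
    repeat(4) { i ->
        pool.submit { println("task $i on ${Thread.currentThread().name}") }
    }
    pool.shutdown()
}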
