Using thenAccept after supplyAsync blocks the main thread

I am developing a web application which communicates with other web applications. From time to time, my system sends HTTP requests as notifications to other systems. Since their responses are not essential to me, I send the requests with Java 8's CompletableFuture.supplyAsync and print their responses with thenAccept so that my main thread will not get blocked. However, I found the CompletableFuture chains took around 100 to 200 ms each time, which confused me, because from my understanding thenAccept() should run in the same thread as supplyAsync()'s.
I mocked my process with the code below:
public static void run() {
    long start = System.currentTimeMillis();
    log.info("run start -> " + new Timestamp(start));
    CompletableFuture.supplyAsync(() -> {
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return 42;
    }).thenAccept(res -> log.info("run result -> " + res + ", time -> " + new Timestamp(System.currentTimeMillis())));
    log.info("run duration ->" + (System.currentTimeMillis() - start));
}

public static void runAsync() {
    long start = System.currentTimeMillis();
    log.info("runAsync start -> " + new Timestamp(start));
    CompletableFuture.supplyAsync(() -> {
        try {
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        return 42;
    }).thenAcceptAsync(res -> log.info("runAsync result -> " + res + ", time ->" + new Timestamp(System.currentTimeMillis())));
    log.info("runAsync duration ->" + (System.currentTimeMillis() - start));
}

public static void main(String[] args) throws InterruptedException {
    Test.run();
    Test.runAsync();
    Thread.sleep(1000);
}
The run() method uses thenAccept() after supplyAsync(), while runAsync() uses thenAcceptAsync(). I expected both of them to take just a few milliseconds. However, the actual output is:
10:04:54.632 [main] INFO Test - run start -> 2017-12-08 10:04:54.622
10:04:54.824 [main] INFO Test - run duration ->202
10:04:54.824 [main] INFO Test - runAsync start -> 2017-12-08 10:04:54.824
10:04:54.826 [main] INFO Test - runAsync duration ->2
10:04:55.333 [ForkJoinPool.commonPool-worker-1] INFO Test - run result -> 42, time -> 2017-12-08 10:04:55.333
10:04:55.333 [ForkJoinPool.commonPool-worker-3] INFO Test - runAsync result -> 42, time ->2017-12-08 10:04:55.333
We can see that run() takes 202 ms, roughly 100 times the duration of runAsync(), which takes only 2 ms.
I don't understand where the 202 ms of overhead comes from; it is obviously not the lambda passed to supplyAsync(), which sleeps for 500 ms.
Could anyone explain why the run() method blocks, and should I always use thenAcceptAsync() over thenAccept()?
Many thanks.

…because from my understanding thenAccept() should run in the same thread as supplyAsync()'s
Your understanding is wrong.
From the documentation of CompletableFuture:
Actions supplied for dependent completions of non-async methods may be performed by the thread that completes the current CompletableFuture, or by any other caller of a completion method.
The most obvious consequence is that when a future is already completed, the function passed to thenAccept() will be evaluated directly in the caller’s thread, as the future has no possibility to command the thread which completed it. In fact, there is no association of a CompletableFuture with a thread at all, as anyone could call complete on it, not just the thread executing the Supplier you passed to supplyAsync. That’s also the reason why cancel does not support interruption. The future doesn’t know which thread(s) could potentially try to complete it.
The not so obvious consequence is that even the behavior described above is not guaranteed. The phrase “or by any other caller of a completion method” does not restrict it to the caller of the completion method registering the dependent action. It could also be any other caller registering a dependent action on the same future. So if two threads are calling thenApply concurrently on the same future, either of them could end up evaluating both functions or even weirder, each thread could end up executing the other thread’s action. The specification does not preclude it.
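To see the first consequence in action, here is a minimal sketch using nothing beyond the JDK (not part of the original answer):

import java.util.concurrent.CompletableFuture;

public class WhoRunsThenAccept {
    public static void main(String[] args) {
        // An already-completed future: the dependent action runs right here,
        // in the caller's (main) thread.
        CompletableFuture.completedFuture(42)
            .thenAccept(v -> System.out.println(
                "completed future -> " + Thread.currentThread().getName()));

        // A not-yet-completed future: the action will typically run in the
        // thread that completes it (a common-pool worker here), but as the
        // quoted documentation says, even that is not guaranteed.
        CompletableFuture.supplyAsync(() -> 42)
            .thenAccept(v -> System.out.println(
                "async future -> " + Thread.currentThread().getName()))
            .join();
    }
}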
For the test case you have provided in your question, you are more likely to measure the initialization overhead, as described in this answer. But for the actual problem in your web application, where the framework will be initialized only once, you're likely stumbling over the wrong understanding of thenApply's behavior (or any non-async chaining method in general). If you want to be sure that the evaluation does not happen in the caller's thread, you must use thenApplyAsync.

The 200 ms are the startup time for the thread pool and all the classes supporting it.
It becomes obvious if you swap the statements in your main class:
public static void main(String[] args) throws InterruptedException {
    Test.runAsync();
    Test.run();
    Thread.sleep(1000);
}
Now Test.runAsync() is the call that needs 200 ms, and Test.run() completes in 2 ms.
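To confirm this is a one-time initialization cost rather than anything inherent to thenAccept(), you can warm the machinery up before measuring. A minimal sketch (not from the original answer; the timing comments are expectations based on the explanation above):

public static void main(String[] args) throws InterruptedException {
    // Warm-up: forces class loading and spins up the common pool's threads
    // before any measurement is taken.
    CompletableFuture.supplyAsync(() -> 0).join();

    Test.run();       // should now take only a few ms
    Test.runAsync();  // likewise
    Thread.sleep(1000);
}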

Related

How many threads does AsyncHttpClient create?

I use the async HTTP client in my code to asynchronously handle GET responses. I can run 100 requests simultaneously. I use just one instance of httpClient in the container:
@Bean(destroyMethod = "close")
open fun httpClient() = Dsl.asyncHttpClient()
The code looks like:
fun method(): CompletableFuture<String> {
    return httpClient.prepareGet("someUrl").execute()
        .toCompletableFuture()
        .thenApply(::getResponseBody)
}
It works fine functionally. In my testing I use a mock endpoint with the same URL. But my expectation was that all the requests would be handled by a few threads, yet in the profiler I can see that 16 threads are created for AsyncHttpClient, and they aren't destroyed even when there are no requests to send.
My expectations were that:
- there would be fewer threads for an async client
- threads would be destroyed after some configured timeout
Is there some option to control how many threads AsyncHttpClient can create? Am I missing something in my expectations?
UPDATE 1
I saw the instructions at https://github.com/AsyncHttpClient/async-http-client/wiki/Connection-pooling but found no info on the thread pool.
UPDATE 2
I also created a method that does the same, but with a handler and an additional executor pool.
The utility method looks like:
fun <Value, Result> CompletableFuture<Value>.handleResultAsync(executor: Executor, initResultHandler: ResultHandler<Value, Result>.() -> Unit): CompletableFuture<Result> {
    val rh = ResultHandler<Value, Result>()
    rh.initResultHandler()
    val handler = BiFunction { value: Value?, exception: Throwable? ->
        if (exception == null) rh.success?.invoke(value) else rh.fail?.invoke(exception)
    }
    return handleAsync(handler, executor)
}
The updated method looks like:
fun method(): CompletableFuture<String> {
    return httpClient.prepareGet("someUrl").execute()
        .toCompletableFuture()
        .handleResultAsync(executor) {
            success = { response ->
                logger.info("ok")
                getResponseBody(response!!)
            }
            fail = { ex ->
                logger.error("Failed to execute request", ex)
                throw ex
            }
        }
}
Now I can see that the result of the GET method is executed on threads provided by the thread pool (previously the result was executed on "AsyncHttpClient-3-x"), but the additional threads for AsyncHttpClient are still created and not destroyed.
AHC has two types of threads:
1. For I/O operations. On your screen, these are the AsyncHttpClient-x-x threads. AHC creates 2 * core_number of those.
2. For timeouts. On your screen, this is the AsyncHttpClient-timer-1-1 thread. There should be only one.
Source: this issue on GitHub: https://github.com/AsyncHttpClient/async-http-client/issues/1658
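As for controlling the count: the answer above doesn't cover it, but recent AHC 2.x releases expose the I/O thread count on the config builder. A sketch under that assumption (verify the setter names against your AHC version):

import org.asynchttpclient.AsyncHttpClient;
import org.asynchttpclient.Dsl;

public class AhcConfigSketch {
    public static void main(String[] args) throws Exception {
        // Assumption: AHC 2.x DefaultAsyncHttpClientConfig.Builder
        AsyncHttpClient client = Dsl.asyncHttpClient(
                Dsl.config()
                    .setIoThreadsCount(4)      // cap the AsyncHttpClient-x-x I/O threads
                    .setThreadPoolName("ahc")  // name them so they're easy to spot in a profiler
                    .build());
        client.close();
    }
}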

Skipping threads based on parameter, then returning to them later

I have a method that takes in a value, and if a condition is met the action shouldn't run for 24 hours. But while it is stopped I want to run the other threads that don't meet that condition.
In this example I have 30 threads made at the beginning of the program. Once I make 5 pieces of cheese I need to stop, because that's too much cheese. What would be great is if there were a place to send threads that can't be acted on until the time runs out, while the others keep running. Task.Delay, even with Wait, does not seem to be effective here.
Here's my code sample:
// Stop making cheese when you have enough for the day but continue making others
public void madeEnoughToday(string cheese)
{
    // Find how much cheese is made based on cheese type.
    DataGridViewRow row = cheeseGV.Rows
        .Cast<DataGridViewRow>()
        .Where(r =>
            r.Cells["Cheese"].Value.ToString().Equals(cheese))
        .First();
    if (row.Cells["MadeToday"].Value.Equals(row.Cells["Perday"].Value))
    {
        Task.Delay(30000).Wait();
    }
}
When I need to pause thread execution, I use another thread (a global variable, or another implementation) and call the Thread.Join() method on the second thread instance.
Thread tPause; // global var

private void MyThreadFunc()
{
    // do something
    if (pauseCondition)
    {
        tPause = new Thread(PauseThread);
        tPause.Start();
        tPause.Join(); // You can specify needed milliseconds, or TimeSpan
        // the subsequent code will not be executed while tPause.IsAlive == true
        // IMPORTANT: if tPause == null during Join() - an exception occurs
    }
}

private void PauseThread()
{
    Thread.Sleep(Timeout.Infinite); // You can specify needed milliseconds, or TimeSpan
}

private void Main()
{
    // any actions
    Thread myThread = new Thread(MyThreadFunc);
    myThread.Start();
    // any actions
}
There are many ways to implement this.
If you want to resume the thread's execution, you can call the Thread.Abort() method on the pause-thread instance, or use a more sophisticated function for the pause thread.

How to parallelize an azure worker role?

I have got a Worker Role running in Azure.
This worker processes a queue containing a large number of integers. For each integer I have to do quite lengthy processing (from 1 second to 10 minutes, depending on the integer).
As this is quite time consuming, I would like to do these processings in parallel. Unfortunately, my parallelization does not seem to be efficient when I test with a queue of 400 integers.
Here is my implementation:
public class WorkerRole : RoleEntryPoint {
    private readonly CancellationTokenSource cancellationTokenSource = new CancellationTokenSource();
    private readonly ManualResetEvent runCompleteEvent = new ManualResetEvent(false);
    private readonly Manager _manager = Manager.Instance;
    private static readonly LogManager logger = LogManager.Instance;

    public override void Run() {
        logger.Info("Worker is running");
        try {
            this.RunAsync(this.cancellationTokenSource.Token).Wait();
        }
        catch (Exception e) {
            logger.Error(e, 0, "Error Run Worker: " + e);
        }
        finally {
            this.runCompleteEvent.Set();
        }
    }

    public override bool OnStart() {
        bool result = base.OnStart();
        logger.Info("Worker has been started");
        return result;
    }

    public override void OnStop() {
        logger.Info("Worker is stopping");
        this.cancellationTokenSource.Cancel();
        this.runCompleteEvent.WaitOne();
        base.OnStop();
        logger.Info("Worker has stopped");
    }

    private async Task RunAsync(CancellationToken cancellationToken) {
        while (!cancellationToken.IsCancellationRequested) {
            try {
                _manager.ProcessQueue();
            }
            catch (Exception e) {
                logger.Error(e, 0, "Error RunAsync Worker: " + e);
            }
            await Task.Delay(1000, cancellationToken);
        }
    }
}
And the implementation of the ProcessQueue:
public void ProcessQueue() {
    try {
        _queue.FetchAttributes();
        int? cachedMessageCount = _queue.ApproximateMessageCount;
        if (cachedMessageCount != null && cachedMessageCount > 0) {
            var listEntries = new List<CloudQueueMessage>();
            listEntries.AddRange(_queue.GetMessages(MAX_ENTRIES));
            Parallel.ForEach(listEntries, ProcessEntry);
        }
    }
    catch (Exception e) {
        logger.Error(e, 0, "Error ProcessQueue: " + e);
    }
}
And ProcessEntry:
private void ProcessEntry(CloudQueueMessage entry) {
    try {
        int id = Convert.ToInt32(entry.AsString);
        Service.GetData(id);
        _queue.DeleteMessage(entry);
    }
    catch (Exception e) {
        _queueError.AddMessage(entry);
        _queue.DeleteMessage(entry);
        logger.Error(e, 0, "Error ProcessEntry: " + e);
    }
}
In the ProcessQueue function, I tried different values of MAX_ENTRIES: first 20 and then 2.
It seems to be slower with MAX_ENTRIES = 20, but whatever the value of MAX_ENTRIES, it remains quite slow.
My VM is an A2 medium.
I really don't know if I am doing the parallelization correctly; maybe the problem comes from the worker itself (maybe it is hard to make this run in parallel).
You haven't mentioned which Azure messaging/queuing technology you are using; however, for tasks where I want to process multiple messages in parallel, I tend to use the Message Pump pattern on Service Bus Queues and Subscriptions, leveraging the OnMessage() method available on both the Service Bus Queue and Subscription clients:
QueueClient OnMessage() - https://msdn.microsoft.com/en-us/library/microsoft.servicebus.messaging.queueclient.onmessage.aspx
SubscriptionClient OnMessage() - https://msdn.microsoft.com/en-us/library/microsoft.servicebus.messaging.subscriptionclient.onmessage.aspx
An overview of how this stuff works :-) - http://fabriccontroller.net/blog/posts/introducing-the-event-driven-message-programming-model-for-the-windows-azure-service-bus/
From MSDN:
When calling OnMessage(), the client starts an internal message pump that constantly polls the queue or subscription. This message pump consists of an infinite loop that issues a Receive() call. If the call times out, it issues the next Receive() call.
This pattern allows you to use a delegate (or anonymous function in my preferred case) that handles the receipt of the Brokered Message instance on a separate thread on the WaWorkerHost process. In fact, to increase the level of throughput, you can specify the number of threads that the Message Pump should provide, thereby allowing you to receive and process 2, 4, 8 messages from the queue in parallel. You can additionally tell the Message Pump to automagically mark the message as complete when the delegate has successfully finished processing the message. Both the thread count and AutoComplete instructions are passed in the OnMessageOptions parameter on the overloaded method.
public override void Run()
{
    var onMessageOptions = new OnMessageOptions()
    {
        AutoComplete = true, // Message-Pump will call Complete on messages after the callback has completed processing.
        MaxConcurrentCalls = 2 // Max number of threads the Message-Pump can spawn to process messages.
    };

    sbQueueClient.OnMessage((brokeredMessage) =>
    {
        // Process the Brokered Message Instance here
    }, onMessageOptions);

    RunAsync(_cancellationTokenSource.Token).Wait();
}
You can still leverage the RunAsync() method to perform additional tasks on the main Worker Role thread if required.
Finally, I would also recommend that you look at scaling your Worker Role instances out to a minimum of 2 (for fault tolerance and redundancy) to increase your overall throughput. From what I have seen with multiple production deployments of this pattern, OnMessage() performs perfectly when multiple Worker Role Instances are running.
A few things to consider here:
Are your individual tasks CPU intensive? If so, parallelism may not help. However, if they are mostly waiting on data-processing tasks to be handled by other resources, parallelizing is a good idea.
If parallelizing is a good idea, consider not using Parallel.ForEach for queue processing. Parallel.ForEach has two issues that prevent you from being optimal:
1. The code waits until all kicked-off threads finish processing before moving on. So, if you have 5 threads that need 10 seconds each and 1 thread that needs 10 minutes, the overall processing time for Parallel.ForEach will be 10 minutes.
2. Even though you might assume that all of the threads will start processing at the same time, Parallel.ForEach does not work this way. It looks at the number of cores on your server and other parameters, and generally only kicks off the number of threads it thinks it can handle, without knowing much about what's in those threads. So, if you have a lot of non-CPU-bound threads that /can/ be kicked off at the same time without causing CPU over-utilization, the default behaviour will likely not run them optimally.
How to do this optimally:
I am sure there are a ton of solutions out there, but for reference, the way we've architected it in CloudMonix (which must kick off hundreds of independent threads and complete them as fast as possible) is by using ThreadPool.QueueUserWorkItem and manually keeping track of the number of threads that are running.
Basically, we use a thread-safe collection to keep track of the running threads started by ThreadPool.QueueUserWorkItem, and remove them from that collection once they complete. The queue-monitoring loop is independent of the executing logic: it picks up more messages from the queue whenever the processing collection is not full, up to whatever limit you find most optimal, adds them to the collection, and kick-starts them via ThreadPool.QueueUserWorkItem. When processing completes, a delegate cleans the thread out of the collection. A sketch of the idea follows below.
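The answer's pattern is C#-specific (ThreadPool.QueueUserWorkItem); purely for illustration, here is a minimal Java analogue of the same bounded-in-flight idea, using a semaphore in place of the thread-safe tracking collection (all names are hypothetical):

import java.util.concurrent.*;

public class BoundedWorker {
    private static final int MAX_IN_FLIGHT = 8; // tune to your workload
    private final ExecutorService pool = Executors.newCachedThreadPool();
    private final Semaphore slots = new Semaphore(MAX_IN_FLIGHT);

    // queue and process() stand in for the cloud queue client and the
    // per-message work from the question.
    void runLoop(BlockingQueue<Integer> queue) throws InterruptedException {
        while (!Thread.currentThread().isInterrupted()) {
            Integer msg = queue.take(); // next message
            slots.acquire();            // block while MAX_IN_FLIGHT tasks are running
            pool.submit(() -> {
                try {
                    process(msg);
                } finally {
                    slots.release();    // free the slot when this task ends
                }
            });
        }
    }

    private void process(Integer msg) {
        // long-running work (1 second to 10 minutes in the question)
    }
}

Unlike Parallel.ForEach, nothing here waits for a whole batch to finish: one slow task occupies a single slot while fresh messages keep flowing through the remaining ones.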
Hope this helps and makes sense

Interrupt parallel Stream execution

Consider this code :
Thread thread = new Thread(() -> tasks.parallelStream().forEach(Runnable::run));
tasks is a list of Runnables that should be executed in parallel.
When we start this thread and it begins its execution, then depending on some calculations we may need to interrupt (cancel) all those tasks.
Interrupting the Thread will only stop one of the executions. How do we handle the others? Or maybe streams should not be used that way? Or do you know a better solution?
You can use a ForkJoinPool to interrupt the threads:
@Test
public void testInterruptParallelStream() throws Exception {
    final AtomicReference<InterruptedException> exc = new AtomicReference<>();
    final ForkJoinPool forkJoinPool = new ForkJoinPool(4);
    // use the pool with a parallel stream to execute some tasks
    forkJoinPool.submit(() -> {
        Stream.generate(Object::new).parallel().forEach(obj -> {
            synchronized (obj) {
                try {
                    // task that is blocking
                    obj.wait();
                } catch (final InterruptedException e) {
                    exc.set(e);
                }
            }
        });
    });
    // wait until the stream got started
    Thread.sleep(500);
    // now we want to interrupt the task execution
    forkJoinPool.shutdownNow();
    // wait for the interrupt to occur
    Thread.sleep(500);
    // check that we really got an interruption in the parallel stream threads
    assertTrue(exc.get() instanceof InterruptedException);
}
The worker threads really do get interrupted, terminating the blocking operation. You can also call shutdown() within the Consumer.
Note that the sleeps are not ideal for a proper unit test; you might find better ways to wait only as long as necessary. But they are enough to show that the approach works.
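Since the answer itself flags the sleeps as a weak point, one hedged way to make at least the startup deterministic is to have the stream's consumer signal readiness through a CountDownLatch before blocking (a sketch along the lines of the test above, not from the original answer):

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.TimeUnit;
import java.util.stream.Stream;

public class InterruptStreamSketch {
    public static void main(String[] args) throws InterruptedException {
        final CountDownLatch started = new CountDownLatch(1);
        final ForkJoinPool pool = new ForkJoinPool(4);
        pool.submit(() -> Stream.generate(Object::new).parallel().forEach(obj -> {
            synchronized (obj) {
                try {
                    started.countDown(); // signal: the stream is running
                    obj.wait();          // block until interrupted
                } catch (InterruptedException e) {
                    // expected after shutdownNow()
                }
            }
        }));
        started.await();    // no arbitrary sleep needed for startup
        pool.shutdownNow(); // interrupts the blocked worker threads
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}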
You aren't actually running the Runnables on the Thread you are creating; you are running a thread that submits them to a pool. So with:
Thread thread = new Thread(() -> tasks.parallelStream().forEach(Runnable::run));
In this example, you are in effect doing:
List<Runnable> tasks = ...;
Thread thread = new Thread(new Runnable() {
    public void run() {
        for (Runnable r : tasks) {
            ForkJoinPool.commonPool().submit(r);
        }
    }
});
This is because you are using a parallelStream, which delegates to the common pool when handling parallel executions.
As far as I know, you cannot get a handle on the threads that are executing your tasks with a parallelStream, so you may be out of luck. You can always do tricky stuff to get the threads, but that probably isn't the best idea.
Something like the following should work for you:
AtomicBoolean shouldCancel = new AtomicBoolean();
...
tasks.parallelStream().allMatch(task -> {
    task.run();
    return !shouldCancel.get();
});
The documentation for the method allMatch specifically says that it "may not evaluate the predicate on all elements if not necessary for determining the result." So if the predicate doesn't match when you want to cancel, then it doesn't need to evaluate any more. Additionally, you can check the return result to see if the loop was cancelled or not.
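For illustration, a runnable sketch of this idea (a toy example, not from the original answer; with a parallel stream, exactly which tasks still run after the flag flips is nondeterministic):

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

public class CancelableStream {
    public static void main(String[] args) {
        AtomicBoolean shouldCancel = new AtomicBoolean(false);
        List<Runnable> tasks = Arrays.asList(
                () -> System.out.println("task 1"),
                () -> shouldCancel.set(true), // some calculation decides to cancel
                () -> System.out.println("task 3"));

        // allMatch short-circuits once the predicate returns false,
        // so tasks that have not started yet may be skipped.
        boolean ranToCompletion = tasks.parallelStream().allMatch(task -> {
            task.run();
            return !shouldCancel.get();
        });
        System.out.println("completed without cancellation: " + ranToCompletion);
    }
}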

Kill Spring scheduled thread

Is there any way to time out a scheduled task (kill the thread) in Spring if the task takes too long, or even hangs because of remote resource unavailability?
In my case, tasks can take too long or even hang because they're based on an HtmlUnitDriver (Selenium) sequence of steps, and from time to time it hangs. I would like to be able to set a time limit for the thread to execute: something like 1 minute at most.
I set up a fixed-rate execution of 5 minutes with an initial delay of 1 minute.
Thanks in advance
I did the same some time ago following this example: example
The basic idea is to put your code in a class implementing Callable or Runnable, then create a FutureTask wherever you are going to invoke your thread, with the Callable or Runnable class as the parameter. Define an executor and submit your FutureTask to it; you are then able to run the thread for at most x time inside a try/catch block. If the call ends with a TimeoutException, you will know that it took too long.
Here is my code:
// "Result" below is a placeholder for whatever your Callable returns
CallableServiceExecutor callableServiceExecutor = new CallableServiceExecutor();
FutureTask<Result> task = new FutureTask<>(callableServiceExecutor);
ExecutorService executor = Executors.newSingleThreadExecutor();
executor.submit(task);
Boolean exito = true;
Result result = null;
try {
    result = task.get(getTimeoutValidacion(), TimeUnit.SECONDS);
} catch (InterruptedException e) {
    exito = false;
} catch (ExecutionException e) {
    exito = false;
} catch (TimeoutException e) {
    exito = false;
}
task.cancel(true);
executor.shutdown();
See: How to timeout a thread
The short answer is that there is no easy or reliable way to kill a thread, due to the limitations of Java's thread implementation. ExecutorService#shutdown() is something of a hack, and heavy. It's best to deal with this in the task itself, e.g. at the network-request level: if you're making a REST request, set a timeout on the socket.
Or better, if you do some sort of message passing à la the Actor model (see Akka), you can send a message from a "supervisor" telling the Actor to die. Avoiding blocking by using something like Netty will also help.
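For completeness, a minimal sketch of the cooperative approach both answers point at: run the scheduled work on an executor you control and cancel its Future after a time limit. The task must still be interruptible (blocking calls or explicit interrupt checks) for the cancel to have any effect; the names below are illustrative, not from the original answers:

import java.util.concurrent.*;

public class TimeBoxedTask {
    public static void main(String[] args) throws Exception {
        ExecutorService worker = Executors.newSingleThreadExecutor();

        Future<?> future = worker.submit(() -> {
            // Must check the interrupt flag (or call interruptible blocking
            // methods), or cancellation will have no effect.
            while (!Thread.currentThread().isInterrupted()) {
                // ... one step of the HtmlUnitDriver (Selenium) sequence ...
            }
        });

        try {
            future.get(1, TimeUnit.MINUTES); // the question's 1-minute limit
        } catch (TimeoutException e) {
            future.cancel(true);             // interrupts the worker thread
        } finally {
            worker.shutdown();
        }
    }
}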
