What happens when a Spring @Async thread never completes? - multithreading

If you have an @Async method that returns a CompletableFuture, and the future never gets completed, does Spring just leak the thread? Yes, I know whoever is waiting on the result could time out and assume exceptional completion for later stages, but that doesn't stop the thread. Even if you call cancel, it does nothing to the running thread.
From the docs:
@param mayInterruptIfRunning this value has no effect in this
implementation because interrupts are not used to control
processing.
If I use Future instead of CompletableFuture, cancel will interrupt the thread. Unfortunately, there is no equivalent of "allOf" for Future like the one CompletableFuture gives us to wait for all the tasks, like so:
// wait for all the futures to finish, regardless of results
CompletableFuture.allOf(futures.toArray(CompletableFuture[]::new))
    // if exceptions happened in any future, swallow them;
    // I don't care, because I'm going to process each future in my list anyway;
    // we just wanted to wait for all the futures to finish
    .exceptionally(ex -> null);
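As an aside on the missing allOf: for plain Futures, the same "wait for everything, ignore per-task failures" step can be written as a loop. A minimal sketch (not from the question), assuming the tasks were collected into a List<Future<?>>:
import java.util.List;
import java.util.concurrent.CancellationException;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

final class Futures {
    // Block until every task has finished, ignoring per-task failures;
    // each future can still be inspected individually afterwards.
    static void awaitAll(List<? extends Future<?>> futures) {
        for (Future<?> f : futures) {
            try {
                f.get();
            } catch (ExecutionException | CancellationException ignored) {
                // swallowed: the caller will look at each future anyway
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();  // preserve the interrupt and stop waiting
                return;
            }
        }
    }
}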
How are we supposed to cancel a thread that we bailed on?
And if I don't cancel it (somehow), would my pool just be down a thread forever?
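One possible workaround, sketched with plain java.util.concurrent rather than Spring's @Async plumbing: submit the blocking work through an ExecutorService so its Future supports interrupt-based cancel(true), and mirror the outcome into a CompletableFuture so allOf still works. fetchPart() is a hypothetical blocking call and the pool sizing is arbitrary; treat this as a sketch, not as what Spring itself does:
import java.util.concurrent.*;

public class InterruptibleAsyncSketch {

    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    public CompletableFuture<String> submitInterruptible(int id) {
        CompletableFuture<String> result = new CompletableFuture<>();
        Future<?> task = pool.submit(() -> {
            try {
                result.complete(fetchPart(id));          // may block; throws if interrupted
            } catch (Throwable t) {
                result.completeExceptionally(t);
            }
        });
        // If a caller cancels the CompletableFuture, forward it to the underlying Future
        // as an interrupting cancel, which CompletableFuture.cancel() alone never does.
        result.whenComplete((value, ex) -> {
            if (result.isCancelled()) {
                task.cancel(true);
            }
        });
        return result;
    }

    // Hypothetical blocking call that honours interruption.
    private String fetchPart(int id) throws InterruptedException {
        Thread.sleep(10_000);
        return "part-" + id;
    }
}
With this shape you can still use CompletableFuture.allOf(...) over the mirrored futures, and cancelling one of them actually interrupts its worker thread, provided the blocking call responds to interruption.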

Related

How to close correctly a WINAPI Thread pool cancelling pending tasks

I have a question about closing a WINAPI ThreadPool.
Suppose I have initialized a thread pool with a cleanup group and pushed some tasks to it with SubmitThreadpoolWork.
I'm calling CloseThreadPoolTask in the task's callback function.
Currently there are tasks that are executing and others that are pending in the thread pool's queue.
Now, to close the thread pool I want to use the CloseThreadpoolCleanupGroupMembers function without waiting for the queued tasks to finish, but I still want to get a callback for each pending task so it can release its user-allocated resources (which come with the task's parameters).
I saw in this thread:
Cancelling scheduled work/io/timer items in WIN32 thread pool
that the callback I have passed to the cleanup group (PTP_CLEANUP_GROUP_CANCEL_CALLBACK) will also be invoked for work objects that are currently executing (because they are still tied to the cleanup group during execution). Obviously I don't want that to happen... Is there a way to ensure the cleanup cancel callback does not get invoked on currently executing tasks?
Thanks!
If the callback for a CreateThreadpoolWork item has begun executing, the cleanup (cancel) callback will not be called for it. Conversely, if the cancel callback is called for an item, PTP_WORK_CALLBACK will never be called for that item (neither before nor after). So there is really no problem here: either your PTP_WORK_CALLBACK is called for an item or the cancel callback is called for it; the two are mutually exclusive. So the reply in the topic Cancelling scheduled work/io/timer items in WIN32 thread pool contains very serious errors. As a test, I suggest inserting a MessageBox call at the beginning of PTP_WORK_CALLBACK and another MessageBox just before CloseThreadpoolCleanupGroupMembers; you can then confirm that once PTP_WORK_CALLBACK has begun executing, the cancel callback for that item is no longer called.

Disabling a System.Threading.Timer instance while its callback is in progress

I am using two instances of System.Threading.Timer to fire off 2 tasks that are repeated periodically.
My question is: if the timer is disabled, but at that point in time it is executing its callback on a thread, will the Main method exit, or will it wait for the executing callbacks to complete?
In the code below, Method1RunCount is synchronized for read and write using a lock statement (this part of the code is not shown below). The callback for timer1 increments Method1RunCount by 1 at the end of each run.
static void Main(string[] args)
{
    TimerCallback callback1 = Method1;
    System.Threading.Timer timer1 = new System.Threading.Timer(callback1, null, 0, 90000);
    TimerCallback callback2 = Method2;
    System.Threading.Timer timer2 = new System.Threading.Timer(callback2, null, 0, 60000);
    while (true)
    {
        System.Threading.Thread.Sleep(250);
        if (Method1RunCount == 4)
        {
            // DISABLE the TIMERS
            timer1.Change(System.Threading.Timeout.Infinite, System.Threading.Timeout.Infinite);
            timer2.Change(System.Threading.Timeout.Infinite, System.Threading.Timeout.Infinite);
            break;
        }
    }
}
This kind of code tends to work by accident: the period of the timer is large enough to avoid the threading race on the Method1RunCount variable. Make the period smaller and there's a real danger that the main thread won't see the value 4 at all. The odds get considerably worse when the processor is heavily loaded and the main thread doesn't get scheduled for a while. The timer's callback can then execute more than once while the main thread is waiting for the processor, completely missing the value getting incremented to 4. Note how the lock statement does not in fact prevent this; the lock isn't held by the main thread, since that thread is probably sleeping.
There's also no reasonable guess you can make at how often Method2 runs. Not just because it has a completely different timer period, but fundamentally because it isn't synchronized to either Method1 or the Main method's execution at all.
You'd normally increment Method1RunCount at the end of Method1. That doesn't otherwise guarantee that Method1 won't be aborted: it runs on a threadpool thread, and those always have Thread.IsBackground set to true, so the CLR will readily abort them when the main thread exits. This, again, tends not to cause a problem, by accident.
If it is absolutely essential that Method1 executes exactly 4 times, then the simple way to ensure that is to let Method1 do the counting. Calling Timer.Change() inside the method is fine. Use a class like AutoResetEvent to let the main thread know about it; the main thread then no longer needs the Sleep. You still need a lock to ensure that Method1 cannot be re-entered while it is executing. A good sign that you are getting thread synchronization wrong is seeing yourself use Thread.Sleep().
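For what it's worth, here is the same idea transposed to Java, since the pattern itself is language-neutral: the periodic callback counts its own runs, stops its own schedule, and signals the waiter, so the waiting thread needs no polling loop and no sleep. ScheduledExecutorService and CountDownLatch are assumed stand-ins for Timer and AutoResetEvent, and doWork() is hypothetical:
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;

public class CountInsideCallback {
    public static void main(String[] args) throws InterruptedException {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        CountDownLatch done = new CountDownLatch(1);
        AtomicInteger runCount = new AtomicInteger();
        AtomicReference<ScheduledFuture<?>> handle = new AtomicReference<>();

        handle.set(scheduler.scheduleAtFixedRate(() -> {
            doWork();                                     // the periodic work itself
            if (runCount.incrementAndGet() == 4) {        // the callback does the counting
                ScheduledFuture<?> self = handle.get();
                if (self != null) {
                    self.cancel(false);                   // stop further runs from inside the callback
                }
                done.countDown();                         // wake the waiting thread
            }
        }, 0, 500, TimeUnit.MILLISECONDS));

        done.await();                                     // no polling loop, no sleep
        scheduler.shutdown();
        scheduler.awaitTermination(5, TimeUnit.SECONDS);  // let an in-flight run finish
    }

    private static void doWork() { /* hypothetical periodic work */ }
}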
From the docs on System.Threading.Timer (http://msdn.microsoft.com/en-us/library/system.threading.timer.aspx):
When a timer is no longer needed, use the Dispose method to free the
resources held by the timer. Note that callbacks can occur after the
Dispose() method overload has been called, because the timer queues
callbacks for execution by thread pool threads. You can use the
Dispose(WaitHandle) method overload to wait until all callbacks have
completed.

Multi-Producer Single-Consumer Lazy Task Execution

I am trying to model a system where there are multiple threads producing data, and a single thread consuming the data. The trick is that I don't want a dedicated thread to consume the data because all of the threads live in a pool. Instead, I want one of the producers to empty the queue when there is work, and yield if another producer is already clearing the queue.
The basic idea is that there is a queue of work, and a lock around the processing. Each producer pushes its payload onto the queue, and then attempts to enter the lock. The attempt is non-blocking and returns either true (the lock was acquired), or false (the lock is held by someone else).
If the lock is acquired, then that thread then processes all of the data in the queue until it is empty (including any new payloads introduced by other producers during processing). Once all of the work has been processed, the thread releases the lock and quits out.
The following is C++ code for the algorithm:
void Process(ITask *task) {
    // queue is a thread-safe implementation of a regular queue
    queue.push(task);

    // crit_sec is some handle to a critical-section-like object.
    // try_scoped_lock uses RAII to attempt to acquire the lock in the constructor;
    // if the lock was acquired, it will release the lock in the destructor.
    try_scoped_lock lock(crit_sec);

    // See if this thread won the lottery. Prize is doing all of the dishes.
    if (!lock.Acquired())
        return;

    // This thread got the lock, so it needs to do the work.
    ITask *currTask;
    while (queue.try_pop(currTask)) {
        ... execute task ...
    }
}
In general this code works fine, and I have never actually witnessed the behavior I am about to describe below, but that implementation makes me feel uneasy. It stands to reason that a race condition is introduced between when the thread exits the while loop and when it releases the critical section.
The whole algorithm relies on the assumption that if the lock is being held, then a thread is servicing the queue.
I am essentially looking for enlightenment on 2 questions:
Am I correct that there is a race condition as described (bonus for other races)
Is there a standard pattern for implementing this mechanism that is performant and doesn't introduce race conditions?
Yes, there is a race condition.
Thread A adds a task, gets the lock, processes its own task, then asks the queue for another task. It is rejected.
Thread B at this point adds a task to the queue. It then attempts to get the lock and fails, because thread A has the lock. Thread B exits.
Thread A then exits, with the queue non-empty and nobody processing the task on it.
This will be difficult to find, because that window is relatively narrow. To make it more likely to show up, introduce a "sleep for 10 seconds" after the while loop. In the calling code, insert a task, wait 5 seconds, then insert a second task. After 10 more seconds, check that both inserted tasks are finished and that there is still an unprocessed task on the queue.
One way to fix this would be to change try_pop to try_pop_or_unlock, and pass in your lock to it. try_pop_or_unlock then atomically checks for an empty queue, and if so unlocks the lock and returns false.
Another approach is to improve the thread pool. Add a counting semaphore based "consume" task launcher to it.
semaphore_bool bTaskActive;
counting_semaphore counter;

when (counter || !bTaskActive)
    if (bTaskActive)
        return
    bTaskActive = true
    --counter
    launch_task( process_one_off_queue, when_done( [&]{ bTaskActive = false; } ) );
When the counting semaphore is active, or when poked by the finished consume task, it launches a consume task if there is no consume task active.
But that is just off the top of my head.
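For illustration, here is a sketch of one more way to close that window, written in Java only because the pattern is language-neutral: the draining thread releases its flag, then re-checks the queue and loops if a producer slipped a task in after the last failed pop. None of the names below come from the question:
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicBoolean;

public class InlineDrainQueue {
    private final Queue<Runnable> queue = new ConcurrentLinkedQueue<>();
    private final AtomicBoolean draining = new AtomicBoolean(false);

    public void process(Runnable task) {
        queue.add(task);
        while (true) {
            if (!draining.compareAndSet(false, true)) {
                // Someone else holds the drain flag. Either they will see our task in their
                // poll loop, or their post-release re-check (below) will catch it.
                return;
            }
            try {
                Runnable next;
                while ((next = queue.poll()) != null) {
                    next.run();              // execute the task
                }
            } finally {
                draining.set(false);         // release the "lock"
            }
            // Re-check: a producer may have enqueued between our last poll() and the release
            // above, and then bailed out on the failed compareAndSet.
            // If so, go around again and drain it ourselves.
            if (queue.isEmpty()) {
                return;
            }
        }
    }
}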

How to know when CloseThreadPool() function completes?

I am new to sockets programming and going through the documentation.
From the documentation I found this about the CloseThreadPool() function:
CloseThreadpool function. The thread pool is closed immediately if there are no outstanding callback objects that are bound to the thread pool. If there are, then the thread pool is released asynchronously when those outstanding objects are freed.
This thread pool is itself created inside a thread. My main thread takes input for exit; if exit is entered, I set the global variable KEEP_LISTENEING to false.
How would I make my main thread stop/sleep until this function truly completes in the other thread?
Use a cleanup group to wait for all callbacks. The sequence is:
CreateThreadpoolCleanupGroup()
SetThreadpoolCallbackCleanupGroup(&CallbackEnvironment, pointerCleanup, ...)
CloseThreadpoolCleanupGroupMembers(, FALSE, )
CloseThreadpoolCleanupGroup()

Is there a pattern to cancel a block on another dispatch queue?

This could be a much more generic question about how best to cancel blocking jobs on other threads, but I'm interested in a solution in the context of Grand Central Dispatch. I need to call a function that basically blocks until it gets data from the network; it could potentially be blocked forever. I have it set up now so that this blocking call happens on a private dispatch queue, and when I do get data, I put a block back on the main queue. The problem is that once I dispatch my private-queue block and its blocking call, I can never really cancel it. Imagine this ability was tied to a user settings toggle: if they toggled it off, I would want this blocking job and its execution block to essentially just end. Is there a good solution to this type of problem?
Thanks
- (void)_beginListeningForNetworkJunk
{
    dispatch_async(my_private_queue, ^{
        // blocks until it gets data
        id data = [NetworkListener waitForData];
        dispatch_async(dispatch_get_main_queue(), ^{
            [self _handleNetworkData:data];
        });
    });
}

- (void)_endListeningForNetworkJunk
{
    // How do I kill that job that is blocked on my private queue?
}
You can't. The problem is NetworkListener and its blocking, uninterruptible interface.
Normally, you'd code the block to service the network connection asynchronously and also monitor some other signalling mechanism, such as a custom run loop source (or NSPort or pipe file descriptor or …). When the network connection had activity, that would be serviced. When the signalling mechanism fired, you would shut down the network connection and exit the block.
In that way, the block could be cancellable with its cooperation.
Since your block is stuck in -waitForData, it can't cooperate. There's no mechanism for canceling blocks without their cooperation. The same is true of NSOperation and NSThread. The reason is that it's basically infeasible to terminate another thread's activity without its cooperation.
You need a different design for your networking code.
In principle, you can't cancel anything running on any other thread. You can only politely ask the task that is running on another thread to cancel. I usually create objects representing tasks so that "cancel" can be called on these objects.
In your situation: The waitForData cannot be cancelled (unless NetworkListener has some API to do it; in that case waitForData would need some mechanism to distinguish between data arriving and cancellation).
In _endListenForNetworkJunk, you can set a BOOL value "cancelled" to indicate the call is cancelled. Then in the code that runs on the main queue, check whether that "cancelled" value is still cleared. That way, if you call _endListenForNetworkJunk from the main thread, you're sure that _handleNetworkData will not be called. If you call _endListenForNetworkJunk from another thread, the main thread could just have started the call to _handleNetworkData.
If you checked "cancelled" just before dispatching to the main queue, that block could already be dispatched but not executing just before you call _endListenForNetworkJunk on the main thread.
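For reference, a rough transcription of that flag-based pattern into Java (the pattern is language-neutral; single-threaded executors stand in for the private and main dispatch queues, and NetworkListener.waitForData() is a stand-in for the blocking call in the question):
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicBoolean;

public class CancellableListener {
    private final ExecutorService privateQueue = Executors.newSingleThreadExecutor();
    private final ExecutorService mainQueue = Executors.newSingleThreadExecutor();
    private final AtomicBoolean cancelled = new AtomicBoolean(false);

    public void beginListening() {
        privateQueue.submit(() -> {
            Object data = NetworkListener.waitForData();   // may block indefinitely; cannot be killed
            mainQueue.submit(() -> {
                // The check runs on the "main" queue; because endListening() also routes its
                // cancellation through that queue, the handler cannot fire after the
                // cancellation has taken effect.
                if (!cancelled.get()) {
                    handleNetworkData(data);
                }
            });
        });
    }

    public void endListening() {
        // Does not free the blocked thread; it only turns its eventual result into a no-op.
        mainQueue.submit(() -> cancelled.set(true));
    }

    private void handleNetworkData(Object data) { /* ... */ }

    // Hypothetical stand-in for the blocking API in the question.
    static final class NetworkListener {
        static Object waitForData() {
            try {
                Thread.sleep(5_000);                       // simulate a long blocking wait
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return new Object();
        }
    }
}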
