Interrupting all tasks in an ExecutorService/ThreadPoolExecutor without shutting it down

Is there a way to interrupt all active tasks in an Executor?
I don't believe I should have to keep every previously submitted task stored just to fire such a common operation. And what if I am using CompletableFutures, which give no control over the computation code the way Future does? Do I really need to mess with complete() synchronization when I could simply tell the executor to do it for me?
I am looking for something asynchronous like:
// Easier than having both methods share a collection of tasks in a
// global variable, which would then need to track which tasks are
// already executed, currently executing, and so on
public void start(MyTask task) {
    executorService.submit(task);
}

public void stop() {
    executorService.cancelAnyTask();          // This, or
    executorService.interruptAnyActiveTask(); // even this
}
EDIT: I want to interrupt (or cancel) the active tasks; I don't care about the queued ones (it really doesn't matter whether the queued tasks are discarded or not). I just want to clear the executor's current work at a given discrete moment, even if 1 ms later it starts executing queued tasks again.
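For reference, a minimal sketch of one way such a stop() could work, assuming you are willing to subclass ThreadPoolExecutor: the executor remembers which worker thread is running which task via beforeExecute/afterExecute, so it can interrupt only the currently executing tasks and leave the queue alone. The class name InterruptibleExecutor and the method interruptActiveTasks() are invented for illustration; nothing like them exists on the standard ExecutorService.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class InterruptibleExecutor extends ThreadPoolExecutor {
    // Maps each task that is currently executing to the worker thread running it
    private final Map<Runnable, Thread> running = new ConcurrentHashMap<>();

    public InterruptibleExecutor(int poolSize) {
        super(poolSize, poolSize, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
    }

    @Override
    protected void beforeExecute(Thread t, Runnable r) {
        running.put(r, t);   // remember which worker thread picked up this task
        super.beforeExecute(t, r);
    }

    @Override
    protected void afterExecute(Runnable r, Throwable t) {
        running.remove(r);   // the task is done, forget it
        super.afterExecute(r, t);
    }

    // Interrupts every task that is executing right now; queued tasks are untouched.
    public void interruptActiveTasks() {
        running.values().forEach(Thread::interrupt);
    }
}

With this sketch, start() would submit to the executor as usual and stop() would just call interruptActiveTasks(). The usual caveats apply: it only affects tasks that actually respond to interruption, and a task finishing at the exact moment of the call could leave the interrupt flag set for the worker's next task.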

Getting the instance of the thread inside that same thread

Context:
I have a command-line application in Java which is written to work in peer-to-peer mode across different servers. Once one server starts, all other instances must stop. So I have written a piece of code that runs in a low-priority thread and monitors an AtomicBoolean value autoClose; whenever autoClose is set to true, the thread closes the application. (P.S.: I don't want to add a manual close because the application has 2 main high-priority threads and many temporary normal-priority threads.)
Here is the code:
/**
 * Watches autoClose boolean value and closes the connector once it is true
 * <p>
 * This is a very low priority thread which continuously monitors autoClose
 */
protected void watchAndClose() {
    Thread watchAutoClose = new Thread(() -> {
        while (true) {
            if (autoClose.get()) {
                close();
                // wait till closing is successful
                try {
                    TimeUnit.SECONDS.sleep(1);
                } catch (InterruptedException ignored) {
                    // I want instance of thread watchAutoClose so I can call this
                    // watchAutoClose.interrupt();
                }
                if (!component.getStatus()) setAutoClose(false);
            }
        }
    });
    watchAutoClose.setPriority(Thread.MIN_PRIORITY);
    watchAutoClose.start();
}
Question:
SonarLint says I can't leave the InterruptedException catch block empty; I have to either rethrow it or call thatThread.interrupt().
So how can I do this? I want an instance of the thread watchAutoClose inside that thread so I can call watchAutoClose.interrupt(). I tried Thread.currentThread(), but I fear that with so many threads, the currently executing thread might not be this one. (I.e., there is a possibility that the JVM switches to another thread by the time execution is inside the catch clause and calls Thread.currentThread(), so at that moment the current thread would be the other one and I would interrupt that other thread... correct me if I am worrying too much or my concept is totally wrong.)
Or should I ignore the warning altogether and leave the catch block empty?
First of all, it’s not clear why you think that waiting for a second is necessary at all. By the time the close() method returns, it has completed. On the other hand, if close() truly triggers some asynchronous action, there is no guarantee that waiting one second will be sufficient for its completion.
Further, addressing your literal question: Thread.currentThread() always returns the calling thread’s instance. It’s impossible for a thread to execute that method without being in the running state. When a task switch happens, the thread can’t read the reference at all until it gets CPU time again. Besides that, since the specification says that this method returns the Thread instance representing the caller, the environment has to ensure this property regardless of how it implements it. It works even when multiple threads call this method at truly the same time, on different CPU cores.
So, regardless of how questionable the approach of waiting a second is, handling interruption like
try {
    TimeUnit.SECONDS.sleep(1);
} catch (InterruptedException ignored) {
    Thread.currentThread().interrupt();
}
is a valid approach.
But you may also replace this code with
LockSupport.parkNanos(TimeUnit.SECONDS.toNanos(1));
The parkNanos method will return silently on interruption, leaving the calling thread in the interrupted state. So it has the same effect as catching the InterruptedException and restoring the interrupted state, but is simpler and potentially more efficient as no exception needs to be constructed, thrown, and caught.
Another point is that you are creating a polling loop on the atomic variable, which consumes CPU cycles even while the variable is false; this is discouraged even when you give the thread a low priority. A blocking hand-off is preferable, for example as sketched below.
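For completeness, a minimal sketch of such a blocking hand-off, using a CountDownLatch instead of the polled AtomicBoolean. The wiring below (the AutoCloseWatcher class and requestAutoClose()) is my own assumption rather than code from the question, and it drops the retry-if-close-failed branch for brevity:

import java.util.concurrent.CountDownLatch;

public class AutoCloseWatcher {
    private final CountDownLatch autoClose = new CountDownLatch(1);

    public void requestAutoClose() {
        autoClose.countDown();   // wakes the watcher exactly once, no polling needed
    }

    protected void watchAndClose() {
        Thread watchAutoClose = new Thread(() -> {
            try {
                autoClose.await();   // blocks without consuming CPU cycles
                close();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();   // restore the interrupted state
            }
        });
        watchAutoClose.setPriority(Thread.MIN_PRIORITY);
        watchAutoClose.start();
    }

    private void close() {
        // application shutdown logic goes here
    }
}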

FreeRTOS suspend task from another function

So I have a half-duplex bus driver, where I send something and then always have to wait a long time to get a response. During this wait time I want the processor to do something valuable, so I'm thinking about using FreeRTOS and vTaskDelay() or something similar.
One way to do it would be to split the driver up into separate send/receive parts. After sending, it returns to the caller. The caller then suspends, and does the reception part after a certain period of time.
But the level of abstraction would be nicer if it remained a single task from the user's point of view, as it is today. Therefore I was wondering: is it possible for a function within a task to suspend the task itself? Like
void someTask()
{
    while (true) {
        someFunction(&someTask(), arg1, arg2, ...);
        otherStuff();
    }
}

void someFunction(*someSortOfReferenceToWhateverTaskWhoCalled, arg1, arg2, ...)
{
    if (something)
    {
        /* Use the pointer or whatever to suspend the task that called this function */
    }
}
Have a look at the FreeRTOS API reference for vTaskSuspend, http://www.freertos.org/a00130.html
However, I am not sure you are going about controlling the flow of the program in the correct way. Tasks can be suspended on queues, events, delays, etc.
For example, in serial comms you might have a task that feeds data into a queue (but suspends if it is full) and an interrupt that takes data out of the queue and transmits it; or an interrupt putting data in a queue, or sending an event to a task to say there is data ready for it to process, so the task can then wake up and process the data or take it out of the queue.
One thing I think is important, though (in my opinion), is to have only one suspend point in any task. This is not a strict rule, but it will make your life a lot easier in most situations.
There are numerous other task-control mechanisms that are common to most RTOSes.
Have a good look around the FreeRTOS website and play with a few demos. There are also plenty of generic RTOS tutorials on the web. It is worth learning how to use the basic features of most RTOSes; it is actually not that complicated.

Serial Dispatch Queue with Asynchronous Blocks

Is there ever any reason to add blocks to a serial dispatch queue asynchronously as opposed to synchronously?
As I understand it, a serial dispatch queue only starts executing the next task in the queue once the preceding task has finished executing. If this is the case, I can't see what you would gain by submitting some blocks asynchronously - the act of submission may not block the thread (since it returns straight away), but the task won't be executed until the last task finishes, so it seems to me that you don't really gain anything.
This question has been prompted by the following code - taken from a book chapter on design patterns. To prevent the underlying data array from being modified simultaneously by two separate threads, all modification tasks are added to a serial dispatch queue. But note that returnToPool adds tasks to this queue asynchronously, whereas getFromPool adds its tasks synchronously.
class Pool<T> {
    private var data = [T]();
    // Create a serial dispatch queue
    private let arrayQ = dispatch_queue_create("arrayQ", DISPATCH_QUEUE_SERIAL);
    private let semaphore:dispatch_semaphore_t;

    init(items:[T]) {
        data.reserveCapacity(data.count);
        for item in items {
            data.append(item);
        }
        semaphore = dispatch_semaphore_create(items.count);
    }

    func getFromPool() -> T? {
        var result:T?;
        if (dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER) == 0) {
            dispatch_sync(arrayQ, {() in
                result = self.data.removeAtIndex(0);
            })
        }
        return result;
    }

    func returnToPool(item:T) {
        dispatch_async(arrayQ, {() in
            self.data.append(item);
            dispatch_semaphore_signal(self.semaphore);
        });
    }
}
Because there's no need to make the caller of returnToPool() block. It could perhaps continue on doing other useful work.
The thread which called returnToPool() is presumably not just working with this pool. It presumably has other stuff it could be doing. That stuff could be done simultaneously with the work in the asynchronously-submitted task.
Typical modern computers have multiple CPU cores, so a design like this improves the chances that CPU cores are utilized efficiently and useful work is completed sooner. The question isn't whether tasks submitted to the serial queue operate simultaneously — they can't because of the nature of serial queues — it's whether other work can be done simultaneously.
Yes, there are reasons why you'd add tasks to a serial queue asynchronously. It's actually extremely common.
The most common example would be when you're doing something in the background and want to update the UI. You'll often dispatch that UI update asynchronously back to the main queue (which is a serial queue). That way the background thread doesn't have to wait for the main thread to perform its UI update, but rather it can carry on processing in the background.
Another common example is, as you've demonstrated, when using a GCD queue to synchronize interaction with some object. If you're dealing with immutable objects, you can dispatch these updates asynchronously to this synchronization queue (i.e. why make the current thread wait when you can instead let it carry on). You'll do reads synchronously (because you're obviously going to wait until you get the synchronized value back), but writes can be done asynchronously.
(You actually see this latter example frequently implemented with the "reader-writer" pattern and a custom concurrent queue, where reads are performed synchronously on concurrent queue with dispatch_sync, but writes are performed asynchronously with barrier with dispatch_barrier_async. But the idea is equally applicable to serial queues, too.)
The choice of synchronous vs. asynchronous dispatch has nothing to do with whether the destination queue is serial or concurrent. It's simply a question of whether you have to block the current queue until the other one finishes its task or not.
Regarding your sample code, that is correct. getFromPool should dispatch synchronously (because you have to wait for the synchronization queue to actually return the value), but returnToPool can safely dispatch asynchronously. Obviously, I'm wary of seeing code waiting for semaphores if it might be called from the main thread (so make sure you don't call getFromPool from the main thread!), but with that one caveat, this code should achieve the desired purpose, offering reasonably efficient synchronization of this pool object, but with a getFromPool that will block if the pool is empty until something is added to the pool.

Distributed/Parallel computing using App Engine (Java API)

I want to use the master-slave (worker) paradigm to solve a problem. I have read that opening new threads manually (for example, using a thread pool) is not possible and that I need to use a task queue. Attached is a code example:
import com.google.appengine.api.taskqueue.DeferredTask;
import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import static com.google.appengine.api.taskqueue.TaskOptions.Builder.withPayload;

class MyDeferred implements DeferredTask {
    @Override
    public void run() {
        // Do something interesting
    }
}

MyDeferred task = new MyDeferred();
// Set instance variables etc. as you wish
Queue queue = QueueFactory.getDefaultQueue();
queue.add(withPayload(task));
How can I get the result of the workers (which were added to the queue)?
I need this info in order to solve the bigger problem.
Actually you can use threads on GAE, but there are limitations. If you need long-running threads you can use background threads, but this requires you to use backend instances.
If you opt to use the task queue, keep in mind that tasks do not "return" to the caller. To aggregate results you'll need to use the datastore.
You will have to write the results into the datastore.
Just as a starting point to think about it, you might pass a JobId as a parameter to the tasks, have each task write an entity with the result and the JobId, and then later query the datastore for the given JobId to get all the results; a rough sketch of this follows.
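A minimal sketch of that starting point, assuming the App Engine task queue and datastore Java APIs; the entity kind WorkerResult, the classes ResultWritingTask and JobResults, and the property names are invented for illustration:

import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.Entity;
import com.google.appengine.api.datastore.Query;
import com.google.appengine.api.taskqueue.DeferredTask;

class ResultWritingTask implements DeferredTask {
    private final String jobId;

    ResultWritingTask(String jobId) {
        this.jobId = jobId;
    }

    @Override
    public void run() {
        long result = 42L;   // hypothetical computation; replace with the real work
        // Write the result as an entity tagged with the JobId
        Entity e = new Entity("WorkerResult");
        e.setProperty("jobId", jobId);
        e.setProperty("value", result);
        DatastoreServiceFactory.getDatastoreService().put(e);
    }
}

class JobResults {
    // The master queries the datastore for every result written under the given JobId
    static Iterable<Entity> forJob(String jobId) {
        DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
        Query q = new Query("WorkerResult")
                .setFilter(new Query.FilterPredicate("jobId", Query.FilterOperator.EQUAL, jobId));
        return ds.prepare(q).asIterable();
    }
}

Each worker enqueued with withPayload(new ResultWritingTask(jobId)) writes one WorkerResult entity, and the master can later iterate JobResults.forJob(jobId) once it knows (for example by counting the entities) that all workers have finished.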

Passing a `Disposable` object safely to the UI thread with TPL

We recently adopted the TPL as the toolkit for running some heavy background tasks.
These tasks typically produce a single object that implements IDisposable. This is because it has some OS handles internally.
What I want is that the object produced by the background thread is properly disposed of at all times, even when the handover coincides with application shutdown.
After some thinking, I wrote this:
private void RunOnUiThread(Object data, Action<Object> action)
{
    var t = Task.Factory.StartNew(action, data, CancellationToken.None,
                                  TaskCreationOptions.None, _uiThreadScheduler);
    t.ContinueWith(delegate(Task task)
    {
        if (!task.IsCompleted)
        {
            DisposableObject.DisposeObject(task.AsyncState);
        }
    });
}
The background Task calls RunOnUiThread to pass its result to the UI thread. The task t is scheduled on the UI thread and takes ownership of the data passed in. I was expecting that if t could not be executed because the UI thread's message pump was shut down, the continuation would run, and I could see that the task had failed and dispose the object myself. DisposeObject() is a helper that checks whether the object is actually IDisposable and non-null prior to disposing it.
Sadly, it does not work. If I close the application after the background task t is created, the continuation is not executed.
I solved this problem before. At that time I was using the ThreadPool and the WPF Dispatcher to post messages on the UI thread. It wasn't very pretty, but in the end it worked. I was hoping that the TPL was better at this scenario. It would be even better if I could somehow teach the TPL that it should Dispose all leftover AsyncState objects if they implement IDisposable.
So, the code is mainly to illustrate the problem. I want to learn about any solution that allows me to safely handover Disposable objects to the UI thread from background Tasks, and preferably one with as little code as possible.
When a process closes, all of its kernel handles are automatically closed. You shouldn't need to worry about this:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms686722(v=vs.85).aspx
Have a look at the RX library. This may allow you to do what you want.
From MSDN:
IsCompleted will return true when the Task is in one of the three final states: RanToCompletion, Faulted, or Canceled.
In other words, your DisposableObject.DisposeObject will never be called, because the continuation will always be scheduled after one of the above conditions has taken place. I believe what you meant to do was:
t.ContinueWith(task => DisposableObject.DisposeObject(task.AsyncState),
               TaskContinuationOptions.NotOnRanToCompletion);
(BTW you could have simply captured the data variable rather than using the AsyncState property)
However I wouldn't use a continuation for something that you want to ensure happens at all times. I believe a try-finally block will be more fitting here:
private void RunOnUiThread2(Object data, Action<Object> action)
{
    var t = Task.Factory.StartNew(() =>
    {
        try
        {
            action(data);
        }
        finally
        {
            DisposableObject.DisposeObject(data);
            // Or use a new *foreground* thread if the disposing is heavy
        }
    }, CancellationToken.None, TaskCreationOptions.None, _uiThreadScheduler);
}
