Difference between cancel and uninterruptibleCancel (from the Async library) - multithreading

Context:
I'm trying to understand the difference between cancel and uninterruptibleCancel from the Control.Concurrent.Async package. I believe it has something to do with the underlying concepts of mask, uninterruptibleMask, and interruptible operations. Here's what I have understood so far:
Asynchronous exceptions are thrown by thread A but need to be handled by thread B. This is precisely what throwTo does. In a way, this can also be considered a form of inter-thread communication.
Asynchronous exceptions are used by one thread to kill/cancel another thread.
Handling asynchronous exceptions creates a problem in the target/receiving thread, because one usually doesn't expect exceptions to be raised at any random point in the code. One puts try/catch around certain operations and expects/handles only certain exceptions. But asynchronous exceptions can be delivered when the target thread is at any point in its execution.
mask allows us to protect critical sections in the target/receiving thread from delivery of asynchronous exceptions. The operation protected by mask doesn't need to deal with asynchronous exceptions until the point it calls restore.
At this point uninterruptibleMask comes into the picture, and I start losing the plot. I thought the whole point of mask was to NOT deliver asynchronous exceptions while executing a protected piece of code. However, here is what the docs say about "interruptible actions":
It is useful to think of mask not as a way to completely prevent asynchronous exceptions, but as a way to switch from asynchronous mode to polling mode. The main difficulty with asynchronous exceptions is that they normally can occur anywhere, but within a mask an asynchronous exception is only raised by operations that are interruptible (or call other interruptible operations). In many cases these operations may themselves raise exceptions, such as I/O errors, so the caller will usually be prepared to handle exceptions arising from the operation anyway. To perform an explicit poll for asynchronous exceptions inside mask, use allowInterrupt.
Questions:
Even within a code block protected by mask, if there are some points where it is safe to handle asynchronous exceptions, one can call allowInterrupt. This implicitly means that, unless allowInterrupt is called, asynchronous exceptions will NOT be delivered while executing masked code. What, then, is the purpose of uninterruptibleMask?
Consequently, what is the need for uninterruptibleCancel? IIUC, thread A is trying to cancel thread B, but thread A, itself, is trying to protect itself from some sort of asynchronous exceptions, which may possibly be initiated by a third thread C, right? In the code for cancel (given below), which part is so critical that it needs the ultimate form of protection from asynchronous exceptions? Isn't throwTo an atomic/masked operation itself? Further, even if an asynchronous exception is delivered to thread A while executing waitCatch, what difference does it make? Actually, if I think about it, why do we need to even mask this code in the first place (let alone uninterruptibleMask)?
cancel a@(Async t _) = throwTo t AsyncCancelled <* waitCatch a

Under no masking, asynchronous exceptions can happen wherever. Under mask, asynchronous exceptions can only appear from interruptible actions (which are generally blocking). Under uninterruptibleMask, asynchronous exceptions are completely out of the picture. Also, please note that allowInterrupt is just one of the interruptible actions; there are a ton more, e.g. takeMVar. With just mask, it is e.g. impossible to block on an MVar without opening yourself up to exceptions, but uninterruptibleMask lets you do it (though you shouldn't).
uninterruptibleCancel is useful because cancel waits for the target thread to finish. This is a blocking operation, so, as is convention, it is also interruptible. Thus, when you use cancel, you open yourself up to receiving unexpected exceptions, whether you are masked or not. When you use uninterruptibleCancel, you are 100% guaranteed to not get an exception. That's it. Remember that exceptions are non-local; even if nothing in cancel is critical, leaving it unprotected means an exception can leak into something that is.
mask $ do
  cancel something -- whoops, this can receive an exception, even though it's masked
  someCleanup -- therefore this might not get called

vs.

mask $ do
  uninterruptibleCancel something -- no exceptions
  someCleanup -- so this will definitely happen (assuming the target thread ends)
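This is also why cleanup handlers want the uninterruptible variant. A minimal sketch (withWorker is my own name for illustration, but this mirrors how withAsync is defined in the library): run an action alongside a worker and guarantee the worker is cancelled, and has actually finished, on the way out.

import Control.Concurrent.Async (Async, async, uninterruptibleCancel)
import Control.Exception (bracket)

-- bracket runs the release action under mask, but the waitCatch inside
-- plain cancel is interruptible; uninterruptibleCancel guarantees the
-- wait itself cannot be broken by another asynchronous exception.
withWorker :: IO a -> (Async a -> IO b) -> IO b
withWorker work = bracket (async work) uninterruptibleCancel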


Is AsyncReadExt::read_u64 cancel safe?

In the documentation for AsyncReadExt::read_u64 it says it has the same errors as AsyncReadExt::read_exact, but says nothing about cancellation safety.
The same holds for all the other read_<type> functions on AsyncReadExt.
It seems likely that they have the same cancellation safety as read_exact (that is, none), but is that true?
Is there another way to read the next 4 bytes in a cancel safe way?
There's stuff in Tokio that covers my use case at a higher level, but I'd like to know how I would do this myself.
No, it's not cancel safe.
While the implementations of read_exact and the read_* functions differ, they do exactly the same thing:
Poll the underlying AsyncRead into a buffer, propagating errors appropriately.
If the reader returns Poll::Pending, propagate that.
If the buffer is full, return Ok(()).
If the buffer isn't full, repeat the whole thing over again.
If the future is canceled after some bytes have been read, the reader is left in an unknown state, rendering these methods not cancel safe.
edit: making these methods cancel safe is difficult; the only way to do it is to rewrite them to do one of two things: when the future is dropped, somehow communicate the internal state to a listener on the outside, probably via a channel, or have the future run itself to completion when it's dropped. It would be preferable to rewrite the surrounding code to not depend on their cancel safety.
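A common workaround (a sketch of my own, not part of Tokio's API; names are illustrative) is to keep the partial-read state outside the future, so dropping the future between reads loses no bytes:

use tokio::io::{AsyncRead, AsyncReadExt};

struct U32Reader {
    buf: [u8; 4],
    filled: usize,
}

impl U32Reader {
    fn new() -> Self {
        Self { buf: [0; 4], filled: 0 }
    }

    // AsyncReadExt::read is documented as cancel safe, and our progress
    // lives in `self`, not in the future: if this future is dropped
    // mid-way, the next call resumes where the last one left off.
    async fn read_u32<R: AsyncRead + Unpin>(&mut self, r: &mut R) -> std::io::Result<u32> {
        while self.filled < 4 {
            let n = r.read(&mut self.buf[self.filled..]).await?;
            if n == 0 {
                return Err(std::io::ErrorKind::UnexpectedEof.into());
            }
            self.filled += n;
        }
        self.filled = 0; // reset so the reader can be reused
        Ok(u32::from_be_bytes(self.buf))
    }
}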

Blocking code and callbacks. How to transform blocking code into non-blocking code?

I feel the core of this question is not related to the specific language and library I am using, so I am using some pseudo-code. We can assume C as a language and a WinApi COM DLL.
Let's say I am using a dynamically linked external library which exposes some callbacks in response to some events. Say:
function RegisterCallback(ptr *callback);
Which is to be used as:
function OnEvent(type newValue) {
    ...
}
...
RegisterCallback(&OnEvent)
...
The library tells me that the callback should be non-blocking.
Now, suppose that I want to update an internal state in response to this event. This internal state is accessed by other threads and hence is guarded by a mutex. Thus I would like to write:
function OnEvent(type newValue) {
    mutexLock();
    internalState = newValue;
    mutexUnlock();
}
But this would be a potentially blocking operation. How should I proceed? The only solution I see is to use a different thread to update the state, like:
function OnEvent(type newValue) {
    sendChangeStateMessage(newValue);
}
But, again, in order for this call to be non-blocking, this "sending operation" should be buffered (that is, have a queue of messages), since sending (i.e. sharing data) across threads requires syncing, and hence locking.
EDIT: Of course if the operation is atomic (as might be for an integer) there is no such problem /EDIT
To wrap it up: how do you transform blocking code into non-blocking code?
Thanks
Locking a mutex is not always a blocking operation.
If the mutex is only used to protect access to that one variable, and if all other threads acquiring that mutex do not perform any blocking operations while they hold it, then locking it is not a blocking operation in any meaningful sense. A blocking operation is an operation that waits until something happens. In this case, the use of the mutex is unlikely to block unless, for instance, another thread locks the mutex and then waits for a network read while holding it.
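Concretely (a minimal C/pthreads sketch of my own; names are illustrative), the discipline is that every thread holds the lock only for the assignment or the read, never across anything that can block:

#include <pthread.h>

static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;
static int internal_state;

/* Callback: safe to call from the library's thread, because no thread
 * ever blocks while holding state_lock. */
void on_event(int new_value) {
    pthread_mutex_lock(&state_lock);
    internal_state = new_value; /* short critical section only */
    pthread_mutex_unlock(&state_lock);
}

/* Reader used by the other threads. */
int read_state(void) {
    pthread_mutex_lock(&state_lock);
    int v = internal_state;
    pthread_mutex_unlock(&state_lock);
    return v;
}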

Is OVERLAPPED.hEvent set when async function ends synchronously?

MSDN says that AcceptEx() may return TRUE, but I was never able to reproduce this.
If AcceptEx() returns TRUE, will hEvent be set?
Is it safe to call GetOverlappedResult() after AcceptEx() returns TRUE?
Is it the same for other functions like ReadFile()?
At least for ReadFile(socket) it's like that:
If ReadFile() succeeds or fails with ERROR_IO_PENDING, the event is set.
If the connection is closed before calling ReadFile(), it fails, and the event is not set.
From what I can tell from the MSDN pages on AcceptEx and the OVERLAPPED structure, when AcceptEx completes it should set the OVERLAPPED::hEvent handle to signalled.
From the MSDN page on OVERLAPPED:
A handle to the event that will be set to a signaled state by the system when the operation has completed. The user must initialize this member either to zero or a valid event handle using the CreateEvent function before passing this structure to any overlapped functions.
It's talking fairly broadly here, so I would say it is safe to assume this applies to all functions taking an OVERLAPPED structure.
If your AcceptEx never returns TRUE, it may be that you have a bug in your code. Unless you post actual code, it will be hard to tell what that might be.
In the same page on OVERLAPPED it says this about ReadFile
Functions such as ReadFile and WriteFile set this handle to the nonsignaled state before they begin an I/O operation. When the operation has completed, the handle is set to the signaled state.
With regards to calling GetOverlappedResult it also states specifically what to do:
Functions such as GetOverlappedResult and the synchronization wait functions reset auto-reset events to the nonsignaled state. Therefore, you should use a manual reset event; if you use an auto-reset event, your application can stop responding if you wait for the operation to complete and then call GetOverlappedResult with the bWait parameter set to TRUE.
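In code, that advice amounts to something like the following (a minimal Win32 sketch of my own; error handling elided):

#include <windows.h>

/* Prepare an OVERLAPPED with a manual-reset event, as the quoted
 * documentation recommends. bManualReset = TRUE means neither
 * GetOverlappedResult nor the wait functions will flip the event back
 * to nonsignaled behind your back. */
static BOOL init_overlapped(OVERLAPPED *ov)
{
    ZeroMemory(ov, sizeof(*ov));
    ov->hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
    return ov->hEvent != NULL;
}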
Like @HansPassant said in the comments, don't use it.

Correct design for Haskell exception handling

I'm currently trying to wrap my mind around the correct way to use exceptions in Haskell. How exceptions work is straight-forward enough; I'm trying to get a clear picture of the correct way to interpret them.
The basic position is that, in a well-designed application, exceptions shouldn't escape to the top-level. Any exception that does is clearly one which the designer did not anticipate - i.e., a program bug (e.g., divide by zero), rather than an unusual run-time occurrence (e.g., file not found).
To that end, I wrote a simple top-level exception handler that catches all exceptions and prints a message to stderr saying "this is a bug" (before rethrowing the exception to terminate the program).
However, suppose the user presses Ctrl+C. This causes an exception to be thrown. Clearly this is not any kind of program bug. However, failing to anticipate and react to a user abort such as this could be considered a bug. So perhaps the program should catch this and handle it appropriately, doing any necessary cleanup before exiting.
The thing is, though... The code that handles this is going to catch the exception, release any resources or whatever, and then rethrow the exception! So if the exception makes it to the top-level, that doesn't necessarily mean it was unhandled. It just means we wanted to exit quickly.
So, my question: Should exceptions be used for flow-control in this manner? Should every function that explicitly catches UserInterrupt use explicit flow-control constructs to exit manually rather than rethrow the exception? (But then how does the caller know to also exit?) Is it OK for UserInterrupt to reach the top-level? But in that case, is it OK for ThreadKilled too, by the same argument?
In short, should the interrupt handler make a special case for UserInterrupt (and possibly ThreadKilled)? What about a HeapOverflow or StackOverflow? Is that a bug? Or is that "circumstance beyond the program's control"?
Cleaning up in the presence of exceptions
However, failing to anticipate and react to a user abort such as this could be considered a bug. So perhaps the program should catch this and handle it appropriately, doing any necessary cleanup before exiting.
In some sense you are right — the programmer should anticipate exceptions. But not by catching them. Instead, you should use exception-safe functions, such as bracket.
For example:
import Control.Exception
data Resource
acquireResource :: IO Resource
releaseResource :: Resource -> IO ()
workWithResource = bracket acquireResource releaseResource $ \resource -> ...
This way the resources will be cleaned up regardless of whether the program will be aborted by Ctrl+C.
Should exceptions reach top level?
Now, I'd like to address another statement of yours:
The basic position is that, in a well-designed application, exceptions shouldn't escape to the top-level.
I would argue that, in a well-designed application, exceptions are a perfectly fine way to abort. If there are any problems with this, then you're doing something wrong (e.g. want to execute a cleanup action at the end of main — but that should be done in bracket!).
Here's what I often do in my programs:
Define a data type that represents any possible error — anything that might go wrong. Some of them often wrap other exceptions.
data ProgramError
  = InputFileNotFound FilePath IOException
  | ParseError FilePath String
  | ...
Define how to print errors in a user-friendly way:
instance Show ProgramError where
  show (InputFileNotFound path e) = printf "File '%s' could not be read: %s" path (show e)
  ...
Declare the type as an exception:
instance Exception ProgramError
Throw these exceptions in the program whenever I feel like it.
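For instance, wrapping an anticipated IOException at a call site might look like this (a sketch of my own, reusing the ProgramError type above):

import Control.Exception (catch, throwIO)

-- assumes the ProgramError type and its Exception instance from above
readInput :: FilePath -> IO String
readInput path =
  readFile path `catch` \e -> throwIO (InputFileNotFound path e)
  -- e is inferred as IOException from the constructor's field type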
Should I catch exceptions?
Exceptions that you anticipate must be caught and wrapped (e.g. in InputFileNotFound) to give them more context. What about the exceptions that you don't anticipate?
I can see some value in printing "it's a bug" to the users, so that they report the problem back to you. If you do this, you should anticipate UserInterrupt — it's not a bug, as you say. How you should treat ThreadKilled depends on your application — literally, whether you anticipate it!
This, however, is orthogonal to the "good design" and depends more on what kind of users you're targeting, what you expect of them and what they expect of your program.
The response may range from just printing the exception to a dialog that says "we're very sorry, would you like to submit a report to the developers?".
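Putting that together, a top-level handler along these lines (a sketch with my own naming; the exit code and message are arbitrary) anticipates normal exits and UserInterrupt, and reports everything else as a bug:

{-# LANGUAGE ScopedTypeVariables #-}
import Control.Exception
import System.Exit (ExitCode(..), exitWith)
import System.IO (hPutStrLn, stderr)

main :: IO ()
main = realMain `catches`
  [ Handler $ \(e :: ExitCode) -> throwIO e -- normal exits pass through
  , Handler $ \(e :: AsyncException) -> case e of
      UserInterrupt -> exitWith (ExitFailure 130) -- Ctrl+C: not a bug
      _ -> throwIO e -- ThreadKilled, StackOverflow, ...: rethrow
  , Handler $ \(e :: SomeException) -> do
      hPutStrLn stderr ("This is a bug, please report it: " ++ show e)
      throwIO e
  ]
  where realMain = putStrLn "the actual program" -- stand-in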
Should exceptions be used for flow-control in this manner?
Yes. I highly recommend you read Breaking from a loop, which shows how Either and EitherT are, at their core, nothing more than abstractions for exiting from a code block early. Exceptions are just a special case of this behavior where you exit because of an error, but there is no reason why that should be the only case in which you exit prematurely.
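In the Either monad, for example, Left short-circuits the rest of the block exactly like a throw (a small sketch of my own illustrating the linked post's idea):

import Control.Monad (when)

data Abort = Negative | TooBig deriving Show

checkedDouble :: Int -> Either Abort Int
checkedDouble x = do
  when (x < 0) (Left Negative) -- early exit, like throwing
  when (x > 99) (Left TooBig)
  Right (x * 2) -- reached only if neither check "throws"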

Clojure mutable storage types

I'm attempting to learn Clojure from the API and documentation available on the site. I'm a bit unclear about mutable storage in Clojure and I want to make sure my understanding is correct. Please let me know if there are any ideas that I've gotten wrong.
Edit: I'm updating this as I receive comments on its correctness.
Disclaimer: All of this information is informal and potentially wrong. Do not use this post for gaining an understanding of how Clojure works.
Vars always contain a root binding and possibly a per-thread binding. They are comparable to regular variables in imperative languages and are not suited for sharing information between threads. (thanks Arthur Ulfeldt)
Refs are locations shared between threads that support atomic transactions that can change the state of any number of refs in a single transaction. Transactions are committed upon exiting sync expressions (dosync) and conflicts are resolved automatically with STM magic (rollbacks, queues, waits, etc.)
Agents are locations that enable information to be asynchronously shared between threads with minimal overhead by dispatching independent action functions to change the agent's state. Dispatches return immediately and are therefore non-blocking, although an agent's value isn't updated until a dispatched function has completed.
Atoms are locations that can be synchronously shared between threads. They support safe manipulation between different threads.
Here's my friendly summary based on when to use these structures:
Vars are like regular old variables in imperative languages. (avoid when possible)
Atoms are like Vars but with thread-sharing safety that allows for immediate reading and safe setting. (thanks Martin)
An Agent is like an Atom but rather than blocking it spawns a new thread to calculate its value, only blocks if in the middle of changing a value, and can let other threads know that it's finished assigning.
Refs are shared locations that lock themselves in transactions. Instead of making the programmer decide what happens during race conditions for every piece of locked code, we just start up a transaction and let Clojure handle all the lock conditions between the refs in that transaction.
Also, a related concept is the function future. To me, it seems like a future object can be described as a synchronous Agent where the value can't be accessed at all until the calculation is completed. It can also be described as a non-blocking Atom. Are these accurate conceptions of future?
It sounds like you are really getting Clojure! Good job :)
Vars have a "root binding" visible in all threads and each individual thread can change the value it sees with out affecting the other threads. If my understanding is correct a var cannot exist in just one thread with out a root binding that is visible to all and it cant be "rebound" until it has been defined with (def ... ) the first time.
Refs are committed at the end of the (dosync ... ) transaction that encloses the changes but only when the transaction was able to finish in a consistent state.
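For example (illustrative; uses clojure.core/update, so Clojure 1.7+):

(def accounts (ref {:a 100 :b 0}))
(dosync
  (alter accounts update :a - 50)
  (alter accounts update :b + 50)) ; both changes commit atomically, or neither does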
I think your conclusion about Atoms is wrong:
Atoms are like Vars but with thread-sharing safety that blocks until the value has changed
Atoms are changed with swap! or, at a lower level, with compare-and-set!. This never blocks anything. swap! works like a transaction with just one ref:
1. the old value is taken from the atom and stored thread-locally
2. the function is applied to the old value to generate a new value
3. if this succeeds, compare-and-set! is called with the old and new values; only if the value of the atom has not been changed by any other thread (i.e. it still equals the old value) is the new value written, otherwise the operation restarts at (1) until it succeeds eventually.
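For example (illustrative):

(def counter (atom 0))
(swap! counter inc) ; read old value, apply inc, compare-and-set!, retry on conflict
@counter ; => 1 (reads never wait)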
I've found two issues with your question.
You say:
If an agent is accessed while an action is occurring then the value isn't returned until the action has finished
http://clojure.org/agents says:
the state of an Agent is always immediately available for reading by any thread
I.e. you never have to wait to get the value of an agent (I assume the value changed by an action is proxied and changed atomically).
The code for the deref-method of an Agent looks like this (SVN revision 1382):
public Object deref() throws Exception{
    if(errors != null)
    {
        throw new Exception("Agent has errors", (Exception) RT.first(errors));
    }
    return state;
}
No blocking is involved.
Also, I don't understand what you mean (in your Ref section) by
Transactions are committed on calls to deref
Transactions are committed when all actions of the dosync block have been completed, no exceptions have been thrown and nothing has caused the transaction to be retried. I think deref has nothing to do with it, but maybe I misunderstand your point.
Martin is right when he says that an atom's operation "restarts at (1) until it succeeds eventually".
This is also called spin waiting.
While it is not really blocking on a lock, the thread that performed the operation is blocked until the operation succeeds, so it is a blocking operation and not an asynchronous one.
Also, about futures: Clojure 1.1 added abstractions for promises and futures.
A promise is a synchronization construct that can be used to deliver a value from one thread to another. Until the value has been delivered, any attempt to dereference the promise will block.
(def a-promise (promise))
(deliver a-promise :fred)
Futures represent asynchronous computations. They are a way to get code to run in another thread, and obtain the result.
(def f (future (some-sexp)))
(deref f) ; blocks the thread that derefs f until value is available
Vars don't always have a root binding. It's legal to create a var without a binding using
(def x)
or
(declare x)
Attempting to evaluate x before it has a value will result in
Var user/x is unbound.
[Thrown class java.lang.IllegalStateException]
