Clojure, atoms and refs - multithreading

I have a couple of questions about refs and atoms, and about Clojure reference types in general, after reading Clojure Programming; most of my questions relate to that book.
First:
The book says of coordination: "A coordinated operation is one where multiple actors must cooperate in order to yield correct results." Does this mean that if I have three functions fn1, fn2, and fn3, and each of them does some operation that possibly changes the state of the reference (assuming each runs in its own thread), it happens in a synchronous way as a chained operation? Something like: the output of fn1 is the input of fn2, and so on.
Second:
I cannot understand the difference between refs and atoms. The book says refs are for coordinated synchronous operations and atoms are for uncoordinated synchronous operations. Each of them has its own example: the atom is operated on by multiple functions (one atom, two functions), and multiple refs are used with one function. The book doesn't give an example of why we shouldn't, or can't, do it the other way around.

Atoms allow multiple threads to apply transformations to a single value and guarantee that the transformations are atomic. swap! takes the atom and a function expecting the current value of the atom. The result of calling that function with the current value is stored in the atom. Multiple calls to swap! may interleave, but each call will run in isolation.
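A minimal sketch of that swap! pattern, using nothing beyond core Clojure (the counter name is just for illustration):

(def counter (atom 0))

;; swap! applies the function to the current value atomically;
;; if another thread changed the atom in the meantime, swap! retries.
(swap! counter inc)   ;=> 1
(swap! counter + 10)  ;=> 11
@counter              ;=> 11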
Refs allow multiple threads to update multiple values in a coordinated way. All updates to all refs inside a transaction will complete, or none will. You MUST write your code such that transaction retries are catered for. There are a few potential performance tweaks, if you can relax the ordering of operations, which MAY reduce the chance of a transaction retry (but don't guarantee it).

The difference is really simple.
Refs operate within a transaction (similar to database transactions). Imagine a banking system; you can represent an account as a ref.
To transfer money, you start a Clojure STM transaction via (dosync), subtract X amount of money from ref-1, and add that amount to ref-2.
If something goes wrong, then Clojure STM will restart the operation.
Imagine there were no transaction: you subtract X amount of money from ref-1, and before you add that amount to ref-2, something goes wrong in your system. Your customers will not be happy at all (if you aren't sued, anyway).
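A minimal sketch of such a transfer, assuming the accounts simply hold numbers (the account names are illustrative):

(def account-1 (ref 500))
(def account-2 (ref 300))

(defn transfer [from to amount]
  ;; Both alters succeed together or the whole transaction retries,
  ;; so the functions passed to alter must be pure.
  (dosync
    (alter from - amount)
    (alter to + amount)))

(transfer account-1 account-2 100)
;; @account-1 => 400, @account-2 => 400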
The Clojure STM is implemented using MVCC (multiversion concurrency control).
Atoms, on the other hand, don't need a transaction in place to operate. Atoms are convenient when there is no coordination, for example a counter tracking the total number of visits to a page in a web analytics system.
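For instance, a per-page visit counter might look like this (a sketch; the visits atom and record-visit! name are just for illustration):

(def visits (atom {}))

(defn record-visit! [page]
  ;; (fnil inc 0) treats a missing page as 0 before incrementing
  (swap! visits update page (fnil inc 0)))

(record-visit! "/home")
(record-visit! "/home")
(record-visit! "/about")
;; @visits => {"/home" 2, "/about" 1}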
Have a look at the Clojure documentation on Refs; it offers a lot of valuable information.

Related

How can an immutable data structure NOT be thread safe?

In a post called What is this thing you call "thread safe"?, Eric Lippert says:
Thread safety of immutable data structures is all about ensuring that use of the data across all operations is logically consistent, at the expense of the fact that you're looking at an immutable snapshot that might be out-of-date.
I thought the whole point of immutable data structures is that they do not change and therefore cannot be out of date, and that they are therefore intrinsically thread-safe.
What does Lippert mean here?
What does Lippert mean here?
I agree that the way I wrote that particular bit was not as clear as it could be.
Back in 2009 we were designing the data structures for Roslyn -- the "C# and VB compiler as a service" -- and therefore were considering how to do analysis in the IDE, a world where the code is almost never correct -- if the code were correct, why are you editing it? -- and where it can be changing several times a second as you type.
I thought the whole point of immutable data structures is that they do not change and therefore cannot be out of date, and that therefore they are intrinsically thread-safe.
It is the fact that they do not change that makes them possibly out of date. Consider a common scenario in the IDE:
using System;

class P
{
    static void Main()
    {
        Console.E
    }
}
We have immutable data structures which represent the world at the moment before you typed "E", and we have an immutable data structure which represents the edit you've just made -- striking the letter E -- and now a whole bunch of stuff happens.
The lexer, knowing that the previous lex state is immutable and matches the world before the "E" re-lexes the world just around the E, rather than re-lexing the entire token stream. Similarly, the parser works out what the new (ill-formed!) parse tree is for this edit. That creates a new immutable parse tree that is an edit of the old immutable parse tree, and then the real fun starts. The semantic analyzer tries to figure out what Console means and then what you could possibly mean by E so that it can do an IntelliSense dropdown centered on members of System.Console that begin with E, if there are any. (And we're also starting an error-reporting workflow since there are now many semantic and syntactic errors in the program.)
Now what happens if while we are working all that out on a background thread, you hit "backspace" and then "W"?
All that analysis, which is still in flight, will be correct, but it will be correct for Console.E and not Console.W. The analysis is out of date. It belongs to a world that is no longer relevant, and we have to start over again, analyzing the backspace and the W.
In short, it is perfectly safe to do analysis of immutable data structures on another thread, but stuff perhaps continues to happen on the UI thread that invalidates that work; this is one of the prices you pay for farming work on immutable data out to worker threads.
Remember, these invalidations can happen extremely quickly; we budgeted 30ms for a re-lex, re-parse and IntelliSense update because a fast typist can get in well over ten keystrokes per second. Having a lexer and parser that re-used the immutable state of past lexes and parses was a key part of this strategy, but you then have to plan for invalidations that throw away your current analysis just as quickly.
Incidentally, the mechanisms we needed to invent to efficiently track those invalidations were themselves quite interesting, and led to some insights into cancellation-based workflows -- but that's a subject for another day.
He means that you might be looking at a different snapshot than someone else. Consider how cons lists work: after adding another element to the head of a list, there are effectively two lists (snapshots). Both of them are immutable but not the same.
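In Clojure terms (the names are just for illustration):

(def old-list '(2 3))            ; the snapshot one thread is holding
(def new-list (cons 1 old-list)) ; a second snapshot that shares the tail

;; Both values are immutable, but a thread still holding old-list
;; sees (2 3) while another thread working with new-list sees (1 2 3);
;; the first thread's view is consistent, just possibly out of date.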

What constructs are not possible using Ponylang's lock-free model?

Ponylang is a new language that is lock-free and data-race-free. My impression is that to accomplish this, Ponylang takes the rule "if two threads can see the same object, then writes must prohibit any other operation by another thread" and uses a type system to enforce the various special cases. For example, there's a type descriptor that says "no other thread can see this object", one that says "this reference is read-only", and various others. Admittedly my understanding of this is quite poor, and Ponylang's documentation is short on examples.
My question is: are there operations possible with a lock-based language that aren't translatable into ponylang's type-based system at all? Also, are there such operations that are not translatable into efficient constructs in ponylang?
[...] are there operations possible with a lock-based language that aren't translatable into ponylang's type-based system at all?
The whole point of reference capabilities in Pony is to prevent you from doing things that are possible, and even trivial, in other languages, like sharing a list between two threads and adding elements to it concurrently. So yes, in languages like Java you can share data between threads in a way that is impossible in Pony.
Also, are there such operations that are not translatable into efficient constructs in ponylang?
If you're asking whether lock-based languages can be more efficient than Pony in some situations, then I think so. You can always construct a situation that benefits from N threads and one lock and is worse under the actor model, which forces you to pass information around in messages.
The point is not to see the actor model as superior in all cases. It's a different model of concurrency, and problems are solved differently. For example, to compute N values and accumulate the results in a list:
In a thread model you would:
create a thread pool,
create a thread-safe list,
create N tasks sharing the list, and
wait for the N tasks to finish.
In an actor model you would:
create an actor A waiting for N values,
create N actors B sharing the actor A, and
wait for A to produce a list.
Obviously, each task would add a value to the list and each actor B would send its value to actor A. Depending on how messages are passed between actors, it can be slower to send N values than to lock N times. Typically it will be slower, but on the other hand you will never get a list with an unexpected size.
I believe it can do anything that a shared-everything-plus-locks model can do. With just iso objects and consume, it is basically a pure message-passing system, which can do anything a lock-based system does; in the same way that Mach 3 can do anything Linux can.

Why does FRP consider time as a factor for values?

Behaviors are ubiquitously defined as "time-varying values"¹.
Why? Time being the dependency/parameter for varying values is very uncommon.
My intuition for FRP would be to have behaviors as event-varying values instead; that is much more common, much simpler, I'd wager a much more efficient idea, and extensible enough to support time too (via a tick event).
For instance, if you write a counter, you don't care about time/associated timestamps, you just care about the "Increase-button clicked" and "Decrease-button clicked" events.
If you write a game and want a position/force behavior, you just care about the WASD/arrow keys held events, etc. (unless you ban your players for moving to the left in the afternoon; how iniquitous!).
So: why is time a consideration at all? Why timestamps? Why do some libraries (e.g. reactive-banana, reactive) take it to the extent of having Future and Moment values? Why work with event streams instead of just responding to an event occurrence? All of this just seems to over-complicate a simple idea (event-varying/event-driven values); what's the gain? What problem are we solving here? (I'd love to also get a concrete example along with a wonderful explanation, if possible.)
¹ Behaviors have been defined this way here, here, here... and pretty much everywhere I've encountered them.
Behaviors differ from Events primarily in that a Behavior has a value right now while an Event only has a value whenever a new event comes in.
So what do we mean by "right now"? Technically all changes are implemented as push or pull semantics over event streams, so we can only possibly mean "the most recent value as of the last event of consequence for this Behavior". But that's a fairly hairy concept; in practice "now" is much simpler.
The reasoning for why "now" is simpler comes down to the API. Here are two examples from Reactive Banana.
Eventually an FRP system must always produce some kind of externally visible change. In Reactive Banana this is facilitated by the reactimate :: Event (IO ()) -> Moment () function, which consumes event streams. There is no way to have a Behavior trigger external changes; you always have to do something like reactimate (someBehavior <@ sampleTickEvent) to sample the behavior at concrete times.
Behaviors are Applicatives, unlike Events. Why? Well, let's assume Event were an Applicative and think about what happens when we have two event streams f and x and write f <*> x: since events all occur at different times, the chance of f and x being defined simultaneously is (almost certainly) 0. So f <*> x would always be the empty event stream, which is useless.
What you really want is for f <*> x to cache the most recent value of each and take their combined value "all of the time". That's a really confusing concept to talk about in terms of an event stream, so instead let's consider f and x as taking values for all points in time. Now f <*> x is also defined as taking values for all points in time. We've just invented Behaviors.
Because it was the simplest way I could think of to give a precise denotation (implementation-independent meaning) to the notion of behaviors, including the sorts of operations I wanted, including differentiation and integration, as well as tracking one or more other behaviors (including but not limited to user-generated behavior).
Why? Time being the dependency/parameter for varying values is very uncommon.
I suspect that you're confusing the construction (recipe) of a behavior with its meaning. For instance, a behavior might be constructed via a dependency on something like user input, possibly with additional synthetic transformation. So there's the recipe. The meaning, however, is simply a function of time, related to the time-function that is the user input. Note that by "function", I mean in the math sense of the word: a (deterministic) mapping from domain (time) to range (value), not in the sense that there's a purely programmatic description.
I've seen many questions asking why time matters and why continuous time. If you apply the simple discipline of giving a mathematical meaning in the style of denotational semantics (a simple and familiar style for functional programmers), the issues become much clearer.
If you really want to grok the essence of and thinking behind FRP, I recommend you read my answer to "Specification for a Functional Reactive Programming language" and follow pointers, including "What is Functional Reactive Programming".
Conal Elliott's Push-Pull FRP paper describes event-varying data, where the only points in time that are interesting are when events occur. Reactive, the event-varying data type, is the current value together with the next Event that will change it. An Event is a Future point in the event-varying Reactive data.
data Reactive a = a `Stepper` Event a
newtype Event a = Ev (Future (Reactive a))
The Future doesn't need to have a time associated with it; it just needs to represent the idea of a value that hasn't happened yet. In an impure language with events, for example, a future can be an event handle and a value. When the event occurs, you set the value and raise the handle.
Reactive a has a value for a at all points in time, so why would we need Behaviors? Let's make a simple game. In between when the user presses the WASD keys, the character, accelerated by the force applied, still moves on the screen. The character's position at different points in time is different, even though no event has occurred in the intervening time. This is what a Behavior describes - something that not only has a value at all points in time, but its value can be different at all points in time, even with no intervening events.
One way to describe Behaviors would be to repeat what we just stated. Behaviors are things that can change in-between events. In-between events they are time-varying values, or functions of time.
type Behavior a = Reactive (Time -> a)
We don't need Behavior; we could simply add events for clock ticks and write all of the logic for our entire game in terms of these tick events. This is undesirable to some developers, as the code declaring what our game is becomes intermingled with the code describing how it is implemented. Behaviors allow the developer to separate the description of the game in terms of time-varying values from the implementation of the engine that executes that description.

Managing a stateful computation system in Haskell

So, I have a system of stateful processors that are chained together. For example, a processor might output the average of its last 10 inputs. It requires state to calculate this average.
I would like to submit values to the system and get the outputs. I would also like to jump back and restore the state at any time in the past. I.e., I run 1000 values through the system; now I want to "move" the system back to exactly as it was after I had sent the 500th value through. Then I want to "replay" the system from that point again.
I also need to be able to persist the historical state to disk so I can restore the whole thing again some time in the future (and still have the move back and replay functions work). And of course, I need to do this with gigabytes of data, and have it be extremely fast :)
I had been approaching this using closures to hold state, but I'm wondering if it would make more sense to use a monad. I have only read through 3 or 4 analogies for monads, so I don't understand them well yet; feel free to educate me.
My idea: each processor modifies its state in the monad in such a way that its history is kept, tied to an id for each processing step. Then the monad would somehow be able to switch its state to a past step id and run the system with the monad in that state. The monad would also have some mechanism for (de)serializing itself for storage.
(and given the size of the data... it really shouldn't even all be in memory, which would mean the monad would need to be mapped to disk, cached, etc...)
Is there an existing library/mechanism/approach/concept that has already been done to accomplish or assist in accomplishing what I'm trying to do?
So, I have a system of stateful processors that are chained together. For example, a processor might output the average of its last 10 inputs. It requires state to calculate this average.
First of all, it sounds like what you have are not just "stateful processors" but something like finite-state machines and/or transducers. This is probably a good place to start for research.
I would like to submit values to the system, and get the outputs. I also would like to jump back and restore the state at any time in the past. Ie. I run 1000 values through the system. Now I want to "move" the system back to exactly as it was after I had sent the 500th value through. Then I want to "replay" the system from that point again.
The simplest approach here, of course, is to simply keep a log of all prior states. But since it sounds like you have a great deal of data, the storage needed could easily become prohibitive. I would recommend thinking about how you might construct your processors in a way that could avoid this, e.g.:
If a processor's state can be reconstructed easily from the states of its neighbors a few steps prior, you can avoid logging it directly
If a processor is easily reversible in some situations, you don't need to log those immediately; rewinding can be calculated directly, and logging can be done as periodic snapshots
If you can nail a processor down to a very small number of states, make sure to do so.
If a processor behaves in very predictable ways on certain kinds of input, you can record that as such--e.g., if it idles on numeric input below some cutoff, rather than logging each value just log "idled for N steps".
I also need to be able to persist the historical state to disk so I can restore the whole thing again some time in the future (and still have the move back and replay functions work). And of course, I need to do this with gigabytes of data, and have it be extremely fast :)
Explicit state is your friend. Functions are a convenient way to represent active state machines, but they can't be serialized in any simple way. You want a clean separation of a (basically static) network of processors vs. a series of internal states used by each processor to calculate the next step.
Is there an existing library/mechanism/approach/concept that has already been done to accomplish what I'm trying to do? Does the monad approach make sense? Are there other better/special approaches that would help it do this efficiently especially given the enormous amount of data I have to manage?
If most of your processors resemble finite state transducers, and you need to have processors that take inputs of various types and produce different types of outputs, what you probably want is actually something with a structure based on Arrows, which gives you an abstraction for things that compose "like functions" in some sense, e.g., connecting the input of one processor to the output of another.
Furthermore, as long as you avoid the ArrowApply class and make sure that your state machine model only returns an output value and a new state, you'll be guaranteed to avoid implicit state because (unlike functions) Arrows aren't automatically higher-order.
Given the size of the data... it really shouldn't even all be in memory, which would mean the monad would need to be mapped to disk, cached, etc...
Given a static representation of your processor network, it shouldn't be too difficult to also provide an incremental input/output system that would read the data, serialize/deserialize the state, and write any output.
As a quick, rough starting point, here's an example of probably the simplest version of what I've outlined above, ignoring the logging issue for the moment:
data Transducer s a b = Transducer { curState :: s
                                   , runStep  :: s -> a -> (s, b)
                                   }

runTransducer :: Transducer s a b -> [a] -> (Transducer s a b, [b])
runTransducer t []     = (t, [])
runTransducer t (x:xs) = let (s, y)   = runStep t (curState t) x
                             (t', ys) = runTransducer (t { curState = s }) xs
                         in  (t', y:ys)
It's a simple, generic processor with explicit internal state of type s, input of type a, and output of type b. The runTransducer function shoves a list of inputs through, updating the state value manually, and collects the list of outputs along with the final transducer.
P.S. -- since you were asking about monads, you might want to know if the example I gave is one. In fact, it's a combination of multiple common monads, though which ones depends on how you look at it. However, I've deliberately avoided treating it as a monad! The thing is, monads capture only abstractions that are in some sense very powerful, but that same power also makes them more resistant in some ways to optimization and static analysis. The main thing that needs to be ruled out is processors that take other processors as input and run them, which (as you can imagine) can create convoluted logic that's nearly impossible to analyze.
So, while the processors probably could be monads, and in some logical sense intrinsically are, it may be more useful to pretend that they aren't, imposing an artificial limitation in order to make static analysis simpler.

Keeping State in a Purely Functional Language

I am trying to figure out how to do the following. Assume that you are working on a controller for a DC motor and you want to keep it spinning at a certain speed set by the user:
(def set-point (ref {:sp 90}))

(while true
  (let [curr (read-speed)]
    (controller @set-point curr)))
Now, that set-point can change at any time via a web application. I can't think of a way to do this without using a ref, so my question is: how do functional languages deal with this sort of thing? (Even though the example is in Clojure, I am interested in the general idea.)
This will not answer your question but I want to show how these things are done in Clojure. It might help someone reading this later so they don't think they have to read up on monads, reactive programming or other "complicated" subjects to use Clojure.
Clojure is not a purely functional language and in this case it might be a good idea to leave the pure functions aside for a moment and model the inherent state of the system with identities.
In Clojure, you would probably use one of the reference types. There are several to choose from, and knowing which one to use might be difficult. The good news is they all support the unified update model, so changing the reference type later should be pretty straightforward.
I've chosen an atom but depending on your requirements it might be more appropriate to use a ref or an agent.
The motor is an identity in your program. It is a "label" for some thing that has different values at different times and these values are related to each other (i.e., the speed of the motor). I have put a :validator on the atom to ensure that the speed never drops below zero.
(def motor (atom {:speed 0} :validator (comp not neg? :speed)))

(defn add-speed [n]
  (swap! motor update-in [:speed] + n))

(defn set-speed [n]
  (swap! motor update-in [:speed] (constantly n)))
> (add-speed 10)
> (add-speed -8)
> (add-speed -4) ;; This will not change the state of motor,
                 ;; since the speed would drop below zero and
                 ;; the validator does not allow that!
> (:speed @motor)
2
> (set-speed 12)
> (:speed @motor)
12
If you want to change the semantics of the motor identity you have at least two other reference types to choose from.
If you want to change the speed of the motor asynchronously you would use an agent; then you replace swap! with send. This would be useful if, for example, the clients adjusting the motor speed are different from the clients using it, so that it's fine for the speed to be changed "eventually".
Another option is to use a ref, which would be appropriate if the motor needs to be coordinated with other identities in your system. With this reference type you replace swap! with alter, and all state changes run in a transaction with dosync to ensure that all identities in the transaction are updated atomically.
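For illustration, here is roughly what the same kind of update looks like with those two reference types (a sketch; the motor-agent and motor-ref names are just for this example):

;; As an agent: the update is queued and applied asynchronously.
(def motor-agent (agent {:speed 0}))
(send motor-agent update-in [:speed] + 10)

;; As a ref: the update runs inside a transaction, possibly alongside
;; alters of other refs that must change together.
(def motor-ref (ref {:speed 0}))
(dosync
  (alter motor-ref update-in [:speed] + 10))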
Monads are not needed to model identities and state in Clojure!
For this answer, I'm going to interpret "a purely functional language" as meaning "an ML-style language that excludes side effects" which I will interpret in turn as meaning "Haskell" which I'll interpret as meaning "GHC". None of these are strictly true, but given that you're contrasting this with a Lisp derivative and that GHC is rather prominent, I'm guessing this will still get at the heart of your question.
As always, the answer in Haskell is a bit of sleight-of-hand where access to mutable data (or anything with side effects) is structured in such a way that the type system guarantees that it will "look" pure from the inside, while producing a final program that has side effects where expected. The usual business with monads is a large part of this, but the details don't really matter and mostly distract from the issue. In practice, it just means you have to be explicit about where side effects can occur and in what order, and you're not allowed to "cheat".
Mutability primitives are generally provided by the language runtime, and accessed through functions that produce values in some monad also provided by the runtime (often IO, sometimes more specialized ones). First, let's take a look at the Clojure example you provided: it uses ref, which is described in the documentation here:
While Vars ensure safe use of mutable storage locations via thread isolation, transactional references (Refs) ensure safe shared use of mutable storage locations via a software transactional memory (STM) system. Refs are bound to a single storage location for their lifetime, and only allow mutation of that location to occur within a transaction.
Amusingly, that whole paragraph translates pretty directly to GHC Haskell. I'm guessing that "Vars" are equivalent to Haskell's MVar, while "Refs" are almost certainly equivalent to TVar as found in the stm package.
So to translate the example to Haskell, we'll need a function that creates the TVar:
setPoint :: STM (TVar Int)
setPoint = newTVar 90
...and we can use it in code like this:
updateLoop :: IO ()
updateLoop = do tvSetPoint <- atomically setPoint
                sequence_ . repeat $ update tvSetPoint
  where update tv = do curSpeed <- readSpeed
                       curSet   <- atomically $ readTVar tv
                       controller curSet curSpeed
In actual use my code would be far more terse than that, but I've left things more verbose here in hopes of being less cryptic.
I suppose one could object that this code isn't pure and is using mutable state, but... so what? At some point a program is going to run and we'd like it to do input and output. The important thing is that we retain all the benefits of code being pure, even when using it to write code with mutable state. For instance, I've implemented an infinite loop of side effects using the repeat function; but repeat is still pure and behaves reliably and nothing I can do with it will change that.
A technique to tackle problems that apparently scream for mutability (like GUI or web applications) in a functional way is Functional Reactive Programming.
The pattern you need for this is called Monads. If you really want to get into functional programming you should try to understand what monads are used for and what they can do. As a starting point I would suggest this link.
As a short informal explanation for monads:
Monads can be seen as data plus context that is passed around in your program. This is the "space suit" often used in explanations. You pass data and context around together and insert operations into this monad. There is usually no way to get the data back out once it has been inserted into the context; you can only go the other way round, inserting operations so that they handle the data combined with the context. This way it almost seems as if you get the data out, but if you look closely you never do.
Depending on your application, the context can be almost anything: a data structure that combines multiple entities, exceptions, optionals, or the real world (I/O monads). In the paper linked above, the context will be execution states of an algorithm, so this is quite similar to the things you have in mind.
In Erlang you could use a process to hold the value. Something like this:
holdVar(SomeVar) ->
    receive                          %% wait for a message
        {From, get} ->               %% if you receive a get
            From ! {value, SomeVar}, %% respond with SomeVar
            holdVar(SomeVar);        %% recursively call holdVar
                                     %% to start listening again
        {From, {set, SomeNewVar}} -> %% if you receive a set
            From ! {ok},             %% respond with ok
            holdVar(SomeNewVar)      %% recursively call holdVar with
                                     %% the SomeNewVar that you received
                                     %% in the message
    end.
