Can reactive-banana handle cycles in the network?

We have code like this:
guiState :: Discrete GuiState
guiState = stepperD (GuiState []) $
    union (mkGuiState <$> changes model) evtAutoLayout

evtAutoLayout :: Event GuiState
evtAutoLayout = fmap fromJust . filterE isJust . fmap autoLayout $ changes guiState
You can see that evtAutoLayout feeds into guiState which feeds into
evtAutoLayout--so there is a cycle there. This is deliberate. Auto
layout adjusts the gui state until it reaches an equilibrium and then
it returns Nothing and so it should stop the loop. A new model change
can kick it off again, of course.
When we put this together, though, we run into an infinite loop in the call to compile. Even if autoLayout always returns Nothing, it still results in a stack overflow during compile.
If I remove the union call in guiState and take evtAutoLayout out of the picture...
guiState :: Discrete GuiState
guiState = stepperD (GuiState []) $ mkGuiState <$> changes model
it works fine.
Any suggestions?

The question
Does the reactive-banana library support recursively defined events?
has not just one, but three answers. The short answers are: 1. generally no; 2. sometimes yes; 3. with a workaround, yes.
Here are the long answers.
The semantics of reactive-banana do not support defining an Event directly in terms of itself.
This is a decision that Conal Elliott made in his original FRP semantics and I've decided to stick to it. Its main benefit is that the semantics remain very simple, you can always think in terms of
type Behavior a = Time -> a
type Event a = [(Time,a)]
I have provided a module Reactive.Banana.Model that implements almost precisely this model, you can consult its source code for any questions concerning the semantics of reactive-banana. In particular, you can use it to reason about your example: a calculation with pen & paper or trying it in GHCi (with some mock data) will tell you that the value evtAutoLayout is equal to _|_, i.e. undefined.
The latter may be surprising, but as you wrote it, the example is indeed undefined: the GUI state only changes if an evtAutoLayout event happens, but it can only happen if you know whether the GUI state changes, which in turn, etc. You always need to break the strangulating feedback loop by inserting a small delay. Unfortunately, reactive-banana doesn't currently offer a way to insert small delays, mainly because I don't know how to describe small delays in terms of the [(Time,a)] model in a way that allows recursion. (But see answer 3.)
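For concreteness, here is a tiny self-contained sketch of this reasoning in a list model (illustrative only; these names and types are not the real reactive-banana API). Asking for the first occurrence of the merged stream demands the first occurrence of the filtered stream, which in turn demands the merged stream again, so the whole thing is _|_:

type Time     = Double
type EventL a = [(Time, a)]          -- the [(Time, a)] model from above

modelChanges :: EventL Int
modelChanges = [(1, 0)]

guiChanges :: EventL Int             -- plays the role of the 'union' in guiState
guiChanges = merge modelChanges autoLayoutE

autoLayoutE :: EventL Int            -- plays the role of the filtered autoLayout events
autoLayoutE = [ (t, x + 1) | (t, x) <- guiChanges, x < 3 ]

-- To produce its first element, 'merge' must inspect the first element of
-- *both* streams, so evaluating 'take 1 guiChanges' never terminates.
merge :: EventL a -> EventL a -> EventL a
merge xs []  = xs
merge [] ys  = ys
merge (x@(tx, _) : xs) (y@(ty, _) : ys)
    | tx <= ty  = x : merge xs (y : ys)
    | otherwise = y : merge (x : xs) ys

Trying take 1 guiChanges in GHCi never returns, which is the list-model analogue of the stack overflow seen during compile.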
It is possible and encouraged to define an Event in terms of a Behavior that refers to the Event again. In other words, recursion is allowed as long as you go through a Behavior.
A simple example would be
import Reactive.Banana.Model

filterRising :: (FRP f, Ord a) => Event f a -> Event f a
filterRising eInput = eOutput
    where
    eOutput  = filterApply (greater <$> behavior) eInput
    behavior = stepper Nothing (Just <$> eOutput)

    greater Nothing  _ = True
    greater (Just x) y = x < y

example :: [(Time,Int)]
example = interpretTime filterRising $ zip [1..] [2,1,5,4,8,9,7]
-- example = [(1.0, 2),(3.0, 5),(5.0, 8),(6.0, 9)]
Given an event stream, the function filterRising returns only those events that are greater than the previously returned one. This is hinted at in the documentation for the stepper function.
However, this is probably not the kind of recursion you desire.
Still, it is possible to insert small delays in reactive-banana; it's just not part of the core library and hence doesn't come with any guaranteed semantics. Also, you do need some support from your event loop to do that.
For instance, you can use a wxTimer to schedule an event to happen right after you've handled the current one. The Wave.hs example demonstrates the recursive use of a wxTimer with reactive-banana. I don't quite know what happens when you set the timer interval to 0, though; it might execute too early. You probably have to experiment a bit to find a good solution.
Hope that helps; feel free to ask for clarifications, examples, etc.
Disclosure: I'm the author of the reactive-banana library.

Related

Real life and useful examples of Reverse State monad

The Reverse State monad is a really nice and mind-blowing example of the Haskell language's expressiveness and lazy evaluation. But it's not that easy to understand this monad. Moreover, it's really hard to find a convincing real-life example of something you can do more easily with the Reverse State monad than with any other tool in the language.
The Reverse State monad is defined as follows:
newtype RState s a = RState { runRState :: s -> (a,s) }

instance Monad (RState s) where
    return x = RState $ (,) x
    RState sf >>= f = RState $ \s ->
        let (a, past)   = sf future
            (b, future) = runRState (f a) s
        in  (b, past)
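(To see the backwards flow concretely, here is a minimal sketch; the Functor/Applicative instances that modern GHC requires, plus tiny get/modify helpers, are filled in here and are not part of the definition above.)

import Control.Monad (ap)

instance Functor (RState s) where
    fmap f (RState g) = RState $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (RState s) where
    pure a = RState $ \s -> (a, s)
    (<*>)  = ap

getR :: RState s s
getR = RState $ \s -> (s, s)

modifyR :: (s -> s) -> RState s ()
modifyR f = RState $ \s -> ((), f s)

demo :: (Int, Int)
demo = runRState act 5
  where
    act = do
        s <- getR            -- reads the state written *later* by modifyR
        modifyR (+1)
        return s
-- demo == (6, 6): the value bound by getR is the state produced by the
-- modifyR (+1) that textually follows it, because state flows backwards.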
There are already some examples and usages of it out there, but I don't find them very practical:
Quora answer: well explained, and it even has a real-life usage example, but there is no code and it's not clear whether using RState there is really a good idea.
Mindfuck: introduces this nice concept, but the example is not useful. Nobody will compute Fibonacci numbers this way.
Kwang's Haskell Blog: shows how Writer can be emulated with RState, but come on, that's not really a real-life example :)
I'm also aware of the tardis package, but there is no tutorial for this library, the documentation examples are really abstract, and not many people really understand it. The closest to what I want is this tutorial, but it has an example of tardis, not just RState. The same goes for this book reference.
Thus I'm not looking for real-life tardis patterns; I'm interested only in an RState illustration, if possible. I understand, though, that there might be no examples of pure RState usage. In that case, a minimal example with an RStateT transformer or tardis is good enough.
Has someone used this monad in real life, or do you have a really nice and useful illustration with code?
I have known about these monads for well over a decade now, and have only just recently seen a realistic application of them. It's in a bit of an unusual setting. A coworker and I are using functional reactive programming via the 'reflex' library, and are working on a library to help with building terminal-graphics applications. If you're familiar with 'reflex-dom', it's similar in nature, except that our basic monad, rather than putting subsequent widgets one after the other in the DOM, instead just stacks terminal character-cell-based "images" on top of each other, and it's up to the user to carve up the screen sensibly. We wanted to provide something a little nicer than this, which would keep track of remaining screen real-estate to some extent, and let the user place some "tiles" in rows and columns, such that a do-block basically corresponds to either a column or row of tiles on the screen.
In addition to handling the problem of layout, we also want the tiles to be able to manage keyboard focus, allowing the user to press tab to cycle through them, or shift-tab to go in reverse. It was here that the forwards-and-backwards-in-time state monad transformer became quite handy: we can have the current state in either direction be an Event (of an empty tuple). Each tile can send an event to the previous and next widgets (and receive an event from them), notifying widgets when they are receiving keyboard focus and so should stop blocking key presses from reaching their child widgets. So schematically, the tile widget looks something like:
do rec focusP <- recvFromPast
       sendToPast shiftTabPress
       tabPress <- gate focused $ ... filter input Event for Tab keypresses ...
       shiftTabPress <- gate focused $ ... filter input Event for Shift-Tab ...
       focused <- hold False $ leftmost
           [ True  <$ (focusP <> focusF)
           , False <$ (shiftTabPress <> tabPress) ]
       v <- ... run the child widget and do layout stuff ...
       sendToFuture tabPress
       focusF <- recvFromFuture
   return v
Here, sendToFuture is the ordinary state "put", sendToPast is the reverse-time "put", recvFromPast is the ordinary state "get", and recvFromFuture is the reverse-time "get". So focusP :: Event t () is an Event that we get from our predecessor (another tile like this one, probably) telling us that we have the focus now, and focusF is a similar Event we receive from our successor. We keep track of when we have the focus using a 'hold' to construct focused :: Behavior t Bool, which is then used to gate the keyboard events so that we're sure to tell our neighbours they're receiving focus only if we ourselves are focused, and is also used in the bit I elided where we're running the child widget, in order to filter its input events appropriately.
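As a rough guide, here is my guess at how those four helpers line up with the primitives of the tardis package (this is a sketch, not the library's actual code; the real monad additionally stacks TardisT over a reflex builder monad, which is elided here):

import Control.Monad.Fix (MonadFix)
import Control.Monad.Trans.Tardis (TardisT, getPast, getFuture, sendPast, sendFuture)

-- Hypothetical aliases for the names used in the schematic above.
recvFromPast :: MonadFix m => TardisT bw fw m fw
recvFromPast = getPast            -- read the forwards-travelling state

sendToFuture :: MonadFix m => fw -> TardisT bw fw m ()
sendToFuture = sendFuture         -- write the forwards-travelling state

recvFromFuture :: MonadFix m => TardisT bw fw m bw
recvFromFuture = getFuture        -- read the backwards-travelling state

sendToPast :: MonadFix m => bw -> TardisT bw fw m ()
sendToPast = sendPast             -- write the backwards-travelling state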
I'm not certain we're actually going to still be doing it this way by the time the library gets released, but it seems to work well thus far, and I was happy to have finally noticed a case in which this construction could be put to practical use.

Behavior that is both recursive, and depends on other behaviors

My network looks like this:
ePhysics :: Event t ()
bPlayerForce :: Behavior t (Double,Double)
bPlayerPosition :: Behavior t (Double, Double)
ePhysics is fired repeatedly from a timer.
I'm having issues with defining bPlayerPosition. What I intend is that it will start at (0,0), and whenever ePhysics is fired, bPlayerPosition will be recalculated using bPlayerForce as a parameter.
The problem is that in order to specify an initial value I need to use accumB/stepper, but they only work with events, and I can't get the force value from bPlayerForce since only behaviors can get the value of other behaviors (with <*>).
The alternative would be to just use <*>, but then I have no initial value, so it becomes a meaningless infinite recursion:
let bPlayerPosition = pure calcPosition <*> bPlayerForce <*> bPlayerPosition
I have 3 questions:
Is there a way of getting a value out of a behavior without <*>? Like when reactimate-ing or mapping an event? Not being able to do this has been tripping me up constantly since the very start.
Is there a more functional/FRP way to do physics (both in general and specifically for this question)?
How can I solve the presented problem?
The apply combinator, also written <@>, and its variant <@ provide a way to sample a Behavior whenever an event happens.
Looks fine to me. The Animation.hs example does something similar. You have to approximate the integral by summing over discrete time steps, which are given by your ePhysics event.
Something along the lines of
let bPlayerPosition = stepper (0,0) $
        (calcPosition <$> bPlayerForce <*> bPlayerPosition) <@ ePhysics
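If it helps, calcPosition could be something along the lines of this sketch. It is purely illustrative (unit mass, a fixed time step, velocity folded directly into the position update); your real function will likely differ:

-- Time step between two ePhysics firings, assumed fixed (e.g. a 60 Hz timer).
dt :: Double
dt = 1 / 60

-- One discrete integration step: nudge the position by the force.
-- (A real simulation would carry velocity separately; this is the crudest
-- possible approximation, just to show the shape of the function.)
calcPosition :: (Double, Double)      -- current force
             -> (Double, Double)      -- previous position
             -> (Double, Double)      -- new position
calcPosition (fx, fy) (px, py) = (px + fx * dt * dt, py + fy * dt * dt)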

"Behavior now" in FRP

In a previous SO question (Is it possible?: Behavior t [Behavior t a] -> Behavior t [a]) we were analyzing the existence of a Behavior join (to use reactive-banana terms).
Behavior t (Behavior t a) -> Behavior t a
Implemented in the semantic model as follows
type Behavior t a = t -> a
behaviorNow :: Behavior t (Behavior t a) -> Behavior t a
behaviorNow f t = f t t
While implementing this directly would be unfortunate (since we could produce a Behavior Monad using const and behaviorNow), if and how does behaviorNow violate the semantics of FRP?
I'd love to hear answers using the terminology of any other FRP system along with comparisons if meaningful.
In a poll-based FRP system, any behavior has a meaningful join:

    the sample of join bb is the sample of the b obtained by sampling bb

In push-based FRP, any behavior that is a step function composed with other step functions has a meaningful >>= and join. Pushing values through >>= can be described in imperative terms:

    when the argument of the bind changes, evaluate the bind and
        change the current step function to the returned step function
        change the value to the value of the current step function
    when the value of the current step function changes, change the value
Providing a Monad instance may be slightly undesirable because it is likely to be chosen by preference by library users, even if it is less efficient. For example, the code in this unrelated answer performs more work when a computation was built with >>= than if it had been equivalently built with <*>.
Conal Elliott described in declarative terms a join for simultaneously pushing and polling values for behaviors built from step functions:
-- Reactive is a behavior that can only be a step function
data Reactive a = a `Stepper` Event a
newtype Event a = Ev (Future (Reactive a))

join :: Reactive (Reactive a) -> Reactive a
join ((a `Stepper` Ev ur) `Stepper` Ev urr) =
    ((`switcher` Ev urr) <$> ur) _+_ (join <$> urr)

switcher :: Reactive a -> Event (Reactive a) -> Reactive a
r `switcher` er = join (r `Stepper` er)
where Future is the type for a value we haven't seen yet, _+_ is the first of the two Future possibilities to occur, and <$> is infix fmap on Futures. [1]
If we don't provide any other means of creating behaviors than:

    the constant function (which is trivially a step function),
    a "stepper" that remembers the most recent value of an event,
    application of various combinators of behaviors where the combinators themselves aren't time-varying,

then every behavior is a step function and we can use this or a similar Monad instance for behaviors.
Difficulties only arise when we want to have behaviors that are continuous or are a function of some time other than when an event occurred. Consider if we had the following
time :: Behavior t t
which is the behavior that tracks the current time. A Monad instance for polling the system would still be the same, but we can no longer push changes through the system reliably. What happens when we make something as simple as time >>= \x -> if am x then return 0 else return 1 (where am t is true for times in the morning)? Neither our definition of >>= above nor Elliott's join can admit the optimization of knowing when the time changes; it changes continuously. The best we could do for >>= is something like:
    if we know that the argument to the bind is step valued then
        when the argument of the bind changes, evaluate the bind and
            change the current step function to the returned step function
            change the value to the value of the current step function
        when the value of the current step function changes, change the value
    otherwise
        return an abstract syntax tree for this >>=
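For contrast, the example itself is perfectly well-defined in the purely poll-based t -> a model. The following sketch (with an illustrative am predicate standing in for the one above) shows that the difficulty is only about pushing changes, not about meaning:

import Data.Fixed (mod')

-- Semantic model only: a behavior is just a function of time.
type B a = Double -> a

time :: B Double
time = id

-- The "behaviorNow" join from the question, in the model.
joinB :: B (B a) -> B a
joinB bb t = bb t t

-- time >>= \x -> if am x then return 0 else return 1, written with joinB.
amFlag :: B Int
amFlag = joinB (fmap (\x -> if am x then const 0 else const 1) time)
  where am t = (t `mod'` 24) < 12   -- illustrative: "morning" = first 12 hours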
For the join form, we would be reduced to doing something similar, simply recording the AST in the case that the outer behavior in a join isn't a step function.
Additionally, anything built using this as an input could change at noon and midnight, whether or not any other event was raised. It would taint everything from that point on with the inability to reliably push events.
From an implementation point of view, our best option would seem to be to continuously poll time, and replace anywhere it was used with a stepper built from the polling events. This wouldn't update values between events, so now users of our library can't reliably poll values.
Our best choice for an implementation would be to keep an abstract syntax tree of what happened with arbitrary behaviors like these and provide no means to generate events from behaviors. Then behaviors can be polled, but no updates will ever be pushed for them. In that case, we might as well leave it out of the library, and let the user pass around ASTs (which they can get for Free), and let the user evaluate the entire AST every time it's polled. We can't optimize it any more for the library user, since any value like this can change continuously.
There is one final option, but it involves introducing quite a bit of complexity. Introduce the notion of predictability for properties of continuously varying values and computations of continuously varying values. This would allow us to provide a Monad interface for a larger subset of time-varying behaviors, but not for all of them. This complexity is already desirable in other parts of programs, but I don't know of any libraries outside symbolic math which attempt to address this.
(Author here.)
First, note that the behaviorNow function is the monadic join.
In reactive-banana-0.7, Behavior t is not a monad because that would have serious consequences for efficiency.
The first and most important problem is that behaviors can also be stateful. In conjunction with join, this would lead to time leaks. The main indication of problems is that the starting time t of the inner Behavior t is the same as the outer one. For instance, consider the program
e :: Event t Int

b :: Int -> Behavior t Int
b x = accumB 0 $ (x+) <$ e

bb :: Behavior t (Behavior t Int)
bb = stepper (pure 0) $ b <$> e
The behavior join bb would need to keep track of the whole history of the event e in order to perform the accumulation in the definition of b. In other words, the event e could never be garbage collected -- a time leak.
A second problem is that internally, the implementation of Behavior t also includes an event that keeps track of when the behavior changes. However, a liberal use of the join combinator, for instance as implied by do notation, would lead to rather convoluted calculations to determine whether the behavior has changed or not. This is contrary to the reason for keeping track in the first place: efficiency by avoiding expensive calculations.
The Reactive.Banana.Switch module offers various combinators that are cousins of the join function, but avoid the first problem with cleverly chosen types. In particular:
The switchB function is the most direct analogue of join.
The AnyMoment Identity type is similar to the Behavior type, but without state and without keeping track of changes. Consequently, it has a monad instance.

Program design in Haskell: how to do simulation without mutability

I have a question about the best way to design a program I'm working on in Haskell. I'm writing a physics simulator, which is something I've done a bunch in standard imperative languages, and usually the main method looks something like:
while True:
    simulationState = stepForward(simulationState)
    render(simulationState)
And I'm wondering how to do something similar in Haskell. I have a function step :: SimState -> SimState and a function display :: SimState -> IO () that uses HOpenGL to draw a simulation state, but I'm at a loss as to how to do this in a "loop" of sorts, as all of the solutions I can come up with involve some sort of mutability. I'm a bit of a noob when it comes to Haskell, so it's entirely possible that I'm missing a very obvious design decision. Also, if there's a better way to architect my program as a whole, I'd be glad to hear it.
Thanks in advance!
In my opinion, the right way to think about this problem is not as a loop, but as a list or other such infinite streaming structure. I gave a similar answer to a similar question; the basic idea is, as C. A. McCann wrote, to use iterate stepForward initialState, where iterate :: (a -> a) -> a -> [a] “returns an infinite list of repeated applications of [stepForward] to [initialState]”.
The problem with this approach is that you have trouble dealing with a monadic step, and in particular a monadic rendering function. One approach would just be to take the desired chunk of the list in advance (possibly with a function like takeWhile, possibly with manual recursion) and then mapM_ render on that. A better approach would be to use a different, intrinsically monadic, streaming structure. The four that I can think of are:
The iteratee package, which was originally designed for streaming IO. I think here, your steps would be a source (enumerator) and your rendering would be a sink (iteratee); you could then use a pipe (an enumeratee) to apply functions and/or do filtering in the middle.
The enumerator package, based on the same ideas; one might be cleaner than the other.
The newer pipes package, which bills itself as “iteratees done right”—it's newer, but the semantics are, at least to me, significantly clearer, as are the names (Producer, Consumer, and Pipe).
The List package, in particular its ListT monad transformer. This monad transformer is designed to allow you to create lists of monadic values with more useful structure than [m a]; for instance, working with infinite monadic lists becomes more manageable. The package also generalizes many functions on lists into a new type class. It provides an iterateM function twice; the first time in incredible generality, and the second time specialized to ListT. You can then use functions such as takeWhileM to do your filtering.
The big advantage to reifying your program’s iteration in some data structure, rather than simply using recursion, is that your program can then do useful things with control flow. Nothing too grandiose, of course, but for instance, it separates the “how to terminate” decision from the “how to generate” process. Now, the user (even if it's just you) can separately decide when to stop: after n steps? After the state satisfies a certain predicate? There's no reason to bog down your generating code with these decisions, as it's logically a separate concern.
Well, if drawing successive states is all you want to do, that's pretty simple. First, take your step function and the initial state and use the iterate function. iterate step initialState is then an (infinite) list of each simulation state. You can then map display over that to get IO actions to draw each state, so together you'd have something like this:
allStates :: [SimState]
allStates = iterate step initialState

displayedStates :: [IO ()]
displayedStates = fmap display allStates
The simplest way to run it would be to then use the intersperse function to put a "delay" action between each display action, then use the sequence_ function to run the whole thing:
main :: IO ()
main = sequence_ $ intersperse (delay 20) displayedStates
Of course that means you have to forcibly terminate the application and precludes any sort of interactivity, so it's not really a good way to do it in general.
A more sensible approach would be to interleave things like "seeing if the application should exit" at each step. You can do that with explicit recursion:
runLoop :: SimState -> IO ()
runLoop st = do display st
                isDone <- checkInput
                if isDone then return ()
                          else delay 20 >> runLoop (step st)
My preferred approach is to write non-recursive steps instead and then use a more abstract loop combinator. Unfortunately there's not really good support for doing it that way in the standard libraries, but it would look something like this:
runStep :: SimState -> IO SimState
runStep st = do display st
                delay 20
                return (step st)

runLoop :: SimState -> IO ()
runLoop initialState = iterUntilM_ checkInput runStep initialState
Implementing the iterUntilM_ function is left as an exercise for the reader, heh.
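(For completeness, one possible reading of that exercise; this is a sketch, the check-before-step ordering is a choice, and the names simply follow the usage above.)

-- Repeat a monadic step until a monadic predicate says we're done.
iterUntilM_ :: Monad m => m Bool -> (a -> m a) -> a -> m ()
iterUntilM_ check body = go
  where
    go st = do
        done <- check
        if done
            then return ()
            else body st >>= go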
Your approach is OK; you just need to remember that loops are expressed as recursion in Haskell:
simulation state = do
    let newState = stepForward state
    render newState
    simulation newState
(But you definitely need a criterion for when to end the loop.)

How can monads make my job easier? Show me some cool piece of code

I like reading snippets of code about concepts that I don't understand. Are there any snippets that show off monads in all their glory? More importantly, how can I apply monads to make my job easier?
I use jQuery heavily. That's one cool application of monads I know of.
Like others, I think the question is far too general. I think most answers (like mine) will give examples of something neat making use of one specific monad. The real power of monads is that, once you understand them as an abstraction, you can apply that knowledge to any new monads you come across (and in Haskell there are a lot). This in turn means you can easily figure out what new code does and how to use it because you already know the interface and some rules that govern its behavior.
Anyway, here's an example using the List monad from a test-running script I wrote:
runAll :: IO ()
runAll = do
curdir <- getCurrentDirectory
sequence $ runTest <$> srcSets <*> optExeFlags <*> optLibFlags
setCurrentDirectory curdir
Technically I'm using the Applicative interface, but you can just change the <*>'s to ap from Control.Monad if that bothers you.
The cool thing about this is that it calls runTest for every combination of arguments from the lists "srcSets", "optExeFlags", and "optLibFlags" in order to generate profiling data for each of those sets. I think this is much nicer than what I would have done in C (3 nested loops).
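To see at a glance why this enumerates every combination, here is a tiny stand-in example of the list Applicative (the real srcSets and flag lists are elided; these values are made up):

combos :: [(Int, String, Bool)]
combos = (,,) <$> [1, 2] <*> ["-O0", "-O2"] <*> [True]
-- combos == [(1,"-O0",True),(1,"-O2",True),(2,"-O0",True),(2,"-O2",True)]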
Your question is really vague -- it's like asking, "show an example of code that uses variables". It's so intrinsic to programming that any code is going to be an example. So, I'll just give you the most-recently-visited Haskell function that's still open in my editor, and explain why I used monadic control flow.
It's a code snippet from my xmonad config file. It is part of the implementation for a layout that behaves in a certain way when there is one window to manage, and in another way for more than one window. This function takes a message and generates a new layout. If we decide that there is no change to be made, however, we return Nothing:
handleMessage' :: AlmostFull a -> SomeMessage -> Int -> Maybe (AlmostFull a)
handleMessage' l@(AlmostFull ratio delta t) m winCount =
    case winCount of
        -- keep existing Tall layout, maybe update ratio
        0 -> finalize (maybeUpdateRatio $ fromMessage m) (Just t)
        1 -> finalize (maybeUpdateRatio $ fromMessage m) (Just t)
        -- keep existing ratio, maybe update Tall layout
        _ -> finalize (Just ratio) (pureMessage t m)
  where
    finalize :: Maybe Rational -> Maybe (Tall a) -> Maybe (AlmostFull a)
    finalize ratio t = ratio >>= \ratio -> t >>= \t ->
                       return $ AlmostFull ratio delta t

    maybeUpdateRatio :: Maybe Resize -> Maybe Rational
    maybeUpdateRatio (Just Shrink) = Just (max 0 $ ratio-delta)
    maybeUpdateRatio (Just Expand) = Just (min 1 $ ratio+delta)
    maybeUpdateRatio _             = Nothing
We decide what to return based on the current window manager state (which is determined by a computation in the X monad, whose result we pass to this function to keep the actual logic pure) -- if there are 0 or 1 windows, we pass the message to the AlmostFull layout and let it decide what to do. That's the maybeUpdateRatio function. It returns Just the new ratio if the message changes the ratio; otherwise it returns Nothing. The other half is similar; it passes the message on to Tall's handler if there are 2 or more windows. That returns Just a new Tall layout if that's what the user asked for; otherwise it returns Nothing.
The finalize function is the interesting part; it extracts both ratio (the desired new ratio) and t (the desired new Tall layout) from their Maybe wrappers. This means that both have to be not Nothing, otherwise we automatically return Nothing from our function.
The reason we used the Maybe monad here was so that we could write a function contingent on all results being available, without having to write any code to handle the cases where a Nothing appeared.
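In fact, that pattern is exactly what the Maybe Applicative captures; here is a small generic sketch (combineMaybe is just a name introduced for illustration, not part of the code above):

import Control.Applicative (liftA2)

-- Combine two Maybe inputs with a pure function, yielding Nothing if
-- either input is Nothing -- the same behaviour finalize relies on.
combineMaybe :: (a -> b -> c) -> Maybe a -> Maybe b -> Maybe c
combineMaybe = liftA2

-- e.g. finalize could be written:
--   finalize ratio t = combineMaybe (\r t' -> AlmostFull r delta t') ratio t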
Essentially, monads are "imperative minilanguages". Hence, they enable you to use any imperative construct like exceptions (Maybe), logging (Writer), Input/Output (IO), State (State), non-determinism (lists [a]), parsers (Parsec, ReadP) or combinations thereof.
For more advanced examples, have a look at the example code for my operational package. In particular,
WebSessionState.lhs implements web sessions that are programmed as if the server were a persistent process while they are in fact delivered asynchronously.
TicTacToe.hs shows a game engine where players and AI are written as if they were running in concurrent processes.
I've been looking into Haskell and information-flow security. This paper is pretty interesting; it uses monads to enforce confidentiality in Haskell programs.
http://www.cse.chalmers.se/~russo/seclib.htm
Here is something that I did recently that might show off some of the power of monads. The actual code is not shown here to protect the innocent; this is just a sketch.
Let's say you want to search through some dictionary and depending on what you find you want to do some other search. The searches might return Nothing (the element you are looking for doesn't exist) in which case you might try a different search, and if all searches fail you return Nothing.
The idea is to make our own monad by combining monad transformers, and then we can easily make some combinators for searches. Our monad will be ReaderT Dictionary Maybe. We define the function find, which looks up a given key; both, which returns the results of two searches appended together; and oneOf, which takes two searches, tries the first, and if it didn't succeed tries the second. Here is an example of such a search:
import Control.Monad
import Control.Monad.Reader

find a = ReaderT (lookup a)
both a b = liftM2 (++) a b
oneOf = mplus

search = both (find 1) ((find 2) `oneOf` (find 3))
             `oneOf` both (find 4) (find 5)
And running:
(runReaderT search) [(1,"a"),(3,"c"),(4,"d"),(5,"g")] --> Just "ac"
(runReaderT search) [(6,"a")] --> Nothing
The big advantage we gain from this being a monad is that we can bind searches together and lift other functions into this abstraction. Let's say for instance, I have two searches search_a and search_b, and I want to do them and then return them merged:
do a <- search_a
   b <- search_b
   return (merge a b)
or alternatively liftM2 merge search_a search_b.

Resources