In a previous SO question (Is it possible?: Behavior t [Behavior t a] -> Behavior t [a]) we were analyzing the existence of a Behavior join (to use reactive-banana terms).
Behavior t (Behavior t a) -> Behavior t a
Implemented in the semantic model as follows
type Behavior t a = t -> a
behaviorNow :: Behavior t (Behavior t a) -> Behavior t a
behaviorNow f t = f t t
Implementing this directly would presumably be unfortunate, since together with const it would give Behavior t a full Monad instance. So: does behaviorNow violate the semantics of FRP, and if so, how?
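Spelled out in the semantic model, the instance in question is just the reader monad for functions from t. A sketch (standalone functions rather than a class instance, since Behavior is a type synonym here):

returnB :: a -> Behavior t a
returnB = const

-- fmap on (t -> a) is function composition, so this is \t -> f (b t) t
bindB :: Behavior t a -> (a -> Behavior t b) -> Behavior t b
bindB b f = behaviorNow (fmap f b)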
I'd love to hear answers using the terminology of any other FRP system along with comparisons if meaningful.
In a poll-based FRP system, any behavior has a meaningful join:
the sample of join bb is the sample of the b obtained by sampling bb
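For instance, in a crude pull-based sketch where a behavior is nothing more than an action that samples its current value (an assumed representation for illustration, not any particular library's), the join is immediate:

-- A behavior is just "something we can sample right now".
newtype Poll a = Poll { sample :: IO a }

-- Sample the outer behavior to get the current inner behavior, then sample that.
joinPoll :: Poll (Poll a) -> Poll a
joinPoll bb = Poll (sample bb >>= sample)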
In push-based FRP, any behavior that is a step function composed with other step functions has a meaningful >>= and join. Pushing values through >>= can be described in imperative terms:

when the argument of the bind changes, evaluate the bind and:
    change the current step function to the returned step function
    change the value to the value of the current step function
when the value of the current step function changes, change the value
Providing a Monad instance may be slightly undesirable because it is likely to be chosen by preference by library users, even if it is less efficient. For example, the code in this unrelated answer performs more work when a computation was built with >>= than if it had been equivalently built with <*>.
Conal Elliott described in declarative terms a join for simultaneously pushing and polling values for behaviors built from step functions:
-- Reactive is a behavior that can only be a step function
data Reactive a = a `Stepper` Event a
newtype Event a = Ev (Future (Reactive a))
join :: Reactive (Reactive a) -> Reactive a
join ((a `Stepper` Ev ur) `Stepper` Ev urr) =
    ((`switcher` Ev urr) <$> ur) _+_ (join <$> urr)
switcher :: Reactive a -> Event (Reactive a) -> Reactive a
r `switcher` er = join (r `Stepper` er)
where Future is the type for a value we haven't seen yet, _+_ is the first of the two Future possibilities to occur, and <$> is infix fmap on Futures. [1]
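For intuition, here is one way to model those pieces. This is a deliberately simplified sketch (Conal's actual Future type is more refined, using an extended notion of time):

type Time = Double

-- A Future is a value together with the time at which we will first see it.
newtype Future a = Future (Time, a)

instance Functor Future where
    fmap f (Future (t, a)) = Future (t, f a)

-- The _+_ above: whichever future occurs first wins.
earliest :: Future a -> Future a -> Future a
earliest fa@(Future (ta, _)) fb@(Future (tb, _))
    | ta <= tb  = fa
    | otherwise = fb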
If we don't provide any other means of creating behaviors than
the constant function (which is trivially a step function)
a "stepper" that remembers the most recent value of an event
application of various combinators of behaviors where the combinators themselves aren't time-varying
then every behavior is a step function and we can use this or a similar Monad instance for behaviors.
Difficulties only arise when we want to have behaviors that are continuous or are a function of some time other than when an event occurred. Consider if we had the following
time :: Behavior t t
which is the behavior that tracks the current time. A Monad instance for polling the system would still be the same, but we can no longer push changes through the system reliably. What happens when we make something as simple as time >>= \x -> if am x then return 0 else return 1 (where am t is true for times in the morning)? Neither our definition of >>= above nor Elliott's join can admit the optimization of knowing when the time changes; it changes continuously. The best we could do for >>= is something like:
if we know that the argument to the bind is step valued then:
    when the argument of the bind changes, evaluate the bind and:
        change the current step function to the returned step function
        change the value to the value of the current step function
    when the value of the current step function changes, change the value
otherwise:
    return an abstract syntax tree for this >>=
For the join form, we would be reduced to doing something similar, simply recording the AST whenever the outer behavior in a join isn't a step function.
Additionally, anything built using this as an input could change at noon and midnight, whether or not any other event was raised. It would taint everything from that point on with the inability to reliably push events.
From an implementation point of view, our best option would seem to be to continuously poll time, and replace anywhere it was used with a stepper built from the polling events. This wouldn't update values between events, so now users of our library can't reliably poll values.
Our best choice for an implementation would be to keep an abstract syntax tree of what happened with arbitrary behaviors like these and provide no means to generate events from behaviors. Then behaviors can be polled, but no updates will ever be pushed for them. In that case, we might as well leave it out of the library, and let the user pass around ASTs (which they can get for Free), and let the user evaluate the entire AST every time it's polled. We can't optimize it any more for the library user, since any value like this can change continuously.
There is one final option, but it involves introducing quite a bit of complexity. Introduce the notion of predictability for properties of continuously varying values and computations of continuously varying values. This would allow us to provide a Monad interface for a larger subset of time-varying behaviors, but not for all of them. This complexity is already desirable in other parts of programs, but I don't know of any libraries outside symbolic math which attempt to address this.
(Author here.)
First note, that the behaviorNow function is the monadic join.
In reactive-banana-0.7, Behavior t is not a monad because that would have serious consequences for efficiency.
The first and most important problem is that behaviors can also be stateful. In conjunction with join, this would lead to time leaks. The main indication of problems is that the starting time t of the inner Behavior t is the same as the outer one. For instance, consider the program
e :: Event t Int
b :: Int -> Behavior t Int
b x = accumB 0 $ (x+) <$ e
bb :: Behavior t (Behavior t Int)
bb = stepper (pure 0) $ b <$> e
The behavior join bb would need to keep track of the whole history of the event e in order to perform the accumulation in the definition of b. In other words, the event e could never be garbage collected -- a time leak.
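For reference, accumB reads roughly as follows in the model types (a sketch following the [(Time, a)] semantics, with the phantom t parameter dropped; this is not the library's implementation). Sampling it folds over the entire prefix of the event stream, which is exactly the history that could never be freed:

type Time = Double
type Event a    = [(Time, a)]
type Behavior a = Time -> a

-- Accumulate all event occurrences up to the sampling time,
-- earliest occurrence applied first.
accumB :: a -> Event (a -> a) -> Behavior a
accumB x es = \t -> foldl (flip ($)) x [f | (t', f) <- es, t' <= t]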
A second problem is that internally, the implementation of Behavior t also includes an event that keeps track of when the behavior changes. However, a liberal use of the join combinator, for instance as implied by do notation, would lead to rather convoluted calculations to determine whether the behavior has changed or not. This is contrary to the reason for keeping track in the first place: efficiency by avoiding expensive calculations.
The Reactive.Banana.Switch module offers various combinators that are cousins of the join function, but avoid the first problem with cleverly chosen types. In particular:
The switchB function is the most direct analogue of join.
The AnyMoment Identity type is similar to the Behavior type, but without state and without keeping track of changes. Consequently, it has a monad instance.
I'm using Netwire to write a program that is driven by events from the network. I guess there are three questions here:
What makes Control.Wire.Unsafe.Event unsafe? Like the title says, what invariants do I need to maintain to use it safely?
I decided I need something like this: mapMaybeE :: Monad m => (a -> Maybe b) -> Wire s e m (Event a) (Event b). The context is I have messages coming in from the network and I want to respond to only some of them. This is what I wrote:
import Control.Wire
import qualified Control.Wire.Unsafe.Event as WU
import Prelude hiding ((.), id)

mapMaybeE :: Monad m => (a -> Maybe b) -> Wire s e m (Event a) (Event b)
mapMaybeE f = arr go . arr (fmap f)
  where
    go WU.NoEvent           = WU.NoEvent
    go (WU.Event Nothing)   = WU.NoEvent
    go (WU.Event (Just a))  = WU.Event a
Is that "legal"? Or am I supposed to inhibit if there's no event?
Does Netwire make sense for this sort of problem? All the examples I've seen are of games that loop continuously. Here, I only want to step the wires when there's something to be done. Mostly, that will be network events, but I might also want to do things on a timer. E.g. an event comes in, then five seconds later the program does something. It shouldn't have to loop continuously until the time in the session is five seconds greater than when the event came in.
For a lot of these answers, "correct" or "legal" depends on what you want your application to do. "Idiomatic" might be a more interesting question, but as the library author has passed away, it's difficult to answer these questions definitively. The following therefore only represents my experience and may not be correct:
The "unsafe" part of Control.Wire.Unsafe.Event is the idea that you will be working with discrete instances in time, and you may not necessarily preserve the continuous time semantics that your program expects. In particular, there's no difference (from a type perspective) between an event happening in a simulation with a time state (the s in Wire s e m a b) that's represented as an Integer vs a Float, so you have to be careful to make sure that what you're doing makes sense for your application. The included general purpose combinators don't have that risk in the sense that they work with any sensible definition of "time". From the documentation of data Event:
Denotes a stream of values, each together with time of occurrence. Since Event is commonly used for functional reactive programming it does not define most of the usual instances to protect continuous time and discrete event occurrence semantics.
From the README:
If you are a framework developer you can import the Control.Wire.Unsafe.Event module to implement your own events. A game engine may include events for key presses or certain things happening in the scene. However, as an application developer you should view this type as being opaque. This is necessary in order to protect continuous time semantics. You cannot access event values directly.
There's certainly nothing "illegal" about doing it that way. When you inhibit is totally dependent on your application. The key difference is that if you compose wires that inhibit, the inhibition "bubbles up" to the first wire that handles it (such as with an Alternative: (<|>)). Producing NoEvent is fine if there's no event. :) The behavior you're looking for might be better modeled using the existing combinators filterE and fmap from Control.Wire.Event though:
import Control.Wire
import Data.Maybe (fromJust, isJust)
import Prelude hiding ((.), id)

mapMaybeE :: Monad m => (a -> Maybe b) -> Wire s e m (Event a) (Event b)
mapMaybeE f = arr (fmap fromJust) . filterE isJust . arr (fmap f)
Yes, netwire makes sense for any problem where you have to simulate the state of a system that has time-dependent semantics. To expand,
It shouldn't have to loop continuously until the time in the session is five seconds greater than when the event came in.
Something is going to need to keep track of this timer, so you're not going to get around having a loop in one capacity or another. (Maybe you can have the operating system do it via a call to sleep or something, but internally there's still some loop somewhere...) netwire lets you model the behavior of your system explicitly and respond to these kinds of events: both network events and timer events. Because Haskell is lazy, if you compose wires such that "really complicated wire" depends on a timer, the result from "really complicated wire" will not be evaluated until the timer expires (see after).
I'm rather new to Elm, and I'm deeply attracted by the way Elm deals with GUIs. But after some thought, I find it hard to efficiently update just one element of a list or finger tree (like the finger tree in Haskell, if such a structure exists in Elm's library) that lives under a Signal and whose size also varies over time.
Specifically, to express a dynamic finger tree, we have to write
Signal [ {-the finger tree's element type-} ]
But if we want to update just one element of the finger tree efficiently, we have to write
Signal [ Signal {-the core data type-} ]
But in Elm, Signal is not a Monad, so how can the two layers of Signals be flattened into one?
Comment 1: I don't know in detail how Elm behaves in this situation. Reprocessing the whole finger tree is just my guess.
Comment 2: For example, suppose we have a signal value s of type Signal (FingerTree Int), and a function f that takes s as input, for example lift (fmap (+1)), whose type is Signal (FingerTree Int) -> Signal (FingerTree Int). If s has just one element changed, f has to redo the (+1) operation for every element of s. Obviously, that's a waste of time, and I'm not sure whether Elm is intelligent enough to exploit the immutability.
TL;DR: Implement your logic/data processing as pure functions and lift them to transform signals.
The trick is to write a function processList : [elementType] -> [elementType] that provides the logic you want (check if the third element is a wibbler and change it to a wobbler or whatever you wanted to do), and then use the lift function which has type
lift : (a -> b) -> Signal a -> Signal b
like lift processList mySignalThatProducesLists to edit the data produced by mySignalThatProducesLists using processList.
The main idea here is that you encode the logic and data processing as pure functions and then use them as Signal transformers using lift. It's like Elm automatically reapplies the function every time the data in the source signal is updated.
If you're used to programming in Haskell, you can think of Signal a as a newtype wrapper around Time -> a for some opaque Time type, and lift as fmap. You don't need to have a monad to edit data.
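Spelled out in Haskell, that analogy looks like this (a sketch of the mental model, not Elm's implementation):

newtype Time = Time Double

-- The mental model: a signal is a time-varying value...
newtype Signal a = Signal (Time -> a)

-- ...and lift is just fmap for that functor.
lift :: (a -> b) -> Signal a -> Signal b
lift f (Signal g) = Signal (f . g)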
There are also lift2 and lift3 functions for lifting functions of more than one argument, so again to use a Haskell analogy, you essentially have the capabilities of an Applicative functor there.
The reason Signals don't form a Monad is that a Monad instance would impose implementation details that reduce efficiency. It may be that there's some equivalent of ArrowApply in there that I haven't spotted (which would give you expressive power equivalent to Monad), but I'm not sure there is.
I'm trying to understand the actual need for the reader and/or state monads. All the examples I've seen (including many on Stack Overflow as I've hunted for suitable examples, and in various books and blog articles) are of the form (pseudocode):
f = do
    foo <- ask
    do something with foo

g = do
    foo <- ask
    do something else using foo

h = runReader (do
    f
    g)
In other words, call two functions and (presumably) maintain some state from one call to the next. However, I don't find this example particularly convincing as (I think) I could just have f return some state and then pass that state on to g.
I'd love to see an example, using (say) a single integer as the state to be preserved, where rather than two sequential calls to f and then g from a central place, there's a call to f which internally calls g, with the changed state (in the state monad case) then available back in the main routine.
Most (well, actually all) of the examples I have seen spend a tremendous amount of time focusing on the definition of the monad and then show how to set up a single function call. To me, it's the ability to make nested calls with the state carried along for the ride that would demonstrate why it's useful.
Here's a non-trivial example of a stateful subroutine calling another stateful subroutine.
import Control.Monad.Trans.State

f :: State Int ()
f = do
    r <- g
    modify (+ r)

g :: State Int Int
g = do
    modify (+ 1)
    get

main :: IO ()
main = print (execState f 4)
In this example, the initial state begins at 4 and the stateful computation begins at f. f internally calls g, which increments the state to 5 and then returns the current state (still 5). This restores control to f, which binds the value 5 to r and then increments the current state by r, giving a final state of 10:
>>> main
10
Almost everything you can do with monads you can do without them. (Well, some are special like ST, STM, IO, etc., but that's a different story.) But:
they allow you to encapsulate many common patterns, like in this case stateful computations, and hide details or boiler-plate code that would be otherwise needed; and
there is a plethora of functions that work on any (or many) monads, which you can simply specialize to the particular monad you're using.
To give an example: Often one needs to have some kind of a generator that supplies unique names, like when generating code etc. This can be easily accomplished using the state monad: Each time newName is called, it outputs a new name and increments the internal state:
import Control.Monad.State
import Data.Tree
import qualified Data.Traversable as T
type NameGen = State Int
newName :: NameGen String
newName = state $ \i -> ("x" ++ show i, i + 1)
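A quick GHCi check of the generator (replicateM comes from Control.Monad, which Control.Monad.State re-exports):

>>> runState (replicateM 3 newName) 0
(["x0","x1","x2"],3)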
Now let's say we have a tree that has some missing values. We'd like to supply them with such generated names. Fortunately, there is a generic function mapM that allows us to traverse any traversable structure with any monad (without the monad abstraction, we wouldn't have this function). Now fixing the tree is easy. For each value we check if it's filled (then we just use return to lift it into the monad), and if not, supply a new name:
fillTree :: Tree (Maybe String) -> NameGen (Tree String)
fillTree = T.mapM (maybe newName return)
Just imagine implementing this function without monads, with explicit state - going manually through the tree and carrying the state around. The original idea would be completely lost in boilerplate code. Moreover, the function would be very specific to Tree and NameGen.
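For example, with a small tree whose labels are partly missing (the sample data here is made up):

sampleTree :: Tree (Maybe String)
sampleTree = Node (Just "root")
    [ Node Nothing []
    , Node (Just "leaf") [Node Nothing []] ]

-- mapM visits the root first, then the subtrees left to right, so the two
-- missing labels are filled in as "x0" and "x1" when we start the counter at 0:
filled :: Tree String
filled = evalState (fillTree sampleTree) 0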
But with monads, we can go even further. We could parametrize the name generator and construct even more generic function:
fillTreeM :: (Monad m) => m String -> Tree (Maybe String) -> m (Tree String)
fillTreeM gen = T.mapM (maybe gen return)
Note the first parameter m String. It's not a constant String value, it's a recipe for generating a new String within m, whenever it's needed.
Then the original one can be rewritten just as
fillTree' :: Tree (Maybe String) -> NameGen (Tree String)
fillTree' = fillTreeM newName
But now we can use the same function for many very different purposes. For example, use the Rand monad and supply randomly generated names.
Or, at some point we might decide that a tree with missing values is invalid. Then we just say that wherever we're asked for a new name, we instead abort the whole computation. This can be implemented just as
checkTree :: Tree (Maybe String) -> Maybe (Tree String)
checkTree = fillTreeM Nothing
where Nothing here is of type Maybe String, which, instead of trying to generate a new name, aborts the computation within the Maybe monad.
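A quick check of both outcomes (the tree values are made up for illustration):

-- checkTree (Node (Just "a") [Node (Just "b") []])  ==  Just (Node "a" [Node "b" []])
-- checkTree (Node (Just "a") [Node Nothing  []])    ==  Nothing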
This level of abstraction would be hardly possible without having the concept of monads.
I'm trying to understand the actual need for the reader and/or state monads.
There are many ways to understand monads in general, and these monads in particular. In this answer, I focus on one understanding of these monads which I believe might be helpful for the OP.
The reader and state monads provide library support for very simple usage patterns:
The reader monad provides support for passing arguments to functions.
The state monad provides support for getting results out of functions and passing them to other functions.
As the OP correctly figured out, there is no great need for library support for these things, which are already very easy in Haskell. So many Haskell programs could use a reader or state monad, but there's no point in doing so, so they don't.
So why would anyone ever use a reader or state monad? I know three important reasons:
Realistic programs contain many functions that call each other and pass information back and forth. Sometimes, many functions take arguments that are just passed on to other functions. The reader monad is a library for this "accept arguments and pass them on" pattern. The state monad is a library for the similar "accept arguments, pass them on, and pass the results back as my result" pattern.
In this situation, a benefit of using the reader or state monad is that the arguments get passed on automatically, and we can focus on the more interesting jobs of these functions. A cost is that we have to use monadic style (do notation etc.).
Realistic programs can use multiple monads at once. They need arguments that are passed on, arguments that are returned, error handling, nondeterminism, ...
In this situation, a benefit of using the reader or state monad transformer is that we can package all of these monads into a single monad transformer stack. We still need monadic style, but now we pay the cost once (use do notation everywhere) and get the benefit often (multiple monads in the transformer stack).
Some library functions work for arbitrary monads. For example, sequence :: Monad m => [m a] -> m [a] takes a list of monadic actions, runs all of them in sequence, and returns the collected results.
A benefit of using the reader or state (or whatever) monad is that we can use these very generic library functions that work for any monad.
Note that points 1 and 2 only show up in realistic, somewhat large programs. So it is hard to give a small example for this benefit of using monads. Point 3 shows up in small library functions, but is harder to understand, because these library functions are often very abstract.
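To make point 1 concrete anyway, here is a minimal sketch of the "accept arguments and pass them on" pattern with the reader monad (the Config type and all names are made up for this sketch):

import Control.Monad.Trans.Reader

-- A made-up shared environment.
data Config = Config { verbose :: Bool, userName :: String }

-- The inner function asks for the environment itself...
greeting :: Reader Config String
greeting = do
    cfg <- ask
    return (if verbose cfg then "Hello, " ++ userName cfg else "hi")

-- ...so the outer function never has to mention Config explicitly,
-- even though it calls a function that needs it.
report :: Reader Config String
report = do
    g <- greeting
    return (g ++ "!")

main :: IO ()
main = putStrLn (runReader report (Config True "world"))
-- prints "Hello, world!"

And for point 3, the generic sequence specialized to the state monad; each action increments a counter and returns the value it saw:

import Control.Monad.State

tick :: State Int Int
tick = do
    n <- get
    put (n + 1)
    return n

-- >>> runState (sequence [tick, tick, tick]) 0
-- ([0,1,2],3)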
We have code like this:
guiState :: Discrete GuiState
guiState = stepperD (GuiState []) $
    union (mkGuiState <$> changes model) evtAutoLayout
evtAutoLayout :: Event GuiState
evtAutoLayout = fmap fromJust . filterE isJust . fmap autoLayout $ changes guiState
You can see that evtAutoLayout feeds into guiState, which feeds back into evtAutoLayout, so there is a cycle there. This is deliberate: auto layout adjusts the GUI state until it reaches an equilibrium and then returns Nothing, and so it should stop the loop. A new model change can kick it off again, of course.
When we put this together, though, we run into an infinite loop on the compile function call. Even if autoLayout = Nothing, it still results in a stack overflow during compile.
If I remove the union call in guiState and take evtAutoLayout out of the picture...
guiState :: Discrete GuiState
guiState = stepperD (GuiState []) $ mkGuiState <$> changes model
it works fine.
Any suggestions?
The question
Does the reactive-banana library support recursively defined events?
has not one, but three answers. The short answers are: 1. generally no; 2. sometimes yes; 3. with a workaround, yes.
Here are the long answers.
The semantics of reactive-banana do not support defining an Event directly in terms of itself.
This is a decision that Conal Elliott made in his original FRP semantics and I've decided to stick to it. Its main benefit is that the semantics remain very simple; you can always think in terms of
type Behavior a = Time -> a
type Event a = [(Time,a)]
I have provided a module Reactive.Banana.Model that implements almost precisely this model; you can consult its source code for any questions concerning the semantics of reactive-banana. In particular, you can use it to reason about your example: a calculation with pen & paper or trying it in GHCi (with some mock data) will tell you that the value evtAutoLayout is equal to _|_, i.e. undefined.
The latter may be surprising, but as you wrote it, the example is indeed undefined: the GUI state only changes if an evtAutoLayout event happens, but it can only happen if you know whether the GUI state changes, which in turn, etc. You always need to break the strangulating feedback loop by inserting a small delay. Unfortunately, reactive-banana doesn't currently offer a way to insert small delays, mainly because I don't know how to describe small delays in terms of the [(Time,a)] model in a way that allows recursion. (But see answer 3.)
It is possible and encouraged to define an Event in terms of a Behavior that refers to the Event again. In other words, recursion is allowed as long as you go through a Behavior.
A simple example would be
import Reactive.Banana.Model
filterRising :: (FRP f, Ord a) => Event f a -> Event f a
filterRising eInput = eOutput
    where
    eOutput  = filterApply (greater <$> behavior) eInput
    behavior = stepper Nothing (Just <$> eOutput)

    greater Nothing  _ = True
    greater (Just x) y = x < y
example :: [(Time,Int)]
example = interpretTime filterRising $ zip [1..] [2,1,5,4,8,9,7]
-- example = [(1.0, 2),(3.0, 5),(5.0, 8),(6.0, 9)]
Given an event stream, the function filterRising returns only those events whose values are greater than the value of the previously returned event. This is hinted at in the documentation for the stepper function.
However, this is probably not the kind of recursion you desire.
Still, it is possible to insert small delays in reactive-banana, it's just not part of the core library and hence doesn't come with any guaranteed semantics. Also, you do need some support from your event loop to do that.
For instance, you can use a wxTimer to schedule an event to happen right after you've handled the current one. The Wave.hs example demonstrates the recursive use of a wxTimer with reactive-banana. I don't quite know what happens when you set the timer interval to 0, though; it might execute too early. You probably have to experiment a bit to find a good solution.
Hope that helps; feel free to ask for clarifications, examples, etc.
Disclosure: I'm the author of the reactive-banana library.
I have been researching FRP and found a bunch of different implementations. One model I have seen is one I will refer to as the 'Signal' representation. This essentially combines Events and Behaviours into one entity.
Firstly, a Signal is an object whose value is a Behaviour. Secondly, a Signal has an Event 'stream' that can be seen and operated on as a standard data structure (you can use 'each', 'map' and 'filter' etc. on the Signal to define how Events are reacted to). For example I can do this (where 'time' is a Signal representation of time):
time.each { t => print(t) } // every time there is a 'tick' the current time is printed
a = time * 5 //where 'a' would be a dynamic value always up to date with time
Is this representation of FRP correct, or are there any problems? Personally, I quite like the way this works and how simple it is to describe, but I'm not sure it's right.
Unfortunately, coalescing "event" and "behavior" into a single entity "signal" doesn't work so well.
Most signal-based FRP implementations that I know end up creating an additional "event"-like type along the lines of
type Event a = Signal (Maybe a)
So, the concept of events doesn't go away, and there is no real simplification. In fact, I would argue that the signal type is a semantic complification. Signals are only popular because they are easier to implement.
The main argument against signals is that they cannot represent continuous time behaviors, because they have to cater to the discrete events. In Conal Elliott's original vision, behaviors were simple continuous functions of time
type Behavior a = Time -> a
-- = function that takes the current time as parameter and returns
-- the corresponding value of type a
In contrast, signals are always discretized and usually associated with a fixed time step. (It is possible to implement both events and behaviors on top of a variable-time-step signal, but it's not a good abstraction by itself.) Compare this to an event stream
type Event a = [(Time,a)]
-- list of pairs of the form (current time, corresponding event value)
where the individual events don't necessarily occur in regularly spaced time intervals.
The argument for the distinction between behaviors and events is that their API is quite different. The main point is that they have different product types:
(Behavior a , Behavior b) = Behavior (a,b)
(Event a , Event b ) = Event (a :+: b)
In words: a pair of behaviors is the same as a behavior of pairs, but a pair of events is the same as an event from either component/channel. Another point is that there are two operations
(<*>) :: Behavior (a -> b) -> Behavior a -> Behavior b
apply :: Behavior (a -> b) -> Event a -> Event b
that have almost the same type, but quite different semantics. (The first updates the result when the first argument changes, while the second doesn't.)
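These distinctions can be spelled out directly in the model types above (a sketch):

import Data.List (sortOn)

type Time = Double
type Behavior a = Time -> a
type Event a    = [(Time, a)]

-- A pair of behaviors is the same as a behavior of pairs:
pairB :: (Behavior a, Behavior b) -> Behavior (a, b)
pairB (fa, fb) = \t -> (fa t, fb t)

-- A pair of event streams merges into one stream of per-channel occurrences:
pairE :: (Event a, Event b) -> Event (Either a b)
pairE (ea, eb) = sortOn fst (map (fmap Left) ea ++ map (fmap Right) eb)

-- apply samples the behavior only at the times the event occurs:
apply :: Behavior (a -> b) -> Event a -> Event b
apply bf ev = [ (t, bf t x) | (t, x) <- ev ]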
To summarize: signals can be used for implementing FRP and are valuable for experimenting with new implementation techniques, but behaviors and events are a better abstraction for people who just want to use FRP.
(Full Disclosure: I have implemented an FRP library called reactive-banana in Haskell.)