Animations
Now the traffic light is green, but we want it to switch to red every now and then. The CodeWorld API not only allows us to draw drawings, but also to run animations. What is an animation? It is a picture that changes over time, where time can conveniently be understood as the number of seconds since the start of the animation.
In an imperative language, one would probably have a getCurrentTime() function and call it from somewhere in the drawing-generating code. This is neither possible nor desirable in a pure functional language, as it would be a hidden side effect. Instead, the time is provided as a parameter.
So this code makes the traffic light switch every three seconds:
trafficController :: Double -> Picture
trafficController t
  | round (t/3) `mod` 2 == 0 = trafficLight True
  | otherwise                = trafficLight False

main :: IO ()
main = animationOf trafficController
Questions:
How can the function trafficController work without the t input in the main statement if it is defined (above) to take a time parameter?
What makes the t parameter increment all the time?
animationOf's type is (Double -> Picture) -> IO (). This means its argument has to have type Double -> Picture, i.e. it must be a function from Double to Picture. trafficController is a function with precisely this type. Note that trafficController t (for some Double t) isn't: it's a Picture!
The definition of animationOf can be found here. If you look into what it does with its argument, and then into the functions it calls, and so on, you will see that it ultimately calls its parameter (trafficController in this case) repeatedly, with different values of t. However, this requires tracing a few steps, and I wouldn't recommend it at this stage.
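If it helps, here is a rough sketch (not CodeWorld's actual code) of what such a driver could look like. The fakeAnimationOf below is made up for illustration: it "renders" frames as Strings instead of Pictures, and its only point is to show that the time argument is supplied by the driver, which calls your function again and again with ever larger values of t.

import Control.Concurrent (threadDelay)

-- A made-up, simplified driver, purely for illustration: it prints a frame,
-- waits a second, and calls the supplied function again with a later time.
fakeAnimationOf :: (Double -> String) -> IO ()
fakeAnimationOf frame = go 0
  where
    go t = do
      putStrLn (frame t)     -- "draw" the frame for time t
      threadDelay 1000000    -- wait roughly one second
      go (t + 1)             -- call frame again with a larger t

The real animationOf does essentially the same thing, except that it redraws a Picture on every clock tick instead of printing text.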
It's not clear to me what the difference is between makeLenses and makePrisms.
I'm aware that when we want to access a nested structure/data we use makeLenses, like this:
{-# LANGUAGE TemplateHaskell #-}
import Control.Lens

data Point = Point { _x :: Int, _y :: Int } deriving Show
data Test  = Test { _name :: String, _position :: Point } deriving Show

makeLenses ''Point
makeLenses ''Test
Then we can access or modify the components of Test or Point. For instance, we define a function:
modify :: Test -> Test
modify = over (position . x) (*8)
So we can have:
let t1 = Test {_name= "Me", _position = Point {_x = 3, _y = 8}}
then
modify t1
would be:
Test {_name = "Me", _position = Point {_x = 24, _y = 8}}
But, I don't know when and how to use makePrisms in the above example!
In order to understand this, you must understand types, type variables, algebraic data types (sum and product types) and you must also understand typeclasses and the Functor typeclass in particular. If you don't understand these things, bookmark this page, then go off and understand them, possibly with this resource that I helped work on for precisely this purpose, to illustrate and explain these basics: http://happylearnhaskelltutorial.com
So before we get to Prisms, you first need to know what a Lens is.
A Lens is often described as a functional getter/setter, but that's more about the implementation and so forth.
I'd like to try an experiment of description here with you now.
Say I have a page of very small text with some words on it. Now, I hand you a piece of cardboard the same size as that page, only it has a hole for a magnifying glass, set up to focus on a particular word. That word is in a specific place on that page.
So we have these two things: a page, and a magnifying glass "card" without a magnifying glass... if we put a glass in, it focusses on a particular place on the page. Now someone comes along with another page with different words on it, but in the same layout as the first page.
It's easy to imagine that you can take that card, put it on the new page, and it will focus on the different word in that same place.
As well as a magnifying glass, you also have a special "eraser/printer" glass that, when you put it into the card, allows you to erase and type words over that page.
So to now apply this to Lens, you can see with this example how if we take one piece of data (a page of text), a lens that fits that data (a card with a hole in it that matches the shape of the page) and a function that can "get" or "set" (or maybe other things) (that is, the magnifying glass or the eraser/printer glass, or other types of glasses), well, from this we can view the smaller piece of data (the word) in the larger piece of data (the page) to extract it... or perhaps we can set a different matching piece of data into the spot on that page...
lens :: Functor f => (s -> a) -> (s -> b -> t) -> (a -> f b) -> s -> f t
What does this function do? It creates a lens from some functions. Now that we have the framework above to understand what a lens is, we can understand this function. I say to you that the s variable stands for "state" and it corresponds to the type of the piece of paper that is the context that the lens will focus within. Next I can say the a type variable corresponds to the word on the page that the lens will focus on. What about b and t? They are the transformed a and s values if we decide to change the value of the a, and that change changes its type.
What is the Functor for, then? We'll find out in a while. First up, let's make a lens to make this practical. So, back to our lens function: its first argument is the "getter" function, and the second argument is the "setter" function (hence the types). It looks like there are more arguments after that, but because Haskell functions are curried, everything that remains is really the return type: a function (a -> f b) -> s -> f t, which is the lens itself. Let's make that lens now.
Let's say we have a list of values [(1,(5,9)), (2,(3,6))] and we'd like to make a lens that focusses on the second value in the second nested tuple. That's silly, though, because you could just use snd . snd, right? Yeah, you could, so it's not a great example, but all the better examples are more complex, so bear with me; we'll get to them. Besides, can your snd . snd function also SET, or have a function applied to it? No, I didn't think so! :) (Okay, okay, I know you could use fmap (const val) to set, but can it also change the type? Well, this line of thought is actually how you end up at Lens if you continue it to its logical conclusion, as Edward Kmett did; that's where the Functor comes in!)
sndOfSndLens = lens (snd.snd) (\(x,(y,z)) newZ -> (x,(y,newZ)))
So what did we get? We got a function of this type: sndOfSndLens :: Functor f => (a -> f t2) -> (t, (t1, a)) -> f (t, (t1, t2)).
So let's get our values out: map (view sndOfSndLens) [(1,(5,9)), (2,(3,6))] -> [9,6] Nice! That works... let's set new values in: map (set sndOfSndLens 2000) [(1,(5,9)), (2,(3,6))] -> [(1,(5,2000)),(2,(3,2000))] okay fine...
If it were just getters and setters, that would be boring, but there's also a function called over, which takes a lens and a transforming function and applies that transformation to the focus of the lens... so let's subtract 10 from each: map (over sndOfSndLens (flip (-) 10)) [(1,(5,9)), (2,(3,6))] -> [(1,(5,-1)),(2,(3,-4))] That's cool! I'll let you read the rest of the lens documentation to understand all the other functions, because they're deep, lenses compose, and you can do all sorts of other things with them, too.
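If you want to try those three examples in a compilable form, here is a small, self-contained sketch (assuming the lens package is installed; the RankNTypes pragma is only needed for the explicit Lens signature):

{-# LANGUAGE RankNTypes #-}
import Control.Lens

-- the lens from above: focus on the second element of the nested pair
sndOfSndLens :: Lens (p, (q, a)) (p, (q, b)) a b
sndOfSndLens = lens (snd . snd) (\(x, (y, _)) newZ -> (x, (y, newZ)))

main :: IO ()
main = do
  print (map (view sndOfSndLens) [(1,(5,9)), (2,(3,6))])               -- [9,6]
  print (map (set sndOfSndLens 2000) [(1,(5,9)), (2,(3,6))])           -- [(1,(5,2000)),(2,(3,2000))]
  print (map (over sndOfSndLens (flip (-) 10)) [(1,(5,9)), (2,(3,6))]) -- [(1,(5,-1)),(2,(3,-4))]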
Prism
I promised we'd get to Prism, and look, we're here... A Lens is a specific "optic" (while also, sometimes confusingly, referring to the entire set of optics in total, so watch out for that), and Prism is another optic, thankfully completely specific. If a Lens is an optic that works for focussing on a particular part of a product algebraic data type, then a Prism does the same thing for sum types. That's why we were able to make a lens that talked about pairs ((,)): because they're a product type... that is, they combine two types into one. Lens lets you focus on one piece, or a path through those pieces. By the way, the lens we created above can easily be defined by composing the more generic built-in lenses: _2 . _2. There are also operator versions of all the lens functions we were talking about. They look pretty crazy, but they have a logic to them. Read up about them!
Prism lets you focus on one path through a sum type. What's a good example? Well, let's think about the Either data type. It's Either a b, defined as data Either a b = Left a | Right b. There's a corresponding prism function that lets us build a Prism by hand, just as lens did above. What happens if we use the built-in _Left, which focusses on the left side of an Either, but we only have a Right 10 value? Let's see... but first, we should let you know that we can't use view any more because it might not work, so we need to use preview, which returns a value that might fail (sorry, spoilers!):
preview _Left (Left 10) -> Just 10 and then the Right one? preview _Left (Right 10) -> Nothing. Okay that's sweet.
The set function works fine, because it can fail silently if it doesn't make sense: set _Left 30 (Left 10) -> Left 30. What happens when it doesn't work? set _Right 30 (Left 10) -> Left 10 that's right, nothing.
Cool... hopefully this explains Lenses and Prisms. They're two really useful optics. The lens library is full of them, so I encourage you to explore it.
What about the original question, though?
The original question was about makeLenses and makePrisms. These are Template Haskell splices (that is, metaprogramming, similar to macros, but typed), and they let you automatically build lenses and/or prisms for your own data types. Hopefully it now makes more sense when you would pick one over the other and how they differ: makeLenses is for product types such as records, and makePrisms is for sum types. To really understand, you should read the documentation and look up all the other functions and optics that are possible.
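To make the contrast concrete, here is a small sketch (the Shape type is invented for the example): you reach for makeLenses with a record/product type like your Test, and for makePrisms with a sum type:

{-# LANGUAGE TemplateHaskell #-}
import Control.Lens

-- a sum type: a value is *either* a circle *or* a rectangle
data Shape = Circle Double | Rect Double Double deriving Show

makePrisms ''Shape   -- generates _Circle and _Rect

main :: IO ()
main = do
  print (preview _Circle (Circle 3))    -- Just 3.0
  print (preview _Circle (Rect 1 2))    -- Nothing
  print (over _Circle (*2) (Circle 3))  -- Circle 6.0

Your Point and Test types have no alternatives to choose between, which is why makePrisms has nothing useful to do for them; the choice between the two comes down to the shape of the data type.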
Welcome to the amazingly cool world of Lenses!
In my breakout implementation there are two main behaviors that describe the
game's main state:
paddlePosition :: Behavior t Point
ballPosition :: Behavior t Point
Both are implemented in terms of tickEvent :: Event t () which discretely
updates them.
The gameOverEvent :: Event t () filters all tick events where the ball
position is below the screen.
I would like to replace paddlePosition by a new behavior as soon as there
is a gameOverEvent, leaving the paddle in place, in pseudo code:
newPaddlePosition = \t -> case gameOverEvent of
  []             -> paddlePosition t
  ((t', ()) : _) -> paddlePosition t'
The first question is: How do I express newPaddlePosition using
reactive-banana?
The second question is a bit more vague: What is a good way to organize the whole program depending on whether the game is over or not? There are other considerations, like how to handle ballPosition, how to draw the game, and so on.
You are probably looking for dynamic event switching, in particular the switchB combinator.
See also this question.
Note that the pseudocode for your newPaddlePosition function does not make much sense: It says that when the gameOverEvent never occurs, then the new paddle position is equal to paddlePosition, otherwise it is constant and equal to the value of paddlePosition at the time that the event occurs. This is impossible to express in a causal FRP library. You probably mean something else.
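For what it's worth, the causal variant of what you probably want ("follow paddlePosition until the first gameOverEvent, then stay frozen at the value it had at that moment") can be sketched with switchB roughly like this. This is written against the current reactive-banana API (Behavior and Event without the t parameter) and is untested against your code:

import Reactive.Banana

type Point = (Double, Double)

-- follow the paddle until game over, then freeze it at its position at that moment
freezeOnGameOver :: MonadMoment m => Behavior Point -> Event () -> m (Behavior Point)
freezeOnGameOver paddlePosition gameOverEvent =
  switchB paddlePosition (pure <$> (paddlePosition <@ gameOverEvent))

Here paddlePosition <@ gameOverEvent samples the paddle position at the moment of game over, pure turns that sample into a constant behavior, and switchB switches over to it.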
My network looks like this:
ePhysics :: Event t ()
bPlayerForce :: Behavior t (Double,Double)
bPlayerPosition :: Behavior t (Double, Double)
ePhysics is fired repeatedly from a timer.
I'm having issues with defining bPlayerPosition. What I intend is that it will start at (0,0), and whenever ePhysics is fired, bPlayerPosition will be recalculated using bPlayerForce as a parameter.
The problem is that in order to specify an initial value I need to use accumB/stepper, but they only work with events, and I can't get the force value out of bPlayerForce, since only behaviors can read the values of other behaviors (with <*>).
The alternative would be just using <*> but with that I have no initial value, so it becomes a meaningless infinite recursion:
let bPlayerPosition = pure calcPosition <*> bPlayerForce <*> bPlayerPosition
I have 3 questions:
Is there a way of getting a value out of a behavior without <*>? Like, when reactimate-ing or mapping an event? Not being able to do this has been tripping me up constantly since the very start.
Is there a more functional/FRP way to do physics (in general, and specifically for this question)?
How can I solve the presented problem?
The apply combinator, also written <@>, and its variant <@ provide a way to sample a Behavior whenever an event happens.
Looks fine to me. The Animation.hs example does something similar. You have to approximate the integral by summing over discrete time steps, which are given by your ePhysics event.
Something along the lines of
let bPlayerPosition = stepper (0,0) $
      (calcPosition <$> bPlayerForce <*> bPlayerPosition) <@ ePhysics
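If you prefer to avoid the recursive let entirely, another way to express the same idea is to accumulate the position with accumB. The following is a rough sketch in the style of the current reactive-banana API (no t parameter), with a made-up calcPosition that takes the force and the old position to the new position:

import Reactive.Banana

-- made-up physics step: force -> old position -> new position
calcPosition :: (Double, Double) -> (Double, Double) -> (Double, Double)
calcPosition (fx, fy) (x, y) = (x + fx, y + fy)

-- start at (0,0); every time ePhysics fires, sample the current force
-- and use it to update the position
playerPosition :: MonadMoment m
               => Event () -> Behavior (Double, Double) -> m (Behavior (Double, Double))
playerPosition ePhysics bPlayerForce =
  accumB (0, 0) ((calcPosition <$> bPlayerForce) <@ ePhysics)

The behavior calcPosition <$> bPlayerForce always holds the "how to update the position right now" function, and <@ ePhysics samples it at every physics tick, which is exactly the Event (a -> a) that accumB needs.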
In a previous SO question (Is it possible?: Behavior t [Behavior t a] -> Behavior t [a]) we were analyzing the existence of a Behavior join (to use reactive-banana terms).
Behavior t (Behavior t a) -> Behavior t a
Implemented in the semantic model as follows
type Behavior t a = t -> a
behaviorNow :: Behavior t (Behavior t a) -> Behavior t a
behaviorNow f t = f t t
While implementing this directly would be unfortunate since we could produce a Behavior Monad using const and behaviorNow, if and how does behaviorNow violate the semantics of FRP?
I'd love to hear answers using the terminology of any other FRP system along with comparisons if meaningful.
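To spell out the premise above (that const and behaviorNow give a Monad): in the semantic model, Behavior t is just the reader monad, and behaviorNow is its join. A sketch, with a newtype and names used only for illustration, not taken from reactive-banana:

-- the semantic model only: a behavior is a function of time
newtype B t a = B { at :: t -> a }

instance Functor (B t) where
  fmap f (B g) = B (f . g)

instance Applicative (B t) where
  pure x      = B (const x)
  B f <*> B x = B (\t -> f t (x t))

instance Monad (B t) where
  B x >>= k   = B (\t -> at (k (x t)) t)

-- behaviorNow is exactly join for this monad
behaviorNow :: B t (B t a) -> B t a
behaviorNow (B f) = B (\t -> at (f t) t)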
In a poll-based FRP system, any behavior has a meaningful join:
the sample of join bb is the sample of the b obtained by sampling bb.
In push-based FRP, any behavior that is a step function composed with other step functions has a meaningful >>= and join. Pushing values through >>= can be described in imperative terms:
when the argument of the bind changes, evaluate the bind and
  change the current step function to the returned step function
  change the value to the value of the current step function
when the value of the current step function changes, change the value
Providing a Monad instance may be slightly undesirable because it is likely to be chosen by preference by library users, even if it is less efficient. For example, the code in this unrelated answer performs more work when a computation was built with >>= than if it had been equivalently built with <*>.
Conal Elliott described in declarative terms a join for simultaneously pushing and polling values for behaviors built from step functions:
-- Reactive is a behavior that can only be a step function
data Reactive a = a `Stepper` Event a
newtype Event a = Ev (Future (Reactive a))
join :: Reactive (Reactive a) -> Reactive a
join ((a `Stepper` Ev ur) `Stepper` Ev urr) =
  ((`switcher` Ev urr) <$> ur) _+_ (join <$> urr)
switcher :: Reactive a -> Event (Reactive a) -> Reactive a
r `switcher` er = join (r `Stepper` er)
where Future is the type for a value we haven't seen yet, _+_ is the first of the two Future possibilities to occur, and <$> is infix fmap on Futures. [1]
If we don't provide any other means of creating behaviors than
the constant function (which is trivially a step function)
a "stepper" that remembers the most recent value of an event
application of various combinators of behaviors where the combinators themselves aren't time-varying
then every behavior is a step function and we can use this or a similar Monad instance for behaviors.
Difficulties only arise when we want to have behaviors that are continuous or are a function of some time other than when an event occurred. Consider if we had the following
time :: Behavior t t
which is the behavior that tracks the current time. A Monad instance for polling the system would still be the same, but we can no longer push changes through the system reliably. What happens when we make something as simple as time >>= \x -> if am x then return 0 else return 1 (where am x is true for times in the morning)? Neither our definition of >>= above nor Elliott's join can admit the optimization of knowing when the time changes; it changes continuously. The best we could do for >>= is something like:
if we know that the argument to the bind is step valued then
  when the argument of the bind changes, evaluate the bind and
    change the current step function to the returned step function
    change the value to the value of the current step function
  when the value of the current step function changes, change the value
otherwise
  return an abstract syntax tree for this >>=
For the join form, we would be reduced to doing something similar, and simply record the AST in the instance that the outer behavior in a join isn't a step function.
Additionally, anything built using this as an input could change at noon and midnight, whether or not any other event was raised. It would taint everything from that point on with the inability to reliably push events.
From an implementation point of view, our best option would seem to be to continuously poll time, and replace anywhere it was used with a stepper built from the polling events. This wouldn't update values between events, so now users of our library can't reliably poll values.
Our best choice for an implementation would be to keep an abstract syntax tree of what happened with arbitrary behaviors like these and provide no means to generate events from behaviors. Then behaviors can be polled, but no updates will ever be pushed for them. In that case, we might as well leave it out of the library, and let the user pass around ASTs (which they can get for Free), and let the user evaluate the entire AST every time it's polled. We can't optimize it any more for the library user, since any value like this can change continuously.
There is one final option, but it involves introducing quite a bit of complexity. Introduce the notion of predictability for properties of continuously varying values and computations of continuously varying values. This would allow us to provide a Monad interface for a larger subset of time-varying behaviors, but not for all of them. This complexity is already desirable in other parts of programs, but I don't know of any libraries outside symbolic math which attempt to address this.
(Author here.)
First note that the behaviorNow function is the monadic join.
In reactive-banana-0.7, Behavior t is not a monad because that would have serious consequences for efficiency.
The first and most important problem is that behaviors can also be stateful. In conjunction with join, this would lead to time leaks. The main indication of problems is that the starting time t of the inner Behavior t is the same as the outer one. For instance, consider the program
e :: Event t Int
b :: Int -> Behavior t Int
b x = accumB 0 $ (x+) <$ e
bb :: Behavior t (Behavior t Int)
bb = stepper (pure 0) $ b <$> e
The behavior join bb would need to keep track of the whole history of the event e in order to perform the accumulation in the definition of b. In other words, the event e could never be garbage collected -- a time leak.
A second problem is that internally, the implementation of Behavior t also includes an event that keeps track of when the behavior changes. However, a liberal use of the join combinator, for instance as implied by do notation, would lead to rather convoluted calculations to determine whether the behavior has changed or not. This is contrary to the reason for keeping track in the first place: efficiency by avoiding expensive calculations.
The Reactive.Banana.Switch module offers various combinators that are cousins of the join function, but avoid the first problem with cleverly chosen types. In particular:
The switchB function is the most direct analogue of join.
The AnyMoment Identity type is similar to the Behavior type, but without state and without keeping track of changes. Consequently, it has a monad instance.
I was reading an interesting article about monads, 'pipes', and being a 'red pipe'. From what I can tell, it is saying that function parameters or classes are types of pipes, to make sure everything fits. Then it goes on about how a red pipe must always be a red pipe, and about purity. This part I didn't understand.
Why can't a 'red pipe' become a 'blue pipe'? Can a red pipe be converted to a plain pipe? What does this have to do with purity? And can I think of this as being something like class RedPipe: PlainPipe { /* same interface and implementation here */ }?
In Haskell, the type of a function tells you exactly what it does and, more importantly, what it does NOT do. This makes it easier to reason about Haskell code.
For example, if I have a function of the following type:
f :: Int -> Int
I know that it takes an Int as input and produces an Int as output, and it does nothing else. Importantly, I know it has no side effects.
If I have a function of type:
g :: Double -> State Int Double
... that function takes a Double as an argument and produces a way to produce a Double, but only if I allow it to consult or modify some Int state.
If I have a function of type:
h :: Int -> Maybe String
I know that this function takes an Int and might produce a String, or it might fail, producing nothing. Notice that neither of the previous two functions returned Maybe, meaning that they could NOT fail. Haskell does not permit failure (i.e. nullable values) by default.
If I have a function of type:
i :: String -> IO ()
I know that this function takes a String and can be run to produce side effects. Note that none of the previous functions had an IO in their type, meaning that they could not produce side effects. Haskell does not permit side effects by default. You must explicitly opt in to them in the type.
This means that we can look at the types of functions or values and immediately understand what features they are using. So, for example, if I see a Maybe in the type, I know there is a potential to fail, and if I don't, then I know there is no potential to fail. Similarly, if I see an IO in the type, then I know there is a potential for side effects, but if I see no IO in the type, then there is no potential for side effects.
In mainstream languages, you don't have this ability to selectively "opt-in" to features. All features are on by default, all the time, meaning that you must always check for null because you have no way to guarantee that some function didn't fail, and you must always run tests because you have no way to guarantee that some function didn't implicitly modify the state of your system.
Haskell lets you restrict the behaviors of functions to give you finer grained control over their "permissions", which makes it easier to scale to large programs without bugs and also makes it easier to read and understand Haskell code because the type enumerates the full range of the code's behaviors, so you never have to understand more than what the type permits.
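As a tiny illustration of that opt-in idea (the function names below are invented for the example): the first function cannot perform any side effects, because IO does not appear in its type, while the second explicitly opts in:

-- no IO in the type: this function cannot print, read files, or mutate anything
double :: Int -> Int
double x = x * 2

-- IO in the type: side effects are explicitly allowed here
doubleAndLog :: Int -> IO Int
doubleAndLog x = do
  putStrLn ("doubling " ++ show x)
  pure (x * 2)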
The author is using the terms "red pipe" and "blue pipe" as a metaphor. As far as I know, that's not widely used terminology. I think the key point of his article is that while the type safety provided by languages such as Java helps to catch some kinds of programmer mistakes, isolating pure from impure functions can catch even more. Using one of his examples:
square :: Double -> Double
This type signature tells me that
The function square doesn't have any side effects. It's not going to sneak off and update a database, or print something to the screen, or change some state data, or otherwise surprise me. (In Java, or C, for example, I would have to read the function and any functions it calls, or rely on the documentation, to know what the function does.)
Every time I call it with a particular value, say, square 5, I will get exactly the same result. This is called referential transparency, and it allows the compiler to do some optimisation because it knows that this value will never change, so it only needs to be calculated once.