Earlier I asked about translating monadic code to use only the applicative functor instance of Parsec. Unfortunately I got several replies which answered the question I literally asked, but didn't really give me much insight. So let me try this again...
Summarising my knowledge so far, an applicative functor is something which is somewhat more restricted than a monad. In the tradition of "less is more", restricting what the code can do increases the possibilities for crazy code manipulation. Regardless, a lot of people seem to believe that using applicative instead of monad is a superior solution where it's possible.
The Applicative class is defined in Control.Applicative, whose Haddock listing helpfully separates the class methods and utility functions with a vast swathe of class instances between them, making it hard to quickly see everything on screen at once. But the pertinent type signatures are:
pure  :: x -> f x
(<*>) :: f (x -> y) -> f x -> f y
(*>)  :: f x -> f y -> f y
(<*)  :: f x -> f y -> f x
(<$>) :: (x -> y) -> f x -> f y
(<$)  :: x -> f y -> f x
Makes perfect sense, right?
Well, Functor already gives us fmap, which is basically <$>. I.e., given a function from x to y, we can map an f x to an f y. Applicative adds two essentially new elements. One is pure, which has roughly the same type as return (and several other operators in various category theory classes). The other is <*>, which gives us the ability to take a container of functions and a container of inputs and produce a container of outputs.
Using the operators above, we can very neatly do something such as
foo <$> abc <*> def <*> ghi
This allows us to take an N-ary function and source its arguments from N functors in a way which generalises easily to any N.
This much I already understand. There are two main things which I do not yet understand.
First, the functions *>, <* and <$. From their types, <* = const, *> = flip const, and <$ could be something similar. Presumably this does not describe what these functions actually do though. (??!)
Second, when writing a Parsec parser, each parsable entity usually ends up looking something like this:
entity = do
  var1 <- parser1
  var2 <- parser2
  var3 <- parser3
  ...
  return $ foo var1 var2 var3...
Since an applicative functor does not allow us to bind intermediate results to variables in this way, I'm puzzled as to how to gather them up for the final stage. I haven't been able to wrap my mind around the idea fully enough in order to comprehend how to do this.
The <* and *> functions are very simple: *> works the same way as >>, and <* would work the same way as <<, except that << does not exist. Basically, given a *> b, you first "do" a, then you "do" b and return the result of b. For a <* b, you still first "do" a then "do" b, but you return the result of a. (For appropriate meanings of "do", of course.)
The <$ function is just fmap const. So a <$ b is equal to fmap (const a) b. You just throw away the result of an "action" and return a constant value instead. The Control.Monad function void, which has type Functor f => f a -> f (), could be written as (() <$).
These three functions are not fundamental to the definition of an applicative functor. (<$, in fact, works for any functor.) This, again, is just like >> for monads. I believe they're in the class to make it easier to optimize them for specific instances.
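For a concrete feel, here is how these operators behave for Maybe in GHCi (a quick check, nothing Parsec-specific):
GHCi> Just 1 <* Just 2
Just 1
GHCi> Just 1 *> Just 2
Just 2
GHCi> 'x' <$ Just 2
Just 'x'
GHCi> Just 1 <* Nothing
Nothing
Note that the "effect" of the second argument still matters: Just 1 <* Nothing is Nothing, not Just 1.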
When you use applicative functors, you do not "extract" the value from the functor. In a monad, this is what >>= does, and what foo <- ... desugars to. Instead, you pass the wrapped values into a function directly using <$> and <*>. So you could rewrite your example as:
foo <$> parser1 <*> parser2 <*> parser3 ...
If you want intermediate variables, you could just use a let statement:
let var1 = parser1
    var2 = parser2
    var3 = parser3
in  foo <$> var1 <*> var2 <*> var3
As you correctly surmised, pure is just another name for return. So, to make the shared structure more obvious, we can rewrite this as:
pure foo <*> parser1 <*> parser2 <*> parser3
I hope this clarifies things.
Now just a little note. People do recommend using applicative functor functions for parsing. However, you should only use them if they make more sense! For sufficiently complex things, the monad version (especially with do-notation) can actually be clearer. The reason people recommend this is that
foo <$> parser1 <*> parser2 <*> parser3
is both shorter and more readable than
do var1 <- parser1
   var2 <- parser2
   var3 <- parser3
   return $ foo var1 var2 var3
Essentially, f <$> a <*> b <*> c is lifted function application. You can imagine the <*> being a replacement for a space (i.e. function application) in the same way that fmap is a replacement for function application. This should also give you an intuitive notion of why we use <$>--it's like a lifted version of $.
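To make the comparison concrete, here is a small sketch using Text.Parsec; the Pair type and the pairM/pairA names are made up for illustration, and both parsers accept input like "width=42":
import Text.Parsec
import Text.Parsec.String (Parser)

data Pair = Pair String Int deriving Show

-- monadic style: bind each intermediate result to a name
pairM :: Parser Pair
pairM = do
  name <- many1 letter
  _    <- char '='
  n    <- many1 digit
  return (Pair name (read n))

-- applicative style: the same parser, with no intermediate names
pairA :: Parser Pair
pairA = Pair <$> many1 letter <*> (char '=' *> (read <$> many1 digit))
For example, parse pairA "" "width=42" gives Right (Pair "width" 42).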
I can make a few remarks here, hopefully helpful. This reflects my understanding which itself might be wrong.
pure is unusually named. Usually functions are named referring to what they produce, but in pure x it is x that is pure. pure x produces an applicative functor which "carries" the pure x. "Carries" of course is approximate. An example: pure 1 :: ZipList Int is a ZipList, carrying a pure Int value, 1.
<*>, *>, and <* are not functions, but methods (this answers your first concern). f in their types is not general (like it would be, for functions) but specific, as specified by a specific instance. That's why they are indeed not just $, flip const and const. The specialized type f specifies the semantics of combination. In the usual applicative style programming, combination means application. But with functors, an additional dimension is present, represented by the "carrier" type f. In f x, there is a "contents", x, but there is also a "context", f.
The "applicative functors" style sought to enable the "applicative style" programming, with effects. Effects being represented by functors, carriers, providers of context; "applicative" referring to the normal applicative style of functional application. Writing just f x to denote application was once a revolutionary idea. There was no need for additional syntax anymore, no (funcall f x), no CALL statements, none of this extra stuff - combination was application... Not so, with effects, seemingly - there was again that need for the special syntax, when programming with effects. The slain beast reappeared again.
So came the Applicative Programming with Effects to again make the combination mean just application - in the special (perhaps effectful) context, if they were indeed in such context. So for a :: f (t -> r) and b :: f t, the (almost plain) combination a <*> b is an application of carried contents (or types t -> r and t), in a given context (of type f).
The main distinction from monads is, monads are non-linear. In
do { x <- a
   ; y <- b x
   ; z <- c x y
   ; return (x, y, z) }
the computation b x depends on x, and c x y depends on both x and y. The functions are nested:
a >>= (\x -> b x >>= (\y -> c x y >>= (\z -> .... )))
If b and c do not depend on the previous results (x, y), this can be made flat by making the computation stages return repackaged, compound data (this addresses your second concern):
a >>= (\x -> b >>= (\y -> return (x,y)))        -- `b` does not use `x`
  >>= (\(x,y) -> c >>= (\z -> return (x,y,z)))  -- `c` does not use `x` or `y`
  >>= (\(x,y,z) -> ..... )
and this is essentially an applicative style (b, c are fully known in advance, independent of the value x produced by a, etc.). So when your combinations create data that encompass all the information they need for further combinations, and there's no need for "outer variables" (i.e. all computations are already fully known, independent of any values produced by any of the previous stages), you can use this style of combination.
But if your monadic chain has branches dependent on values of such "outer" variables (i.e. results of previous stages of monadic computation), then you can't make a linear chain out of it. It is essentially monadic then.
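A minimal sketch of such a dependent chain, using Maybe (the name and the lookup table are made up): the second step inspects the result of the first to decide what to do next, which <*> alone cannot express.
halveIfEven :: Int -> Maybe Int
halveIfEven key = do
  x <- lookup key table          -- first stage: may fail
  if even x                      -- branch on the value produced above
    then Just (x `div` 2)
    else Nothing
  where
    table = [(1, 10), (2, 7)]
For example, halveIfEven 1 is Just 5, halveIfEven 2 is Nothing (7 is odd), and halveIfEven 3 is Nothing (no such key).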
As an illustration, the first example from that paper shows how the "monadic" function
sequence :: [IO a] -> IO [a]
sequence []       = return []
sequence (c : cs) = do
  { x  <- c
  ; xs <- sequence cs       -- `sequence cs` fully known, independent of `x`
  ; return (x : xs) }
can actually be coded in this "flat, linear" style as
sequence :: (Applicative f) => [f a] -> f [a]
sequence [] = pure []
sequence (c : cs) = pure (:) <*> c <*> sequence cs
-- (:) x xs
There's no use here for the monad's ability to branch on previous results.
A note on the excellent Petr Pudlák's answer: in my "terminology" here, his pair is combination without application. It shows that the essence of what Applicative Functors add to plain Functors is the ability to combine. Application is then achieved by the good old fmap. This suggests combinatory functors as perhaps a better name (update: in fact, "Monoidal Functors" is the name).
You can view functors, applicatives and monads like this: They all carry a kind of "effect" and a "value". (Note that the terms "effect" and "value" are only approximations - there doesn't actually need to be any side effects or values - like in Identity or Const.)
With Functor you can modify possible values inside using fmap, but you cannot do anything with effects inside.
With Applicative, you can create a value without any effect with pure, and you can sequence effects and combine their values inside. But the effects and values are separate: When sequencing effects, an effect cannot depend on the value of a previous one. This is reflected in <*, <*> and *>: They sequence effects and combine their values, but you cannot examine the values inside in any way.
You could define Applicative using this alternative set of functions:
fmap :: (a -> b) -> (f a -> f b)
pureUnit :: f ()
pair :: f a -> f b -> f (a, b)
-- or even with a more suggestive type (f a, f b) -> f (a, b)
(where pureUnit doesn't carry any effect)
and define pure and <*> from them (and vice versa), as sketched below. Here pair sequences two effects and remembers the values of both of them. This definition expresses the fact that Applicative is a monoidal functor.
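A minimal sketch of the two-way translation, using the names above (the class PairApplicative is hypothetical; in base, Applicative itself plays both roles):
class Functor f => PairApplicative f where
  pureUnit :: f ()
  pair     :: f a -> f b -> f (a, b)

-- pure and <*> recovered from the alternative set
pure' :: PairApplicative f => a -> f a
pure' x = x <$ pureUnit                    -- overwrite the () with x

ap' :: PairApplicative f => f (a -> b) -> f a -> f b
ap' ff fa = uncurry ($) <$> pair ff fa     -- sequence the effects, then apply inside

-- and the other direction, for any Applicative
pureUnit' :: Applicative f => f ()
pureUnit' = pure ()

pair' :: Applicative f => f a -> f b -> f (a, b)
pair' fa fb = (,) <$> fa <*> fb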
Now consider an arbitrary (finite) expression consisting of pair, fmap, pureUnit and some primitive applicative values. We have several rules we can use:
fmap f . fmap g ==> fmap (f . g)
pair (fmap f x) y ==> fmap (\(a,b) -> (f a, b)) (pair x y)
pair x (fmap f y) ==> -- similar
pair pureUnit y ==> fmap (\b -> ((), b)) y
pair x pureUnit ==> -- similar
pair (pair x y) z ==> pair x (pair y z)  -- up to re-nesting the tuple with fmap
Using these rules, we can reorder pairs, push fmaps outwards and eliminate pureUnits, so eventually such expression can be converted into
fmap pureFunction (x1 `pair` x2 `pair` ... `pair` xn)
or
fmap pureFunction pureUnit
So indeed, we can first collect all effects together using pair and then modify the resulting value inside using a pure function.
With Monad, an effect can depend on the value of a previous monadic value. This makes them so powerful.
The answers already given are excellent, but there's one small(ish) point I'd like to spell out explicitly, and it has to do with <*, <$ and *>.
One of the examples was
do var1 <- parser1
   var2 <- parser2
   var3 <- parser3
   return $ foo var1 var2 var3
which can also be written as foo <$> parser1 <*> parser2 <*> parser3.
Suppose that the value of var2 is irrelevant for foo - e.g. it's just some separating whitespace. Then it also doesn't make sense to have foo accept this whitespace only to ignore it. In this case foo should have two parameters, not three. Using do-notation, you can write this as:
do var1 <- parser1
   parser2
   var3 <- parser3
   return $ foo var1 var3
If you wanted to write this using only <$> and <*> it should be something like one of these equivalent expressions:
(\x _ z -> foo x z) <$> parser1 <*> parser2 <*> parser3
(\x _ -> foo x) <$> parser1 <*> parser2 <*> parser3
(\x -> const (foo x)) <$> parser1 <*> parser2 <*> parser3
(const . foo) <$> parser1 <*> parser2 <*> parser3
But that's kind of tricky to get right with more arguments!
However, you can also write foo <$> parser1 <* parser2 <*> parser3. You could call foo the semantic function which is fed the result of parser1 and parser3 while ignoring the result of parser2 in between. The absence of > is meant to be indicative of the ignoring.
If you wanted to ignore the result of parser1 but use the other two results, you can similarly write foo <$ parser1 <*> parser2 <*> parser3, using <$ instead of <$>.
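A small sketch of both spellings with Text.Parsec (keyValue and version are hypothetical parsers):
import Text.Parsec
import Text.Parsec.String (Parser)

-- the separator's result is dropped by <*, so the semantic function (,)
-- only receives the two results it actually uses
keyValue :: Parser (String, String)
keyValue = (,) <$> many1 letter <* char '=' <*> many1 digit

-- here the result of the leading keyword is ignored with <$
version :: Parser Int
version = read <$ string "version" <*> many1 digit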
I've never found much use for *>, I would normally write id <$ p1 <*> p2 for the parser that ignores the result of p1 and just parses with p2; you could write this as p1 *> p2 but that increases the cognitive load for readers of the code.
I learnt this way of thinking just for parsers; it was later generalised to Applicatives. I think this notation comes from the uuparsing library; at least, I used it at Utrecht 10+ years ago.
I'd like to add/reword a couple things to the very helpful existing answers:
Applicatives are "static". In pure f <*> a <*> b, b does not depend on a, and so can be analyzed statically. This is what I was trying to show in my answer to your previous question (but I guess I failed -- sorry) -- that since there was actually no sequential dependence of parsers, there was no need for monads.
The key difference that monads bring to the table is (>>=) :: Monad m => m a -> (a -> m b) -> m b, or, alternatively, join :: Monad m => m (m a) -> m a. Note that whenever you have x <- y inside do notation, you're using >>=. These say that monads allow you to use a value "inside" a monad to produce a new monad, "dynamically". This cannot be done with an Applicative. Examples:
-- parse two in a row of the same character
char >>= \c1 ->
char >>= \c2 ->
guard (c1 == c2) >>
return c1
-- parse a digit followed by a number of chars equal to that digit
-- assuming: 1) `digit`s value is an Int,
-- 2) there's a `manyN` combinator
-- examples: "3abcdef" -> Just {rest: "def", chars: "abc"}
-- "14abcdef" -> Nothing
digit >>= \d ->
manyN d char
-- note how the value from the first parser is pumped into
-- creating the second parser
-- creating 'half' of a cartesian product
[1 .. 10] >>= \x ->
[1 .. x] >>= \y ->
return (x, y)
Lastly, Applicatives enable lifted function application, as mentioned by @WillNess.
To try to get an idea of what the "intermediate" results look like, you can look at the parallels between normal and lifted function application. Assuming add2 = (+) :: Int -> Int -> Int:
-- normal function application
add2 :: Int -> Int -> Int
add2 3 :: Int -> Int
(add2 3) 4 :: Int
-- lifted function application
pure add2 :: [] (Int -> Int -> Int)
pure add2 <*> pure 3 :: [] (Int -> Int)
pure add2 <*> pure 3 <*> pure 4 :: [] Int
-- more useful example
[(+1), (*2)]
[(+1), (*2)] <*> [1 .. 5]
[(+), (*)] <*> [1 .. 5] <*> [3 .. 8]
Unfortunately, you can't meaningfully print the result of pure add2 <*> pure 3 for the same reason that you can't for add2 ... frustrating. You may also want to look at the Identity and its typeclass instances to get a handle on Applicatives.
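A quick way to experiment is the Identity functor, where the "context" does nothing; a minimal sketch:
import Data.Functor.Identity

added :: Identity Int
added = (+) <$> Identity 3 <*> Identity 4   -- lifted application, nothing else

main :: IO ()
main = print (runIdentity added)            -- prints 7
Here the final value is a number rather than a function, so it can be printed.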
I've recently been trying to learn Haskell with "Learn You a Haskell" and have been really struggling with understanding functions as Applicatives. I should point out that I seem to understand other kinds of Applicatives, like lists and Maybe, well enough to use them effectively.
As I tend to do when trying to understand something, I played with as many examples as I could; once the pattern emerges, things tend to make sense. Attached are my notes from several examples I tried, along with a diagram I drew to try to visualize what was happening.
The definition of funct doesn't seem to be relevant to the outcome, but in my tests I used a function with the following signature:
funct :: (Num a) => a -> a -> a -> a
At the bottom I tried to show the same thing as in the diagrams just using normal math notation.
All of this is well and good: I can see the pattern when I have some function of an arbitrary number of arguments (it needs 2 or more) and apply it, via <*>, to a function that takes one argument. However, the pattern doesn't intuitively make much sense to me.
So here are the specific questions I have:
What is the intuitive way to understand the pattern I'm seeing, particularly if I view an Applicative as a container (which is how I view Maybe and lists)?
What is the pattern when the function on the right of the <*> takes more than a single argument? (I've mostly been using the functions (+3) or (+5) on the right.)
Why is the function on the right-hand side of the <*> applied to the second argument of the function on the left side? For example, if the function on the right-hand side were f, then funct(a,b,c) turns into funct(x, f(x), c).
Why does it work for funct <*> (+3) but not for funct <*> (+)? Moreover, it DOES work for (\a b -> 3) <*> (+).
Any explanation that gives me a better intuitive understanding of this concept would be greatly appreciated. I've read other explanations, such as the one in the book I mentioned, that explain functions in terms of ((->) r) or similar patterns, but even though I know how to use the (->) notation when defining a function, I'm not sure I understand it in this context.
Extra Details:
I want to also include the actual code I used to help me form the diagrams above.
First I defined funct as I showed above with:
funct :: (Num a) => a -> a -> a -> a
Throughout the process I refined funct in various ways to understand what was going on.
Next I tried this code:
funct a b c = 6
functMod = funct <*> (+3)
functMod 2 3
Unsurprisingly, the result was 6.
So now I tried just returning each argument directly like this:
funct a b c = a
functMod = funct <*> (+3)
functMod 2 3 -- returns 2
funct a b c = b
functMod = funct <*> (+3)
functMod 2 3 -- returns 5
funct a b c = c
functMod = funct <*> (+3)
functMod 2 3 -- returns 3
From this I was able to confirm that the second diagram is what was taking place. I repeated this pattern to observe the third diagram as well (which is the same pattern extended one more time).
You can usually understand what a function is doing in Haskell if you substitute its definition into some examples. You already have some examples, and the definition you need is <*> for (->) a, which is this:
(f <*> g) x = f x (g x)
I don't know if you'll find any better intuition than just using the definition a few times.
On your first example we get this:
(funct <*> (+3)) x
= funct x ((+3) x)
= funct x (x+3)
(Since there was nothing I could do with funct <*> (+3) without a further parameter, I just applied it to x - do this any time you need to.)
And the rest:
(funct <*> (+3) <*> (+5)) x
= (funct <*> (+3)) x ((+5) x)
= funct x ((+3) x) ((+5) x)
= funct x (x+3) (x+5)
(funct <*> (+)) x
= funct x ((+) x)
= funct x (x+)
Notice you can't use the same funct with both of these: in the first it takes three numbers, but in the second it needs to take a number and then a function.
((\a b -> 3) <*> (+)) x
= (\a b -> 3) x (x+)
= (\b -> 3) (x+)
= 3
((\a b -> a + b) <*> (+)) x
= (\a b -> a + b) x (x+)
= x + (x+)
= type error
As pointed out by David Fletcher, (<*>) for functions is:
(g <*> f) x = g x (f x)
There are two intuitive pictures of (<*>) for functions which, though not quite able to stop it from being dizzying, might help with keeping your balance as you go through code that uses it. In the next few paragraphs, I will use (+) <*> negate as a running example, so you might want to try it out a few times in GHCi before continuing.
The first picture is (<*>) as applying the result of a function to the result of another function:
g <*> f = \x -> (g x) (f x)
For instance, (+) <*> negate passes an argument to both (+) and negate, giving out a function and a number respectively, and then applies one to the other...
(+) <*> negate = \x -> (x +) (negate x)
... which explains why its result is always 0.
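Indeed, a quick check in GHCi:
GHCi> ((+) <*> negate) 7
0
GHCi> ((+) <*> negate) (-3)
0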
The second picture is (<*>) as a variation on function composition in which the argument is also used to determine what the second function to be composed will be
g <*> f = \x -> (g x . f) x
From that point of view, (+) <*> negate negates the argument and then adds the argument to the result:
(+) <*> negate = \x -> ((x +) . negate) x
If you have a funct :: Num a => a -> a -> a -> a, funct <*> (+3) works because:
In terms of the first picture: (+ 3) x is a number, and so you can apply funct x to it, ending up with funct x ((+ 3) x), a function that takes two arguments.
In terms of the second picture: funct x is a function (of type Num a => a -> a -> a) that takes a number, and so you can compose it with (+ 3) :: Num a => a -> a.
On the other hand, with funct <*> (+), we have:
In terms of the first picture: (+) x is not a number, but a Num a => a -> a function, and so you can't apply funct x to it.
In terms of the second picture: the result type of (+), when seen as a function of one argument ((+) :: Num a => a -> (a -> a)), is Num a => a -> a (and not Num a => a), and so you can't compose it with funct x (which expects a Num a => a).
For an arbitrary example of something that does work with (+) as the second argument to (<*>), consider the function iterate:
iterate :: (a -> a) -> a -> [a]
Given a function and an initial value, iterate generates an infinite list by repeatedly applying the function. If we flip the arguments to iterate, we end up with:
flip iterate :: a -> (a -> a) -> [a]
Given the problem with funct <*> (+) was that funct x wouldn't take a Num a => a -> a function, this seems to have a suitable type. And sure enough:
GHCi> take 10 $ (flip iterate <*> (+)) 1
[1,2,3,4,5,6,7,8,9,10]
(On a tangential note, you can leave out the flip if you use (=<<) instead of (<*>). That, however, is a different story.)
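For the record, that spelling behaves the same way:
GHCi> take 10 $ (iterate =<< (+)) 1
[1,2,3,4,5,6,7,8,9,10]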
As a final aside, neither of the two intuitive pictures lends itself particularly well to the common use case of applicative style expressions such as:
(+) <$> (^2) <*> (^3)
To use the intuitive pictures there, you'd have to account for how (<$>) for functions is (.), which muddies things quite a bit. It is easier to just see the entire thing as lifted application instead: in this example, we are adding up the results of (^2) and (^3). The equivalent spelling as...
liftA2 (+) (^2) (^3)
... somewhat emphasises that. Personally, though, I feel one possible disadvantage of writing liftA2 in this setting is that, if you apply the resulting function right in the same expression, you end up with something like...
liftA2 (+) (^2) (^3) 5
... and seeing liftA2 followed by three arguments tends to make my brain tilt.
You can view the function monad as a container. Note that it's really a separate monad for every argument-type, so we can pick a simple example: Bool.
newtype M a = M (Bool -> a)
This is equivalent to
data M' a = M' { resultForFalse :: a
               , resultForTrue  :: a }
and the instances could be defined
instance Functor M where
  fmap f (M g) = M g'
    where g' False = f $ g False
          g' True  = f $ g True

instance Functor M' where
  fmap f (M' gFalse gTrue) = M' g'False g'True
    where g'False = f $ gFalse
          g'True  = f $ gTrue
and similar for Applicative and Monad.
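For instance, a sketch of the Applicative instances in both styles (Monad is analogous):
instance Applicative M where
  pure x      = M (\_ -> x)
  M f <*> M g = M (\b -> f b (g b))

instance Applicative M' where
  pure x                = M' x x
  M' fF fT <*> M' xF xT = M' (fF xF) (fT xT)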
Of course this exhaustive case-listing definition would become totally impractical for argument-types with more than a few possible values, but it's always the same principle.
But the important thing to take away is that the instances are always specific to one particular argument type. So, Bool -> Int and Bool -> String belong to the same monad, but Int -> Int and Char -> Int do not. Int -> Double -> Int does belong to the same monad as Int -> Int, but only if you consider Double -> Int as an opaque result type which has nothing to do with the ((->) Int) monad.
So, if you're considering something like a -> a -> a -> a then this is not really a question about applicatives/monads but about Haskell in general. And therefore, you shouldn't expect that the monad-as-container picture gets you anywhere. To understand a -> a -> a -> a as a member of a monad, you need to pick out which of the arrows you're talking about; in this case it's only the leftmost one, i.e. you have a value M (a -> a -> a) in the monad M = ((->) a). The arrows between a -> a -> a do not participate in the monadic action in any way; if they do in your code, then it means you're actually mixing multiple monads together. Before you do that, you should understand how a single monad works, so stick to examples with only a single function arrow.
class Functor f => Applicative f where
  pure  :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b
From my understanding, it takes a function f, which takes another function (a -> b) as its argument, and returns a function f. Applying f to a then returns a function f, and we apply that to b.
Here is an example:
Prelude> (+) <$> Just 2 <*> Just 3
Just 5
But I don't quite understand how it works.
I guess (+) should be f, Just 2 and Just 3 should be a and b respectively. Then what is (a -> b)?
From my understanding, it takes a function f...
Unfortunately this is incorrect. In this case, f is a type, not a function. Specifically, f is a "higher-kinded type" with kind * -> *. The type f is the functor.
In this case, f is Maybe. So we can rewrite the function types, specializing them for Maybe:
pure :: a -> Maybe a
(<*>) :: Maybe (a -> b) -> Maybe a -> Maybe b
It starts to become a bit clearer once you get this far. There are a couple different possible definitions for pure, but only one that makes sense:
pure = Just
The operator x <$> y is the same as pure x <*> y, so if you write out:
(+) <$> Just 2 <*> Just 3
Then we can rewrite it as:
pure (+) <*> pure 2 <*> pure 3
Although this technically has a more general type. Working with the applicative laws, we know that pure x <*> pure y is the same as pure (x y), so we get
pure ((+) 2) <*> pure 3
pure ((+) 2 3)
pure (2 + 3)
In this case, a and b are type variables, but since <*> appears twice they actually stand for different types at each use.
In the first <*>, a is Int and b is Int -> Int.
In the second <*>, both a and b are Int. (Technically you get generic versions of Int but that's not really important to the question.)
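One way to see this is to annotate each step explicitly (step1 and step2 are names made up for illustration):
step1 :: Maybe (Int -> Int)
step1 = pure (+) <*> pure (2 :: Int)   -- first <*>: a is Int, b is Int -> Int

step2 :: Maybe Int
step2 = step1 <*> pure 3               -- second <*>: a is Int, b is Int
step2 evaluates to Just 5.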
Applicative functors were introduced to Haskell as applicative style programming "Idioms". Unpacking this phrase, we have "applicative style programming"; which is just application of functions to arguments. We also have "idioms", or phrases in a language which have a special meaning. For example "raining cats and dogs" is an idiom for raining very heavily. Putting them together, applicative functors are function applications with special meaning.
Take for example, following Dietrich Epp's lead, anApplication defined by a function,
anApplication = f a
  where
    f = (+2)
    a = 3
and, anIdiomaticApplication, defined with idiomatic application,
anIdiomaticApplication = f <*> a
  where
    f = Just (+2)
    a = Just 3
The top-level structure of these definitions is similar. The difference? The first has a space--normal function application--and the second has <*>--idiomatic function application. This illustrates how <*> facilitates applicative style: just use <*> in place of a space.
The application, <*>, is idiomatic because it carries a meaning other than just pure function application. By way of exposition, in anIdiomaticApplication we have something like this:
f <*> a :: Maybe (Int -> Int) <*> Maybe Int
Here, the <*> in the type is used to represent a type level function* that corresponds to the signature of the real <*>. To the type-<*> we apply the type arguments for f and a (respectively Maybe (Int -> Int) and Maybe Int). After application we have
f <*> a :: Maybe Int
As an intermediate step, we can imagine something like
f <*> a :: Maybe ((Int -> Int) _ Int)
With _ being the type level stand-in for regular function application.
At this point we can finally see the idiom-ness called out. f <*> a is like a normal function application, (Int -> Int) _ Int, in the Maybe context/idiom. So, <*> is just function application that happens within a certain context.
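In GHCi, with the Maybe idiom:
GHCi> Just (+2) <*> Just 3
Just 5
GHCi> Nothing <*> Just 3
Nothing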
In parting, I'll emphasize that understanding <*> is only part of understanding its use. We can understand that f <*> a is just function application with some extra idiomatic meaning. Due to the Applicative laws, we can also assume that idiomatic application will be somehow sensible.
Don't be surprised, however, if you look at <*> and get confused since there is so little there. We must also be versed in the various Haskell Idioms. For instance, in the Maybe idiom either the function or value may not be present, in which case the output will be Nothing. There are of course, many others, but getting familiar with just Either a and State s should model a wide variety of the different kinds.
*Something like this could actually be made with a closed type family (untested)
type family IdmApp f a where
  IdmApp (f (a -> b)) a = f b
I'm currently in the process of trying to learn Haskell, and ran into an odd issue regarding the Maybe monad which I can't seem to figure out.
As an experiment, I'm currently trying to take a string, convert each letter to an arbitrary number, and multiply/combine them together. Here's what I have so far:
lookupTable :: [(Char, Int)]
lookupTable = [('A', 1), ('B', 4), ('C', -6)]
strToInts :: String -> [Maybe Int]
strToInts = map lookupChar
  where
    lookupChar :: Char -> Maybe Int
    lookupChar c = lookup c lookupTable
-- Currently fails
test :: (Num n, Ord n) => [Maybe n] -> [Maybe n]
test seq = [ x * y | (x, y) <- zip seq $ tail seq, x < y ]
main :: IO ()
main = do
  putStrLn $ show $ test $ strToInts "ABC"
When I try running this, it returns the following error:
test.hs:13:16:
Could not deduce (Num (Maybe n)) arising from a use of `*'
from the context (Num n, Ord n)
bound by the type signature for
test :: (Num n, Ord n) => [Maybe n] -> [Maybe n]
at test.hs:12:9-48
Possible fix: add an instance declaration for (Num (Maybe n))
In the expression: x * y
In the expression: [x * y | (x, y) <- zip seq $ tail seq]
In an equation for `test':
test seq = [x * y | (x, y) <- zip seq $ tail seq]
I'm not 100% sure why this error is occurring, or what it exactly means, though I suspect it might be because I'm trying to multiply two Maybe monads together -- if I change the definition of test to the following, the program compiles and runs fine:
test :: (Num n, Ord n) => [Maybe n] -> [Maybe n]
test seq = [ x | (x, y) <- zip seq $ tail seq, x < y ]
I also tried changing the type declaration to the below, but that didn't work either.
test :: (Num n, Ord n) => [Maybe n] -> [Num (Maybe n)]
I'm not really sure how to go about fixing this error. I'm fairly new to Haskell, so it might just be something really simple that I'm missing, or that I've structured everything completely wrong, but this is stumping me. What am I doing wrong?
Maybe does not have a Num instance, so you cannot multiply two Maybe values together directly. You need to somehow apply the pure function to the values inside the context. This is exactly what applicative functors are for!
Applicative functors live in Control.Applicative:
import Control.Applicative
So you have this function and you want to apply it to 2 arguments in a context:
(*) :: Num a => a -> a -> a
You probably learned about fmap; it takes a function and applies it to a value in a context. <$> is an alias for fmap. When we fmap the pure function over the Maybe value, we get the following result:
(*) <$> Just 5 :: Num a => Maybe (a -> a)
So now we have maybe a function and we need to apply it to maybe a value, this is exactly what the applicative functor does. Its main operator is <*> which has the signature:
(<*>) :: f (a -> b) -> f a -> f b
When we specialize it we get the function we need:
(<*>) :: Maybe (a -> b) -> Maybe a -> Maybe b
We apply it and the output is the number you expect.
(*) <$> Just 5 <*> Just 5 :: Num a => Maybe a
So to make your code compile you need to change your test function to use <$> and <*>, see if you can figure out how.
Reite's answer is correct, and it's generally how I'd normally recommend handling it - however, it seems to me that you don't quite understand how to work with Maybe values; if so there is little sense looking at applicative functors right now.
The definition of Maybe is basically just
data Maybe a = Nothing | Just a
Which basically means exactly what it sounds like when you read it in plain English: "a value of type Maybe a is either the value Nothing for that type, or a value of the form Just a".
Now you can use pattern matching to work with that, with the example of lists:
maybeReverse :: Maybe [a] -> Maybe [a]
maybeReverse Nothing = Nothing
maybeReverse (Just xs) = Just $ reverse xs
Which basically means "If the value is Nothing, then there's nothing to reverse, so the result is Nothing again. If the value is Just xs then we can reverse xs and wrap it with Just again to turn it into a Maybe [a] value).
Of course writing functions like this for every single function we ever want to use with a Maybe value would be tedious; So higher order functions to the rescue! The observation here is that in maybeReverse we didn't do all that much with reverse, we just applied it to the contained value and wrapped the result of that in Just.
So we can write a function called liftToMaybe that does this for us:
liftToMaybe :: (a->b) -> Maybe a -> Maybe b
liftToMaybe f Nothing = Nothing
liftToMaybe f (Just a) = Just $ f a
A further observation we can make is that because functions are values, we can also have Maybe values of functions. To do anything useful with those we could again unwrap them... or notice we're in the same situation as in the last paragraph, and immediately notice that we don't really care what function exactly is in that Maybe value and just write the abstraction directly:
maybeApply :: Maybe (a->b) -> Maybe a -> Maybe b
maybeApply Nothing _ = Nothing
maybeApply _ Nothing = Nothing
maybeApply (Just f) (Just a) = Just $ f a
Which, using our liftToMaybe function above, we can simplify a bit:
maybeApply :: Maybe (a->b) -> Maybe a -> Maybe b
maybeApply Nothing _ = Nothing
maybeApply (Just f) x = liftToMaybe f x
The <$> and <*> operators in Reite's answer are basically just infix names for liftToMaybe (which is also known as fmap) and maybeApply respectively; They have the types
(<$>) :: Functor f => (a->b) -> f a -> f b
(<*>) :: Applicative f => f (a->b) -> f a -> f b
You don't really need to know what the Functor and Applicative things are right now (although you should look into them at some point; They're basically generalizations of the above Maybe functions for other kinds of "context") - basically, just replace f with Maybe and you'll see that these are basically the same functions we've talked about earlier.
Now, I leave applying this to your original problem of multiplication to you (although the other answers kinda spoil it).
You are correct, the problem is that you are trying to multiply two Maybe values together, but (*) only works in instances of Num.
As it turns out, Maybe is an instance of the Applicative typeclass. This means that you can "lift" functions that work with a type a to functions that work with a type Maybe a.
import Control.Applicative
Two functions provided by Applicative are:
pure :: a -> f a puts a pure value in a "neutral context". For Maybe, this is Just.
(<*>) :: f (a -> b) -> f a -> f b lets you apply a "function in a context" to a "value in a context".
So, suppose we have this pure computation:
(*) 2 3
Here are some analogous computations in a Maybe context:
Just (*) <*> Just 2 <*> Just 3
-- result is Just 6
pure (*) <*> pure 2 <*> pure 3
-- result is Just 6 -- equivalent to the above
pure (*) <*> pure 2 <*> Nothing
-- Nothing
Nothing <*> pure 2 <*> Just 3
-- Nothing
Nothing <*> Nothing <*> Nothing
-- Nothing
If the function or one of the arguments is "missing", we return Nothing.
(<*>) is the explicit application operator when working with applicatives (instead of using a blank space, as when we work with pure values).
It's instructive to explore what (<*>) does with other instances of Applicative, like []:
ghci> [succ,pred] <*> pure 3
[4,2]
ghci> [succ,pred] <*> [3]
[4,2]
ghci> pure succ <*> [2,5]
[3,6]
ghci> [succ] <*> [2,5]
[3,6]
ghci> [(+),(*)] <*> pure 2 <*> pure 3
[5,6]
ghci> [(+),(*)] <*> [2,1] <*> pure 3
[5,4,6,3]
ghci> [(+),(*)] <*> [2,1] <*> [3,7]
[5,9,4,8,6,14,3,7]
I have to give a (simple) talk about Yesod. And yes... I've never, or only very rarely, used Haskell.
University lecturers... huh.
So I read a book about Yesod, and in some chapters the author uses operators like <$> and <*>.
Can someone explain in easy words what these operators do? It's pretty hard to google for those characters, and I tried to read the documentation of Control.Applicative, but to be honest, it's hard to follow for a Haskell beginner.
So I hope someone has a simple answer for me :)
An example from the book where these operators are used:
......
personForm :: Html -> MForm Handler (FormResult Person, Widget)
personForm = renderDivs $ Person
    <$> areq textField "Name" Nothing
    <*> areq (jqueryDayField def
                { jdsChangeYear = True      -- give a year dropdown
                , jdsYearRange  = "1900:-5" -- 1900 till five years ago
                }) "Birthday" Nothing
    <*> aopt textField "Favorite color" Nothing
    <*> areq emailField "Email address" Nothing
    <*> aopt urlField "Website" Nothing
data Person = Person
    { personName          :: Text
    , personBirthday      :: Day
    , personFavoriteColor :: Maybe Text
    , personEmail         :: Text
    , personWebsite       :: Maybe Text
    }
    deriving Show
.....
.....................................
Hey,
Thanks a lot; amazingly, most of the answers are useful. Sadly I can only hit "solved" on one answer.
Thanks a lot, the tutorial (which I really didn't find on Google) is pretty good.
I am always very careful when making answers that are made up mostly of links, but this is one amazing tutorial that explains Functors and Applicatives, and gives a bit of understanding of Monads.
The simplest answer is the type, of course. These operators come from the typeclasses Functor and its subclass Applicative.
class Functor f where
  fmap :: (a -> b) -> (f a -> f b)

(<$>) :: Functor f => (a -> b) -> f a -> f b
(<$>) = fmap   -- an infix synonym for fmap

class Functor f => Applicative f where
  pure  :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b
The simplest intuitive answer is that Functors and Applicatives let you annotate simple values with "metadata" and (<$>), (<*>), and friends let you transform your "regular" value-level functions to work on "annotated" values.
go x y -- works if x and y are regular values
go <$> pure x <*> pure y -- uses `pure` to add "default" metadata
-- but is otherwise identical to the last one
Like any simple answer, it's kind of a lie, though. "Metadata" is a very oversimplified term. Better ones are "computational context" or "effect context" or "container".
If you're familiar with Monads then you are already very familiar with this concept. All Monads are Applicatives and so you can think of (<$>) and (<*>) as providing an alternative syntax for some do notation
do x_val <- x
   y_val <- y
   return (go x_val y_val)

-- is written instead as

go <$> x
   <*> y
It has fewer symbols and emphasizes the idea of "applying" go to two arguments instead of emphasizing the imperative notion of "get the value that x generates, then get the value that y generates, then apply those values to go, then re-wrap the result" like do syntax does.
One final intuition I can throw out there is to think of Applicative in a very different way. Applicative is equivalent to another class called Monoidal.
class Functor f => Monoidal f where
  init :: f ()                    -- similar to pure
  prod :: f a -> f b -> f (a, b)  -- similar to (<*>)
so that Monoidal Functors let you (a) instantiate them with a trivial value
init :: [()]
init = [()]

init :: Maybe ()
init = Just ()
and also smash two of them together to produce their product
prod :: [a] -> [b] -> [(a, b)]
prod as bs = [(a, b) | a <- as, b <- bs]

prod :: Maybe a -> Maybe b -> Maybe (a, b)
prod (Just a) (Just b) = Just (a, b)
prod _        _        = Nothing
This means that with a Monoidal functor you can smash a whole lot of values together and then fmap a value-level function over the whole bunch
go <$> maybeInt `prod` (maybeChar `prod` maybeBool)
  where
    go :: (Int, (Char, Bool)) -> Double   -- it's pure!
    go (i, (c, b)) = ...
which is essentially what you're doing with (<$>) and (<*>), just with fewer tuples
go <$> maybeInt <*> maybeChar <*> maybeBool
  where
    go :: Int -> Char -> Bool -> Double
    go i c b = ...
Finally, here's how you convert between the two notions
-- forward
init = pure ()
prod x y = (,) <$> x <*> y
-- back
pure a = const a <$> init
f <*> x = uncurry ($) <$> prod f x
which shows how you can think of (<*>) as taking a normal value-level application ($) and injecting it up into the product inside of the Functor.
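As a quick sanity check, pretending the Maybe versions of prod from above are in scope:
ghci> prod (Just 2) (Just 'x')
Just (2,'x')
ghci> uncurry ($) <$> prod (Just succ) (Just (41 :: Int))
Just 42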
I don't suppose it helps to say that <$> is just an infix synonym for fmap. However, maybe these examples help clarify:
GHCi> (*2) <$> (Just 3)
Just 6
GHCi> (*2) <$> (Nothing)
Nothing
GHCi> (*3) <$> (Right 7)
Right 21
GHCi> (*2) <$> (Left "error")
Left "error"
GHCi> (+ 1) <$> [2,4,6,8]
[3,5,7,9]
Now compare that to this:
GHCi> (*) <$> (Just 2) <*> (Just 5)
Just 10
GHCi> (*) <$> (Just 2) <*> (Nothing)
Nothing
GHCi> (*) <$> (Right 3) <*> (Right 7)
Right 21
GHCi> (*) <$> (Left "error") <*> (Right 7)
Left "error"
GHCi> (+) <$> [1,2,3] <*> [10,20,30]
[11,21,31,12,22,32,13,23,33]
GHCi> (+) <$> [1,2,3] <*> []
[]
And then to this:
GHCi> (Just (*2)) <*> (Just 5)
Just 10
GHCi> (Right (*3)) <*> (Right 7)
Right 21
GHCi> [(+1),(+2),(+3)] <*> [10,20,30]
[11,21,31,12,22,32,13,23,33]
Really, that should show you all you need to know for lecture purposes, assuming you have learned from this that (*) <$> (Just 2) <*> (Just 5) is equivalent to Just (2 * 5)
(In the first set of examples, by the way, the functions on the left hand side are all applicatives.)
Put simply, <$> takes the function on the left and lifts it into the context of the "things in boxes" on the right, so that it can be applied to the things in boxes, in a way that obeys the special rules of the boxes (e.g. Nothing causing the whole chain to fail).
<*> takes a partially applied function in a box on the left and applies it to the value in a box on the right. A partially applied function is one which has been given some but not all of its arguments. So (*) <$> (Right 3) <*> (Right 7) <*> (Right 4) would fail - with a not-very-helpful error message - because once * has been applied to 3 and 7 it is no longer partially applied, and nobody knows what to do with the 4.
Used together, <$> and <*> allow a function to be applied to its arguments, all inside a box. You get the result in a box.
This can all only be done if the box is itself a Functor; that is the crucial constraint for all of this. A Functor is a type for which someone has defined an fmap function, which takes an ordinary function and applies it to the thing(s) inside the box, without changing the essential character of the box. If you like, Functors (boxes for things) know how to transform ordinary functions so that they can be applied to their contents.
If you're not ready to learn about functors, applicatives, and monads yet, this may give you an intuition for how to use <$> and <*>. (I myself learned how to use them by looking at examples, before I really understood that other stuff.) Without the <$> and <*>, the first part of that code would look something like this:
......
personForm :: Html -> MForm Handler (FormResult Person, Widget)
personForm = do
    name   <- areq textField "Name" Nothing
    bday   <- areq (jqueryDayField def
                     { jdsChangeYear = True      -- give a year dropdown
                     , jdsYearRange  = "1900:-5" -- 1900 till five years ago
                     }) "Birthday" Nothing
    colour <- aopt textField "Favorite color" Nothing
    email  <- areq emailField "Email address" Nothing
    url    <- aopt urlField "Website" Nothing
    renderDivs $ Person name bday colour email url
In other words, <$> and <*> can eliminate the need to create a lot of symbols that we only use once.
I am playing with Parsec and I want to combine two parsers into one with the result put in a pair, and then feed it another function to operate on the parse result to write something like this:
try (pFactor <&> (char '*' *> pTerm) `using` (*))
So I wrote this:
(<&>) :: (Monad m) => m a -> m b -> m (a, b)
pa <&> pb = do
  a <- pa
  b <- pb
  return (a, b)
And
using :: (Functor f) => f (a, b) -> (a -> b -> c) -> f c
p `using` f = (uncurry f) <$> p
Is there anything similar to (<&>) which has been implemented somewhere? Or could this be written pointfree? I tried fmap (,) but it seems hard to match the type.
Better than <&> or liftM2 would be
(,) <$> a <*> b
since Applicative style seems to be gaining popularity and is very succinct. Using applicative style for things like this will often eliminate the need for <&> itself: applying the final function directly, as in f <$> a <*> b, is much clearer than (a <&> b) `using` f.
Also this doesn't even require a monad - it will work for Applicatives too.
Is there anything similar to (<&>) which has been implemented somewhere? Or could this be written pointfreely? I tried fmap (,) but it seems hard to match the type.
I don't know if it's implemented anywhere, but <&> should be the same as liftM2 (,). The difference from fmap is that liftM2 lifts a binary function into the monad.
Using applicative style, there is no need to put the intermediate results into a tuple just to immediately apply an uncurried function. Just apply the function "directly" using <$> and <*>.
try ((*) <$> pFactor <*> (char '*' *> pTerm))
In general, assuming sane instances of Monad and Applicative,
do x0 <- m0
   x1 <- m1
   ...
   return $ f x0 x1 ...
is equivalent to
f <$> m0 <*> m1 <*> ...
except that the latter form is more general and only requires an Applicative instance. (Every Monad is also an Applicative; since GHC 7.10 the Applicative superclass of Monad means the language enforces this.)
Note that if you go in the opposite direction from Applicative, you'll see that the way you want to combine parsers fits nicely into the Arrow paradigm and Arrow-based parser implementations.
E.g.:
import Control.Arrow
(<&>) = (&&&)
p `using` f = p >>^ uncurry f
Yes, you could use applicative style but I don't believe that answers either of your questions.
Is there some already defined combinator that takes two arbitrary monads and sticks their value types in a pair and then sticks that pair in the context?
Not in any of the standard packages.
Can you do that last bit point free?
I'm not sure if there is a way to make a curried function with an arity greater than 1 point free. I don't think there is.
Hope that answers your questions.