In this talk, at 52:40, the slide below is discussed.
I don't quite understand this definition.
Why is the x needed in the definition?
Why is the output not needed?
For example, why is the definition not (f . g) x -> y = f (g x) -> y, or something like that?
Is it easier to understand this definition if I look at it as a rewriting rule? Whenever the expression evaluator encounters a pattern like the one on the left-hand side, should it rewrite it to the right-hand side?
Is the rewriting-rule interpretation the only correct way to understand this definition? (It is the only way the definition makes sense to me, but I am not even sure it is the correct interpretation.) I have to admit that I am quite a beginner in Haskell.
EDIT:
Or is this definition just sugar for a lambda expression?
Or is the lambda expression sugar for the rewriting rule?
Composition can be written in several equivalent ways.
(.) = \f g -> \x -> f (g x)
(.) = \f g x -> f (g x)
f . g = \x -> f (g x)
The last example says "a composition of two functions gives a function, such that..."
More equivalent expressions:
(.) f g = \x -> f (g x)
(.) f g x = f (g x)
(f . g) x = f (g x)
Perhaps the infix notation is confusing you? Let's look at another way to write that definition:
(.) :: (b -> c) -> (a -> b) -> a -> c
(.) f g x = f (g x) -- definition 1
So we can think of (.) as a function which takes three parameters: f (a function), g (another function), and x (a value). It then returns f (g x). To invoke this function, we could write an expression like:
(.) head tail "test"
which would return 'e'. However, functions whose names consist of special characters (like .) can be used infix style, like so:
(head . tail) "test"
Now, another way to define . is like this:
(f . g) x = f (g x) -- definition 2
This is identical to "definition 1", but the notation may look a little strange, with the function "name" . appearing after the first parameter! But this is just another example of infix notation.
Now, let's look at how this function is actually defined in Prelude:
(.) :: (b -> c) -> (a -> b) -> a -> c
(.) f g = \x -> f (g x)
This is equivalent to definitions 1 and 2, but uses a syntax you may not be familiar with: lambda notation. The right-hand side introduces an unnamed function with a single parameter x, and defines that function to be f (g x). So the whole thing says that the function (.) f g is defined to be that unnamed function! You may wonder why anyone would write it in such a strange way. It will make more sense once you've worked with infix notation and lambdas for a while.
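As a quick sanity check (a minimal sketch; the names compose1/compose2/compose3 are made up to mirror the three definitions above), all three spellings produce the same function:

```haskell
-- Three ways to write composition; the names are illustrative only.
compose1, compose2, compose3 :: (b -> c) -> (a -> b) -> a -> c
compose1 f g x = f (g x)         -- definition 1: three parameters
(f `compose2` g) x = f (g x)     -- definition 2: infix (backticks instead of a symbolic name)
compose3 f g = \x -> f (g x)     -- Prelude style: lambda on the right-hand side

main :: IO ()
main = do
  print (compose1 head tail "test")          -- 'e'
  print ((head `compose2` tail) "test")      -- 'e'
  print (compose3 head tail "test")          -- 'e'
```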
-> is needed in the type signature of the function, i.e. to say what types the function takes as arguments and what type its result has. Explanations:
f :: ... is something like "function f has type ..."
(a -> b) means the type "a function which takes an argument of type a and returns a result of type b"
(a -> b) -> (b -> c) -> (a -> c) means "a function which takes a function of type a -> b and a function of type b -> c and returns a function of type a -> c" (this is a simplified formulation; please refer to the note below)
The second line is the definition of f . g. It's like defining functions in math: there you define a function h by saying what the result of h(x) shall be for any given argument x (you can write h(x) = x², for example). You can read the line
(f . g) x = f (g x)
as
(f . g)(x) := f(g(x))
which shall be read as: "The result of the function f . g for any given x shall be f(g(x))"
Conclusion: -> is like the arrow in mathematics, which you might have seen in terms like f : R -> R, and = is like := (f(x) := x² means, in mathematics, that f(x) is defined to be x²).
Note: The actual type of (a -> b) -> (b -> c) -> (a -> c) is (as mentioned by @Ben Voigt): "a function which takes a function of type a -> b and returns a function which maps a function of type b -> c to a function of type a -> c". @jhegedus: please let me know in the comments if you need an explanation of it.
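To illustrate the currying in that note (the helper name showAfter is made up for this sketch), applying (.) to just one argument already yields a function that waits for the next:

```haskell
-- (show .) is (.) partially applied to show: it takes a function and
-- returns a new function that shows that function's result.
showAfter :: Show b => (a -> b) -> a -> String
showAfter = (show .)

main :: IO ()
main = putStrLn (showAfter (+ 1) (2 :: Int))  -- prints "3"
```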
That's not the source code of (.), although it is close. Here's the actual source:
-- | Function composition.
{-# INLINE (.) #-}
-- Make sure it has TWO args only on the left, so that it inlines
-- when applied to two functions, even if there is no final argument
(.) :: (b -> c) -> (a -> b) -> a -> c
(.) f g = \x -> f (g x)
(f . g) is a function. The source uses a lambda form to be that function. A lambda form must provide local bindings for each of its arguments. In this case, there is only one argument, and it is locally bound to the name 'x'. That's why 'x' (which is of type 'a') must be mentioned.
Since it is marked INLINE, it will effectively rewrite the code during the optimizer passes. (IIRC, that's after desugaring (conversion to Core) and before conversion to STG.)
Lambdas are not sugar, they are fundamental. let/where are sugar for lambdas.
Function definitions are almost sugar for lambdas, but the optimizer (in GHC at least) uses the arity in the definition to determine when/how to inline a function. The type "(b -> c) -> (a -> b) -> a -> c" can be thought of as any arity from 0 to 3, and can be defined with any of those arities.
Unnecessary parentheses can be used to strongly hint at the arity you want, although the slide does that backwards from the convention I've seen. For example, adding parentheses around "a -> c" to get "(b -> c) -> (a -> b) -> (a -> c)"; that type is generally thought of as a binary function type. The slides use that type, but then give a ternary definition.
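A small sketch of the arity point (the names composeA2/composeA3 are illustrative): the same type is inhabited by definitions of different arities, and the extra parentheses hint at the binary reading:

```haskell
-- Binary definition: two arguments on the left, a lambda on the right.
-- The parentheses around (a -> c) are redundant to the type checker.
composeA2 :: (b -> c) -> (a -> b) -> (a -> c)
composeA2 f g = \x -> f (g x)

-- Ternary definition: three arguments on the left. Same type.
composeA3 :: (b -> c) -> (a -> b) -> a -> c
composeA3 f g x = f (g x)

main :: IO ()
main = do
  print (composeA2 negate (* 2) 3)  -- -6
  print (composeA3 negate (* 2) 3)  -- -6
```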
Related
I'm reading the second edition of Programming in Haskell and I've come across this sentence:
... there is only one way to make any given parameterised type into a functor, and hence any function with the same polymorphic type as fmap must be equal to fmap.
This doesn't seem right to me, though. I can see that there is only one valid definition of fmap for each Functor type, but surely I could define any number of functions with the type (a -> b) -> f a -> f b which aren't equivalent to each other?
Why is this the case? Or, is it just a mistake by the author?
You've misread what the author was saying.
...any function with the same polymorphic type as fmap...
This means, any function with the signature
Functor f => (a -> b) -> f a -> f b
must be equivalent to fmap. (Unless you permit bottom values, of course.)
That statement is true; it can be seen quite easily if you try to define such a function: because you know nothing about f except that it's a functor, the only way to obtain a non-⊥ f b value is by fmapping over the f a one.
What's a bit less clear cut is the logical implication in the quote:
there is only one way to make any given parameterised type into a functor, and hence any function with the same polymorphic type as fmap must be equal to fmap.
I think what the author means there is, because a Functor f => (a -> b) -> f a -> f b function must necessarily invoke fmap, and because fmap is always the only valid functor-mapping for a parameterised type, any Functor f => (a -> b) -> f a -> f b will indeed also in practice obey the functor laws, i.e. it will be the fmap.
I agree that the “hence” is a bit badly phrased, but in principle the quote is correct.
I think that the quote refers to this scenario. Assume we define a parameterized type:
data F a = .... -- whatever
for which we can write not only one, but two fmap implementations
fmap1 :: (a -> b) -> F a -> F b
fmap2 :: (a -> b) -> F a -> F b
satisfying the functor laws
fmap1 id = id
fmap1 (f . g) = fmap1 f . fmap1 g
fmap2 id = id
fmap2 (f . g) = fmap2 f . fmap2 g
Under these assumptions, we have that fmap1 = fmap2.
This is a theoretical consequence of the "free theorem" associated to fmap's polymorphic type (see the comment under Lemma 1).
Pragmatically, this ensures that the instance we obtain from deriving Functor is the only possible one.
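For instance (a minimal sketch using DeriveFunctor), the compiler-derived instance is exactly the one forced by the laws:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- A simple parameterized type; the derived fmap is the only lawful one:
-- fmap f (Pair x y) = Pair (f x) (f y).
data Pair a = Pair a a deriving (Show, Eq, Functor)

main :: IO ()
main = print (fmap (+ 1) (Pair 1 2))  -- Pair 2 3
```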
It is a mistake. Here are some examples of functions with the same type as fmap for lists that are not fmap:
\f -> const []
\f -> concatMap (replicate 2 . f)
\f -> map (f . head) . chunksOf 2   -- chunksOf is from Data.List.Split
\f -> map f . reverse
There are many more. In general, given a function ixf from list lengths to lists of numbers no bigger than that length (that is, valid indices into the list), we can build
maybeIt'sFmapLol :: (Int -> [Int]) -> (a -> b) -> [a] -> [b]
maybeIt'sFmapLol ixf elemf xs = [map elemf xs !! ix | ix <- ixf (length xs)]
Use suitably lazy variants of Int to handle infinite lists. A similar function schema can be cooked up for other container-like functors.
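These really do have fmap's list type while disagreeing with fmap; a quick check (notFmap is an illustrative name, and a local copy of chunksOf is included so no extra package is needed):

```haskell
-- Local stand-in for Data.List.Split.chunksOf, to keep this self-contained.
chunksOf :: Int -> [a] -> [[a]]
chunksOf _ [] = []
chunksOf n xs = take n xs : chunksOf n (drop n xs)

-- One of the counterexamples above: same type as fmap for lists, but not fmap.
notFmap :: (a -> b) -> [a] -> [b]
notFmap f = map (f . head) . chunksOf 2

main :: IO ()
main = do
  print (notFmap (+ 1) [10, 20, 30, 40])  -- [11,31]
  print (fmap    (+ 1) [10, 20, 30, 40])  -- [11,21,31,41]
```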
In many articles I have read that the monadic >>= operator is a way to represent function composition. But to me it is closer to some kind of advanced function application
($) :: (a -> b) -> a -> b
(>>=) :: Monad m => m a -> (a -> m b) -> m b
For composition we have
(.) :: (b -> c) -> (a -> b) -> a -> c
(>=>) :: Monad m => (a -> m b) -> (b -> m c) -> a -> m c
Please clarify.
Clearly, >>= is not a way to represent function composition. Function composition is simply done with (.). However, I don't think any of the articles you've read meant this, either.
What they meant was “upgrading” function composition to work directly with “monadic functions”, i.e. functions of the form a -> m b. The technical term for such functions is Kleisli arrows, and indeed they can be composed with <=< or >=>. (Alternatively, you can use the Category instance, then you can also compose them with . or >>>.)
However, talking about arrows / categories tends to be confusing, especially to beginners, just like point-free definitions of ordinary functions are often confusing. Luckily, Haskell also allows us to express functions in a more familiar style that focuses on the results of functions, rather than on the functions themselves as abstract morphisms†. It's done with lambda abstraction: instead of
q = h . g . f
you may write
q = (\x -> (\y -> (\z -> h z) (g y)) (f x))
...of course the preferred style would be (this being only syntactic sugar for lambda abstraction!)‡
q x = let y = f x
          z = g y
      in  h z
Note how, in the lambda expression, basically composition was replaced by application:
q = \x -> (\y -> (\z -> h z) $ g y) $ f x
Adapted to Kleisli arrows, this means instead of
q = h <=< g <=< f
you write
q = \x -> (\y -> (\z -> h z) =<< g y) =<< f x
which again looks of course much nicer with flipped operators or syntactic sugar:
q x = do y <- f x
         z <- g y
         h z
So, indeed, =<< is to <=< as $ is to (.). The reason it still makes sense to call it a composition operator is that, apart from “applying to values”, the >>= operator also does the nontrivial bit of Kleisli arrow composition, which function composition doesn't need: joining the monadic layers.
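A concrete sketch with Maybe (safeRecip and safeSqrt are made-up helpers): the same pipeline written once with Kleisli composition and once with bind:

```haskell
import Control.Monad ((<=<))

-- Two partial functions as Kleisli arrows, a -> Maybe b.
safeRecip :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)

safeSqrt :: Double -> Maybe Double
safeSqrt x | x < 0     = Nothing
           | otherwise = Just (sqrt x)

-- Composition style (<=<) versus application style (=<<):
viaCompose :: Double -> Maybe Double
viaCompose = safeSqrt <=< safeRecip

viaBind :: Double -> Maybe Double
viaBind x = safeSqrt =<< safeRecip x

main :: IO ()
main = do
  print (viaCompose 4)  -- Just 0.5
  print (viaBind 4)     -- Just 0.5
  print (viaCompose 0)  -- Nothing
```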
†The reason this works is that Hask is a cartesian closed category, in particular a well-pointed category. In such a category, arrows can, broadly speaking, be defined by the collection of all their results when applied to simple argument values.
‡@adamse remarks that let is not really syntactic sugar for lambda abstraction. This is particularly relevant in the case of recursive definitions, which you can't directly write with a lambda. But in simple cases like this one, let does behave like syntactic sugar for lambdas, just like do notation is syntactic sugar for lambdas and >>=. (BTW, there's an extension which allows recursion even in do notation... it circumvents the lambda restriction by using fixed-point combinators.)
Just as an illustration, consider this:
($)   :: (a -> b) -> a -> b
let g = g in (g $)   :: a -> b        -- g :: a -> b

Functor f =>
(<$>) :: (a -> b) -> f a -> f b
let g = g in (g <$>) :: f a -> f b    -- g :: a -> b

Applicative f =>
(<*>) :: f (a -> b) -> f a -> f b
let h = h in (h <*>) :: f a -> f b    -- h :: f (a -> b)

Monad m =>
(=<<) :: (a -> m b) -> m a -> m b
let k = k in (k =<<) :: m a -> m b    -- k :: a -> m b
So yes, each one of those, (g <$>), (h <*>) or (k =<<), is some kind of a function application, promoted into either Functor, Applicative Functor, or a Monad "context". And (g $) is just a regular kind of application of a regular kind of function.
With Functors, functions have no influence on the f component of the overall thing. They work strictly on the inside and can't influence the "wrapping".
With Applicatives, the functions come wrapped in an f, which wrapping combines with that of an argument (as part of the application) to produce the wrapping of the result.
With Monads, functions themselves now produce the wrapped results, pulling their arguments somehow from the wrapped argument (as part of the application).
We can see the three operators as a kind of marking on a function, as mathematicians like to write, say, f' or f^ or f* (and in the original work by Eugenio Moggi(1), f* is exactly what was used, denoting the promoted function (f =<<)).
And of course, with the promoted functions :: f a -> f b, we get to chain them, because now the types line up. The promotion is what allows the composition.
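Concretely, with Maybe as the context (a minimal sketch reusing the names g, h, k from the diagram above), each promoted function has the plain shape f a -> f b, so the types line up for chaining:

```haskell
-- The same "application" at each level, specialized to Maybe.
g :: Int -> Int
g = (+ 1)

h :: Maybe (Int -> Int)
h = Just (* 2)

k :: Int -> Maybe Int
k x = if x > 0 then Just (x * 10) else Nothing

main :: IO ()
main = do
  print ((g <$>) (Just 3))              -- Just 4
  print ((h <*>) (Just 3))              -- Just 6
  print ((k =<<) (Just 3))              -- Just 30
  print ((g <$>) ((k =<<) (Just 3)))   -- chaining the promoted functions: Just 31
```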
(1) "Notions of computation and monads", Eugenio Moggi, July 1991.
more about compositionality, with a picture: Monads with Join() instead of Bind()
So the functor is "magically working inside" "the pipes"; applicative is "prefabricated pipes built from components in advance"; and monads are "building pipe networks as we go". An illustration:
The (.) operator has the signature:
(.) :: (b -> c) -> (a -> b) -> a -> c
(.) f g x = f $ g x
This looks a bit similar to the composition function in primitive recursive functions with one g.
I'm not interested in extending the number of g functions, but rather in (a number of) functions that apply the (.) idea to a function g with multiple operands. In other words, something like:
(..) :: (c -> d) -> (a -> b -> c) -> a -> b -> d
(..) f g x y = f $ g x y
A search on Hoogle doesn't result in any function. Is there a package that can handle this with an arbitrary number of operands?
To answer-ify my comments:
Multi-argument function composition operators are very easy to define, and luckily someone has already done this for you. The composition package has a nice set of operators for composing functions in this manner. I also find that, instead of haskell.org's Hoogle engine, fpcomplete's version searches through more packages, making it easier to find what I'm looking for.
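For the two-argument case specifically, here is a standalone sketch (the operator name .: follows the composition package's convention; the point-free body is the classic (.) . (.)):

```haskell
-- Compose a one-argument function after a two-argument one:
-- (f .: g) x y = f (g x y).
(.:) :: (c -> d) -> (a -> b -> c) -> a -> b -> d
(.:) = (.) . (.)

main :: IO ()
main = print ((negate .: (+)) 3 4)  -- -7
```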
I have been reading this article, and in one of its sections it is stated:
Lenses compose backwards. Can't we make (.) behave like functions?
You're right, we could. We don't for various reasons, but the
intuition is right. Lenses should combine just like functions. One
thing that's important about that is id can either pre- or post-
compose with any lens without affecting it.
What does it mean that Lenses compose backwards?
Also, what does this mean: Can't we make (.) behave like functions?
(.) is a function; does using it with a Lens make (.) behave like something else?
The Lens type:
type Lens s t a b = forall f. Functor f => (a -> f b) -> s -> f t
For our illustrative purposes, we can stick to the less general simple lens type, Lens'. The right side then becomes:
forall f. Functor f => (a -> f a) -> s -> f s
Intuitively, (a -> f a) is an operation on a part of a structure of type s, which is promoted to an operation on the whole structure, (s -> f s). (The functor type constructor f is part of the trickery which allows lenses to generalize getters, setters and lots of other things. We do not need to worry about it for now.) In other words:
From the user's point of view, a lens allows you, given a whole, to focus on a part of it.
Implementation-wise, a lens is a function which takes a function of the part and results in a function of the whole.
(Note how, in the descriptions I just made, "part" and "whole" appear in different orders.)
Now, a lens is a function, and functions can be composed. As we know, (.) has type:
(.) :: (y -> z) -> (x -> y) -> (x -> z)
Let us make the involved types simple lenses (For the sake of clarity, I will drop the constraint and the forall). x becomes a -> f a, y becomes s -> f s and z becomes t -> f t. The specialized type of (.) would then be:
((s -> f s) -> t -> f t) -> ((a -> f a) -> s -> f s) -> ((a -> f a) -> t -> f t)
The lens we get as result has type (a -> f a) -> (t -> f t). So, a composed lens firstLens . secondLens takes an operation on the part focused by secondLens and makes it an operation on the whole structure firstLens aims at. That just happens to match the order in which OO-style field references are composed, which is opposite to the order in which vanilla Haskell record accessors are composed.
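To see this order concretely without the lens package (a self-contained sketch; _1', _2' and view' are hand-rolled stand-ins for the library's _1, _2 and view):

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Const (Const (..))

-- The simple van Laarhoven lens type from above.
type Lens' s a = forall f. Functor f => (a -> f a) -> s -> f s

-- Lenses onto the components of a pair.
_1' :: Lens' (a, b) a
_1' k (a, b) = fmap (\a' -> (a', b)) (k a)

_2' :: Lens' (a, b) b
_2' k (a, b) = fmap (\b' -> (a, b')) (k b)

-- view extracts the focused part using the Const functor.
view' :: Lens' s a -> s -> a
view' l = getConst . l Const

main :: IO ()
main =
  -- OO-style order: the outer lens comes first.
  print (view' (_1' . _2') ((1, 2), 3))  -- 2
```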
You could think of the Getter part of a lens as a function, which you can extract using view. For example, the lens way of writing the fst function is:
view _1 :: (a,b) -> a
Now observe:
view _1 . view _2 :: (c, (a,b)) -> a -- First take the second pair element, then the first
view (_1 . _2) :: ((b,a) ,c) -> a -- This is "backwards" (exactly the opposite order of the above)
For lenses, (.) doesn't behave like it would for functions. For functions, f . g means "first apply g, then f", but for lenses, it means first use the lens f, then use the lens g. Actually, the (.) function is the same for both types, but lens' types make it seem like it's backwards.
I keep reusing lambda expressions such as
\x -> (f x, g x)
where I apply the same input to two functions and encapsulate the result in a pair. I can write a function capturing this
combine :: (a -> b) -> (a -> c) -> a -> (b,c)
combine f g x = (f x, g x)
Now the above lambda expression is just combine f g. I have two questions.
I'm interested to know if there is a standard library function that does this that I just can't find.
Out of curiosity, I'd like to rewrite this function in point-free style, but I'm having a lot of trouble with it.
Control.Arrow has the function (&&&) for this. It has a "more general" type, which unfortunately means that Hoogle doesn't find it (maybe this should be considered a bug in Hoogle?).
You can usually figure this sort of thing automatically with pointfree, which lambdabot in #haskell has as a plugin.
For example:
<shachaf> #pl combine f g x = (f x, g x)
<lambdabot> combine = liftM2 (,)
Where liftM2 with the (r ->) instance of Monad has type (a -> b -> c) -> (r -> a) -> (r -> b) -> r -> c. Of course, there are many other ways of writing this point-free, depending on what primitives you allow.
I'm interested to know if there is a standard library function that does this that I just can't find.
It's easy to miss because of the type class, but look at Control.Arrow. Plain Arrows can't be curried or applied, so the Arrow combinators are pointfree by necessity. If you specialize them to (->), you'll find the one you want is this:
(&&&) :: (Arrow a) => a b c -> a b c' -> a b (c, c')
There are other, similar functions, such as the equivalent operation for Either, which specialized to (->) looks like this:
(|||) :: (a -> c) -> (b -> c) -> Either a b -> c
Which is the same as either.
Out of curiosity, I'd like to rewrite this function in point-free style, but I'm having a lot of trouble with it.
Since you're duplicating an input, you need some way of doing that pointfree--the most common way is via the Applicative or Monad instance for (->), for example \f g -> (,) <$> f <*> g. This is essentially an implicit, inline Reader monad, and the argument being split up is the "environment" value. Using this approach, join f x becomes f x x, pure or return become const, fmap becomes (.), and (<*>) becomes the S combinator \f g x -> f x (g x).
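Putting those pieces together (a sketch; combine' is an illustrative name), the Applicative instance for (->) gives the point-free form directly:

```haskell
import Control.Arrow ((&&&))

-- Point-free via the (->) Applicative: the argument plays the role of the
-- implicit "environment" that both f and g read from.
combine' :: (a -> b) -> (a -> c) -> a -> (b, c)
combine' f g = (,) <$> f <*> g

main :: IO ()
main = do
  print (combine' (+ 1) (* 2) 5)  -- (6,10)
  print (((+ 1) &&& (* 2)) 5)     -- (6,10), same result via Control.Arrow
```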
There are actually quite a few ways of doing this. The most common way is to use the (&&&) function from Control.Arrow:
f &&& g
However, often you have more functions or need to pass the result to another function, in which case it is much more convenient to use applicative style. Then
uncurry (+) . (f &&& g)
becomes
liftA2 (+) f g
As noted this can be used with more than one function:
liftA3 zip3 f g h
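For example, a minimal runnable sketch of the two styles above, with concrete functions standing in for f, g and h:

```haskell
import Control.Applicative (liftA2, liftA3)
import Control.Arrow ((&&&))

main :: IO ()
main = do
  -- Arrow style, feeding the pair into another function:
  print ((uncurry (+) . ((+ 1) &&& (* 2))) 5)  -- 16
  -- The same computation in applicative style:
  print (liftA2 (+) (+ 1) (* 2) 5)             -- 16
  -- And with three functions over a shared input:
  print (liftA3 zip3 (take 2) (drop 1) reverse [1, 2, 3])
```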