While musing on what more useful standard class to suggest in place of this one
class Coordinate c where
createCoordinate :: x -> y -> c x y
getFirst :: c x y -> x
getSecond :: c x y -> y
addCoordinates :: (Num x, Num y) => c x y -> c x y -> c x y
it occurred to me that instead of something VectorSpace-y or R2, a rather more general beast might lurk here: a Type -> Type -> Type whose two contained types can both be extracted. Hm, but can they actually be extracted?
Turns out neither the comonad nor the bifunctors package contains something called Bicomonad. The question is, would such a class even make sense, category-theoretically? Unlike Bimonad (which also isn't defined, and I couldn't really see how it might look), a naïve definition seems plausible:
class Bifunctor c => Bicomonad c where
fst :: c x y -> x
snd :: c x y -> y
bidup :: c x y -> c (c x y) (c x y)
probably with the laws
fst . bidup ≡ id
snd . bidup ≡ id
bimap fst snd . bidup ≡ id
bimap bidup bidup . bidup ≡ bidup . bidup
but I find it disquieting that both fields of the result of bidup contain the same type, and there are quite a number of other, perhaps “better” conceivable signatures.
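For concreteness, here is a minimal sketch of an instance for plain pairs (my own example, not from any package); the proposed laws all hold definitionally:
import Prelude hiding (fst, snd)
import Data.Bifunctor (Bifunctor)

-- restating the proposed class so the sketch is self-contained
class Bifunctor c => Bicomonad c where
  fst   :: c x y -> x
  snd   :: c x y -> y
  bidup :: c x y -> c (c x y) (c x y)

instance Bicomonad (,) where
  fst (x, _) = x
  snd (_, y) = y
  bidup p    = (p, p)
-- fst . bidup ≡ id, snd . bidup ≡ id, bimap fst snd . bidup ≡ id and
-- bimap bidup bidup . bidup ≡ bidup . bidup all reduce to reflexivity here.

main :: IO ()
main = print (bidup (1 :: Int, 'x'))  -- ((1,'x'),(1,'x'))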
Any thoughts?
This is not an answer, but for Bimonad, how about this?
class Biapplicative p => Bimonad p where
(>>==) :: p a b -> (a -> b -> p c d) -> p c d
biap :: Bimonad p => p (a -> b) (c -> d) -> p a c -> p b d
biap p q = p >>== \ab cd -> q >>== \a c -> bipure (ab a) (cd c)
instance Bimonad (,) where
(a,b) >>== f = f a b
I don't know if this is categorically right/interesting, or even remotely useful, but it smells right from a Haskell perspective. Does it match your Bicomonad or something similar?
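For what it's worth, here is a standalone check (my own sketch, with bipure inlined rather than imported from the bifunctors package) that the derived biap for pairs does the obvious component-wise application:
-- bipure for pairs, normally supplied by the Biapplicative instance
bipure :: a -> b -> (a, b)
bipure = (,)

(>>==) :: (a, b) -> (a -> b -> (c, d)) -> (c, d)
(a, b) >>== f = f a b

biap :: (a -> b, c -> d) -> (a, c) -> (b, d)
biap p q = p >>== \ab cd -> q >>== \a c -> bipure (ab a) (cd c)

main :: IO ()
main = print (biap ((+ 1), reverse) (3 :: Int, "abc"))  -- prints (4,"cba")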
Say I have functions which accept the same parameters and I want to test if their outputs are equivalent for the same input.
f :: a -> b -> c
g :: a -> b -> c
f a b == g a b
How can I package the parameters a and b into x so I can write the following instead?
f x == g x
What are the best ways to accomplish this without needing to wrap the functions themselves?
The only way to do exactly what you’re asking is to use uncurry:
let
x = (a, b)
in uncurry f x == uncurry g x
(Or uncurryN for N arguments.)
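For instance (with a made-up f and g, just to show the shape):
f, g :: Int -> Int -> Int
f a b = a + b
g a b = b + a

agree :: (Int, Int) -> Bool
agree x = uncurry f x == uncurry g x   -- agree (2, 3) == True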
However, instead of packaging the arguments in a tuple, you could use the (->) x instance of Applicative (i.e., functions taking x as input) to implicitly “spread” the arguments to the parameters of both functions, so at least you only have to mention them once. This instance is commonly used in point-free code.
For example, using liftA2 specialised to this instance:
-- General type:
liftA2 :: Applicative f => (a -> b -> c) -> f a -> f b -> f c
-- Specialised to ‘(->) x’ (using TypeApplications syntax):
liftA2 @((->) _) :: (a -> b -> c) -> (x -> a) -> (x -> b) -> (x -> c)
You get this pattern:
liftA2 h f g x
-- =
(h <$> f <*> g) x
-- =
h (f x) (g x)
To lift more arguments, you add another liftA2 or … <$> … <*> …:
liftA2 (liftA2 h) f g x y
-- =
(liftA2 h <$> f <*> g) x y
-- =
h (f x y) (g x y)
So in a case like yours:
f, g :: Int -> Char -> Bool
f i c = chr i == c
g i c = i == ord c
(liftA2 . liftA2) (==) f g :: Int -> Char -> Bool
-- =
liftA2 (liftA2 (==)) f g
-- =
(\ x y -> f x y == g x y)
The N in liftAN corresponds to the number of functions; the number of liftAN calls corresponds to the number of arguments.
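Put together as a standalone check (imports and a main wrapper added by way of example):
import Control.Applicative (liftA2)
import Data.Char (chr, ord)

f, g :: Int -> Char -> Bool
f i c = chr i == c
g i c = i == ord c

agree :: Int -> Char -> Bool
agree = (liftA2 . liftA2) (==) f g

main :: IO ()
main = print (agree 97 'a')   -- True: chr 97 == 'a' and ord 'a' == 97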
I heard about this construction which is loosely described as “a traversal represented in data, applied to some structure, without the need for the applicative”
It can be defined as:
data X a b r
  = Done r
  | Step a (X a b (b -> r))
A word description would be as follows:
the type X a b r describes the shape of a structure
which contains things of type a
and for each a you get the opportunity to produce something of type b
and provided you do that for each a,
you get something of type r.
Thus a “traversal” of a list, [a], has type X a b [b], because if you can turn each a of the list into a b then you get a [b].
My question is: what is this thing called? Is there a reference to more information about it?
Example usage:
instance Functor (X a b) where
fmap f (Done r) = Done (f r)
fmap f (Step a next) = Step a (fmap (f .) next)
f :: [a] -> X a b [b]
f [] = Done []
f (a:as) = Step a (fmap (flip (:)) (f as))
g :: Applicative f => (a -> f b) -> X a b r -> f r
g f (Done r) = pure r
g f (Step a next) = g f next <*> f a
More generally:
instance Applicative (X a b) where
pure x = Done x
Done f <*> y = fmap f y
Step a next <*> y = Step a (fmap flip next <*> y)
t :: Traversable t => t a -> X a b (t b)
t = traverse (\a -> Step a (Done id))
And, assuming I haven’t made any errors, we should find that (at least up to the order in which g sequences the effects of the individual steps):
g h . t == traverse h
Edit: I’ve thought about this some more. There is something this doesn’t have which a traversal has: a traversal can split up the computation into something that isn’t “one at a time,” for example to traverse a binary tree one can traverse the left and right half “in parallel.” Here is a structure that I think gives the same effect:
data Y a b r
  = Done r
  | One a (b -> r)
  | forall s t. Split (Y a b s) (Y a b t) (s -> t -> r)
(Slightly vague syntax as I don’t remember it and don’t want to write this as a gadt)
f1 :: X a b r -> Y a b r
f1 (Done x) = Done x
f1 (Step a next) = Split (One a id) (f1 next) (flip ($))
f2 :: Y a b r -> X a b r
f2 (Done x) = Done x
f2 (One a f) = Step a (Done f)
f2 (Split x y f) = f <$> f2 x <*> f2 y
I ended up figuring it out. See the video and slides of a talk I gave:
slides/pdf
video
Original question:
In my effort to understand generic recursion schemes (i.e., that use Fix) I have found it useful to write list-only versions of the various schemes. It makes it much easier to understand the actual schemes (without the additional overhead of the Fix stuff).
However, I have not yet figured out how to define list-only versions of zygo and futu.
Here are my specialised definitions so far:
cataL :: (a -> b -> b) -> b -> [a] -> b
cataL f b (a : as) = f a (cataL f b as)
cataL _ b [] = b
paraL :: (a -> [a] -> b -> b) -> b -> [a] -> b
paraL f b (a : as) = f a as (paraL f b as)
paraL _ b [] = b
-- TODO: histo
-- DONE: zygo (see below)
anaL :: (b -> (a, b)) -> b -> [a]
anaL f b = let (a, b') = f b in a : anaL f b'
anaL' :: (b -> Maybe (a, b)) -> b -> [a]
anaL' f b = case f b of
Just (a, b') -> a : anaL' f b'
Nothing -> []
apoL :: ([b] -> Maybe (a, Either [b] [a])) -> [b] -> [a]
apoL f b = case f b of
Nothing -> []
Just (x, Left c) -> x : apoL f c
Just (x, Right e) -> x : e
-- DONE: futu (see below)
hyloL :: (a -> c -> c) -> c -> (b -> Maybe (a, b)) -> b -> c
hyloL f z g = cataL f z . anaL' g
hyloL' :: (a -> c -> c) -> c -> (c -> Maybe (a, c)) -> c
hyloL' f z g = case g z of
Nothing -> z
Just (x,z') -> f x (hyloL' f z' g)
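For example (my own quick check, restating the relevant definitions so it runs standalone), hyloL computes a factorial by unfolding a countdown and folding it back up with (*):
cataL :: (a -> b -> b) -> b -> [a] -> b
cataL f b (a : as) = f a (cataL f b as)
cataL _ b []       = b

anaL' :: (b -> Maybe (a, b)) -> b -> [a]
anaL' f b = case f b of
  Just (a, b') -> a : anaL' f b'
  Nothing      -> []

hyloL :: (a -> c -> c) -> c -> (b -> Maybe (a, b)) -> b -> c
hyloL f z g = cataL f z . anaL' g

factorial :: Integer -> Integer
factorial = hyloL (*) 1 (\k -> if k <= 0 then Nothing else Just (k, k - 1))

main :: IO ()
main = print (factorial 5)  -- 120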
How do you define histo, zygo and futu for lists?
Zygomorphism is the high-falutin' mathsy name we give to folds built from two semi-mutually recursive functions. I'll give an example.
Imagine a function pm :: [Int] -> Int (for plus-minus) which intersperses + and - alternately through a list of numbers, such that pm [v,w,x,y,z] = v - (w + (x - (y + z))). You can write it out using primitive recursion:
lengthEven :: [a] -> Bool
lengthEven = even . length
pm0 [] = 0
pm0 (x:xs) = if lengthEven xs
then x - pm0 xs
else x + pm0 xs
Clearly pm0 is not compositional - you need to inspect the length of the whole list at each position to determine whether you're adding or subtracting. Paramorphism models primitive recursion of this sort, when the folding function needs to traverse the whole subtree at each iteration of the fold. So we can at least rewrite the code to conform to an established pattern.
paraL :: (a -> [a] -> b -> b) -> b -> [a] -> b
paraL f z [] = z
paraL f z (x:xs) = f x xs (paraL f z xs)
pm1 = paraL (\x xs acc -> if lengthEven xs then x - acc else x + acc) 0
But this is inefficient. lengthEven traverses the whole list at each iteration of the paramorphism, resulting in an O(n²) algorithm.
We can make progress by noting that both lengthEven and para can be expressed as a catamorphism with foldr...
cataL = foldr
lengthEven' = cataL (\_ p -> not p) True
paraL' f z = snd . cataL (\x (xs, acc) -> (x:xs, f x xs acc)) ([], z)
... which suggests that we may be able to fuse the two operations into a single pass over the list.
pm2 = snd . cataL (\x (isEven, total) -> (not isEven, if isEven
then x - total
else x + total)) (True, 0)
We had a fold which depended on the result of another fold, and we were able to fuse them into one traversal of the list. Zygomorphism captures exactly this pattern.
zygoL :: (a -> b -> b) -> -- a folding function
(a -> b -> c -> c) -> -- a folding function which depends on the result of the other fold
b -> c -> -- zeroes for the two folds
[a] -> c
zygoL f g z e = snd . cataL (\x (p, q) -> (f x p, g x p q)) (z, e)
On each iteration of the fold, f sees its answer from the last iteration as in a catamorphism, but g gets to see both functions' answers. g entangles itself with f.
We'll write pm as a zygomorphism by using the first folding function to count whether the list is even or odd in length and the second one to calculate the total.
pm3 = zygoL (\_ p -> not p) (\x isEven total -> if isEven
then x - total
else x + total) True 0
This is classic functional programming style. We have a higher order function doing the heavy lifting of consuming the list; all we had to do was plug in the logic to aggregate results. The construction evidently terminates (you need only prove termination for foldr), and it's more efficient than the original hand-written version to boot.
Aside: @AlexR points out in the comments that zygomorphism has a big sister called mutumorphism, which captures mutual recursion in all its glory. mutu generalises zygo in that both the folding functions are allowed to inspect the other's result from the previous iteration.
mutuL :: (a -> b -> c -> b) ->
(a -> b -> c -> c) ->
b -> c ->
[a] -> c
mutuL f g z e = snd . cataL (\x (p, q) -> (f x p q, g x p q)) (z, e)
You recover zygo from mutu simply by ignoring the extra argument.
zygoL f = mutuL (\x p q -> f x p)
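As a small illustration (my own example, restating cataL and mutuL so it runs standalone), the classic even/odd mutual recursion drops straight into mutuL:
cataL :: (a -> b -> b) -> b -> [a] -> b
cataL = foldr

mutuL :: (a -> b -> c -> b) -> (a -> b -> c -> c) -> b -> c -> [a] -> c
mutuL f g z e = snd . cataL (\x (p, q) -> (f x p q, g x p q)) (z, e)

-- The first fold (b) tracks "length so far is odd",
-- the second fold (c) tracks "length so far is even".
isLengthEven :: [a] -> Bool
isLengthEven =
  mutuL (\_ _oddRest evenRest -> evenRest)  -- odd (x:xs)  = even xs
        (\_ oddRest _evenRest -> oddRest)   -- even (x:xs) = odd xs
        False                               -- the "odd" fold's value for []
        True                                -- the "even" fold's value for []

main :: IO ()
main = print (isLengthEven "haskell")  -- False (length 7)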
Of course, all of these folding patterns generalise from lists to the fixed point of an arbitrary functor:
newtype Fix f = Fix { unFix :: f (Fix f) }
cata :: Functor f => (f a -> a) -> Fix f -> a
cata f = f . fmap (cata f) . unFix
para :: Functor f => (f (Fix f, a) -> a) -> Fix f -> a
para f = snd . cata (\x -> (Fix $ fmap fst x, f x))
zygo :: Functor f => (f b -> b) -> (f (b, a) -> a) -> Fix f -> a
zygo f g = snd . cata (\x -> (f $ fmap fst x, g x))
mutu :: Functor f => (f (b, a) -> b) -> (f (b, a) -> a) -> Fix f -> a
mutu f g = snd . cata (\x -> (f x, g x))
Compare the definition of zygo with that of zygoL. Also note that zygo Fix = para, and that the latter three folds can be implemented in terms of cata. In foldology everything is related to everything else.
You can recover the list version from the generalised version.
data ListF a r = Nil_ | Cons_ a r deriving Functor
type List a = Fix (ListF a)
zygoL' :: (a -> b -> b) -> (a -> b -> c -> c) -> b -> c -> List a -> c
zygoL' f g z e = zygo k l
where k Nil_ = z
k (Cons_ x y) = f x y
l Nil_ = e
l (Cons_ x (y, z)) = g x y z
pm4 = zygoL' (\_ p -> not p) (\x isEven total -> if isEven
then x - total
else x + total) True 0
Histomorphism models dynamic programming, the technique of tabulating the results of previous subcomputations. (It's sometimes called course-of-value induction.) In a histomorphism, the folding function has access to a table of the results of earlier iterations of the fold. Compare this with the catamorphism, where the folding function can only see the result of the last iteration. The histomorphism has the benefit of hindsight - you can see all of history.
Here's the idea. As we consume the input list, the folding algebra will output a sequence of bs. histo will jot down each b as it emerges, attaching it to the table of results. The number of items in the history is equal to the number of list layers you've processed - by the time you've torn down the whole list, the history of your operation will have a length equal to that of the list.
This is what the history of consuming a list looks like:
data History a b = Ancient b | Age a b (History a b)
History is a list of pairs of things and results, with an extra result at the end corresponding to the []-thing. We'll pair up each layer of the input list with its corresponding result.
cataL = foldr
history :: (a -> History a b -> b) -> b -> [a] -> History a b
history f z = cataL (\x h -> Age x (f x h) h) (Ancient z)
Once you've folded up the whole list from right to left, your final result will be at the top of the stack.
headH :: History a b -> b
headH (Ancient x) = x
headH (Age _ x _) = x
histoL :: (a -> History a b -> b) -> b -> [a] -> b
histoL f z = headH . history f z
(It happens that History a is a comonad, but headH (née extract) is all we need to define histoL.)
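For a small, self-contained illustration (my own example, restating the definitions above), here is a fold where each step consults not just the previous result but the one before it, which a plain cataL cannot do directly:
data History a b = Ancient b | Age a b (History a b)

headH :: History a b -> b
headH (Ancient x) = x
headH (Age _ x _) = x

history :: (a -> History a b -> b) -> b -> [a] -> History a b
history f z = foldr (\x h -> Age x (f x h) h) (Ancient z)

histoL :: (a -> History a b -> b) -> b -> [a] -> b
histoL f z = headH . history f z

-- add each element to the results from one and two steps back
skipSum :: [Int] -> Int
skipSum = histoL step 0
  where
    step x h = x + headH h + back2 h
    back2 (Age _ _ h') = headH h'
    back2 (Ancient _)  = 0

main :: IO ()
main = print (skipSum [1, 2, 3])  -- 9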
History labels each layer of the input list with its corresponding result. The cofree comonad captures the pattern of labelling each layer of an arbitrary structure.
data Cofree f a = Cofree { headC :: a, tailC :: f (Cofree f a) }
(I came up with History by plugging ListF into Cofree and simplifying.)
Compare this with the free monad,
data Free f a = Free (f (Free f a))
| Return a
Free is a coproduct type; Cofree is a product type. Free layers up a lasagne of fs, with values a at the bottom of the lasagne. Cofree layers up the lasagne with values a at each layer. Free monads are generalised externally-labelled trees; cofree comonads are generalised internally-labelled trees.
With Cofree in hand, we can generalise from lists to the fixpoint of an arbitrary functor,
newtype Fix f = Fix { unFix :: f (Fix f) }
cata :: Functor f => (f b -> b) -> Fix f -> b
cata f = f . fmap (cata f) . unFix
histo :: Functor f => (f (Cofree f b) -> b) -> Fix f -> b
histo f = headC . cata (\x -> Cofree (f x) x)
and once more recover the list version.
data ListF a r = Nil_ | Cons_ a r deriving Functor
type List a = Fix (ListF a)
type History' a b = Cofree (ListF a) b
histoL' :: (a -> History' a b -> b) -> b -> List a -> b
histoL' f z = histo g
where g Nil_ = z
g (Cons_ x h) = f x h
Aside: histo is the dual of futu. Look at their types.
histo :: Functor f => (f (Cofree f a) -> a) -> (Fix f -> a)
futu :: Functor f => (a -> f (Free f a)) -> (a -> Fix f)
futu is histo with the arrows flipped and with Cofree replaced by Free. Histomorphisms see the past; futumorphisms predict the future. And much like cata f . ana g can be fused into a hylomorphism, histo f . futu g can be fused into a chronomorphism.
Even if you skip the mathsy parts, this paper by Hinze and Wu features a good, example-driven tutorial on histomorphisms and their usage.
Since no one else has answered for futu yet, I'll try to stumble my way through. I'm going to use ListF a b (i.e. Base [a]), defined as data ListF a b = ConsF a b | NilF.
Taking the type in recursion-schemes: futu :: Unfoldable t => (a -> Base t (Free (Base t) a)) -> a -> t.
I'm going to ignore the Unfoldable constraint and substitute [b] in for t.
(a -> Base [b] (Free (Base [b]) a)) -> a -> [b]
(a -> ListF b (Free (ListF b) a)) -> a -> [b]
Free (ListF b) a is a list, possibly with an a-typed hole at the end. This means that it's isomorphic to ([b], Maybe a). So now we have:
(a -> ListF b ([b], Maybe a)) -> a -> [b]
Eliminating the last ListF, noticing that ListF a b is isomorphic to Maybe (a, b):
(a -> Maybe (b, ([b], Maybe a))) -> a -> [b]
Now, I'm pretty sure that playing type-tetris leads to the only sensible implementation:
futuL f x = case f x of
Nothing -> []
Just (y, (ys, mz)) -> y : (ys ++ fz)
where fz = case mz of
Nothing -> []
Just z -> futuL f z
Summarizing the resulting function: futuL takes a seed value and a function which either stops or produces at least one result, together with (possibly) a new seed value to continue from.
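Here is a quick usage sketch (my own example, restating futuL so it runs standalone): each seed emits its number twice and then hands over a smaller seed:
futuL :: (a -> Maybe (b, ([b], Maybe a))) -> a -> [b]
futuL f x = case f x of
  Nothing            -> []
  Just (y, (ys, mz)) -> y : (ys ++ fz)
    where fz = case mz of
                 Nothing -> []
                 Just z  -> futuL f z

doubledCountdown :: Int -> [Int]
doubledCountdown = futuL step
  where
    step n
      | n <= 0    = Nothing
      | otherwise = Just (n, ([n], if n > 1 then Just (n - 1) else Nothing))

main :: IO ()
main = print (doubledCountdown 3)  -- [3,3,2,2,1,1]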
At first I thought this was equivalent to
notFutuL :: (a -> ([b], Maybe a)) -> a -> [b]
notFutuL f x = case f x of
(ys, mx) -> ys ++ case mx of
Nothing -> []
Just x' -> notFutuL f x'
And in practice, perhaps it is, more or less, but the one significant difference is that the real futu guarantees productivity (i.e. if f always returns, you will never be stuck waiting forever for the next list element).
I have been reading this article to understand lenses. I know this is different from Edward Kmett's lens package, but it's nonetheless useful for the fundamentals.
So, A Lens is defined like this:
type Lens a b = (a -> b, b -> a -> a)
It has been mentioned that lenses form a category, and I have been trying to create an instance of the Category typeclass. For a start, I wrote the type signatures for the functions:
(.) :: Lens y z -> Lens x y -> Lens x z
id :: Lens x x
And after this, I just stared at it all day. What exactly is the thought process for writing its definition?
I found this article (Lenses from Scratch on fpcomplete by Joseph Abrahamson) to be very good. It starts from the same representation of lenses you started with, defines composition for it, and continues along the path to a representation more similar to lens.
EDIT: I find type holes to be excellent when doing this kind of thing:
(<.>):: Lens y z -> Lens x y -> Lens x z
(getA,setA) <.> (getB,setB) = (_,_)
So now we have two holes; the first one, in the tuple, says (output cleaned):
Found hole ‘_’ with type: x -> z
...
Relevant bindings include
setB :: y -> x -> x
getB :: x -> y
setA :: z -> y -> y
getA :: y -> z
(<.>) :: Lens y z -> Lens x y -> Lens x z
Looking hard at the bindings, we already have what we need! getB :: x -> y and getA :: y -> z together with function composition (.) :: (b -> c) -> (a -> b) -> a -> c
So we happily insert this:
(<.>):: Lens y z -> Lens x y -> Lens x z
(getA,setA) <.> (getB,setB) = (getA . getB, _)
And continue with the second type hole, which says:
Found hole ‘_’ with type: z -> x -> x
Relevant bindings include
setB :: y -> x -> x
getB :: x -> y
setA :: z -> y -> y
getA :: y -> z
The most similar thing we have is setA :: z -> y -> y. We start by inserting a lambda, capturing the arguments:
(getA,setA) <.> (getB,setB) = (getA . getB, \z x -> _)
changing your type hole to:
Found hole ‘_’ with type: x
Relevant bindings include
x :: x
z :: z
setB :: y -> x -> x
getB :: x -> y
setA :: z -> y -> y
getA :: y -> z
We could insert x, which type checks, but it does not give us what we want (nothing would happen when setting). The only other binding that could give us an x is setB, so we insert that:
(getA,setA) <.> (getB,setB) = (getA . getB, \z x -> setB _ _)
Our first type hole says:
Found hole ‘_’ with type: y
Relevant bindings include
x :: x
z :: z
setB :: y -> x -> x
getB :: x -> y
setA :: z -> y -> y
getA :: y -> z
So we need a y. Looking at what is in scope, getB can give us a y if we give it an x, which we happen to have, but this would again lead to a useless lens that does nothing. The alternative is to use setA:
(getA,setA) <.> (getB,setB) = (getA . getB, \z x -> setB (setA _ _) _)
(Speeding things up a little from here on.)
Again, the first hole wants something of type z, which we happen to have as an argument to our lambda:
(getA,setA) <.> (getB,setB) = (getA . getB, \z x -> setB (setA z _) _)
To fill the first type hole of type y we can use getB :: x -> y giving it the argument of our lambda:
(getA,setA) <.> (getB,setB) = (getA . getB, \z x -> setB (setA z (getB x)) _)
Which leaves us with one remaining type hole, which can trivially be replaced by x, leading to the final definition:
(<.>):: Lens y z -> Lens x y -> Lens x z
(getA,setA) <.> (getB,setB) = (getA . getB, \z x -> setB (setA z (getB x)) x)
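To see the composition in action, here is a small usage sketch (my own example) that composes two "first component" lenses over nested pairs:
type Lens a b = (a -> b, b -> a -> a)

(<.>) :: Lens y z -> Lens x y -> Lens x z
(getA, setA) <.> (getB, setB) = (getA . getB, \z x -> setB (setA z (getB x)) x)

-- a lens focusing on the first component of a pair
fstL :: Lens (a, c) a
fstL = (\(x, _) -> x, \x' (_, y) -> (x', y))

-- focus on the innermost first component of a nested pair
innerL :: Lens ((a, c), d) a
innerL = fstL <.> fstL

main :: IO ()
main = do
  print (fst innerL ((1 :: Int, 'a'), True))     -- 1
  print (snd innerL 10 ((1 :: Int, 'a'), True))  -- ((10,'a'),True)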
You can try to define id for yourself, using type holes and Hoogle if necessary.
Try this:
(.) :: Lens y z -> Lens x y -> Lens x z
(getZfromY , setZinY) . (getYfromX , setYinX) = (getZfromX , setZinX)
where getZfromX someX = ...
setZinX someZ someX = ...
The idea is: combine the two getters to make the new getter, and combine the two setters to make a new setter.
For the identity, think about:
id :: Lens x x
id = (getXfromX , setXinX)
where getXfromX someX = ...
setXinX newX oldX = ...
It seems to be a fairly straightforward process. But you also need to check that you get a category (this requires equational reasoning) because, for example, there is at least one more way to implement the setter of id with type x -> x -> x, and only one of those will make a category.
So, let's start with getting functions of the right type.
Lens y z -> Lens x y -> Lens x z ==
(y->z, z->y->y) -> (x->y, y->x->x) -> (x->z, z->x->x)
It seems clear how to get x->z from x->y and y->z - compose. Well, and you have ways to construct new x from old x and new y, and a way to get old y from old x, so if you can construct new y from z and old y, you are done.
(.) (yz, zyy) (xy, yxx) = (yz . xy, \z x -> yxx (zyy z (xy x)) x)
Similarly for id:
Lens x x ==
(x->x, x->x->x)
So
id = (id, const)
So far so good, the types check. Now let's check that we've got a category. Associativity can be checked along the same lines; the interesting part is the identity law:
f . id = f = id . f
Checking one way (a bit informal, so we need to bear in mind that . and id refer to the lens category in f . id, but to ordinary functions in fg . id):
f . id = (fg, fs) . (id, const) =
(fg . id, \z x -> const (fs z (id x)) x) =
(fg, \z x -> fs z (id x)) = (fg, fs)
Checking the other way:
id . f = (id, const) . (fg, fs) =
(id . fg, \z x -> fs (const z (fg x)) x) =
(fg, \z x -> fs z x) = (fg, fs)
So I know that:
(.) = (f.g) x = f (g x)
And its type is (B->C)->(A->B)->A->C
But what about:
(.)(.) = _? = _?
How is this represented? I thought of:
(.)(.) = (f.g)(f.g)x = f(g(f(g x))) // this
(.)(.) = (f.g.h)x = f(g(h x)) // or this
But when I tried to work out its type, it didn't match what GHCi told me.
So what do the two "_?" placeholders stand for?
Also, what does the function/operator $ do?
First off, you're being sloppy with your notation.
(.) = (f.g) x = f (g x) -- this isn't true
What is true:
(.) f g x = (f.g) x = f (g x)
(.) = \f g x -> f (g x)
And its type is given by
(.) :: (b -> c) -> (a -> b) -> a -> c
-- n.b. lower case, because they're type *variables*
Meanwhile
(.)(.) :: (a -> b -> d) -> a -> (c -> b) -> c -> d
-- I renamed the variables ghci gave me
Now let's work out
(.)(.) = (\f' g' x' -> f' (g' x')) (\f g x -> f (g x))
= \g' x' -> (\f g x -> f (g x)) (g' x')
= \g' x' -> \g x -> (g' x') (g x)
= \f y -> \g x -> (f y) (g x)
= \f y g x -> f y (g x)
= \f y g x -> (f y . g) x
= \f y g -> f y . g
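As a quick sanity check of that final form (my own example):
main :: IO ()
main = print (((.)(.)) (+) 1 (* 2) 3)  -- 7, i.e. 1 + (2 * 3), matching \f y g x -> f y (g x)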
And ($)?
($) :: (a -> b) -> a -> b
f $ x = f x
($) is just function application. But whereas function application via juxtaposition is high precedence, function application via ($) is low precedence.
square $ 1 + 2 * 3 = square (1 + 2 * 3)
square 1 + 2 * 3 = (square 1) + 2 * 3 -- these lines are different
As dave4420 mentions,
(.) :: (b -> c) -> (a -> b) -> a -> c
So what is the type of (.) (.)? dave4420 skips that part, so here it is: (.) accepts a value of type b -> c as its first argument, so
(.) :: ( b -> c ) -> (a -> b) -> a -> c
(.) :: (d -> e) -> ((f -> d) -> f -> e)
so we have b ~ d->e and c ~ (f -> d) -> f -> e, and the resulting type of (.)(.) is (a -> b) -> a -> c. Substituting, we get
(a -> d -> e) -> a -> (f -> d) -> f -> e
Renaming, we get (a -> b -> c) -> a -> (d -> b) -> d -> c. This is a function f that expects a binary function g, a value x, a unary function h and another value y:
f g x h y = g x (h y)
That's the only way this type can be realized: g x :: b -> c, h y :: b and so g x (h y) :: c, as needed.
Of course, in Haskell a "unary" function is one that expects one or more arguments; similarly, a "binary" function is one that expects two or more arguments, but not fewer than two (so using e.g. succ is out of the question).
We can also tackle this by writing equations, combinator-style¹. Equational reasoning is easy:
(.) (.) x y z w q =
((.) . x) y z w q =
(.) (x y) z w q =
(x y . z) w q =
x y (z w) q
We just throw in as many variables as needed and then apply the definition back and forth. q here was an extra, so we can throw it away and get the final definition,
_BB x y z w = x y (z w)
(Coincidentally, (.) is known as the B combinator.)
¹ a b c = (\x -> ... body ...) is equivalent to a b c x = ... body ..., and vice versa, provided that x does not appear among {a,b,c}.