Functional composition with multi-valued functions in Haskell?

I was wondering if it was possible to do functional composition with functions that take more than one argument. I want to be able to do something like this
x = (+3).(*)
setting x equal to a function that adds three to the product of two numbers.

There are multiple ways to do it, but they're all somewhat awkward.
((+3).) . (*)
≡ fmap (+3) . (*)
≡ curry $ (+3) . uncurry (*)
≡ \l r -> l*r + 3
Oh, and this is the signature for which there's also a well-known compact definition; guess what it's called...
((.).(.)) (+3) (*)
I'd argue that the lambda solution, being the most explicit, is the best option here.
What helps, and is often done just locally as a one(or two)-liner, is to define this composition as a custom infix:
(.:) :: (c->d) -> (a->b->c) -> a->b->d
f .: i = \l r -> f $ i l r
Which allows you to write simply (+3) .: (*).
BTW, for the similar (b->b->c) -> (a->b) -> a->a->c (precompose the right function to both arguments of the infix) there exists a widely-used standard implementation.
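For concreteness, a small self-contained sketch: the (.:) from above in action, plus the standard combinator alluded to in the last sentence, which is on from Data.Function (the helper names addThreeToProduct and sameLength are just illustrative):
import Data.Function (on)

-- Post-compose a unary function onto the result of a binary one:
(.:) :: (c -> d) -> (a -> b -> c) -> a -> b -> d
f .: i = \l r -> f (i l r)

addThreeToProduct :: Integer -> Integer -> Integer
addThreeToProduct = (+3) .: (*)      -- addThreeToProduct 4 5 == 23

-- `on` pre-composes the right function onto both arguments of the infix:
sameLength :: [a] -> [a] -> Bool
sameLength = (==) `on` length        -- sameLength "ab" "cd" == True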

Yes, I'd use something like this:
http://hackage.haskell.org/packages/archive/composition/latest/doc/html/Data-Composition.html

You could also use the B1 or blackbird combinator from Data.Aviary.Birds. I think for real work I'd use a lambda though.
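For illustration, a self-contained sketch: B1 is just ((.) . (.)), so it can be defined locally without the extra dependency (the name blackbird below mirrors the Data.Aviary.Birds export mentioned above):
-- B1 / "blackbird": compose a unary function after a binary one.
blackbird :: (c -> d) -> (a -> b -> c) -> a -> b -> d
blackbird = (.) . (.)

x :: Integer -> Integer -> Integer
x = blackbird (+3) (*)     -- x 4 5 == 23, same as ((.).(.)) (+3) (*)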

Related

Does the function monad really offer something more than the function applicative functor? If so, what?

For the function monad I find that (<*>) and (>>=)/(=<<) have two strikingly similar types. In particular, (=<<) makes the similarity more apparent:
(<*>) :: (r -> a -> b) -> (r -> a) -> (r -> b)
(=<<) :: (a -> r -> b) -> (r -> a) -> (r -> b)
So it's like both (<*>) and (>>=)/(=<<) take a binary function and a unary function, and constrain one of the two arguments of the former to be determined from the other one, via the latter. After all, we know that for the function applicative/monad,
f <*> g = \x -> f x (g x)
f =<< g = \x -> f (g x) x
And they look so strikingly similar (or symmetric, if you want), that I can't help but think of the question in the title.
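Concretely, a non-commutative binary function makes the two definitions above visible in GHCi:
> ((-) <*> (*2)) 3     -- \x -> x - (x*2)
-3
> ((-) =<< (*2)) 3     -- \x -> (x*2) - x
3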
As regards monads being "more powerful" than applicative functors, in the hardcopy of LYAH's For a Few Monads More chapter, the following is stated:
[…] join cannot be implemented by just using the functions that functors and applicatives provide.
i.e. join can't be implemented in terms of (<*>), pure, and fmap.
But what about the function applicative/monad I mentioned above?
I know that join === (>>= id), and that for the function monad that boils down to \f x -> f x x, i.e. a binary function is made unary by passing its single argument in both argument positions.
Can I express it in terms of (<*>)? Well, actually I think I can: isn't flip ($) <*> f === join f correct? Isn't flip ($) <*> f an implementation of join which does without (>>=)/(=<<) and return?
However, thinking about the list applicative/monad, I can express join without explicitly using (=<<)/(>>=) and return (and not even (<*>), fwiw): join = concat; so probably the implementation join f = flip ($) <*> f is also kind of a trick that doesn't really show whether I'm relying just on Applicative or also on Monad.
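(As a quick GHCi sanity check, flip ($) <*> f does indeed behave like join f, e.g. with (+) as the binary function:)
> (flip ($) <*> (+)) 5     -- should act like join (+) = \x -> x + x
10
> ((+) >>= id) 5           -- i.e. join (+), written via (>>= id)
10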
When you implement join like that, you're using knowledge of the function type beyond what Applicative gives you. This knowledge is encoded in the use of ($). That's the "application" operator, which is the core of what a function even is. Same thing happens with your list example: you're using concat, which is based on knowledge of the nature of lists.
In general, if you can use the knowledge of a particular monad, you can express computations of any power. For example, with Maybe you can just match on its constructors and express anything that way. When LYAH says that monad is more powerful than applicative, it means "as abstractions", not applied to any particular monad.
edit2: The problem with the question is that it is vague. It uses a notion ("more powerful") that is not defined at all and leaves the reader guessing as to its meaning. Thus we can only get meaningless answers. Of course anything can be coded while using all arsenal of Haskell at our disposal. This is a vacuous statement. And it wasn't the question.
The cleared-up question, as far as I can see, is: using the methods from Monad / Applicative / Functor respectively as primitives, without using explicit pattern matching at all, is the class of computations that can thus be expressed strictly larger for one or the other set of primitives in use? Now this can be meaningfully answered.
Functions are opaque though. No pattern matching is present anyway. Without restricting what we can use, again there's no meaning to the question. The restriction then becomes, the explicit use of named arguments, the pointful style of programming, so that we only allow ourselves to code in combinatory style.
So then, for lists, with fmap and app (<*>) only, there are only so many computations we can express, and adding join to our arsenal does make that class larger. Not so with functions. join = W = CSI = flip app id. The end.
Having implemented app f g x = (f x) (g x) = id (f x) (g x) :: (->) r (a->b) -> (->) r a -> (->) r b, I already have flip app id :: (->) r (r->b) -> (->) r b, I might as well call it join since the type fits. It already exists whether I wrote it or not. On the other hand, from app fs xs :: [] (a->b) -> [] a -> [] b, I can't seem to get [] ([] b) -> [] b. Both ->s in (->) r (a->b) are the same; functions are special.
(BTW, I don't see at the moment how to code the lists' app explicitly without actually coding up join as well. Using list comprehensions is equivalent to using concat; and concat is not an implementation of join, it is join).
join f = f <*> id
is simple enough that there should be no doubts.
(edit: well, apparently there were still doubts).
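Spelled out with the function instance's definition of <*>:
join f  =  f <*> id
        =  \x -> f x (id x)   -- since f <*> g = \x -> f x (g x)
        =  \x -> f x x        -- which is (>>= id) f, i.e. join f, for functions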
(=<<) = (<*>) . flip for functions. That's it. That's what it means that for functions Monad and Applicative Functor are the same. flip is a generally applicable combinator; concat isn't. There's a certain conflation there with functions, sure. But there's no specific function-manipulating function there (the way concat is a specific list-manipulating function), or anywhere, because functions are opaque.
Seen as a particular data type, it can be subjected to pattern matching. As a Monad though it only knows about >>= and return. concat does use pattern matching to do its work. id does not.
id here is akin to lists' [], not concat. That it works is precisely what it means that functions seen as Applicative Functor or Monad are the same. Of course in general Monad has more power than Applicative, but that wasn't the question. If you could express join for lists with <*> and [], I'd say it'd mean they have the same power for lists as well.
In (=<<) = (<*>) . flip, both flip and (.) do nothing to the functions they get applied to. So they have no knowledge of those functions' internals. Like, foo = foldr (\x acc -> x+1) 0 will happen to correctly calculate the length of the argument list if that list were e.g. [1,2]. Saying this, building on this, is using some internal knowledge of the function foo (same as concat using internal knowledge of its argument lists, through pattern matching). But just using basic combinators like flip and (.) etc., isn't.
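For completeness, the identity (=<<) = (<*>) . flip claimed above expands the same way:
((<*>) . flip) k g
  =  flip k <*> g
  =  \x -> flip k x (g x)    -- f <*> g = \x -> f x (g x)
  =  \x -> k (g x) x         -- exactly k =<< g for functions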

Combining bind and return

Consider:
x `f` y = x >>= (return . y)
This function f seems very similar to <$> and flip liftM, but <$> doesn't seem to work, and I'd have to define an infix operator for flip liftM to make it look nice; I'm presuming one already exists?
Is there a function like what I've described and what is it?
It is flip liftM, but not <$>. It's also almost exactly the same as flip (<$>), but the latter is for the Functor typeclass, not Monad. (In the latest standard libraries the relationship between Functor and Monad is not yet reflected in the typeclass hierarchy, but it will be.)
If you want to find where this is defined, you go to FP Complete's Hoogle, enter the type you are looking for
Functor f => f a -> (a -> b) -> f b
and discover it is defined in lens.
Your function
x `f` y = x >>= (return . y)
is equivalent to flip fmap, so if you don't mind swapping the order, you can import Data.Functor, define fmap for your type (if it isn't already a Functor), and write it as
y <$> x
(There's no need to wait for Functor to be a superclass of Monad; you can go ahead today and define it.)
This has nice precedence so you can write stuff like
munge = Just . remove bits . add things <$> operation 1
            >>= increase something <$> operation 2
instead of
munge' = do
    thing1 <- operation 1
    let thing2 = Just . remove bits . add things $ thing1
    thing3 <- operation 2
    return . increase something $ thing3
but even nicer, if you import Control.Applicative instead (which also exports <$>), you can combine multiple things, for example:
addLine = (+) <$> readLn <*> readLn >>= print
instead of
addLine' = do
    one <- readLn
    two <- readLn
    print (one + two)
Future-proofing your code
If the Functor-Applicative-proposal goes ahead, you'll have to make all your Monads Applicatives (and hence Functors). You may as well start now.
If your Monad isn't already an Applicative, you can define pure = return and
mf <*> mx = do
    f <- mf
    x <- mx
    return (f x)
If it's not a Functor, you can define
fmap f mx = do
    x <- mx
    return (f x)
The proposal suggests using (<*>) = ap and fmap = liftM, both from Control.Monad, but the definitions above are easy too, and you may well find it even easier in your own Monad.
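Concretely, with the suggested definitions the instances look like this; MyMonad here is a throwaway Identity-style placeholder standing in for your own type:
import Control.Monad (liftM, ap)

newtype MyMonad a = MyMonad a        -- placeholder; substitute your own Monad

instance Monad MyMonad where
    return = MyMonad
    MyMonad x >>= k = k x

instance Functor MyMonad where
    fmap = liftM

instance Applicative MyMonad where
    pure  = return
    (<*>) = ap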
Data.Generics.Serialization.Standard exports (>>$) which is defined as flip liftM. Not exactly a general-purpose module to depend upon, but you can if you want to. I've seen similar definitions in other application-specific modules. This is an indication that no general-purpose module defines such a function.
The least painful solution is probably to define your own, at least until the big Monad hierarchy overhaul happens.
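If you do define your own, it's a one-liner; the operator name below is just borrowed from the module mentioned above, any name works:
import Control.Monad (liftM)

(>>$) :: Monad m => m a -> (a -> b) -> m b
(>>$) = flip liftM
-- usage:  readLn >>$ (+ 1) >>= print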

A flipped version of the <$ operator

I was using Parsec and trying to write it in an Applicative style, utilising the various nice infix operators that Applicative and Functor provide, when I came across (<$) :: Functor f => a -> f b -> f a (part of Functor).
For Parsec (or anything with an Applicative instance I would assume), this makes stuff like pure x <* y a bit shorter to write by just saying x <$ y.
What I was wondering now is whether there is any concrete reason for the absence of an operator like ($>) = flip (<$) :: Functor f => f a -> b -> f b, which would allow me to express my parser x *> pure y in the neater form x $> y.
I know I could always define $> myself, but since there are both <* and *> and the notion of a dual / opposite / 'flipped thingie' appears quite ubiquitously in Haskell, I thought it should be in the standard library together with <$.
Firstly, a trivial point: the type you want is Functor f => f a -> b -> f b.
Secondly, you go to FP Complete's Hoogle, type in the desired type signature, and discover that it is in the comonad and semigroupoids packages.
I could not tell you, though, why it isn't in any more common package. It seems a reasonable candidate for inclusion in a more standard location, such as Control.Applicative.
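In the meantime it is a one-liner to define locally, and in current base Data.Functor does export ($>) with this meaning:
($>) :: Functor f => f a -> b -> f b
($>) = flip (<$)
-- so a parser written as  x *> pure y  becomes  x $> y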

Flipped / reversed fmap (<$>)?

I found myself defining the following:
(%) = flip fmap
I can write code like this:
readFile "/etc/passwd" % lines % filter (not . null)
To me it makes more sense than the alternative:
filter (not . null) <$> lines <$> readFile "/etc/passwd"
Obviously, it's just a matter of order.
Does anyone else do this? Is there a valid reason not to write code like this?
(<&>) :: Functor f => f a -> (a -> b) -> f b
Now available from Data.Functor in base.
https://hackage.haskell.org/package/base-4.12.0.0/docs/Data-Functor.html#v:-60--38--62-
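With that import, the example from the question reads like this (cleanLines is just an illustrative name):
import Data.Functor ((<&>))

cleanLines :: IO [String]
cleanLines = readFile "/etc/passwd" <&> lines <&> filter (not . null)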
Your operator (%) is exactly the operator (<&>) from the lens package.
It can be imported with:
import Control.Lens.Operators ((<&>))
There is a similar function for the Applicative type class called <**>; it's a perfectly reasonable thing to want for plain Functor as well. Unfortunately, the semantics of <**> are a bit different, so it can't be directly generalised to Functor.
-- (.) is to (<$>) as flip (.) is to your (%).
I usually define (&) = flip (.), and it's just like your example: you can apply function composition backwards. It allows for easier-to-understand point-free code, in my opinion.
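A sketch of that style; note that this local (&) is flipped composition, which happens to clash in name with Data.Function.(&) from current base (that one is flipped application, i.e. flip ($)):
(&) :: (a -> b) -> (b -> c) -> a -> c
(&) = flip (.)

nonEmptyLines :: String -> [String]
nonEmptyLines = lines & filter (not . null)    -- same as  filter (not . null) . lines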
Personally I wouldn't use such operators, because then I have to learn two orders in which to read programs.

When should I use $ (and can it always be replaced with parentheses)?

From what I'm reading, $ is described as "applies a function to its arguments." However, it doesn't seem to work quite like (apply ...) in Lisp, because it's a binary operator, so really the only thing it looks like it does is help to avoid parentheses sometimes, like foo $ bar quux instead of foo (bar quux). Am I understanding it right? Is the latter form considered "bad style"?
$ is preferred to parentheses when the distance between the opening and closing parens would otherwise be greater than good readability warrants, or if you have several layers of nested parentheses.
For example
i (h (g (f x)))
can be rewritten
i $ h $ g $ f x
In other words, it represents right-associative function application. This is useful because ordinary function application associates to the left, i.e. the following
i h g f x
...can be rewritten as follows
(((i h) g) f) x
Other handy uses of the ($) function include zipping a list with it:
zipWith ($) fs xs
This applies each function in a list of functions fs to a corresponding argument in the list xs, and collects the results in a list. Contrast with sequence fs x which applies a list of functions fs to a single argument x and collects the results; and fs <*> xs which applies each function in the list fs to every element of the list xs.
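A quick GHCi illustration of the three behaviours just described:
> zipWith ($) [(+ 1), (* 2), subtract 3] [10, 20, 30]
[11,40,27]
> sequence [(+ 1), (* 2), subtract 3] 10
[11,20,7]
> [(+ 1), (* 2), subtract 3] <*> [10, 20, 30]
[11,21,31,20,40,60,7,17,27]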
You're mostly understanding it right: about 99% of the use of $ is to help avoid parentheses, and yes, it does appear to be preferred to parentheses in most cases.
Note, though:
> :t ($)
($) :: (a -> b) -> a -> b
That is, $ is a function; as such, it can be passed to functions, composed with, and anything else you want to do with it. I think I've seen it used by people screwing with combinators before.
The documentation of ($) answers your question. Unfortunately it isn't listed in the automatically generated documentation of the Prelude.
It is, however, listed in the source code, which you can find here:
http://darcs.haskell.org/packages/base/Prelude.hs
That module doesn't define ($) directly, though. The following module, which it imports, does:
http://darcs.haskell.org/packages/base/GHC/Base.lhs
I included the relevant code below:
infixr 0 $
...
-- | Application operator. This operator is redundant, since ordinary
-- application #(f x)# means the same as #(f '$' x)#. However, '$' has
-- low, right-associative binding precedence, so it sometimes allows
-- parentheses to be omitted; for example:
--
-- > f $ g $ h x = f (g (h x))
--
-- It is also useful in higher-order situations, such as #'map' ('$' 0) xs#,
-- or #'Data.List.zipWith' ('$') fs xs#.
{-# INLINE ($) #-}
($) :: (a -> b) -> a -> b
f $ x = f x
Lots of good answers above, but one omission:
$ cannot always be replaced by parentheses
But any application of $ can be eliminated by using parentheses, and any use of ($) can be replaced by id, since $ is a specialization of the identity function. Uses of (f $) can be replaced by f, but a use like ($ x) (take a function as argument and apply it to x) doesn't have any obvious replacement that I see.
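For example, the section ($ 3) passes 3 to every function in a list:
> map ($ 3) [(+ 1), (* 2), subtract 1]
[4,6,2]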
If I look at your question and the answers here, Apocalisp and you are both right:
$ is preferred to parentheses under certain circumstances (see his answer)
foo (bar quux) is certainly not bad style!
Also, please check out difference between . (dot) and $ (dollar sign), another SO question very much related to yours.

Resources