When should I use $ (and can it always be replaced with parentheses)? - haskell

From what I'm reading, $ is described as "applies a function to its arguments." However, it doesn't seem to work quite like (apply ...) in Lisp, because it's a binary operator, so really the only thing it looks like it does is help to avoid parentheses sometimes, like foo $ bar quux instead of foo (bar quux). Am I understanding it right? Is the latter form considered "bad style"?

$ is preferred to parentheses when the distance between the opening and closing parens would otherwise be greater than good readability warrants, or if you have several layers of nested parentheses.
For example
i (h (g (f x)))
can be rewritten
i $ h $ g $ f x
In other words, it represents right-associative function application. This is useful because ordinary function application associates to the left, i.e. the following
i h g f x
...can be rewritten as follows
(((i h) g) f) x
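To see the two associativities concretely, here is a quick sketch with hypothetical single-argument functions (the names f, g, h, i are placeholders, not from any library):

```haskell
f, g, h, i :: Int -> Int
f = (+ 1)
g = (* 2)
h = subtract 3
i = (* 10)

-- Right-associative ($): i $ h $ g $ f 5 parses as i (h (g (f 5)))
chained :: Int
chained = i $ h $ g $ f 5    -- 90, same as i (h (g (f 5)))
```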
Other handy uses of the ($) function include zipping a list with it:
zipWith ($) fs xs
This applies each function in a list of functions fs to a corresponding argument in the list xs, and collects the results in a list. Contrast with sequence fs x which applies a list of functions fs to a single argument x and collects the results; and fs <*> xs which applies each function in the list fs to every element of the list xs.
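A small sketch contrasting the three (the list contents here are arbitrary examples):

```haskell
fs :: [Int -> Int]
fs = [(+ 1), (* 2), subtract 3]

xs :: [Int]
xs = [10, 20, 30]

pairwise :: [Int]
pairwise = zipWith ($) fs xs   -- each function meets one argument: [11,40,27]

oneArg :: [Int]
oneArg = sequence fs 10        -- every function applied to 10: [11,20,7]

allPairs :: [Int]
allPairs = fs <*> xs           -- every function applied to every element (9 results)
```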

You're mostly understanding it right: about 99% of the use of $ is to help avoid parentheses, and yes, it does appear to be preferred to parentheses in most cases.
Note, though:
> :t ($)
($) :: (a -> b) -> a -> b
That is, $ is a function; as such, it can be passed to functions, composed with, and anything else you want to do with it. I think I've seen it used by people screwing with combinators before.
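For instance, since ($) is an ordinary function, it can be sectioned and handed to higher-order functions like any other (a small illustrative sketch):

```haskell
-- ($ 0) is itself a function: it takes a function and applies it to 0.
atZero :: [Int]
atZero = map ($ 0) [(+ 1), (* 2), subtract 3]   -- [1, 0, -3]

-- ($) also combines with other combinators, e.g. uncurry:
applied :: Int
applied = uncurry ($) ((* 5), 4)                 -- 20
```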

The documentation of ($) answers your question. Unfortunately it isn't listed in the automatically generated documentation of the Prelude.
However, it is listed in the source code, which you can find here:
http://darcs.haskell.org/packages/base/Prelude.hs
That module doesn't define ($) directly, though. The following module, which it imports, does:
http://darcs.haskell.org/packages/base/GHC/Base.lhs
I included the relevant code below:
infixr 0 $
...
-- | Application operator. This operator is redundant, since ordinary
-- application @(f x)@ means the same as @(f '$' x)@. However, '$' has
-- low, right-associative binding precedence, so it sometimes allows
-- parentheses to be omitted; for example:
--
-- > f $ g $ h x = f (g (h x))
--
-- It is also useful in higher-order situations, such as @'map' ('$' 0) xs@,
-- or @'Data.List.zipWith' ('$') fs xs@.
{-# INLINE ($) #-}
($) :: (a -> b) -> a -> b
f $ x = f x

Lots of good answers above, but one omission:
$ cannot always be replaced by parentheses
But any application of $ can be eliminated by using parentheses, and any use of ($) can be replaced by id, since $ is a specialization of the identity function. Uses of (f $) can be replaced by f, but a use like ($ x) (take a function as argument and apply it to x) doesn't have any obvious replacement that I can see.
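A small sketch of those claims (all the names here are illustrative):

```haskell
-- ($) is the identity function specialized to function types:
viaDollar, viaId :: Int
viaDollar = negate $ 3
viaId     = id negate 3     -- same thing

-- (f $) is just f:
leftSection :: [Int]
leftSection = map (negate $) [1, 2]   -- same as map negate [1, 2]

-- ($ x), "apply your function argument to x", is the section
-- without an obvious parenthesized equivalent:
atFive :: [Int]
atFive = map ($ 5) [(+ 1), (* 2)]     -- [6, 10]
```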

If I look at your question and the answers here, Apocalisp and you are both right:
$ is preferred to parentheses under certain circumstances (see his answer)
foo (bar quux) is certainly not bad style!
Also, please check out difference between . (dot) and $ (dollar sign), another SO question very much related to yours.

Related

How to get simpler but equivalent version of a Haskell expression

Although I have been learning Haskell for some time, there is one common problem I run into constantly. Let's take this expression as an example:
e f $ g . h i . j
One may wonder, given $ and . from Prelude, what are type constraints on e or h for expression to be valid?
Is it possible to get a 'simpler' but equivalent representation? For me, 'simpler' would be one that uses parentheses everywhere and eliminates the need to define operator precedence rules.
If not, which Haskell report sections do I need to read to have complete picture?
This might be relevant for many novice Haskell programmers. I know many programmers that add parentheses so that they do not need to memorize (or understand) precedence tables like this one: http://docs.oracle.com/javase/tutorial/java/nutsandbolts/operators.html
Is it possible to get a 'simpler' but equivalent representation? Of course: this is called parsing, and it is done by compilers, interpreters, etc.
90% of the time, all you need to remember is how $, ., and function application (f x) work together. This is because $ and function application are really simple: they bind the loosest and the tightest respectively, like addition and exponents in BODMAS.
From your example
e f $ g . h i . j
the function applications bind first, so we have
(e f) $ g . (h i) . j
Function application is left associative so
f g h ==> ((f g) h)
You may have to google currying to understand why the above can be used like foo(a, b) in other languages.
In the next step, resolve everything in the middle. I just use brackets or a precedence table to remember this bit; it's usually straightforward. For example, there are several operators like >> and >>= that turn up together when you are working with monads. I just add brackets when GHC complains.
So now we have
(e f) $ (g . ((h i) . j))
The order of the brackets doesn't matter, since function composition is associative; Haskell nevertheless treats (.) as right-associative.
So then we have
((e f) (g . ((h i) . j)))
The above (simple) example demonstrates why those operators exist in the first place.
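To check the bracketing, here is the expression with hypothetical definitions chosen only so that it type-checks (none of these names come from the original question):

```haskell
e :: Int -> (Int -> Int) -> Int
e x fn = fn x          -- apply the composed pipeline to x

f, i :: Int
f = 10
i = 2

g, j :: Int -> Int
g = (+ 1)
j = subtract 3

h :: Int -> Int -> Int
h = (*)

original, bracketed :: Int
original  = e f $ g . h i . j
bracketed = (e f) (g . ((h i) . j))   -- fully disambiguated form
-- both evaluate to g (h 2 (j 10)) = (2 * 7) + 1 = 15
```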

Haskell - Lambda calculus equivalent syntax?

While writing some lambda functions in Haskell, I was originally writing the functions like:
tru = \t f -> t
fls = \t f -> f
However, I soon noticed from the examples online that such functions are frequently written like:
tru = \t -> \f -> t
fls = \t -> \f -> f
Specifically, each of the items passed to the function have their own \ and -> as opposed to above. When checking the types of these they appear to be the same. My question is, are they equivalent or do they actually differ in some way? And not only for these two functions, but does it make a difference for functions in general? Thank you much!
They're the same; Haskell automatically curries things to keep the syntax nice. The following are all equivalent**
foo a b = (a, b)
foo a = \b -> (a, b)
foo = \a b -> (a, b)
foo = \a -> \b -> (a, b)
-- Or we can simply eta convert leaving
foo = (,)
If you want to be idiomatic, prefer either the first or the last. Introducing unnecessary lambdas is good for teaching currying, but in real code just adds syntactic clutter.
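With explicit type signatures (which sidestep the monomorphism restriction mentioned in the footnote), all the variants can be checked side by side:

```haskell
foo1, foo2, foo3, foo4, foo5 :: a -> b -> (a, b)
foo1 a b = (a, b)            -- equations with two arguments
foo2 a = \b -> (a, b)        -- one argument, returning a lambda
foo3 = \a b -> (a, b)        -- multi-argument lambda (sugar for foo4)
foo4 = \a -> \b -> (a, b)    -- fully curried lambdas
foo5 = (,)                   -- eta-reduced: the pair constructor itself
-- all five behave identically
```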
However, in raw lambda calculus (not Haskell), most people curry manually, writing
\a -> \b -> a b
because people don't write a lot of lambda calculus by hand, and when they do, they tend to stick to unsugared lambda calculus to keep things simple.
** modulo the monomorphism restriction, which won't impact you anyways with a type signature.
Though, as jozefg said, they are themselves equivalent, they may lead to different execution behaviour when combined with local variable bindings. Consider
f, f' :: Int -> Int -> Int
with the two definitions
f a x = μ*x
where μ = sum [1..a]
and
f' a = \x -> μ*x
where μ = sum [1..a]
These sure look equivalent, and certainly will always yield the same results.
GHCi, version 7.6.2: http://www.haskell.org/ghc/ :? for help
...
[1 of 1] Compiling Main            ( def0.hs, interpreted )
Ok, modules loaded: Main.
*Main> sum $ map (f 10000) [1..10000]
2500500025000000
*Main> sum $ map (f' 10000) [1..10000]
2500500025000000
However, if you try this yourself, you'll notice that f takes quite a lot of time, whereas with f' you get the result immediately. The reason is that f' is written in a form that prompts GHC to compile it so that f' 10000 is actually evaluated before being mapped over the list. In that step, the value μ is calculated and stored in the closure of (f' 10000). On the other hand, f is treated simply as "one function of two variables"; (f 10000) is merely stored as a closure containing the parameter 10000, and μ is not calculated at all at first. Only when map applies (f 10000) to each element of the list is the whole sum [1..a] calculated, which takes some time for each of the elements of [1..10000]. With f', this was not necessary because μ was pre-calculated.
In principle, common-subexpression elimination is an optimisation that GHC is able to do itself, so you might at times get good performance even with a definition like f. But you can't really count on it.

Functional composition with multi-valued functions in haskell?

I was wondering if it was possible to do functional composition with functions that take more than one argument. I want to be able to do something like this
x = (+3).(*)
setting x equal to a function that adds three to the product of two numbers.
There are multiple ways to do it, but they're all somewhat awkward.
((+3).) . (*)
≡ fmap (+3) . (*)
≡ curry $ (+3) . uncurry (*)
≡ \l r -> l*r + 3
Oh, wait, this was the signature where there's also a compact definition, guess what it's called...
((.).(.)) (+3) (*)
I'd argue that the lambda solution, being most explicit, is rather the best here.
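The variants above, including the compact ((.).(.)) form, can be checked to agree (a quick sketch):

```haskell
v1, v2, v3, v4, v5 :: Int -> Int -> Int
v1 = ((+ 3) .) . (*)
v2 = fmap (+ 3) . (*)               -- fmap on functions is composition
v3 = curry $ (+ 3) . uncurry (*)
v4 = \l r -> l * r + 3              -- the explicit lambda
v5 = ((.) . (.)) (+ 3) (*)          -- the "compact definition"
-- each maps 4 and 5 to 4 * 5 + 3 = 23
```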
What helps, and is often done just locally as a one(or two)-liner, is to define this composition as a custom infix:
(.:) :: (c->d) -> (a->b->c) -> a->b->d
f .: i = \l r -> f $ i l r
Which allows you to write simply (+3) .: (*).
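Putting the custom infix to work (restating its definition so the snippet stands alone):

```haskell
(.:) :: (c -> d) -> (a -> b -> c) -> a -> b -> d
f .: i = \l r -> f $ i l r

-- add three to the product of two numbers, as in the question:
addThreeToProduct :: Int -> Int -> Int
addThreeToProduct = (+ 3) .: (*)
-- addThreeToProduct 4 5 == 23
```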
BTW, for the similar (b->b->c) -> (a->b) -> a->a->c (precompose the right function to both arguments of the infix) there exists a widely-used standard implementation: on from Data.Function.
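That standard function with the (b->b->c) -> (a->b) -> a->a->c signature is on from Data.Function; for example:

```haskell
import Data.Function (on)

-- compare two lists by length only:
sameLength :: [Int] -> [Int] -> Bool
sameLength = (==) `on` length

-- a common idiom: ordering pairs by a projection
bySecond :: (Int, Char) -> (Int, Char) -> Ordering
bySecond = compare `on` snd
```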
Yes, I'd use something like this:
http://hackage.haskell.org/packages/archive/composition/latest/doc/html/Data-Composition.html
You could also use the B1 or blackbird combinator from Data.Aviary.Birds. I think for real work I'd use a lambda though.

Haskell type signature in lambda expression

Suppose I have a lambda expression in my program like:
\x -> f $ x + 1
and I want to specify for type safety that x must be an Integer. Something like:
-- WARNING: bad code
\x::Int -> f $ x + 1
You can just write \x -> f $ (x::Int) + 1 instead. Or, perhaps more readable, \x -> f (x + 1 :: Int). Note that type signatures generally encompass everything to their left, as far left as makes syntactic sense, which is the opposite of lambdas extending to the right.
The GHC extension ScopedTypeVariables incidentally allows writing signatures directly in patterns, which would allow \(x::Int) -> f $ x + 1. But that extension also adds a bunch of other stuff you might not want to worry about; I wouldn't turn it on just for a syntactic nicety.
I want to add to C.A.McCann's answer by noting that you don't need ScopedTypeVariables. Even if you never use the variable, you can always still do:
\x -> let _ = (x :: T) in someExpressionThatDoesNotUseX
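All three approaches side by side, with a hypothetical f just so the snippets compile:

```haskell
f :: Int -> Int    -- hypothetical; stands in for whatever f you have
f = (* 2)

viaArg, viaWhole, viaLet :: Int -> Int
viaArg   = \x -> f $ (x :: Int) + 1       -- annotate the variable
viaWhole = \x -> f (x + 1 :: Int)         -- annotate the whole expression
viaLet   = \x -> let _ = (x :: Int) in f (x + 1)  -- pin the type via a dummy binding
```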

Flipped / reversed fmap (<$>)?

I found defining the following
(%) = flip fmap
I can write code like this:
readFile "/etc/passwd" % lines % filter (not . null)
To me it makes more sense than the alternative:
filter (not . null) <$> lines <$> readFile "/etc/passwd"
Obviously, it's just a matter of order.
Does anyone else do this? Is there a valid reason not to write code like this?
(<&>) :: Functor f => f a -> (a -> b) -> f b
Now available from Data.Functor in base.
https://hackage.haskell.org/package/base-4.12.0.0/docs/Data-Functor.html#v:-60--38--62-
Your operator (%) is exactly the operator (<&>) from the lens package.
It can be imported with:
import Control.Lens.Operators ((<&>))
There is a similar function for the Applicative type class called <**>; it's a perfectly reasonable thing to want or use for Functor as well. Unfortunately, the semantics are a bit different for <**>, so it can't be directly widened to apply to Functor as well.
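A sketch of the difference, using <&> from Data.Functor (available in recent versions of base, per the link above) and <**> from Control.Applicative:

```haskell
import Data.Functor ((<&>))
import Control.Applicative ((<**>))

flipped :: Maybe Int
flipped = Just 3 <&> (+ 1)               -- Just 4

-- For Maybe the flipped forms coincide, but for lists the
-- order of effects differs, which is why <**> isn't just flip (<*>):
viaApp, viaFlip :: [Int]
viaApp  = [(+ 10), (* 10)] <*> [1, 2]    -- functions vary in the outer loop
viaFlip = [1, 2] <**> [(+ 10), (* 10)]   -- arguments vary in the outer loop
```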
-- (.) is to (<$>) as flip (.) is to your (%).
I usually define (&) = flip (.), and it's just like your example: you can apply function composition backwards. It allows for easier-to-understand point-free code, in my opinion.
Personally I wouldn't use such operators, because then I have to learn two orders in which to read programs.
