In Programming in Haskell by Hutton:
In general, if # is an operator, then expressions of the form (#), (x #), and (# y) for arguments x and y are called sections, whose meaning as functions can be formalised using lambda expressions as follows:
(#) = \x -> (\y -> x # y)
(x #) = \y -> x # y
(# y) = \x -> x # y
What is the difference between a "section" and "currying", and how are the two related?
Is a section the result of applying the currying operation to a multi-argument function?
Thanks.
A section is just special syntax for applying an infix operator to a single argument. (# y) is the more useful of the two, as (x #) is equivalent to (#) x (which is just applying the infix operator as a function to a single argument in the usual fashion).
curry f x y = f (x, y)
uncurry g (x, y) = g x y
(+ 3) 4 = (+) 4 3 = 4 + 3
(4 +) 3 = (+) 4 3 = 4 + 3
A section is a result of partial application of a curried function: (+ 3) = flip (+) 3, (4 +) = (+) 4.
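A quick sanity check of those equations in GHCi (any Num type will do):
Prelude> (+ 3) 4 == flip (+) 3 4
True
Prelude> (4 +) 3 == (+) 4 3
True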
A curried function (like g or (+)) expects its arguments one at a time. An uncurried function (like f) expects its arguments in a tuple.
To partially apply an uncurried function we have first to turn it into a curried function, with curry. To partially apply a curried function we don't need to do anything, just apply it to an argument.
curry :: ((a, b) -> c ) -> ( a -> (b -> c))
uncurry :: (a -> (b -> c)) -> ((a, b) -> c )
x :: a
g :: a -> (b -> c)
--------------------
g x :: b -> c
x :: a
f :: (a, b) -> c
---------------------------
curry f :: a -> (b -> c)
curry f x :: b -> c
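For instance, here is a minimal sketch of partially applying an uncurried function (addPair is a made-up example, not from the answer above):
addPair :: (Int, Int) -> Int
addPair (x, y) = x + y
addFive :: Int -> Int
addFive = curry addPair 5    -- curry addPair :: Int -> Int -> Int
-- addFive 7 == 12; going the other way, uncurry (+) (5, 7) == 12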
Left sections and right sections are syntactical devices for partially applying an infix operator to a single argument (see also chepner's answer). For the sake of accuracy, we should note that currying is not the same thing as partial application:
Currying is converting a function that takes N arguments into a function that takes a single argument and returns a function that takes N-1 arguments.
Partial application is making a function that takes N-1 arguments out of a function that takes N arguments by supplying one of the arguments.
In Haskell, it happens that everything is curried; all functions take just one argument (even uncurried functions in Haskell take a tuple, which is, strictly speaking, a single argument -- you might want to play with the curry and uncurry functions to see how that works). Still, we very often think informally of functions that return functions as functions of multiple arguments. From that vantage point, a nice consequence of currying by default is that partial application of a function to its first argument becomes trivial: while, for instance, elem takes a value and a container and tests if the value is an element of the container, elem "apple" takes a container (of strings) and tests if "apple" is an element of it.
As for operators, when we write, for instance...
5 / 2
... we are applying the operator / to the arguments 5 and 2. The operator can also be used in prefix form, rather than infix:
(/) 5 2
In prefix form, the operator can be partially applied in the usual way:
(/) 5
That, however, arguably looks a little awkward -- after all, 5 here is the numerator, and not the denominator. I'd say left section syntax is easier on the eye in this case:
(5 /)
Furthermore, partial application to the second argument is not quite as straightforward to write, requiring a lambda, or flip. In the case of operators, a right section can help with that:
(/ 2)
Note that sections also work with functions made into operators through backtick syntax, so this...
(`elem` ["apple", "grape", "orange"])
... takes a string and tests whether it can be found in ["apple", "grape", "orange"].
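For instance, a small usage sketch:
Prelude> filter (`elem` ["apple", "grape", "orange"]) ["apple", "kiwi", "grape"]
["apple","grape"]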
What is the difference between the dot (.) and the dollar sign ($)?
As I understand it, they are both syntactic sugar for not needing to use parentheses.
The $ operator is for avoiding parentheses. Because it binds more loosely than anything else, everything that appears after it is grouped together before being handed to whatever comes before.
For example, let's say you've got a line that reads:
putStrLn (show (1 + 1))
If you want to get rid of those parentheses, any of the following lines would also do the same thing:
putStrLn (show $ 1 + 1)
putStrLn $ show (1 + 1)
putStrLn $ show $ 1 + 1
The primary purpose of the . operator is not to avoid parentheses, but to chain functions. It lets you tie the output of whatever appears on the right to the input of whatever appears on the left. This usually also results in fewer parentheses, but works differently.
Going back to the same example:
putStrLn (show (1 + 1))
(1 + 1) doesn't have an input, and therefore cannot be used with the . operator.
show can take an Int and return a String.
putStrLn can take a String and return an IO ().
You can chain show to putStrLn like this:
(putStrLn . show) (1 + 1)
If that's too many parentheses for your liking, get rid of them with the $ operator:
putStrLn . show $ 1 + 1
They have different types and different definitions:
infixr 9 .
(.) :: (b -> c) -> (a -> b) -> (a -> c)
(f . g) x = f (g x)
infixr 0 $
($) :: (a -> b) -> a -> b
f $ x = f x
($) is intended to replace normal function application but at a different precedence to help avoid parentheses. (.) is for composing two functions together to make a new function.
In some cases they are interchangeable, but this is not true in general. The typical example where they are is:
f $ g $ h $ x
==>
f . g . h $ x
In other words, in a chain of $s, all but the final one can be replaced by .
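The final $ stays because what follows it is a value, not a function. A minimal sketch (inc and dbl are made-up helpers):
inc, dbl :: Int -> Int
inc = (+ 1)
dbl = (* 2)
good = inc . dbl $ 3    -- inc (dbl 3) == 7
-- bad = inc . dbl . 3  -- rejected: 3 is not a function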
Also note that ($) is the identity function specialised to function types. The identity function looks like this:
id :: a -> a
id x = x
While ($) looks like this:
($) :: (a -> b) -> (a -> b)
($) = id
Note that I've intentionally added extra parentheses in the type signature.
Uses of ($) can usually be eliminated by adding parentheses (unless the operator is used in a section). E.g.: f $ g x becomes f (g x).
Uses of (.) are often slightly harder to replace; they usually need a lambda or the introduction of an explicit function parameter. For example:
f = g . h
becomes
f x = (g . h) x
becomes
f x = g (h x)
($) allows functions to be chained together without adding parentheses to control evaluation order:
Prelude> head (tail "asdf")
's'
Prelude> head $ tail "asdf"
's'
The compose operator (.) creates a new function without specifying the arguments:
Prelude> let second x = head $ tail x
Prelude> second "asdf"
's'
Prelude> let second = head . tail
Prelude> second "asdf"
's'
The example above is arguably illustrative, but doesn't really show the convenience of using composition. Here's another example:
Prelude> let third x = head $ tail $ tail x
Prelude> map third ["asdf", "qwer", "1234"]
"de3"
If we only use third once, we can avoid naming it by using a lambda:
Prelude> map (\x -> head $ tail $ tail x) ["asdf", "qwer", "1234"]
"de3"
Finally, composition lets us avoid the lambda:
Prelude> map (head . tail . tail) ["asdf", "qwer", "1234"]
"de3"
The short and sweet version:
($) calls the function which is its left-hand argument on the value which is its right-hand argument.
(.) composes the function which is its left-hand argument with the function which is its right-hand argument.
One application that is useful and took me some time to figure out from the very short description at Learn You a Haskell: Since
f $ x = f x
and wrapping an infix operator together with one of its operands in parentheses turns it into a prefix function (a section), one can write ($ 3) (4 +), analogous to (++ ", world") "hello".
Why would anyone do this? For lists of functions, for example. Both:
map (++ ", world") ["hello", "goodbye"]
map ($ 3) [(4 +), (3 *)]
are shorter than
map (\x -> x ++ ", world") ["hello", "goodbye"]
map (\f -> f 3) [(4 +), (3 *)]
Obviously, the latter variants would be more readable for most people.
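In the same spirit, a list of functions can be applied pointwise to a list of arguments (a small sketch):
Prelude> zipWith ($) [(4 +), (3 *)] [10, 10]
[14,30]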
Haskell: difference between . (dot) and $ (dollar sign)
What is the difference between the dot (.) and the dollar sign ($)?. As I understand it, they are both syntactic sugar for not needing to use parentheses.
They are not syntactic sugar for not needing to use parentheses; they are functions, used infix, so we may call them operators.
Compose, (.), and when to use it.
(.) is the compose function. So
result = (f . g) x
is the same as building a function that passes its argument to g and then passes g's result on to f.
h = \x -> f (g x)
result = h x
Use (.) when you don't have the arguments available to pass to the functions you wish to compose.
Right associative apply, ($), and when to use it
($) is a right-associative apply function with low binding precedence. So it merely calculates the things to the right of it first. Thus,
result = f $ g x
is the same as this, procedurally (which matters since Haskell is evaluated lazily: it will begin to evaluate f first):
h = f
g_x = g x
result = h g_x
or more concisely:
result = f (g x)
Use ($) when you have all the variables to evaluate before you apply the preceding function to the result.
We can see this by reading the source for each function.
Read the Source
Here's the source for (.):
-- | Function composition.
{-# INLINE (.) #-}
-- Make sure it has TWO args only on the left, so that it inlines
-- when applied to two functions, even if there is no final argument
(.) :: (b -> c) -> (a -> b) -> a -> c
(.) f g = \x -> f (g x)
And here's the source for ($):
-- | Application operator. This operator is redundant, since ordinary
-- application #(f x)# means the same as #(f '$' x)#. However, '$' has
-- low, right-associative binding precedence, so it sometimes allows
-- parentheses to be omitted; for example:
--
-- > f $ g $ h x = f (g (h x))
--
-- It is also useful in higher-order situations, such as #'map' ('$' 0) xs#,
-- or #'Data.List.zipWith' ('$') fs xs#.
{-# INLINE ($) #-}
($) :: (a -> b) -> a -> b
f $ x = f x
Conclusion
Use composition when you do not need to immediately evaluate the function. Maybe you want to pass the function that results from composition to another function.
Use application when you are supplying all arguments for full evaluation.
So for our example, it would be semantically preferable to do
f $ g x
when we have x (or rather, g's arguments), and do:
f . g
when we don't.
... or you could avoid the . and $ constructions by using pipelining:
third xs = xs |> tail |> tail |> head
That's after you've added in the helper function:
(|>) x y = y x
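(Incidentally, this pipeline operator already exists in the standard library as (&) from Data.Function, with essentially the same definition:)
import Data.Function ((&))
third xs = xs & tail & tail & head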
My rule is simple (I'm a beginner too):
do not use . if you want to pass the parameter (call the function), and
do not use $ if there is no parameter yet (compose a function)
That is
show $ head [1, 2]
but never:
show . head [1, 2]
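The reason the last line fails: function application binds tighter than any operator, so it parses as composing show with a number. A short sketch of the alternatives:
show . (head [1, 2])   -- what the bad version means: a type error
(show . head) [1, 2]   -- what was probably intended; evaluates to "1"
show (head [1, 2])     -- or just apply directly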
A great way to learn more about anything (any function) is to remember that everything is a function! That general mantra helps, but in specific cases like operators, it helps to remember this little trick:
:t (.)
(.) :: (b -> c) -> (a -> b) -> a -> c
and
:t ($)
($) :: (a -> b) -> a -> b
Just remember to use :t liberally, and wrap your operators in ()!
All the other answers are pretty good. But there's an important usability detail about how GHC treats $: the GHC type checker allows it to be instantiated with higher-rank/quantified types. If you look at the type of $ id, for example, you'll find it will take a function whose argument is itself a polymorphic function. Little things like that aren't given the same flexibility with an equivalent user-defined operator. (This actually makes me wonder whether $! deserves the same treatment or not.)
The most important part about $ is that it has the lowest operator precedence.
If you type :info you'll see this:
λ> :info ($)
($) :: (a -> b) -> a -> b
-- Defined in ‘GHC.Base’
infixr 0 $
This tells us it is an infix operator with right-associativity that has the lowest possible precedence. Normal function application is left-associative and has highest precedence (10). So $ is something of the opposite.
So then we use it where normal function application or using () doesn't work.
So, for example, this works:
λ> head . sort $ "example"
'a'
but this does not:
λ> head . sort "example"
because function application binds more tightly than any operator, so this parses as head . (sort "example"), and the type of (sort "example") is [Char]
λ> :type (sort "example")
(sort "example") :: [Char]
But . expects two functions, and since function application binds first, there isn't a nice short way to write this without $.
I think a short example of where you would use . and not $ would help clarify things.
double x = x * 2
triple x = x * 3
times6 = double . triple
:i times6
times6 :: Num c => c -> c
Note that times6 is a function that is created from function composition.
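A quick usage check, evaluated by hand from the definitions above:
times6 7    -- triple 7 is 21, then double 21 is 42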
I'm very new to Haskell and FP in general. I've read many of the writings that describe what currying is, but I haven't found an explanation of how it actually works.
Here is a function: (+) :: a -> (a -> a)
If I do (+) 4 7, the function takes 4 and returns a function that takes 7 and returns 11. But what happens to 4? What does that first function do with 4? What does (a -> a) do with 7?
Things get more confusing when I think about a more complicated function:
max' :: Int -> (Int -> Int)
max' m n | m > n     = m
         | otherwise = n
what does (Int -> Int) compare its parameter to? It only takes one parameter, but it needs two to do m > n.
Understanding higher-order functions
Haskell, as a functional language, supports higher-order functions (HOFs). In mathematics HOFs are called functionals, but you don't need any mathematics to understand them. In usual imperative programming, like in Java, functions can accept values, like integers and strings, do something with them, and return back a value of some other type.
But what if functions themselves were no different from values, and you could accept a function as an argument or return it from another function? f a b c = a + b - c is a boring function; it sums a and b and then subtracts c. But the function could be more interesting if we could generalize it: what if we sometimes wanted to sum a and b, but sometimes multiply? Or divide by c instead of subtracting?
Remember, (+) is just a function of 2 numbers that returns a number; there's nothing special about it, so any function of 2 numbers that returns a number could be in its place. Writing g a b c = a * b - c, h a b c = a + b / c and so on just doesn't cut it for us; we need a general solution, we are programmers after all! Here is how it is done in Haskell:
let f g h a b c = a `g` b `h` c in f (*) (/) 2 3 4 -- returns 1.5
And you can return functions too. Below we create a function that accepts a function and an argument and returns another function, which accepts a parameter and returns a result.
let g f n = (\m -> m `f` n); f = g (+) 2 in f 10 -- returns 12
A (\m -> m `f` n) construct is an anonymous function of 1 argument m that applies f to that m and n. Basically, when we call g (+) 2 we create a function of one argument, that just adds 2 to whatever it receives. So let f = g (+) 2 in f 10 equals 12 and let f = g (*) 5 in f 5 equals 25.
Understanding currying
Currying is a technique that transforms a function of several arguments into a function of 1 argument that returns a function of 1 argument that returns a function of 1 argument... until it returns a value. It's easier than it sounds: for example, take a function of 2 arguments, like (+).
Now imagine that you could give only 1 argument to it, and it would return a function? You could use this function later to add this 1st argument, now encased in this new function, to something else. E.g.:
f n = (\m -> n - m)
g = f 10
g 8 -- would return 2
g 4 -- would return 6
Guess what, Haskell curries all functions by default. Technically speaking, there are no functions of multiple arguments in Haskell, only functions of one argument, some of which may return new functions of one argument.
It's evident from the types. Write :t (++) in the interpreter, where (++) is a function that concatenates 2 lists together; it will return (++) :: [a] -> [a] -> [a]. The type is not [a],[a] -> [a], but [a] -> [a] -> [a], meaning that (++) accepts one list and returns a function of type [a] -> [a]. This new function can accept yet another list, and it will finally return a new list of type [a].
That's why function application syntax in Haskell has no parentheses and commas, compare Haskell's f a b c with Python's or Java's f(a, b, c). It's not some weird aesthetic decision, in Haskell function application goes from left to right, so f a b c is actually (((f a) b) c), which makes complete sense, once you know that f is curried by default.
In types, however, the association is from right to left, so [a] -> [a] -> [a] is equivalent to [a] -> ([a] -> [a]). They are the same thing in Haskell, Haskell treats them exactly the same. Which makes sense, because when you apply only one argument, you get back a function of type [a] -> [a].
On the other hand, check the type of map: (a -> b) -> [a] -> [b], it receives a function as its first argument, and that's why it has parentheses.
To really hammer down the concept of currying, try to find the types of the following expressions in the interpreter:
(+)
(+) 2
(+) 2 3
map
map (\x -> head x)
map (\x -> head x) ["conscience", "do", "cost"]
map head
map head ["conscience", "do", "cost"]
Partial application and sections
Now that you understand HOFs and currying, Haskell gives you some syntax to make code shorter. When you call a function with 1 or multiple arguments to get back a function that still accepts arguments, it's called partial application.
You understand already that instead of creating anonymous functions you can just partially apply a function, so instead of writing (\x -> replicate 3 x) you can just write (replicate 3). But what if you want to do the same with the divide (/) operator instead of replicate? For infix functions, Haskell allows you to partially apply them using either argument.
These are called sections: (2/) is equivalent to (\x -> 2 / x) and (/2) is equivalent to (\x -> x / 2). With backticks you can take a section of any binary function: (2`elem`) is equivalent to (\xs -> 2 `elem` xs).
But remember, any function is curried by default in Haskell and therefore always accepts one argument, so sections can actually be used with any function: let (+^) be some weird function that sums 4 arguments; then let (+^) a b c d = a + b + c + d in (2+^) 3 4 5 returns 14.
Compositions
Other handy tools for writing concise and flexible code are the composition and application operators. The composition operator (.) chains functions together. The application operator ($) just applies the function on the left side to the argument on the right side, so f $ x is equivalent to f x. However, ($) has the lowest precedence of all operators, so we can use it to get rid of parentheses: f (g x y) is equivalent to f $ g x y.
It is also helpful when we need to apply multiple functions to the same argument: map ($2) [(2+), (10-), (20/)] would yield [4,8,10]. (f . g . h) (x + y + z), f (g (h (x + y + z))), f $ g $ h $ x + y + z and f . g . h $ x + y + z are equivalent, but (.) and ($) are different things, so read Haskell: difference between . (dot) and $ (dollar sign) and parts from Learn You a Haskell to understand the difference.
You can think of it as the function storing the argument and returning a new function that just demands the other argument(s). The new function already knows the first argument, as it is stored together with the function. This is handled internally by the compiler. If you want to know how exactly this works, you may be interested in this page, although it may be a bit complicated if you are new to Haskell.
If a function call is fully saturated (so all arguments are passed at the same time), most compilers use an ordinary calling scheme, like in C.
Does this help?
max' = \m -> \n -> if (m > n)
                   then m
                   else n
Written as lambdas, max' is a value: a lambda that, given some m, itself returns a lambda, which then returns the result.
Hence max' 4 is
max' 4 = \n -> if (4 > n)
               then 4
               else n
Something that may help is to think about how you could implement curry as a higher order function if Haskell didn't have built in support for it. Here is a Haskell implementation that works for a function on two arguments.
curry :: (a -> b -> c) -> a -> (b -> c)
curry f a = \b -> f a b
Now you can pass curry a function on two arguments and the first argument and it will return a function on one argument (this is an example of a closure.)
In ghci:
Prelude> let curry f a = \b -> f a b
Prelude> let g = curry (+) 5
Prelude> g 10
15
Prelude> g 15
20
Fortunately we don't have to do this in Haskell (you do in Lisp if you want currying) because support is built into the language.
If you come from C-like languages, their syntax might help you to understand it. For example in PHP the add function could be implemented as such:
function add($a) {
return function($b) use($a) {
return $a + $b;
};
}
Haskell is based on Lambda calculus. Internally what happens is that everything gets converted into a function. So your compiler evaluates (+) as follows
(+) :: Num a => a -> a -> a
(+) = \x -> (\y -> x + y)
That is, (+) :: a -> a -> a is essentially the same as (+) :: a -> (a -> a). Hope this helps.
How does CPS in curried languages like lambda calculus or OCaml even make sense? Technically, all functions have one argument. So say we have a CPS version of addition in one such language:
cps-add k n m = k ((+) n m)
And we call it like
(cps-add random-continuation 1 2)
This is then the same as:
(((cps-add random-continuation) 1) 2)
I already see two calls there that aren't tail calls, and in reality a complexly nested expression: (cps-add random-continuation) returns a value, namely a function that consumes a number, and then returns a function which consumes another number and then delivers the sum of both to that random-continuation. But we can't work around this value-returning by simply translating this into CPS again, because we can only give each function one argument. We need at least two to make room for the continuation and the 'actual' argument.
Or am I missing something completely?
Since you've tagged this with Haskell, I'll answer in that regard: In Haskell, the equivalent of doing a CPS transform is working in the Cont monad, which transforms a value x into a higher-order function that takes one argument and applies it to x.
So, to start with, here's 1 + 2 in regular Haskell:
1 + 2
And here it is in the continuation monad:
contAdd x y = do x' <- x
                 y' <- y
                 return $ x' + y'
...not terribly informative. To see what's going on, let's disassemble the monad. First, removing the do notation:
contAdd x y = x >>= (\x' -> y >>= (\y' -> return $ x' + y'))
The return function lifts a value into the monad, and in this case is implemented as \x k -> k x, or using an infix operator section as \x -> ($ x).
contAdd x y = x >>= (\x' -> y >>= (\y' -> ($ x' + y')))
The (>>=) operator (read "bind") chains together computations in the monad, and in this case is implemented as \m f k -> m (\x -> f x k). Changing the bind function to prefix form and substituting in the lambda, plus some renaming for clarity:
contAdd x y = (\m1 f1 k1 -> m1 (\a1 -> f1 a1 k1)) x (\x' -> (\m2 f2 k2 -> m2 (\a2 -> f2 a2 k2)) y (\y' -> ($ x' + y')))
Reducing some function applications:
contAdd x y = (\k1 -> x (\a1 -> (\x' -> (\k2 -> y (\a2 -> (\y' -> ($ x' + y')) a2 k2))) a1 k1))
contAdd x y = (\k1 -> x (\a1 -> y (\a2 -> ($ a1 + a2) k1)))
And a bit of final rearranging and renaming:
contAdd x y = \k -> x (\x' -> y (\y' -> k $ x' + y'))
In other words: The arguments to the function have been changed from numbers, into functions that take a number and return the final result of the entire expression, just as you'd expect.
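To make that concrete, here is a self-contained sketch (my own names, not from the answer) of curried CPS values and the contAdd we just derived:
type Cont' r a = (a -> r) -> r    -- un-newtyped, unlike the real Cont monad
cpsVal :: a -> Cont' r a          -- plays the role of return above
cpsVal x k = k x
cpsAdd :: Num a => Cont' r a -> Cont' r a -> Cont' r a
cpsAdd x y k = x (\x' -> y (\y' -> k (x' + y')))
-- cpsAdd (cpsVal 1) (cpsVal 2) print prints 3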
Edit: A commenter points out that contAdd itself still takes two arguments in curried style. This is sensible because it doesn't use the continuation directly, but not necessary. To do otherwise, you'd need to first break the function apart between the arguments:
contAdd x = x >>= (\x' -> return (\y -> y >>= (\y' -> return $ x' + y')))
And then use it like this:
foo = do f <- contAdd (return 1)
         r <- f (return 2)
         return r
Note that this is really no different from the earlier version; it's simply packaging the result of each partial application as taking a continuation, not just the final result. Since functions are first-class values, there's no significant difference between a CPS expression holding a number vs. one holding a function.
Keep in mind that I'm writing things out in very verbose fashion here to make explicit all the steps where something is in continuation-passing style.
Addendum: You may notice that the final expression looks very similar to the de-sugared version of the monadic expression. This is not a coincidence, as the inward-nesting nature of monadic expressions that lets them change the structure of the computation based on previous values is closely related to continuation-passing style; in both cases, you have in some sense reified a notion of causality.
Short answer: of course it makes sense. You can apply a CPS transform directly; you will just end up with lots of cruft, because each argument will have, as you noticed, its own attached continuation.
In your example, I will consider that there is a +(x,y) uncurried primitive, and that you're asking what is the translation of
let add x y = +(x,y)
(This add faithfully represents OCaml's (+) operator.)
add is syntactically equivalent to
let add = fun x -> (fun y -> +(x, y))
So you apply a CPS transform¹ and get
let add_cps = fun x kx -> kx (fun y ky -> ky +(x,y))
If you want translated code that looks more like something you could have willingly written, you can devise a finer transformation that actually considers known-arity functions as non-curried functions, and treats all their parameters as a whole (as you have in non-curried languages, and as functional compilers already do for obvious performance reasons).
¹: I wrote "a CPS transform" because there is no "one true CPS translation". Different translations have been devised, producing more or less continuation-related garbage. The formal CPS translations are usually defined directly on lambda-calculus, so I suppose you're having a less formal, more hand-made CPS transform in mind.
The good properties of CPS (as a style that programs respect, not a specific transformation into this style) are that the order of evaluation is completely explicit, and that all calls are tail calls. As long as you respect those, you're relatively free in what you can do. Handling curried functions specially is thus perfectly fine.
Remark: Your (cps-add k 1 2) version can also be considered tail-recursive if you assume the compiler detects and optimizes that cps-add actually always takes 3 arguments, and doesn't build intermediate closures. That may seem far-fetched, but it's the exact same assumption we use when reasoning about tail calls in non-CPS programs in those languages.
Yes, technically all functions can be decomposed into functions of one argument. However, when you want to use CPS, the only thing you are saying is that at a certain point of the computation, the continuation should be run.
Using your example, let's have a look. To make things a little easier, let's deconstruct cps-add into its normal form, where it is a function taking only one argument.
(cps-add k) -> n -> m = k ((+) n m)
Note at this point that the continuation, k, is not being evaluated (Could this be the point of confusion for you?).
Here we have a function, cps-add, that receives a continuation k as an argument and then returns a function that takes another argument, n.
((cps-add k) n) -> m = k ((+) n m)
Now we have a function that takes an argument, m.
So I suppose what I am trying to point out is that currying does not get in the way of CPS style programming. Hope that helps in some way.