Understanding currying and HOFs - haskell

I am currently studying functional programming and its most important feature: higher-order functions.
It's not yet as crystal clear as I'd like, so I want to understand exactly how HOFs work.
Consider this function:
{- Curried addition. -}
plusc :: Num a => a -> (a -> a)
plusc = (+)
To what extent can we say that this function uses currying and is a HOF?
EDIT: Basically, I don't understand how the definition of the function stands for an addition (parameters, associativity, etc.).

I wouldn't personally call plusc a HOF, because its arguments aren't functions. A way to spot an obvious HOF is to look for parentheses in the signature that aren't at the leftmost side:
{- equivalent signature -}
plusc :: Num a => a -> a -> a
When we remove the optional parentheses, it's obvious that the function takes no functions as arguments, so in that sense it isn't a HOF; but it is curried.
Note: since every curried function returns a function, we might say that after partially applying it, it returns a function, and as such operates on functions - so it is a HOF. I don't think this is a particularly helpful way of describing/learning the concept, but I suppose the definition does span both parameters and results.
An uncurried version would simply group its arguments:
plusUnc :: Num a => (a, a) -> a
Now a HOF might take such a function and turn it into some other one:
imu :: Num a => (a -> a -> a) -> (a -> a -> a)
imu f = \a b -> f a b
Note: The lambda impl could obviously be simplified, I spelled it out just for illustration.
Note that f is the "lower" order function that's being passed into imu. To use it:
imuPlus = imu plusc -- a function is being passed
imuPlus 1 2 -- == 3
Note: since we're mixing both concepts (and you asked for both), imu is also curried. An uncurried version could look like this:
imuUnc :: ((a -> a -> a), (a, a)) -> a
Now it is a HOF (it has a function in the parameters), but it doesn't return a function, which differs from the examples above.
It's just much easier to use when it's curried, though, mostly because of partial application.
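To see both directions at once, here is a small runnable sketch using the Prelude's curry and uncurry (the main is mine, just for demonstration):
plusc :: Num a => a -> (a -> a)
plusc = (+)

plusUnc :: Num a => (a, a) -> a
plusUnc = uncurry plusc      -- group the two arguments into a pair

main :: IO ()
main = do
  print (plusc 1 2)          -- 3, curried application
  print (plusUnc (1, 2))     -- 3, uncurried application
  print (curry plusUnc 1 2)  -- 3, back to curried form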

A Haskell function is higher order if and only if its type has more than one arrow?

A professor teaching a class I am attending claimed the following.
A higher-order function could not have only one arrow when checking its type.
I don't agree with this statement, so I tried to prove it wrong. I set up some functions, but then I found that my functions probably aren't higher-order functions. Here is what I have:
f x y z = x + y + z
f :: Num a => a -> a -> a -> a
g = f 3
g :: Num a => a -> a -> a
h = g 5
h :: Num a => a -> a
At the end of the day, I think my proof was wrong, but I am still not convinced that higher-order functions must have more than one arrow in their types.
So, is there any resource, or could someone perhaps prove, that a higher-order function may have only one arrow?
Strictly speaking, the statement is correct. This is because the usual definition of the term "higher-order function", taken here from Wikipedia, is a function that does one or both of the following:
takes a function as an argument, or
returns a function as its result
It is clear then that no function with a single arrow in its type signature can be a higher-order function, because in a signature a -> b, there is no "room" to create something of the form x -> y on either side of an arrow - there simply aren't enough arrows.
(This argument actually has a significant flaw, which you may have spotted, and which I'll address below. But it's probably true "in spirit" for what your professor meant.)
The converse is also, strictly speaking, true in Haskell - although not in most other languages. The distinguishing feature of Haskell here is that functions are curried. For example, a function like (+), whose signature is:
a -> a -> a
(with a Num a constraint that I'll ignore because it could just confuse the issue if we're supposed to be counting "arrows"), is usually thought of as being a function of two arguments: it takes 2 as and produces another a. In most languages, which all of course have an analogous function/operator, this would never be described as a higher-order function. But in Haskell, because functions are curried, the above signature is really just a shorthand for the parenthesised version:
a -> (a -> a)
which clearly is a higher-order function. It takes an a and produces a function of type a -> a. (Recall, from above, that returning a function is one of the things that characterises a HOF.) In Haskell, as I said, these two signatures are one and the same thing. (+) really is a higher-order function - we just often don't notice that, because we intend to feed it two arguments, by which we really mean to feed it one argument, resulting in a function, then feed that function the second argument. Thanks to Haskell's convenient, parenthesis-free syntax for applying functions to arguments, there isn't really any distinction. (This again contrasts with non-functional languages: the addition "function" there always takes exactly 2 arguments, and only giving it one will usually be an error. If the language has first-class functions, you can indeed define the curried form, for example this in Python:
def curried_add(x):
    return lambda y: x + y
but this is clearly a different function from the straightforward function of two arguments that you would normally use, and it is usually less convenient to apply, because you need to call it as curried_add(x)(y) rather than just add(x,y).)
So, if we take currying into account, the statement of your professor is strictly true.
Well, with the following exception, which I alluded to above. I've been assuming that something with a signature of the form
a -> b
is not a HOF*. That of course doesn't apply if a or b is itself a function type. Usually such a type would be written with an arrow, and we're tacitly assuming here that neither a nor b contains arrows. But Haskell has type synonyms, so we could easily define, say:
type MyFunctionType = Int -> Int
and then a function with signature MyFunctionType -> a or a -> MyFunctionType is most certainly a HOF, even though it doesn't "look like one" from just a glance at the signature.
*To be clear here, a and b refer to specific types which are as yet unspecified - I am not referring to a literal signature a -> b, which would mean a polymorphic function that applies to any types a and b, not necessarily functions.
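As a minimal illustration of that caveat (applyTwice is a name made up for this sketch):
type MyFunctionType = Int -> Int

applyTwice :: MyFunctionType -> MyFunctionType
applyTwice f = f . f   -- a HOF, though the signature shows no "extra" arrows

main :: IO ()
main = print (applyTwice (+ 1) 5)  -- 7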
Your functions are higher order. Indeed, take for example your function:
f :: a -> a -> a -> a
f x y z = x + y + z
This is a less verbose form of:
f :: a -> (a -> (a -> a))
So it is a function that takes an a and returns a function. A higher order function is a function that (a) takes a function as parameter, or (b) returns a function. Both can be true at the same time. Here your function f returns a function.
A function thus always has a type a -> b, with a the input type and b the return type. If a contains an arrow (as in (c -> d) -> b), then it is a higher-order function, since it takes a function as a parameter.
If b contains an arrow, as in a -> (c -> d), then it is a higher-order function as well, since it returns a function.
Yes. Since Haskell functions are always curried, we can come up with minimal examples of higher-order functions:
1) Functions that take a function as a parameter, such as:
apply :: (a -> b) -> a -> b
apply f x = f x
2) Functions of more than one argument, which after one application return a function:
sum3 :: Int -> Int -> Int -> Int
sum3 a b c = a + b + c
so the type can be read as:
sum3 :: Int -> (Int -> (Int -> Int))
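A quick sketch of the second case in action (addToFive is a hypothetical name):
sum3 :: Int -> Int -> Int -> Int
sum3 a b c = a + b + c

addToFive :: Int -> Int -> Int
addToFive = sum3 5   -- partial application: sum3 5 is itself a function

main :: IO ()
main = print (addToFive 2 3)  -- 10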

Example of deep understanding of currying

Reading https://wiki.haskell.org/Currying
it states:
Much of the time, currying can be ignored by the new programmer. The
major advantage of considering all functions as curried is
theoretical: formal proofs are easier when all functions are treated
uniformly (one argument in, one result out). Having said that, there
are Haskell idioms and techniques for which you need to understand
currying.
What is a Haskell technique or idiom for which a deeper understanding of currying is required?
Partial function application isn't really a distinct feature of Haskell; it is just a consequence of curried functions.
map :: (a -> b) -> [a] -> [b]
In a language like Python, map always takes two arguments: a function of type a -> b and a list of type [a]:
map(f, [x, y, z]) == [f(x), f(y), f(z)]
This requires you to pretend that the -> syntax is just for show, and that the -> between (a -> b) and [a] is not really the same as the one between [a] and [b]. However, that is not the case; it's the exact same operator, and it is right-associative. The type of map can be explicitly parenthesized as
map :: (a -> b) -> ([a] -> [b])
and suddenly it seems much less interesting that you might give only one argument (the function) to map and get back a new function of type [a] -> [b]. That is all partial function application is: taking advantage of the fact that all functions are curried.
In fact, you never really give more than one argument to a function. To go along with -> being right-associative, function application is left-associative, meaning a "multi-argument" call like
map f [1,2,3]
is really two function applications, which becomes clearer if we parenthesize it.
(map f) [1,2,3]
map is first "partially" applied to one argument f, which returns a new function. This function is then applied to [1,2,3] to get the final result.
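As a runnable sketch of exactly that (doubleAll is an illustrative name):
doubleAll :: [Int] -> [Int]
doubleAll = map (* 2)   -- map partially applied to one argument

main :: IO ()
main = print (doubleAll [1, 2, 3])  -- [2,4,6]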

Usefulness of "function arrows associate to the right"?

Reading http://www.seas.upenn.edu/~cis194/spring13/lectures/04-higher-order.html it states
In particular, note that function arrows associate to the right, that
is, W -> X -> Y -> Z is equivalent to W -> (X -> (Y -> Z)). We can
always add or remove parentheses around the rightmost top-level arrow
in a type.
Function arrows associate to the right, but since function application associates to the left, what is the usefulness of this information? I feel I'm not understanding something, because to me it seems a meaningless point that function arrows associate to the right. Since function application always associates to the left, is that the only associativity I should be concerned with?
Function arrows associate to the right but [...] what is usefulness of this information?
If you see a type signature like, for example, f :: String -> Int -> Bool, you need to know the associativity of the function arrow to understand what the type of f really is:
if the arrow associates to the left, then the type means (String -> Int) -> Bool, that is, f takes a function as argument and returns a boolean.
if the arrow associates to the right, then the type means String -> (Int -> Bool), that is, f takes a string as argument and returns a function.
That's a big difference, and if you want to use f, you need to know which one it is. Since the function arrow associates to the right, you know that it has to be the second option: f takes a string and returns a function.
Function arrows associate to the right [...] function application associates to the left
These two choices work well together. For example, we can call the f from above as f "answer" 42 which really means (f "answer") 42. So we are passing the string "answer" to f which returns a function. And then we're passing the number 42 to that function, which returns a boolean. In effect, we're almost using f as a function with two arguments.
This is the standard way of writing functions with two (or more) arguments in Haskell, so it is a very common use case. Because of the associativity of function application and of the function arrow, we can write this common use case without parentheses.
When defining a two-argument curried function, we usually write something like this:
f :: a -> b -> c
f x y = ...
If the arrow did not associate to the right, the above type would instead have to be spelled out as a -> (b -> c). So the usefulness of ->'s associativity is that it saves us from writing too many parentheses when declaring function types.
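A small sketch of such an f (the body is made up purely for illustration):
f :: String -> Int -> Bool   -- i.e. String -> (Int -> Bool)
f s n = length s == n

check :: Int -> Bool
check = f "answer"   -- the partially applied, inner Int -> Bool function

main :: IO ()
main = print (check 6)  -- True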
If an operator # is 'right associative', it means this:
a # b # c # d = a # (b # (c # d))
... for any number of arguments. It behaves like foldr.
This means that:
a -> b -> c -> d = a -> (b -> (c -> d))
Note: a -> (b -> (c -> d)) =/= ((a -> b) -> c) -> d ! This is very important.
What this tells us is that, say, foldr:
λ> :t foldr
foldr :: (a -> b -> b) -> b -> [a] -> b
Takes a function of type (a -> b -> b), and then returns... a function that takes a b, and then returns... a function that takes a [a], and then returns... a b. This means that we can apply functions like this
f a b c
because
f a b c = ((f a) b) c
and f returns a new function each time it is applied to one argument.
In itself this isn't hugely useful, but it is important information for interpreting and calling function types.
For ordinary operators like (++), associativity also matters at the value level: if (++) were left-associative, nested appends would be very slow, so it is declared right-associative.
The early functional language Lisp suffered from excessively nested parentheses (which make code (or even text (if you don't mind considering a broader context)) difficult to read). Over time, functional language designers opted to make functional code easy to read and write for pros, even at the cost of confusing rookies with less uniform rules.
In functional code, function type declarations like (String -> Int) -> Bool are much rarer than ones like String -> (Int -> Bool), because functions that return functions are a trademark of the functional style. Thus associating arrows to the right helps reduce the number of parentheses (on average, since you ultimately tend to map down to a primitive type). For function application it is vice versa.
The main purpose is convenience, because partial function application goes from left to right.
Every time you partially apply a function to a set of values, the remaining type has to be valid.
You can think of arrow types as a queue of types, where the queue itself is a type. During partial function application, you dequeue as many types from the queue as the number of arguments, yielding whatever remains of the queue. The resulting queue is still a valid type.
This is why types associate to the right. If types associate to the left, it will behave like a stack, and you won't be able to partially apply it the same way without leaving "holes" or undefined domains. For instance, say you have the following function:
foo :: a -> b -> c -> d
If Haskell types were left-associative, then passing a single parameter to foo would yield the following invalid type:
((? -> b) -> c) -> d
You will then be forced to circumvent it by adding parentheses, which could hamper readability.
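Here is the queue intuition with a concrete, made-up function; each partial application "dequeues" one type from the front, and what remains is always itself a valid type:
foo :: Int -> Bool -> Char -> String
foo n b c = replicate n (if b then c else '-')

f1 :: Bool -> Char -> String
f1 = foo 3          -- dequeued Int

f2 :: Char -> String
f2 = f1 True        -- dequeued Bool

main :: IO ()
main = putStrLn (f2 'x')  -- "xxx"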

When are type signatures necessary in Haskell?

Many introductory texts will tell you that in Haskell type signatures are "almost always" optional. Can anybody quantify the "almost" part?
As far as I can tell, the only time you need an explicit signature is to disambiguate type classes. (The canonical example being read . show.) Are there other cases I haven't thought of, or is this it?
(I'm aware that if you go beyond Haskell 2010 there are plenty of exceptions. For example, GHC will never infer rank-N types. But rank-N types are a language extension, not part of the official standard [yet].)
Polymorphic recursion needs type annotations, in general.
f :: (a -> a) -> (a -> b) -> Int -> a -> b
f f1 g n x =
  if n == (0 :: Int)
    then g x
    else f f1 (\z h -> g (h z)) (n-1) x f1
(Credit: Patrick Cousot)
Note how the recursive call looks badly typed (!): it passes five arguments, despite f having only four! Then remember that b can be instantiated with c -> d, which causes the extra argument to appear.
The above contrived example computes
f f1 g n x = g (f1 (f1 (f1 ... (f1 x))))
where f1 is applied n times. Of course, there is a much simpler way to write an equivalent program.
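One such simpler equivalent, as a sketch (my formulation using iterate, not necessarily the one the author had in mind):
f' :: (a -> a) -> (a -> b) -> Int -> a -> b
f' f1 g n x = g (iterate f1 x !! n)   -- apply f1 n times, then g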
Monomorphism restriction
If you have MonomorphismRestriction enabled, then sometimes you will need to add a type signature to get the most general type:
{-# LANGUAGE MonomorphismRestriction #-}
-- myPrint :: Show a => a -> IO ()
myPrint = print
main = do
  myPrint ()
  myPrint "hello"
This will fail because myPrint is monomorphic. You would need to uncomment the type signature to make it work, or disable MonomorphismRestriction.
Phantom constraints
When you put a polymorphic value with a constraint into a tuple, the tuple itself becomes polymorphic and has the same constraint:
myValue :: Read a => a
myValue = read "0"
myTuple :: Read a => (a, String)
myTuple = (myValue, "hello")
We know that the constraint affects the first part of the tuple but does not affect the second part. The type system doesn't know that, unfortunately, and will complain if you try to do this:
myString = snd myTuple
Even though intuitively one would expect myString to be just a String, the type checker needs to specialize the type variable a and figure out whether the constraint is actually satisfied. In order to make this expression work, one would need to annotate the type of either snd or myTuple:
myString = snd (myTuple :: ((), String))
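For completeness, annotating snd instead looks like this (a sketch reusing myTuple from above):
myString :: String
myString = (snd :: ((), String) -> String) myTuple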
In Haskell, as I'm sure you know, types are inferred. In other words, the compiler works out what type you want.
However, in Haskell, there are also polymorphic typeclasses, with functions that act in different ways depending on the return type. Here's an example of the Monad class, though I haven't defined everything:
class Monad m where
  return :: a -> m a
  (>>=) :: m a -> (a -> m b) -> m b
  fail :: String -> m a
We're given a lot of functions with just type signatures. Our job is to make instance declarations for different types that can be treated as Monads, like Maybe t or [t].
Have a look at this code - it won't work in the way we might expect:
return 7
That's a function from the Monad class, but because there's more than one Monad, we have to specify what return value/type we want; otherwise (in GHCi, say) it defaults to the IO monad. So:
return 7 :: Maybe Int
-- Will return...
Just 7
return 6 :: [Int]
-- Will return...
[6]
This is because [t] and Maybe t have both been made instances of the Monad type class.
Here's another example, this time involving the Random typeclass. This code throws an error:
random (mkStdGen 100)
Because random can return something of any type in the Random class, we have to specify what type we want back: the value we asked for, tupled with a new StdGen:
random (mkStdGen 100) :: (Int, StdGen)
-- Returns...
(-3650871090684229393,693699796 2103410263)
random (mkStdGen 100) :: (Bool, StdGen)
-- Returns...
(True,4041414 40692)
This can all be found in Learn You a Haskell online, though you'll have to do some long reading. This, I'm pretty much 100% certain, is the only time when type signatures are necessary.

What are the benefits of currying?

I don't think I quite understand currying, since I'm unable to see any massive benefit it could provide. Perhaps someone could enlighten me with an example demonstrating why it is so useful. Does it truly have benefits and applications, or is it just an over-appreciated concept?
(There is a slight difference between currying and partial application, although they're closely related; since they're often mixed together, I'll deal with both terms.)
The place where I first realized the benefits was when I saw operator sections:
incElems = map (+1)
--non-curried equivalent: incElems = (\elems -> map (\i -> (+) 1 i) elems)
IMO, this is totally easy to read. Now, if the type of (+) were (Int,Int) -> Int *, which is the uncurried version, it would (counter-intuitively) result in an error - but curried, it works as expected, and has type [Int] -> [Int].
You mentioned C# lambdas in a comment. In C#, you could have written incElems like so, given a function plus:
var incElems = xs => xs.Select(x => plus(1,x))
If you're used to point-free style, you'll see that the x here is redundant. Logically, that code could be reduced to
var incElems = xs => xs.Select(curry(plus)(1))
which is awful due to the lack of automatic partial application with C# lambdas. And that's the crucial point to decide where currying is actually useful: mostly when it happens implicitly. For me, map (+1) is the easiest to read, then comes .Select(x => plus(1,x)), and the version with curry should probably be avoided, if there is no really good reason.
Provided it stays readable, the benefits sum up to shorter, more readable and less cluttered code - unless some abuse of point-free style is done with it (I do love (.).(.), but it is... special).
Also, the lambda calculus would be impossible without curried functions, since it has only single-argument (but therefore higher-order) functions.
* Of course it's actually polymorphic over Num, but it's more readable like this for the moment.
Update: how currying actually works.
Look at the type of plus in C#:
int plus(int a, int b) {..}
You have to give it a tuple of values - not in C# terms, but mathematically speaking; you can't just leave out the second value. In Haskell terms, that's
plus :: (Int,Int) -> Int,
which could be used like
incElem = map (\x -> plus (1, x)) -- equal to .Select (x => plus (1, x))
That's far too many characters to type. Suppose you'd want to do this more often in the future. Here's a little helper:
curry f = \x -> (\y -> f (x,y))
plus' = curry plus
which gives
incElem = map (plus' 1)
Let's apply this to a concrete value.
incElem [1]
= (map (plus' 1)) [1]
= [plus' 1 1]
= [(curry plus) 1 1]
= [(\x -> (\y -> plus (x,y))) 1 1]
= [plus (1,1)]
= [2]
Here you can see curry at work. It turns a standard Haskell-style function application (plus' 1 1) into a call to a "tupled" function - or, viewed at a higher level, transforms the "tupled" version into the "untupled" one.
Fortunately, most of the time, you don't have to worry about this, as there is automatic partial application.
It's not the best thing since sliced bread, but if you're using lambdas anyway, it's easier to use higher-order functions without using lambda syntax. Compare:
map (max 4) [0,6,9,3] --[4,6,9,4]
map (\i -> max 4 i) [0,6,9,3] --[4,6,9,4]
These kinds of constructs come up often enough when you're using functional programming that it's a nice shortcut to have, and it lets you think about the problem from a slightly higher level: you're mapping against the "max 4" function, not some random function that happens to be defined as (\i -> max 4 i). It lets you start to think at higher levels of indirection more easily:
let numOr4 = map $ max 4
let numOr4' = (\xs -> map (\i -> max 4 i) xs)
numOr4 [0,6,9,3] --ends up being [4,6,9,4] either way;
--which do you think is easier to understand?
That said, it's not a panacea; sometimes your function's parameters will be in the wrong order for what you're trying to do with currying, so you'll have to resort to a lambda anyway. However, once you get used to this style, you start to learn how to design your functions to work well with it, and once those neurons start to connect inside your brain, previously complicated constructs can start to seem obvious in comparison.
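When the parameter order doesn't fit, flip often rescues the curried style before you reach for a lambda; a small sketch (divideBy is a made-up name):
divideBy :: Double -> Double -> Double
divideBy = flip (/)   -- swap the argument order of (/)

main :: IO ()
main = print (map (divideBy 2) [2, 4, 6])  -- [1.0,2.0,3.0]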
One benefit of currying is that it allows partial application of functions without the need of any special syntax/operator. A simple example:
mapLength = map length
mapLength ["ab", "cde", "f"]
>>> [2, 3, 1]
mapLength ["x", "yz", "www"]
>>> [1, 2, 3]
map :: (a -> b) -> [a] -> [b]
length :: [a] -> Int
mapLength :: [[a]] -> [Int]
The map function can be considered to have type (a -> b) -> ([a] -> [b]) because of currying, so when length is applied as its first argument, it yields the function mapLength of type [[a]] -> [Int].
Currying has the convenience features mentioned in other answers, but it also often serves to simplify reasoning about the language, or to implement some code much more easily than would otherwise be possible. For example, currying means that any function at all has a type that's compatible with a -> b. If you write some code whose type involves a -> b, that code can be made to work with any function at all, no matter how many arguments it takes.
The best known example of this is the Applicative class:
class Functor f => Applicative f where
  pure :: a -> f a
  (<*>) :: f (a -> b) -> f a -> f b
And an example use:
-- All possible products of numbers taken from [1..5] and [1..10]
example = pure (*) <*> [1..5] <*> [1..10]
In this context, pure and <*> adapt any function of type a -> b to work with lists of type [a]. Because of partial application, this means you can also adapt functions of type a -> b -> c to work with [a] and [b], or a -> b -> c -> d with [a], [b] and [c], and so on.
The reason this works is because a -> b -> c is the same thing as a -> (b -> c):
(+) :: Num a => a -> a -> a
pure (+) :: (Applicative f, Num a) => f (a -> a -> a)
[1..5], [1..10] :: Num a => [a]
pure (+) <*> [1..5] :: Num a => [a -> a]
pure (+) <*> [1..5] <*> [1..10] :: Num a => [a]
Another, different use of currying is that Haskell allows you to partially apply type constructors. E.g., if you have this type:
data Foo a b = Foo a b
...it actually makes sense to write Foo a in many contexts, for example:
instance Functor (Foo a) where
  fmap f (Foo a b) = Foo a (f b)
I.e., Foo is a two-parameter type constructor with kind * -> * -> *; Foo a, the partial application of Foo to just one type, is a type constructor with kind * -> *. Functor is a type class that can only be instantiated for type constructors of kind * -> *. Since Foo a is of this kind, you can make a Functor instance for it.
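Putting the pieces together in a runnable sketch (the deriving Show and the example value are mine):
data Foo a b = Foo a b deriving Show

instance Functor (Foo a) where
  fmap f (Foo a b) = Foo a (f b)

main :: IO ()
main = print (fmap (+ 1) (Foo "tag" (41 :: Int)))  -- Foo "tag" 42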
The "no-currying" form of partial application works like this:
We have a function f : (A ✕ B) → C
We'd like to apply it partially to some a : A
To do this, we build a closure out of a and f (we don't evaluate f at all, for the time being)
Then some time later, we receive the second argument b : B
Now that we have both the A and B argument, we can evaluate f in its original form...
So we recall a from the closure, and evaluate f(a,b).
A bit complicated, isn't it?
When f is curried in the first place, it's rather simpler:
We have a function f : A → B → C
We'd like to apply it partially to some a : A – which we can just do: f a
Then some time later, we receive the second argument b : B
We apply the already evaluated f a to b.
So far so nice, but more important than being simple, this also gives us extra possibilities for implementing our function: we may be able to do some calculations as soon as the a argument is received, and these calculations won't need to be done later, even if the function is evaluated with multiple different b arguments!
To give an example, consider this audio filter, an infinite impulse response filter. It works like this: for each audio sample, you feed an "accumulator function" (f) with some state parameter (in this case, a simple number, 0 at the beginning) and the audio sample. The function then does some magic, and spits out the new internal state¹ and the output sample.
Now here's the crucial bit - what kind of magic the function does depends on the coefficient² λ, which is not quite a constant: it depends both on what cutoff frequency we'd like the filter to have (this governs "how the filter will sound") and on what sample rate we're processing at. Unfortunately, the calculation of λ is a bit more complicated (lp1stCoeff $ 2*pi * (νᵥ ~*% δs)) than the rest of the magic, so we wouldn't like having to redo it for every single sample. Quite annoying, because νᵥ and δs are almost constant: they change very seldom, certainly not at each audio sample.
But currying saves the day! We simply calculate λ as soon as we have the necessary parameters. Then, at each of the many, many audio samples to come, we only need to perform the remaining, very easy magic: yⱼ = yⱼ₋₁ + λ ⋅ (xⱼ − yⱼ₋₁). So we're being efficient, and still keeping a nice, safe, referentially transparent, purely-functional interface.
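To make this concrete, here is a minimal, hypothetical sketch of such a filter; the names and the coefficient formula are simplifications of mine, not the library code alluded to above:
lowpass :: Double -> Double -> (Double -> Double -> (Double, Double))
lowpass cutoff sampleRate =
  let lambda = 1 - exp (-2 * pi * cutoff / sampleRate)  -- computed once
  in \prev x ->
       let y = prev + lambda * (x - prev)   -- cheap per-sample recurrence
       in (y, y)                            -- (new state, output sample)

main :: IO ()
main = do
  let step = lowpass 440 44100   -- lambda is fixed here, once and for all
  print (scanl (\s x -> fst (step s x)) 0 [1, 1, 1, 1])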
¹ Note that this kind of state-passing can generally be done more nicely with the State or ST monad; it's just not particularly beneficial in this example.
² Yes, this is a lambda symbol. I hope I'm not confusing anybody - fortunately, in Haskell it's clear that lambda functions are written with \, not with λ.
It's somewhat dubious to ask what the benefits of currying are without specifying the context in which you're asking the question:
In some cases, like functional languages, currying will merely be seen as something with a more local impact, where you could replace things with explicit tupled domains. However, this isn't to say that currying is useless in these languages. In some sense, programming with curried functions makes you "feel" like you're programming in a more functional style, because you more typically face situations where you're dealing with higher-order functions. Certainly, most of the time you will "fill in" all of the arguments to a function, but in the cases where you want to use the function in its partially applied form, this is a bit simpler to do in curried form. We typically tell our beginning programmers to use this when learning a functional language, just because it feels like better style and reminds them they're programming in more than just C. Having things like curry and uncurry also helps with certain conveniences within functional programming languages; I can think of arrows in Haskell as a specific example where you would use curry and uncurry a bit to apply things to different pieces of an arrow, etc...
In some cases you want to think about more than just functional programs: you can present currying/uncurrying as a way to state the elimination and introduction rules for conjunction (∧) and implication (→) in constructive logic, which provides a connection to a more elegant motivation for why it exists.
In some cases, for example, in Coq, using curried functions versus tupled functions can produce different induction schemes, which may be easier or harder to work with, depending on your applications.
I used to think that currying was simple syntax sugar that saves you a bit of typing. For example, instead of writing
(\ x -> x + 1)
I can merely write
(+1)
The latter is instantly more readable, and less typing to boot.
So if it's just a convenient short cut, why all the fuss?
Well, it turns out that because function types are curried, you can write code which is polymorphic in the number of arguments a function has.
For example, the QuickCheck framework lets you test functions by feeding them randomly-generated test data. It works on any function whose input type can be auto-generated. And, because of currying, the authors were able to rig it so this works for any number of arguments. Were functions not curried, there would be a different testing function for each number of arguments - and that would just be tedious.
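Here is a deliberately over-simplified sketch of that trick (not QuickCheck's real API): instances peel off one curried argument at a time, so a single class covers functions of any arity:
{-# LANGUAGE FlexibleInstances #-}

class Testable a where
  test :: a -> Bool

instance Testable Bool where
  test = id

instance Testable b => Testable (Int -> b) where
  test f = test (f 42)   -- supply one (here fixed) argument, recurse

main :: IO ()
main = print (test (\x y -> x + y == y + (x :: Int)))  -- True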
