How are point-free functions actually "functions"?

Conal here argues that nullary-constructed types are not functions. However, point-free functions are described as functions, for example on Wikipedia, when they take no explicit arguments in their definitions; that seems to be rather a property of currying. How exactly are they functions?
Specifically: how are f = map and f = id . map different in this context? As in, f = map is just a binding to a value that happens to be a function, where f simply "returns" map (similar to how f = 2 "returns" 2), which then takes the arguments. But f = id . map is referred to as a function because it's point-free.

Conal's blog post boils down to saying "non-functions are not functions", e.g. False is not a function. This is pretty obvious; if you consider all possible values and remove the ones which have a function type, then those that remain are... not functions.
That has absolutely nothing to do with the notion of point-free definitions.
Consider the following function definitions:
map1, map2, map3, map4 :: (a -> b) -> [a] -> [b]
map1 = map
map2 = id . map
map3 f = map f
map4 _ [] = []
map4 f (x:xs) = f x : map4 f xs
These are all definitions of the same function (and there are infinitely many more ways to define something equivalent to the map function). map1 is obviously a point-free definition; map4 is obviously not. They also both obviously have a function type (the same one!), so how can we say that point-free definitions are not functions? Only if we change our definition of "function" to something other than what is usually meant by Haskell programmers (which is that a function is something of type x -> y, for some x and y; in this case we're using a -> b as x and [a] -> [b] for y).
And the definition of map3 is "partially point-free" (point-reduced?); the definition names its first argument f, but doesn't mention the second argument.
The point in all this is that "point-free-ness" is a quality of definitions, while "being a function" is a property of values. The notion of point-free function doesn't actually make sense, since a given function can be defined many ways (some of them point-free, others not). Whenever you see someone talking about a point-free function, they mean a point-free definition.
You seem to be concerned that map1 = map isn't a function because it's just a binding to the existing value map, just like x = 2. You're confusing notions here. Remember that functions are first-class in Haskell; "things that are functions" is a subset of "things that are values", not a different class of thing! So when map is an existing value which is a function, then yes map1 = map is just binding a new name to an existing value. It's also defining the function map1; the two are not mutually exclusive.
You answer the question "is this point-free" by looking at code; the definition of a function. You answer the question "is this a function" by looking at types.

Contrary to what some people might believe, not everything in Haskell is a function. Seriously. Numbers, strings, booleans, etc. are not functions. Not even nullary functions.
Nullary Functions
A nullary function is a function which takes no arguments and performs some “side-effectful” computation. For example, consider this nullary JavaScript function:
main();

function main() {
    alert("Hello World!");
    alert("My name is Aadit M Shah.");
}
Functions that take no arguments can only return different results if they are side-effectful. Thus, they are similar to IO actions in Haskell, which take no arguments and perform some side-effectful computations:
main = do
    putStrLn "Hello World!"
    putStrLn "My name is Aadit M Shah."
Unary Functions
In contrast, functions in Haskell can never be nullary. In fact, functions in Haskell are always unary. Functions in Haskell always take one and only one argument. Multiparameter functions in Haskell can be simulated either using currying or using data structures with multiple fields.
add' :: Int -> Int -> Int -- an example of using currying
add' x y = x + y
add'' :: (Int, Int) -> Int -- an example of using multi-field data structures
add'' (x, y) = x + y
Covariance and Contravariance
Functions in Haskell are a data type, just like any other data type you may define in Haskell. However, functions are special because they are contravariant in the argument type and covariant in the return type.
When you define a new algebraic data type, all the fields of its type constructors are covariant (i.e. a source of data) instead of contravariant (i.e. a sink of data). A covariant field produces data while a contravariant field consumes data.
For example, suppose I create a new data type:
data Foo = Bar { field1 :: Char, field2 :: Int }
         | Baz { field3 :: Bool }
Here the fields field1, field2 and field3 are covariant. They produce data of the type Char, Int and Bool respectively. Consider:
let x = Baz True -- I create a new value of type Foo
in field3 x -- I can access the value of field3 because it is covariant
Now, consider the definition of a function:
data Function a b = Function { domain   :: a -- the argument type
                             , codomain :: b -- the return type
                             }
Of course, a function is not actually defined like this, but let's assume that it is. A function has two fields, domain and codomain. When we create a value of the type Function we don't know either of these two fields.
We don't know the value of domain because it is contravariant. Hence, it needs to be provided by the user.
We don't know the value of codomain because although it is covariant yet it might depend on the domain and we don't know the value of the domain.
For example, \x -> x + x is a function where the value of the domain is x and the value of the codomain is x + x. Here the domain is contravariant (i.e. a sink of data) because data goes into the function via the domain. Similarly, the codomain is covariant (i.e. a source of data) because data comes out of the function via the codomain.
The fields of algebraic data structures in Haskell (like the Foo we defined earlier) are all covariant because data comes out of those data structures via their fields. Data never goes into these structures like the way it does for the domain field of functions. Hence, they are never contravariant.
Multiparameter Functions
As I explained before, although all functions in Haskell are unary, we can emulate multiparameter functions either using currying or using data structures with multiple fields.
To understand this, I'll use a new notation. The minus sign ([-]) represents a contravariant type. The plus sign ([+]) represents a covariant type. Hence, a function from one type to another is denoted as:
[-] -> [+]
Now, the domain and the codomain of the function could each be individually replaced with other types. For example in currying, the codomain of the function is another function:
[-] -> ([-] -> [+]) -- an example of currying
Notice that when a covariant type is replaced with another type then the variance of the new type is preserved. This makes sense because this is equivalent to a function with two arguments and one return type.
On the other hand if we were to replace the domain with another function:
([+] -> [-]) -> [+]
Notice that when we replace a contravariant type with another type, the variance of the new type is flipped. This makes sense because, although ([+] -> [-]) as a whole is contravariant, its input type becomes the output of the whole function and its output type becomes the input of the whole function. For example:
function f(g) {       // g is contravariant for f (an input value for f)
    var x = 42;       // some value produced inside f (the particular value is arbitrary)
    return g(x) + 10; // x is covariant for f (an output value for f)
                      // x is contravariant for g (an input value for g)
                      // g(x) is contravariant for f (an input value for f)
                      // g(x) is covariant for g (an output value for g)
                      // g(x) + 10 is covariant for f (an output value for f)
}
Currying emulates multiparameter functions because when one function returns another function we get multiple inputs and one output because variance is preserved for the return type:
[-] -> [-] -> [+] -- a binary function
[-] -> [-] -> [-] -> [+] -- a ternary function
A data structure with multiple fields as the domain of a function also emulates multiparameter functions because variance is flipped for the argument type of a function:
([+], [+]) -- the fields of a tuple are covariant
([-], [-]) -> [+] -- a binary function, variance is flipped for arguments
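As a side note, the Prelude's curry and uncurry witness the equivalence between the two encodings; a minimal sketch (addPair and addCurried are illustrative names):
addPair :: (Int, Int) -> Int      -- uses the ([+], [+]) tuple domain
addPair (x, y) = x + y

addCurried :: Int -> Int -> Int   -- the [-] -> [-] -> [+] form
addCurried = curry addPair

addPair' :: (Int, Int) -> Int     -- and back again
addPair' = uncurry addCurried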
Non Functions
Now, if you take a look at values like numbers, strings and booleans, these values are not functions. However, they are still covariant.
For example, 5 produces a value of 5 itself. Similarly, Just 5 produces a value of Just 5 and fromJust (Just 5) produces a value of 5. None of these expressions consume a value and hence none of them are contravariant. However, in Just 5 the function Just consumes the value 5 and in fromJust (Just 5) the function fromJust consumes the value Just 5.
So everything in Haskell is covariant except for the arguments of functions (which are contravariant). This is important because every expression in Haskell must evaluate to a value (i.e. produce a value, not consume a value). At the same time we want functions to consume a value and produce a new value (hence facilitating transformation of data, beta reduction).
The end effect is that we can never have a contravariant expression. For example, the expression Just is covariant and the expression Just 5 is also covariant. However, in the expression Just 5 the function Just consumes the value 5. Hence, contravariance is restricted to function arguments and bounded by the scope of the function.
Because every expression in Haskell is covariant, people often think of non-functional values like 5 as "nullary functions". Although this intuition is insightful, it is wrong. The value 5 is not a nullary function. It is an expression which cannot be beta reduced. Similarly, the value fromJust (Just 5) is not a nullary function. It is an expression which can be beta reduced to 5, which is not a function.
However, the expression fromJust (Just (\x -> x + x)) is a function because it can be beta reduced to \x -> x + x which is a function.
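For instance, here is a minimal sketch of that last point (double is an illustrative name; it needs fromJust from Data.Maybe):
import Data.Maybe (fromJust)

double :: Int -> Int
double = fromJust (Just (\x -> x + x))   -- beta reduces to \x -> x + x

-- double 3 evaluates to 6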
Pointful and Pointfree Functions
Now, consider the function \x -> x + x. This is a pointful function because we are explicitly declaring the argument of the function by giving it the name x.
Every function can also be written in pointfree style (i.e. without explicitly declaring the argument of the function). For example, the function \x -> x + x can be written in pointfree style as join (+) as described in the following answer.
Note that join (+) is a function because it beta reduces to the function \x -> x + x. It doesn't look like a function because it has no points (i.e. explicitly declared arguments). However, it is still a function.
Pointfree functions have nothing to do with currying. Pointfree functions are about writing functions without points (e.g. join (+) instead of \x -> x + x). Currying is when one function returns another function, thereby allowing partial application (e.g. \x -> \y -> x + y which can be written in pointfree style as (+)).
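To make those examples concrete, here is a small sketch (doubled and plus are illustrative names); join (+) relies on the Monad instance for functions, where join f x = f x x:
import Control.Monad (join)

doubled :: Int -> Int
doubled = join (+)   -- pointfree form of \x -> x + x

plus :: Int -> Int -> Int
plus = (+)           -- pointfree form of \x -> \y -> x + y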
Name Binding
In the binding f = map we are just giving map the alternative name f. Note that f does not "return" map. It is just an alternative name for map. For example, in the binding x = 5 we don't say that x returns 5, because it doesn't. The name x is neither a function nor a value. It's just a name which identifies the value 5. Similarly, in f = map the name f just identifies the value of map. The name f is said to denote a function because map denotes a function.
The binding f = map is pointfree because we haven't explicitly declared any arguments of f. If we wanted to then we could have written f g xs = map g xs. This would be a pointful definition but because of eta conversion we can write it more succinctly in pointfree form as f = map. The concept of eta conversion is that \x -> f x is equivalent to f itself and that the pointful \x -> f x can be converted into the pointfree f and vice versa. Note that f g xs = map g xs is just syntactic sugar for f = \g xs -> map g xs.
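For example, eta conversion lets us peel the arguments off one at a time; f1, f2 and f3 below are illustrative names for the same function:
f1 :: (a -> b) -> [a] -> [b]
f1 g xs = map g xs   -- fully pointful

f2 :: (a -> b) -> [a] -> [b]
f2 g = map g         -- eta-reduce the second argument

f3 :: (a -> b) -> [a] -> [b]
f3 = map             -- eta-reduce the first argument; fully pointfree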
On the other hand f = id . map is a function not because it is pointfree but because id . map beta reduces to the function \x -> id (map x). BTW, any function composed with id is equivalent to itself (i.e. id . f = f . id = f). Hence, id . map is equivalent to map itself. There's no difference between f = map and f = id . map.
Just remember that f is not a function that “returns” id . map. It is just a name given to the expression id . map for convenience.
P.S. For an intro to pointfree functions read:
What does (f .) . g mean in Haskell?

Related

A Haskell function is higher order if and only if its type has more than one arrow?

A professor teaching a class I am attending claimed the following.
A higher-order function could have only one arrow when checking its type.
I don't agree with this statement I tried to prove it is wrong. I tried to set up some function but then I found that my functions probably aren't higher-order functions. Here is what I have:
f x y z = x + y + z
f :: a -> a -> a -> a
g = f 3
g :: a -> a -> a
h = g 5
h :: a -> a
At the end of the day, I think my proof was wrong, but I am still not convinced that higher-order functions can only have more than one arrow when checking the type.
So, is there any resource, or could someone perhaps prove, that a higher-order function may have only one arrow?
Strictly speaking, the statement is correct. This is because the usual definition of the term "higher-order function", taken here from Wikipedia, is a function that does one or both of the following:
takes a function as an argument, or
returns a function as its result
It is clear then that no function with a single arrow in its type signature can be a higher-order function, because in a signature a -> b, there is no "room" to create something of the form x -> y on either side of an arrow - there simply aren't enough arrows.
(This argument actually has a significant flaw, which you may have spotted, and which I'll address below. But it's probably true "in spirit" for what your professor meant.)
The converse is also, strictly speaking, true in Haskell - although not in most other languages. The distinguishing feature of Haskell here is that functions are curried. For example, a function like (+), whose signature is:
a -> a -> a
(with a Num a constraint that I'll ignore because it could just confuse the issue if we're supposed to be counting "arrows"), is usually thought of as being a function of two arguments: it takes 2 as and produces another a. In most languages, which all of course have an analogous function/operator, this would never be described as a higher-order function. But in Haskell, because functions are curried, the above signature is really just a shorthand for the parenthesised version:
a -> (a -> a)
which clearly is a higher-order function. It takes an a and produces a function of type a -> a. (Recall, from above, that returning a function is one of the things that characterises a HOF.) In Haskell, as I said, these two signatures are one and the same thing. (+) really is a higher-order function - we just often don't notice that because we intend to feed it two arguments, by which we really mean to feed it one argument, obtaining a function, and then feed that function the second argument. Thanks to Haskell's convenient, parenthesis-free syntax for applying functions to arguments, there isn't really any distinction. (This again contrasts with non-functional languages: the addition "function" there always takes exactly 2 arguments, and only giving it one will usually be an error. If the language has first-class functions, you can indeed define the curried form, for example this in Python:
def curried_add(x):
    return lambda y: x + y
but this is clearly a different function from the straightforward function of two arguments that you would normally use, and it's usually less convenient to apply because you need to call it as curried_add(x)(y) rather than just add(x, y).)
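In Haskell, by contrast, partial application needs no special definition; a minimal sketch (inc is an illustrative name):
inc :: Int -> Int
inc = (+) 1   -- applying (+) to one argument yields a function

-- inc 41 evaluates to 42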
So, if we take currying into account, the statement of your professor is strictly true.
Well, with the following exception, which I alluded to above. I've been assuming that something with a signature of the form
a -> b
is not a HOF*. That of course doesn't apply if a or b is itself a function. Usually such a function's type would visibly include an arrow, and we're tacitly assuming here that neither a nor b contains arrows. Well, Haskell has type synonyms, so we could easily define, say:
type MyFunctionType = Int -> Int
and then a function with signature MyFunctionType -> a or a -> MyFunctionType is most certainly a HOF, even though it doesn't "look like one" from just a glance at the signature.
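For example, a sketch under that assumption (applyTwice is an illustrative name):
type MyFunctionType = Int -> Int

applyTwice :: MyFunctionType -> MyFunctionType   -- only one arrow visible, yet a HOF
applyTwice f = f . f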
*To be clear here, a and b refer to specific types which are as yet unspecified - I am not referring to an actual signature a -> b, which would mean a polymorphic function that applies to any types a and b, which would not necessarily be functions.
Your functions are higher order. Indeed, take for example your function:
f :: a -> a -> a -> a
f x y z = x + y + z
This is a less verbose form of:
f :: a -> (a -> (a -> a))
So it is a function that takes an a and returns a function. A higher order function is a function that (a) takes a function as parameter, or (b) returns a function. Both can be true at the same time. Here your function f returns a function.
A function thus always has type a -> b with a the input type, and b the return type. In case a has an arrow (like (c -> d) -> b), then it is a higher order function, since it takes a function as parameter.
If b has an arrow, like a -> (c -> d), then this is a higher order function as well, since it returns a function.
Yes. Since Haskell functions are always curried, I can come up with minimal examples of higher-order functions:
1) Functions that take at least one function as a parameter, such as:
apply :: (a -> b) -> a -> b
apply f x = f x
2) Functions that take more than one (curried) argument, and therefore return a function, such as:
sum3 :: Int -> Int -> Int -> Int
sum3 a b c = a + b + c
so that its type can be read as:
sum3 :: Int -> (Int -> (Int -> Int))

Does Haskell "understand" curried function definitions?

In Haskell functions always take one parameter. Multiple parameters are implemented via Currying. That being the case, I can see how a function of two parameters would be defined as "func1" below. It's a function that returns a function (closure) that adds the outer function's single parameter to the returned function's single parameter.
However, although this is how curried functions work, that's not the regular Haskell syntax for defining a two-parameter function. Instead we're taught to define such a function like "func2".
I'd like to know how Haskell understands that func2 should behave the same way as func1. There's nothing about the definition of func2 that suggest to me that it is a function that returns a function. To the contrary it actually looks like a two-parameter function, something we're told doesn't exist!
What's the trick here? Is Haskell just born knowing that we can define multi-parameter functions in this textbook way, and that they work the way we expect anyhow? That is, is this a syntax convention that doesn't seem to be clearly documented (Haskell knows what you mean and will supply the missing function return for you), or is there some other magic at work or something I'm missing?
func1 :: Int -> (Int -> Int)
func1 x = (\y -> x + y)
func2 :: Int -> Int -> Int
func2 x y = x + y
main = do
    print (func1 7 9)
    print (func2 7 9)
In the language itself, writing a function definition of the form f x y z = _ is equivalent to f = \x y z -> _, which is equivalent to f = \x -> \y -> \z -> _. There's no theoretical reason for this; it's just that those nested lambda abstractions are a terrible eye-/finger-sore and everyone thought that it would be fine to sacrifice a bit of pedantry to make some syntax sugar for it. That's all there is on the surface and is probably all you need to know, for now.
In the implementation of the language, though, things get trickier. In GHC, which is the most common implementation, there actually is a difference between f x y = _ and f = \x -> \y -> _. When GHC compiles Haskell, it assigns arity to declarations. The former definition of f has arity 2, and the latter has arity 0. Take (.) from GHC.Base
(.) f g = \x -> f (g x)
(.) has arity 2, even though its type ((b -> c) -> (a -> b) -> a -> c) says that it can be applied up to thrice. This affects optimization: GHC will only inline a function that is saturated, or has at least as many arguments applied as its arity. In the call (maximum .), (.) will not inline, because it only has one argument (it is unsaturated). In the call (maximum . f), it will inline to \x -> maximum (f x), and in (maximum . f) 1, the (.) will inline first to a lambda abstraction (producing (\x -> maximum (f x)) 1), which will beta-reduce to maximum (f 1). If (.) were implemented
(.) f g x = f (g x)
(.) would have arity 3, which means it would inline less often (specifically the f . g case, which is a very common argument to higher order functions), likely reducing performance, which is exactly what the comment on it says:
Make sure it has TWO args only on the left, so that it inlines
when applied to two functions, even if there is no final argument
Final answer: the two forms should be equivalent, according to the language's semantics, but in GHC the two forms have different characteristics when it comes to optimization, even if they always give the same result.
When talking about type signatures, there is no such thing as a "multi-parameter function". All functions are single-parameter, period. Haskell doesn't need to somehow "translate" multi-parameter functions into single-parameter ones, because the former doesn't exist at all.
All function type signatures look like a -> b, where a is argument type and b is return type. Sometimes b may just happen to contain more arrows ->, in which case we, humans (but not the compiler), may say that the function has multiple parameters.
When talking about the syntax for implementations, i.e. f x y = z - that is merely syntactic sugar, which gets desugared (i.e. mechanically transformed) into f = \x -> \y -> z during compilation.
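A small sketch of that desugaring (add and add' are illustrative names; both definitions denote the same function):
add :: Int -> Int -> Int
add x y = x + y            -- the sugared form

add' :: Int -> Int -> Int
add' = \x -> \y -> x + y   -- what it desugars to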

How to work around F#'s type system

In Haskell, you can use unsafeCoerce to override the type system. How to do the same in F#?
For example, to implement the Y-combinator.
I'd like to offer a different solution, based on embedding the untyped lambda calculus in a typed functional language. The idea is to create a data type that allows us to change between types α and α → α, which subsequently allows to escape the restrictions of a type system. I'm not very familiar with F# so I'll give my answer in Haskell, but I believe it could be adapted easily (perhaps the only complication could be F#'s strictness).
-- | Roughly represents a morphism between @a@ and @a -> a@.
-- Therefore we can embed an arbitrary closed λ-term into @Any a@. Any time we
-- need to create a λ-abstraction, we just nest into one @Any@ constructor.
--
-- The type parameter allows us to embed ordinary values into the type and
-- retrieve results of computations.
data Any a = Any (Any a -> a)
Note that the type parameter isn't significant for combining terms. It just allows us to embed values into our representation and extract them later. All terms of a particular type Any a can be combined freely without restrictions.
-- | Embed a value into a λ-term. If viewed as a function, it ignores its
-- input and produces the value.
embed :: a -> Any a
embed = Any . const
-- | Extract a value from a λ-term, assuming it's a valid value (otherwise it'd
-- loop forever).
extract :: Any a -> a
extract x@(Any x') = x' x
With this data type we can use it to represent arbitrary untyped lambda terms. If we want to interpret a value of Any a as a function, we just unwrap its constructor.
First let's define function application:
-- | Applies a term to another term.
($$) :: Any a -> Any a -> Any a
(Any x) $$ y = embed $ x y
And λ abstraction:
-- | Represents a lambda abstraction
l :: (Any a -> Any a) -> Any a
l x = Any $ extract . x
Now we have everything we need for creating complex λ terms. Our definitions mimic the classical λ-term syntax, all we do is using l to construct λ abstractions.
Let's define the Y combinator:
-- λf.(λx.f(xx))(λx.f(xx))
y :: Any a
y = l (\f -> let t = l (\x -> f $$ (x $$ x))
             in t $$ t)
And we can use it to implement Haskell's classical fix. First we'll need to be able to embed a function of a -> a into Any a:
embed2 :: (a -> a) -> Any a
embed2 f = Any (f . extract)
Now it's straightforward to define
fix :: (a -> a) -> a
fix f = extract (y $$ embed2 f)
and subsequently a recursively defined function:
fact :: Int -> Int
fact = fix f
  where
    f _ 0 = 1
    f r n = n * r (n - 1)
Note that in the above text there is no recursive function. The only recursion is in the Any data type, which allows us to define y (which is also defined non-recursively).
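A quick usage check of the definitions above (illustrative; it assumes the embedding behaves as described):
main :: IO ()
main = print (fact 5)   -- should print 120, computed through the non-recursive fix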
In Haskell, unsafeCoerce has the type a -> b and is generally used to assert to the compiler that the thing being coerced actually has the destination type and it's just that the type-checker doesn't know it.
Another, less common use, is to reinterpret a pattern of bits as another type. For example an unboxed Double# could be reinterpreted as an unboxed Int64#. You have to be sure about the underlying representations for this to be safe.
In F#, the first application can be achieved with box |> unbox as John Palmer said in a comment on the question. If possible use explicit type arguments to make sure that you don't accidentally have the wrong coercion inferred, e.g. box<'a> |> unbox<'b> where 'a and 'b are type variables or concrete types that are already in scope in your code.
For the second application, look at the BitConverter class for specific conversions of bit-patterns. In theory you could also do something like interfacing with unmanaged code to achieve this, but that seems very heavyweight.
These techniques won't work for implementing the Y combinator because the cast is only valid if the runtime objects actually do have the target type, but with the Y combinator you actually need to call the same function again but with a different type. For this you need the kinds of encoding tricks mentioned in the question John Palmer linked to.

Evaluation strategy

How should one reason about function evaluation in examples like the following in Haskell:
let f x = ...
    x = ...
in map (g (f x)) xs
In GHC, sometimes (f x) is evaluated only once, and sometimes once for each element in xs, depending on what exactly f and g are. This can be important when f x is an expensive computation. It has just tripped a Haskell beginner I was helping and I didn't know what to tell him other than that it is up to the compiler. Is there a better story?
Update
In the following example (f x) will be evaluated 4 times:
let f x = trace "!" $ zip x x
    x = "abc"
in map (\i -> lookup i (f x)) "abcd"
With language extensions, we can create situations where f x must be evaluated repeatedly:
{-# LANGUAGE GADTs, Rank2Types #-}
module MultiEvG where

data BI where
  B :: (Bounded b, Integral b) => b -> BI

foo :: [BI] -> [Integer]
foo xs = let f :: (Integral c, Bounded c) => c -> c
             f x = maxBound - x
             g :: (forall a. (Integral a, Bounded a) => a) -> BI -> Integer
             g m (B y) = toInteger (m + y)
             x :: (Integral i) => i
             x = 3
         in map (g (f x)) xs
The crux is to have f x polymorphic even as the argument of g, and we must create a situation where the type(s) at which it is needed can't be predicted (my first stab used an Either a b instead of BI, but when optimising, that of course led to only two evaluations of f x at most).
A polymorphic expression must be evaluated at least once for each type it is used at. That's one reason for the monomorphism restriction. However, when the range of types it can be needed at is restricted, it is possible to memoise the values at each type, and in some circumstances GHC does that (needs optimising, and I expect the number of types involved mustn't be too large). Here we confront it with what is basically an inhomogeneous list, so in each invocation of g (f x), it can be needed at an arbitrary type satisfying the constraints, so the computation cannot be lifted outside the map (technically, the compiler could still build a cache of the values at each used type, so it would be evaluated only once per type, but GHC doesn't, in all likelihood it wouldn't be worth the trouble).
Monomorphic expressions need only be evaluated once, they can be shared. Whether they are is up to the implementation; by purity, it doesn't change the semantics of the programme. If the expression is bound to a name, in practice you can rely on it being shared, since it's easy and obviously what the programmer wants. If it isn't bound to a name, it's a question of optimisation. With the bytecode generator or without optimisations, the expression will often be evaluated repeatedly, but with optimisations repeated evaluation would indicate a compiler bug.
Polymorphic expressions must be evaluated at least once for every type they're used at, but with optimisations, when GHC can see that it may be used multiple times at the same type, it will (usually) still be shared for that type during a larger computation.
Bottom line: Always compile with optimisations, help the compiler by binding expressions you want shared to a name, and give monomorphic type signatures where possible.
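Applied to the traced example above, "binding to a name" looks like this (a sketch, with trace dropped for brevity):
let f x = zip x x
    x = "abc"
    fx = f x                        -- bind the shared expression to a name
in map (\i -> lookup i fx) "abcd"   -- fx is computed once and shared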
Your examples are indeed quite different.
In the first example, the argument to map is g (f x) and is passed once to map, most likely as a partially applied function.
Should g (f x), when applied to an argument within map, evaluate its first argument, then this will be done only once, and the thunk (f x) will be updated with the result.
Hence, in your first example, f x will be evaluated at most once.
Your second example requires a deeper analysis before the compiler can arrive at the conclusion that (f x) is always constant in the lambda expression. Perhaps it will never optimize it at all, because it may have knowledge that trace is not quite kosher. So, this may evaluate 4 times when tracing, and 4 times or 1 time when not tracing.
This is really dependent on GHC's optimizations, as you've been able to tell.
The best thing to do is to study the GHC core that you get after optimizing the program. I would look at the generated Core and examine whether f x had its own let statement outside the map or not.
If you want to be sure, then you should factor f x out into its own variable assigned in a let, but there's not really a guaranteed way to figure it out other than reading through Core.
All that said, with the exception of things like trace that use unsafePerformIO, this will never change the semantics of your program: how it actually behaves.
In GHC without optimizations, the body of a function is evaluated every time the function is called. (A "call" means the function is applied to arguments and the result is evaluated.) In the following example, f x is inside a function, so it will execute each time the function is called.
(GHC may optimize this expression as discussed in the FAQ [1].)
let f x = trace "!" $ zip x x
    x = "abc"
in map (\i -> lookup i (f x)) "abcd"
However, if we move f x out of the function, it will execute only once.
let f x = trace "!" $ zip x x
    x = "abc"
in map ((\f_x i -> lookup i f_x) (f x)) "abcd"
This can be rewritten more readably as
let f x = trace "!" $ zip x x
    x = "abc"
    g f_x i = lookup i f_x
in map (g (f x)) "abcd"
The general rule is that, each time a function is applied to an argument, a new "copy" of the function body is created. Function application is the only thing that may cause an expression to re-execute. However, be warned that some functions and function calls do not look like functions syntactically.
[1] http://www.haskell.org/haskellwiki/GHC/FAQ#Subexpression_Elimination

Haskell Monad bind operator confusion

Okay, so I am not a Haskell programmer, but I am absolutely intrigued by a lot of the ideas behind Haskell and am looking into learning it. But I'm stuck at square one: I can't seem to wrap my head around Monads, which seem to be fairly fundamental. I know there are a million questions on SO asking to explain Monads, so I'm going to be a little more specific about what's bugging me:
I read this excellent article (an introduction in Javascript), and thought that I understood Monads completely. Then I read the Wikipedia entry on Monads, and saw this:
A binding operation of polymorphic type (M t)→(t→M u)→(M u), which Haskell represents by the infix operator >>=. Its first argument is a value in a monadic type, its second argument is a function that maps from the underlying type of the first argument to another monadic type, and its result is in that other monadic type.
Okay, in the article that I cited, bind was a function which took only one argument. Wikipedia says two. What I thought I understood about Monads was the following:
A Monad's purpose is to take a function with different input and output types and to make it composable. It does this by wrapping the input and output types with a single monadic type.
A Monad consists of two interrelated functions: bind and unit. Bind takes a non-composable function f and returns a new function g that accepts the monadic type as input and returns the monadic type. g is composable. The unit function takes an argument of the type that f expected, and wraps it in the monadic type. This can then be passed to g, or to any composition of functions like g.
But there must be something wrong, because my concept of bind takes one argument: a function. But (according to Wikipedia) Haskell's bind actually takes two arguments! Where is my mistake?
You are not making a mistake. The key idea to understand here is currying - that a Haskell function of two arguments can be seen in two ways. The first is as simply a function of two arguments. If you have, for example, (+), this is usually seen as taking two arguments and adding them. The other way to see it is as an addition-machine producer: (+) is a function that takes a number, say x, and makes a function that will add x.
(+) x = \y -> x + y
(+) x y = (\y -> x + y) y = x + y
When dealing with monads, sometimes it is probably better, as ephemient mentioned above, to think of =<<, the flipped version of >>=. There are two ways to look at this:
(=<<) :: (a -> m b) -> m a -> m b
which is a function of two arguments, and
(=<<) :: (a -> m b) -> (m a -> m b)
which transforms the input function to an easily composed version as the article mentioned. These are equivalent just like (+) as I explained before.
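A tiny illustration of the two spellings (halve is an illustrative name):
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

a = Just 8 >>= halve   -- Just 4
b = halve =<< Just 8   -- Just 4; the same computation with the arguments flipped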
Allow me to tear down your beliefs about Monads. I sincerely hope you realize that I am not trying to be rude; I'm simply trying to avoid mincing words.
A Monad's purpose is to take a function with different input and output types and to make it composable. It does this by wrapping the input and output types with a single monadic type.
Not exactly. When you start a sentence with "A Monad's purpose", you're already on the wrong foot. Monads don't necessarily have a "purpose". Monad is simply an abstraction, a classification which applies to certain types and not to others. The purpose of the Monad abstraction is simply that, abstraction.
A Monad consists of two interrelated functions: bind and unit.
Yes and no. The combination of bind and unit are sufficient to define a Monad, but the combination of join, fmap, and unit is equally sufficient. The latter is, in fact, the way that Monads are typically described in Category Theory.
Bind takes a non-composable function f and returns a new function g that accepts the monadic type as input and returns the monadic type.
Again, not exactly. A monadic function f :: a -> m b is perfectly composable, with certain types. I can post-compose it with a function g :: m b -> c to get g . f :: a -> c, or I can pre-compose it with a function h :: c -> a to get f . h :: c -> m b.
But you got the second part absolutely right: (>>= f) :: m a -> m b. As others have noted, Haskell's bind function takes the arguments in the opposite order.
g is composable.
Well, yes. If g :: m a -> m b, then you can pre-compose it with a function f :: c -> m a to get g . f :: c -> m b, or you can post-compose it with a function h :: m b -> c to get h . g :: m a -> c. Note that c could be of the form m v where m is a Monad. I suppose when you say "composable" you mean to say "you can compose arbitrarily long chains of functions of this form", which is sort of true.
The unit function takes an argument of the type that f expected, and wraps it in the monadic type.
A roundabout way of saying it, but yes, that's about right.
This [the result of applying unit to some value] can then be passed to g, or to any composition of functions like g.
Again, yes. Although it is generally not idiomatic Haskell to call unit (or in Haskell, return) and then pass that to (>>= f).
-- instead of
return x >>= f >>= g
-- simply go with
f x >>= g
-- instead of
\x -> return x >>= f >>= g
-- simply go with
f >=> g
-- or
g <=< f
The article you link is based on sigfpe's article, which uses a flipped definition of bind:
The first thing is that I've flipped the definition of bind and written it as the word 'bind' whereas it's normally written as the operator >>=. So bind f x is normally written as x >>= f.
So, the Haskell bind takes a value enclosed in a monad, and returns a function, which takes a function and then calls it with the extracted value. I might be using imprecise terminology, so it's maybe better with code.
You have:
sine x = (sin x, "sine was called.")
cube x = (x * x * x, "cube was called.")
Now, translating your JS bind (Haskell does automatic currying, so calling bind f returns a function that takes a tuple, and then pattern matching takes care of unpacking it into x and s, I hope that's understandable):
bind f (x, s) = (y, s ++ t)
  where (y, t) = f x
You can see it working:
*Main> :t sine
sine :: Floating t => t -> (t, [Char])
*Main> :t bind sine
bind sine :: Floating t1 => (t1, [Char]) -> (t1, [Char])
*Main> (bind sine . bind cube) (3, "")
(0.956375928404503,"cube was called.sine was called.")
Now, let's reverse arguments of bind:
bind' (x, s) f = (y, s ++ t)
  where (y, t) = f x
You can clearly see it's still doing the same thing, but with a bit different syntax:
*Main> bind' (bind' (3, "") cube) sine
(0.956375928404503,"cube was called.sine was called.")
Now, Haskell has a syntax trick that allows you to use any function as an infix operator. So you can write:
*Main> (3, "") `bind'` cube `bind'` sine
(0.956375928404503,"cube was called.sine was called.")
Now rename bind' to >>= ((3, "") >>= cube >>= sine) and you've got what you were looking for. As you can see, with this definition, you can effectively get rid of the separate composition operator.
Translating the new thing back into JavaScript would yield something like this (notice that again, I only reverse the argument order):
var bind = function(tuple) {
    return function(f) {
        var x = tuple[0],
            s = tuple[1],
            fx = f(x),
            y = fx[0],
            t = fx[1];
        return [y, s + t];
    };
};
// ugly, but it's JS, after all
var f = function(x) { return bind(bind(x)(cube))(sine); }
f([3, ""]); // [0.956375928404503, "cube was called.sine was called."]
Hope this helps and doesn't introduce more confusion — the point is that those two bind definitions are equivalent, only differing in call syntax.
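As an aside, the (value, log) pairing used here is essentially the standard Writer monad, whose real (>>=) does the same log-appending; a minimal sketch using Control.Monad.Writer (sineW and cubeW are illustrative names):
import Control.Monad.Writer

sineW :: Double -> Writer String Double
sineW x = writer (sin x, "sine was called.")

cubeW :: Double -> Writer String Double
cubeW x = writer (x * x * x, "cube was called.")

result :: (Double, String)
result = runWriter (return 3 >>= cubeW >>= sineW)
-- (0.956375928404503, "cube was called.sine was called.")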
