In what sense is one function "less defined" than another? - haskell

What I mean is that, from definition:
fix f is the least fixed point of the function f
In other words:
the least defined x such that f x = x.
The least defined value of any nullary type is undefined. There is still some ambiguity here, since undefined may mean either "will throw an error" (the better case) or "will never terminate" (the worse one). As both reasoning and experiment show, fix (+1) and fix pred :: Word both never seem to approach termination (even though pred 0 is an error), so of these two alternatives it is apparently always the worse one, "never terminate", that gets chosen. (What can escape fix is the boring const 1, but more on that later.)
This is not the useful way of applying fix.
The useful way of applying fix is something along the lines of:
fix (\f x -> if x > 7 then f (x - 1) else x)
-- That is, using fix to magically produce a recursive function. (This never ceases to amaze me.) This may be regarded as choosing between two functions: \f x -> f (x - 1) and \_ x -> x, one of which never evaluates its first bound variable. It is a dangerous toy, as we may still obtain a function that is non-terminating on half its domain as easily as by accidentally flipping > to < or - to +.
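For illustration, here is the same trick producing a complete recursive function, as a standard sketch (the name factorial is mine; fix lives in Data.Function):
import Data.Function (fix)

-- A recursive function obtained without writing recursion ourselves:
-- fix feeds the function its own fixed point as the first argument.
factorial :: Integer -> Integer
factorial = fix (\rec n -> if n <= 1 then 1 else n * rec (n - 1))
-- factorial 5 == 120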
So, somehow, of these two functions:
f1 = \f x -> f (x - 1)
f2 = \_ x -> x
-- The latter is "least defined". If we squint hard, we can actually recognise our boring fellow const in it, just flip'd.
Now, this is counter-intuitive. f2 is actually more fault tolerant, so, in a sense, it's defined for more input values than f1. (In particular, this is what lets it escape the otherwise infinite loop of fix.) Specifically, f2 is defined for all the same pairs (f, x) of input as f1, plus the pairs (undefined, x). Along the same lines, const 1 is the most fault tolerant of all unary functions.
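A quick GHCi sketch of that fault tolerance, using the bodies of f1 and f2 directly (exception output abridged):
ghci> (\_ x -> x) undefined (5 :: Int)
5
ghci> (\f x -> f (x - 1)) undefined (5 :: Int)
*** Exception: Prelude.undefined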
So, how do I understand definedness now?
some justification for the question
There is this answer nearby that offers a somewhat different intuition for fix than the one proposed in this question. It also emphasizes that for a full understanding one must turn to an external tutorial on denotational semantics.
I would like to have an answer that either supports, or explains the wrongs of, the intuition proposed here, and also, if domain theory really is behind that ordering put forward in the comments, includes at least some rules of thumb that allow a limited but practical application of domain theory. One particular part of the question that I'd like to see light shed upon is whether a fixable function can always be decomposed into a constant function and a reduction function, and what the definitions of these classes of functions would be.
My wish is for an actual, practical answer on building useful and safe fix-encoded recursion, backed by solid mathematical reasoning.

In Haskell, functions are pure. They take input and yield output. So the natural question arises: what about functions that don't terminate?
f x = 1 + f x
This function will lock your interpreter in an infinite loop. But mathematically, we have to say it "returns" something, or else it's not a function. We call this special "something went wrong" value the "bottom" value and we denote it with the symbol ⊥.
So, mathematically speaking, the Haskell Int type contains every integer, plus an additional special element: ⊥. We call a type which contains ⊥ a "lifted" type, and pretty much all types you'll use in Haskell are lifted.[1] As it turns out, an infinite loop is not the only way to invoke ⊥. We can also do so by "crashing" the interpreter in other ways. The most common way you'll see is with undefined, a built-in function which halts the program with a generic error.
But there's another problem. Specifically, the halting problem. If ⊥ is supposed to represent infinite loops and other unpredictable issues, there are certain functions we can't write in Haskell. For example, the following pseudocode is nonsensical.
doesItHalt :: a -> Bool
doesItHalt ⊥ = False
doesItHalt _ = True
This would check if the resulting value is ⊥ or not, which would solve the halting problem. Clearly, we can't do this in Haskell. So what functions can we define in Haskell? We can define those which are monotonic with respect to the definedness ordering. We define ⊑ to be this ordering. We say ⊥ is the least-defined value, so ⊥ ⊑ x for all x. ⊑ is a partial ordering, so some elements cannot be compared. While 1 ⊑ 1, it is not true that either 1 ⊑ 2 or 2 ⊑ 1. In pure English, we're saying that 1 is certainly less-than-or-equally-defined-as 1 (clearly; they're the same value), but it doesn't make sense to say 1 is more or less defined than 2. They're just... different values.
In Haskell, we can only define functions which are monotonic with respect to this ordering. So we can only define functions for which, for all values a and b, if a ⊑ b then f a ⊑ f b. Our above doesItHalt fails, since, for instance, ⊥ ⊑ "foo" but f ⊥ = False and f "foo" = True. Like we said before, two fully defined but non-equal values are not comparable. So this function fails to work.
In simple terms, the reason we define the ordering to be this way is because, when we "inspect" a value using pattern matching, that acts as an assertion that it must be at least defined enough for us to see the parts we matched on. If it's not, then we'll always get ⊥, since the program will crash.
It's worth noting, before we discuss fix, that there are "partially defined" values. For example, 1 : ⊥ (in Haskell, we could write this as 1 : undefined) is a list whose first element is defined but for which the tail of the list is undefined. In some sense, this value is "more defined" than a simple ⊥, since we can at least pattern match and extract the first value. So we would say ⊥ ⊑ (1 : ⊥) ⊑ (1 : []). Thus, we end up with a hierarchy of "definedness".
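A quick GHCi illustration of such a partially defined list (exception output abridged):
ghci> xs = 1 : undefined
ghci> head xs
1
ghci> length xs
*** Exception: Prelude.undefined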
Now, fix says it returns the least-defined fixed point. A fixed point of a function f is a value x such that x = f x. Let's try it out with a few examples and see if we can figure out why it says it that way. Let's define a function
f 0 = 1
f x = x
This function has a lot of fixed points. For any x other than zero, f x = x. By our "least-defined" principle, which one will be returned? ⊥ will, actually, since f ⊥ will return ⊥. If we pass undefined to f, the first pattern match will cause the program to crash, since we're comparing an undefined value against 0. So ⊥ is a fixed-point, and since it's the least possible defined value, it will be returned by fix f. Internally, fix works by applying the function to itself infinitely, so this makes some amount of sense. Applying f (f (f ...)) will keep trying to compare the inner argument against 0 and will never terminate. Now let's try a different function.
g x = 0
Applying this function to itself infinitely gives 0. As it turns out, fix g = 0. Why was 0 returned in this case and not ⊥? As it turns out, ⊥ fails to be a fixed-point of this function. g ⊥ = 0. Since the argument is never inspected and Haskell is a non-strict language, g will return 0, even if you pass undefined or error "Oops!" or some absurd infinitely recursing value as an argument. Thus, the only fixed point of g, even over a lifted type, is 0, since g 0 = 0. Thus, 0 is truly the least-defined fixed point of g.
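A quick GHCi check of both claims, with the f and g above loaded (fix is in Data.Function; the last call never terminates and has to be interrupted by hand):
ghci> import Data.Function (fix)
ghci> fix g
0
ghci> fix f
^C Interrupted.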
So, in summary, we define a mathematical framework in order to rigorously describe the notion that certain functions don't terminate. "least-defined" is just a very mathematically precise way of saying that fix won't work on functions which are always completely strict in their argument. If f ⊥ is going to return ⊥ then fix f is also going to return ⊥.
* Most of this answer was paraphrased and summarized from the wiki page on Denotational Semantics. I encourage reading that page for more information; it's written to be very understandable to non-mathematicians and is very informative.
[1] A few primitive types are unlifted. For example, GHC-specific Int# is an integer type that does not contain ⊥ and is used internally in some places.

Related

If Either can be either Left or Right but not both, then why does it correspond to OR instead of XOR in Curry-Howard correspondence?

When I asked this question, one of the answers, now deleted, suggested that the type Either corresponds to XOR, rather than OR, in the Curry-Howard correspondence, because it cannot be Left and Right at the same time.
Which is correct?
If you have a value of type P and a value of type Q (that is, you have both a proof of P and a proof of Q), then you are still able to provide a value of type Either P Q.
Consider
x :: P
y :: Q
...
z :: Either P Q
z = Left x -- Another possible proof would be `Right y`
While Either does not have a specific case that explicitly represents this situation (unlike These), it does not do anything to exclude it (as in exclusive OR).
This third case, where both have proofs, is a bit different from the other two cases, where only one has a proof; this reflects the fact that "not excluding" something is a bit different from "including" something in intuitionistic logic, since Either does not provide a particular witness for this fact. However, Either is not an XOR in the way that XOR would typically work since, as I said, it does not exclude the case where both parts have proofs. What Daniel Wagner proposes in this answer, on the other hand, is much closer to an XOR.
Either is kind of like an exclusive OR in terms of what its possible witnesses are. On the other hand, it is like an inclusive OR when you consider whether or not you can actually create a witness in four possible scenarios: having a proof of P and a refutation of Q, having a proof of Q and a refutation of P, having a proof of both or having a refutation of both.[1] While you can construct a value of type Either P Q when you have a proof of both P and Q (similar to an inclusive OR), you cannot distinguish this situation from the situation where only P has a proof or only Q has a proof using only a value of type Either P Q (kind of similar to an exclusive OR). Daniel Wagner's solution, on the other hand, is similar to exclusive OR on both construction and deconstruction.
It is also worth mentioning that These more explicitly represents the possibility of both having proofs. These is similar to inclusive OR on both construction and deconstruction. However, it's also worth noting that there is nothing preventing you from using an "incorrect" constructor when you have a proof of both P and Q. You could extend These to be even more representative of an inclusive OR in this regard with a bit of additional complexity:
data IOR a b
= OnlyFirst a (Not b)
| OnlySecond (Not a) b
| Both a b
type Not a = a -> Void
The potential "wrong constructor" issue of These (and the lack of a "both" witness in Either) doesn't really matter if you are only interested in a proof irrelevant logical system (meaning that there is no way to distinguish between any two proofs of the same proposition), but it might matter in cases where you want more computational relevance in the logic.[2]
In the practical situation of writing computer programs that are actually meant to be executed, computational relevance is often extremely important. Even though 0 and 23 are both proofs that the Int type is inhabited, we certainly like to distinguish between the two values in programs, in general!
Regarding "construction" and "destruction"
Essentially, I just mean "creating values of a type" by construction and "pattern matching" by destruction (sometimes people use the words "introduction" and "elimination" here, particularly in the context of logic).
In the case of Daniel Wagner's solutions:
Construction: When you construct a value of type Xor A B, you must provide a proof of exactly one of A or B and a refutation of the other one. This is similar to exclusive OR. It is not possible to construct a value of this type unless you have a refutation of either A or B and a proof of the other one. A particularly significant fact is that you cannot construct a value of this type if you have a proof of both A and B and you don't have a refutation of either of them (unlike inclusive OR).
Destruction: When you pattern match on a value of type Xor A B, you always have a proof of one of the types and a refutation of the other. It will never give you a proof of both of them. This follows from its definition.
In the case of IOR:
Construction: When you create a value of type IOR A B, you must do exactly one of the following: (1) provide a proof of A and a refutation of B, (2) provide a proof of B and a refutation of A, (3) provide a proof of both A and B. This is like inclusive OR. These three possibilities correspond exactly to each of the three constructors of IOR, with no overlap. Note that, unlike the situation with These, you cannot use the "incorrect constructor" in the case where you have a proof of both A and B: the only way to make a value of type IOR A B in this case is to use Both (since you would otherwise need to provide a refutation of either A or B).
Destruction: Since the three possible situations where you have a proof of at least one of A and B are exactly represented by IOR, with a separate constructor for each (and no overlap between the constructors), you will always know exactly which of A and B are true and which is false (if applicable) by pattern matching on it.
Pattern matching on IOR
Pattern matching on IOR works exactly like pattern matching on any other algebraic datatype. Here is an example:
x :: IOR Char Int
x = Both 'c' 3
y :: IOR Char Void
y = OnlyFirst 'a' (\v -> v)
f :: Not p -> IOR p Int
f np = OnlySecond np 7
z :: IOR Void Int
z = f notVoid
g :: IOR p Int -> Int
g w =
  case w of
    OnlyFirst p q -> -1
    OnlySecond p q -> q
    Both p q -> q
-- We can show that the proposition represented by "Void" is indeed false:
notVoid :: Not Void
notVoid = \v -> v
Then a sample GHCi session, with the above code loaded:
ghci> g x
3
ghci> g z
7
[1] This gets a bit more complex when you consider that some statements are undecidable and therefore you cannot construct a proof or a refutation for them.
[2] Homotopy type theory would be one example of a proof relevant system, but this is reaching the limit of my knowledge as of now.
The confusion stems from the Boolean truth-table exposition of logic. In particular, when both arguments are True, OR is True, whereas XOR is False. Logically it means that to prove OR it's enough to provide the proof of one of the arguments; but it's okay if the other one is True as well; we just don't care.
In Curry-Howard interpretation, if somebody gives you an element of Either a b, and you were able to extract the value of a from it, you still know nothing about b. It could be inhabited or not.
On the other hand, to prove XOR, you not only need the proof of one argument, you must also provide the proof of the falsehood of the other argument.
So, with Curry-Howard interpretation, if somebody gives you an element of Xor a b and you were able to extract the value of a from it, you would conclude that b is uninhabited (that is, isomorphic to Void). Conversely, if you were able to extract the value of b, then you'd know that a was uninhabited.
The proof of falsehood of a is a function a->Void. Such a function would be able to produce a value of Void, given a value of a, which is clearly impossible. So there can be no values of a. (There is only one function that returns Void, and that's the identity on Void.)
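As a small sketch of that last point (Data.Void in base provides Void and absurd):
import Data.Void (Void, absurd)

-- From a value of a and a refutation of a we could produce anything at all,
-- which is why the two can never coexist.
contradiction :: a -> (a -> Void) -> b
contradiction x notX = absurd (notX x)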
Perhaps try replacing “proof” in the Curry-Howard isomorphism with “evidence”.
Below I will use italics for propositions and proofs (which I will also call evidence), the mathematical side of the isomorphism, and I will use code for types and values.
The question is: suppose I know the type for [values corresponding to] evidence that P is true (I will call this type P), and I know the type for evidence that Q is true (I call this type Q), then what is the type for evidence of the proposition R = P OR Q?
Well there are two ways to prove R: we can prove P, or we can prove Q. We could prove both but that would be more work than necessary.
Now ask what the type should be? It is the type for things which are either evidence of P or evidence of Q. I.e. values which are either things of type P or things of type Q. The type Either P Q contains precisely those values.
What if you have evidence of P AND Q? Well this is just a value of type (P, Q), and we can write a simple function:
f :: (p,q) -> Either p q
f (a,b) = Left a
And this gives us a way to prove P OR Q if we can prove P AND Q. Therefore Either cannot correspond to xor.
What is the type for P XOR Q?
At this point I will say that negations are a bit annoying in this sort of constructive logic.
Let’s convert the question to things we understand, and a simpler thing we don’t:
P XOR Q = (P AND (NOT Q)) OR (Q AND (NOT P))
Ask now: what is the type for evidence of NOT P?
I don’t have an intuitive explanation for why this is the simplest type but if NOT P were true then evidence of P being true would be a contradiction, which we say as proving FALSE, the unprovable thing (aka BOTTOM or BOT). That is, NOT P may be written in simpler terms as: P IMPLIES FALSE. The type for FALSE is called Void (in haskell). It is a type which no values inhabit because there are no proofs of it. Therefore if you could construct a value of that type you would have problems. IMPLIES corresponds to functions and so the type corresponding to NOT P is P -> Void.
We put this with what we know and get the following equivalence in the language of propositions:
P XOR Q = (P AND (NOT Q)) OR (Q AND (NOT P)) = (P AND (Q IMPLIES FALSE)) OR ((P IMPLIES FALSE) AND Q)
The type is then:
type Xor p q = Either (p, q -> Void) (p -> Void, q)
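As a small extra sketch (not part of the original answer, and writing the same type with a Not alias), pattern matching on this Xor really does hand you exactly one proof and one refutation, whichever side you are on:
import Data.Void (Void)

type Not a = a -> Void
type Xor p q = Either (p, Not q) (Not p, q)

-- Eliminator: both branches receive one proof and one refutation.
xorElim :: (p -> Not q -> r) -> (Not p -> q -> r) -> Xor p q -> r
xorElim onLeft _ (Left (p, notQ)) = onLeft p notQ
xorElim _ onRight (Right (notP, q)) = onRight notP q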

Question on Pattern Matching: Why can I not use the same arguments for a conjunction?

Context from the Programming in Haskell textbook, page 32
This version also has the benefit that, under lazy evaluation as
discussed in chapter 12, if the first argument is False, then the
result False is returned without the need to evaluate the second
argument. In practice, the prelude defines ∧ using equations that have
this same property, but make the choice about which equation applies
using the value of the first argument only:
True ∧ b = b
False ∧ _ = False
That is, if the first argument is True, then the result is the value
of the second argument, and, if the first argument is False, then the
result is False. Note that for technical reasons the same name may not
be used for more than one argument in a single equation. For example,
the following definition for the operator ∧ is based upon the
observation that, if the two arguments are equal, then the result is
the same value, otherwise the result is False, but is invalid because
of the above naming requirement:
Question
I do not understand the explanation of why the following definition cannot be used:
b ∧ b = b
_ ∧ _ = False
To accept a definition such as
myFunction x x = ...
myFunction _ _ = ...
we need to be able to test two values for equality. That is, given two arbitrary values x and y, we need to be able to compute whether x == y holds.
This can be done in many cases, but (perhaps surprisingly) not all. We surely can do that when x and y are Bools or Ints, but not when they are of type Integer -> Bool or IO (), for instance, since we cannot really test functions on infinitely many points, or IO actions on infinitely many worlds.
Consequently, pattern matching is allowed only when variables are used linearly, i.e. when they appear at most once in patterns. So, myFunction x x = ... is disallowed.
If needed, when == is defined (like on Bools) we can use guards to express the same idea:
myFunction x1 x2 | x1 == x2  = ....
                 | otherwise = ....
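For the ∧ operator from the textbook excerpt, that guard-based idea would look something like this (a sketch; I use a fresh operator name rather than redefining the Prelude's &&):
(&&&) :: Bool -> Bool -> Bool
b1 &&& b2 | b1 == b2  = b1
          | otherwise = False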
In principle, Haskell could automatically translate non-linear patterns into patterns with guards using ==, possibly causing an error if == is not available for the type at hand. However, it was not defined this way, and the guard must be written explicitly when needed.
This automatic translation could be convenient but could also be a source of subtle bugs, if one inadvertently reuses a variable in a pattern without realizing it.

How does return statement work in Haskell? [duplicate]

This question already has answers here:
What's so special about 'return' keyword
(3 answers)
Closed 5 years ago.
Consider these functions
f1 :: Maybe Int
f1 = return 1
f2 :: [Int]
f2 = return 1
Both have the same expression return 1, but the results are different: f1 gives the value Just 1 and f2 gives the value [1].
Looks like Haskell invokes two different versions of return based on the return type. I'd like to know more about this kind of function invocation. Is there a name for this feature in programming languages?
This is a long meandering answer!
As you've probably seen from the comments and Thomas's excellent (but very technical) answer, you've asked a very hard question. Well done!
Rather than try to explain the technical answer I've tried to give you a broad overview of what Haskell does behind the scenes without diving into technical detail. Hopefully it will help you to get a big picture view of what's going on.
return is an example of type inference.
Most modern languages have some notion of polymorphism. For example, var x = 1 + 1 will set x equal to 2. In a statically typed language 2 will usually be an int. If you say var y = 1.0 + 1.0 then y will be a float. The operator + (which is just a function with a special syntax) is chosen based on the types of its arguments.
Most imperative languages, especially object oriented languages, can only do type inference one way. Every variable has a fixed type. When you call a function it looks at the types of the argument and chooses a version of that function that fits the types (or complains if it can't find one).
When you assign the result of a function to a variable the variable already has a type and if it doesn't agree with the type of the return value you get an error.
So in an imperative language the "flow" of type deduction follows time in your program: deduce the type of a variable, do something with it, and deduce the type of the result. In a dynamically typed language (such as Python or JavaScript) the type of a variable is not assigned until the value of the variable is computed (which is why there don't seem to be types). In a statically typed language the types are worked out ahead of time (by the compiler), but the logic is the same. The compiler works out what the types of variables are going to be, but it does so by following the logic of the program in the same way the program runs.
In Haskell the type inference also follows the logic of the program. Being Haskell, it does so in a very mathematically pure way (called System F). The language of types (that is, the rules by which types are deduced) is similar to Haskell itself.
Now remember Haskell is a lazy language. It doesn't work out the value of anything until it needs it. That's why it makes sense in Haskell to have infinite data structures. It never occurs to Haskell that a data structure is infinite because it doesn't bother to work it out until it needs to.
Now all that lazy magic happens at the type level too. In the same way that Haskell doesn't work out what the value of an expression is until it really needs to, Haskell doesn't work out what the type of an expression is until it really needs to.
Consider this function
func (x : y : rest) = (x,y) : func rest
func _ = []
If you ask Haskell for the type of this function it has a look at the definition, sees [] and : and deduces that it's working with lists. But it never needs to look at the types of x and y; it just knows that they have to be the same because they come from the same list. So it deduces the type of the function as [a] -> [(a, a)], where a is a type that it hasn't bothered to work out yet.
So far no magic. But it's useful to notice the difference between this idea and how it would be done in an OO language. Haskell doesn't convert the arguments to Object, do its thing and then convert back. Haskell just hasn't been asked explicitly what the type of the list is. So it doesn't care.
Now try typing the following into ghci
maxBound - length ""
maxBound : "Hello"
Now what just happened!? maxBound must be a Char because I put it on the front of a string, and it must be an integer because I subtracted a length from it and got a number. Plus the two values are clearly very different.
So what is the type of maxBound? Let's ask ghci!
:type maxBound
maxBound :: Bounded a => a
Aaargh! What does that mean? Basically it means that it hasn't bothered to work out exactly what a is, but it has to be Bounded. If you type :info Bounded you get three useful lines
class Bounded a where
  minBound :: a
  maxBound :: a
and a lot of less useful lines
So if a is Bounded there are values minBound and maxBound of type a.
In fact, under the hood a Bounded constraint is just a value: a record with fields minBound and maxBound. Because it's a value, Haskell doesn't look at it until it really needs to.
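To make that "record of values" idea concrete, here is a rough sketch of the kind of dictionary the compiler passes around for a Bounded constraint (the names are made up; GHC's real dictionaries are internal):
-- A hypothetical stand-in for the dictionary behind a Bounded constraint.
data BoundedDict a = BoundedDict
  { minBoundD :: a
  , maxBoundD :: a
  }

-- The Int "instance" is then just a value of that record type.
boundedIntDict :: BoundedDict Int
boundedIntDict = BoundedDict { minBoundD = minBound, maxBoundD = maxBound }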
So I appear to have meandered somewhere in the region of the answer to your question. Before we move on to return (which, you may have noticed from the comments, is a wonderfully complex beast), let's look at read.
ghci again
read "42" + 7
read "'H'" : "ello"
length (read "[1,2,3]")
and hopefully you won't be too surprised to find that there are definitions
read :: Read a => String -> a

class Read a where
  read :: String -> a
so Read a is just a record containing a single value, which is a function String -> a. It's very tempting to assume that there is one read function which looks at a string, works out what type is contained in the string and returns a value of that type. But it does the opposite. It completely ignores the string until it's needed. When the value is needed, Haskell first works out what type it's expecting; once it's done that, it goes and gets the appropriate version of the read function and combines it with the string.
now consider something slightly more complex
readList :: Read a => [String] -> [a]
readList strs = map read strs
under the hood readList actually takes two arguments
readList' :: Read a -> [String] -> [a]   -- pseudocode: the constraint has become a record argument
readList' {read = f} strs = map f strs
Again, as Haskell is lazy it only bothers looking at the arguments when it needs to find out the return value; at that point it knows what a is, so the compiler can go and find the right version of Read. Until then it doesn't care.
Hopefully that's given you a bit of an idea of what's happening and why Haskell can "overload" on the return type. But it's important to remember it's not overloading in the conventional sense. Every function has only one definition. It's just that one of the arguments is a bag of functions. readList' doesn't ever know what types it's dealing with. It just knows it gets a function String -> a and some Strings; to do the application it just passes the arguments to map. map in turn doesn't even know it gets strings. When you get deeper into Haskell it becomes very important that functions don't know very much about the types they're dealing with.
Now let's look at return.
Remember how I said that the type system in Haskell was very similar to Haskell itself. Remember that in Haskell functions are just ordinary values.
Does this mean I can have a type that takes a type as an argument and returns another type? Of course it does!
You've seen some type functions: Maybe takes a type a and returns another type, which can either be Just a or Nothing. [] takes a type a and returns a list of as. Type functions in Haskell are usually containers. For example I could define a type function BinaryTree which stores a load of a's in a tree-like structure. There are of course lots of much stranger ones.
So, if these type functions are similar to ordinary types I can have a typeclass that contains type functions. One such typeclass is Monad
class Monad m where
  return :: a -> m a
  (>>=) :: m a -> (a -> m b) -> m b
so here m is some type function. If I want to define Monad for m I need to define return and the scary-looking operator below it (which is called bind).
As others have pointed out, return is a really misleading name for a fairly boring function. The team that designed Haskell have since realised their mistake and they're genuinely sorry about it. return is just an ordinary function that takes an argument and returns a Monad with that type in it. (You never asked what a Monad actually is, so I'm not going to tell you.)
Let's define Monad for m = Maybe!
First I need to define return. What should return x be? Remember I'm only allowed to define the function once, so I can't look at x because I don't know what type it is. I could always return Nothing, but that seems a waste of a perfectly good function. Let's define return x = Just x because that's literally the only other thing I can do.
What about the scary bind thing? What can we say about x >>= f? Well, x is a Maybe a for some unknown type a, and f is a function that takes an a and returns a Maybe b. Somehow I need to combine these to get a Maybe b.
So I need to define Nothing >>= f. I can't call f because it needs an argument of type a, and I don't have a value of type a; I don't even know what a is. I've only got one choice, which is to define
Nothing >>= f = Nothing
What about Just x >>= f? Well, I know x is of type a and f takes an a as an argument, so I can set y = f x and deduce that y is of type Maybe b. That is exactly the type I'm supposed to produce, so ...
Just x >>= f = f x
So I've got a Monad! What if m is the list type? Well, I can follow a similar sort of logic and define
return x = [x]
[] >>= f = []
(x : xs) >>= f = f x ++ (xs >>= f)
Hooray another Monad! It's a nice exercise to go through the steps and convince yourself that there's no other sensible way of defining this.
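A quick GHCi check that the two instances we just worked out behave the way the standard Prelude ones do:
ghci> Just 3 >>= \x -> Just (x + 1)
Just 4
ghci> Nothing >>= \x -> Just (x + 1)
Nothing
ghci> [1,2,3] >>= \x -> [x, x * 10]
[1,10,2,20,3,30]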
So what happens when I call return 1?
Nothing!
Haskell's lazy, remember. The thunk return 1 (technical term) just sits there until someone needs the value. As soon as Haskell needs the value, it knows what type the value should be. In particular it can deduce that m is the list type. Now that it knows that, Haskell can find the instance of Monad for lists. As soon as it does that, it has access to the correct version of return.
So finally Haskell is ready to call return, which in this case returns [1]!
The return function is from the Monad class:
class Applicative m => Monad (m :: * -> *) where
  ...
  return :: a -> m a
So return takes any value of type a and results in a value of type m a. The monad, m, as you've observed is polymorphic using the Haskell type class Monad for ad hoc polymorphism.
At this point you probably realize return is not a good, intuitive name. It's not even a built-in function or a statement like in many other languages. In fact a better-named and identically-operating function exists: pure. In almost all cases return = pure.
That is, the function return is the same as the function pure (from the Applicative class). I often think to myself "this monadic value is purely the underlying a", and I try to use pure instead of return if there isn't already a convention in the codebase.
You can use return (or pure) for any type that is an instance of Monad. This includes the Maybe monad, to get a value of type Maybe a:
instance Monad Maybe where
  ...
  return = pure -- which is from Applicative
  ...
instance Applicative Maybe where
  pure = Just
Or for the list monad to get a value of [a]:
instance Applicative [] where
  {-# INLINE pure #-}
  pure x = [x]
Or, as a more complex example, Aeson's parse monad to get a value of type Parser a:
instance Applicative Parser where
  pure a = Parser $ \_path _kf ks -> ks a

Monads, composition and the order of computation

All the monad articles often state that monads allow you to sequence effects in order.
But what about simple composition? Ain't
f x = x + 1
g x = x * 2
result = f (g x)
require g x to be computed before f ...?
Do monads do the same but with handling of effects?
Disclaimer: Monads are a lot of things. They are notoriously difficult to explain, so I will not attempt to explain what monads are in general here, since the question does not ask for that. I will assume you have a basic grasp on what the Monad interface is as well as how it works for some useful datatypes, like Maybe, Either, and IO.
What is an effect?
Your question begins with a note:
All the monad articles often state that monads allow you to sequence effects in order.
Hmm. This is interesting. In fact, it is interesting for a couple reasons, one of which you have identified: it implies that monads let you create some sort of sequencing. That’s true, but it’s only part of the picture: it also states that sequencing happens on effects.
Here’s the thing, though… what is an “effect”? Is adding two numbers together an effect? Under most definitions, the answer would be no. What about printing something to stdout, is that an effect? In that case, I think most people would agree that the answer is yes. However, consider something more subtle: is short-circuiting a computation by producing Nothing an effect?
Error effects
Let’s take a look at an example. Consider the following code:
> do x <- Just 1
     y <- Nothing
     return (x + y)
Nothing
The second line of that example “short-circuits” due to the Monad instance for Maybe. Could that be considered an effect? In some sense, I think so, since it’s non-local, but in another sense, probably not. After all, if the x <- Just 1 or y <- Nothing lines are swapped, the result is still the same, so the ordering doesn’t matter.
However, consider a slightly more complex example that uses Either instead of Maybe:
> do x <- Left "x failed"
     y <- Left "y failed"
     return (x + y)
Left "x failed"
Now this is more interesting. If you swap the first two lines now, you get a different result! Still, is this a representation of an “effect” like the ones you allude to in your question? After all, it’s just a bunch of function calls. As you know, do notation is just an alternative syntax for a bunch of uses of the >>= operator, so we can expand it out:
> Left "x failed" >>= \x ->
Left "y failed" >>= \y ->
return (x + y)
Left "x failed"
We can even replace the >>= operator with the Either-specific definition to get rid of monads entirely:
> case Left "x failed" of
    Right x -> case Left "y failed" of
      Right y -> Right (x + y)
      Left e -> Left e
    Left e -> Left e
Left "x failed"
Therefore, it’s clear that monads do impose some sort of sequencing, but this is not because they are monads and monads are magic, it’s just because they happen to enable a style of programming that looks more impure than Haskell typically allows.
Monads and state
But perhaps that is unsatisfying to you. Error handling is not compelling because it is simply short-circuiting, it doesn’t actually have any sequencing in the result! Well, if we reach for some slightly more complex types, we can do that. For example, consider the Writer type, which allows a sort of “logging” using the monadic interface:
> execWriter $ do
    tell "hello"
    tell " "
    tell "world"
"hello world"
This is even more interesting than before, since now the result of each computation in the do block is unused, but it still affects the output! This is clearly side-effectful, and order is clearly very important! If we reorder the tell expressions, we would get a very different result:
> execWriter $ do
    tell " "
    tell "world"
    tell "hello"
" worldhello"
But how is this possible? Well, again, we can rewrite it to avoid do notation:
execWriter (
  tell "hello" >>= \_ ->
  tell " " >>= \_ ->
  tell "world")
We could inline the definition of >>= again for Writer, but it’s too long to be terribly illustrative here. The point is, though, that Writer is just a completely ordinary Haskell datatype that doesn’t do any I/O or anything like that, and yet we have used the monadic interface to create something that looks like ordered effects.
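If you are curious what such an ordinary datatype can look like, here is a deliberately simplified sketch: it logs plain Strings instead of the general monoid the real Writer (from transformers) uses, and the primed names are mine to avoid clashes:
newtype Writer' a = Writer' { runWriter' :: (a, String) }

instance Functor Writer' where
  fmap f (Writer' (a, w)) = Writer' (f a, w)

instance Applicative Writer' where
  pure a = Writer' (a, "")
  Writer' (f, w1) <*> Writer' (a, w2) = Writer' (f a, w1 ++ w2)

instance Monad Writer' where
  -- Run the first computation, then the second, concatenating their logs
  -- in that order: this is where the "sequencing" lives.
  Writer' (a, w) >>= f =
    let Writer' (b, w') = f a
    in  Writer' (b, w ++ w')

tell' :: String -> Writer' ()
tell' w = Writer' ((), w)

execWriter' :: Writer' a -> String
execWriter' = snd . runWriter'

-- execWriter' (tell' "hello" >> tell' " " >> tell' "world") == "hello world"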
We can go even further by creating an interface that looks like mutable state using the State type:
> flip execState 0 $ do
    modify (+ 3)
    modify (* 2)
6
Once again, if we reorder the expressions, we get a different result:
> flip execState 0 $ do
    modify (* 2)
    modify (+ 3)
3
Clearly, monads are a useful tool for creating interfaces that look stateful and have a well-defined ordering despite actually just being ordinary function calls.
Why can monads do this?
What gives monads this power? Well, they’re not magic—they’re just ordinary pure Haskell code. But consider the type signature for >>=:
(>>=) :: Monad m => m a -> (a -> m b) -> m b
Notice how the second argument depends on a, and the only way to get an a is from the first argument? This means that >>= needs to “run” the first argument to produce a value before it can apply the second argument. This doesn’t have to do with evaluation order so much as it has to do with actually writing code that will typecheck.
Now, it’s true that Haskell is a lazy language. But Haskell’s laziness doesn’t really matter for any of this because all of this code is actually pure, even the example using State! It’s simply a pattern that encodes computations that look sort of stateful in a pure way, but if you actually implemented State yourself, you’d find that it just passes around the “current state” in the definition of the >>= function. There’s not any actual mutation.
And that’s it. Monads, by virtue of their interface, impose an ordering on how their arguments may be evaluated, and instances of Monad exploit that to make stateful-looking interfaces. You don’t need Monad to have evaluation ordering, though, as you found; obviously in (1 + 2) * 3 the addition will be evaluated before the multiplication.
But what about IO??
Okay, you got me. Here’s the problem: IO is magic.
Monads are not magic, but IO is. All of the above examples are purely functional, but obviously reading a file or writing to stdout is not pure. So how the heck does IO work?
Well, IO is implemented by the GHC runtime, and you could not write it yourself. However, in order to make it work nicely with the rest of Haskell, there needs to be a well-defined evaluation order! Otherwise things would get printed out in the wrong order and all sorts of other hell would break loose.
Well, it turns out the Monad’s interface is a great way to ensure that evaluation order is predictable, since it works for pure code already. So IO leverages the same interface to guarantee the evaluation order is the same, and the runtime actually defines what that evaluation means.
However, don’t be misled! You don’t need monads to do I/O in a pure language, and you don’t need IO to have monadic effects. Early versions of Haskell experimented with a non-monadic way to do I/O, and the other parts of this answer explain how you can have pure monadic effects. Remember that monads are not special or holy, they’re just a pattern that Haskell programmers have found useful because of its various properties.
Yes, the functions you proposed are strict for the standard numerical types. But not all functions are! In
f _ = 3
g x = x * 2
result = f (g x)
it is not the case that g x must be computed before f (g x).
Yes, monads use function composition to sequence effects, and are not the only way to achieve sequenced effects.
Strict semantics and side effects
In most languages, there is sequencing by strict semantics applied first to the function side of an expression, then to each argument in turn, and finally the function is applied to the arguments. So in JS, the function application form,
<Code 1>(<Code 2>, <Code 3>)
runs four pieces of code in a specified order: 1, 2, 3, then it checks that the output of 1 was a function, then it calls the function with those two computed arguments. And it does this because any of those steps can have side-effects. You would write,
const logVal = (log, val) => {
  console.log(log);
  return val;
};
logVal(1, (a, b) => logVal(4, a + b))(
  logVal(2, 2),
  logVal(3, 3));
And that works for those languages. These are side-effects, which we can say in this context means that JS's type system doesn't give you any way to know that they are there.
Haskell does have a strict application primitive, but it wanted to be pure, which roughly means that it wanted the type system to track effects. So they introduced a form of metaprogramming where one of their types is a type-level adjective, “programs which compute a _____”. A program interacts with the real world; Haskell code in theory doesn't. You have to define “main is a program which computes a unit type” and then the compiler actually just builds that program for you as an executable binary file. By the time that file is run Haskell is not really in the picture any more!
This is therefore more specific than normal function application, because the abstract problem I wrote in JavaScript is,
I have a program which computes {a function from (X, Y) pairs to programs which compute Zs}.
I also have a program which computes an X, and a program which computes a Y.
I want to put these all together into a program which computes a Z.
That's not just function composition itself. But a function can do that.
Peeking inside monads
A monad is a pattern. The pattern is, sometimes you have an adjective which does not add much when you repeat it. For example there is not much added when you say "a delayed delayed x" or "zero or more (zero or more xs)" or "either a null or else either a null or else an x." Similarly for the IO monad, not much is added by "a program to compute a program to compute an x" that is not available in "a program to compute an x."
The pattern is that there is some canonical merging algorithm which merges:
join: given an <adjective> <adjective> x, I will make you an <adjective> x.
We also add two other properties, the adjective should be outputtish,
map: given an x -> y and an <adjective> x, I will make you an <adjective> y
and universally embeddable,
pure: given an x, I will make you an <adjective> x.
Given these three things and a couple axioms you happen to have a common "monad" idea which you can develop One True Syntax for.
Now this metaprogramming idea obviously contains a monad. In JS we would write,
interface IO<x> {
  run: () => Promise<x>
}
function join<x>(pprog: IO<IO<x>>): IO<x> {
  return { run: () => pprog.run().then(prog => prog.run()) };
}
function map<x, y>(prog: IO<x>, fn: (input: x) => y): IO<y> {
  return { run: () => prog.run().then(x => fn(x)) };
}
function pure<x>(input: x): IO<x> {
  return { run: () => Promise.resolve(input) };
}
// with those you can also define,
function bind<x, y>(prog: IO<x>, fn: (input: x) => IO<y>): IO<y> {
  return join(map(prog, fn));
}
But the fact that a pattern exists does not mean it is useful! I am claiming that these functions turn out to be all you need to resolve the problem above. And it is not hard to see why: you can use bind to create a function scope inside of which the adjective doesn't exist, and manipulate your values there:
function ourGoal<x, y, z>(
    fnProg: IO<(inX: x, inY: y) => IO<z>>,
    xProg: IO<x>,
    yProg: IO<y>): IO<z> {
  return bind(fnProg, fn =>
    bind(xProg, x =>
      bind(yProg, y => fn(x, y))));
}
How this answers your question
Notice that in the above we choose an order of operations by how we write the three binds. We could have written those in some other order. But we needed all the parameters to run the final program.
This choice of how we sequence our operations is indeed realized in the function calls: you are 100% right. But the way that you are doing it, with only function composition, is flawed because it demotes the effects down to side-effects in order to get the types through.

Can any recursive definition be rewritten using foldr?

Say I have a general recursive definition in Haskell like this:
foo a0 a1 ... = base_case
foo b0 b1 ...
  | cond1 = recursive_case_1
  | cond2 = recursive_case_2
  ...
Can it always be rewritten using foldr? Can it be proved?
If we interpret your question literally, we can write const value foldr to achieve any value, as @DanielWagner pointed out in a comment.
A more interesting question is whether we can instead forbid general recursion from Haskell, and "recurse" only through the eliminators/catamorphisms associated to each user-defined data type, which are the natural generalization of foldr to inductively defined data types. This is, essentially, (higher-order) primitive recursion.
When this restriction is imposed, we can only compose terminating functions (the eliminators) together. This means that we can no longer define non-terminating functions.
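For lists, "recursing only through the eliminator" means writing definitions in terms of foldr rather than by explicit self-reference, for example (a standard sketch):
-- map and filter without explicit recursion: foldr is the list eliminator.
mapViaFoldr :: (a -> b) -> [a] -> [b]
mapViaFoldr f = foldr (\x acc -> f x : acc) []

filterViaFoldr :: (a -> Bool) -> [a] -> [a]
filterViaFoldr p = foldr (\x acc -> if p x then x : acc else acc) []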
As a first example, we lose the trivial recursion
f x = f x
-- or even
a = a
since, as said, the language becomes total.
More interestingly, the general fixed point operator is lost.
fix :: (a -> a) -> a
fix f = f (fix f)
A more intriguing question is: what about the total functions we can express in Haskell? We do lose all the non-total functions, but do we lose any of the total ones?
Computability theory states that, since the language becomes total (no more non termination), we lose expressiveness even on the total fragment.
The proof is a standard diagonalization argument. Fix any enumeration of programs in the total fragment so that we can speak of "the i-th program".
Then, let eval i x be the result of running the i-th program on the natural number x as input (for simplicity, assume this is well typed, and that the result is a natural number). Note that, since the language is total, a result must exist. Moreover, eval can be implemented in unrestricted Haskell, since we can write an interpreter of Haskell in Haskell (left as an exercise :-P), and that would work just as well for the fragment. Then, we simply take
f n = succ $ eval n n
The above is a total function (a composition of total functions) which can be expressed in Haskell, but not in the fragment. Indeed, otherwise there would be a program to compute it, say the i-th program. In that case we would have
eval i x = f x
for all x. But then,
eval i i = f i = succ $ eval i i
which is impossible -- contradiction. QED.
In type theory, it is indeed the case that you can elaborate all definitions by dependent pattern-matching into ones only using eliminators (a more strongly-typed version of folds, the generalisation of lists' foldr).
See e.g. Eliminating Dependent Pattern Matching (pdf)

Resources