Self-reference in data structure – Checking for equality - haskell

In my initial attempt at creating a disjoint set data structure I created a Point data type with a parent pointer to another Point:
data Point a = Point
  { _value  :: a
  , _parent :: Point a
  , _rank   :: Int
  }
To create a singleton set, a Point is created that has itself as its parent (I believe this is called tying the knot):
makeSet' :: a -> Point a
makeSet' x = let p = Point x p 0 in p
Now, when I wanted to write findSet (i.e. follow parent pointers until you find a Point whose parent is itself) I hit a problem: Is it possible to check if this is the case? A naïve Eq instance would of course loop infinitely – but is this check conceptually possible to write?
(I ended up using a Maybe Point for the parent field, see my other question.)

No, what you are asking for is known in the Haskell world as referential identity: the idea that for two values of a certain type, you can check whether they are the same value in memory or two separate values that happen to have exactly the same properties.
For your example, you can ask yourself whether you would consider the following two values the same or not:
pl1 :: Point Int
pl1 = Point 0 (Point 0 pl1 1) 1
pl2 :: Point Int
pl2 = Point 0 pl2 1
Haskell considers both values completely equal. I.e. Haskell does not support referential identity. One of the reasons for this is that it would violate other features that Haskell does support. It is for example the case in Haskell that we can always replace a reference to a function by that function's implementation without changing the meaning (equational reasoning). For example if we take the implementation of pl2: Point 0 pl2 1 and replace pl2 by its definition, we get Point 0 (Point 0 pl2 1) 1, making pl2's definition equivalent to pl1's. This shows that Haskell cannot allow you to observe the difference between pl1 and pl2 without violating properties implied by equational reasoning.
You could use unsafe features like unsafePerformIO (as suggested above) to work around the lack of referential identity in Haskell, but you should know that you are then breaking core principles of Haskell and you may observe weird bugs when GHC starts optimizing (e.g. inlining) your code. It is better to use a different representation of your data, e.g. the one you mentioned using a Maybe Point value.

You could try to do this using StableName (or StablePtr) and unsafePerformIO, but it seems like a worse idea than Maybe Point for this case.
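A hedged sketch of what that could look like (ptrEq is a made-up name; this deliberately breaks referential transparency, results can change once GHC evaluates or inlines things, and the caveat above stands):
import System.Mem.StableName (makeStableName)
import System.IO.Unsafe (unsafePerformIO)

-- Observational "pointer equality": compares StableNames of the two arguments.
-- The answer may differ before and after evaluation or optimisation.
ptrEq :: a -> a -> Bool
ptrEq x y = unsafePerformIO $ do
  nx <- makeStableName x
  ny <- makeStableName y
  pure (nx == ny)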

In order to observe the effect you're most likely interested in—pointer equality instead of value equality—you will probably want to write your algorithm in the ST monad. The ST monad can be thought of as "locally impure IO, globally pure API", though, by the nature of Union-Find, you'll likely have to dip the entire lookup process into the impure section of your code.
Fortunately, monads still contain this impurity fairly well.
There is also already an implementation of Union Find in Haskell which uses three methods to achieve the kind of identity you're looking for: using the IO monad, using IntSets, and using the ST monad.
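To give a feel for the ST approach, here is a hedged sketch of a mutable disjoint-set node (all names are invented for illustration; it combines ST with the Maybe-parent representation the question mentions):
import Control.Monad.ST
import Data.STRef

-- A node whose parent pointer lives in a mutable STRef.
data Node s a = Node { nodeValue  :: a
                     , nodeParent :: STRef s (Maybe (Node s a))
                     , nodeRank   :: STRef s Int
                     }

makeSet :: a -> ST s (Node s a)
makeSet x = Node x <$> newSTRef Nothing <*> newSTRef 0

-- Follow parents to the root, compressing the path on the way back.
findSet :: Node s a -> ST s (Node s a)
findSet n = do
  mp <- readSTRef (nodeParent n)
  case mp of
    Nothing -> pure n
    Just p  -> do
      root <- findSet p
      writeSTRef (nodeParent n) (Just root)
      pure root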

In Haskell, data is immutable (apart from IO a, IORef a, ST a, STRef a, ...), and data behaves much like a function. So, in a sense, any piece of data is a "singleton".
When you write
data C = C {a, b :: Int}
changeCa :: C -> Int -> C
changeCa c newa = c {a = newa}
you don't change a variable; you leave the old value as it is and create a new one.
You could try to simulate links and pointers, but it ends up needlessly complex.
And finally,
data Point a = Point {p :: Point a}
is an infinite, list-like value: Point { p = Point { p = Point { p = Point { ....

Related

understanding data structure in Haskell

I have a problem with a homework (the topic is : "functional data structures").
Please understand that I don't want anyone to solve my homework.
I just have a problem with understanding the structure of this :
data Heap e t = Heap {
    empty     :: t e,
    insert    :: e -> t e -> t e,
    findMin   :: t e -> Maybe e,
    deleteMin :: t e -> Maybe (t e),
    merge     :: t e -> t e -> t e,
    contains  :: e -> t e -> Maybe Int
  }
In my understanding "empty", "insert" and so on are functions which can be applied to "Heap"-type data.
Now I just want to understand what that "Heap" thing looks like.
So I was typing things like :
a = Heap 42 42
But I get errors I can't really work with.
Maybe it is a dumb question and I'm just stuck at this point for no reason, but it is killing me at the moment.
Thankful for any help
If you truly wish to understand that type, you need to understand a few requisites first.
types and values (and functions)
Firstly, you need to understand what types and values are. I'm going to assume you understand this. You understand, for example, the separation between "hello" as a value and its type, String and you understand clearly what it means when I say a = "hello" :: String and:
a :: String
a = "hello"
If you don't understand that, then you need to research values and types in Haskell. There are a myriad of books that can help here, such as this one, which I helped to author: http://happylearnhaskelltutorial.com
I'm also going to assume you understand what functions and currying are, and how to use both of them.
polymorphic types
Secondly, as your example contains type variables, you'll need to understand what they are. That is, you need to understand what polymorphic types are. So, for example, Maybe a, or Either a b, and you'll need to understand how Maybe String is different to Maybe Int, what Num a => [a] means, and even what Num a => [Maybe a] is.
Again, there are many free or paid books that can help, the example above covers this, too.
algebraic data types
Next up is algebraic data types. This is a pretty amazingly cool feature that Haskell has. Haskell-like languages such as Elm and Idris have it as well as others like Rust, too. It lets you define your own data types. These aren't just things like Structs in other languages, and yeah, they can even contain functions.
Maybe is actually an example of an algebraic data type. If you understand these, you'll know that:
data Direction = North | South | East | West
defines a data type called Direction whose values can only be one of North, South, East or West, and you'll know that you can also use the polymorphic type variables above to parameterise your types like so:
data Tree a = EmptyNode | Node a (Tree a) (Tree a)
which uses both optionality (as in the sum type of Direction above) as well as parameterization.
In addition to this, you can also have multiple types in each value. These are called product types, and Haskell's algebraic datatypes can be expressed as a combination of Sum types that can contain Product types. For example:
type Location = (Float, Float)
data ShapeNode = StringNode Location String | CircleNode Location Float | SquareNode Location Float Float
That is, each value can be one of StringNode, CircleNode or SquareNode, and in each case there is a different set of fields given to each value. To create a StringNode, for example, you'd need to pass the values to its constructor like this: StringNode (10.0, 5.3) "A String".
Again, the freely available books will go into much more detail about these things, but we're moving in the direction of getting more than a basic understanding of Haskell now.
Finally, in order to fully understand your example, you'll need to know about...
record types
Record types are the same as product types above, except that the fields are labelled rather than being anonymous. So, you could define the shape node data type like this, instead:
type Location = (Float, Float)
data ShapeNode
  = StringNode { stringLocation :: Location, stringData :: String }
  | CircleNode { circleLocation :: Location, radius :: Float }
  | SquareNode { squareLocation :: Location, length :: Float, height :: Float }
Each field is named, and (in standard Haskell) you can't reuse the same field name for fields of different types.
All that you need in addition to this to understand the above example is to realise your example contains all of these things together, along with the fact that you have functions as your record field values in the data type you have.
It's a good idea to thoroughly flesh out your understanding and not skip any steps, then you'll be able to follow these kinds of things much more easily in the future. :) I wish you luck!
Heap is a record with six elements. In order to create a value of that type, you must supply all six elements. Assuming that you have appropriate values and functions, you can create a value like this:
myHeap = Heap myEmpty myInsert myFindMin myDeleteMin myMerge myContains
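As a hedged sketch of one such value (listHeap is an invented name, and Heap is the record type from the question), you could use a plain sorted list as the container t:
import qualified Data.List as L

listHeap :: Ord e => Heap e []
listHeap = Heap
  { empty     = []
  , insert    = L.insert       -- insert keeping the list sorted
  , findMin   = \xs -> case xs of { [] -> Nothing; (x:_) -> Just x }
  , deleteMin = \xs -> case xs of { [] -> Nothing; (_:rest) -> Just rest }
  , merge     = \xs ys -> foldr L.insert ys xs
  , contains  = L.elemIndex    -- index of the element, if present
  }
With that in hand, findMin listHeap (insert listHeap 3 (insert listHeap 1 (empty listHeap))) evaluates to Just 1.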
This doesn't seem like idiomatic Haskell design, however. Why not define generic functions independent of the data, or, if they must be bundled together, use a typeclass?

Using different Ordering for Sets

I was reading a Chapter 2 of Purely Functional Data Structures, which talks about unordered sets implemented as binary search trees. The code is written in ML, and ends up showing a signature ORDERED and a functor UnbalancedSet(Element: ORDERED): SET. Coming from more of a C++ background, this makes sense to me; custom comparison function objects form part of the type and can be passed in at construction time, and this seems fairly analogous to the ML functor way of doing things.
When it comes to Haskell, it seems the behavior depends only on the Ord instance, so if I wanted to have a set that had its order reversed, it seems like I'd have to use a newtype instance, e.g.
newtype ReverseInt = ReverseInt Int deriving (Eq, Show)
instance Ord ReverseInt where
  compare (ReverseInt a) (ReverseInt b)
    | a == b = EQ
    | a < b  = GT
    | a > b  = LT
which I could then use in a set:
let x = Set.fromList $ map ReverseInt [1..5]
Is there any better way of doing this sort of thing that doesn't resort to using newtype to create a different Ord instance?
No, this is really the way to go. Yes, having a newtype is sometimes annoying but you get some big benefits:
When you see a Set a and you know a, you immediately know what type of comparison it uses (sort of the same way that purity makes code more readable by not making you have to trace execution). You don't have to know where that Set a comes from.
For many cases, you can coerce your way through multiple newtypes at once. For example, I can turn xs = [1,2,3] :: [Int] into ys = [ReverseInt 1, ReverseInt 2, ReverseInt 3] :: [ReverseInt] just using ys = coerce xs :: [ReverseInt]. Unfortunately, that isn't the case for Set (and it shouldn't be - you'd need the coercion function to be monotonic to not screw up the data structure invariants, and there is not yet a way to express that in the type system).
newtypes end up being more composable than you expect. For example, the ReverseInt type you made already exists in a form that generalizes to reversing any type with an Ord constraint: it is called Down. To be explicit, you could use Down Int instead of ReverseInt, and you get the instance you wrote out for free!
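For instance, a small sketch of the Down approach (reversedSet is a made-up name; Down lives in Data.Ord and Set in the containers package):
import Data.Ord (Down(..))
import qualified Data.Set as Set

-- Same reversed ordering as ReverseInt, reusing the library newtype.
reversedSet :: Set.Set (Down Int)
reversedSet = Set.fromList (map Down [1..5])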
Of course, if you still feel very strongly about this, nothing is stopping you from writing your version of Set which has to have a field which is the comparison function it uses. Something like
data Set a = Set { comparisonKey :: a -> a -> Ordering
                 , ...
                 }
Then, every time you make a Set, you would have to pass in the comparison key.
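Fleshed out a bit, a hedged sketch of what that could look like (Tree, payload, emptySet and insertSet are invented names, using a naive unbalanced search tree):
data Tree a = Leaf | Branch (Tree a) a (Tree a)

data Set a = Set { comparisonKey :: a -> a -> Ordering
                 , payload       :: Tree a
                 }

emptySet :: (a -> a -> Ordering) -> Set a
emptySet cmp = Set cmp Leaf

insertSet :: a -> Set a -> Set a
insertSet x (Set cmp t) = Set cmp (go t)
  where
    go Leaf = Branch Leaf x Leaf
    go node@(Branch l y r) = case cmp x y of
      LT -> Branch (go l) y r
      GT -> Branch l y (go r)
      EQ -> node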

Can I declare a NULL value in Haskell?

Just curious: it seems that when declaring a name, we always specify some valid value, like let a = 3. The question is, imperative languages including C and Java always have a "null" keyword. Does Haskell have a similar thing? When could a function object be null?
There is a “null” value that you can use for variables of any type. It's called ⊥ (pronounced bottom). We don't need a keyword to produce bottom values; actually ⊥ is the value of any computation which doesn't terminate. For instance,
bottom = let x = x in x -- or simply `bottom = bottom`
will infinitely loop. It's obviously not a good idea to do this deliberately, however you can use undefined as a “standard bottom value”. It's perhaps the closest thing Haskell has to Java's null keyword.
But you definitely shouldn't/can't use this for most of the applications where Java programmers would grab for null.
Since everything in Haskell is immutable, a value that's undefined will always stay undefined. It's not possible to use this as a “hold on a second, I'll define it later” indication†.
It's not possible to check whether a value is bottom or not. For rather deep theoretical reasons, in fact. So you can't use this for values that may or may not be defined.
And you know what? It's really good that Haskell doesn't allow this! In Java, you constantly need to be wary that values might be null. In Haskell, if a value is bottom then something is plainly broken, but this will never be part of intended behaviour / something you might need to check for. If for some value it's intended that it might not be defined, then you must always make this explicit by wrapping the type in a Maybe. By doing this, you make sure that anybody trying to use the value must first check whether it's there. It's not possible to forget this and run into a null-reference exception at runtime!
And because Haskell is so good at handling variant types, checking the contents of a Maybe-wrapped value is really not too cumbersome. You can just do it explicitly with pattern matching,
quun :: Int -> String
quun i = case computationWhichMayFail i of
           Just j  -> show j
           Nothing -> "blearg, failed"

computationWhichMayFail :: Int -> Maybe Int
or you can use the fact that Maybe is a functor. Indeed it is an instance of almost every specific functor class: Functor, Applicative, Alternative, Foldable, Traversable, Monad, MonadPlus. It also lifts semigroups to monoids.
Dᴏɴ'ᴛ Pᴀɴɪᴄ now,
you don't need to know what the heck these things are. But when you've learned what they do, you will be able to write very concise code that automagically handles missing values always in the right way, with zero risk of missing a check.
†Because Haskell is lazy, you generally don't need to defer any calculations to be done later. The compiler will automatically see to it that the computation is done when it's necessary, and no sooner.
There is no null in Haskell. What you want is the Maybe monad.
data Maybe a
  = Just a
  | Nothing
Nothing refers to classic null and Just contains a value.
You can then pattern match against it:
foo Nothing = Nothing
foo (Just a) = Just (a * 10)
Or with case syntax:
let m = Just 10
in case m of
     Just v  -> print v
     Nothing -> putStrLn "Sorry, there's no value. :("
Or use the superior functionality provided by the typeclass instances for Functor, Applicative, Alternative, Monad, MonadPlus and Foldable.
This could then look like this:
foo :: Maybe Int -> Maybe Int -> Maybe Int
foo x y = do
  a <- x
  b <- y
  return $ a + b
You can even use the more general signature:
foo :: (Monad m, Num a) => m a -> m a -> m a
Which makes this function work for ANY data type that is capable of the functionality provided by Monad. So you can use foo with (Num a) => Maybe a, (Num a) => [a], (Num a) => Either e a and so on.
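For example, assuming foo has that more general signature, a quick GHCi sketch of the same function at a few different monads:
ghci> foo (Just 1) (Just 2)
Just 3
ghci> foo (Just 1) Nothing
Nothing
ghci> foo [1,2] [10,20]
[11,21,12,22]
ghci> foo (Right 1) (Left "oops") :: Either String Int
Left "oops"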
Haskell does not have "null". This is a design feature. It completely prevents any possibility of your code crashing due to a null-pointer exception.
If you look at code written in an imperative language, 99% of the code expects stuff to never be null, and will malfunction catastrophically if you give it null. But then 1% of the code does expect nulls, and uses this feature to specify optional arguments or whatever. But you can't easily tell, by looking at the code, which parts are expecting nulls as legal arguments, and which parts aren't. Hopefully it's documented — but don't hold your breath!
In Haskell, there is no null. If that argument is declared as Customer, then there must be an actual, real Customer there. You can't just pass in a null (intentionally or by mistake). So the 99% of the code that is expecting a real Customer will always work.
But what about the other 1%? Well, for that we have Maybe. But it's an explicit thing; you have to explicitly say "this value is optional". And you have to explicitly check when you use it. You cannot "forget" to check; it won't compile.
So yes, there is no "null", but there is Maybe which is kinda similar, but safer.
Not in Haskell (or in many other FP languages). If you have some expression of some type T, its evaluation will give a value of type T, with the following exceptions:
infinite recursion may make the program "loop forever", failing to return anything
let f n = f (n+1) in f 0
runtime errors can abort the program early, e.g.:
division by zero, square root of negative, and other numerical errors
head [], fromJust Nothing, and other partial functions used on invalid inputs
explicit calls to undefined, error "message", or other exception-throwing primitives
Note that even if the above cases might be regarded as "special" values called "bottoms" (the name comes from domain theory), you can not test against these values at runtime, in general. So, these are not at all the same thing as Java's null. More precisely, you can't write things like
-- assume f :: Int -> Int
if (f 5) is a division-by-zero or infinite recursion
then 12
else 4
Some exceptional values can be caught in the IO monad, but forget about that -- exceptions in Haskell are not idiomatic, and roughly only used for IO errors.
If you want an exceptional value which can be tested at run-time, use the Maybe a type, as #bash0r already suggested. This type is similar to Scala's Option[A] or Java's not-so-much-used Optional<A>.
The value of having both a type T and a type Maybe T is being able to precisely identify which functions always succeed, and which ones can fail. In Haskell the following is frowned upon, for instance:
-- Finds a value in a list. Returns -1 if not present.
findIndex :: Eq a => [a] -> a -> Int
Instead this is preferred:
-- Finds a value in a list. Returns Nothing if not present.
findIndex :: Eq a => [a] -> a -> Maybe Int
The result of the latter is less convenient than that of the former, since the Maybe Int must be unwrapped at every call. This is good, since in this way each user of the function is prevented from simply "ignoring" the not-present case and writing buggy code.
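A minimal sketch of what the Maybe-returning version could look like (note that the standard library's Data.List.findIndex is similar, but takes a predicate instead of an element):
findIndex :: Eq a => [a] -> a -> Maybe Int
findIndex xs x = go 0 xs
  where
    go _ []                 = Nothing
    go i (y:ys) | y == x    = Just i
                | otherwise = go (i + 1) ys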

Correct terminology for continuations

I've been poking around continuations recently, and I got confused about the correct terminology. Here Gabriel Gonzalez says:
A Haskell continuation has the following type:
newtype Cont r a = Cont { runCont :: (a -> r) -> r }
i.e. the whole (a -> r) -> r thing is the continuation (sans the wrapping)
The wikipedia article seems to support this idea by saying
a continuation is an abstract representation of the control state
of a computer program.
However, here the authors say that
Continuations are functions that represent "the remaining computation to do."
but that would only be the (a->r) part of the Cont type. And this is in line with what Eugene Ching says here:
a computation (a function) that requires a continuation function in order
to fully evaluate.
We’re going to be seeing this kind of function a lot, hence, we’ll give it
a more intuitive name. Let’s call them waiting functions.
I've seen another tutorial (Brian Beckman and Erik Meijer) where they call the whole thing (the waiting function) the observable and the function which is required for it to complete the observer.
What is the continuation, the (a->r)->r thingy or just the (a->r) thing (sans the wrapping)?
Is the wording observable/observer about correct?
Are the citations above really contradictory, is there a common truth?
What is the continuation, the (a->r)->r thingy or just the (a->r) thing (sans the wrapping)?
I would say that the a -> r bit is the continuation, and the (a -> r) -> r is "in continuation passing style" or "the type of the continuation monad".
I am going to go off on a long digression on the history of continuations which is not really relevant to the question...so be warned.
It is my belief that the first published paper on continuations was "Continuations: A Mathematical Semantics for Handling Full Jumps" by Strachey and Wadsworth (although the concept was already folklore). The idea of that paper is I think a pretty important one. Early semantics for imperative programs attempted to model commands as state transformer functions. For example, consider the simple imperative language given by the following BNF
Command    := set <Expression> to <Expression>
            | skip
            | <Command> ; <Command>
Expression := !<Expression>
            | <Number>
            | <Expression> + <Expression>
here we use expressions as pointers. The simplest denotational semantics interprets the state as a function from natural numbers to natural numbers:
S = N -> N
We can interpret expressions as functions from state to the natural numbers
E[[e : Expression]] : S -> N
and commands as state transducers.
C[[c : Command]] : S -> S
This denotational semantics can be spelled out rather simply:
E[[n : Number]](s) = n
E[[a + b]](s) = E[[a]](s) + E[[b]](s)
E[[!e]](s) = s(E[[e]](s))
C[[skip]](s) = s
C[[set a to b]](s) = \n -> if n = E[[a]](s) then E[[b]](s) else s(n)
C[[c_1;c_2]](s) = (C[[c_2]] . C[[c_1]])(s)
As simple program in this language might look like
set 0 to 1;
set 1 to (!0) + 1
which would be interpreted as a function that turns a state function s into a new function that is just like s except it maps 0 to 1 and 1 to 2.
This was all well and good, but how do you handle branching? Well, if you think about it a lot you can probably come up with a way to handle if, and loops that run an exact number of times...but what about general while loops?
Strachey and Wadsworth showed us how to do it. First of all, they pointed out that these "state transducer functions" were pretty important, and so decided to call them "command continuations" or just "continuations."
C = S -> S
From this they defined a new semantics, which we will provisionally define this way
C'[[c : Command]] : C -> C
C'[[c]](cont) = cont . C[[c]]
What is going on here? Well, observe that
C'[[c_1]](C[[c_2]]) = C[[c_1 ; c_2]]
and further
C'[[c_1]](C'[[c_2]](cont)) = C'[[c_1 ; c_2]](cont)
Instead of doing it this way, we can inline the definition
C'[[skip]](cont) = cont
C'[[set a to b]](cont) = cont . \s -> \n -> if n = E[[a]](s) then E[[b]](s) else s(n)
C'[[c_1 ; c_2]](cont) = C'[[c_1]](C'[[c_2]](cont))
What has this bought us? Well, a way to interpret while, that's what!
Command := ... | while <Expression> do <Command> end
C'[[while e do c end]](cont) =
let loop = \s -> if E[[e]](s) = 0 then C'[[c]](loop)(s) else cont(s)
in loop
or, using a fixpoint combinator
C'[[while e do c end]](cont)
= Y (\f -> \s -> if E[[e]](s) = 0 then C'[[c]](f)(s) else cont(s))
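For concreteness, a hedged Haskell sketch of these equations (all names are invented for illustration; the While case keeps the convention used above, looping while the test expression evaluates to 0):
type S = Integer -> Integer        -- a state maps addresses to numbers
type C = S -> S                    -- a command continuation

data Expr = Lit Integer | Deref Expr | Add Expr Expr
data Cmd  = Skip | Assign Expr Expr | Seq Cmd Cmd | While Expr Cmd

evalE :: Expr -> S -> Integer      -- E[[e]] : S -> N
evalE (Lit n)   _ = n
evalE (Deref e) s = s (evalE e s)
evalE (Add a b) s = evalE a s + evalE b s

evalC :: Cmd -> C -> C             -- C'[[c]] : C -> C
evalC Skip         cont = cont
evalC (Assign a b) cont = cont . update     -- "set a to b"
  where update s n = if n == evalE a s then evalE b s else s n
evalC (Seq c1 c2)  cont = evalC c1 (evalC c2 cont)
evalC (While e c)  cont = loop
  where loop s = if evalE e s == 0 then evalC c loop s else cont s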
Anyways...that is history and not particularly important...except in so far as it showed how to interpret programs mathematically, and set the language of "continuation."
Also, the approach to denotational semantics of "1. define a new semantic function in terms of the old 2. inline 3. profit" works surprisingly often. For example, it is often useful to have your semantic domain form a lattice (think, abstract interpretation). How do you get that? Well, one option is to take the powerset of the domain, and inject into this by interpreting your functions as singletons. If you inline this powerset construction you get something that can either model non-determinism or, in the case of abstract interpretation, various amounts of information about a program other than exact certainty as to what it does.
Various other work followed. Here I skip over many greats such as the lambda papers... But, perhaps the most notable was Griffin's landmark paper "A Formulae-as-Types Notion of Control" which showed a connection between continuation passing style and classical logic. Here the connection between "continuation" and "Evaluation context" is emphasized
That is, E represents the rest of the computation that remains to be done after N is evaluated. The context E is called the continuation (or control context) of N at this point in the evaluation sequence. The notation of evaluation contexts allows, as we shall see below, a concise specification of the operational semantics of operators that manipulate continuations (indeed, this was its intended use [3, 2, 4, 1]).
making clear that the "continuation" is "just the a -> r bit"
This all looks at things from the point of view of semantics and sees continuations as functions. The thing is, continuations as functions give you more power than you get with something like Scheme's callCC. So, another perspective on continuations is that they are variables in the program which internalize the call stack. Parigot had the idea to make continuation variables a separate syntactic category, leading to the elegant lambda-mu calculus in "λμ-Calculus: An algorithmic interpretation of classical natural deduction."
Is the wording observable/observer about correct?
I think it is, insofar as it is what Erik Meijer uses. It is non-standard terminology in academic PL circles.
Are the citations above really contradictory, is there a common truth?
Let us look at the citations again
a continuation is an abstract representation of the control state of a computer program.
In my interpretation (which I think is pretty standard) a continuation models what a program should do next. I think wikipedia is consistent with this.
A Haskell continuation has the following type:
This is a bit odd. But, note that later in the post Gabriel uses language which is more standard and supports my use of language.
That means that if we have a function with two continuations:
(a1 -> r) -> ((a2 -> r) -> r)
Fueled by reading about continuations via Andrzej Filinski's Declarative Continuations and Categorical Duality I adopt the following terminology and understanding.
A continuation on values of a is a "black hole which accepts values of a". You can see it as a black box with one operation—you feed it a value a and then the world ends. Locally at least.
Now let's assume we're in Haskell and I demand that you construct for me a function forall r . (a -> r) -> r. Let's say, for now, that a ~ Int and it'll look like
f :: forall r . (Int -> r) -> r
f cont = _
where the type hole has a context like
r :: Type
cont :: Int -> r
-----------------
_ :: r
Clearly, the only way we can comply with these demands is to pass an Int into the cont function and return it after which no further computation can happen. This models the idea of "feed an Int to the continuation and then the world ends".
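Spelled out, the only reasonable way to fill that hole looks like this (the choice of 42 is of course arbitrary):
{-# LANGUAGE RankNTypes #-}

f :: forall r . (Int -> r) -> r
f cont = cont 42   -- feed an Int to the continuation; there is nothing else to do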
So, I would call the function (a -> r) the continuation so long as it's in a context with a fixed-but-unknown r and a demand to return that r. For instance, the following is not so much of a continuation
forall r . (a -> r) -> (r, a)
as we're clearly allowed to pass back out more information from our failing universe than the continuation alone allows.
On "Observable"
I'm personally not a fan of the "observer"/"observable" terminology. In that terminology we might write
newtype Observable a = O { observe :: forall r . (a -> r) -> r }
so that we have observe :: Observable a -> (a -> r) -> r which ensures that exactly one a will be passed to an "observer" a -> r "observing" it. This gives a very operational view to the type above while Cont or even the scarily named Yoneda Identity explains much more declaratively what the type actually is.
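A tiny usage sketch of that newtype (box and demo are made-up names; the declaration above needs RankNTypes):
box :: Observable Int
box = O (\observer -> observer 5)

demo :: String
demo = observe box show   -- "5"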
I think the point is to somehow hide the complexity of Cont behind metaphor to make it less scary for "the average programmer", but that just adds an extra layer of metaphor for behavior to leak out of. Cont and Yoneda Identity explain exactly what the type is without dressing it up.
I suggest recalling the calling convention for C on x86 platforms, because it uses the stack and registers to pass arguments around. This will turn out to be very useful for understanding the abstraction.
Suppose function f calls function g and passes 0 to it. This will look like so:
mov eax, 0
call g -- now eax is the first argument,
-- and the stack has the address of return point, f'
g: -- here goes g that uses eax to compute the return value
mov eax,1 -- which by calling convention is placed in eax
ret -- get the return point, f', off the stack, and jump there
f': ...
You see, placing the return point f' on the stack is the same as passing a function pointer as one of the arguments, and then the return is the same as calling the given function and passing it a value. So from g's point of view the return point to f looks like a function of one argument, f' :: a -> r. As you understand, the state of the stack completely captures the state of the computation f was performing, which needed an a from g in order to proceed.
At the same time, at the point g is called it looks like a function that accepts a function of one argument (we place the pointer of that function on the stack), which will eventually compute the value of type r that the code from f': onwards was meant to compute, so the type becomes g :: (a->r)->r.
Since f' is given a value of type a from "somewhere", f' can be seen as the observer of g - which is, conversely, the observable.
This is only intended to give a basic idea and tie it somehow to the world you probably already know. The magic of continuations permits more tricks than just converting "plain" computation into computation with continuations.
When we refer to a continuation, we mean the part that lets us continue calculating a result.
An operation in the Continuation monad is analogous to a function that is incomplete and so is waiting on another function to complete it. At the same time, a Continuation monad value is itself a valid construct that can be used to complete another one; that is what the bind operator (>>=) for the Cont monad does.
When writing code that involves callCC or Call with Current Continuation, you are passing the current Cont Monad into another Cont Monad so that the second one can make use of it. For example, it might prematurely end execution by calling the first Cont Monad, and from there the cycle can either repeat or diverge into a different Continuation Monad.
Which part is "the continuation" differs depending on which perspective you take. In my personal opinion, the best way to describe a continuation is in relation to another construct.
So if we return to our example of two Cont Monads interacting, from the perspective of the first Monad the continuation is the (a -> r) -> r (because that is the unwrapped type of the first Monad) and from the perspective of the second Monad the continuation is the (a -> r) (because that is the unwrapped type of the first monad when a is substituted for (a -> r)).

overview, but very over in functional programming

What does a very general function look like in functional programming?
Somebody said "we don't have objects, but we have higher order functions". Do higher order functions replace objects?
While programming object-oriented apps, I try to go from a more general to a more detailed idea, lots of times. If I try to do that in functional programming, am I going to need lots of higher order functions?
This answer is oriented towards Haskell rather than Lisp because although lisp has higher order functions, idiomatic lisp can be and is often very object-oriented.
We'll also ignore inheritance (and ad-hoc polymorphism) which is commonly associated with object oriented programming, but is somewhat orthogonal.
In general, abstract data types "replace" objects, in the sense that generally you use an object to bundle together a bunch of related data in e.g. Java or Python, and you declare a data type to do such a thing in Haskell or ML.
However objects also bundle behavior with the data. So an object of a class has data, but also functions which can access and mutate that data. In a functional style, you'd simply declare the functions on your data type outside of that data type. Then encapsulation is provided by either modules or use of closures.
On the latter point -- closures and objects are duals, although it is not necessarily idiomatic to express them as such. There's some very old-school discussion of this at the portland patterns wiki: http://c2.com/cgi/wiki?ClosuresAndObjectsAreEquivalent.
Oh, and an example from oleg: http://okmij.org/ftp/Scheme/oop-in-fp.txt.
Ignoring typeclasses (which are essential to idiomatic Haskell), and focusing just on core functional programming, here's a sketch of a different approach to something that would be done with inheritance in an OO language. Function foo uses some object that implements interface A and some object that implements interface B to produce some Double. With higher order functions, you'd perhaps have a type signature of fooGen :: (a -> Double -> Double) -> (b -> String -> Double) -> a -> b -> Double.
That signature says that fooGen takes a function from some a and a Double to another Double, and a function of some b and a String to a Double, and then it takes an a and a b, and finally it returns a Double.
So now you can pass in the "interface" separately from the concrete values through partial application, by declaring, e.g., fooSpecialized = fooGen funcOnA funcOnB.
With typeclasses, you can abstract away the concrete passing in of the "interface implementation" (or, in more proper language, dictionary), and declare foo :: HasSomeFunc a, HasSomeOtherFunc b => a -> b -> Double. You can think of the stuff on the left hand side of the => as declaring, loosely, the interfaces that your concrete a and b types are required to implement.
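A hedged sketch of both styles side by side (the class names and bodies are invented for illustration, matching the signatures in the prose above):
-- Higher-order-function style: the "interface" is passed explicitly.
fooGen :: (a -> Double -> Double) -> (b -> String -> Double) -> a -> b -> Double
fooGen funcOnA funcOnB x y = funcOnA x (funcOnB y "some input")

-- Type-class style: the dictionary is passed implicitly by the compiler.
class HasSomeFunc a where
  someFunc :: a -> Double -> Double

class HasSomeOtherFunc b where
  someOtherFunc :: b -> String -> Double

foo :: (HasSomeFunc a, HasSomeOtherFunc b) => a -> b -> Double
foo x y = someFunc x (someOtherFunc y "some input")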
This is all a handwavy and partial answer of course to an exceedingly general question.
Answers first
Somebody said "we don't have objects, but we have higher order functions". Do higher order functions replace objects?
If you mean can higher order functions contain some hidden state, then yes. Functions defined inside of the other functions can capture some information from their scope, and, if returned to the outer world, will preserve this information. This is what closures are about.
If you mean can higher order functions contain mutable state, then no. In pure functional programming they are not stateful. They produce the same results on the same inputs. If you want to simulate how something changes, you do not overwrite a variable, but you define how to calculate its new value from the old one.
Of course, there are shortcuts, and even functional languages allow for writing in imperative style.
If I try to do that in functional programming, am I going to need lots of higher order functions?
You are going to use them a lot. Probably not even thinking that your functions are higher-order. And probably, enjoying them a lot. You'll just pass functions as values to other functions.
For example, map is a HOF. Its first argument is another function. What you would think of in an imperative language as a loop "for every element in a collection: apply some function, save the result", in the functional language will be "map a function over a collection and get a new collection of results". Folds are another example of HOFs. So most of the loops from an imperative language can be translated to the calls of a higher order functions in a functional language. This is to make clear how often you are likely to use them.
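As a tiny sketch, the imperative "running total over a collection" loop becomes a fold:
-- Summing a list with foldr instead of a loop and a mutable accumulator.
total :: [Int] -> Int
total = foldr (+) 0
-- total [1,2,3] == 6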
overview, but very over in functional programming
This is a good place to start: Functional Programming.
An example of encapsulating "state":
f = let x = 3
    in let withX y = x + y
       in withX
Now f is the same as withX, which is a function that "remembers" that x = 3. When we use f, we need to supply only one argument, y, and it will sum it with the "remembered" value of x (3).
This should print 3 and then [4, 5, 6]:
main = do
  print $ f 0
  print $ map f [1..3]
We do not pass 3 as an argument to f; it "remembers" 3 from the closure above,
and we can pass f itself as a parameter to map, which is a HOF in this case.
So functions can encapsulate state.
An example of "changing" a variable
As I said above, in functional programming the state is not mutable. So if you want to, say, apply operation f to a value, save the result, and then apply operation g, in a functional language you would express it with an intermediate variable which contains the result of applying f, and then apply g to it to calculate the new value. Please note that you are not "overwriting" the original value x0:
applyTwo first second x0 =
  let x1 = first x0
  in second x1
But in fact it is possible to write this more concisely, because it is just a simple composition of functions:
applyTwo' f g = g . f
Or you can generalize this approach, and write a function, which will apply any number of functions:
applyAll [] = id -- don't do anything if there are no functions to apply
applyAll (f:otherFs) = applyAll otherFs . f
Please note that applyTwo and applyAll are now higher-order functions. Of course, they do not replace objects, but they let you avoid mutable state.
This is how they are used:
ghci> applyTwo (+1) (*10) 2
30
ghci> applyAll [(+1), (*10)] 2
30
It's all programming; the same patterns show up again and again. You might write something like this in your favorite OO language:
role Person {
    has 'firstname' => ( isa => 'Str' );
    has 'lastname'  => ( isa => 'Str' );
}
class Employee does Person {
    has 'manager' => ( isa => 'Employee' );
    has 'reports' => ( isa => '[Employee]' );
}
Whereas in Haskell, you'd write:
class Person a where
  firstname :: a -> String
  lastname  :: a -> String

data Employee = Employee { firstName :: String
                         , lastName  :: String
                         , manager   :: Employee
                         , reports   :: [Employee]
                         }

instance Person Employee where
  firstname = firstName
  lastname  = lastName
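And code written against the Person "interface" stays generic (fullName is a made-up helper, not part of the original answer):
fullName :: Person a => a -> String
fullName p = firstname p ++ " " ++ lastname p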
People worry too much about what's different rather than realizing that most things are the same.
