Good way of creating loops - Haskell

Haskell does not have loops like many other languages. I understand the reasoning behind it and some of the different approaches used to solve problems without them. However, when a loop structure is necessary, I am not sure if the way I'm creating the loop is correct/good.
For example (trivial function):
dumdum = do
  putStrLn "Enter something"
  num <- getLine
  putStrLn $ "You entered: " ++ num
  dumdum
This works fine, but is there a potential problem in the code?
A different example:
a = do
  putStrLn "1"
  putStrLn "2"
  a
If implemented in an imperative language like Python, it would look like:
def a():
    print("1")
    print("2")
    a()
This eventually causes a maximum recursion depth error. This does not seem to be the case in Haskell, but I'm not sure if it might cause potential problems.
I know there are other options for creating loops such as Control.Monad.LoopWhile and Control.Monad.forever -- should I be using those instead? (I am still very new to Haskell and do not understand monads yet.)

For general iteration, having a recursive function call itself is definitely the way to go. If your calls are in tail position, they don't use any extra stack space and behave more like goto1. For example, here is a function to sum the first n integers using constant stack space2:
{-# LANGUAGE BangPatterns #-}

sum :: Int -> Int
sum n = sum' 0 n

sum' !s 0 = s
sum' !s n = sum' (s+n) (n-1)
It is roughly equivalent to the following pseudocode:
function sum(N)
    var s, n = 0, N
loop:
    if n == 0 then
        return s
    else
        s, n = (s+n, n-1)
        goto loop
Notice how in the Haskell version we used function parameters for the sum accumulator instead of a mutable variable. This is a very common pattern for tail-recursive code.
So far, general recursion with tail-call optimization should give you all the looping power of gotos. The only problem is that manual recursion (kind of like gotos, but a little better) is relatively unstructured, and we often need to read code that uses it carefully to see what is going on. Just like imperative languages have looping constructs (for, while, etc.) to describe the most common iteration patterns, in Haskell we can use higher order functions to do a similar job. For example, many of the list processing functions like map or foldl'3 are analogous to straightforward for-loops in pure code, and when dealing with monadic code there are functions in Control.Monad or in the monad-loops package that you can use. In the end, it's a matter of style, but I would err towards using the higher order looping functions.
1 You might want to check out "Lambda: The Ultimate GOTO", a classic article about how tail recursion can be as efficient as traditional iteration. Additionally, since Haskell is a lazy language, there are also some situations where recursion at non-tail positions can still run in O(1) space (search for "tail recursion modulo cons").
2 Those exclamation marks are there to make the accumulator parameter eagerly evaluated, so the addition happens at the same time as the recursive call (Haskell is lazy by default). You can omit the "!"s if you want, but then you risk a space leak.
3 Always use foldl' instead of foldl, due to the previously mentioned space leak issue.
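As a concrete illustration (a sketch of my own, not part of the original answer): the sum from the beginning of this answer can be written with foldl', which captures the same strict left-to-right accumulation as the hand-written sum':

import Data.List (foldl')

-- the accumulator recursion from sum' above, expressed as a strict left fold
sumTo :: Int -> Int
sumTo n = foldl' (+) 0 [1 .. n]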

I know there are other options for creating loops such as Control.Monad.LoopWhile and Control.Monad.forever -- should I be using those instead? (I am still very new to Haskell and do not understand monads yet.)
Yes, you should. You'll find that in "real" Haskell code, explicit recursion (i.e. calling your function in your function) is actually pretty rare. Sometimes, people do it because it's the most readable solution, but often, using things such as forever is much better.
In fact, saying that Haskell doesn't have loops is only a half-truth. It's correct that no loops are built into the language. However, in the standard libraries there are more kinds of loops than you'll ever find in an imperative language. In a language such as Python, you have "the for loop" which you use whenever you need to iterate through something. In Haskell, you have
map, fold, any, all, scan, mapAccum, unfold, find, filter (Data.List)
mapM, forM, forever (Control.Monad)
traverse, for (Data.Traversable)
foldMap, asum, concatMap (Data.Foldable)
and many, many others!
Each of these loops is tailored for (and sometimes optimised for) a specific use case.
When writing Haskell code, we make heavy use of these, because they allow us to reason more intelligently about our code and data. When you see someone use a for loop in Python, you have to read and understand the loop to know what it does. When you see someone use a map loop in Haskell, you know without reading it that it will not add any elements to the list – because we have the "Functor laws", which are just rules that say any map function must work this or that way!
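As a small illustration (my own, not from the answer): a Python loop that builds a new list element by element corresponds to a single map in Haskell.

-- Python:
--   squares = []
--   for x in xs:
--       squares.append(x * x)
-- Haskell:
squares :: [Int] -> [Int]
squares xs = map (\x -> x * x) xs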
Back to your example, we can first define an askNum "function" (it's technically not a function but an IO value... we can pretend it is a function for the time being) which asks the user to enter something just once, and displays it back to them. When you want your program to keep asking forever, you just give that "function" as an argument to the forever loop and the forever loop will keep asking forever!
The entire thing might look like:
import Control.Monad (forever)

askNum = do
  putStrLn "Enter something"
  num <- getLine
  putStrLn $ "You entered: " ++ num

dumdum = forever askNum
Then a more experienced programmer would probably get rid of the askNum "function" in this case, and turn the entire thing into
dumdum = forever $ do
  putStrLn "Enter something"
  num <- getLine
  putStrLn $ "You entered: " ++ num

Related

How to quickly read the do notation without translating to >>= compositions?

This question is related to this post: Understanding do notation for simple Reader monad: a <- (*2), b <- (+10), return (a+b)
I don't care if a language is hard to understand if it promises to solve some problems that easy-to-understand languages give us. I've been promised that the impossibility of changing state in Haskell (and other functional languages) is a game changer, and I do believe that. I've had too many bugs in my code related to state, and I totally agree with this post that reasoning about the interaction of objects in OOP languages is near impossible because they can change state, and thus in order to reason about code we should consider all the possible permutations of these states.
However, I've been finding that reasoning about Haskell monads is also very hard. As you can see in the answers to the question I linked, we need a big diagram to understand 3 lines of the do notation. I always end up opening stackedit.io to desugar the do notation by hand and write step by step the >>= applications of the do notation in order to understand the code.
The problem is more or less like this: in the majority of cases, when we have S a >>= f we have to unwrap a from S and apply f to it. However, f is actually another thing more or less of the form S a >>= g, which we also have to unwrap, and so on. The human brain doesn't work like that; we can't easily apply these things in our head and stop, keep them on the brain's stack, and continue applying the rest of the >>= until we reach the end. When the end is reached, we get all those things stored on the brain's stack and glue them together.
Therefore, I must be doing something wrong. There must be an easy way to understand '>>= composition' in the brain. I know that do notation is very simple, but I can only think of that as a way to easily write >>= compositions. When I see the do notation I simply translate it to a bunch of >>=. I don't see it as a separate way of understanding code. If there is a way, I'd like someone to tell me.
So the question is: how to read the do notation?
Given a simple code like
foo :: Monad m => m Int -> m Int -> m Int
foo x y = do
  a <- y -- I'm intentionally doing y first; see the Either example
  b <- x
  return (a + b)
you can't say much about <- except that it "gets" an Int value from x or y. What "get" means depends very much on what m is.
Some examples:
m ~ Maybe
foo (Just 3) (Just 5) evaluates to Just 8; replace either argument with Nothing, and you get Nothing. <- tries to get a value out of the Maybe Int value, but aborts the rest of the block if it fails.
m ~ Either a
Pretty much the same as Maybe, but replacing Nothing with the first Left value that it encounters. foo (Right 3) (Right 5) returns Right 8. foo x (Left "foo") returns Left "foo", whether x is a Right or Left value.
m ~ []
Now, instead of getting an Int, <- gets every Int from among the given choices. It does so nondeterministically; you can imagine that the function "forks" into multiple parallel copies, each one having chosen a different value from its list. In the end, the final result is a list of all the results that were computed.
foo [1,2] [3,4] returns [4, 5, 5, 6] ([3 + 1, 3 + 2, 4 + 1, 4 + 2]).
m ~ IO
This one is tricky, because unlike the previous monads we've looked at, there isn't necessarily a value yet to get. foo readLn readLn will return whatever the sum of the two numbers read from standard input is, with the possibility of a run-time error should the strings so read not be parseable as Int values.
You might think of it as working like the Maybe monad, but with run-time exceptions replacing Nothing.
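To make those cases concrete, here is roughly what they look like at a GHCi prompt, assuming the foo above is in scope (my own transcript; the Either example needs a type annotation so GHCi can pick a Show instance):

λ foo (Just 3) (Just 5)
Just 8
λ foo (Just 3) Nothing
Nothing
λ foo (Right 3) (Right 5) :: Either String Int
Right 8
λ foo [1,2] [3,4]
[4,5,5,6]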
Part 1: no need to go into the weeds
There is actually a very simple, easy to grasp, intuition behind monads: they encode the order of stuff happening. Like, first do this thing, then do the other thing, then do the third thing. For example:
executeMadDoctrine = do
  wait oneYear
  s <- evaluatePoliticalSituation
  case s of
    Stable -> do
      printInNewspapers "We're going to live another day"
      executeMadDoctrine -- recursive call
    Unstable -> do
      printInNewspapers "Run for your lives"
      launchMissiles
      return ()
Or a slightly more realistic (and also compilable and executable) example:
main = do
  putStrLn "What's your name?"
  name <- getLine
  if name == "EXIT" then
    return ()
  else do
    putStrLn $ "Hi, " <> name
    main
Simple. Just like Python. The human brain does, indeed, work exactly like this.
You see, you don't need to know how it all works inside, unless you start doing more advanced things. After all, you're probably not thinking about the order in which the cylinders fire every time you start your car, are you? You just hit the gas and it goes. It's the same with do.
Part 2: you picked a bad example
The example you picked in your previous question is not the best candidate for this stuff. The Monad instance for functions is indeed a bit brain-wrecking. Even I have to make a little effort to understand what's going on - and I've been doing Haskell professionally for quite some time.
The trouble here is mathematics. The bloody thing turns out unreasonably effective time after time, especially when nobody is asking it to.
Think about this: first we had perfectly good natural numbers that we could very well understand. I have two eyes, and you have one sword, I better run. But then it turned out that we need zero. Why the bloody hell do we need it? It's sacrilege! You can't write down something that isn't! But it turns out you have to have it. It unambiguously follows from the other stuff we know is true. And then we got irrational numbers. WTF is that? How do I even understand it? I can't have π oranges after all, can I? But they too must exist. It just follows. No way around it. And then complex numbers, transcendental, hypercomplex, unconstructible... My brain is boiling at this point.
It's sort of the same with monads: there is this peculiar mathematical object, and at some point somebody noticed that it's very good for expressing the order of computation, so we appropriated monads for that. But then it turns out that all kinds of things can be made to look like monads, mathematically speaking. No way around it, it just is.
And so we have all these funny instances. And the do notation still works for them, because they're monads (mathematically speaking), but it's no longer about order. Like, did you know that lists were monads too? But just like with functions, the interpretation for lists is not "order", it's nested loops. And if you combine lists with something else, you get non-determinism. Fun stuff.
But just like with different kinds of numbers, you can learn. You can build up intuition over time. Do you absolutely have to? See part 1.
Any long do chains can be re-arranged into the equivalent binary do, by the associativity law of monads, grouping everything on the right, as
do { A ; B ; C ; ... }
===
do { A ; r <- do { B ; C ; ... } ; return r }.
So we only need to understand this binary do form to understand everything else. And that is expressed as a single >>= combination.
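For reference, this binary form is exactly what the standard do desugaring turns into a single >>= (these are the general desugaring rules, not something specific to this answer):

do { a <- A ; B }   ===   A >>= (\a -> B)
do { A ; B }        ===   A >> B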
Then, treat do code's interpretation (for a particular monad) axiomatically instead, as a bunch of re-write rules. Convince yourself about the validity of those rules for a particular monad just once (yes, using possibly extensive >>=-based re-writes, once).
So, for the Reader monad from the question's linked entry,
(do { S f }) x                 ===  f x

(do { a <- S f ;
      return (h a) }) x        ===  let { a = f x } in h a
                               ===  h (f x)

(do { a <- S f ;
      b <- S g ;
      return (h a b) }) x      ===  let { a = f x ; b = g x } in h a b
                               ===  h (f x) (g x)
and any longer chain of lets is expressible as nested binary lets, equivalently.
The last one is liftM2 actually, so an argument could be made that understanding a particular monad means understanding its particular liftM2 (*), really.
And those Ss, we end up just ignoring them as noise, forced on us by Haskell syntax (well, that question didn't use them at all, but it could).
(*) more precisely, liftBind, (do { a <- S f ; b <- k a ; return (h a b) }) x === let {a = f x ; b = g x } in h a b where (S g) x === k a x. (specifically, this, after the words "the long version")
And so, your attitude of "When I see the do notation I simply translate it to a bunch of >>=. I don't see it as a separate way of understanding code" could actually be the problem.
do notation is your friend. Personally, I first hated it, then learned to love it, and now I see the >>=-based re-writes as its (low-level) implementation, more and more.
And, even more abstractly, do can equivalently be written as Monad Comprehensions, looking just like list comprehensions!
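For instance (a sketch of my own, assuming GHC's MonadComprehensions extension is enabled), the foo function from the earlier answer can be written as a comprehension that works for any monad:

{-# LANGUAGE MonadComprehensions #-}

-- same behaviour as:  foo x y = do { a <- y; b <- x; return (a + b) }
foo :: Monad m => m Int -> m Int -> m Int
foo x y = [ a + b | a <- y, b <- x ]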
@chepner has already included in his answer quite a lot of what I would have said, but I wish to emphasise another aspect, which I feel is quite pertinent to this question: the fact that do notation is, for most developers, a much easier and more easily understandable way to work with any monadic expression which is at least moderately complex.
The reason for this is that, in an almost miraculous way, do blocks end up very much resembling code written in an imperative language. Imperative style code is much easier to understand for the majority of developers, and not only because it's by far the most common paradigm: it gives an explicit "recipe" for what a piece of code is doing, whereas more typical Haskell expressions, particularly monadic ones involving nested lambdas and >>= everywhere, very easily become difficult to comprehend.
In saying this I certainly do not mean that one should code in an imperative language as opposed to Haskell. The advantages of the pure functional style are well documented and seemingly well understood by the OP, so I will not go into them here. But Haskell's do notation allows one to write code in an imperative-looking "style", which therefore is explicit and easier to comprehend - at least on a small scale - while sacrificing none of the advantages of using a pure functional language.
This "imperative style" of do notation is, I feel, more visible with some monads than others, and I wish to illustrate my point with examples from a couple of monads which I find suit the "imperative style" well. First, IO, where I can give this simple example:
greet :: IO ()
greet = do
  putStrLn "Hello, what is your name?"
  name <- getLine
  putStrLn $ "Pleased to meet you, " ++ name ++ "!"
I hope it's immediately obvious what this code does, when executed by the Haskell runtime. What I wish to emphasise is how similar it is to imperative code, for example, this Python translation (which is not the most idiomatic, but has been chosen to exactly match the Haskell code line for line).
def greet():
    print("Hello, what is your name?")
    name = input()
    print("Pleased to meet you, " + name + "!")
Now ask yourself: how easy would the code be to understand in its desugared form, without do?
greet = putStrLn "Hello, what is your name?" >> getLine >>= \name -> putStrLn $ "Pleased to meet you, " ++ name ++ "!"
It's not particularly difficult, granted - but I hope you agree that it's much more "noisy" than the do block above. I can't speak for others, but I very much doubt I'm alone in saying that the latter version might take me 10-20 seconds or so to fully comprehend, whereas the do block is instantly comprehensible. And this of course is an extremely simple action - anything more complicated, as found in many real-world applications, makes the difference in comprehensibility much greater.
I have chosen IO for a reason, of course - I think it's in dealing with IO in particular that it's most natural to think in terms of "do this action, then if the result is that then do the next action, otherwise...". While the semantics of the IO monad fits this perfectly, it's much easier to translate the code into something like that when written in quasi-imperative notation than it is to use >>= directly. And the do notation is easier to write, too.
But although IO is the clearest example of this, it's certainly not the only one. Another great example is the State monad. Here's a simple example of using it to find the sum of a list of integers (and I know you wouldn't actually do that this way, but it's just a very simple example of some not-totally-trivial code that used this monad):
import Control.Monad.State

sumList :: State [Int] Int
sumList = go 0
  where
    go subtotal = do
      remaining <- get
      case remaining of
        [] -> return subtotal
        (x:xs) -> do
          put xs
          go $ subtotal + x
Here, in my opinion, the steps are very clear - the auxiliary function go successively adds the first element of the list to the running total, while updating the internal state with the tail of the list. When there is no more list, it returns the running total. (Given the above, the function evalState sumList will take an actual list and sum it.)
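A quick usage sketch of my own (assuming Control.Monad.State from the mtl package, as in the snippet above):

-- evalState runs the stateful computation with the given initial state
-- and returns the result, discarding the final state
total :: Int
total = evalState sumList [1, 2, 3, 4]   -- total == 10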
One can probably come up with better examples (particularly ones where the calculation involved isn't trivial to do in other ways), but my point is hopefully still clear: rewriting the above with >>= and lambdas would make it much less comprehensible.
do notation is, in my opinion, why the often-quoted quip about Haskell being "the world's finest imperative language" has more than a grain of truth. By using and defining different monads one can write easily understandable "imperative" code in a wide variety of situations - while still having guarantees that various functions can't, for example, change global state. It's in many ways the best of both worlds.

Haskell: How to read user input to be used in a function inside of a do block?

I am attempting to run a simple palindrome program in Haskell but I want to request input from the user to be used in the program.
code:
palindrome :: String -> Bool
palindrome x = x == reverse x

main = do
  putStrLn "Brendon Bailey CSCI 4200-DB"
  putStrLn "Enter word to test for Palindrome"
  palindrome <- getLine
  putStrLn "Thanks"
The code works, but it just displays "Thanks". If I exclude the last line of the block I get the "last line of do block must be expression" error, and if I include the actual palindrome code above the do block I get errors saying I need a "let" statement, but incorporating one causes more errors. I am brand new to Haskell (and functional programming in general; I do Java, C++, and web langs). I think the explanation here will be simple, but after reading input/output documentation on Haskell I still can't figure out the proper way to accomplish this.
I'm doing it this way because my assignment stipulates that I must show the class info when the program is run. The palindrome part works fine without the do block; I'm just attempting to automatically output my text before testing a palindrome. Also, how can I set up a Haskell script to just run when loaded? I don't want the user to have to type "main"; I want it to just run.
edit:
palindrome :: String -> Bool
palindrome x = x == reverse x

main = do
  putStrLn "Brendon Bailey CSCI 4200-DB"
  putStrLn "Enter Word"
  x <- getLine
  palindrome x
  putStrLn "thanks"
This also doesn't work. How do I get the user's input to use in the palindrome???
If the code:
palindrome :: String -> Bool
palindrome x = x == reverse x
works when "palindrome "etc"" is entered in GHCi then why doesn't my second code version do the same thing???
edit 2:
Don't forget to enclose the argument to print in parentheses, or Haskell will think each separate input is a separate argument. Dumb mistake, but easy to make.
This question is multiple questions in one, so I'll try to address each problem. I will address them in a different order from the one in which they were stated.
Before we begin, it must be said that you have critical misconceptions about the way Haskell works. I strongly recommend reading Learn You a Haskell for Great Good, which may seem juvenile, but is in fact a stellar introduction to functional programming, and in fact is the way many of us began learning Haskell.
Question 1: Compilation, GHC, and Haskell executables
(...) how can I set up a haskell script to just run when loaded? I don't want the user to have to type "main" I want it to just run.
You are confusing interpreted scripting languages like Python, Javascript, or Ruby, with compiled languages like C, Java, or Haskell.
Programs in interpreted languages are run by a program on the computer that contains information on how to interpret the text written, hence the name 'interpreted'. However, compiled languages are written, then converted from text into machine code, which is almost unreadable by humans, but which computers can run quickly.
When we write a complete, executable program in Haskell, we expect to compile it with The Glasgow Haskell Compiler, GHC (not GHCi). This looks something like this:
$ cat MyProgram.hs
main :: IO ()
main = putStrLn "This is my program!"
$ ghc MyProgram.hs
[1 of 1] Compiling Main ( MyProgram.hs, MyProgram.o )
Linking MyProgram ...
$ ./MyProgram
This is my program!
Now, MyProgram.hs was converted by GHC from a Haskell source code file into an executable, MyProgram. We defined an IO action, called main, to tell the compiler where we wanted it to begin. This is called an entry point, and is necessary to create a standalone executable. This is how Haskell programs are created and run.
Crucially, GHCi is not how Haskell programs are run. GHCi is an interactive environment for experimenting with Haskell functions and definitions. GHCi allows you to evaluate expressions (such as palindrome "Hello", or main), but it is not intended as the way to run Haskell programs, in the same way that Python's IDLE is not the way to run Python programs.
Question 2: Functions vs IO actions
(...) If you just have the palindrome code outside of the do block and the user enters "palindrome "etc"" in GHCi the program returns the boolean as output (...)
Haskell does not behave like languages you may be used to. Functions in Haskell are not programs. When you defined your function palindrome, you made a way for the computer to convert a String into a Bool. You did not tell it how to output that Bool in any way. Hence, there is an extra step: the IO monad.
Haskell programs are, in a way, represented as data, hence the somewhat strange type signature main :: IO (). I will not be explaining monads in detail here, but put simply, in a do-block, you can only state things of type IO a where a is any type. So, when you wrote:
main = do
  -- (...)
  palindrome x
  -- (...)
It didn't make sense. Think of it this way: how can you 'run' or 'execute' a Bool? It's a boolean, not a program!
However, there is a function, called print, which allows you to do exactly that. Your line should be:
main = do
  -- (...)
  print (palindrome x)
  -- (...)
I won't go into print's type signature, but rest assured that it does indeed return an IO (), which allows us to put it into a do-block. By the way, if you write:
palindrome :: String -> Bool
palindrome x = x == reverse x

main = do
  putStrLn "Brendon Bailey CSCI 4200-DB"
  putStrLn "Enter Word"
  x <- getLine
  print (palindrome x)
  putStrLn "thanks"
...and compile the file with GHC, you will get the desired effect, assuming you did in fact want the program to output True or False. In fact, I compiled this as Palindromes.hs, and here's the result:
$ ghc Palindromes.hs
[1 of 1] Compiling Main ( Palindromes.hs, Palindromes.o )
Linking Palindromes ...
$ ./Palindromes
Brendon Bailey CSCI 4200-DB
Enter Word
amanaplanacanalpanama
True
thanks
Question 3: GHCi as a testing environment
[My code] works when "palindrome "etc"" is entered in GHCi then why doesn't my second code version do the same thing???
GHCi, as stated before, is different from GHC, in that it is an environment to test and 'play' with Haskell code before writing standalone programs.
In GHCi, you write expressions, and they are evaluated (using GHC's machinery), the result is printed, and it loops. Hence, this kind of system is often called a Read-Evaluate-Print Loop, or a REPL.
Let's examine a bit of code in GHCi:
λ let plusOne n = n + 1
λ plusOne 5
6
The first line defines a value (here a function) that we can use later. The second line, however, is not a definition, but an expression. Hence, GHCi evaluates it, and prints the result.
GHCi can also execute IO actions:
λ let myProgram = putStrLn "Hello!"
λ myProgram
Hello!
This works because, at the GHCi prompt, evaluating an IO action also executes it: this is how GHCi is designed to work - it's a truly brilliant idea, provided that you grasp it.
Even though we see these similarities, GHCi is not equivalent to a Haskell program, as one might expect after using Python. GHCi unfortunately bears little resemblance to 'real' Haskell code, because Haskell works in a fundamentally different way to most other languages.
GHCi is nothing more than a useful way to quickly see results of our code, as well as other useful information that I won't explain here.
Other points
Monads are a key part of the problem here. There have been many, many questions on Monads here on StackOverflow as they tend to be a sticking point, so any questions you may have are likely here. This makes things difficult, since Haskell IO cannot be fully appreciated without an understanding of Monads. There is an LYAH chapter on Monads.
Because they are not the same thing: Haskell is a pure functional programming language, with no side effects. For example, if a sum function changes a global variable, or prints the sum before returning it, those are side effects. Functions in most other languages frequently have side effects, typically making them hard to analyze. Functional programming languages seek to avoid side effects when possible.
In this case, reading input and writing output are side effects.
Operations that have side effects are handled through a Monad, which is what do notation (sugar for >>=) is for.
This is valid code:
palindrome :: String -> Bool
palindrome x = x == reverse x

main = do
  putStrLn "Brendon Bailey CSCI 4200-DB"
  putStrLn "Enter Word"
  x <- getLine
  let y = palindrome x
  putStrLn "thanks"
  print y
Here is a good explanation of Monads.

Is there a way to "preserve" results in Haskell?

I'm new to Haskell and understand that it is (basically) a pure functional language, which has the advantage that the results of functions will not change across multiple evaluations. Given this, I'm puzzled by why I can't easily mark a function in such a way that it remembers the result of its first evaluation, and does not have to be evaluated again each time its value is required.
In Mathematica, for example, there is a simple idiom for accomplishing this:
f[x_]:=f[x]= ...
but in Haskell, the closest things I've found is something like
f' = (map f [0 ..] !!)
  where f 0 = ...
        f n = f' ...
which in addition to being far less clear (and apparently limited to Int arguments?) does not (seem to) preserve results within an interactive session.
Admittedly (and clearly), I don't understand exactly what's going on here; but naively, it seems like Haskell should have some way, at the function definition level, of
taking advantage of the fact that its functions are functions and skipping re-computation of their results once they have been computed, and
indicating a desire to do this at the function definition level with a simple and clean idiom.
Is there a way to accomplish this in Haskell that I'm missing? I understand (sort of) that Haskell can't store the evaluations as "state", but why can't it simply (in effect) redefine evaluated functions to be their computed value?
This grows out of this question, in which lack of this feature results in terrible performance.
Use a suitable library, such as MemoTrie.
import Data.MemoTrie

f' = memo f
  where f 0 = ...
        f n = f' ...
That's hardly less nice than the Mathematica version, is it?
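To make the pattern concrete, here is a small sketch of my own (not from the answer), using memo from the MemoTrie package: a memoised Fibonacci in which every recursive call goes back through the memoised wrapper.

import Data.MemoTrie (memo)

-- fib is the memoised function; fib' performs one step of the recursion
-- and calls back into fib, so each sub-result is computed at most once
fib :: Int -> Integer
fib = memo fib'
  where
    fib' 0 = 0
    fib' 1 = 1
    fib' n = fib (n - 1) + fib (n - 2)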
Regarding
“why can't it simply (in effect) redefine evaluated functions to be their computed value?”
Well, it's not so easy in general. These values have to be stored somewhere. Even for an Int-valued function, you can't just allocate an array with all possible values – it wouldn't fit in memory. The list solution only works because Haskell is lazy and therefore allows infinite lists, but that's not particularly satisfying since lookup is O(n).
For other types it's simply hopeless – you'd need to somehow diagonalise an uncountably infinite domain.
You need some cleverer organisation. I don't know how Mathematica does this, but it probably uses a lot of “proprietary magic”. I wouldn't be so sure that it does really work the way you'd like, for any inputs.
Haskell fortunately has type classes, and these allow you to express exactly what a type needs in order to be quickly memoisable. HasTrie is such a class.
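Roughly (abridged from Data.MemoTrie; the enumerate method and the instances are omitted), the class says that a type is memoisable if functions from it can be converted to and from a lazy trie, and memo is essentially untrie . trie:

class HasTrie a where
  data (:->:) a :: * -> *            -- a trie representing a total function from a
  trie   :: (a -> b) -> (a :->: b)   -- tabulate a function into the trie
  untrie :: (a :->: b) -> (a -> b)   -- look results back up in the trie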

what's the meaning of "you do computations in Haskell by declaring what something is instead of declaring how you get it"?

Recently I have been trying to learn a functional programming language, and I chose Haskell.
Now I am reading Learn You a Haskell, and here is a description that seems like Haskell's philosophy, which I am not sure I understand exactly: you do computations in Haskell by declaring what something is instead of declaring how you get it.
Suppose I want to get the sum of a list.
In a declaring how you get it way:
get the total sum by adding all the elements, so the code will be like this (not Haskell, but Python):
sum = 0
for i in l:
    sum += i
print sum
In a what something is way:
the total sum is the sum of the first element and the sum of the rest elements, so the code will be like this:
sum' :: (Num a) => [a] -> a
sum' [] = 0
sum' (x:xs) = x + sum' xs
But I am not sure whether I get it or not. Can someone help? Thanks.
Imperative and functional are two different ways to approach problem solving.
Imperative (Python) gives you actions which you need to use to get what you want. For example, you may tell the computer "knead the dough. Then put it in the oven. Turn the oven on. Bake for 10 minutes.".
Functional (Haskell, Clojure) gives you solutions. You'd be more likely to tell the computer "I have flour, eggs, and water. I need bread". The computer happens to know dough, but it doesn't know bread, so you tell it "bread is dough that has been baked". The computer, knowing what baking is, knows now how to make bread. You sit at the table for 10 minutes while the computer does the work for you. Then you enjoy delicious bread fresh from the oven.
You can see a similar difference in how engineers and mathematicians work. The engineer is imperative, looking at the problem and giving workers a blueprint to solve it. The mathematician defines the problem (solve for x) and the solution (x = -----) and may use any number of tried and true solutions to smaller problems (2x - 1 = ----- => 2x = ----- + 1) until he finally finds the desired solution.
It is not a coincidence that functional languages are used largely by people in universities, not because it is difficult to learn, but because there are not many mathematical thinkers outside of universities. In your quotation, they tried to define this difference in thought process by cleverly using how and what. I personally believe that everybody understands words by turning them into things they already understand, so I'd imagine my bread metaphor should clarify the difference for you.
EDIT: It is worth noting that when you imperatively command the computer, you don't know if you'll have bread at the end (maybe you cooked it too long and it's burnt, or you didn't add enough flour). This is not a problem in functional languages where you know exactly what each solution gives you. There is no need for trial and error in a functional language because everything you do will be correct (though not always useful, like accidentally solving for t instead of x).
The missing part of the explanations is the following.
The imperative example shows you step by step how to compute the sum. At no stage can you convince yourself that it is indeed the sum of the elements of a list. For example, there is no knowing why sum = 0 at first; should it be 0 at all? Do you loop through the right indices? What does sum += i give you?
sum = 0       # why? it may become clear if you consider what happens in the loop,
              # but not on its own
for i in l:
    sum += i  # what do we get? it will become clear only after the loop ends
              # at no step of the iteration do you have *the sum of the list*,
              # so the step on its own is not meaningful
The declarative example is very different in this respect. In this particular case you start with declaring that the sum of an empty list is 0. This is already part of the answer of what the sum is. Then you add a statement about non-empty lists - a sum for a non-empty list is the sum of the tail with the head element added to it. This is the declaration of what the sum is. You can demonstrate inductively that it finds the solution for any list.
Note this proof part. In this case it is obvious. In more complex algorithms it is not obvious, so the proof of correctness is a substantial part - and remember that the imperative case only makes sense as a whole.
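To see the declaration at work, here is a hand expansion of my own, just substituting the two equations:

sum' [1,2,3]
  = 1 + sum' [2,3]
  = 1 + (2 + sum' [3])
  = 1 + (2 + (3 + sum' []))
  = 1 + (2 + (3 + 0))
  = 6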
Another way to compute the sum, where, hopefully, declarativeness and provability become clearer:
sum [] = 0                  -- the sum of the empty list is 0
sum [x] = x                 -- the sum of the list with 1 element is that element
sum xs = sum $ p xs where   -- the sum of any other list is
                            --   the sum of the list reduced with p
  p (x:y:xs) = x+y : p xs   -- p reduces the list by replacing a pair of elements
                            --   with their sum
  p xs = xs                 -- if there was no pair of elements, leave the list as is
Here we can convince ourselves that: 1. p makes the list ever shorter, so the computation of the sum will terminate; 2. p produces a list of sums, so by summing ever shorter lists we get a list of just one element; 3. because (+) is associative, the value produced by repeatedly applying p is the same as the sum of all elements in the original list; 4. the depth of nested (+) applications is logarithmic rather than linear, smaller than in the straightforward implementation.
In other words, the order of adding the elements doesn't matter, so we can sum the elements ([a,b,c,d,e]) in pairs first (a+b, c+d), which gives us a shorter list [a+b,c+d,e], whose sum is the same as the sum of the original list, and which now can be reduced in the same way: [(a+b)+(c+d),e], then [((a+b)+(c+d))+e].
Robert Harper claims in his blog that "declarative" has no meaning. I suppose he is talking about a clear definition there, which I usually think of as narrower than meaning, but the post is still worth checking out, and it hints that you might not find as clear an answer as you would wish.
Still, everybody talks about "declarative", and it feels like when we do, we usually talk about the same thing. I.e. give a number of people two different APIs/languages/programs and ask them which is the most declarative one, and they will usually pick the same.
The confusing part to me at first was that your declarative sum
sum' [] = 0
sum' (x:xs) = x + sum' xs
can also be seen as an instruction on how to get the result. It's just a different one. It's also worth noting that the function sum in the Prelude isn't actually defined like that, since that particular way of calculating the sum is inefficient. So clearly something is fishy.
So, the "what, not how" explanation seem unsatisfactory to me. I think of it instead as
declarative being a "how" which in addition have some nice properties. My current intuition
about what those properties are is something similar to:
A thing is more declarative if it doesn't mutate any state.
A thing is more declarative if you can do mathy transformations on it and the meaning of the thing sort of remains intact. So given your declarative sum again, if we knew that + is commutative, there is some justification for thinking that writing it like sum' xs + x should yield the same result.
A declarative thing can be decomposed into smaller things and still have some meaning. Like x and sum' xs still have the same meaning when taken separately, but trying to do the same with the sum += i of Python doesn't work as well.
A thing is more declarative if it's independent of the flow of time. For example, CSS doesn't describe the styling of a web page at page load; it describes the styling of the web page at any time, even if the page changes.
A thing is more declarative if you don't have to think about program flow.
Other people might have different intuitions, or even a definition that I'm not aware of, but hopefully these are somewhat helpful regardless.

Non-Trivial Lazy Evaluation

I'm currently digesting the nice presentation Why learn Haskell? by Keegan McAllister. There he uses the snippet
minimum = head . sort
as an illustration of Haskell's lazy evaluation by stating that minimum has time-complexity O(n) in Haskell. However, I think the example is kind of academic in nature. I'm therefore asking for a more practical example where it's not trivially apparent that most of the intermediate calculations are thrown away.
Have you ever written an AI? Isn't it annoying that you have to thread pruning information (e.g. maximum depth, the minimum cost of an adjacent branch, or other such information) through the tree traversal function? This means you have to write a new tree traversal every time you want to improve your AI. That's dumb. With lazy evaluation, this is no longer a problem: write your tree traversal function once, to produce a huge (maybe even infinite!) game tree, and let your consumer decide how much of it to consume.
Writing a GUI that shows lots of information? Want it to run fast anyway? In other languages, you might have to write code that renders only the visible scenes. In Haskell, you can write code that renders the whole scene, and then later choose which pixels to observe. Similarly, rendering a complicated scene? Why not compute an infinite sequence of scenes at various detail levels, and pick the most appropriate one as the program runs?
You write an expensive function, and decide to memoize it for speed. In other languages, this requires building a data structure that tracks which inputs for the function you know the answer to, and updating the structure as you see new inputs. Remember to make it thread safe -- if we really need speed, we need parallelism, too! In Haskell, you build an infinite data structure, with an entry for each possible input, and evaluate the parts of the data structure that correspond to the inputs you care about. Thread safety comes for free with purity.
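The classic tiny example of this idea (my own, not from the answer) is memoising Fibonacci with an infinite list: the list is the "data structure with an entry for each possible input", and laziness means only the entries you actually demand are ever computed.

-- an infinite list whose nth element is fib n; each element is
-- computed at most once and then shared on subsequent lookups
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

fib :: Int -> Integer
fib n = fibs !! n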
Here's one that's perhaps a bit more prosaic than the previous ones. Have you ever found a time when && and || weren't the only things you wanted to be short-circuiting? I sure have! For example, I love the <|> function for combining Maybe values: it takes the first one of its arguments that actually has a value. So Just 3 <|> Nothing = Just 3; Nothing <|> Just 7 = Just 7; and Nothing <|> Nothing = Nothing. Moreover, it's short-circuiting: if it turns out that its first argument is a Just, it won't bother doing the computation required to figure out what its second argument is.
And <|> isn't built in to the language; it's tacked on by a library. That is: laziness allows you to write brand new short-circuiting forms. (Indeed, in Haskell, even the short-circuiting behavior of (&&) and (||) aren't built-in compiler magic: they arise naturally from the semantics of the language plus their definitions in the standard libraries.)
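Concretely, the definitions in the standard library are essentially just these (reproduced here for illustration); the short-circuiting falls out of lazy evaluation, because when the first argument decides the answer the second is never evaluated:

(&&) :: Bool -> Bool -> Bool
True  && x = x
False && _ = False   -- the second argument is never forced here

(||) :: Bool -> Bool -> Bool
True  || _ = True    -- likewise here
False || x = x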
In general, the common theme here is that you can separate the production of values from the determination of which values are interesting to look at. This makes things more composable, because the choice of what is interesting to look at need not be known by the producer.
Here's a well-known example I posted to another thread yesterday. Hamming numbers are numbers that don't have any prime factors larger than 5. I.e. they have the form 2^i*3^j*5^k. The first 20 of them are:
[1,2,3,4,5,6,8,9,10,12,15,16,18,20,24,25,27,30,32,36]
The 500000th one is:
1962938367679548095642112423564462631020433036610484123229980468750
The program that printed the 500000th one (after a brief moment of computation) is:
merge xxs@(x:xs) yys@(y:ys) =
  case (x `compare` y) of
    LT -> x : merge xs  yys
    EQ -> x : merge xs  ys
    GT -> y : merge xxs ys

hamming = 1 : m 2 `merge` m 3 `merge` m 5
  where
    m k = map (k *) hamming

main = print (hamming !! 499999)
Computing that number with reasonable speed in a non-lazy language takes quite a bit more code and head-scratching. There are a lot of examples here
Consider generating and consuming the first n elements of an infinite sequence. Without lazy evaluation, the naive encoding would run forever in the generation step, and never consume anything. With lazy evaluation, only as many elements are generated as the code tries to consume.
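A minimal sketch of that (my own example): the producer is an infinite list, and the consumer decides how much of it to force.

-- [1..] is infinite; because evaluation is lazy, only the five
-- elements demanded by take are ever generated
firstFiveSquares :: [Integer]
firstFiveSquares = take 5 (map (^ 2) [1 ..])   -- [1,4,9,16,25]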

Resources