Haskell: What's the underlying data structure for list? - haskell

If it's a linked list, why doesn't it support push_back?
If it's simply an array, why does it need linear time when accessed by subscript?
I'd appreciate your help.
Edit: We can prepend an element to a list like this: 1:[2,3]; that's push_front. But we can't do it this way: [2,3]:4; that would be push_back.
P.S. I'm borrowing push_front/push_back from C++'s STL.

Haskell lists are singly linked lists. They support appending to the end of the list, but the whole list has to be traversed for this operation:
λ> let x = [1,2,3]
λ> x ++ [4]
[1,2,3,4]

If by push_back you mean adding an element to the end, of course it "supports" it. To add an element x to a list list (and get a new list so constructed), use the expression list ++ [x].
Beware though, that this is an O(n) operation, n being the length of list.
That's because it is a singly linked list, which also answers your other question about subscript access: reaching the element at index n means following n links, hence linear time.

Since the question asks about the "underlying implementation":
The list type is (conceptually) defined like this:
data [a] = [] | a:[a]
You can't actually write this declaration yourself, because lists have special built-in syntax. But you can make exactly the same kind of thing:
data List a = Nil | a :* List a
infixr 5 :*
When you write something like [1,2,3], that's just special syntax for 1:2:3:[]. Since : associates to the right, this really means 1:(2:(3:[])). The functions on lists are all, fundamentally, based on pattern matching, which deals with just one constructor at a time. You may have to open up a lot of : constructors to get to the end of a list, and then build a bunch of : constructors to put together a new version with an extra element at the end. Any functions (like ++) that you may use will end up going through this sort of process internally.
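To make the last point concrete, here is a small sketch (my own illustration, not part of the original answer) of a push_back-style function for the List type defined above. Note how it must rebuild every (:*) cell in front of the new element, which is exactly why appending at the end is O(n):
pushBack :: List a -> a -> List a
pushBack Nil       y = y :* Nil               -- reached the end: attach the new element
pushBack (x :* xs) y = x :* pushBack xs y     -- rebuild this cell, then keep walking
For example, pushBack (1 :* 2 :* Nil) 3 evaluates to 1 :* 2 :* 3 :* Nil, rebuilding the cells for 1 and 2 and adding a new one for 3.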

Related

What is the relation between syntax sugar, laziness and list elements accessed by index in Haskell?

Haskell lists are constructed by a sequence of calls to cons, after desugaring syntax:
Prelude> (:) 1 $ (:) 2 $ (:) 3 []
[1,2,3]
Are lists lazy due to them being such a sequence of function calls?
If this is true, how can the runtime access the values while calling the chain of functions?
Is access by index also syntactic sugar?
How could we express it in another way, less sugared than this:
Prelude> (!!) lst 1
2
The underlying question here might be:
Are lists fundamental entities in Haskell, or they can be expressed as composition of more essential concepts?
Is there a possibility of representing lists in the simplest lambda calculus?
I am trying to implement a language where lists are defined in the standard library, not as a special entity hardwired directly into the parser/interpreter/runtime.
Lists in Haskell are special in syntax, but not fundamentally.
Fundamentally, the Haskell list is defined like this:
data [] a = [] | (:) a ([] a)
Just another data type with two constructors, nothing to see here, move along.
The above is kind of pseudocode though, because you can't actually define something like that yourself: neither [] nor (:) is a valid constructor name. A special exception is made for the built-in lists.
But you could define the equivalent, something like:
data MyList a = Nil | Cons a (MyList a)
And this would work exactly the same with regards to memory management, laziness, and so on, but it won't have the nice square-bracket syntax (at least in Haskell 2010; in modern GHC you can get the special syntax for your own types too, thanks to overloaded lists).
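As a rough, assumption-laden sketch (mine, not from the answer) of how that overloaded-lists mechanism can be hooked up to such a type, GHC's IsList class from GHC.Exts lets the square-bracket literals desugar to fromList:
{-# LANGUAGE TypeFamilies, OverloadedLists #-}
import GHC.Exts (IsList(..))

data MyList a = Nil | Cons a (MyList a) deriving Show

instance IsList (MyList a) where
  type Item (MyList a) = a
  fromList           = foldr Cons Nil       -- build a MyList from an ordinary list
  toList Nil         = []
  toList (Cons x xs) = x : toList xs

example :: MyList Int
example = [1, 2, 3]   -- with OverloadedLists, this literal means fromList [1,2,3]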
As far as laziness goes, that's not special to lists, but is special to data constructors, or more precisely, pattern matching on data constructors. In Haskell, every computation is lazy. This means that whatever crazy chain of function calls you may construct, it's not evaluated right away. Doesn't matter if it's a list construction or some other function call. Nothing is evaluated right away.
But when is it evaluated then? The answer is in the spec: a value is evaluated when somebody tries to pattern match on it, and at that moment it's evaluated up to the data constructor being matched. So, for lists, this would be when you go case myList of { [] -> "foo"; x:xs -> "bar" } - that's when the call chain is evaluated up to the first data constructor, which is necessary in order to decide whether that constructor is [] or (:), which is necessary for evaluating the case expression.
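A tiny self-contained example of that rule (my own, not from the spec): the tail below is an error thunk, yet the program runs fine, because matching the outermost (:) constructor never demands it.
firstOrZero :: [Int] -> Int
firstOrZero xs = case xs of
  []    -> 0
  x : _ -> x

main :: IO ()
main = print (firstOrZero (1 : error "never evaluated"))   -- prints 1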
Index access is also not special, it works on the exact same principle: the implementation of the (!!) operator (check out the source code) repeatedly (recursively) matches on the list until it discovers N (:) constructors in a row, at which point it stops matching and returns whatever was on the left of the last (:) constructor.
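A simplified sketch of that principle (not GHC's actual source, which is more optimised and produces nicer error messages), using a hypothetical operator (!!!) standing in for (!!):
(!!!) :: [a] -> Int -> a
(x : _)  !!! 0 = x                   -- matched enough (:) constructors: return the element
(_ : xs) !!! n = xs !!! (n - 1)      -- peel off one (:) constructor and keep counting
[]       !!! _ = error "index too large"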
In the "simplest" lambda calculus, in the absence of data constructors or primitive types, I reckon your only choice is to Church-encode lists (e.g. as a fold, or directly as a catamorphism) or build them up from other structures (e.g. pairs), which are themselves Church-encoded. Lambda calculus, after all, only has functions and nothing else. Unless you mean something more specific.

Haskell style: Pattern matching vs. more intuitive solutions

I'm just starting out with Haskell, so I'm trying to wrap my head around the "Haskell way of thinking." Is there a reason to use pattern matching to solve Problem 1 here basically by unwrapping the whole list and calling the function recursively, instead of just retrieving the last element directly like myLast lst = lst !! ((length lst) - 1)? It seems almost brute-force, but I assume it's just my lack of familiarity here.
A few things I can think of:
(!!) and length are ultimately implemented using recursion over the structure of the list. That being so, it can be a worthwhile learning exercise to implement those basic functions using explicit recursion.
Keep in mind that, under the hood, the retrieval of the last element is not direct. Since we are dealing with linked lists, length has to go through all elements of the lists, and (!!) has to go through all elements up to the desired index. That being so, lst !! (length lst - 1) runs through the whole list twice, rather than once. (This is one of the reasons why, as a rule of thumb, length is better avoided unless you actually need to know the number of elements in and of itself, and not just as a proxy to something else.)
Pattern matching is a neat way of stating facts about the structure of data types. If, while consuming a list recursively, you match a [x] pattern (or, equivalently, x : [] -- an element consed to the empty list), you know that x is the last element. In a way, matching [x] involves one less level of indirection than accessing the list element at index length lst - 1, as it only deals with the structure of the list, without requiring an indexing scheme to be bolted on the top of it.
With all that said, there is something fundamentally right about your feeling that explicit recursion feels "almost brute-force". In time, you'll find out about folds, mapping functions, and other ways to capture and abstract common recursive patterns, making it possible to write in a more fluent manner.
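For reference, a pattern-matching myLast along the lines discussed above (a sketch of the usual textbook solution) makes a single pass and never mentions indices:
myLast :: [a] -> a
myLast [x]      = x                            -- a one-element list: x is the last element
myLast (_ : xs) = myLast xs                    -- otherwise drop the head and recurse
myLast []       = error "myLast: empty list"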

How does lazy evaluation work when the argument is a list?

From my understanding, lazy evaluation means that arguments are not evaluated before they are passed to a function, but only when their values are actually used.
But in a Haskell tutorial, I see this example:
xs = [1,2,3,4,5,6,7,8]
doubleMe(doubleMe(doubleMe(xs)))
The author said an imperative language would probably pass through the list once and make a copy and then return it. Then it would pass through the list another two times and return the result.
But in a lazy language, it would first compute
doubleMe(doubleMe(doubleMe(1)))
The innermost doubleMe(1) gives back 2, then doubleMe(2) gives 4, and finally doubleMe(4) gives 8.
So it only does one pass through the list and only when you really need it.
This confuses me. Why doesn't a lazy language take the list as a whole, instead of splitting it up? I mean, we can ignore what the list or the expression is before we use it, but don't we have to evaluate the whole thing once we do use it?
A list like [1,2,3,4,5,6,7,8] is just syntactic sugar for this: 1:2:3:4:5:6:7:8:[].
In this case, all the values in the list are numeric constants, but we could define another, smaller list like this:
1:1+1:[]
All Haskell lists are linked lists, which means that they have a head and a tail. In the above example, the head is 1, and the tail is 1+1:[].
If you only want the head of the list, there's no reason to evaluate the rest of the list:
(h:_) = 1:1+1:[]
Here, h refers to 1. There's no reason to evaluate the rest of the list (1+1:[]) if h is all you need.
That's how lists are lazily evaluated. 1+1 remains a thunk (an unevaluated expression) until the value is required.
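A hedged sketch of the tutorial's situation (the tutorial's doubleMe isn't shown in the question, so it is assumed below to be map (*2)): only the elements you actually demand are pushed through the three layers.
doubleMe :: [Int] -> [Int]
doubleMe = map (*2)   -- assumption: doubleMe doubles every element of a list

main :: IO ()
main = print (head (doubleMe (doubleMe (doubleMe [1 ..]))))
-- prints 8: only the first element of the (infinite!) input list is ever evaluated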

Why is prepending to a list instantaneous?

I am going through a Haskell tutorial about lists, and it claims:
Watch out when repeatedly using the ++ operator on long strings ... Haskell has to walk through the whole list on the left side of ++. ... However, putting something at the beginning of a list using the : operator (also called the cons operator) is instantaneous.
But, in my mind, things should be the other way around.
: has to go through all elements in the list because it needs to shift all the indices. ++, on the other hand, can just append a new element at the end of the list and be done with it, hence instantaneous.
Any help understanding this statement?
A list in Haskell is just a singly-linked list. A list of, say, Char, is either [], the empty list, or c : cs, where c is a Char and cs is a list of Char. To produce c : cs given c and cs, all the implementation needs to do is allocate a record with a tag indicating (:) and copies of the pointers c and cs. This is extremely cheap.
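By contrast, (++) has to rebuild a cons cell for every element of its left argument, which is why repeated appends on long strings get expensive. Essentially (a Prelude-style sketch, written with a fresh name to avoid clashing with the real operator):
appendList :: [a] -> [a] -> [a]
appendList []       ys = ys
appendList (x : xs) ys = x : appendList xs ys   -- one new cons cell per element of the left list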

Why does Haskell's `head` crash on an empty list (or why *doesn't* it return an empty list)? (Language philosophy)

Note to other potential contributors: Please don't hesitate to use abstract or mathematical notations to make your point. If I find your answer unclear, I will ask for elucidation, but otherwise feel free to express yourself in a comfortable fashion.
To be clear: I am not looking for a "safe" head, nor is the choice of head in particular exceptionally meaningful. The meat of the question follows the discussion of head and head', which serve to provide context.
I've been hacking away with Haskell for a few months now (to the point that it has become my main language), but I am admittedly not well-informed about some of the more advanced concepts nor the details of the language's philosophy (though I am more than willing to learn). My question then is not so much a technical one (unless it is and I just don't realize it) as it is one of philosophy.
For this example, I am speaking of head.
As I imagine you'll know,
Prelude> head []
*** Exception: Prelude.head: empty list
This follows from head :: [a] -> a. Fair enough. Obviously one cannot return an element of (hand-wavingly) no type. But at the same time, it is simple (if not trivial) to define
head' :: [a] -> Maybe a
head' [] = Nothing
head' (x:xs) = Just x
I've seen a little discussion of this in the comments on certain answers here. Notably, one Alex Stangl says
'There are good reasons not to make everything "safe" and to throw exceptions when preconditions are violated.'
I do not necessarily question this assertion, but I am curious as to what these "good reasons" are.
Additionally, a Paul Johnson says,
'For instance you could define "safeHead :: [a] -> Maybe a", but now instead of either handling an empty list or proving it can't happen, you have to handle "Nothing" or prove it can't happen.'
The tone that I read from that comment suggests that this is a notable increase in difficulty/complexity/something, but I am not sure that I grasp what he's putting out there.
One Steven Pruzina says (in 2011, no less),
"There's a deeper reason why e.g 'head' can't be crash-proof. To be polymorphic yet handle an empty list, 'head' must always return a variable of the type which is absent from any particular empty list. It would be Delphic if Haskell could do that...".
Is polymorphism lost by allowing empty list handling? If so, how so, and why? Are there particular cases which would make this obvious? This section is amply answered by Russell O'Connor. Any further thoughts are, of course, appreciated.
I'll edit this as clarity and suggestion dictates. Any thoughts, papers, etc., you can provide will be most appreciated.
Is polymorphism lost by allowing empty list handling? If so, how so, and why? Are there particular cases which would make this obvious?
The free theorem for head states that
f . head = head . map f
Applying this theorem to [] implies that
f (head []) = head (map f []) = head []
This theorem must hold for every f, so in particular it must hold for const True and const False. This implies
True = const True (head []) = head [] = const False (head []) = False
Thus if head is properly polymorphic and head [] were a total value, then True would equal False.
PS. I have some other comments about the background to your question to the effect of if you have a precondition that your list is non-empty then you should enforce it by using a non-empty list type in your function signature instead of using a list.
Why does anyone use head :: [a] -> a instead of pattern matching? One of the reasons is because you know that the argument cannot be empty and do not want to write the code to handle the case where the argument is empty.
Of course, your head' of type [a] -> Maybe a is defined in the standard library as Data.Maybe.listToMaybe. But if you replace a use of head with listToMaybe, you have to write the code to handle the empty case, which defeats this purpose of using head.
I am not saying that using head is a good style. It hides the fact that it can result in an exception, and in this sense it is not good. But it is sometimes convenient. The point is that head serves some purposes which cannot be served by listToMaybe.
The last quotation in the question (about polymorphism) simply means that it is impossible to define a function of type [a] -> a which returns a value on the empty list (as Russell O'Connor explained in his answer).
It's only natural to expect the following to hold: xs === head xs : tail xs - a list is identical to its first element, followed by the rest. Seems logical, right?
Now, let's count the number of conses (applications of :), disregarding the actual elements, when applying the purported 'law' to []: [] should be identical to foo : bar, but the former has 0 conses, while the latter has (at least) one. Uh oh, something's not right here!
Haskell's type system, for all its strengths, is not up to expressing the fact that you should only call head on a non-empty list (and that the 'law' is only valid for non-empty lists). Using head shifts the burden of proof to the programmer, who should make sure it's not used on empty lists. I believe dependently typed languages like Agda can help here.
Finally, a slightly more operational-philosophical description: how should head ([] :: [a]) :: a be implemented? Conjuring a value of type a out of thin air is impossible (think of uninhabited types such as data Falsum), and would amount to proving anything (via the Curry-Howard isomorphism).
There are a number of different ways to think about this. So I am going to argue both for and against head':
Against head':
There is no need to have head': Since lists are a concrete data type, everything that you can do with head' you can do by pattern matching.
Furthermore, with head' you're just trading off one functor for another. At some point you want to get down to brass tacks and get some work done on the underlying list element.
In defense of head':
But pattern matching obscures what's going on. In Haskell we are interested in calculating functions, which is better accomplished by writing them in point-free style using compositions and combinators.
Furthermore, thinking about the [] and Maybe functors, head' allows you to move back and forth between them (in particular via the Applicative instance of [], where pure x = [x]).
If in your use case an empty list makes no sense at all, you can always opt to use NonEmpty instead, where neHead is safe to use. If you see it from that angle, it's not the head function that is unsafe, it's the whole list data-structure (again, for that use case).
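For illustration, a small sketch with the NonEmpty type that nowadays ships with base (note: base spells the safe head simply Data.List.NonEmpty.head rather than neHead; the function name below is my own):
import Data.List.NonEmpty (NonEmpty(..))
import qualified Data.List.NonEmpty as NE

firstItem :: NonEmpty a -> a
firstItem = NE.head   -- total: a NonEmpty value always contains at least one element
-- firstItem ("hello" :| ["world"]) == "hello"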
I think this is a matter of simplicity and beauty. Which is, of course, in the eye of the beholder.
If coming from a Lisp background, you may be aware that lists are built of cons cells, each cell having a data element and a pointer to next cell. The empty list is not a list per se, but a special symbol. And Haskell goes with this reasoning.
In my view, it is cleaner, simpler to reason about, and more traditional if the empty list and a non-empty list are two different things.
...I may add - if you are worried about head being unsafe - don't use it, use pattern matching instead:
sum' [] = 0
sum' (x:xs) = x + sum' xs
