Making a Haskell function work with an infinite list

I would like to know how I can make a function work with an infinite list.
For example, I have a function to reverse a list of lists.
innerReverse [[1,2,3]] returns [[3,2,1]]. However, when I try take 10 $ innerReverse [[1..]], it basically runs into an infinite loop.
When I do innerReverse [(take 10 [1..])] it gives the result: [[10,9,8,7,6,5,4,3,2,1]]

Haskell is a lazy language, which means that evaluations are only performed right before the result is actually used. That's what makes it possible for Haskell to have infinite lists; only the portions of the list that you've accessed so far are actually stored in memory.
The concept of an infinite list makes what you're trying to do impossible. In the list [1..] the first element is 1. What's the last element? The answer is that that's a trick question; there is no concept of the "end" of an infinite list. Similarly, what is the first element of the reverse of [1..]? Again, it's a trick question. The last element is 1, but the list would have no beginning.
The reverse of [1..] is not [10,9,8,7,6,5,4,3,2,1]. The reverse of the latter is [1,2,3,4,5,6,7,8,9,10], not [1..].
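For concreteness, here is a minimal sketch, assuming innerReverse is simply map reverse (the question does not show its actual definition):

innerReverse :: [[a]] -> [[a]]
innerReverse = map reverse

-- innerReverse [take 10 [1..]]  ==>  [[10,9,8,7,6,5,4,3,2,1]]
-- innerReverse [[1..]]          ==>  [reverse [1..]], and reverse [1..] can never produce
--                                    even its first element, because that element would
--                                    have to be the "last" element of [1..]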

Guards in list comprehensions don't terminate infinite lists? [duplicate]

I thought this:
[i | i <- [1..], i < 5]
would produce
[1, 2, 3, 4]
just like
take 4 [i | i <- [1..]]
A finite-length list.
But it doesn't; it seems to be infinite, because any attempt to treat it as a finite list just causes GHCi to hang.
I am not sure how to understand this exactly. Is it some kind of infinite generator which simply produces nothing after the fourth item but never stops?
Basically, does the code keep generating new items because it doesn't know they can never satisfy the criterion?
You literally told the program to check every element of the infinite list and include only the ones that are less than 5. As you say, the compiler doesn’t realize that no remaining element of the list will ever satisfy the condition. Nor could it, even in theory, create such a proof at runtime if passed an arbitrary list. It just does what you said, and keeps checking every element.
This is not necessarily a problem: thanks to lazy evaluation, nothing goes wrong as long as the program never tries to evaluate the xs in the result 1:2:3:4:xs. Taking the head works just fine. If you ask for the length of the list, or anything else that forces the whole spine, you get an infinite loop.
One way to do what you (probably?) want is takeWhile, which stops when the condition is no longer true.
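For example (each of these can be tried in GHCi):

takeWhile (< 5) [1..]           -- [1,2,3,4]: stops at the first element that fails the test
take 4 [i | i <- [1..], i < 5]  -- [1,2,3,4]: fine, the first four matches are found quickly
length [i | i <- [1..], i < 5]  -- never finishes: it keeps searching 5, 6, 7, ... for more matches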

List comprehension but different order

I would like to know if there is a difference between these two definitions.
[(x,y) | x <- [1..10000], x == 2000, y <- [1..100], odd y]
[(x,y) | x <- [1..10000], y <- [1..100], x == 2000, odd y]
Both will generate the same list of tuples.
But suppose our compiler doesn't do any optimization.
How can I find out which one is faster?
In both cases x <- [1..10000] gives us the list [1,2..10000], and the guard x == 2000 keeps only x = 2000.
In what order will the y values be evaluated?
Things are executed left-to-right. Think of it as nested loops. So in the first one the test of x is executed 10000 times, and in the second it's executed 1000000 times.
Moving the condition outwards to speed up the execution is called "filter promotion", a term coined by David Turner (ca. 1980).
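Roughly, the two comprehensions behave like the following nested traversals (a sketch of the idea, not the exact desugaring GHC performs):

first, second :: [(Int, Int)]
first  = concatMap
           (\x -> if x == 2000
                     then concatMap (\y -> if odd y then [(x, y)] else []) [1..100]
                     else [])
           [1..10000]
second = concatMap
           (\x -> concatMap (\y -> if x == 2000 && odd y then [(x, y)] else []) [1..100])
           [1..10000]
-- In 'first' the test x == 2000 runs 10000 times (once per x);
-- in 'second' it runs 10000 * 100 = 1000000 times (once per (x, y) pair).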

Lisp - remove from position

I need a function for deleting the element at the nth position in the starting list and in all sublists. I don't need working code, I just need some advice.
Asking for advice and not the final solution is laudable. I'll try to explain it to you.
Singly linked lists lend themselves to being recursively processed from front to end. You have cheap operations to get the first element of a list, its rest, and to build a list by putting a new element at the front. One simple recursion scheme would be: take the first element from the list, do something with it, then put it at the front of the result of repeating the whole process with the rest of the list. This repetition of the process for successive elements and rests is the recursive part. If you have an empty input list, there is nothing to do, and the empty list is returned, thus ending the processing. This is your base case, anchor, or whatever you want to call it. Remember: recursive case, base case, check – you need both.
(Because of Lisp's evaluation rules, to actually put your processed element before the processed rest, it must be remembered until the rest is actually processed, since the operation for building lists evaluates both of its arguments before it returns the new list. These intermediate results will be kept on the stack, which might be a problem for big lists. There are methods that avoid this, but we will keep it simple here.)
Now, you're actually asking not only for simple lists, but for trees. Conveniently, that tree is represented as a nested list, so generally the above still applies, except a little complication: You will have to check whether the element you want to process is itself a branch, i.e. a list. If it is, the whole process must be done on that branch, too.
That's basically it. Now, to remove an element from a tree, your operation is just to check if your element matches and, if yes, dropping it. In more detail:
To remove an element from an empty list, just return an empty list.
If the first element is itself a list, return a list built from the first element with all matches removed as its first, and the rest with all matches removed as its rest.
If its first element matches, return the rest of the list with all matching elements removed. (Notice that something gets "dropped" here.)
Otherwise, return a list built from the first element as its first and the rest of the list with all matching elements removed as its rest.
Take a look at this and try to find your recursive case, the base case, and what deals with walking the nested tree structure. If you understand all of this, the implementation will be easy. If you never really learned all this, and your head is not spinning by now, consider yourself a natural-born Lisp programmer. Otherwise, recursion is just a fundamental concept that may be hard to grasp the first time, but once it has clicked, it's like riding a bicycle.
Edit: Somehow I missed the "position" part, and misread – even despite the question title. That's what fatigue can do to people.
Anyway, if you want to delete an element in the tree by position, you can let your function take an optional counter argument (or you can use a wrapping function providing it). If you look at the points above, recursing into a new branch would be the place where you reset your counter. The basic recursion scheme stays the same, but instead of comparing the element itself, you check your counter – if it matches the position you want to remove, drop the element. In every recursive case, you pass your function the incremented counter, except when entering a new branch, where you reset it, i.e. pass 0 for your counter argument. (You could also just return the rest of the list once the element is dropped, making the function more performant, especially for long lists where an element near the beginning is to be deleted, but let's keep it simple here.)
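If it helps to see the same counter-resetting recursion in a typed setting, here is a rough sketch in Haskell rather than Lisp (hypothetical names; the nested list is modelled as a small tree type):

data Item a = Leaf a | Branch [Item a] deriving Show

-- Delete the element at position n (0-based) from a list and from every sublist,
-- resetting the position counter whenever a new branch is entered.
deleteNth :: Int -> [Item a] -> [Item a]
deleteNth n = go 0
  where
    go _ []       = []
    go i (x : xs)
      | i == n    = map descend xs                 -- drop this element, keep cleaning the rest
      | otherwise = descend x : go (i + 1) xs
    descend (Branch ys) = Branch (deleteNth n ys)  -- entering a branch: the counter starts over
    descend item        = item                     -- a plain leaf is left alone

-- deleteNth 0 [Leaf 'a', Branch [Leaf 'x', Leaf 'y'], Leaf 'c']
--   ==> [Branch [Leaf 'y'], Leaf 'c']   (position 0 removed at both levels)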
My approach would be the following:
1) delete the nth element in the top-level list
2) recursively delete the nth element in each sublist of the result of step 1
I'd implement it like this:
(defun del (n l)
  (labels ((del-top-level (n l)
             ...))   ; return l but with the nth element gone
    (mapcar #'(lambda (x) (if (not (listp x)) x (del n x)))
            (del-top-level n l))))
You'd need to implement the del-top-level function.
OK, I think I see what you need.
You will need two functions.
The entry function will just call a helper function, like (DeleteHelper position position myList).
DeleteHelper will recursively call itself and optionally include the current element of the list if the current position is not 0. Such as (cons (car myList) (DeleteHelper (- position 1) originalPosition (cdr myList))).
If DeleteHelper encounters a list, recursively traverse the list with a position reset to the original incoming position. Such as (cons (DeleteHelper originalPosition originalPosition (car myList)) (DeleteHelper (- position 1) originalPosition (cdr myList)))
Also keep in mind the base case (I guess return an empty list once you traverse a whole list).
This should get you in the right direction. It has also been a few years since I wrote any Lisp so my syntax might be a bit off.

Simulating a for loop in Haskell

I am currently writing my bachelor CS thesis for my studies in Austria.
The programming language that I use is Haskell.
Now, I am trying to find a way to fix my following issue:
I have a list of tuples, let's say [(1,2),(2,3)]. From that list of tuples, I would now like to pick out each tuple and then perform an operation on it:
Map.insert (1,2) XXX ftable
where
(1,2) is the first element of that list and XXX is some value and ftable is my map.
How can I "iterate" through that list and proceed with that operation inserting the "n-th" elment of my list to my map?
I guess I am just too much familiar with programming imperative and I do not find a way to fix that in Haskell.
It's not entirely clear what you mean here. Is it correct to assume that the tuples are meant to represent the keys in your map and XXX is some value attached to a specific key? Are all the values you want to match to a given key also provided in a list? In that case you can easily use the fromList function in Data.Map:
keys = [(1,2),(2,3),(7,9)]
values = ["A","B","C"]
map = Data.Map.fromList $ zip keys values
Think about what your loop is doing.
if it transforms each list element, then use map (or concatMap)
if it filters out some of the list elements, then use filter
if it reduces the list to a summary value, use a fold (e.g. foldl, foldr; more specific folds are sum, and, etc)
or if it's doing some combination of the above, use a combination of the above functions
In your case, I'm not entirely certain what you want, but I think you want to end up with a single Map, so you want to fold your list. Perhaps something like
foldl (\oldMap key -> Map.insert key xxx oldMap) ftable yourListOfTuples
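Fleshing that out into something self-contained (insertAll is just an illustrative name; foldl' is the strict fold from Data.List, usually a better default than foldl here):

import qualified Data.Map as Map
import Data.List (foldl')

insertAll :: Ord k => v -> Map.Map k v -> [k] -> Map.Map k v
insertAll xxx = foldl' (\oldMap key -> Map.insert key xxx oldMap)

-- insertAll "A" Map.empty [(1,2),(2,3)]
--   ==> fromList [((1,2),"A"),((2,3),"A")]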
You have several options to iterate on lists, each one to be chosen depending on the effect you're searching for:
folds: they are useful for many computations on lists. They may also be what you're looking for, but I can't tell from your question what your specific issue is.
map: it applies the same (given) function to every element of the input list, returning the list of the results. There are also variants that work in monadic computations.
or, if it best suits your needs, you can always write your own tail-recursive function: Haskell will treat it just like a for loop, without consuming stack (tail recursion is worth reading up on for a good explanation of the technique); a short sketch follows below.
In the end, whatever function or technique you will use, it will be based on recursion since functional languages like Haskell do not admit for-loops in the form you are used to.
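As a sketch of the last option, here is the same insertion written as a hand-rolled tail-recursive loop (insertLoop is an illustrative name; a fold is usually preferable, but this is roughly the shape a for loop takes in Haskell):

import qualified Data.Map as Map

insertLoop :: Ord k => v -> Map.Map k v -> [k] -> Map.Map k v
insertLoop xxx = go
  where
    go acc []           = acc                            -- no keys left: return the finished map
    go acc (key : keys) = go (Map.insert key xxx acc) keys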
Building off of hakoja's suggestion...
It is probable that XXX at this point is either 1) constant for every key, or 2) some function based on the key, or 3) somehow defined in parallel to the key.
1) constant xxx for every key
keys = [(1,2),(2,3),(7,9)]
xxx = "A"
ftable = Data.Map.fromList $ zip keys (repeat xxx)
2) function produces value based on key
keys = [(1,2),(2,3),(7,9)]
f = ...
ftable = Data.Map.fromList $ zip keys (map f keys)
3) xxx defined in parallel: use hakoja's suggestion

Explanation of “tying the knot”

In reading Haskell-related stuff I sometimes come across the expression “tying the knot”. I think I understand what it does, but not how.
So, are there any good, basic, and simple to understand explanations of this concept?
Tying the knot is a solution to the problem of circular data structures. In imperative languages you construct a circular structure by first creating a non-circular structure, and then going back and fixing up the pointers to add the circularity.
Say you wanted a two-element circular list with the elements "0" and "1". It would seem impossible to construct because if you create the "1" node and then create the "0" node to point at it, you cannot then go back and fix up the "1" node to point back at the "0" node. So you have a chicken-and-egg situation where both nodes need to exist before either can be created.
Here is how you do it in Haskell. Consider the following value:
alternates = x where
  x = 0 : y
  y = 1 : x
In a non-lazy language this will be an infinite loop because of the unterminated recursion. But in Haskell lazy evaluation does the Right Thing: it generates a two-element circular list.
To see how it works in practice, think about what happens at run-time. The usual "thunk" implementation of lazy evaluation represents an unevaluated expression as a data structure containing a function pointer plus the arguments to be passed to the function. When this is evaluated the thunk is replaced by the actual value so that future references don't have to call the function again.
When you take the first element of the list, 'x' is evaluated down to a value (0, &y), where the "&y" bit is a pointer to the value of 'y'. Since 'y' has not been evaluated yet, this is currently a thunk. When you take the second element of the list, the computer follows the link from 'x' to this thunk and evaluates it. It evaluates to (1, &x), in other words a cell with a pointer back to the original 'x' value. So you now have a circular list sitting in memory. The programmer doesn't need to fix up the back-pointers because the lazy evaluation mechanism does it for you.
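A quick, self-contained way to see the circular structure behave as described (the definition is the one from above, with a type signature added):

alternates :: [Int]
alternates = x
  where
    x = 0 : y
    y = 1 : x

main :: IO ()
main = do
  print (take 6 alternates)     -- [0,1,0,1,0,1]
  print (alternates !! 100001)  -- 1: indexing just keeps walking around the same two cells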
It's not quite what you asked for, and it's not directly related to Haskell, but Bruce McAdam's paper That About Wraps It Up goes into this topic in substantial breadth and depth. Bruce's basic idea is to use an explicit knot-tying operator called WRAP instead of the implicit knot-tying that is done automatically in Haskell, OCaml, and some other languages. The paper has lots of entertaining examples, and if you are interested in knot-tying I think you will come away with a much better feel for the process.
