I am trying to solve one of the problems in H99 (the Ninety-Nine Haskell Problems):
Split a list into two parts; the length of the first part is given.
Do not use any predefined predicates.
Example:
> (split '(a b c d e f g h i k) 3)
( (A B C) (D E F G H I K))
And I can quickly come up with a solution:
split' :: [a] -> Int -> Int -> [a] -> [[a]]
split' [] _ _ w = [w, []]
split' (x:xs) y z w = if y == z then [w, x:xs] else split' xs y (z+1) (w ++ [x])

split :: [a] -> Int -> [[a]]
split x y = split' x y 0 []
My question is: what I am doing seems to be just rewriting the loop version in recursive form. Is this the right way to do things in Haskell? Isn't it just the same as imperative programming?
EDIT: Also, how do you generally avoid the extra function here?
It's convenient that you can often convert an imperative solution to Haskell, but you're right, you do usually want to find a more natural recursive statement. For this one in particular, reasoning in terms of base case and inductive case can be very helpful. So what's your base case? Why, when the split location is 0:
split x 0 = ([], x)
The inductive case can be built on that by prepending the first element of the list onto the result of splitting with n-1:
split (x:xs) n = (x:left, right)
  where (left, right) = split xs (n-1)
This may not perform wonderfully (it's probably not as bad as you'd think) but it illustrates my thought process when I first encounter a problem and want to approach it functionally.
Edit: Another solution relying more heavily on the Prelude might be:
split l n = (take n l, drop n l)
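For example, either definition gives the expected answer (a quick GHCi check, using a string in place of the Lisp-style atoms from the problem statement):
*Main> split "abcdefghik" 3
("abc","defghik")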
It's not really the same as imperative programming: each function call avoids side effects, and everything is just a simple expression. But I have a suggestion for your code:
split :: Int -> [a] -> ([a], [a])
split p xs = go p ([], xs)
  where go 0 (xs, ys)   = (reverse xs, ys)
        go n (xs, y:ys) = go (n-1) (y : xs, ys)
Note how we've declared that we're returning exactly two things, ([a], [a]), instead of a list of lists (which is a bit misleading), and that we've confined our tail-recursive helper to a local scope.
I'm also using pattern matching, which is the more idiomatic way to write recursive functions in Haskell: when go is called with a zero, the first case is run. It's generally more pleasant to write recursive functions that count down rather than up, since you can use pattern matching rather than if expressions.
Finally, this is more efficient, since ++ is linear in the length of its first argument, which means the complexity of your function is quadratic rather than linear. This method is also tail recursive, unlike Daniel's solution, which matters when you are processing large lists.
TL;DR: Both versions are in functional style, avoiding mutation and using recursion instead of loops. But the version I've presented is a little more Haskell-ish and slightly faster.
A word on tail recursion
This solution uses tail recursion, which isn't always essential in Haskell; here it is helpful when you consume the resulting lists in full, but at other times it is actually a bad thing. For example, map isn't tail recursive, but if it were, you couldn't use it on infinite lists!
In this case, we can use tail recursion, since an integer is always finite. But, if we only use the first element of the list, Daniel's solution is much faster, since it produces the list lazily. On the other hand, if we use the whole list, my solution is much faster.
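To make that laziness difference concrete, here is a small sketch; splitLazy and splitAcc are made-up names standing for the two definitions above, so they can live side by side:
splitLazy :: [a] -> Int -> ([a], [a])
splitLazy xs 0 = ([], xs)
splitLazy (x:xs) n = (x:left, right)
  where (left, right) = splitLazy xs (n-1)

splitAcc :: Int -> [a] -> ([a], [a])
splitAcc p xs = go p ([], xs)
  where go 0 (acc, ys)   = (reverse acc, ys)
        go n (acc, y:ys) = go (n-1) (y:acc, ys)
Then, in GHCi:
*Main> head . fst $ splitLazy ('h':undefined) 5
'h'
*Main> head . fst $ splitAcc 5 ('h':undefined)
*** Exception: Prelude.undefined
The lazy version hands back the first element of the left part without touching the rest of the input, while the accumulator version has to count all the way down to 0 first.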
split' :: [a] -> Int -> ([a], [a])
split' [] _ = ([], [])
split' xs 0 = ([], xs)
split' (x:xs) n = (x : fst splitResult, snd splitResult)
  where splitResult = split' xs (n-1)
It seems you have already shown an example of a better solution.
I would recommend you read SICP. Then you will come to the conclusion that the extra helper function is normal. There is also a widely used approach of hiding helper functions in a local scope (with where or let). The book may seem boring to you, but in the early chapters you will get used to the functional approach to solving problems.
There are tasks for which a recursive approach is more natural. But if you use tail recursion (which is so often praised without cause), you will notice that it is really just ordinary iteration, often with an "extra function" that hides the iteration variable (well, "variable" is not quite the right word; "argument" is more accurate).
Related
Doing the third of the 99 Haskell problems (I am currently trying to learn the language), I tried to incorporate pattern matching as well as recursion into my function, which now looks like this:
myElementAt :: [a] -> Int -> a
myElementAt (x ++ xs) i =
if length (x ++ xs) == i && length xs == 1 then xs!!0
else myElementAt x i
Which gives me Parse error in pattern: x ++ xs. The questions:
Why does this give me a parse error? Is it because Haskell has no idea where to cut my list (which is my best guess)?
How could I reframe my function so that it works? The algorithmic idea is to check whether the list has the same length as the specified index; if yes, return the last element; if not, cut away one element at the end of the list and then recurse.
Note: I know that this is a really bad algorithm, but I've set myself the challenge of writing that function using recursion and pattern matching. I also tried not to use the !! operator, but that is fine for me since the only thing it really does (or would do, if the code compiled) is convert a one-element list into that element.
Haskell has two different kinds of value-level entities: variables (this also includes functions, infix operators like ++ etc.) and constructors. Both can be used in expressions, but only constructors can also be used in patterns.
In either case, it's easy to tell whether you're dealing with a variable or constructor: a constructor always starts with an uppercase letter (e.g. Nothing, True or StateT) or, if it's an infix, with a colon (:, :+). Everything else is a variable. Fundamentally, the difference is that a constructor is always a unique, immediately matcheable value from a predefined collection (namely, the alternatives of a data definition), whereas a variable can just have any value, and often it's in principle not possible to uniquely distinguish different variables, in particular if they have a function type.
Yours is actually a good example of this: for the pattern match x ++ xs to make sense, there would have to be one unique way in which the input list could be written in the form x ++ xs. But for, say, [0,1,2,3], there are several different ways in which this can be done:
[]        ++ [0,1,2,3]
[0]       ++ [1,2,3]
[0,1]     ++ [2,3]
[0,1,2]   ++ [3]
[0,1,2,3] ++ []
Which one should the runtime choose?
Presumably, you're trying to match the head and tail part of a list. Let's step through it:
myElementAt (x:_) 0 = x
This means that if the head is x, the tail is something, and the index is 0, return the head. Note that your x ++ xs is a concatenation of two lists, not the head and tail parts.
Then you can have
myElementAt (_:tl) i = myElementAt tl (i - 1)
which means that if the previous pattern was not matched, ignore the head, and take the i - 1 element of the tail.
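Putting those two equations together, plus a catch-all for an out-of-range index (my addition, not part of the answer above), gives something like:
myElementAt :: [a] -> Int -> a
myElementAt (x:_)  0 = x
myElementAt (_:tl) i = myElementAt tl (i - 1)
myElementAt []     _ = error "myElementAt: index out of range"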
In patterns, you can only use constructors like : and []. The append operator (++) is a non-constructor function.
So, try something like:
myElementAt :: [a] -> Int -> a
myElementAt (x:xs) i = ...
There are more issues in your code, but at least this fixes your first problem.
In standard Haskell, pattern matches like this:
f :: Int -> Int
f (g n 1) = n
g :: Int -> Int -> Int
g a b = a+b
are illegal, because function calls aren't allowed in patterns. Your case is just an instance of this, as the operator ++ is just a function.
To pattern match on lists you can do it like this:
myElementAt :: [a] -> Int -> a
myElementAt (x:xs) i = ... -- result
But in this case x is of type a, not [a]: it is the head of the list and xs is its tail. You'll need to change your implementation to accommodate this fact, and note that this function will fail on the empty list []. However, that's the idiomatic Haskell way to pattern match against lists.
I should mention that when I said "illegal" I meant in standard Haskell; there is a GHC extension that gives you something similar, called ViewPatterns, but I don't think you need it, especially since you're still learning.
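For the curious, a ViewPatterns version might look roughly like this (a sketch with a made-up lastElem function, not the OP's code; the extension lets a pattern apply a function to the argument and match on the result):
{-# LANGUAGE ViewPatterns #-}

-- (reverse -> (x:_)) reverses the argument and matches the reversed list
-- against (x:_), so x is bound to the last element of the original list.
lastElem :: [a] -> a
lastElem (reverse -> (x:_)) = x
lastElem _                  = error "lastElem: empty list"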
I was trying to implement a Haskell function that takes as input an array of integers A
and produces another array B = [A[0], A[0]+A[1], A[0]+A[1]+A[2], ...]. I know that scanl from Data.List can be used for this with the function (+). I wrote the second implementation
(which performs faster) after seeing the source code of scanl. I want to know why the first implementation is slower than the second one, despite being tail recursive.
-- This function runs slowly.
ps s x [] = x
ps s x y = ps s' x' y'
  where
    s' = s + head y
    x' = x ++ [s']
    y' = tail y
-- This function runs fast.
ps' s [] = []
ps' s y = [s'] ++ ps' s' y'
  where
    s' = s + head y
    y' = tail y
Some details about the above code:
Implementation 1: It should be called as
ps 0 [] a
where 'a' is your array.
Implementation 2: It should be called as
ps' 0 a
where 'a' is your array.
You are changing the way that ++ associates. In your first function you are computing ((([a0] ++ [a1]) ++ [a2]) ++ ...) whereas in the second function you are computing [a0] ++ ([a1] ++ ([a2] ++ ..)). Appending a few elements to the start of the list is O(1), whereas appending a few elements to the end of a list is O(n) in the length of the list. This leads to a linear versus quadratic algorithm overall.
You can fix the first example by building the list up in reverse order, and then reversing again at the end, or by using something like dlist. However the second will still be better for most purposes. While tail calls do exist and can be important in Haskell, if you are familiar with a strict functional language like Scheme or ML your intuition about how and when to use them is completely wrong.
The second example is better, in large part, because it's incremental; it immediately starts returning data that the consumer might be interested in. If you just fixed the first example using the double-reverse or dlist tricks, your function will traverse the entire list before it returns anything at all.
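For reference, the reverse-accumulator fix mentioned above could look something like this (a sketch; psRev is a made-up name, and as noted it still returns nothing until the whole input has been consumed):
-- Accumulate the partial sums in reverse (cheap prepends), then reverse once at the end.
psRev :: Num a => a -> [a] -> [a] -> [a]
psRev _ acc []     = reverse acc
psRev s acc (y:ys) = let s' = s + y in psRev s' (s' : acc) ys

-- e.g. psRev 0 [] [1,2,3]  ==  [1,3,6]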
I would like to mention that your function can be more easily expressed as
drop 1 . scanl (+) 0
Usually, it is a good idea to use predefined combinators like scanl in favour of writing your own recursion schemes; it improves readability and makes it less likely that you needlessly squander performance.
However, in this case, both my scanl version and your original ps and ps' can sometimes lead to stack overflows due to lazy evaluation: Haskell does not necessarily immediately evaluate the additions (depends on strictness analysis).
One case where you can see this is if you do last (ps' 0 [1..100000000]). That leads to a stack overflow. You can solve that problem by forcing Haskell to evaluate the additions immediately, for instance by defining your own, strict scanl:
myscanl :: (b -> a -> b) -> b -> [a] -> [b]
myscanl f q [] = []
myscanl f q (x:xs) = q `seq` let q' = f q x in q' : myscanl f q' xs
ps' = myscanl (+) 0
Then, calling last (ps' [1..100000000]) works.
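As an aside, newer versions of base ship a strict left scan, Data.List.scanl', which should give the same effect as the hand-written myscanl above (assuming your base is recent enough to export it; ps'' is just a fresh name for this sketch):
import Data.List (scanl')

ps'' :: [Integer] -> [Integer]
ps'' = drop 1 . scanl' (+) 0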
A contrived example, but the below code demonstrates a class of problems I keep running into while learning Haskell.
import Control.Monad.Error
import Data.Char (isDigit)
countDigitsForList [] = return []
countDigitsForList (x:xs) = do
  q <- countDigits x
  qs <- countDigitsForList xs
  return (q:qs)

countDigits x = do
  if all isDigit x
    then return $ length x
    else throwError $ "Bad number: " ++ x
t1 = countDigitsForList ["1", "23", "456", "7890"] :: Either String [Int]
t2 = countDigitsForList ["1", "23", "4S6", "7890"] :: Either String [Int]
t1 gives me the right answer and t2 correctly identifies the error.
Seems to me that, for a sufficiently long list, this code is going to run out of stack space because it runs inside of a monad and at each step it tries to process the rest of the list before returning the result.
An accumulator and tail recursion seems like it may solve the problem but I repeatedly read that neither are necessary in Haskell because of lazy evaluation.
How do I structure this kind of code into one which won't have a stack space problem and/or be lazy?
How do I structure this kind of code into one which won't have a stack space problem and/or be lazy?
You can't make this function process the list lazily, monads or no. Here's a direct translation of countDigitsForList to use pattern matching instead of do notation:
countDigitsForList [] = return []
countDigitsForList (x:xs) = case countDigits x of
  Left e  -> Left e
  Right q -> case countDigitsForList xs of
    Left e   -> Left e
    Right qs -> Right (q:qs)
It should be easier to see here that, because a Left at any point in the list makes the whole thing return that value, in order to determine the outermost constructor of the result, the entire list must be traversed and processed; likewise for processing each element. Because the final result potentially depends on the last character in the last string, this function as written is inherently strict, much like summing a list of numbers.
Given that, the thing to do is ensure that the function is strict enough to avoid building up a huge unevaluated expression. A good place to start for information on that is discussions on the difference between foldr, foldl and foldl'.
An accumulator and tail recursion seems like it may solve the problem but I repeatedly read that neither are necessary in Haskell because of lazy evaluation.
Both are unnecessary when you can instead generate, process, and consume a list lazily; the simplest example here being map. For a function where that's not possible, strictly-evaluated tail recursion is precisely what you want.
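As an aside, the hand-rolled recursion in the question is essentially what the standard mapM (equivalently, traverse) does in the Either monad, so the same function, with the same strictness behaviour, can be written as (assuming the countDigits from the question is in scope):
countDigitsForList' :: [String] -> Either String [Int]
countDigitsForList' = mapM countDigits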
camccann is right that the function is inherently strict. But that doesn't mean that it can't run in constant stack!
countDigitsForList xss = go xss []
  where go (x:xs) acc = case countDigits x of
            Left e  -> Left e
            Right q -> go xs (q:acc)
        go [] acc = Right (reverse acc)
This accumulating-parameter version is a partial CPS transform of camccann's code, and I bet that you could get the same result by working in a CPS-transformed Either monad as well.
Edited to take into account jwodder's correction regarding reverse, oops. As John L notes, an implicit or explicit difference list would work as well...
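The explicit difference-list variant John L alludes to might look roughly like this (a sketch; countDigitsForListDL is a made-up name, and the accumulator is a function that prepends the results seen so far):
countDigitsForListDL :: [String] -> Either String [Int]
countDigitsForListDL xss = go xss id
  where
    go (x:xs) acc = case countDigits x of
        Left e  -> Left e
        Right q -> go xs (acc . (q:))
    go [] acc = Right (acc [])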
Many times I see functions which operate on the head of a list, e.g.:
trimHead ('\n':xs) = xs
trimHead xs = xs
then I see the definition:
trimTail = reverse . trimHead . reverse
then I see:
trimBoth = trimHead . trimTail
They are clean, but are trimTail and trimBoth efficient? Is there a better way?
Consider this alternative implementation
trimTail2 [] = []
trimTail2 ['\n'] = []
trimTail2 (x:xs) = x : trimTail2 xs
trimBoth2 = trimHead . trimTail2
It's easy to confirm that trimTail and trimBoth require that the entire list be evaluated, while trimTail2 and trimBoth2 only evaluate as much of the list as is necessary.
*Main> head $ trimTail ('h':undefined)
*** Exception: Prelude.undefined
*Main> head $ trimBoth ('h':undefined)
*** Exception: Prelude.undefined
*Main> head $ trimTail2 ('h':undefined)
'h'
*Main> head $ trimBoth2 ('h':undefined)
'h'
This implies that your version is going to be less efficient if the whole result is not needed.
Assuming the whole list is to be evaluated (if you don't need the whole list, why are you trimming the end?), it's about half as efficient as you can get out of immutable lists, but it has the same asymptotic complexity O(n).
The new list requires at least:
You have to find the end: n pointer traversals.
You have to modify the end, and thus what points to the end, etc.: n cons of existing data with new pointers.
reverse . trimHead . reverse performs roughly twice this:
The first reverse performs n pointer traversals and n cons.
trimHead possibly performs 1 pointer traversal.
The second reverse performs n pointer traversals and n cons.
Is this worth worrying about? In some circumstances, maybe. Is the code too slow, and is this called a lot? In others, maybe not. Benchmark! The implementation with reverse is nice and easy to understand, and that's important.
There is a fairly natural recursive step-through-the-list solution, which will only evaluate as much of the output as is consumed, so in the case that you don't know whether you need the whole string, you can possibly save some evaluation.
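If you do want to benchmark, a minimal criterion setup might look like this (a sketch, assuming the criterion package; the definitions are copied from above and the input is arbitrary):
import Criterion.Main

trimHead :: String -> String
trimHead ('\n':xs) = xs
trimHead xs        = xs

trimTail, trimTail2 :: String -> String
trimTail = reverse . trimHead . reverse
trimTail2 []     = []
trimTail2 ['\n'] = []
trimTail2 (x:xs) = x : trimTail2 xs

main :: IO ()
main = defaultMain
  [ bench "reverse-based trimTail" $ nf trimTail  input
  , bench "direct trimTail2"       $ nf trimTail2 input
  ]
  where input = replicate 100000 'a' ++ "\n"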
It isn't efficient in the sense that streaming is impossible, because the whole list needs to be evaluated to produce even a single element. But a better solution is difficult, as you need to evaluate the rest of the list to know whether a line break is to be trimmed or not. A slightly more efficient way is to look ahead to see whether the line break should be trimmed and react appropriately:
trimTail, trimHead, trimBoth :: String -> String
trimTail ('\n':xs) | all (=='\n') xs = ""
trimTail (x:xs) = x : trimTail xs
trimTail [] = ""
trimHead = dropWhile (=='\n')
trimBoth = trimTail . trimHead
The solution above evaluates only as much of the string as is needed to know whether a line break is to be trimmed. An even better method would be to incorporate the knowledge that the next n chars are not to be trimmed. Implementing this is left as an exercise for the reader.
An even better (and shorter) way to write trimTail is this way (by rotsor):
trimTail = foldr step [] where
  step '\n' [] = []
  step x xs    = x:xs
Generally, try to avoid reverse. Usually there is a better way to solve the problem.
Are trimTail and trimBoth efficient?
They both take O(n) time (time directly proportional to the size of the list) since the entire list must be traversed twice in order to perform the two reverses.
Is there a better way?
Well, do you have to use lists? With Data.Sequence you can modify either end of the sequence in constant time. If you're stuck with lists, then check out the other solutions suggested here. If you can use Sequences instead, then just modify FUZxxl's answer to use dropWhileR.
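A rough idea of what that might look like with Data.Sequence (a sketch; Seq.dropWhileR drops matching elements from the right end, and this version, like FUZxxl's, removes every trailing newline):
import qualified Data.Sequence as Seq
import Data.Foldable (toList)

trimTailSeq :: Seq.Seq Char -> Seq.Seq Char
trimTailSeq = Seq.dropWhileR (== '\n')

-- e.g. toList (trimTailSeq (Seq.fromList "hello\n\n"))  ==  "hello"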
So, I'm new here, and I would like to ask 2 questions about some code:
Duplicate each element of a list n times, where n is the element's value. For example, duplicate [1,2,3] should give [1,2,2,3,3,3].
duplicate1 xs = x*x ++ duplicate1 xs
What is wrong here?
Take the positive numbers from a list and find the minimum positive difference between any two of them. For example, [-2,-1,0,1,3] should give 1 because (1-0) is the lowest difference above 0.
For your first part, there are a few issues: you forgot the pattern in the first argument, you are trying to square the first element rather than replicate it, and there is no second case to end your recursion (it will crash). To help, here is a type signature:
replicate :: Int -> a -> [a]
For your second part, if it has been covered in your course, you could try a list comprehension to get all differences of the numbers, and then you can apply the minimum function. If you don't know list comprehensions, you can do something similar with concatMap.
Don't forget that you can check functions on http://www.haskell.org/hoogle/ (Hoogle) or similar search engines.
Tell me if you need a more thorough answer.
To your first question:
Use pattern matching. You can write something like duplicate (x:xs). This will deconstruct the first cell of the parameter list. If the list is empty, the next pattern is tried:
duplicate (x:xs) = ... -- list is not empty
duplicate [] = ... -- list is empty
The function replicate n x creates a list that contains the item x n times. For instance, replicate 3 'a' yields ['a','a','a'].
Use recursion. To understand how recursion works, it is important to understand the concept of recursion first ;)
1)
dupe :: [Int] -> [Int]
dupe l = concat [replicate i i | i<-l]
There are a few problems with yours, one being that you are squaring each term, not creating a new list. In addition, your pattern matching is off and you would create an infinite recursion. Note how you recurse on the exact same list as was input. I think you mean something along the lines of duplicate1 (x:xs) = (replicate x x) ++ duplicate1 xs, and that would be fine, so long as you write a proper base case as well.
2)
This is pretty straightforward from your problem description, but probably not too efficient. First it filters out the negatives, then it checks all subtractions with non-negative results. The answer is the minimum of these:
p2 l = let l2 = filter (\x -> x >= 0) l
       in minimum [i - j | i <- l2, j <- l2, i >= j]
The problem here is that it will allow a number to be checked against itself, which will lead to an answer of always zero. Any ideas? I'd like to leave it to you; the commenter has a point about spoon-feeding.
1) You can use the fact that lists form a monad:
dup = (=<<) (\x -> replicate x x)
Or in do-notation:
dup xs = do x <- xs; replicate x x; return x
2) For getting only the non-negative numbers from a list, you can use filter:
filter (>= 0) [1,-1,0,-5,3]
-- [1,0,3]
To get all possible "pairings" you can use either monads or applicative functors:
import Control.Applicative
(,) <$> [1,2,3] <*> [1,2,3]
[(1,1),(1,2),(1,3),(2,1),(2,2),(2,3),(3,1),(3,2),(3,3)]
Of course, instead of creating pairs you can generate the differences directly by replacing (,) with (-). Now you need to filter again, discarding all zero or negative differences. Then you only need to find the minimum of the list, but I think you can guess the name of that function.
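For example, replacing (,) with (-) on the filtered list from above gives all pairwise differences:
(-) <$> [1,0,3] <*> [1,0,3]
-- [0,1,-2,-1,0,-3,2,3,0]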
Here, this should do the trick:
dup [] = []
dup (x:xs) = (replicate x x) ++ (dup xs)
We define dup recursively: for an empty list it is just an empty list; for a non-empty list, it is a list in which the first x elements are equal to x (the head of the initial list), and the rest is the list generated by recursively applying the dup function to the tail. It is easy to prove the correctness of this solution by induction (do it as an exercise).
Now, lets analyze your initial solution:
duplicate1 xs = x*x ++ duplicate1 xs
The first mistake: you did not define the list pattern properly. According to your definition, the function has just one argument - xs. To achieve the desired effect, you should use the correct pattern for matching the list's head and tail (x:xs, see my previous example). Read up on pattern matching.
But that's not all. Second mistake: x*x is x squared, a number, not a list. Which brings us to the third mistake: ++ expects both of its operands to be lists of the same type, while in your code you're trying to apply ++ to two values of types Int and [Int].
As for the second task, the solution has already been given.
HTH