Lazy Catalan Numbers in Haskell

How might I go about efficiently generating an infinite list of Catalan numbers? What I have now works reasonably quickly, but it seems to me that there should be a better way.
c 1 = [1]
c n = sum (zipWith (*) xs (reverse xs)) : xs
  where xs = c (n-1)

catalan = map (head . c) [1..]
I made an attempt at using fix instead, but the lambda isn't lazy enough for the computation to terminate:
catalan = fix (\xs -> xs ++ [sum (zipWith (*) xs (reverse xs))])
I realize (++) isn't ideal.
Does such a better way exist? Can that function be made sufficiently lazy? There's an explicit formula for the nth, I know, but I'd rather avoid it.

The Catalan numbers [wiki] can be defined inductively with:
C_0 = 1 and C_{n+1} = (4n + 2) · C_n / (n + 2).
So we can implement this as:
catalan :: Integral i => [i]
catalan = xs
  where
    xs = 1 : zipWith f [0..] xs
    f n cn = div ((4*n+2) * cn) (n+2)
For example:
Prelude> take 10 catalan
[1,1,2,5,14,42,132,429,1430,4862]

I'm guessing you're looking for a lazy, infinite, self-referential list of all the Catalan numbers using one of the basic recurrence relations. That's a common thing to do with the Fibonacci numbers after all. But it would help to specify the recurrence relation you mean, if you want answers to your specific question. I'm guessing this is the one you mean:
cat :: Integer -> Integer
cat 1 = 1
cat n = sum [ cat i * cat (n - i) | i <- [1 .. n - 1] ]
If so, the conversion to a self-referential form looks like this:
import Data.List (inits)
cats :: [Integer]
cats = 1 : [ sum (zipWith (*) pre (reverse pre)) | pre <- tail (inits cats) ]
This is quite a lot more complex than the fibonacci examples, because the recurrence refers to all previous entries in the list, not just a fixed small number of the most recent. Using inits from Data.List is the easiest way to get the prefix at each position. I used tail there because its first result is the empty list, and that's not helpful here. The rest is a straight-forward rewrite of the recurrence relation that I don't have much to say about. Except...
It's going to perform pretty badly. I mean, it's better than the exponential recursive calls of my cat function, but there's a lot of list manipulation going on that's allocating and then throwing away a lot of memory cells. That recurrence relation is not a very good fit for the recursive structure of the list data type. You can explore a lot of ways to make it more efficient, but they'll all be pretty bad in the end. For this particular case, going to a closed-form solution is the way to go if you want performance.
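For reference, here is one possible sketch of that closed-form route (my own illustration, not part of the answer above), using the identity C_n = (2n choose n) / (n + 1):
-- Hypothetical closed-form helper, shown only as a sketch.
-- C_n = (2n)! / (n! * (n+1)!); the quotient (2n)!/(n+1)! is written
-- directly as product [n+2 .. 2*n] to avoid computing full factorials.
catalanClosed :: Integer -> Integer
catalanClosed n = product [n+2 .. 2*n] `div` product [1 .. n]
A quick check: map catalanClosed [0..9] should give [1,1,2,5,14,42,132,429,1430,4862].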

Apparently, what you wanted is
> cats = 1 : unfoldr (\ fx -> let x = sum $ zipWith (*) fx cats in Just (x, x:fx)) [1]
> take 10 cats
[1,1,2,5,14,42,132,429,1430,4862]
This avoids the repeated reversing of the prefixes (as in the linked answer), by unfolding with the state being a reversed prefix while consing onto the state as well as producing the next element.
The non-reversed prefix we don't have to maintain, as zipping the reversed prefix with the catalans list itself takes care of the catalans prefix's length.
You did say you wanted to avoid the direct formula.

The Catalan numbers are best understood by their generating function, which satisfies the relation
f(t) = 1 + t f(t)^2
This can be expressed in Haskell as
f :: [Int]
f = 1 : convolve f f
for a suitable definition of convolve. It is helpful to factor out convolve, for many other counting problems take this form. For example, a generalized Catalan number enumerates ternary trees, and its generating function satisfies the relation
g(t) = 1 + t g(t)^3
which can be expressed in Haskell as
g :: [Int]
g = 1 : convolve g (convolve g g)
convolve can be written using Haskell primitives as
convolve :: [Int] -> [Int] -> [Int]
convolve xs = map (sum . zipWith (*) xs) . tail . scanl (flip (:)) []
For these two examples and many other special cases, there are formulas that are quicker to evaluate. convolve is however more general, and cognitively more efficient. In a typical scenario, one has understood a counting problem in terms of a polynomial relation on its generating function, and now wants to compute some numbers in order to look them up in The On-Line Encyclopedia of Integer Sequences. One wants to get in and out, indifferent to complexity. What language will be least fuss?
If one has seen the iconic Haskell definition for the Fibonacci numbers
fibs :: [Int]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
then one imagines there must be a similar idiom for products of generating functions. That search is what brought me here.

Related

Infinite lists that depend on each other in Haskell?

I am working on a programming exercise where the goal is to write a function to get the term at the Nth index of Hofstadter's Figure-Figure sequence.
Rather than come up with a basic solution using the formula, I thought it would be an interesting challenge to generate an infinite list to represent the sequence and then index it.
This was my initial approach; however, it hangs when trying to calculate anything past the first two terms.
hof :: Int -> Int
hof = (!!) seqA
  where
    seqA = 1 : zipWith (+) seqA seqB
    seqB = 2 : filter (`notElem` seqA) [3..]
seqA represents the sequence of terms, and seqB is the differences between them.
Though I don't really understand how to use seq, I tried using it to strictly evaluate the terms that come before the desired one, as shown below.
hof :: Int -> Int
hof 0 = 1
hof n = seq (hof $ n - 1) $ seqA !! n
  where
    seqA = 1 : zipWith (+) seqA seqB
    seqB = 2 : filter (`notElem` seqA) [3..]
This also hangs when trying to calculate values past the first index.
After playing around in ghci, I found a way to get this to work in a weird way
ghci> seqB = [2, 4, 5, 6]
ghci> seqA = 1 : zipWith (+) seqA seqB
ghci> seqB = 2 : filter (`notElem` seqA) [3..]
ghci> seqA = 1 : zipWith (+) seqA seqB
ghci> hof = (!!) seqA
By giving seqB an initial value and redefining both seqA and seqB afterwards, it seems to function normally. I did notice, however, that the result of passing larger values to hof seems to give different results based on how many terms I initially put in the seqB list. When I redefine the function in ghci, does it still use the older version for functions that call it previous to its redefinition?
I would like to know why this works in ghci and whether it's possible to write a working version of this code using a similar technique. Thanks in advance!
The problem is that seqA is infinite, and so
(`notElem` seqA) x
can never return True. If it sees that x is the first element of seqA, then great: it can return False. But if it doesn't see x, it wants to keep looking: maybe x is the next element! The list never ends, so there's no way it can conclude x is definitely not present.
This is a classic mistake beginners make, trying to filter an infinite list and expecting the list to end at some point. Often, the answer is to use something like
x `notElem` (takeWhile (<= x) infList)
instead. This way, your program gives up on searching for x once it's found a number above x. This only works if your lists are sorted, of course. Your equations look like they probably produce ascending lists, in which case it would work, but I haven't worked through the algebra. If your lists aren't in ascending order, you'll need to design some other stopping condition to avoid the infinite recursion.
The other answer tells you the problem with your approach, and suggests a great fix. I thought it might be fun to try to work out a slightly more efficient solution, though; it seems a shame to keep checking the beginning of seqA over and over during our membership calls. Here's the idea I had: the point is for seqB to be the complement of seqA, right? Well, what if we just directly define a complement function? Like this:
complement :: [Integer] -> [Integer]
complement = go 1 where
    go i xs@(x:xt) = case compare i x of
        LT -> i : go (i+1) xs
        EQ -> i+1 : go (i+2) xt
        GT -> go i xt -- this case should be impossible
    go i [] = [i..] -- this case is irrelevant for our purposes
The EQ case is a bit suspect; it doesn't work for general increasing input sequences. (But see below.) Anyway, with this definition in place, the two sequences can be quite naturally defined:
seqA, seqB :: [Integer]
seqA = 1 : zipWith (+) seqA seqB
seqB = complement seqA
Try it in ghci:
> take 10 seqA
[1,3,7,12,18,26,35,45,56,69]
Nice. Now, if we fix up the EQ case to work properly for all (increasing) input sequences, it would have to look like this:
complement :: [Integer] -> [Integer]
complement = go 1 where
    go i xs@(x:xt) = case compare i x of
        LT -> i : go (i+1) xs
        EQ -> go (i+1) xt
        GT -> go i xt -- still impossible
    go i [] = [i..] -- still irrelevant
Unfortunately, our definitions of seqA and seqB above don't quite work any more. The right first value for seqB depends on whether 2 is in seqA, but whether 2 is in seqA depends on whether the first value of seqB is 1 or not... Luckily, because seqA grows much faster than seqB, we only have to prime the pump a little.
seqA, seqB :: [Integer]
seqA = 1 : 3 : 7 : zipWith (+) (drop 2 seqA) (drop 2 seqB)
seqB = complement seqA
-- OR
seqA = 1 : zipWith (+) seqA seqB
seqB = 2 : 4 : drop 2 (complement seqA)
Try it in ghci:
> take 10 seqA
[1,3,7,12,18,26,35,45,56,69]
The definition of seqX is a bit less natural, but the definition of complement is a bit more natural, so there seems to be something of a tradeoff there.
As an answer to this part:
When I redefine the function in ghci, does it still use the older version for functions that call it previous to its redefinition?
Yes, that's the way it has to work. Bindings at the ghci prompt are not mutable variables as you would have in an imperative language, they're supposed to work the same way as variables do in every other part of Haskell.
So when you have this:
ghci> a = 1
ghci> b = [a]
ghci> b
[1]
a is just a name for 1, and b is just a name for [1]. The latter was calculated from the expression [a] by seeing what value a was a name for, but it is absolutely the value [1] and not the expression [a] that b refers to.
ghci> a = 2
ghci> b
[1]
Executing a = 2 doesn't change the value referred to by a, it just changes the state of the environment available at the ghci prompt. This cannot affect any values that were calculated when a was a name for 1; they were and remain pure values.
An easy way to think about it is that a = 2 is not "changing a", it's just introducing a new and separate binding. Because it happens to have the same name as an existing one the new one shadows the old one, making the old one impossible for you to refer to in any future expressions. But nothing about the old one has been changed.
And you will in fact see exactly the same behaviour in a compiled module in contexts where you can have multiple bindings for one name (if you shadow a function argument with a let, or nest lets, etc). All but one of them will be inaccessible, but things that were defined in terms of the shadowed binding remain exactly the same; they aren't re-evaluated as if they were defined in terms of the new binding.
So with that in mind, it becomes easy to explain why this works:
ghci> seqB = [2, 4, 5, 6]
ghci> seqA = 1 : zipWith (+) seqA seqB
ghci> seqB = 2 : filter (`notElem` seqA) [3..]
ghci> seqA = 1 : zipWith (+) seqA seqB
ghci> hof = (!!) seqA
It's much the same as if you had defined it this way:
ghci> seqB_old = [2, 4, 5, 6]
ghci> seqA_old = 1 : zipWith (+) seqA_old seqB_old
ghci> seqB_new = 2 : filter (`notElem` seqA_old) [3..]
ghci> seqA_new = 1 : zipWith (+) seqA_new seqB_new
ghci> hof = (!!) seqA_new
seqB_old is just a finite list.
Because zipWith stops at the length of the shortest list, seqA_old is also just a finite list, even though it's defined in terms of itself.
seqB_new is an infinite list that just has to filter each element against any of the elements of the finite list seqA_old; this doesn't get caught up in the problem amalloy points out, but it isn't actually the correct list you were trying to define
seqA_new is defined in terms of itself, but seqB_new was defined in terms of seqA_old, not this new version. There is simply no mutual recursion happening.
This problem doesn’t really lend itself to a mutually recursive solution. filter + notElem will continue searching beyond where they could ever return a result, because they can’t make any use of the fact that the sequence is strictly ascending.
Rather than searching for the next element that we haven’t seen, we can turn the problem around: start by assuming we will see every number, and use delete to prune out those numbers that we know we will want to exclude.
import Data.List (delete, scanl', unfoldr)

hof :: Int -> Int
hof = (!!) seqA
  where
    -- By definition, one is the cumulative sum of the other.
    seqA = scanl' (+) 1 seqB
    -- Iteratively build the sequence.
    seqB = unfoldr (infinitely step) (1, [2 ..])
    step c (d, xs) = (c, (c + d, delete (c + d) xs))
    -- Helper for when ‘unfoldr’ is known to have
    -- unbounded input (‘x : xs’ always matches)
    -- and unbounded output (we always return ‘Just’).
    infinitely f (d, x : xs) = Just (f x (d, xs))
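Assuming the definitions above are in scope, a quick GHCi check should match the take 10 seqA output shown earlier:
> map hof [0..9]
[1,3,7,12,18,26,35,45,56,69]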

How to do this in Haskell? [x^0,x^1,x^2,x^3 ...]

I want to have a list like this one:
[x^0,x^1,x^2,x^3 ...]
Is it possible to have such a list? For example, with x = 2: [1,2,4,8,16,32 ..]
You can use iterate or unfoldr to double a number many times. This could be more efficient than computing x^n for each n.
Below, I use x=2, but you can use any x.
> take 10 $ iterate (*2) 1
[1,2,4,8,16,32,64,128,256,512]
> take 10 $ unfoldr (\x -> Just (x,2*x)) 1
[1,2,4,8,16,32,64,128,256,512]
Also beware that bounded integer types such as Int will overflow pretty fast in this way.
Yes, it is a pretty easy thing to do in Haskell.
You create an infinite stream of non-negative numbers and then map over it with the function n ↦ x^n:
f :: Num a => a -> [a]
f x = fmap (\n -> x^n) [0..]
> take 10 (f 2)
[1,2,4,8,16,32,64,128,256,512]
In Haskell, a list like this is linear no matter the progression. By linear, I mean non-recursive: the elements of the list do not depend on one or more previous elements or on an initial element.
Such lists are used a great deal in Haskell, and there are two primary facilities for producing them. The first is map, which works without any filtering or recursion:
f b n = map (b^) [0..n]
The second is a list comprehension:
f b n = [b^x | x <- [0..n]]
In both it is simple to set the limit, i.e. the number of elements in the result. Both could be made into infinite lists, if desired, by dropping the n from both the left-hand and right-hand sides of the equations; see the sketch below.
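For instance, the infinite variants might look like this (a small sketch; the names powersMap and powersComp are just for illustration):
-- Hypothetical infinite versions of the two definitions above:
powersMap :: Num a => a -> [a]
powersMap b = map (b^) [0..]

powersComp :: Num a => a -> [a]
powersComp b = [ b^x | x <- [0..] ]
> take 6 (powersMap 2)
[1,2,4,8,16,32]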

Long running time of a program that counts Ints

I want to write a program that takes a list of Ints and a length, and returns a list that contains at position i all of the elements equal to i. For example:
[0,0,0,1,3,5,3,2,2,4,4,4] 6 -> [[0,0,0],[1],[2,2],[3,3],[4,4,4],[5]]
[0,0,4] 7 -> [[0,0],[],[],[],[4],[],[]]
[] 3 -> [[],[],[]]
[2,2] 3 -> [[],[],[2,2]]
So, here's my solution:
import Data.List
import Data.Function

f :: [Int] -> Int -> [[Int]]
f ls len = g 0 ls' [] where
    ls' = group . sort $ ls
    g :: Int -> [[Int]] -> [[Int]] -> [[Int]]
    g val [] accum
        | len == val = accum
        | otherwise  = g (val+1) [] (accum ++ [[]])
    g val (x:xs) accum
        | len == val    = accum
        | val == head x = g (val+1) xs (accum ++ [x])
        | otherwise     = g (val+1) (x:xs) (accum ++ [[]])
But the query f [] 1000000 takes a really long time. Why?
I see we're accumulating over some data structure. I think foldMap. I ask "Which Monoid"? It's some kind of lists of accumulations. Like this
newtype Bunch x = Bunch {bunch :: [x]}

instance Semigroup x => Semigroup (Bunch x) where
    Bunch xss <> Bunch yss = Bunch (glom xss yss) where
        glom [] yss = yss
        glom xss [] = xss
        glom (xs : xss) (ys : yss) = (xs <> ys) : glom xss yss

instance Semigroup x => Monoid (Bunch x) where
    mempty = Bunch []
Our underlying elements have some associative operator <>, and we can thus apply that operator pointwise to a pair of lists, just like zipWith does, except that when we run out of one of the lists, we don't truncate, rather we just take the other. Note that Bunch is a name I'm introducing for purposes of this answer, but it's not that unusual a thing to want. I'm sure I've used it before and will again.
If we can translate
0 -> Bunch [[0]] -- single 0 in place 0
1 -> Bunch [[],[1]] -- single 1 in place 1
2 -> Bunch [[],[],[2]] -- single 2 in place 2
3 -> Bunch [[],[],[],[3]] -- single 3 in place 3
...
and foldMap across the input, then we'll get the right number of each in each place. There should be no need for an upper bound on the numbers in the input to get a sensible output, as long as you are willing to interpret [] as "the rest is silence". Otherwise, like Procrustes, you can pad or chop to the length you need.
Note, by the way, that when mappend's first argument comes from our translation, we do a bunch of ([]++) operations, a.k.a. ids, then a single ([i]++), a.k.a. (i:), so if foldMap is right-nested (which it is for lists), then we will always be doing cheap operations at the left end of our lists.
Now, as the question works with lists, we might want to introduce the Bunch structure only when it's useful. That's what Control.Newtype is for. We just need to tell it about Bunch.
instance Newtype (Bunch x) [x] where
    pack = Bunch
    unpack = bunch
And then it's
groupInts :: [Int] -> [[Int]]
groupInts = ala' Bunch foldMap (basis !!) where
    basis = ala' Bunch foldMap id [iterate ([]:) [], [[[i]] | i <- [0..]]]
What? Well, without going to town on what ala' is in general, its impact here is as follows:
ala' Bunch foldMap f = bunch . foldMap (Bunch . f)
meaning that, although f is a function to lists, we accumulate as if f were a function to Bunches: the role of ala' is to insert the correct pack and unpack operations to make that just happen.
We need (basis !!) :: Int -> [[Int]] to be our translation. Hence basis :: [[[Int]]] is the list of images of our translation, computed on demand at most once each (i.e., the translation, memoized).
For this basis, observe that we need these two infinite lists
[ []             [ [[0]]
, [[]]           , [[1]]
, [[],[]]        , [[2]]
, [[],[],[]]     , [[3]]
...              ...
combined Bunchwise. As both lists have the same length (infinity), I could also have written
basis = zipWith (++) (iterate ([]:) []) [[[i]] | i <- [0..]]
but I thought it was worth observing that this also is an example of Bunch structure.
Of course, it's very nice when something like accumArray hands you exactly the sort of accumulation you need, neatly packaging a bunch of grungy behind-the-scenes mutation. But the general recipe for an accumulation is to think "What's the Monoid?" and "What do I do with each element?". That's what foldMap asks you.
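Putting the pieces together, a quick GHCi check on the question's first example should (assuming the definitions above compile as sketched) come out as:
> groupInts [0,0,0,1,3,5,3,2,2,4,4,4]
[[0,0,0],[1],[2,2],[3,3],[4,4,4],[5]]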
The (++) operator copies the left-hand list. For this reason, adding to the beginning of a list is quite fast, but adding to the end of a list is very slow.
In summary, avoid adding things to the end of a list. Try to always add to the beginning instead. One simple way to do that is to build the list backwards, and then reverse it at the end. A more devious trick is to use "difference lists" (Google it). Another possibility is to use Data.Sequence rather than a list.
The first thing to note is that the most obvious way to implement this is with a data structure that allows random access; an array is an obvious choice. Note that you need to add elements to the array multiple times and somehow "join" them.
accumArray is perfect for this.
So, importing Data.Array, we get:
import Data.Array (accumArray, elems)

f l i = elems $ accumArray (\l e -> e:l) [] (0,i-1) (map (\e -> (e,e)) l)
And we're good to go.
This approach does involve converting the final array back into a list, but that step is very likely faster than, say, sorting the list, which often involves scanning the list at least a few times for a list of decent size.
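For instance, the question's second example should come out as expected (a quick check, assuming Data.Array is imported as above):
> f [0,0,4] 7
[[0,0],[],[],[],[4],[],[]]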
Whenever you use ++ you have to recreate the entire left-hand list, since lists are immutable.
A simple solution would be to use :, but that builds a reversed list. However, that can be fixed using reverse, which results in building only two lists (instead of a million in your case).
Your concept of glomming things onto an accumulator is a very useful one, and both MathematicalOrchid and Guvante show how you can use that concept reasonably efficiently. But in this case, there is a simpler approach that is likely also faster. You started with
group . sort $ ls
and this was a very good place to start! You get a list that's almost the one you want, except that you need to fill in some blanks. How can we figure those out? The simplest way, though probably not quite the most efficient, is to work with a list of all the numbers you want to count up to: [0 .. len-1].
So we start with
f ls len = g [0 .. len-1] (group . sort $ ls)
  where
    ?
How do we define g? By pattern matching!
f ls len = g [0 .. len-1] (group . sort $ ls)
  where
    -- We may or may not have some lists left,
    -- but we counted as high as we decided we
    -- would
    g [] _ = []
    -- We have no lists left, so the rest of the
    -- numbers are not represented
    g ns [] = map (const []) ns
    -- This shouldn't be possible, because group
    -- doesn't make empty lists.
    g _ ([]:_) = error "group isn't working!"
    -- Finally, we have some work to do!
    g (n:ns) xls@(xl@(x:_):xls')
      | n == x    = xl : g ns xls'
      | otherwise = [] : g ns xls
That was nice, but making the list of numbers isn't free, so you might be wondering how you can optimize it. One method I invite you to try is using your original technique of keeping a separate counter, but following this same sort of structure.
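For what it's worth, here is one guess at such a counter-based variant (my own sketch, not part of the answer); it keeps the same equational structure but carries an Int counter instead of materializing [0 .. len-1]:
import Data.List (group, sort)

-- A hypothetical counter-based variant of f.
f' :: [Int] -> Int -> [[Int]]
f' ls len = go 0 (group (sort ls))
  where
    go n _ | n >= len = []                   -- counted as high as requested
    go n []           = [] : go (n + 1) []   -- no groups left: pad with empties
    go n xls@(xl@(x:_):xls')
      | n == x    = xl : go (n + 1) xls'     -- this number is represented
      | otherwise = [] : go (n + 1) xls      -- this number is missing
    go _ ([]:_)       = error "group isn't working!"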

Is it possible to create a list of fibonacci numbers using [edit: a very limited set of knowledge]?

For instance, in the form of something like this:
let f = [ a | a <- [1..], a == a - 1]
I'm just curious. It seems like it'd be possible, but I can't quite wrap my head around how it would work. This question is meant more for understanding how Haskell works, not so much because I'm looking for a practical application.
Also, I know that similar questions have been asked, but none of the posts I looked at offered any help in doing it the way I'm curious about.
Edit: sorry for the vagueness. Let me clarify with a new rule, then. The challenge is to find a way to represent an infinite list of Fibonacci numbers using as little beyond the first chapter of Learn You A Haskell For Great Good! as possible. How's that? In other words, what's the most creative way you can think of to produce those numbers with as little 'knowledge' as possible? Sorry for making anyone's answer invalid now.
Try this:
import Data.List (tails)
fib :: [Integer]
fib = 0 : 1 : [ a + b | (a:b:_) <- tails fib ]
Yes it makes use of the cons operator (:) for the seed values. However I believe that can be pardoned.
It is certainly possible with a nasty trick: there's a closed form of the Fibonacci sequence
f_n = (φ^n - ψ^n) / √5
so
Prelude> let (φ, ψ) = (1/2+s, 1/2-s) where s = sqrt(5/4)
Prelude> let fibs = [ round $ (φ^n - ψ^n) / sqrt 5 | n <- [0..] ]
Prelude> take 20 fibs
[0,1,1,2,3,5,8,13,21,34,55,89,144,233,377,610,987,1597,2584,4181]
This works in floating point, so it's extremely fast but fails to work exactly at high values:
Prelude> take 3 $ drop 80 fibs
[23416728348467676,37889062373143896,61305790721611584]
Prelude> 23416728348467676 + 37889062373143896 - 61305790721611584
-12
I think it cannot work without exploiting irrational numbers or putting some recursion in the comprehension, because list comprehensions are just syntactic sugar for monadic binds, and those by themselves aren't Turing-complete, so they can't generate an infinite sequence constructively.
The simplest is of course:
fibs = f 0 1 where f a b = a : (f b (a+b))
We arrive at this solution from the definition.
Let f a b compute the stream of numbers starting with a and followed by b. Then we can use f to compute the substream of numbers following b, provided we know the number that comes immediately after b. We know that a+b follows immediately after b, so we take that stream f b (a+b) and prepend a to it.
This solution does not involve any list library functions, using list comprehensions instead.
fib = [ x | (x,_) <- l ]
  where l = (0,1) : [ (b,a+b) | (a,b) <- l ]
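A quick GHCi check of this comprehension-only version should give the familiar prefix:
> take 10 fib
[0,1,1,2,3,5,8,13,21,34]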

How to select every n-th element from a list [duplicate]

Possible Duplicate:
How to get every Nth element of an infinite list in Haskell?
Simple task - we have a list and want to leave only each nth element in that list.
What is the most idiomatic way to do it in Haskell?
Off the top of my head, it is something like:
dr n [] = []
dr n (x : xs) = x : (dr n $ drop n xs)
but I have a strong feeling that I'm overcomplicating the problem.
My variant would be:
each :: Int -> [a] -> [a]
each n = map head . takeWhile (not . null) . iterate (drop n)
Fast and plays well with laziness.
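For example (a quick check, assuming the definition above):
> each 3 [1..10]
[1,4,7,10]
> take 5 (each 3 [1..])
[1,4,7,10,13]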
Your solution is fine, but here are three other solutions using functions from Haskell's base library.
dr1 m = concatMap (take 1) . iterate (drop m)
Of course, this will never terminate (because iterate never terminates). So perhaps a better solution would be to use unfoldr:
{-# LANGUAGE TupleSections #-}
import Data.List (unfoldr)
import Data.Maybe

dr2 m = unfoldr (\x -> fmap (,drop m x) (listToMaybe x))
The function you pass to an unfold can get a bit ugly if you don't know GHC extensions and concepts such as functors; here's that solution again without the fancy footwork (untested):
dr2 m = unfoldr (\x -> case listToMaybe x of
                         Nothing -> Nothing
                         Just i  -> Just (i, drop m x))
If you don't like unfolds then consider a zip and a filter:
dr3 m = map snd . filter ((== 1) . fst) . zip (cycle [1..m])
Review
Understand that all these solutions are slightly different; learning why will make you a better Haskell programmer. dr1 uses iterate and will thus never terminate (perhaps this is OK for infinite lists, but probably not a good overall solution):
> dr1 99 [1..400]
[1,100,199,298,397^CInterrupted.
The dr2 solution will show every mth value by skipping values in the unfold. The unfold passes both the value to be used for the next unfolding and the result of the current unfolding in a single tuple.
> dr2 99 [1..400]
[1,100,199,298,397]
The dr3 solution is slightly longer but probably easier for a beginner to understand. First you tag every element in the list with a cycle of [1..n, 1..n, 1..n ...]. Second, you select only the numbers tagged with a 1, effectively skipping n-1 of the elements. Third you remove the tags.
> dr3 99 [1..400]
[1,100,199,298,397]
Lots of ways to shave this yak! Here's yet another:
import Data.List.Split -- from the "split" package on Hackage
dr n = map head . chunksOf n  -- chunksOf was called ‘chunk’ in very old versions of split
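A quick check of this version (assuming the split package is installed):
> dr 3 [1..10]
[1,4,7,10]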
Try this:
getEach :: Int -> [a] -> [a]
getEach _ [] = []
getEach n list
  | n < 1     = []
  | otherwise = foldr (\i acc -> list !! (i - 1) : acc) [] [n, (2 * n) .. (length list)]
Then in GHC:
*Main> getEach 2 [1..10]
[2,4,6,8,10]
