Derivative Towers and how to use the vector-space package (Haskell)

I have been working with Haskell for quite a while now, but I am far from being an expert. Still, I find that the functional approach to programming suits me best.
So far I am working on a project to calculate some serious stuff, like currents and potentials radiated from a given structure.
I followed the blog posts written by Conal Elliott (and the follow-up on Linear Maps), which are very nice and fundamental.
Unfortunately, I am lacking a simple example :)
To be more precise, I have a curve
f : [0,1] ⊂ ℝ → ℝ³
t ↦ a*e_y + 2*t*e_z
which is a simple straight line traced out by the points (0, a, 2*t).
When I want to calculate the derivative of f, e.g. for the length of the curve, I know the mathematical result, which is quite simple (0,0,2), but how do I accomplish this in Haskell, especially with the vector-space package?
I really want to use this library because of its functionality; it is exactly the approach I would have taken too (but I am not that far ahead on the Haskell road).
What I have so far is this:
{-# LANGUAGE Rank2Types, TypeOperators, FlexibleContexts, TypeFamilies #-}
{-# OPTIONS_GHC -Wall #-}
import Numeric.GSL.Integration
import Data.VectorSpace
import Data.Basis
import Data.Cross
import Data.Derivative
import Data.LinearMap
type Vec3 s = Three s
prec :: Double
prec = 1E-9
f1 :: (Floating s, VectorSpace s, Scalar s ~ s) => s -> s
f1 = id
c1 :: Double -> Vec3 Double
c1 = \t -> linearCombo [((v 0 0 1),f1 t),(( v 0 1 0),2)]
derivC :: Double -> Vec3 (Double :> Double)
derivC t = c1 (pureD t)
The problem is the actual application of the pureD function; so far nothing that I have tried gets this snippet to compile. I get the following error:
tests.hs:26:12:
Couldn't match expected type `Double :> Double'
with actual type `Double'
Expected type: Vec3 (Double :> Double)
Actual type: Vec3 Double
In the return type of a call of `c1'
In the expression: c1 (pureD t)
Failed, modules loaded: none.
There is also a graphics library which uses vector-space, and there is even an example involving a torus where pureD is used. I tried to adapt that example, but I don't see how to map it to my problem.
Any help would be greatly appreciated.
Thanks in advance
PS: I cannot post all the links I'd like to, but am willing to provide them on request.

That's an interesting library, thanks for sharing.
Although I don't understand the concepts behind the library yet, how about this code:
{-# LANGUAGE Rank2Types, TypeOperators, FlexibleContexts, TypeFamilies #-}
module Main where
import Data.LinearMap
import Data.Maclaurin
diff :: (Double :~> (Double,Double,Double) ) -> (Double :~> (Double,Double,Double))
diff f = \x -> (atBasis (derivative (f x)) ())
eval :: (Double :~> (Double,Double,Double)) -> Double -> (Double,Double,Double)
eval f x = powVal (f x)
f :: Double :~> (Double,Double,Double)
f x = tripleD (pureD 0,pureD 1,(2*idD) x)
*Main> map (eval f) [0,0.2 .. 1]
[(0.0,1.0,0.0),(0.0,1.0,0.4),(0.0,1.0,0.8),(0.0,1.0,1.2000000000000002),
(0.0,1.0,1.6000000000000003),(0.0,1.0,2.0000000000000004)]
*Main> map (eval (diff f)) [0,0.2 .. 1]
[(0.0,0.0,2.0),(0.0,0.0,2.0),(0.0,0.0,2.0),(0.0,0.0,2.0),(0.0,0.0,2.0),
(0.0,0.0,2.0)]
*Main> map (eval (diff $ diff f)) [0,0.2 .. 1]
[(0.0,0.0,0.0),(0.0,0.0,0.0),(0.0,0.0,0.0),(0.0,0.0,0.0),(0.0,0.0,0.0),(0.0,0.0,0.0)]
Try also g x = tripleD (pureD 0, idD x, (idD*idD) x), which seems to represent the curve (0, x, x^2).
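Written out in full, with the expected values worked out by hand (a sketch reusing the diff and eval helpers above; the numbers are my own calculation, not a GHCi transcript):
g :: Double :~> (Double,Double,Double)
g x = tripleD (pureD 0, idD x, (idD*idD) x)
-- map (eval g) [0,0.5,1]        should give [(0.0,0.0,0.0),(0.0,0.5,0.25),(0.0,1.0,1.0)]
-- map (eval (diff g)) [0,0.5,1] should give [(0.0,1.0,0.0),(0.0,1.0,1.0),(0.0,1.0,2.0)]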

You might want to try the ad package, which does its best to make it easy to do automatic differentiation of functions written in transparent idiomatic Haskell.
$ cabal install ad
$ ghci
Prelude> :m + Numeric.AD
Prelude Numeric.AD> diffF (\t->let a=3 in [0,a,2*t]) 7
[0,0,2]
Prelude Numeric.AD> let f t = let a=3 in [0,a,2*t]
Prelude Numeric.AD> diffF f 17
[0,0,2]
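Since the original goal was the arc length of the curve, the derivative from ad can be fed straight into a numeric integral. A minimal sketch (speed and arcLength are names I made up, and the crude midpoint rule stands in for Numeric.GSL.Integration from the question):
import Numeric.AD (diffF)

-- |f'(t)|, the norm of the tangent vector of the curve from the question
speed :: Double -> Double
speed t = sqrt . sum . map (^ 2) $ diffF (\s -> let a = 3 in [0, a, 2 * s]) t

-- integrate the speed over [0,1] with a midpoint rule
arcLength :: Double
arcLength = h * sum [speed ((fromIntegral i + 0.5) * h) | i <- [0 .. n - 1]]
  where n = 1000 :: Int
        h = 1 / fromIntegral n
-- expected result: 2.0, since the derivative is (0,0,2) everywhere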

Related

How do the various "..Instances" pragmas work together, and is there a way around my current problem?

Consider the following code:
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE UndecidableInstances #-}
class X a
class Y a
instance Y Bool
instance (Y a) => X a
instance {-# OVERLAPPING #-} X Int
f :: (X a) => a -> a
f x = x
These LANGUAGE pragmas are needed to write the above instances.
Now, say we want to write a function g:
g :: (Y a) => a -> a
g = f
Without IncoherentInstances or adding {-# INCOHERENT #-} to one of the instances, this doesn't typecheck.
But when we do add this, and ask ghci
ghci> :t f
f :: Y a => a -> a
Suddenly the type of 'f' changed?
With this small example, programs still typecheck when I give f an Int (indicating that the above would be merely a 'visual bug'), but in a bigger example the same does not typecheck, giving me an error like:
Could not deduce (Y a) arising from a use of 'f
(...)
from the context: (..., X a, ...)
This also happens when we say
h = f
and try to call h with an Int
:type f does not report the type of the defined entity f. It reports the type of the expression f. GHC tries really hard to stamp polymorphism out of expressions. In particular, using f in an expression triggers simplification of the X a constraint (as does using any definition with a constraint).
Without IncoherentInstances, GHC refrains from using instance Y a => X a, because there is another instance that overlaps it, so GHC needs to wait to see which one it should use. This ensures coherence; the only X Int instance that is ever used is the explicitly "specialized" one.
With IncoherentInstances, you say that you don't care about coherence, so GHC goes ahead and simplifies X a to Y a using the polymorphic instance whenever f appears in an expression. The weird behavior you see, where sometimes GHC is OK with using X Int and sometimes complains that there is no Y Int, is a result of GHC making different internal decisions about when it wants to simplify constraints (you did ask for incoherence!).
The command for seeing the type of a definition is :type +v. :type +v f should show the type of f "as declared". Hopefully, you can also see that IncoherentInstances is a bad idea. Don't use it.
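For illustration, the contrast looks roughly like this in a GHCi session with IncoherentInstances enabled (a sketch; the exact output can vary by GHC version):
ghci> :type f
f :: Y a => a -> a       -- the type of the expression f, after simplification
ghci> :type +v f
f :: X a => a -> a       -- the type of f as declared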

Expressing infinite kinds

When expressing infinite types in Haskell:
f x = x x -- This doesn't type check
One can use a newtype to do it:
newtype Inf a = Inf { runInf :: Inf a -> a }
f x = x (Inf x)
Is there a newtype equivalent for kinds that allows one to express infinite kinds?
I already found that I can use type families to get something similar:
{-# LANGUAGE TypeFamilies #-}
data Inf (x :: * -> *) = Inf
type family RunInf x :: * -> *
type instance RunInf (Inf x) = x
But I'm not satisfied with this solution - unlike the newtype equivalent for types, Inf doesn't create a new kind (Inf x has kind *), so there's less kind safety.
Is there a more elegant solution to this problem?
Responding to:
Like recursion-schemes, I want a way to construct ASTs, except I want my ASTs to be able to refer to each other - that is, a term can contain a type (for a lambda parameter), a type can contain a row-type in it, and vice versa. I'd like the ASTs to be defined with an external fix-point (so one can have "pure" expressions or ones annotated with types after type inference), but I also want these fix-points to be able to contain other types of fix-points (just like terms can contain terms of different types). I don't see how Fix helps me there.
I have a method, which maybe is near what you are asking for, that I have been experimenting with. It seems to be quite powerful, though the abstractions around this construction need some development. The key is that there is a kind Label which indicates from where the recursion will continue.
{-# LANGUAGE DataKinds #-}
import Data.Kind (Type)
data Label = Label ((Label -> Type) -> Type)
type L = 'Label
L is just a shortcut to construct labels.
Open-fixed-point definitions are of kind (Label -> Type) -> Type, that is, they take a "label interpretation (type) function" and give back a type. I called these "shape functors", and abstractly refer to them with the letter h. The simplest shape functor is one that does not recurse:
newtype LiteralF a f = Literal a -- does not depend on the interpretation f
type Literal a = L (LiteralF a)
Now we can make a little expression grammar as an example:
data Expr f
  = Lit (f (Literal Integer))
  | Let (f (L Defn)) (f (L Expr))
  | Var (f (Literal String))
  | Add (f (L Expr)) (f (L Expr))
data Defn f
  = Defn (f (Literal String)) (f (L Expr))
Notice how we pass labels to f, which is in turn responsible for closing off the recursion. If we just want a normal expression tree, we can use Tree:
data Tree :: Label -> Type where
  Tree :: h Tree -> Tree (L h)
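For instance, the expression 1 + 2 is built by wrapping every layer in the Tree constructor (a sketch; it needs DataKinds and GADTs and uses only the definitions above):
example :: Tree (L Expr)
example = Tree (Add (Tree (Lit (Tree (Literal 1))))
                    (Tree (Lit (Tree (Literal 2)))))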
Then a Tree (L Expr) is isomorphic to the normal expression tree you would expect. But this also allows us to, e.g., annotate the tree with a label-dependent annotation at each level of the tree:
data Annotated :: (Label -> Type) -> Label -> Type where
  Annotated :: ann (L h) -> h (Annotated ann) -> Annotated ann (L h)
In the repo ann is indexed by a shape functor rather than a label, but this seems cleaner to me now. There are a lot of little decisions like this to be made, and I have yet to find the most convenient pattern. The best abstractions to use around shape functors still needs exploration and development.
There are many other possibilities, many of which I have not explored. If you end up using it I would love to hear about your use case.
With data-kinds, we can use a regular newtype:
import Data.Kind (Type)
newtype Inf = Inf (Inf -> Type)
And promote it (with ') to create new kinds to represent loops:
{-# LANGUAGE DataKinds #-}
type F x = x ('Inf x)
For a type to unpack its 'Inf argument we need a type-family:
{-# LANGUAGE TypeFamilies #-}
type family RunInf (x :: Inf) :: Inf -> Type
type instance RunInf ('Inf x) = x
Now we can express infinite kinds with a new kind for them, so no kind-safety is lost.
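As a tiny check that the kinds line up (G and GG are hypothetical names, just for illustration):
data G (x :: Inf)   -- G :: Inf -> Type
type GG = F G       -- expands to G ('Inf G), and RunInf ('Inf G) gives back G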
Thanks to @luqui for pointing out the DataKinds part in his answer!
I think you're looking for Fix which is defined as
data Fix f = Fix (f (Fix f))
Fix gives you the fixpoint of the type f. I'm not sure what you're trying to do, but such infinite types are generally used with recursion schemes (reusable patterns of recursion); see the recursion-schemes package by Edward Kmett, or free monads, which among other things allow you to construct ASTs in a monadic style.
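For reference, here is how Fix typically ties the knot for an expression functor (a standard recursion-schemes-style sketch; ExprF and the value names are illustrative):
{-# LANGUAGE DeriveFunctor #-}
data ExprF r = LitF Int | AddF r r deriving Functor
type Expr' = Fix ExprF
-- 1 + 2 as an explicit fixed point
e :: Expr'
e = Fix (AddF (Fix (LitF 1)) (Fix (LitF 2)))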

How to satisfy constraints on existentially quantified values?

In an attempt at learning how to work with dependent data types in haskell I encountered the following problem:
Suppose you have a function such as:
mean :: ((1 GHC.TypeLits.<=? n) ~ 'True, GHC.TypeLits.KnownNat n) => R n -> ℝ
defined in the hmatrix library, then how do you use this on a vector that has an existential type? E.g.:
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TypeOperators #-}
import Data.Proxy (Proxy (..))
import GHC.TypeLits
import Numeric.LinearAlgebra.Static
getUserInput =
  let userInput = 3 -- pretend it's unknown at compile time
      seed = 42
  in existentialCrisis seed userInput

existentialCrisis seed userInput
  | userInput <= 0 = 0
  | otherwise =
      case someNatVal userInput of
        Nothing -> undefined -- let's ignore this case for now
        Just (SomeNat (proxy :: Proxy n)) ->
          let someVector = randomVector seed Gaussian :: R n
          in mean someVector -- I know that 'n > 0' but the compiler doesn't
This gives the following error:
• Couldn't match type ‘1 <=? n’ with ‘'True’
arising from a use of ‘mean’
Makes sense indeed, but after some googling and fiddling around, I could not find out how to deal with this. How can I get hold of an n :: Nat, based on user input, such that it satisfies the 1 <= n constraint? I believe it must be possible, since the someNatVal function already succeeds in satisfying the KnownNat constraint based on the condition that the input is not negative.
It seems to me that this is a common thing when working with dependent types, and maybe the answer is obvious but I don't see it.
So my question:
How, in general, can I bring an existential type in scope satisfying the constraints required for some function?
My attempts:
To my surprise, even the following modification
let someVector = randomVector seed Gaussian :: R (n + 1)
gave a type error:
• Couldn't match type ‘1 <=? (n + 1)’ with ‘'True’
arising from a use of ‘mean’
Also, adding an extra instance to <=? to prove this equality does not work, as <=? is a closed type family.
I tried an approach combining GADTs with typeclasses as in this answer to a previous question of mine but could not make it work.
Thanks @danidiaz for pointing me in the right direction; the typelist-witnesses documentation provides a nearly direct answer to my question. It seems I was using the wrong search terms when googling for a solution.
So here is a self-contained, compilable solution:
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE TypeFamilies #-}
import Data.Proxy (Proxy (..))
import Data.Type.Equality ((:~:)(Refl))
import GHC.TypeLits
import GHC.TypeLits.Compare
import Numeric.LinearAlgebra.Static
existentialCrisis :: Int -> Int -> IO Double
existentialCrisis seed userInput =
  case someNatVal (fromIntegral userInput) of
    Nothing -> print "someNatVal failed" >> return 0
    Just (SomeNat (proxy :: Proxy n)) ->
      case isLE (Proxy :: Proxy 1) proxy of
        Nothing -> print "isLE failed" >> return 0
        Just Refl ->
          let someVector = randomVector seed Gaussian :: R n
          in do
            print someVector
            -- I know that 'n > 0' and so does the compiler
            return (mean someVector)
And it works with input only known at runtime:
λ: :l ExistentialCrisis.hs
λ: existentialCrisis 41 1
(0.2596687587224799 :: R 1)
0.2596687587224799
λ: existentialCrisis 41 0
"isLE failed"
0.0
λ: existentialCrisis 41 (-1)
"someNatVal failed"
0.0
It seems like typelist-witnesses does a lot of unsafeCoerceing under the hood. But the interface is type-safe, so it doesn't really matter that much for practical use cases.
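To see what "under the hood" means here, the essence of such a witness can be written in a few lines (my own sketch of the general trick, not the actual typelist-witnesses code):
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeOperators #-}
import Data.Proxy (Proxy)
import Data.Type.Equality ((:~:) (Refl))
import GHC.TypeLits
import Unsafe.Coerce (unsafeCoerce)

-- check 1 <= n at runtime, then assert the corresponding type-level fact
proveLE1 :: KnownNat n => Proxy n -> Maybe ((1 <=? n) :~: 'True)
proveLE1 p
  | natVal p >= 1 = Just (unsafeCoerce Refl) -- justified by the runtime check
  | otherwise     = Nothing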
EDIT:
If this question was of interest to you, you might also find this post interesting: https://stackoverflow.com/a/41615278/2496293

Accessing record name and function in generics

I am trying to figure out how to do generic deriving modeled after deriveJSON. I defined a simple type using record style data constructor as below:
data T = C1 { aInt::Int, aString::String} deriving (Show,Generic)
What I would like to do is define a generically derivable function that takes the data constructor above and outputs a builder using the record names and accessor functions. This is just toy code: we want to make ABuilder generic so we can use it for any data type with record syntax (like deriveJSON in Aeson):
{-# LANGUAGE DeriveGeneric #-}
import GHC.Generics
data T = C1 { aInt::Int, aString::String} deriving (Show,Generic)
-- Some kind of builder output - String here is a stand-in for the
-- builder
class ABuilder a where
f :: a -> String
-- Need to get the record field name, and record field function
-- for each argument, and build string - for anything that is not
-- a string, we need to add show function - we assume "Show" instance
-- exists
instance ABuilder T where
  f x = ("aInt:" ++ (show . aInt $ x)) ++ "," ++ ("aString:" ++ (aString $ x))
What I can't figure out is how to get the record names and the accessor functions. Here is my attempt in GHCi 7.10.3. I could get the data type name, but can't figure out how to get the record names and functions out of it.
$ ghci Test.hs
GHCi, version 7.10.3: http://www.haskell.org/ghc/ :? for help
[1 of 1] Compiling Main ( Test.hs, interpreted )
Ok, modules loaded: Main.
*Main> datatypeName . from $ (C1 {aInt=1,aString="a"})
"T"
*Main> :t from (C1 {aInt=1,aString="a"})
from (C1 {aInt=1,aString="a"})
  :: D1
       Main.D1T
       (C1
          Main.C1_0T
          (S1 Main.S1_0_0T (Rec0 Int) :*: S1 Main.S1_0_1T (Rec0 String)))
       x
*Main>
I will appreciate pointers on how to get the record names and functions with Generics. If TemplateHaskell is a better approach for defining a Generic instance of ABuilder, I will appreciate hearing why. I am hoping to stick to Generics for solving this at compile time if the solution is simple. I have noticed that Aeson uses TemplateHaskell for the deriveJSON part, hence my question about TemplateHaskell above, to see if there is something I am missing (I am using GHC 7.10.3 and don't need backward compatibility with older versions).
Here's something I just whipped up that should get this if you hand it the innards of a specific constructor:
{-# LANGUAGE DeriveGeneric, TypeOperators, FlexibleContexts, FlexibleInstances #-}
import GHC.Generics
data T = C1 { aInt::Int, aString::String} deriving (Show,Generic)
class AllSelNames x where
allSelNames :: x -> [String]
instance (AllSelNames (a p), AllSelNames (b p)) => AllSelNames ((a :*: b) p) where
  allSelNames (x :*: y) = allSelNames x ++ allSelNames y
instance Selector s => AllSelNames (M1 S s f a) where
  allSelNames x = [selName x]
From the repl we see
*Main> let x = unM1 . unM1 $ from (C1 {aInt=1,aString="a"})
*Main> allSelNames x
["aInt","aString"]

Style vs Performance Using Vectors

Here's the code:
{-# LANGUAGE FlexibleContexts #-}
import Data.Int
import qualified Data.Vector.Unboxed as U
import qualified Data.Vector.Generic as V
{-# NOINLINE f #-} -- Note the 'NO'
--f :: (Num r, V.Vector v r) => v r -> v r -> v r
--f :: (V.Vector v Int64) => v Int64 -> v Int64 -> v Int64
--f :: (U.Unbox r, Num r) => U.Vector r -> U.Vector r -> U.Vector r
f :: U.Vector Int64 -> U.Vector Int64 -> U.Vector Int64
f = V.zipWith (+) -- or U.zipWith, it doesn't make a difference
main = do
  let iters = 100
      dim = 221184
      y = U.replicate dim 0 :: U.Vector Int64
  let ans = iterate (f y) y !! iters
  putStr $ show $ U.sum ans
I compiled with ghc 7.6.2 and -O2, and it took 1.7 seconds to run.
I tried several different versions of f:
f x = U.zipWith (+) x
f x = (U.zipWith (+) x) . id
f x y = U.zipWith (+) x y
Version 1 is the same as the original, while versions 2 and 3 run in under 0.09 seconds (and INLINING f doesn't change anything).
I also noticed that if I make f polymorphic (with any of the three signatures above), even with a "fast" definition (i.e. 2 or 3), it slows back down...to exactly 1.7 seconds. This makes me wonder if the original problem is perhaps due to (lack of) type inference, even though I'm explicitly giving the types for the Vector type and element type.
I'm also interested in adding integers modulo q:
newtype Zq q i = Zq {unZq :: i}
As when adding Int64s, if I write a function with every type specified,
h :: U.Vector (Zq Q17 Int64) -> U.Vector (Zq Q17 Int64) -> U.Vector (Zq Q17 Int64)
I get an order of magnitude better performance than if I leave any polymorphism
h :: (Modulus q) => U.Vector (Zq q Int64) -> U.Vector (Zq q Int64) -> U.Vector (Zq q Int64)
But I should at least be able to remove the specific phantom type! It should be compiled out, since I'm dealing with a newtype.
Here are my questions:
Where is the slowdown coming from?
What is going on in versions 2 and 3 of f that affect performance in any way? It seems like a bug to me that (what amounts to) coding style can affect performance like this. Are there other examples outside of Vector where partially applying a function or other stylistic choices affect performance?
Why does polymorphism slow me down an order of magnitude independent of where the polymorphism is (i.e. in the vector type, in the Num type, both, or phantom type)? I know polymorphism makes code slower, but this is ridiculous. Is there a hack around it?
EDIT 1
I filed an issue with the Vector library. I also found a GHC issue relating to this problem.
EDIT 2
I rewrote the question after gaining some insight from @kqr's answer.
Below is the original for reference.
--------------ORIGINAL QUESTION--------------------
Here's the code:
{-# LANGUAGE FlexibleContexts #-}
import Control.DeepSeq
import Data.Int
import qualified Data.Vector.Unboxed as U
import qualified Data.Vector.Generic as V
{-# NOINLINE f #-} -- Note the 'NO'
--f :: (Num r, V.Vector v r) => v r -> v r -> v r
--f :: (V.Vector v Int64) => v Int64 -> v Int64 -> v Int64
--f :: (U.Unbox r, Num r) => U.Vector r -> U.Vector r -> U.Vector r
f :: U.Vector Int64 -> U.Vector Int64 -> U.Vector Int64
f = V.zipWith (+)
main = do
  let iters = 100
      dim = 221184
      y = U.replicate dim 0 :: U.Vector Int64
  let ans = iterate (f y) y !! iters
  putStr $ show $ U.sum ans
I compiled with ghc 7.6.2 and -O2, and it took 1.7 seconds to run.
I tried several different versions of f:
f x = U.zipWith (+) x
f x = (U.zipWith (+) x) . U.force
f x = (U.zipWith (+) x) . Control.DeepSeq.force
f x = (U.zipWith (+) x) . (\z -> z `seq` z)
f x = (U.zipWith (+) x) . id
f x y = U.zipWith (+) x y
Version 1 is the same as the original, version 2 runs in 0.111 seconds, and versions 3-6 run in under 0.09 seconds (and INLINING f doesn't change anything).
So the order-of-magnitude slowdown appears to be due to laziness since force helped, but I'm not sure where the laziness is coming from. Unboxed types aren't allowed to be lazy, right?
I tried writing a strict version of iterate, thinking the vector itself must be lazy:
{-# INLINE iterate' #-}
iterate' :: (NFData a) => (a -> a) -> a -> [a]
iterate' f x = x `seq` x : iterate' f (f x)
but with the point-free version of f, this didn't help at all.
I also noticed something else, which could be just a coincidence and red herring:
If I make f polymorphic (with any of the three signatures above), even with a "fast" definition, it slows back down...to exactly 1.7 seconds. This makes me wonder if the original problem is perhaps due to (lack of) type inference, even though everything should be inferred nicely.
Here are my questions:
Where is the slowdown coming from?
Why does composing with force help, but using a strict iterate doesn't?
Why is U.force worse than DeepSeq.force? I have no idea what U.force is supposed to do, but it sounds a lot like DeepSeq.force, and seems to have a similar effect.
Why does polymorphism slow me down an order of magnitude independent of where the polymorphism is (i.e. in the vector type, in the Num type, or both)?
Why are versions 5 and 6, neither of which should have any strictness implications at all, just as fast as a strict function?
As @kqr pointed out, the problem doesn't seem to be strictness. So something about the way I write the function is causing the generic zipWith to be used rather than the Unboxed-specific version. Is this just a fluke between GHC and the Vector library, or is there something more general that can be said here?
While I don't have the definitive answer you want, there are two things that might help you along.
The first thing is that x `seq` x is, both semantically and computationally, the same thing as just x. The wiki says about seq:
A common misconception regarding seq is that seq x "evaluates" x. Well, sort of. seq doesn't evaluate anything just by virtue of existing in the source file, all it does is introduce an artificial data dependency of one value on another: when the result of seq is evaluated, the first argument must also (sort of; see below) be evaluated.
As an example, suppose x :: Integer, then seq x b behaves essentially like if x == 0 then b else b – unconditionally equal to b, but forcing x along the way. In particular, the expression x `seq` x is completely redundant, and always has exactly the same effect as just writing x.
What the first paragraph says is that writing seq a b doesn't mean that a will magically get evaluated this instant, it means that a will get evaluated as soon as b needs to be evaluated. This might occur later in the program, or maybe never at all. When you view it in that light, it is obvious that seq x x is a redundancy, because all it says is, "evaluate x as soon as x needs to be evaluated." Which of course is what would happen anyway if you had just written x.
This has two implications for you:
Your "strict" iterate' function isn't really any stricter than it would be without the seq. In fact, I have a hard time imagining how the iterate function could become any stricter than it already is. You can't make the tail of the list strict, because it is infinite. The main thing you can do is force the "accumulator", f x, but doing so doesn't give any significant performance increase on my system.[1]
Scratch that. Your strict iterate' does exactly the same thing as my bang pattern version. See the comments.
Writing (\z -> z `seq` z) does not give you a strict identity function, which I assume is what you were going for. In fact, the common identity function is as strict as you'll get – it will evaluate its result as soon as it is needed.
However, I peeked at the core GHC generates for
U.zipWith (+) y
and
U.zipWith (+) y . id
and there is only one big difference that my untrained eye can spot. The first one uses just a plain Data.Vector.Generic.zipWith (here's where your polymorphism coincidence might come into play – if GHC chooses a generic zipWith it will of course perform as if the code was polymorphic!) while the latter has exploded this single function call into almost 90 lines of state monad code and unpacked machine types.
The state monad code looks almost like the loops and destructive updates you would write in an imperative language, so I assume it's tailored pretty well to the machine it's running on. If I wasn't in such a hurry, I would take a longer look to see more exactly how it works and why GHC suddenly decided it needed a tight loop. I have attached the generated core as much for myself as anyone else who wants to take a look.[2]
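One way to test that theory, which I have not benchmarked myself: keep the polymorphic signature but ask GHC for a monomorphic copy with a SPECIALIZE pragma, e.g.
{-# SPECIALIZE f :: U.Vector Int64 -> U.Vector Int64 -> U.Vector Int64 #-}
f :: (U.Unbox r, Num r) => U.Vector r -> U.Vector r -> U.Vector r
f = U.zipWith (+)
If the slowdown really comes from falling back to the generic, unspecialised zipWith, the pragma should recover the fast version even with the polymorphic type.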
[1]: Forcing the accumulator along the way: (This is what you already do, I misunderstood the code!)
{-# LANGUAGE BangPatterns #-}
iterate' f !x = x : iterate f (f x)
[2]: What core U.zipWith (+) y . id gets translated into.
