After some experimentation and search, I came up with the following definition:
emcd' :: Integer -> Integer -> (Integer, Integer, Integer)
emcd' a 0 = (a, 1, 0)
emcd' a b =
  let (g, t, s) = emcd' b r
  in  (g, s, t - (q * s))
  where
    (q, r) = divMod a b
What is the meaning of the expression t - (q * s)?
I've tried evaluating it by hand; even though I arrived at the correct result (1, -4, 15), I can't see why that expression produces the right value for t.
There is a famous method for calculating s and t in as + bt = gcd(a, b). In the process of finding the gcd, I get several equations.
By reversing the steps in the Euclidean Algorithm, it is possible to find these integers s and t. The resulting equations look like the expression t - (q * s); however, I can't figure out the exact process.
Since (q, r) = divMod a b, we have the equation
a = qb + r
and because of the recursive call, we have:
tb + sr = g
Substituting a - qb for r in the second equation, we get
tb + s(a-qb) = g
tb + sa - qsb = g
sa + (t-qs)b = g
This explains why s and t - q*s are good choices to return.
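To make the recurrence concrete, here is a quick GHCi check. The inputs 56 and 15 are an illustrative guess of mine (the question doesn't state which inputs were used), but they are one pair that produces the (1, -4, 15) mentioned above:
ghci> emcd' 56 15
(1,-4,15)
ghci> 56 * (-4) + 15 * 15   -- the Bézout identity: a*s + b*t == g
1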
I'm still trying to develop an intuition for pullbacks (from category theory), limits, and universal properties, and I'm not quite seeing their usefulness, so maybe you could help shed some light on that, as well as verify my trivial example?
The following is intentionally verbose: the pullback should be (p, p1, p2), and (q, q1, q2) is one example of a non-universal object to "test" the pullback against, to see if things commute properly.
-- MY DIAGRAM, A -> B <- C
type A = Int
type C = Bool
type B = (A, C)
f :: A -> B
f x = (x, True)
g :: C -> B
g x = (1, x)
-- PULLBACK, (p, p1, p2)
type PL = Int
type PR = Bool
type P = (PL, PR)
p = (1, True) :: P
p1 = fst
p2 = snd
-- (g . p2) p == (f . p1) p
-- TEST CASE
type QL = Int
type QR = Bool
type Q = (QL, QR)
q = (152, False) :: Q
q1 :: Q -> A
q1 = ((+) 1) . fst
q2 :: Q -> C
q2 = ((||) True) . snd
u :: Q -> P
u (_, _) = (1, True)
-- (p2 . u == q2) && (p1 . u == q1)
I was just trying to come up with an example that fits the definition, but it doesn't seem particularly useful. When would I "look for" a pullback, or use one?
I'm not sure Haskell functions are the best context
in which to talk about pull-backs.
The pull-back of A -> B and C -> B can be identified with a subset of A x C,
and subset relationships are not directly expressible in Haskell's
type system. In your specific example the pull-back would be
the single element (1, True) because x = 1 and b = True are
the only values for which f(x) = g(b).
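For illustration, a quick check with the question's f and g confirms they do agree there:
ghci> f 1 == g True
True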
Some good "practical" examples of pull-backs may be found
starting on page 41 of Category Theory for Scientists
by David I. Spivak.
Relational joins are the archetypal example of pull-backs
which occur in computer science. The query:
SELECT ...
FROM A, B
WHERE A.x = B.y
selects pairs of rows (a,b) where a is a row from table A
and b is a row from table B and where some function of a
equals some other function of b. In this case the functions
being pulled back are f(a) = a.x and g(b) = b.y.
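In Haskell terms, a minimal sketch of that same idea (my own illustration, not from the answer): over finite carrier lists, the pullback of two functions into a common type is exactly this kind of join.
-- All pairs (x, y) with f x == g y: a relational join, and the pullback
-- of f and g when the carriers are the full lists as and cs.
pullback :: Eq b => (a -> b) -> (c -> b) -> [a] -> [c] -> [(a, c)]
pullback f g as cs = [ (x, y) | x <- as, y <- cs, f x == g y ]
For instance, pullback fst fst [(1,'a'),(2,'b')] [(2,'x'),(3,'y')] gives [((2,'b'),(2,'x'))], pairing up exactly the rows whose keys match.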
Another interesting example of a pullback is type unification in type inference. You get type constraints from several places where a variable is used, and you want to find the tightest unifying constraint. I mention this example in my blog.
I've written some code that's meant to integrate a function numerically using the trapezoidal rule. It works, but the answer it produces has the wrong sign. Why might that be?
The code is:
integration :: (Double -> Double) -> Double -> Double -> Double
integration f a b = h * (f a + f b + partial_sum)
  where
    h = (b - a) / 1000
    most_parts = map f (points (1000-1) h)
    partial_sum = sum most_parts

points :: Double -> Double -> [Double]
points x1 x2
  | x1 <= 0   = []
  | otherwise = (x1*x2) : points (x1-1) x2
Trapezoidal rule
The code is probably inelegant, but I'm only a student of Haskell and would like to deal with the current problem first, and with coding-style matters after that.
Note: This answer is written in literate Haskell. Save it with the .lhs extension and load it in GHCi to test the solution.
Finding the culprit
First of all, let's take a look at integration. In its current form, it only sums the function values f x. Even though the factors aren't correct at the moment, the overall approach is fine: you evaluate f at the grid points. However, we can use the following test to verify that there's something wrong:
ghci> integration (\x -> if x >= 10 then 1 else (-1)) 10 15
-4.985
Wait a second. f isn't negative anywhere on [10,15], yet the result is negative. This suggests that you're using the wrong grid points.
Grid points revisited
Even though you've linked the article, let's have a look at an exemplary use of the trapezoidal rule (public domain, original file by Oleg Alexandrov):
Although this doesn't use a uniform grid, let's suppose that the 6 grid points are equidistant with grid distance h = (b - a) / 5. What are the x coordinates of those points?
x_0 = a + 0 * h (== a)
x_1 = a + 1 * h
x_2 = a + 2 * h
x_3 = a + 3 * h
x_4 = a + 4 * h
x_5 = a + 5 * h (== b)
If we set a = 10 and b = 15 (and therefore h = 1), we should end up with [10, 11, 12, 13, 14, 15]. Let's check your points. In this case, you would use points 5 1 and end up with [5,4,3,2,1].
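Indeed, checking your definition in GHCi:
ghci> points 5 1
[5.0,4.0,3.0,2.0,1.0]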
And there's the error. points doesn't respect the boundary. We can easily fix this by using pointsWithOffset:
> points :: Double -> Double -> [Double]
> points x1 x2
>   | x1 <= 0   = []
>   | otherwise = (x1*x2) : points (x1-1) x2
>
> pointsWithOffset :: Double -> Double -> Double -> [Double]
> pointsWithOffset x1 x2 offset = map (+offset) (points x1 x2)
That way, we can still use your current points definition to generate grid points from x1 to 0 (almost). If we use integration with pointsWithOffset, we end up with
integration :: (Double -> Double) -> Double -> Double -> Double
integration f a b = h * (f a + f b + partial_sum)
  where
    h = (b - a) / 1000
    most_parts = map f (pointsWithOffset (1000-1) h a)
    partial_sum = sum most_parts
Tying up loose ends
However, this doesn't take into account that you use all inner points twice in the trapezoid rule. If we add the factors, we end up with
> integration :: (Double -> Double) -> Double -> Double -> Double
> integration f a b =
>   h / 2 * (f a + f b + 2 * partial_sum)
> --  ^^^                ^^^
>   where
>     h = (b - a) / 1000
>     most_parts = map f (pointsWithOffset (1000-1) h a)
>     partial_sum = sum most_parts
This yields the correct value for our test function above.
Exercise
Your current version only supports 1000 grid points. Add an Int argument so that one can change the number of grid points:
integration :: Int -> (Double -> Double) -> Double -> Double -> Double
integration n f a b = -- ...
Furthermore, try to write points in different ways, for example go from a to b, use takeWhile and iterate, or even a list comprehension.
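One possible shape for such a generalisation, as a sketch rather than a definitive solution (I've named it integrationN to avoid clashing with the definition above, and built the grid with a list comprehension):
integrationN :: Int -> (Double -> Double) -> Double -> Double -> Double
integrationN n f a b = h / 2 * (f a + f b + 2 * partial_sum)
  where
    h           = (b - a) / fromIntegral n
    grid        = [a + fromIntegral i * h | i <- [1 .. n - 1]]  -- inner grid points only
    partial_sum = sum (map f grid)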
Yes, it was indeed the points, plus you had some factors wrong (the inner points are multiplied by 2). This is the fixed version of your code:
integration :: (Double -> Double) -> Double -> Double -> Double
integration f a b = h * (f a + f b + innerSum) / 2
  where
    h = (b - a) / 1000
    innerPts = map ((2*) . f . (a+)) (points (1000-1) h)
    innerSum = sum innerPts

points :: Double -> Double -> [Double]
points i x
  | i <= 0    = []
  | otherwise = (i*x) : points (i-1) x
which gives sensible approximations (with 1000 points):
λ> integration (const 2) 1 2
2.0
λ> integration id 1 2
1.5
λ> integration (\x -> x*x) 1 2
2.3333334999999975
λ> 7/3
2.3333333333333335
Right now I am porting my mathematical solution from C# to Haskell, learning Haskell in the process. I have the following code for the Thompson algorithm:
xi[N] = a[N] / c[N];
eta[N] = f[N] / c[N];

for (int i = N - 1; i > 0; i--)
{
    var cbxip = (c[i] - b[i] * xi[i + 1]);
    xi[i] = a[i] / cbxip;
    eta[i] = (f[i] + b[i] * eta[i + 1]) / cbxip;
}

{
    int i = 0;
    var cbxip = (c[i] - b[i] * xi[i + 1]);
    eta[i] = (f[i] + b[i] * eta[i + 1]) / cbxip;
}
How do I do it in Haskell?
I found info on array initialization, but I have several problems with it.
Say, I wrote the following code:
xi   = [a[i] / (c[i] - b[i] * xi[i + 1]) | i <- [1..N-1]] ++ [a[N] / c[N]]
etha = [(f[i] + b[i] * etha[i + 1]) / (c[i] - b[i] * xi[i + 1]) | i <- [0..N-1]] ++ [f[N] / c[N]]
My problems are the following:
How do I specify that the array has to be initialized starting from the right? Do I even need to do so, or will Haskell figure it out by itself? If the latter, how can it do that? Isn't it just a black box like [f(i) | i <- [a..b]] to the compiler?
(Most problematic) For every i in [1..N-1], the part (c[i] - b[i] * xi[i + 1]) is going to be evaluated twice. How can I fix this? Mapping it to some other array beforehand would cost memory, and is impossible anyway because I don't have the xi array yet.
I thought of something like a simultaneous mapping, but I am confused about how to apply it to array initialization.
I would probably avoid using list comprehensions until you become really familiar with solving problems through recursion. Haskell is very different from C# in that you don't have "arrays" as such that can be randomly accessed and updated: you can't pre-allocate that space up front, because allocation is a side effect. Instead, consider everything to be a linked list, and use recursion to iterate through it.
If we start with a top-down approach, we have a bunch of lists of numbers, and we need a function to iterate through them. If we passed these separately we would end up with a function signature like [n] -> [n] -> [n] -> [n] -> [n] -> ... This is probably not a good idea considering they all seem to be the same size, N. Instead, we can use a tuple (or pair of tuples) to contain them, e.g.:
thompson :: Fractional n => [(n, n, n, n, n, n)] -> [(n, n)]
thompson [] = []   -- pattern to terminate recursion for empty lists
-- these variables are equivalent to your a[i], etc. in C#
thompson ((a, b, c, f, xi, eta):_) = ?
If we are duplicating your C# exactly, we probably also want patterns for the final one and two elements of the list, since it seems that each iteration needs to access both the current and the next element.
-- handle the final one or two elements
thompson ((a, _, c, f, xi, eta):[]) = [(a / c, f / c)]
thompson ((a0, b0, c0, f0, xi0, eta0):(_,_,_,_,xi1,eta1):[]) = ?
-- handle the regular case
thompson ((a0, b0, c0, f0, xi0, eta0):(a1,b1,c1,f1,xi1,eta1):tail) = ?
Once you have the overall iterative structure, it should become more obvious how to implement what's in the loop. The loop is basically a function which takes one of these tuples, plus a tuple for the next xi/eta and does some calculation, returning a new tuple for xi/eta (or in the final case, just eta). The a,b,c,f appear to not change.
doCalc1 :: Fractional n => (n, n, n, n, n, n) -> (n, n) -> (n, n)
doCalc1 (a, b, c, f, xi0, eta0) (xi1, eta1) = (a / cbxip, (f + b * eta1) / cbxip)
  where cbxip = c - b * xi1

doCalc2 :: Fractional n => (n, n, n, n, n, n) -> (n, n) -> n
doCalc2 (a, b, c, f, xi0, eta0) (xi1, eta1) = (f + b * eta1) / cbxip
  where cbxip = c - b * xi1
Now we just need to update thompson to call doCalc1/doCalc2, and recursively call itself with the tail.
thompson (head:next@(_,_,_,_,xi,eta):[])
  = (xi, doCalc2 head (xi, eta)) : thompson [next]
thompson (head:next@(_,_,_,_,xi,eta):tail)
  = doCalc1 head (xi, eta) : thompson (next:tail)
At the page http://www.haskell.org/haskellwiki/Pointfree#Tool_support, it talks about the (->) a monad.
What is this monad? The use of symbols makes it hard to google.
This is a Reader monad. You can think of it as
type Reader r = (->) r   -- Reader r a == (->) r a == r -> a

instance Monad (Reader r) where
  return a = const a
  m >>= f  = \r -> f (m r) r
And do computations like:
double :: Num r => Reader r r
double = do
  v <- id
  return (2*v)
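For example, in GHCi (this already works with the (->) r Monad instance that ships with base; a check added for illustration):
ghci> double 21
42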
It is the function monad, and it's a bit weird to understand. It's also sometimes called the Reader monad, by the way. I think the best way to illustrate how it works is through an example:
f1 :: Double -> Double
f1 x = 10 * x + x ** 2 + 3 * x ** 3

f2 :: Double -> Double
f2 = do
  x1 <- (10 *)
  x2 <- (** 2)
  x3 <- (** 3)
  return $ x1 + x2 + 3 * x3
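For instance, evaluating both at the same input (a quick check added for illustration):
ghci> (f1 2, f2 2)
(48.0,48.0)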
If you try both of these out, you'll see that you get the same output. So what exactly is going on? When you "extract" a value from a function with <-, what you get can be considered its "return value". I put quotes around it because, when you return a value in this monad, the thing you build is itself a function.
In an example like this, the argument to f2 is passed implicitly to each of the functions on the right-hand side of <-. This can be fairly useful when you have a lot of subexpressions that take the same argument. As the Reader monad, it is generally used to supply read-only config values.
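To illustrate that last point, here is a small sketch of my own (the Config type and its fields are invented for this example), using the function monad to pass a read-only configuration around:
-- A hypothetical read-only configuration.
data Config = Config { verbose :: Bool, userName :: String }

-- Each field accessor has type Config -> a, so the whole block lives in
-- the (->) Config monad; the Config value is threaded around implicitly.
greeting :: Config -> String
greeting = do
  v <- verbose
  n <- userName
  return $ if v then "Hello, " ++ n ++ "!" else "Hi"
For example, greeting (Config True "World") evaluates to "Hello, World!".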
Given:
- Haskell
- A complex-valued function df/dz defined on the complex plane U (let's say z is a Complex Double).
- A point z1 from U at which df/dz is defined.
Question:
How do I get the value of the function f(z), of which df/dz is the derivative, at the point z1?
I.e. how do I restore the value of the original function, given only its derivative, on the complex plane?
This question is somewhat related to my previous question about calculating integrals of complex functions, but they are about different things. Here I am interested not in calculating some scalar value, but in finding the original function given its derivative. It's essentially calculating the indefinite integral of this derivative.
(Runge–Kutta in Haskell)
You can use a numeric solver like Runge-Kutta:
-- define 4th order Runge-Kutta map (RK4)
rk4 :: Floating a => (a -> a) -> a -> a -> a
rk4 f h x = x + (1/6) * (k1 + 2*k2 + 2*k3 + k4)
  where k1 = h * f x
        k2 = h * f (x + 0.5*k1)
        k3 = h * f (x + 0.5*k2)
        k4 = h * f (x + k3)
Here the function signature uses Floating, but you can use RealFloat instead (so you can use Runge-Kutta with Complex numbers).
Complete example:
Prelude> import Data.Complex
Prelude Data.Complex> let rk4 f h x = x + (1/6) * (k1 + 2*k2 + 2*k3 + k4) where {k1 = h * f(x);k2 = h * f (x + 0.5*k1);k3 = h * f (x + 0.5*k2);k4 = h * f (x + k3)}
Prelude Data.Complex> let f z = 2 * z
Prelude Data.Complex> rk4 f (0.1 :+ 0.2) (0.3 :+ 1.2)
(-0.2334199999999999) :+ 1.4925599999999999
Prelude Data.Complex>
On the other hand, @leftaroundabout suggests extending that behavior to VectorSpace (great! of course! :D)
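If you need the value after many small steps rather than one, a minimal sketch of my own (assuming the rk4 step above) is simply to iterate it:
-- Take n successive RK4 steps of size h starting from x0,
-- reusing the single-step rk4 defined above.
rk4Steps :: Floating a => Int -> (a -> a) -> a -> a -> a
rk4Steps n f h x0 = iterate (rk4 f h) x0 !! n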