Implementing a catamorphism for Expression Trees - haskell

I am trying to implement an expression tree in Haskell as follows:
data ExprTr a b
  = Variable a
  | Constant b
  | Add (ExprTr a b) (ExprTr a b)
  | Mul (ExprTr a b) (ExprTr a b)
  deriving (Eq, Show)
And I would like to be able to implement operations on it using a catamorphism.
Currently, this is the function I got:
cataTr f _ _ _ (Variable i) = f i
cataTr f g _ _ (Constant i) = g i
cataTr f g h i (Add e1 e2) = g (cataTr f g h i e1) (cataTr f g h i e2)
cataTr f g h i (Mul e1 e2) = h (cataTr f g h i e1) (cataTr f g h i e2)
However, whenever I try to use it with an expression of type ExprTr String Integer I get compiler errors. For example, running cataTr id id id id (Variable "X") produces the following compiler error instead of returning Variable "X".
Couldn't match type 'Integer' with '[Char]'
Expected type: 'ExprTr String String'
Actual type: 'ExprTr String Integer'
I am not sure how to proceed. Furthermore, I would appreciate some suggestions on how to type such a function as cataTr to make it easier to debug later.
As I am fairly new to Haskell, I would like to understand how to approach such situations from 'first principles' instead of using a library to generate the catamorphism for myself.

This is expected behavior.
I guess you made a typo in the question, since h and i should be the functions used for Add and Mul:
cataTr f _ _ _ (Variable i) = f i
cataTr f g _ _ (Constant i) = g i
cataTr f g h i (Add e1 e2) = h (cataTr f g h i e1) (cataTr f g h i e2)
cataTr f g h i (Mul e1 e2) = i (cataTr f g h i e1) (cataTr f g h i e2)
or likely more elegant:
cataTr f g h i = go
  where go (Variable i) = f i
        go (Constant i) = g i
        go (Add e1 e2)  = h (go e1) (go e2)
        go (Mul e1 e2)  = i (go e1) (go e2)
or as #DanielWagner suggests, with a case expression:
cataTr f g h i = go
  where go v = case v of
          Variable i -> f i
          Constant i -> g i
          Add e1 e2  -> h (go e1) (go e2)
          Mul e1 e2  -> i (go e1) (go e2)
Nevertheless, you cannot call cataTr with id as the third and fourth parameter: those arguments need to be functions of two parameters. Furthermore, if a and b are different types, the first two parameters cannot both be id either, since f maps an a to the result type and g maps a b to the same result type.
You can for example pass the data constructor to construct an identity function with:
cataTr Variable Constant Add Mul (Variable "X")
this will thus yield Variable "X" again, or you can for example map all Variables to 0 with const 0, and use id, (+) and (*) to evaluate an expression:
cataTr (const 0) id (+) (*) (Variable "X")
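Since the question also asks how to give cataTr a type, here is a plausible signature (a sketch; the result type c is whatever all four handler functions agree on):

cataTr :: (a -> c)       -- handle Variable
       -> (b -> c)       -- handle Constant
       -> (c -> c -> c)  -- combine the results of Add's subtrees
       -> (c -> c -> c)  -- combine the results of Mul's subtrees
       -> ExprTr a b
       -> c

With this signature the two examples above work out: the constructor version instantiates c to ExprTr a b, while the evaluator version instantiates c to Integer.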

Related

Maybe Int expression using unique data type

I wrote my own data type to express basic math (addition, multiplication, etc.) and it works; however, when I try to turn it into a Maybe computation, none of the math works. I believe it's a syntax error, but I've tried extra parentheses and so on and I can't figure it out. Usually Maybe is easy, but I don't understand why this keeps throwing an error.
This is the data type I created (with examples):
data Math = Val Int
          | Add Math Math
          | Sub Math Math
          | Mult Math Math
          | Div Math Math
          deriving Show
ex1 :: Math
ex1 = Add (Val 2) (Val 3)
ex2 :: Math
ex2 = Mult (Val 2) (Val 3)
ex3 :: Math
ex3 = Div (Val 3) (Val 0)
Here is the code. The only Nothing return should be a division by zero.
expression :: Math -> Maybe Int
expression (Val n) = Just n
expression (Add e1 e2) = Just (expression e1) + (expression e2)
expression (Sub e1 e2) = Just (expression e1) - (expression e2)
expression (Mult e1 e2) = Just (expression e1) * (expression e2)
expression (Div e1 e2)
  | e2 /= 0   = Just (expression e1) `div` (expression e2)
  | otherwise = Nothing
I get the same error for every individual mathematical operation, even if I delete the others, so I'm certain it's syntax. The error makes it seem like a Maybe within a Maybe, but when I try that (e1 /= 0 && e2 /= 0 = Just (Just (expression e1) `div` (expression e2))) I get the same error:
* Couldn't match type `Int' with `Maybe Int'
Expected type: Maybe (Maybe Int)
Actual type: Maybe Int
* In the second argument of `div', namely `(expression e2)'
In the expression: Just (expression e1) `div` (expression e2)
In an equation for `expression':
expression (Div e1 e2)
| e1 /= 0 && e2 /= 0 = Just (expression e1) `div` (expression e2)
| otherwise = Nothing
|
56 | | e1 /= 0 && e2 /= 0 = Just (expression e1) `div` (expression e2)
| ^^^^^^^^^
What am I missing? It's driving me crazy.
So the first issue is precedence. Instead of writing:
Just (expression e1) * (expression e2)
You probably want:
Just (expression e1 * expression e2)
The second issue is the types. Take a look at the type of (*), for instance:
>>> :t (*)
(*) :: Num a => a -> a -> a
It says, for some type a that is a Num, it takes two as and returns one a. Specialised to Int, that would be:
(*) :: Int -> Int -> Int
But expression returns a Maybe Int! So we need some way to multiply with Maybes. Let's write the function ourselves:
multMaybes :: Maybe Int -> Maybe Int -> Maybe Int
multMaybes Nothing _ = Nothing
multMaybes _ Nothing = Nothing
multMaybes (Just x) (Just y) = Just (x * y)
So if either side of the multiplication has failed (i.e. you found a divide-by-zero), the whole thing will fail. Now, we need to do this once for every operator:
addMaybes Nothing _ = Nothing
addMaybes _ Nothing = Nothing
addMaybes (Just x) (Just y) = Just (x + y)
subMaybes Nothing _ = Nothing
subMaybes _ Nothing = Nothing
subMaybes (Just x) (Just y) = Just (x - y)
And so on. But we can see there's a lot of repetition here. Luckily, there's a function that does this pattern already: liftA2.
multMaybes = liftA2 (*)
addMaybes = liftA2 (+)
subMaybes = liftA2 (-)
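Since every binary case follows the same shape, that shape can also be factored out into one helper (a small sketch; the name binOp is mine, not from the question):

import Control.Applicative (liftA2)

-- Evaluate both operands and combine the two Ints if both succeed;
-- a Nothing (from a division by zero) on either side propagates out.
binOp :: (Int -> Int -> Int) -> Math -> Math -> Maybe Int
binOp op e1 e2 = liftA2 op (expression e1) (expression e2)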
Finally, there are two more small problems. First, you say:
expression (Div e1 e2)
| e2 /= 0 = Just (expression e1) `div` (expression e2)
But e2 isn't an Int! It has type Math. You probably want to check whether the result of the recursive call is 0.
The second problem is that you're unnecessarily wrapping things in Just: we can remove one layer.
After all of that, we can write your function like this:
import Control.Applicative (liftA2)

expression :: Math -> Maybe Int
expression (Val n) = Just n
expression (Add e1 e2) = liftA2 (+) (expression e1) (expression e2)
expression (Sub e1 e2) = liftA2 (-) (expression e1) (expression e2)
expression (Mult e1 e2) = liftA2 (*) (expression e1) (expression e2)
expression (Div e1 e2)
  | r2 /= Just 0 = liftA2 div (expression e1) r2
  | otherwise    = Nothing
  where r2 = expression e2
There are two problems here:
Just (expression e1) + (expression e2)
is interpreted as:
(Just (expression e1)) + (expression e2)
So that means you have wrapped the left value in a Just while the right one is not wrapped, which does not make much sense.
Secondly, both expression e1 and expression e2 have type Maybe Int, so you cannot simply add the two together. We could handle that by pattern matching on both results.
Fortunately there is a more elegant solution: we can make use of liftM2 :: Monad m => (a -> b -> c) -> m a -> m b -> m c for most of the patterns. For Maybe, liftM2 takes a function f :: a -> b -> c and two Maybes; if both are Justs, it applies the function to the wrapped values and wraps the result in a Just again.
As for the division case, we first obtain the result of the denominator with the expression function. If that is a Just that is not equal to zero, we can use fmap :: Functor f => (a -> b) -> f a -> f b to map over the numerator's result (which is of course also a Maybe):
import Control.Monad (liftM2)

expression :: Math -> Maybe Int
expression (Val n) = Just n
expression (Add e1 e2) = liftM2 (+) (expression e1) (expression e2)
expression (Sub e1 e2) = liftM2 (-) (expression e1) (expression e2)
expression (Mult e1 e2) = liftM2 (*) (expression e1) (expression e2)
expression (Div e1 e2)
  | Just v2 <- expression e2, v2 /= 0 = fmap (`div` v2) (expression e1)
  | otherwise = Nothing
or we can, like #RobinZigmond says, use (<$>) :: Functor f => (a -> b) -> f a -> f b and (<*>) :: Applicative f => f (a -> b) -> f a -> f b:
expression :: Math -> Maybe Int
expression (Val n) = Just n
expression (Add e1 e2) = (+) <$> expression e1 <*> expression e2
expression (Sub e1 e2) = (-) <$> expression e1 <*> expression e2
expression (Mult e1 e2) = (*) <$> expression e1 <*> expression e2
expression (Div e1 e2)
  | Just v2 <- expression e2, v2 /= 0 = (`div` v2) <$> expression e1
  | otherwise = Nothing
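Either version behaves as intended on the question's examples (a quick GHCi check):

>>> expression ex1
Just 5
>>> expression ex2
Just 6
>>> expression ex3
Nothing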

A traversal as data

I heard about this construction which is loosely described as “a traversal represented in data, applied to some structure, without the need for the applicative”
It can be defined as:
data X a b r
  = Done r
  | Step a (X a b (b -> r))
A word description would be as follows:
the type X a b r describes the shape of a structure
which contains things of type a
and for each a you get the opportunity to produce something of type b
and provided you do that for each a,
you get something of type r.
Thus a “traversal” of a list, [a], has type X a b [b], because if you can turn each a of the list into a b then you get a [b].
My question is: what is this thing called? Is there a reference to more information about it?
Example usage:
instance Functor (X a b) where
  fmap f (Done r)      = Done (f r)
  fmap f (Step a next) = Step a (fmap (f .) next)

f :: [a] -> X a b [b]
f []     = Done []
f (a:as) = Step a (fmap (flip (:)) (f as))
g :: Applicative f => (a -> f b) -> X a b r -> f r
g f (Done r) = pure r
g f (Step a next) = g f next <*> f a
More generally:
instance Applicative (X a b) where
  pure x = Done x
  Done f      <*> y = fmap f y
  Step a next <*> y = Step a (fmap flip next <*> y)

t :: Traversable t => t a -> X a b (t b)
t = traverse (\a -> Step a (Done id))
And, assuming I haven’t made any errors, we should find that:
g h . t == traverse h
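As a quick sanity check with Maybe as the applicative (halve is just a throwaway example function; t and g are the definitions above):

halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

-- g halve (t [2, 4, 6]) == Just [1, 2, 3] == traverse halve [2, 4, 6]
-- g halve (t [2, 3])    == Nothing        == traverse halve [2, 3]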
Edit: I’ve thought about this some more. There is something this doesn’t have which a traversal has: a traversal can split up the computation into something that isn’t “one at a time,” for example to traverse a binary tree one can traverse the left and right half “in parallel.” Here is a structure that I think gives the same effect:
data Y a b r
  = Done r
  | One a (b -> r)
  | forall s t. Split (Y a b s) (Y a b t) (s -> t -> r)
(Slightly vague syntax, as I don't remember it exactly and don't want to write this as a GADT.)
f1 :: X a b r -> Y a b r
f1 (Done x) = Done x
f1 (Step a next) = Split (One a id) (f1 next) (flip ($))
f2 :: Y a b r -> X a b r
f2 (Done x) = Done x
f2 (One a f) = Step a (Done f)
f2 (Split x y f) = f <$> f2 x <*> f2 y

Recursion scheme for symbolic differentiation

Following terminology from this excellent series, let's represent an expression such as (1 + x^2 - 3x)^3 by a Term Expr, where the data types are the following:
data Expr a = Var
            | Const Int
            | Plus a a
            | Mul a a
            | Pow a Int
            deriving (Functor, Show, Eq)
data Term f = In { out :: f (Term f) }
Is there a recursion scheme suitable for performing symbolic differentiation? I feel like it's almost a Futumorphism specialized to Term Expr, i.e. futu deriveFutu for an appropriate function deriveFutu:
data CoAttr f a
  = Automatic a
  | Manual (f (CoAttr f a))

futu :: Functor f => (a -> f (CoAttr f a)) -> a -> Term f
futu f = In <<< fmap worker <<< f where
  worker (Automatic a) = futu f a
  worker (Manual g)    = In (fmap worker g)
This looks pretty good, except that the underscored variables are Terms instead of CoAttrs:
deriveFutu :: Term Expr -> Expr (CoAttr Expr (Term Expr))
deriveFutu (In Var)        = Const 1
deriveFutu (In (Const _))  = Const 0
deriveFutu (In (Plus x y)) = Plus (Automatic x) (Automatic y)
deriveFutu (In (Mul x y))  = Plus (Manual (Mul (Automatic x) (Manual _y)))
                                  (Manual (Mul (Manual _x) (Automatic y)))
deriveFutu (In (Pow x c))  = Mul (Manual (Const c))
                                 (Manual (Mul (Manual (Pow _x (c-1))) (Automatic x)))
The version without recursion schemes looks like this:
derive :: Term Expr -> Term Expr
derive (In (Var)) = In (Const 1)
derive (In (Const _)) = In (Const 0)
derive (In (Plus x y)) = In (Plus (derive x) (derive y))
derive (In (Mul x y)) = In (Plus (In (Mul (derive x) y)) (In (Mul x (derive y))))
derive (In (Pow x c)) = In (Mul (In (Const c)) (In (Mul (In (Pow x (c-1))) (derive x))))
As an extension to this question, is there a recursion scheme for differentiating and eliminating "empty" Exprs such as Plus (Const 0) x that arise as a result of differentiation -- in one pass over the data?
Look at the differentiation rule for product:
(u v)' = u' v + v' u
What do you need to know to differentiate a product? You need to know the derivatives of the subterms (u', v'), as well as their values (u, v).
This is exactly what a paramorphism gives you.
-- needs: import Control.Arrow ((&&&)) and the LambdaCase extension
para :: Functor f
     => (f (b, Term f) -> b)
     -> Term f -> b
para g (In a) = g $ (para g &&& id) <$> a

derivePara :: Term Expr -> Term Expr
derivePara = para $ In . \case
  Var      -> Const 1
  Const _  -> Const 0
  Plus x y -> Plus (fst x) (fst y)
  Mul x y  -> Plus
    (In $ Mul (fst x) (snd y))
    (In $ Mul (snd x) (fst y))
  Pow x c  -> Mul
    (In (Const c))
    (In (Mul
      (In (Pow (snd x) (c-1)))
      (fst x)))
Inside the paramorphism, fst gives you access to the derivative of a subterm, while snd gives you the term itself.
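For instance, differentiating x^2 (a small worked check, assuming the definitions above):

-- x^2 is In (Pow (In Var) 2); in the Pow case the paramorphism sees
--   x = (In (Const 1), In Var)   -- (derivative of the subterm, the subterm itself)
-- so derivePara (In (Pow (In Var) 2)) builds 2 * (x^1 * 1), i.e.
--   In (Mul (In (Const 2)) (In (Mul (In (Pow (In Var) 1)) (In (Const 1)))))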
As an extension to this question, is there a recursion scheme for differentiating and eliminating "empty" Exprs such as Plus (Const 0) x that arise as a result of differentiation -- in one pass over the data?
Yes, it's still a paramorphism. The easiest way to see this is to have smart constructors such as
plus :: Term Expr -> Term Expr -> Expr (Term Expr)
plus (In (Const 0)) (In x) = x
plus (In x) (In (Const 0)) = x
plus x y = Plus x y
and use them when defining the algebra. You could probably express this as some kind of para-cata fusion, too.
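Concretely, that could look like the following sketch (deriveSimplify is my name; it is just derivePara with the smart constructor used wherever a Plus was built):

deriveSimplify :: Term Expr -> Term Expr
deriveSimplify = para $ In . \case
  Var      -> Const 1
  Const _  -> Const 0
  Plus x y -> plus (fst x) (fst y)
  Mul x y  -> plus (In $ Mul (fst x) (snd y))
                   (In $ Mul (snd x) (fst y))
  Pow x c  -> Mul (In (Const c))
                  (In (Mul (In (Pow (snd x) (c-1))) (fst x)))

A mul smart constructor that drops factors of Const 1 (and collapses factors of Const 0) could be plugged in the same way.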

Implementing alpha equivalence - Haskell

So let me define a few things:
type Name = String

data Exp = Var Name
         | App Exp Exp
         | Lam Name Exp
         deriving (Eq, Show, Read)
I want to define alpha-equivalence; this is the skeleton I have so far:
alpha_eq :: Exp -> Exp -> Bool
-- The terms x and y are not alpha-equivalent, because they are not bound in a lambda abstraction
alpha_eq (Var x) (Var y) = False
alpha_eq (Lam x e1) (Lam y e2) = False
alpha_eq (App e1 e2) (App e3 e4) = False
For example Lam "x" (Var "x") and Lam "y" (Var "y") are both equivalent. However I'm both new and horrible at Haskell. Could someone give a clue of how to implement alpha_eq? One thing I thought about was to use Map Name Int so in this case I would have:
['x' -> 0] ['y' -> 0]
so in this case Map['x'] == Map['y']. But again, I'm bad at Haskell. Could someone give me a clue about how to implement it?
Yes, using a Map is the correct idea (though think about what the key and value types should be; with Map Name Int you need two extra arguments instead of one). You need to pass it as an argument of a helper function. I won't give the full implementation, since you asked for a clue only:
alpha_eq e1 e2 = alpha_eq' e1 e2 env0 where
  env0 = ???
  alpha_eq' (Var x) (Var y) env = ???
  alpha_eq' (Lam x e1) (Lam y e2) env = ???
  alpha_eq' (App e1 e2) (App e3 e4) env = ???
  -- you don't want to throw an error in all other cases
  alpha_eq' _ _ env = ???
You could also write a separate function subst :: Name -> Exp -> Exp -> Exp. Then the Lam case of alpha_eq becomes
alpha_eq :: Exp -> Exp -> Bool
...
alpha_eq (Lam x xb) (Lam y yb) = xb `alpha_eq` subst y (Var x) yb
...
Exercise: figure out the other alpha_eq cases and the implementation of subst.
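For reference, here is one possible way to fill in the environment-based approach from the first hint (a sketch of my own, so it spoils the exercise; it uses an association list of bound-name pairs rather than a Map):

import Data.Tuple (swap)

alpha_eq :: Exp -> Exp -> Bool
alpha_eq = go []
  where
    -- env pairs each bound name on the left with the bound name on the right,
    -- most recent binder first, so shadowing works out via lookup.
    go env (Var x) (Var y) =
      case (lookup x env, lookup y (map swap env)) of
        (Just y', Just x') -> y' == y && x' == x   -- both bound: binders must correspond
        (Nothing, Nothing) -> x == y               -- both free: names must be equal
        _                  -> False                -- one bound, one free
    go env (Lam x e1) (Lam y e2)   = go ((x, y) : env) e1 e2
    go env (App e1 e2) (App e3 e4) = go env e1 e3 && go env e2 e4
    go _ _ _ = False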

Conversion from lambda term to combinatorial term

Suppose there are some data types to express lambda and combinatorial terms:
data Lam α = Var α               -- v
           | Abs α (Lam α)       -- λv . e1
           | App (Lam α) (Lam α) -- e1 e2
           deriving (Eq, Show)

infixl 0 :#

data SKI α = V α            -- x
           | SKI α :# SKI α -- e1 e2
           | I              -- I
           | K              -- K
           | S              -- S
           deriving (Eq, Show)
There is also a function to get a list of lambda term's free variables:
fv ∷ Eq α ⇒ Lam α → [α]
fv (Var v) = [v]
fv (Abs x e) = filter (/= x) $ fv e
fv (App e1 e2) = fv e1 ++ fv e2
To convert a lambda term into a combinatory term, the abstraction elimination rules could be useful:
convert ∷ Eq α ⇒ Lam α → SKI α
-- 1) T[x] => x
convert (Var x) = V x
-- 2) T[(E₁ E₂)] => (T[E₁] T[E₂])
convert (App e1 e2) = convert e1 :# convert e2
-- 3) T[λx.E] => (K T[E])   (if x does not occur free in E)
convert (Abs x e) | x `notElem` fv e = K :# convert e
-- 4) T[λx.x] => I
convert (Abs x (Var y)) = if x == y then I else K :# V y
-- 5) T[λx.λy.E] => T[λx.T[λy.E]]   (if x occurs free in E)
convert (Abs x (Abs y e)) | x `elem` fv e = convert (Abs x (convert (Abs y e)))
-- 6) T[λx.(E₁ E₂)] => (S T[λx.E₁] T[λx.E₂])
convert (Abs x (App y z)) = S :# convert (Abs x y) :# convert (Abs x z)
convert _ = error ":["
This definition is not valid because of 5):
Couldn't match expected type `Lam α' with actual type `SKI α'
In the return type of a call of `convert'
In the second argument of `Abs', namely `(convert (Abs y e))'
In the first argument of `convert', namely
`(Abs x (convert (Abs y e)))'
So, what I have now is:
> convert $ Abs "x" $ Abs "y" $ App (Var "y") (Var "x")
*** Exception: :[
What I want is (hope I calculate it right):
> convert $ Abs "x" $ Abs "y" $ App (Var "y") (Var "x")
S :# (S (KS) (S (KK) I)) (S (KK) I)
Question:
If lambda terms and combinatory terms have different expression types, how can rule 5) be formulated correctly?
Let's consider the equation T[λx.λy.E] => T[λx.T[λy.E]].
We know the result of T[λy.E] is an SKI expression. Since it has been produced by one of the cases 3, 4 or 6, it is either I or an application (:#).
Thus the outer T in T[λx.T[λy.E]] must be one of the cases 3 or 6. You can perform this case analysis in the code. I'm sorry but I don't have the time to write it out.
Here it's better to have a common data type for combinators and lambda expressions. Notice that your types already have significant overlap (Var, App), and it doesn't hurt to have combinators in lambda expressions.
The only possibility we want to eliminate is having lambda abstractions in combinator terms. We can forbid them using indexed types.
In the following code the type of a term is parameterised by the number of nested lambda abstractions in that term. The convert function returns Term Z a, where Z means zero, so there are no lambda abstractions in the returned term.
For more information about singleton types (which are used a bit here), see the paper Dependently Typed Programming with Singletons.
{-# LANGUAGE DataKinds, KindSignatures, TypeFamilies, GADTs, TypeOperators,
ScopedTypeVariables, MultiParamTypeClasses, FlexibleInstances #-}
data Nat = Z | Inc Nat

data SNat :: Nat -> * where
  SZ :: SNat Z
  SInc :: NatSingleton n => SNat n -> SNat (Inc n)

class NatSingleton (a :: Nat) where
  sing :: SNat a

instance NatSingleton Z where sing = SZ
instance NatSingleton a => NatSingleton (Inc a) where sing = SInc sing

type family Max (a :: Nat) (b :: Nat) :: Nat
type instance Max Z a = a
type instance Max a Z = a
type instance Max (Inc a) (Inc b) = Inc (Max a b)

data Term (l :: Nat) a where
  Var :: a -> Term Z a
  Abs :: NatSingleton l => a -> Term l a -> Term (Inc l) a
  App :: (NatSingleton l1, NatSingleton l2)
      => Term l1 a -> Term l2 a -> Term (Max l1 l2) a
  I :: Term Z a
  K :: Term Z a
  S :: Term Z a
fv :: Eq a => Term l a -> [a]
fv (Var v) = [v]
fv (Abs x e) = filter (/= x) $ fv e
fv (App e1 e2) = fv e1 ++ fv e2
fv _ = []
eliminateLambda :: (Eq a, NatSingleton l) => Term (Inc l) a -> Term l a
eliminateLambda t =
  case t of
    Abs x b ->
      case b of
        Var y
          | y == x    -> I
          | otherwise -> App K (Var y)
        Abs {}  -> Abs x $ eliminateLambda b
        App p q -> S `App` eliminateLambda (Abs x p)
                     `App` eliminateLambda (Abs x q)
        -- a combinator constant (I, K or S) under a lambda: T[λx.c] = K c
        _       -> App K b
    App p q -> eliminateLambdaApp p q
eliminateLambdaApp
  :: forall a l1 l2 l.
     (Eq a, Max l1 l2 ~ Inc l, NatSingleton l1, NatSingleton l2)
  => Term l1 a -> Term l2 a -> Term l a
eliminateLambdaApp a b =
  case (sing :: SNat l1, sing :: SNat l2) of
    (SInc _, SZ    ) -> App (eliminateLambda a) b
    (SZ    , SInc _) -> App a (eliminateLambda b)
    (SInc _, SInc _) -> App (eliminateLambda a) (eliminateLambda b)
convert :: forall a l. Eq a => NatSingleton l => Term l a -> Term Z a
convert t =
  case sing :: SNat l of
    SZ     -> t
    SInc _ -> convert $ eliminateLambda t
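As a small usage check (a sketch, assuming the code above compiles as given), the question's example can be written directly against the indexed Term type; the Z in the result type guarantees the output contains no Abs constructors:

example :: Term Z String
example = convert $ Abs "x" $ Abs "y" $ App (Var "y") (Var "x")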
The key insight is that S, K and I are just constant Lam terms, in the same way that 1, 2 and 3 are constant Ints. It would be pretty easy to make rule 5 type-check by making an inverse to the 'convert' function:
nvert :: SKI a -> Lam a
nvert S = Abs "x" (Abs "y" (Abs "z" (App (App (Var "x") (Var "z")) (App (Var "y") (Var "z")))))
nvert K = Abs "x" (Abs "y" (Var "x"))
nvert I = Abs "x" (Var "x")
nvert (V x) = Var x
nvert (x :# y) = App (nvert x) (nvert y)
Now we can use 'nvert' to make rule 5 type-check:
convert (Abs x (Abs y e)) | x `elem` fv e = convert (Abs x (nvert (convert (Abs y e))))
We can see that the left and the right are identical (we'll ignore the guard), except that 'Abs y e' on the left is replaced by 'nvert (convert (Abs y e))' on the right. Since 'convert' and 'nvert' are each others' inverse, we can always replace any Lam 'x' with 'nvert (convert x)' and likewise we can always replace any SKI 'x' with 'convert (nvert x)', so this is a valid equation.
Unfortunately, while it's a valid equation it's not a useful function definition because it won't cause the computation to progress: we'll just convert 'Abs y e' back and forth forever!
To break this loop we can replace the call to 'nvert' with a 'reminder' that we should do it later. We do this by adding a new constructor to Lam:
data Lam a = Var a -- v
| Abs a (Lam a) -- \v . e1
| App (Lam a) (Lam a) -- e1 e2
| Com (SKI a) -- Reminder to COMe back later and nvert
deriving (Eq, Show)
Now rule 5 uses this reminder instead of 'nvert':
convert (Abs x (Abs y e)) | x `elem` fv e = convert (Abs x (Com (convert (Abs y e))))
Now we need to make good our promise to come back, by making a separate rule to replace reminders with actual calls to 'nvert', like this:
convert (Com c) = convert (nvert c)
Now we can finally break the loop: we know that 'convert (nvert c)' is always identical to 'c', so we can replace the above line with this:
convert (Com c) = c
Notice that our final definition of 'convert' doesn't actually use 'nvert' at all! It's still a handy function though, since other functions involving Lam can use it to handle the new 'Com' case.
You've probably noticed that I've actually named this constructor 'Com' because it's just a wrapped-up COMbinator, but I thought it would be more informative to take a slightly longer route than just saying "wrap up your SKIs in Lams" :)
If you're wondering why I called that function "nvert", see http://unapologetic.wordpress.com/2007/05/31/duality-terminology/ :)
Warbo is right: combinators are constant lambda terms, so the conversion function is T[·] : L → C, with L the set of lambda terms and C the set of combinatory terms, and C ⊂ L.
So there is no typing problem for the rule T[λx.λy.E] => T[λx.T[λy.E]].
Here is an implementation in Scala.

Resources