Beta Conversion for Lambda Calculus - Haskell

I want to implement a function which performs beta reduction on a lambda expression, where my lambda expressions have the type:
data Expr = App Expr Expr | Abs Int Expr | Var Int deriving (Show,Eq)
My evaluation function so far is:
eval1cbv :: Expr -> Expr
eval1cbv (Var x) = (Var x)
eval1cbv (Abs x e) = (Abs x e)
eval1cbv (App (Abs x e1) e@(Abs y e2)) = eval1cbv (subst e1 x e)
eval1cbv (App e@(Abs x e1) e2) = eval1cbv (subst e2 x e)
eval1cbv (App e1 e2) = (App (eval1cbv e1) e2)
where subst is a function that performs substitution.
However, when I try to reduce an expression using beta reduction I get a non-exhaustive patterns error, and I cannot understand why. What I can do to fix it is to add an extra case at the bottom, like this:
eval :: Expr -> Expr
eval (Abs x e) = (Abs x e)
eval (App (Abs x e1) e@(Abs y e2)) = subst e1 x e
eval (App e@(Abs x e1) e2) = App e (eval e2)
eval (App e1 e2) = App (eval e1) e2
eval (Var x) = Var x
However, if I do that then the lambda expression is not being reduced at all, meaning that the input is the same as the output of the function.
So, if I try to evaluate a simple case like:
eval (App (Abs 2 (Var 2)) (Abs 3 (Var 3)))
it works fine, giving:
Abs 3 (Var 3)
but when I run it for a bigger test case like:
eval (App (Abs 1 (Abs 2 (Var 1))) (Var 3))
I get:
a non-exhaustive patterns error, if I use the first function without adding the last case,
or the exact same expression App (Abs 1 (Abs 2 (Var 1))) (Var 3), which obviously does not get reduced, if I add the last case.
Can anyone help me figure this out please? :)

but when I run it for a bigger test case like:
eval (App (Abs 1 (Abs 2 (Var 1))) (Var 3))
When you try to apply something of the form Abs x e to Var y, you're in this branch,
eval (App e@(Abs x e1) e2) = App e (eval e2)
so you have,
App (Abs x e) (Var y)
= App (Abs x e) (eval (Var y))
= App (Abs x e) (Var y)
This is not what you want to do. Both (Abs x e) and (Var y) are in normal form (i.e. evaluated), so you should have substituted. You appear to only treat lambdas, and not variables, as evaluated.
There are more problems with your code. Consider this branch,
eval (App e1 e2) = App (eval e1) e2
The result is always an App. E.g. if eval e1 = Abs x e then the result is App (Abs x e) e2. It stops there; no further evaluation is performed.
And consider this branch,
eval (App (Abs x e1) e@(Abs y e2)) = subst e1 x e
What happens if the result of substitution is an application term? Will the result be evaluated?
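For illustration, here is a minimal sketch of a call-by-value evaluator that addresses all three points: variables and lambdas both count as values, the argument is evaluated before substituting, and the result of the substitution is evaluated again. It assumes the Expr type from the question and a capture-avoiding subst :: Expr -> Int -> Expr -> Expr, which is how subst is called above.
eval1cbv :: Expr -> Expr
eval1cbv (Var x)   = Var x      -- a variable is already a value
eval1cbv (Abs x e) = Abs x e    -- a lambda is already a value
eval1cbv (App e1 e2) =
  case (eval1cbv e1, eval1cbv e2) of
    (Abs x body, v2) -> eval1cbv (subst body x v2)  -- keep evaluating after the substitution
    (v1, v2)         -> App v1 v2                   -- stuck application, e.g. the head is a variable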
EDIT
Regarding your changes: given LamApp e1 e2, you were following a call-by-value evaluation strategy before (i.e. you were evaluating e2 before substituting). That is now gone.
Here e2 is a lambda, so it needs no evaluation:
eval1cbv (LamApp (LamAbs x e1) e@(LamAbs y e2)) = eval1cbv (subst e1 x e)
Here you substitute anyway, regardless of what e2 is, so you do exactly the same as before. You don't need the previous case then, and you are now following a call-by-name evaluation strategy. I don't know if that's what you want. Also, you are calling subst with the wrong arguments here. I suppose you mean subst e1 x e2, and you don't need that @e.
eval1cbv (LamApp e@(LamAbs x e1) e2) = eval1cbv (subst e2 x e)
Here you are just evaluating the first argument which is consistent with a call-by-name strategy. But again I don't know if that's your intention.
eval1cbv (LamApp e1 e2) = (LamApp (eval1cbv e1) e2)
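To make the contrast concrete, here is a minimal call-by-name sketch (the name eval1cbn is mine; it assumes your LamApp/LamAbs constructors and the same subst as before). It substitutes the unevaluated argument directly:
eval1cbn (LamApp e1 e2) =
  case eval1cbn e1 of
    LamAbs x body -> eval1cbn (subst body x e2)  -- substitute e2 without evaluating it first
    v1            -> LamApp v1 e2                -- stuck application
eval1cbn e = e                                   -- variables and lambdas are left as they are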

Related

Implementing a catamorphism for Expression Trees

I am trying to implement an expression tree in Haskell as follows:
data ExprTr a b =
    Variable a
  | Constant b
  | Add (ExprTr a b) (ExprTr a b)
  | Mul (ExprTr a b) (ExprTr a b)
  deriving (Eq, Show)
And I would like to be able to implement operations on it using a catamorphism.
Currently, this is the function I got:
cataTr f _ _ _ (Variable i) = f i
cataTr f g _ _ (Constant i) = g i
cataTr f g h i (Add e1 e2) = g (cataTr f g h i e1) (cataTr f g h i e2)
cataTr f g h i (Mul e1 e2) = h (cataTr f g h i e1) (cataTr f g h i e2)
However, whenever I try to use it with an expression of type ExprTr String Integer I get compiler errors. For example, running cataTr id id id id (Variable "X") returns the following compiler error instead of (Variable "X").
Couldn't match type 'Integer' with '[Char]'
Expected type: 'ExprTr String String'
Actual type: 'ExprTr String Integer'
I am not sure how to proceed. Furthermore, I would appreciate some suggestions on how to type such a function as cataTr to make it easier to debug later.
As I am fairly new to Haskell, I would like to understand how to approach such situations from 'first principles' instead of using a library to generate the catamorphism for myself.
This is expected behavior.
I guess you made a typo in the question, since you should use h and i as the functions for the Add and Mul cases:
cataTr f _ _ _ (Variable i) = f i
cataTr f g _ _ (Constant i) = g i
cataTr f g h i (Add e1 e2) = h (cataTr f g h i e1) (cataTr f g h i e2)
cataTr f g h i (Mul e1 e2) = i (cataTr f g h i e1) (cataTr f g h i e2)
or, likely more elegant:
cataTr f g h i = go
  where
    go (Variable i) = f i
    go (Constant i) = g i
    go (Add e1 e2) = h (go e1) (go e2)
    go (Mul e1 e2) = i (go e1) (go e2)
or as @DanielWagner suggests, with a case expression:
cataTr f g h i = go
  where
    go v = case v of
      Variable i -> f i
      Constant i -> g i
      Add e1 e2 -> h (go e1) (go e2)
      Mul e1 e2 -> i (go e1) (go e2)
Nevertheless, you cannot call cataTr with id as the third and fourth parameters: those functions take two parameters. Furthermore, if a and b are different, the first two parameters cannot both be id, since your f maps an a to the result type and your g maps a b to the result type.
You can, for example, pass the data constructors to construct an identity function:
cataTr Variable Constant Add Mul (Variable "X")
This will thus yield Variable "X" again. Or you can, for example, map all Variables to 0 with const 0, and use id, (+) and (*) to evaluate an expression:
cataTr (const 0) id (+) (*) (Variable "X")
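Since the question also asked how to type cataTr, here is a possible signature (my own sketch; the answer above does not spell it out). It makes explicit that every argument targets a single result type c, one per constructor:
cataTr :: (a -> c)        -- Variable
       -> (b -> c)        -- Constant
       -> (c -> c -> c)   -- Add
       -> (c -> c -> c)   -- Mul
       -> ExprTr a b
       -> c
cataTr f g h i = go
  where
    go (Variable x) = f x
    go (Constant x) = g x
    go (Add e1 e2) = h (go e1) (go e2)
    go (Mul e1 e2) = i (go e1) (go e2)
With this in place, the two examples above behave as described:
>>> cataTr Variable Constant Add Mul (Variable "X")
Variable "X"
>>> cataTr (const 0) id (+) (*) (Variable "X")
0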

Maybe Int expression using unique data type

I wrote a unique data type to express basic math (addition, multiplication, etc.) and it works. However, when I try to turn it into a Maybe statement, none of the math works. I believe it's a syntax error, but I've tried extra parentheses and so on and I can't figure it out. Usually Maybe statements are easy, but I don't understand why it keeps throwing an issue.
This is the data type I created (with examples):
data Math = Val Int
          | Add Math Math
          | Sub Math Math
          | Mult Math Math
          | Div Math Math
          deriving Show
ex1 :: Math
ex1 = Add (Val 2) (Val 3)
ex2 :: Math
ex2 = Mult (Val 2) (Val 3)
ex3 :: Math
ex3 = Div (Val 3) (Val 0)
Here is the code. The only Nothing return should be a division by zero.
expression :: Math -> Maybe Int
expression (Val n) = Just n
expression (Add e1 e2) = Just (expression e1) + (expression e2)
expression (Sub e1 e2) = Just (expression e1) - (expression e2)
expression (Mult e1 e2) = Just (expression e1) * (expression e2)
expression (Div e1 e2)
  | e2 /= 0 = Just (expression e1) `div` (expression e2)
  | otherwise = Nothing
I get the same error for every individual mathematical equation, even if I delete the others, so I'm certain it's syntax. The error makes it seem like a Maybe within a Maybe, but when I try that, e1 /= 0 && e2 /= 0 = Just (Just (expression e1) `div` (expression e2)), I get the same error:
* Couldn't match type `Int' with `Maybe Int'
Expected type: Maybe (Maybe Int)
Actual type: Maybe Int
* In the second argument of `div', namely `(expression e2)'
In the expression: Just (expression e1) `div` (expression e2)
In an equation for `expression':
expression (Div e1 e2)
| e1 /= 0 && e2 /= 0 = Just (expression e1) `div` (expression e2)
| otherwise = Nothing
|
56 | | e1 /= 0 && e2 /= 0 = Just (expression e1) `div` (expression e2)
| ^^^^^^^^^
What am I missing? It's driving me crazy.
So the first issue is precedence. Instead of writing:
Just (expression e1) * (expression e2)
You probably want:
Just (expression e1 * expression e2)
The second issue is the types. Take a look at the type of (*), for instance:
>>> :t (*)
(*) :: Num a => a -> a -> a
It says, for some type a that is a Num, it takes two as and returns one a. Specialised to Int, that would be:
(*) :: Int -> Int -> Int
But expression returns a Maybe Int! So we need some way to multiply with Maybes. Let's write the function ourselves:
multMaybes :: Maybe Int -> Maybe Int -> Maybe Int
multMaybes Nothing _ = Nothing
multMaybes _ Nothing = Nothing
multMaybes (Just x) (Just y) = Just (x * y)
So if either side of the multiplication has failed (i.e. you found a divide-by-zero), the whole thing will fail. Now, we need to do this once for every operator:
addMaybes Nothing _ = Nothing
addMaybes _ Nothing = Nothing
addMaybes (Just x) (Just y) = Just (x + y)
subMaybes Nothing _ = Nothing
subMaybes _ Nothing = Nothing
subMaybes (Just x) (Just y) = Just (x - y)
And so on. But we can see there's a lot of repetition here. Luckily, there's a function that does this pattern already: liftA2.
multMaybes = liftA2 (*)
addMaybes = liftA2 (+)
subMaybes = liftA2 (-)
Finally, there are two more small problems. First, you say:
expression (Div e1 e2)
| e2 /= 0 = Just (expression e1) `div` (expression e2)
But e2 isn't an Int! It's the expression type. You probably want to check if the result of the recursive call is 0.
The second problem is that you're unnecessarily wrapping things in Just: we can remove one layer.
After all of that, we can write your function like this:
import Control.Applicative (liftA2)

expression :: Math -> Maybe Int
expression (Val n) = Just n
expression (Add e1 e2) = liftA2 (+) (expression e1) (expression e2)
expression (Sub e1 e2) = liftA2 (-) (expression e1) (expression e2)
expression (Mult e1 e2) = liftA2 (*) (expression e1) (expression e2)
expression (Div e1 e2)
  | r2 /= Just 0 = liftA2 div (expression e1) r2
  | otherwise = Nothing
  where r2 = expression e2
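As a quick check against the question's own examples ex2 and ex3 (the expected outputs here are mine):
>>> expression (Mult (Val 2) (Val 3))
Just 6
>>> expression (Div (Val 3) (Val 0))
Nothing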
There are two problems here:
Just (expression e1) + (expression e2)
is interpreted as:
(Just (expression e1)) + (expression e2)
So you have wrapped the left operand in a Just, whereas the right one is not wrapped, and that does not make much sense.
Secondly, both expression e1 and expression e2 have type Maybe Int, so you cannot simply add the two together. We could handle this with pattern matching.
Fortunately there is a more elegant solution: we can make use of liftM2 :: Monad m => (a -> b -> c) -> m a -> m b -> m c for most of the patterns. For Maybe, liftM2 takes a function f :: a -> b -> c and two Maybes; if both are Justs, it calls the function on the values wrapped in the Justs and wraps the result in a Just as well.
As for the division case, we first have to obtain the result of the denominator with the expression function, and if that is a Just that is not equal to zero, we can use fmap :: Functor f => (a -> b) -> f a -> f b to map over the value in the numerator's Just (given, of course, that the numerator is a Just):
import Control.Monad(liftM2)
expression :: Math -> Maybe Int
expression (Val n) = Just n
expression (Add e1 e2) = liftM2 (+) (expression e1) (expression e2)
expression (Sub e1 e2) = liftM2 (-) (expression e1) (expression e2)
expression (Mult e1 e2) = liftM2 (*) (expression e1) (expression e2)
expression (Div e1 e2) | Just v2 <- expression e2, v2 /= 0 = fmap (`div` v2) (expression e1)
                       | otherwise = Nothing
or we can, like @RobinZigmond says, use (<$>) :: Functor f => (a -> b) -> f a -> f b and (<*>) :: Applicative f => f (a -> b) -> f a -> f b:
expression :: Math -> Maybe Int
expression (Val n) = Just n
expression (Add e1 e2) = (+) <$> expression e1 <*> expression e2
expression (Sub e1 e2) = (-) <$> expression e1 <*> expression e2
expression (Mult e1 e2) = (*) <$> expression e1 <*> expression e2
expression (Div e1 e2) | Just v2 <- expression e2, v2 /= 0 = (`div` v2) <$> expression e1
                       | otherwise = Nothing
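For what it's worth, liftA2, liftM2 and the <$>/<*> spelling all compute the same thing for Maybe; a quick GHCi check (liftA2 comes from Control.Applicative, liftM2 from Control.Monad):
>>> import Control.Applicative (liftA2)
>>> import Control.Monad (liftM2)
>>> liftA2 (+) (Just 2) (Just 3)
Just 5
>>> liftM2 (+) (Just 2) (Just 3)
Just 5
>>> (+) <$> Just 2 <*> Just 3
Just 5
>>> (+) <$> Just 2 <*> Nothing
Nothing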

Understanding the eval function used to define combinators and expression (Lambda) in Haskell

I find the use of pattern matching for eval (App x y) redundant, since both cases would return App x y. I wonder if eval (App x y) is needed at all, because we have eval x = x at the end, which should also cover eval (App x y).
data Expr = App Expr Expr | S | K | I | Var String | Lam String Expr deriving (Show,Eq)
eval :: Expr -> Expr
eval (App I x) = eval x
eval (App (App K x) y) = eval x
eval (App (App (App S f) g) x) = eval (App (App f x) (App g x))
eval (App x y)
  | evalx == x = (App evalx (eval y)) --test if x is a Lam (not other possible values of Expr)
  | otherwise = eval (App evalx y)
  where evalx = eval x
eval (Var x) = (Var x)
eval x = x
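One way to see that the App x y case is not redundant (a worked example of my own, using the eval and Expr definitions above): without it, an application whose head is not one of the S/K/I redex patterns would fall through to eval x = x and its argument would never be reduced.
>>> eval (App (Var "f") (App I (Var "x")))
App (Var "f") (Var "x")
>>> eval (App (App I K) (Var "x"))
App K (Var "x")
In both cases the plain eval x = x fallback would have returned the input unchanged.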

Haskell: How to define my custom math data type recursively (stop infinite recursion)

I'm defining a custom data type to help me with my calculus for a project. I have defined this data type as follows:
data Math a =
    Add (Math a) (Math a)
  | Mult (Math a) (Math a)
  | Cos (Math a)
  | Sin (Math a)
  | Log (Math a)
  | Exp (Math a)
  | Const a
  | Var Char
  deriving Show
I am creating a function called eval that partially evaluates a mathematical expression for me. This is what I have:
eval (Const a) = (Const a)
eval (Var a) = (Var a)
eval (Add (Const a) (Const b)) = eval (Const (a+b))
eval (Add (Var a) b) = eval (Add (Var a) (eval b))
eval (Add a (Var b)) = eval (Add (eval a) (Var b))
eval (Add a b) = eval (Add (eval a) (eval b))
eval (Mult (Const a) (Const b)) = (Const (a*b))
eval (Mult a (Var b)) = (Mult (eval a) (Var b))
eval (Mult (Var a) b) = (Mult (Var a) (eval b))
eval (Mult a b) = eval (Mult (eval a) (eval b))
eval (Cos (Const a)) = (Const (cos(a)))
eval (Cos (Var a)) = (Cos (Var a))
eval (Cos a) = eval (Cos (eval a))
eval (Sin (Const a)) = (Const (sin(a)))
eval (Sin (Var a)) = (Sin (Var a))
eval (Sin a) = eval (Sin (eval a))
eval (Log (Const a)) = (Const (log(a)))
eval (Log (Var a)) = (Log (Var a))
eval (Log a) = eval (Log (eval a))
eval (Exp (Const a)) = (Const (exp(a)))
eval (Exp (Var a)) = (Exp (Var a))
This works fine for the most part. For instance, eval (Mult ((Const 4)) (Add (Cos (Const (0))) (Log (Const 1)))) results in (Const 4.0)
My problem arises whenever I have a variable added together with constants:
eval (Add (Const 4) (Add (Const 4) (Var 'x'))) gives me infinite recursion. I have determined that the issue is that I call eval in eval (Add a b) = eval (Add (eval a) (eval b)). If I make this line eval (Add a b) = (Add (eval a) (eval b)), I stop the infinite recursion, but I am no longer simplifying my answers: Add (Const 4.0) (Add (Const 4.0) (Var 'x')) results in the exact same Add (Const 4.0) (Add (Const 4.0) (Var 'x')). How do I get something like Add (Const 8.0) (Var 'x') instead?
Any help would be much appreciated!
Your problem is that you don't treat expressions that are already simplified any differently from those that aren't. You keep calling eval until both operands are constants, and if that never happens, you never terminate. The simplest problematic input would be Add (Var 'x') (Const 5). Such an input should end the recursion and just return itself. But instead it will keep calling eval on the same input:
eval (Add (Var 'x') (Const 5))
= eval (Add (Var 'x') (eval (Const 5)))
= eval (Add (Var 'x') (Const 5))
= eval (Add (Var 'x') (eval (Const 5)))
= ... ad infinitum
In general the way to avoid this kind of problem in the first place, i.e. to make it obvious when you're missing a base case, is to structure your function in such a way that all recursive cases of your function call itself only with sub-expressions of the argument expression.
In this case that could be achieved by evaluating the operands first and then constant-folding the result without another recursive call. That would look like this:
eval (Add a b) =
  case (eval a, eval b) of
    (Const x, Const y) -> Const (x + y)
    (x, y) -> Add x y
Here the only recursive calls are eval a and eval b where a and b are subexpressions of the original expression. This guarantees termination because if all cases follow this rule, you'll eventually reach expressions that have no subexpressions, meaning the recursion must terminate.
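Applying the same shape to every constructor gives a sketch like the one below (assuming the Math type from the question; the Floating constraint is forced by cos, sin, log and exp):
eval :: Floating a => Math a -> Math a
eval (Const a) = Const a
eval (Var c) = Var c
eval (Add a b) =
  case (eval a, eval b) of
    (Const x, Const y) -> Const (x + y)
    (x, y) -> Add x y
eval (Mult a b) =
  case (eval a, eval b) of
    (Const x, Const y) -> Const (x * y)
    (x, y) -> Mult x y
eval (Cos a) =
  case eval a of
    Const x -> Const (cos x)
    x -> Cos x
eval (Sin a) =
  case eval a of
    Const x -> Const (sin x)
    x -> Sin x
eval (Log a) =
  case eval a of
    Const x -> Const (log x)
    x -> Log x
eval (Exp a) =
  case eval a of
    Const x -> Const (exp x)
    x -> Exp x
Every recursive call is on a direct subexpression, so eval always terminates, even on inputs such as Add (Const 4) (Add (Const 4) (Var 'x')). (Folding that input all the way down to Add (Const 8.0) (Var 'x') would additionally require reassociating the nested Add, which this sketch does not attempt.)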

Implementing alpha equivalence - Haskell

So let me define a few things:
type Name = String
data Exp = Var Name
         | App Exp Exp
         | Lam Name Exp
         deriving (Eq, Show, Read)
I want to define alpha-equivalence, which is
alpha_eq :: Exp -> Exp -> Bool
-- The terms x and y are not alpha-equivalent, because they are not bound in a lambda abstraction
alpha_eq (Var x) (Var y) = False
alpha_eq (Lam x e1) (Lam y e2) = False
alpha_eq (App e1 e2) (App e3 e4) = False
For example, Lam "x" (Var "x") and Lam "y" (Var "y") are alpha-equivalent. However, I'm both new and horrible at Haskell. Could someone give me a clue about how to implement alpha_eq? One thing I thought about was to use a Map Name Int, so in this case I would have:
['x' -> 0] ['y' -> 0]
so in this case Map['x'] == Map['y']. But again, I'm horrible at Haskell. Could someone give me a clue how to implement it?
Yes, using a Map is a correct idea (though think about what the key and value types should be; with Map Name Int you need two extra arguments instead of one). You need to add it as an argument of a helper function. I won't give the full implementation, since you asked for a clue only:
alpha_eq e1 e2 = alpha_eq' e1 e2 env0 where
  env0 = ???
  alpha_eq' (Var x) (Var y) env = ???
  alpha_eq' (Lam x e1) (Lam y e2) env = ???
  alpha_eq' (App e1 e2) (App e3 e4) env = ???
  -- you don't want to throw an error in all other cases
  alpha_eq' _ _ env = ???
You could also make a separate function subst :: Name -> Exp -> Exp -> Exp. Then the alpha_eq case for Lam becomes
alpha_eq :: Exp -> Exp -> Bool
...
alpha_eq (Lam x xb) (Lam y yb) = xb `alpha_eq` subst y (Var x) yb
...
Exercise: figure out the other alpha_eq cases and the implementation of subst.
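For completeness, here is one possible way to fill in the Map-based skeleton above (a sketch, named alphaEq to keep it separate; the use of two maps plus a binder-depth counter is my own choice, not part of the answer). Each binder is mapped to the depth at which it was introduced, so two bound variables match exactly when they point at the same depth, and two free variables match exactly when they have the same name:
import qualified Data.Map as Map

alphaEq :: Exp -> Exp -> Bool
alphaEq e1 e2 = go e1 e2 Map.empty Map.empty 0
  where
    go :: Exp -> Exp -> Map.Map Name Int -> Map.Map Name Int -> Int -> Bool
    go (Var x) (Var y) envX envY _ =
      case (Map.lookup x envX, Map.lookup y envY) of
        (Just i, Just j) -> i == j      -- both bound: same binder depth?
        (Nothing, Nothing) -> x == y    -- both free: same name?
        _ -> False                      -- one bound, one free
    go (Lam x b1) (Lam y b2) envX envY d =
      go b1 b2 (Map.insert x d envX) (Map.insert y d envY) (d + 1)
    go (App a1 a2) (App b1 b2) envX envY d =
      go a1 b1 envX envY d && go a2 b2 envX envY d
    go _ _ _ _ _ = False
>>> alphaEq (Lam "x" (Var "x")) (Lam "y" (Var "y"))
True
>>> alphaEq (Var "x") (Var "y")
False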
