Conversion from lambda term to combinatorial term - haskell

Suppose there are some data types to express lambda and combinatorial terms:
data Lam α = Var α               -- v
           | Abs α (Lam α)       -- λv . e1
           | App (Lam α) (Lam α) -- e1 e2
           deriving (Eq, Show)

infixl 0 :#

data SKI α = V α            -- x
           | SKI α :# SKI α -- e1 e2
           | I              -- I
           | K              -- K
           | S              -- S
           deriving (Eq, Show)
There is also a function to get the list of a lambda term's free variables:
fv ∷ Eq α ⇒ Lam α → [α]
fv (Var v) = [v]
fv (Abs x e) = filter (/= x) $ fv e
fv (App e1 e2) = fv e1 ++ fv e2
To convert a lambda term to a combinatorial term, the abstraction elimination rules could be useful:
convert ∷ Eq α ⇒ Lam α → SKI α
1) T[x] => x
convert (Var x) = V x
2) T[(E₁ E₂)] => (T[E₁] T[E₂])
convert (App e1 e2) = (convert e1) :# (convert e2)
3) T[λx.E] => (K T[E]) (if x does not occur free in E)
convert (Abs x e) | x `notElem` fv e = K :# (convert e)
4) T[λx.x] => I
convert (Abs x (Var y)) = if x == y then I else K :# V y
5) T[λx.λy.E] => T[λx.T[λy.E]] (if x occurs free in E)
convert (Abs x (Abs y e)) | x `elem` fv e = convert (Abs x (convert (Abs y e)))
6) T[λx.(E₁ E₂)] => (S T[λx.E₁] T[λx.E₂])
convert (Abs x (App y z)) = S :# (convert (Abs x y)) :# (convert (Abs x z))
convert _ = error ":["
This definition is not valid because of 5):
Couldn't match expected type `Lam α' with actual type `SKI α'
In the return type of a call of `convert'
In the second argument of `Abs', namely `(convert (Abs y e))'
In the first argument of `convert', namely
`(Abs x (convert (Abs y e)))'
So, what I have now is:
> convert $ Abs "x" $ Abs "y" $ App (Var "y") (Var "x")
*** Exception: :[
What I want is (hope I calculate it right):
> convert $ Abs "x" $ Abs "y" $ App (Var "y") (Var "x")
S :# (S (KS) (S (KK) I)) (S (KK) I)
Question:
If lambda terms and combinatorial terms have different expression types, how can rule 5) be formulated correctly?

Let's consider the equation T[λx.λy.E] => T[λx.T[λy.E]].
We know the result of T[λy.E] is an SKI expression. Since it has been produced by one of the cases 3, 4 or 6, it is either I or an application (:#).
Thus the outer T in T[λx.T[λy.E]] must be one of the cases 3 or 6. You can perform this case analysis in the code. I'm sorry but I don't have the time to write it out.
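For concreteness, here is a minimal sketch of that case analysis, assuming the Lam/SKI types from the question (the helper name absSKI is mine, not part of this answer): it implements the outer T[λx.·] directly on an SKI term, so rule 5 never has to feed an SKI value back into convert.
-- Hypothetical helper: abstract x out of an SKI term, covering the shapes
-- that rule 5 can produce (I, a variable, a constant, or an application).
absSKI :: Eq α => α -> SKI α -> SKI α
absSKI x (V y)
  | x == y    = I                                       -- rule 4
  | otherwise = K :# V y                                -- rule 3
absSKI x (e1 :# e2) = S :# absSKI x e1 :# absSKI x e2   -- rule 6
absSKI _ c          = K :# c                            -- rule 3: I, K and S are closed

-- Rule 5 can then be written as:
-- convert (Abs x (Abs y e)) | x `elem` fv e = absSKI x (convert (Abs y e))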

Here it's better to have a common data type for combinators and lambda expressions. Notice that your types already have significant overlap (Var, App), and it doesn't hurt to have combinators in lambda expressions.
The only possibility we want to eliminate is having lambda abstractions in combinator terms. We can forbid them using indexed types.
In the following code the type of a term is parameterised by the number of nested lambda abstractions in that term. The convert function returns Term Z a, where Z means zero, so there are no lambda abstractions in the returned term.
For more information about singleton types (which are used a bit here), see the paper Dependently Typed Programming with Singletons.
{-# LANGUAGE DataKinds, KindSignatures, TypeFamilies, GADTs, TypeOperators,
             ScopedTypeVariables, MultiParamTypeClasses, FlexibleInstances #-}

data Nat = Z | Inc Nat

data SNat :: Nat -> * where
  SZ   :: SNat Z
  SInc :: NatSingleton n => SNat n -> SNat (Inc n)

class NatSingleton (a :: Nat) where
  sing :: SNat a

instance NatSingleton Z where sing = SZ
instance NatSingleton a => NatSingleton (Inc a) where sing = SInc sing

type family Max (a :: Nat) (b :: Nat) :: Nat
type instance Max Z a = a
type instance Max a Z = a
type instance Max (Inc a) (Inc b) = Inc (Max a b)

data Term (l :: Nat) a where
  Var :: a -> Term Z a
  Abs :: NatSingleton l => a -> Term l a -> Term (Inc l) a
  App :: (NatSingleton l1, NatSingleton l2)
      => Term l1 a -> Term l2 a -> Term (Max l1 l2) a
  I :: Term Z a
  K :: Term Z a
  S :: Term Z a

fv :: Eq a => Term l a -> [a]
fv (Var v)     = [v]
fv (Abs x e)   = filter (/= x) $ fv e
fv (App e1 e2) = fv e1 ++ fv e2
fv _           = []

eliminateLambda :: (Eq a, NatSingleton l) => Term (Inc l) a -> Term l a
eliminateLambda t =
  case t of
    Abs x t ->
      case t of
        Var y
          | y == x    -> I
          | otherwise -> App K (Var y)
        Abs {}  -> Abs x $ eliminateLambda t
        App a b -> S `App` (eliminateLambda $ Abs x a)
                     `App` (eliminateLambda $ Abs x b)
        -- A constant combinator (I, K or S) cannot contain the bound
        -- variable, so abstracting over it is just an application of K.
        c       -> K `App` c
    App a b -> eliminateLambdaApp a b

eliminateLambdaApp
  :: forall a l1 l2 l .
     (Eq a, Max l1 l2 ~ Inc l,
      NatSingleton l1,
      NatSingleton l2)
  => Term l1 a -> Term l2 a -> Term l a
eliminateLambdaApp a b =
  case (sing :: SNat l1, sing :: SNat l2) of
    (SInc _, SZ    ) -> App (eliminateLambda a) b
    (SZ    , SInc _) -> App a (eliminateLambda b)
    (SInc _, SInc _) -> App (eliminateLambda a) (eliminateLambda b)

convert :: forall a l . Eq a => NatSingleton l => Term l a -> Term Z a
convert t =
  case sing :: SNat l of
    SZ     -> t
    SInc _ -> convert $ eliminateLambda t
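For reference, a small usage sketch (the binding name example and the stated result are mine, not part of the answer); the indexed type accepts the lambda term from the question directly:
-- There is no Show instance for the GADT above, but evaluating this should
-- produce S `App` (S `App` (K `App` S) `App` (K `App` I))
--           `App` (S `App` (K `App` K) `App` I),
-- i.e. S (S (K S) (K I)) (S (K K) I), a (non-optimised) combinator for λx.λy. y x.
example :: Term Z String
example = convert (Abs "x" (Abs "y" (App (Var "y") (Var "x"))))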

The key insight is that S, K and I are just constant Lam terms, in the same way that 1, 2 and 3 are constant Ints. It would be pretty easy to make rule 5 type-check by making an inverse to the 'convert' function:
nvert :: SKI a -> Lam a
nvert S = Abs "x" (Abs "y" (Abs "z" (App (App (Var "x") (Var "z")) (App (Var "y") (Var "z")))))
nvert K = Abs "x" (Abs "y" (Var "x"))
nvert I = Abs "x" (Var "x")
nvert (V x) = Var x
nvert (x :# y) = App (nvert x) (nvert y)
Now we can use 'nvert' to make rule 5 type-check:
convert (Abs x (Abs y e)) | x `elem` fv e = convert (Abs x (nvert (convert (Abs y e))))
We can see that the left and the right are identical (we'll ignore the guard), except that 'Abs y e' on the left is replaced by 'nvert (convert (Abs y e))' on the right. Since 'convert' and 'nvert' are each others' inverse, we can always replace any Lam 'x' with 'nvert (convert x)' and likewise we can always replace any SKI 'x' with 'convert (nvert x)', so this is a valid equation.
Unfortunately, while it's a valid equation it's not a useful function definition because it won't cause the computation to progress: we'll just convert 'Abs y e' back and forth forever!
To break this loop we can replace the call to 'nvert' with a 'reminder' that we should do it later. We do this by adding a new constructor to Lam:
data Lam a = Var a               -- v
           | Abs a (Lam a)       -- \v . e1
           | App (Lam a) (Lam a) -- e1 e2
           | Com (SKI a)         -- Reminder to COMe back later and nvert
           deriving (Eq, Show)
Now rule 5 uses this reminder instead of 'nvert':
convert (Abs x (Abs y e)) | x `elem` fv e = convert (Abs x (Com (convert (Abs y e))))
Now we need to make good our promise to come back, by making a separate rule to replace reminders with actual calls to 'nvert', like this:
convert (Com c) = convert (nvert c)
Now we can finally break the loop: we know that 'convert (nvert c)' is always identical to 'c', so we can replace the above line with this:
convert (Com c) = c
Notice that our final definition of 'convert' doesn't actually use 'nvert' at all! It's still a handy function though, since other functions involving Lam can use it to handle the new 'Com' case.
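For example, a minimal sketch (using the fv and nvert definitions above): the free-variable function can dispatch the new constructor through nvert and reuse its existing cases:
-- Extra clause for fv, handling Com by converting back to a Lam first:
fv (Com c) = fv (nvert c)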
You've probably noticed that I've actually named this constructor 'Com' because it's just a wrapped-up COMbinator, but I thought it would be more informative to take a slightly longer route than just saying "wrap up your SKIs in Lams" :)
If you're wondering why I called that function "nvert", see http://unapologetic.wordpress.com/2007/05/31/duality-terminology/ :)

Warbo is right: combinators are constant lambda terms, so the conversion function is
T[·] : L → C, with L the set of lambda terms, C the set of combinatory terms, and C ⊂ L.
So there is no typing problem for the rule T[λx.λy.E] => T[λx.T[λy.E]].
Here is an implementation in Scala.

Related

How does list generation with Fractionals work?

If I want to generate a list with the input:
[3.1,5.1..8.1]
GHC 8.6.3 returns:
[3.1,5.1,7.1,9.099999999999998]
My problem here isn't the approximation of 9.1, but that the list GHC produces has one element more than the following solution.
In the documentation in GHC.Enum I found that enumFromThenTo translates this to something similar to the following:
-- | Used in Haskell's translation of #[n,n'..m]# with
-- #[n,n'..m] = enumFromThenTo n n' m#, a possible implementation
-- being #enumFromThenTo n n' m = worker (f x) (c x) n m#,
-- #x = fromEnum n' - fromEnum n#, #c x = bool (>=) (<=) (x > 0)#
-- #f n y
-- | n > 0 = f (n - 1) (succ y)
-- | n < 0 = f (n + 1) (pred y)
-- | otherwise = y# and
-- #worker s c v m
-- | c v m = v : worker s c (s v) m
-- | otherwise = []#
So the following code:
import Data.Bool
eftt n s m = worker (f x) (c x) n m
  where
    x = (fromEnum s) - (fromEnum n)
    c x = bool (>=) (<=) (x > 0)
    f n y
      | n > 0 = f (n-1) (succ y)
      | n < 0 = f (n+1) (pred y)
      | otherwise = y
    worker s c v m
      | c v m = v : worker s c (s v) m
      | otherwise = []
On the same input as before, this however returns this list:
[3.1,5.1,7.1]
The real implementation defined in GHC.Enum is the following:
enumFromThenTo x1 x2 y = map toEnum [fromEnum x1, fromEnum x2 .. fromEnum y]
But there is no instance of Enum Double or Enum Float in GHC.Enum.
So when I tried to reproduce this with the following code:
import Prelude(putStrLn,show)
import GHC.Enum(toEnum,fromEnum,Enum,enumFromThenTo)
import GHC.Base(map)
main = putStrLn (show (_enumFromThenTo 3.1 5.1 8.1))
_enumFromThenTo :: (Enum a) => a -> a -> a -> [a]
_enumFromThenTo x1 x2 y = map toEnum [fromEnum x1, fromEnum x2 .. fromEnum y]
I compiled with:
$ ghc -package ghc -package base <file.hs>
The result was again:
[3.0,5.0,7.0]
What is happening here, such that the output becomes:
[3.1,5.1,7.1,9.099999999999998]
?
Well, this is the Enum Double instance:
instance Enum Double where
    enumFromThenTo = numericEnumFromThenTo
The implementation is here
numericEnumFromThenTo :: (Ord a, Fractional a) => a -> a -> a -> [a]
numericEnumFromThenTo e1 e2 e3
    = takeWhile predicate (numericEnumFromThen e1 e2)
  where
    mid = (e2 - e1) / 2
    predicate | e2 >= e1  = (<= e3 + mid)
              | otherwise = (>= e3 + mid)
More important than the implementation is the note above it:
-- These 'numeric' enumerations come straight from the Report
Which refers to this passage in the (2010) Report:
For Float and Double, the semantics of the enumFrom family is given by the rules for Int above, except that the list terminates when the elements become greater than e3 + i∕2 for positive increment i, or when they become less than e3 + i∕2 for negative i.
(Where e3 refers to the upper bound, and i the increment.)
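To make this concrete for the list in the question, here is the bound spelled out as code (a sketch; the names are mine, and the exact printed digits depend on Double rounding):
-- For [3.1,5.1..8.1] the increment i = 5.1 - 3.1 is roughly 2.0, so elements
-- are kept while they are <= 8.1 + i/2, i.e. roughly 9.1.  The fourth element
-- is roughly 9.0999999..., which is still below that bound, so it stays; a
-- fifth element (roughly 11.1) would not be, so the list stops there.
bound :: Double
bound = e3 + i / 2
  where
    (e1, e2, e3) = (3.1, 5.1, 8.1)
    i = e2 - e1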
The comment you found in GHC.Enum and the implementation in the Enum class are both irrelevant here. The comment is just example code detailing how an instance might be implemented, and the implementation given in the class is only a default, which any instance may override (as Enum Double does).

Recursion scheme for symbolic differentiation

Following terminology from this excellent series, let's represent an expression such as (1 + x^2 - 3x)^3 by a Term Expr, where the data types are the following:
data Expr a =
    Var
  | Const Int
  | Plus a a
  | Mul a a
  | Pow a Int
  deriving (Functor, Show, Eq)

data Term f = In { out :: f (Term f) }
Is there a recursion scheme suitable for performing symbolic differentiation? I feel like it's almost a Futumorphism specialized to Term Expr, i.e. futu deriveFutu for an appropriate function deriveFutu:
data CoAttr f a
  = Automatic a
  | Manual (f (CoAttr f a))

futu :: Functor f => (a -> f (CoAttr f a)) -> a -> Term f
futu f = In <<< fmap worker <<< f where
  worker (Automatic a) = futu f a
  worker (Manual g)    = In (fmap worker g)
This looks pretty good, except that the underscored variables are Terms instead of CoAttrs:
deriveFutu :: Term Expr -> Expr (CoAttr Expr (Term Expr))
deriveFutu (In (Var))      = (Const 1)
deriveFutu (In (Const _))  = (Const 0)
deriveFutu (In (Plus x y)) = (Plus (Automatic x) (Automatic y))
deriveFutu (In (Mul x y))  = (Plus (Manual (Mul (Automatic x) (Manual _y)))
                                   (Manual (Mul (Manual _x) (Automatic y)))
                             )
deriveFutu (In (Pow x c))  = (Mul (Manual (Const c)) (Manual (Mul (Manual (Pow _x (c-1))) (Automatic x))))
The version without recursion schemes looks like this:
derive :: Term Expr -> Term Expr
derive (In (Var)) = In (Const 1)
derive (In (Const _)) = In (Const 0)
derive (In (Plus x y)) = In (Plus (derive x) (derive y))
derive (In (Mul x y)) = In (Plus (In (Mul (derive x) y)) (In (Mul x (derive y))))
derive (In (Pow x c)) = In (Mul (In (Const c)) (In (Mul (In (Pow x (c-1))) (derive x))))
As an extension to this question, is there a recursion scheme for differentiating and eliminating "empty" Exprs such as Plus (Const 0) x that arise as a result of differentiation -- in one pass over the data?
Look at the differentiation rule for product:
(u v)' = u' v + v' u
What do you need to know to differentiate a product? You need to know the derivatives of the subterms (u', v'), as well as their values (u, v).
This is exactly what a paramorphism gives you.
para
  :: Functor f
  => (f (b, Term f) -> b)
  -> Term f -> b
para g (In a) = g $ (para g &&& id) <$> a
derivePara :: Term Expr -> Term Expr
derivePara = para $ In . \case
  Var      -> Const 1
  Const _  -> Const 0
  Plus x y -> Plus (fst x) (fst y)
  Mul x y  -> Plus
    (In $ Mul (fst x) (snd y))
    (In $ Mul (snd x) (fst y))
  Pow x c  -> Mul
    (In (Const c))
    (In (Mul
      (In (Pow (snd x) (c-1)))
      (fst x)))
Inside the paramorphism, fst gives you access to the derivative of a subterm, while snd gives you the term itself.
As an extension to this question, is there a recursion scheme for differentiating and eliminating "empty" Exprs such as Plus (Const 0) x that arise as a result of differentiation -- in one pass over the data?
Yes, it's still a paramorphism. The easiest way to see this is to have smart constructors such as
plus :: Term Expr -> Term Expr -> Expr (Term Expr)
plus (In (Const 0)) (In x) = x
plus (In x) (In (Const 0)) = x
plus x y = Plus x y
and use them when defining the algebra. You could probably express this as some kind of para-cata fusion, too.
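For illustration, here is a sketch of that combined pass (the name deriveSimp and the code are mine; it assumes the para, plus and Expr definitions above): it is derivePara with the additions routed through the smart constructor, so Plus (Const 0) x nodes are never built.
-- Differentiate and drop additions of zero in a single paramorphism.
-- Only the plus smart constructor from above is used; a corresponding
-- mul constructor could squash multiplications by constants the same way.
deriveSimp :: Term Expr -> Term Expr
deriveSimp = para $ In . \case
  Var      -> Const 1
  Const _  -> Const 0
  Plus x y -> plus (fst x) (fst y)
  Mul x y  -> plus (In $ Mul (fst x) (snd y))
                   (In $ Mul (snd x) (fst y))
  Pow x c  -> Mul (In (Const c))
                  (In (Mul (In (Pow (snd x) (c-1)))
                           (fst x)))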

Expression expansion using recursion schemes

I have a data type representing arithmetic expressions:
data E = Add E E | Mul E E | Var String
I want to write an expansion function which will convert an expression into sum of products of variables (sort of braces expansion). Using recursion schemes of course.
I could only think of an algorithm in the spirit of "progress and preservation": at each step the algorithm constructs terms that are already fully expanded, so there is no need to re-check.
The handling of Mul drove me crazy, so instead of doing it directly I used the isomorphic type [[String]] and took advantage of concat and concatMap being implemented for me already:
type Poly = [Mono]
type Mono = [String]
mulMonoBy :: Mono -> Poly -> Poly
mulMonoBy x = map (x ++)
mulPoly :: Poly -> Poly -> Poly
mulPoly x = concatMap (flip mulMonoBy x)
So then I just use cata:
expandList :: E -> Poly
expandList = cata $ \case
  Var x     -> [[x]]
  Add e1 e2 -> e1 ++ e2
  Mul e1 e2 -> mulPoly e1 e2
And convert back:
fromPoly :: Poly -> E
fromPoly = foldr1 Add . map fromMono
  where fromMono = foldr1 Mul . map Var
Are there significantly better approaches?
Upd: There are a few points of confusion.
The solution does allow multi-character variable names. Add (Var "foo") (Mul (Var "foo") (Var "bar")) is a representation of foo + foo * bar; I'm not representing x*y*z as Var "xyz" or anything like that. Note also that, since there are no scalars, repeated variables such as foo * foo * quux are perfectly allowed.
By sum of products I mean a sort of "curried" n-ary sum of products. A concise definition is that I want an expression without any parentheses, with all grouping expressed by associativity and priority.
So (foo * bar + bar) + (foo * bar + bar) is not a sum of products, because the middle + is a sum of sums.
(foo * bar + (bar + (foo * bar + bar))) or the corresponding left-associative version are right answers, although we must guarantee that the associativity is always left or always right. So the correct type for the right-associative solution is
data Poly = Sum Mono Poly
          | Product Mono
which is isomorphic to non-empty lists, NonEmpty Mono (note Sum Mono Poly instead of Sum Poly Poly). If we allow empty sums or products, then we get just the list-of-lists representation I used.
Also, if you don't care about performance, the multiplication seems to be just liftA2 (++).
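Indeed, a quick sketch of that observation (mulPoly' is my name, to avoid clashing with the mulPoly above): with the list-of-lists representation, distribution is just the list applicative, up to the order of monomials and of factors within a monomial.
import Control.Applicative (liftA2)

mulPoly' :: Poly -> Poly -> Poly
mulPoly' = liftA2 (++)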
I am no expert in recursion schemes, but since it sounds like you are trying to practice them, hopefully you will not find it too onerous to convert a solution using manual recursion to one using recursion schemes. I'll write it with mixed prose and code first, and include the complete code again at the end for simpler copy/pasting.
It is not too difficult to do using simply the distributive property and a bit of recursive algebra. Before we begin, though, let's define a better result type, one that guarantees we can only ever represent sums of products:
data Poly term = Sum (Poly term) (Poly term)
               | Product (Mono term)
  deriving Show

data Mono term = Term term
               | MonoMul (Mono term) (Mono term)
  deriving Show
This way we can't possibly mess up and accidentally yield an incorrect result like
(Mul (Var "x") (Add (Var "y") (Var "z")))
Now, let's write our function.
expand :: E -> Poly String
First, a base case: it is trivial to expand a Var, because it is already in sum-of-products form. But we must convert it a bit to fit it into our Poly result type:
expand (Var x) = Product (Term x)
Next, note that it is easy to expand an addition: simply expand the two sub-expressions, and add them together.
expand (Add x y) = Sum (expand x) (expand y)
What about a multiplication? That is a bit more complicated, since
Product (expand x) (expand y)
is ill-typed: we can't multiply polynomials, only monomials. But we do know how to do algebraic manipulation to turn a multiplication of polynomials into a sum of multiplications of monomials, via the distributive rule. As in your question, we'll need a function mulPoly. But let's just assume that exists, and implement it later.
expand (Mul x y) = mulPoly (expand x) (expand y)
That handles all the cases, so all that's left is to implement mulPoly by distributing the multiplications across the two polynomials' terms. We simply break down one of the polynomials one term at a time, and multiply the term across each of the terms in the other polynomial, adding together the results.
mulPoly :: Poly String -> Poly String -> Poly String
mulPoly (Product x) y = mulMonoBy x y
mulPoly (Sum a b) x   = Sum (mulPoly a x) (mulPoly b x)

mulMonoBy :: Mono String -> Poly String -> Poly String
mulMonoBy x (Product y) = Product $ MonoMul x y
mulMonoBy x (Sum a b)   = Sum (mulPoly a x') (mulPoly b x')
  where x' = Product x
And in the end, we can test that it works as intended:
expand (Mul (Add (Var "a") (Var "b")) (Add (Var "y") (Var "z")))
{- results in: Sum (Sum (Product (MonoMul (Term "y") (Term "a")))
                        (Product (MonoMul (Term "z") (Term "a"))))
                   (Sum (Product (MonoMul (Term "y") (Term "b")))
                        (Product (MonoMul (Term "z") (Term "b"))))
-}
Or,
(a + b)(y + z) = ay + az + by + bz
which we know to be correct.
The complete solution, as promised above:
data E = Add E E | Mul E E | Var String

data Poly term = Sum (Poly term) (Poly term)
               | Product (Mono term)
  deriving Show

data Mono term = Term term
               | MonoMul (Mono term) (Mono term)
  deriving Show

expand :: E -> Poly String
expand (Var x)   = Product (Term x)
expand (Add x y) = Sum (expand x) (expand y)
expand (Mul x y) = mulPoly (expand x) (expand y)

mulPoly :: Poly String -> Poly String -> Poly String
mulPoly (Product x) y = mulMonoBy x y
mulPoly (Sum a b) x   = Sum (mulPoly a x) (mulPoly b x)

mulMonoBy :: Mono String -> Poly String -> Poly String
mulMonoBy x (Product y) = Product $ MonoMul x y
mulMonoBy x (Sum a b)   = Sum (mulPoly a x') (mulPoly b x')
  where x' = Product x

main = print $ expand (Mul (Add (Var "a") (Var "b")) (Add (Var "y") (Var "z")))
This answer has three sections. The first section, a summary in which I present my two favourite solutions, is the most important one. The second section contains types and imports, as well as extended commentary on the way towards the solutions. The third section focuses on the task of reassociating expressions, something that the original version of the answer (i.e. the second section) had not given due attention.
At the end of the day, I ended up with two solutions worth discussing. The first one is expandDirect (cf. the third section):
expandDirect :: E a -> E a
expandDirect = cata alg
  where
    alg = \case
      Var' s -> Var s
      Add' x y -> apo coalgAdd (Add x y)
      Mul' x y -> (apo coalgAdd' . apo coalgMul) (Mul x y)
    coalgAdd = \case
      Add (Add x x') y -> Add' (Left x) (Right (Add x' y))
      x -> Left <$> project x
    coalgAdd' = \case
      Add (Add x x') y -> Add' (Left x) (Right (Add x' y))
      Add x (Add y y') -> Add' (Left x) (Right (Add y y'))
      x -> Left <$> project x
    coalgMul = \case
      Mul (Add x x') y -> Add' (Right (Mul x y)) (Right (Mul x' y))
      Mul x (Add y y') -> Add' (Right (Mul x y)) (Right (Mul x y'))
      x -> Left <$> project x
With it, we rebuild the tree from the bottom (cata). On every branch, if we find something invalid we walk back and rewrite the subtree (apo), redistributing and reassociating as needed until all immediate children are correctly arranged (apo makes it possible to do that without having to rewrite everything down to the very bottom).
The second solution, expandMeta, is a much simplified version of expandFlat from the third section.
expandMeta :: E a -> E a
expandMeta = apo coalg . cata alg
  where
    alg = \case
      Var' s   -> pure (Var s)
      Add' x y -> x <> y
      Mul' x y -> Mul <$> x <*> y
    coalg = \case
      x :| []     -> Left <$> project x
      x :| (y:ys) -> Add' (Left x) (Right (y :| ys))
expandMeta is a metamorphism; that is, a catamorphism followed by an anamorphism (while we are using apo here as well, an apomorphism is just a fancy kind of anamorphism, so I guess the nomenclature still applies). The catamorphism changes the tree into a non-empty list -- that implicitly handles the reassociation of the Adds -- with the list applicative being used to distribute multiplication (much like you suggest). The coalgebra then quite trivially converts the non-empty list back into a tree with the appropriate shape.
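As a worked example of the two stages (a sketch; the concrete input is my choice): for (a + b) * c the catamorphism produces a non-empty list of already-expanded products, and the apomorphism then reassembles that list as a right-nested sum.
-- cata alg (Mul (Add (Var 'a') (Var 'b')) (Var 'c'))
--   == Mul (Var 'a') (Var 'c') :| [Mul (Var 'b') (Var 'c')]
-- apo coalg (Mul (Var 'a') (Var 'c') :| [Mul (Var 'b') (Var 'c')])
--   == Add (Mul (Var 'a') (Var 'c')) (Mul (Var 'b') (Var 'c'))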
Thank you for the question -- I had a lot of fun with it! Preliminaries:
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE DeriveFoldable #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
import Data.Functor.Foldable
import qualified Data.List.NonEmpty as N
import Data.List.NonEmpty (NonEmpty(..))
import Data.Semigroup
import Data.Foldable (toList)
import Data.List (nub)
import qualified Data.Map as M
import Data.Map (Map, (!))
import Test.QuickCheck
data E a = Var a | Add (E a) (E a) | Mul (E a) (E a)
  deriving (Eq, Show, Functor, Foldable)

data EF a b = Var' a | Add' b b | Mul' b b
  deriving (Eq, Show, Functor)

type instance Base (E a) = EF a

instance Recursive (E a) where
  project = \case
    Var x   -> Var' x
    Add x y -> Add' x y
    Mul x y -> Mul' x y

instance Corecursive (E a) where
  embed = \case
    Var' x   -> Var x
    Add' x y -> Add x y
    Mul' x y -> Mul x y
To begin with, my first working (if flawed) attempt, which uses the applicative instance of (non-empty) lists to distribute:
expandTooClever :: E a -> E a
expandTooClever = cata $ \case
    Var' s   -> Var s
    Add' x y -> Add x y
    Mul' x y -> foldr1 Add (Mul <$> flatten x <*> flatten y)
  where
    flatten :: E a -> NonEmpty (E a)
    flatten = cata $ \case
      Var' s   -> pure (Var s)
      Add' x y -> x <> y
      Mul' x y -> pure (foldr1 Mul (x <> y))
expandTooClever has one relatively serious problem: as it calls flatten, a full-blown fold, for both subtrees whenever it reaches a Mul, it has horrible asymptotics for chains of Mul.
Brute force, simplest-thing-that-could-possibly-work solution, with an algebra that calls itself recursively:
expandBrute :: E a -> E a
expandBrute = cata alg
  where
    alg = \case
      Var' s            -> Var s
      Add' x y          -> Add x y
      Mul' (Add x x') y -> Add (alg (Mul' x y)) (alg (Mul' x' y))
      Mul' x (Add y y') -> Add (alg (Mul' x y)) (alg (Mul' x y'))
      Mul' x y          -> Mul x y
The recursive calls are needed because the distribution might introduce new occurrences of Add under Mul: distributing (a + b) * (c + d) on the left yields a * (c + d) + b * (c + d), and the two new Mul nodes still have Add children.
A slightly more tasteful variant of expandBrute, with the recursive call factored out into a separate function:
expandNotSoBrute :: E a -> E a
expandNotSoBrute = cata alg
  where
    alg = \case
      Var' s   -> Var s
      Add' x y -> Add x y
      Mul' x y -> dis x y
    dis (Add x x') y = Add (dis x y) (dis x' y)
    dis x (Add y y') = Add (dis x y) (dis x y')
    dis x y          = Mul x y
A tamed expandNotSoBrute, with dis being turned into an apomorphism. This way of phrasing it expresses nicely the big picture of what is going on: if you only have Vars and Adds, you can trivially reproduce the tree bottom-up without a care in the world; if you hit a Mul, however, you have to go back and reconstruct the whole subtree to perform the distributions (I wonder if there is a specialised recursion scheme that captures this pattern).
expandEvert :: E a -> E a
expandEvert = cata alg
  where
    alg :: EF a (E a) -> E a
    alg = \case
      Var' s   -> Var s
      Add' x y -> Add x y
      Mul' x y -> apo coalg (x, y)
    coalg :: (E a, E a) -> EF a (Either (E a) (E a, E a))
    coalg (Add x x', y) = Add' (Right (x, y)) (Right (x', y))
    coalg (x, Add y y') = Add' (Right (x, y)) (Right (x, y'))
    coalg (x, y)        = Mul' (Left x) (Left y)
apo is necessary because we want to anticipate the final result if there is nothing else to distribute. (There is a way to write it with ana; however, that requires wastefully rebuilding trees of Muls without changes, which leads to the same asymptotics problem expandTooClever had.)
Last, but not least, a solution which is both a successful realisation of what I had attempted with expandTooClever and my interpretation of amalloy's answer. BT is a garden-variety binary tree with values on the leaves. A product is represented by a BT a, while a sum of products is a tree of trees.
expandSOP :: E a -> E a
expandSOP = cata algS . fmap (cata algP) . cata algSOP
  where
    algSOP :: EF a (BT (BT a)) -> BT (BT a)
    algSOP = \case
      Var' s   -> pure (pure s)
      Add' x y -> x <> y
      Mul' x y -> (<>) <$> x <*> y
    algP :: BTF a (E a) -> E a
    algP = \case
      Leaf' s     -> Var s
      Branch' x y -> Mul x y
    algS :: BTF (E a) (E a) -> E a
    algS = \case
      Leaf' x     -> x
      Branch' x y -> Add x y
BT and its instances:
data BT a = Leaf a | Branch (BT a) (BT a)
  deriving (Eq, Show)

data BTF a b = Leaf' a | Branch' b b
  deriving (Eq, Show, Functor)

type instance Base (BT a) = BTF a

instance Recursive (BT a) where
  project (Leaf s)     = Leaf' s
  project (Branch l r) = Branch' l r

instance Corecursive (BT a) where
  embed (Leaf' s)     = Leaf s
  embed (Branch' l r) = Branch l r

instance Semigroup (BT a) where
  l <> r = Branch l r

-- Writing this, as opposed to deriving it, for the sake of illustration.
instance Functor BT where
  fmap f = cata $ \case
    Leaf' x     -> Leaf (f x)
    Branch' l r -> Branch l r

instance Applicative BT where
  pure x = Leaf x
  u <*> v = ana coalg (u, v)
    where
      coalg = \case
        (Leaf f, Leaf x)       -> Leaf' (f x)
        (Leaf f, Branch xl xr) -> Branch' (Leaf f, xl) (Leaf f, xr)
        (Branch fl fr, v)      -> Branch' (fl, v) (fr, v)
To wrap things up, a test suite:
newtype TestE = TestE { getTestE :: E Char }
  deriving (Eq, Show)

instance Arbitrary TestE where
  arbitrary = TestE <$> sized genExpr
    where
      genVar   = Var <$> choose ('a', 'z')
      genAdd n = Add <$> genSub n <*> genSub n
      genMul n = Mul <$> genSub n <*> genSub n
      genSub n = genExpr (n `div` 2)
      genExpr  = \case
        0 -> genVar
        n -> oneof [genVar, genAdd n, genMul n]

data TestRig b = TestRig (Map Char b) (E Char)
  deriving (Show)

instance Arbitrary b => Arbitrary (TestRig b) where
  arbitrary = do
    e <- genExpr
    d <- genDict e
    return (TestRig d e)
    where
      genExpr   = getTestE <$> arbitrary
      genDict x = M.fromList . zip (keys x) <$> (infiniteListOf arbitrary)
      keys      = nub . toList

unsafeSubst :: Ord a => Map a b -> E a -> E b
unsafeSubst dict = fmap (dict !)

eval :: Num a => E a -> a
eval = cata $ \case
  Var' x   -> x
  Add' x y -> x + y
  Mul' x y -> x * y

evalRig :: (E Char -> E Char) -> TestRig Integer -> Integer
evalRig f (TestRig d e) = eval (unsafeSubst d (f e))

mkPropEval :: (E Char -> E Char) -> TestRig Integer -> Bool
mkPropEval f = (==) <$> evalRig id <*> evalRig f

isDistributed :: E a -> Bool
isDistributed = para $ \case
  Add' (_, x) (_, y)  -> x && y
  Mul' (Add _ _, _) _ -> False
  Mul' _ (Add _ _, _) -> False
  Mul' (_, x) (_, y)  -> x && y
  _                   -> True

mkPropDist :: (E Char -> E Char) -> TestE -> Bool
mkPropDist f = isDistributed . f . getTestE

main = mapM_ test
  [ ("expandTooClever" , expandTooClever)
  , ("expandBrute"     , expandBrute)
  , ("expandNotSoBrute", expandNotSoBrute)
  , ("expandEvert"     , expandEvert)
  , ("expandSOP"       , expandSOP)
  ]
  where
    test (header, func) = do
      putStrLn $ "Testing: " ++ header
      putStr "Evaluation test: "
      quickCheck $ mkPropEval func
      putStr "Distribution test: "
      quickCheck $ mkPropDist func
By sum of products I mean sort of "curried" n-ary sum of products. A concise definition of sum of products is that I want an expression without any parentheses, with all parens represented by associativity and priority.
We can adjust the solutions above so that the sums are reassociated. The easiest way is replacing the outer BT in expandSOP with NonEmpty. Given that the multiplication there is, much like you suggest, liftA2 (<>), this works straight away.
expandFlat :: E a -> E a
expandFlat = cata algS . fmap (cata algP) . cata algSOP
  where
    algSOP :: EF a (NonEmpty (BT a)) -> NonEmpty (BT a)
    algSOP = \case
      Var' s   -> pure (Leaf s)
      Add' x y -> x <> y
      Mul' x y -> (<>) <$> x <*> y
    algP :: BTF a (E a) -> E a
    algP = \case
      Leaf' s     -> Var s
      Branch' x y -> Mul x y
    algS :: NonEmptyF (E a) (E a) -> E a
    algS = \case
      NonEmptyF x Nothing  -> x
      NonEmptyF x (Just y) -> Add x y
Another option is using any of the other solutions and reassociating the sums in the distributed tree in a separate step.
flattenSum :: E a -> E a
flattenSum = cata alg
  where
    alg = \case
      Add' x y -> apo coalg (x, y)
      x        -> embed x
    coalg = \case
      (Add x x', y) -> Add' (Left x) (Right (x', y))
      (x, y)        -> Add' (Left x) (Left y)
We can also roll flattenSum and expandEvert into a single function. Note that the sum coalgebra needs an extra case when it gets the result of the distribution coalgebra. That happens because, as the coalgebra proceeds from top to bottom, we can't be sure that the subtrees it generates are properly associated.
-- This is written in a slightly different style than the previous functions.
expandDirect :: E a -> E a
expandDirect = cata alg
  where
    alg = \case
      Var' s -> Var s
      Add' x y -> apo coalgAdd (Add x y)
      Mul' x y -> (apo coalgAdd' . apo coalgMul) (Mul x y)
    coalgAdd = \case
      Add (Add x x') y -> Add' (Left x) (Right (Add x' y))
      x -> Left <$> project x
    coalgAdd' = \case
      Add (Add x x') y -> Add' (Left x) (Right (Add x' y))
      Add x (Add y y') -> Add' (Left x) (Right (Add y y'))
      x -> Left <$> project x
    coalgMul = \case
      Mul (Add x x') y -> Add' (Right (Mul x y)) (Right (Mul x' y))
      Mul x (Add y y') -> Add' (Right (Mul x y)) (Right (Mul x y'))
      x -> Left <$> project x
Perhaps there is a more clever way of writing expandDirect, but I haven't figured it out yet.

What exactly makes a type system consistent?

I've taken András Kovács's DBIndex.hs, a very simple implementation of a dependently typed core, and tried simplifying it even further, as much as I possibly could, without "breaking" the type system. After several simplifications, I was left with something much smaller:
{-# language LambdaCase, ViewPatterns #-}

data Term
  = V !Int
  | A Term Term
  | L Term Term
  | S
  | E
  deriving (Eq, Show)

data VTerm
  = VV !Int
  | VA VTerm VTerm
  | VL VTerm (VTerm -> VTerm)
  | VS
  | VE

type Ctx = ([VTerm], [VTerm], Int)

eval :: Bool -> Term -> Term
eval typ term = err (quote 0 (eval term typ ([], [], 0))) where
  eval :: Term -> Bool -> Ctx -> VTerm
  eval E _ _ = VE
  eval S _ _ = VS
  eval (V i) typ ctx@(vs, ts, _) = (if typ then ts else vs) !! i
  eval (L a b) typ ctx@(vs,ts,d) = VL a' b' where
    a' = eval a False ctx
    b' = \v -> eval b typ (v:vs, a':ts, d+1)
  eval (A f x) typ ctx = fx where
    f' = eval f typ ctx
    x' = eval x False ctx
    xt = eval x True ctx
    fx = case f' of
      (VL a b) -> if check a xt then b x' else VE -- type mismatch
      VS       -> VE                              -- non function application
      f        -> VA f x'

  check :: VTerm -> VTerm -> Bool
  check VS _ = True
  check a b  = quote 0 a == quote 0 b

err :: Term -> Term
err term = if ok term then term else E where
  ok (A a b) = ok a && ok b
  ok (L a b) = ok a && ok b
  ok E       = False
  ok t       = True

quote :: Int -> VTerm -> Term
quote d = \case
  VV i   -> V (d - i - 1)
  VA f x -> A (quote d f) (quote d x)
  VL a b -> L (quote d a) (quote (d + 1) (b (VV d)))
  VS     -> S
  VE     -> E

reduce :: Term -> Term
reduce = eval False

typeof :: Term -> Term
typeof = eval True
The problem is that I have no idea what makes a type system consistent, so I had no criteria (other than intuition) and probably broke it in several ways. It, more or less, does what I think a type system should do, though:
main :: IO ()
main = do
  -- id = ∀ (a:*) . (λ (x:a) . a)
  let id = L S (L (V 0) (V 0))
  -- nat = ∀ (a:*) . (a -> a) -> (a -> a)
  let nat = L S (L (L (V 0) (V 1)) (L (V 1) (V 2)))
  -- succ = λ (n:nat) . ∀ (a:*) . λ (s : a -> a) . λ (z:a) . s (n a s z)
  let succ = L nat (L S (L (L (V 0) (V 1)) (L (V 1) (A (V 1) (A (A (A (V 3) (V 2)) (V 1)) (V 0))))))
  -- zero = λ (a:*) . λ (s : a -> a) . λ (z : a) . z
  let zero = L S (L (L (V 0) (V 1)) (L (V 1) (V 0)))
  -- add = λ (x:nat) . λ (y:nat) . λ (a:*) . λ(s: a -> a) . λ (z : a) . (x a s (y a s z))
  let add = L nat (L nat (L S (L (L (V 0) (V 1)) (L (V 1) (A (A (A (V 4) (V 2)) (V 1)) (A (A (A (V 3) (V 2)) (V 1)) (V 0)))))))
  -- bool = ∀ (a:*) . a -> a -> a
  let bool = L S (L (V 0) (L (V 1) (V 2)))
  -- false = ∀ (a:*) . λ (x : a) . λ(y : a) . x
  let false = L S (L (V 0) (L (V 1) (V 0)))
  -- true = ∀ (a:*) . λ (x : a) . λ(y : a) . y
  let true = L S (L (V 0) (L (V 1) (V 1)))
  -- loop = ((λ (x:*) . (x x)) (λ (x:*) . (x x)))
  let loop = A (L S (A (V 0) (V 0))) (L S (A (V 0) (V 0)))
  -- natOrBoolId = λ (a:bool) . λ (t:(if a S then nat else bool)) . λ (x:t) . t
  let natOrBoolId = L bool (L (A (A (A (V 0) S) nat) bool) (V 0))
  -- nat
  let c0 = zero
  let c1 = A succ zero
  let c2 = A succ c1
  let c3 = A succ c2
  let c4 = A succ c3
  let c5 = A succ c4
  -- Tests
  let test name pass = putStrLn $ "- " ++ (if pass then "OK." else "ERR") ++ " " ++ name
  putStrLn "True and false are bools"
  test "typeof true == bool " $ typeof true == bool
  test "typeof false == bool " $ typeof false == bool
  putStrLn "Calling 'true nat' on two nats selects the first one"
  test "reduce (true nat c1 c2) == c1" $ reduce (A (A (A true nat) c1) c2) == reduce c1
  test "typeof (true nat c1 c2) == nat" $ typeof (A (A (A true nat) c1) c2) == nat
  putStrLn "Calling 'true nat' on a bool is a type error"
  test "reduce (true nat true c2) == E" $ reduce (A (A (A true nat) true) c2) == E
  test "reduce (true nat c2 true) == E" $ reduce (A (A (A true nat) c2) true) == E
  putStrLn "More type errors"
  test "reduce (succ true) == E" $ reduce (A succ true) == E
  putStrLn "Addition works"
  test "reduce (add c2 c3) == c5" $ reduce (A (A add c2) c3) == reduce c5
  test "typeof (add c2 c2) == nat" $ typeof (A (A add c2) c3) == nat
  putStrLn "Loop isn't typeable"
  test "typeof loop == E" $ typeof loop == E
  putStrLn "Function with type that depends on value"
  test "typeof (natOrBoolId true c2) == nat" $ typeof (A (A natOrBoolId true) c2) == nat
  test "typeof (natOrBoolId true true) == E" $ typeof (A (A natOrBoolId true) true) == E
  test "typeof (natOrBoolId false c2) == E" $ typeof (A (A natOrBoolId false) c2) == E
  test "typeof (natOrBoolId false true) == bool" $ typeof (A (A natOrBoolId false) true) == bool
My question is, what exactly makes a system consistent? Specifically:
What problems do I get from the things I did (removing Pi, merging infer/eval, etc.)? Can those be somehow "justified" (generating a different system but still "correct")?
Basically: is it possible to fix this system (i.e., make it "suitable as the core of a dependently typed language just like CoC") while keeping it small?
Runnable code.
First of all, consistency is a thing in mathematical logic, meaning that a logic doesn't contain a contradiction. I've seen people speaking about the "consistency" of type theories, but unfortunately it is not a well-defined term.
In the context of lambda calculus, many people use the term "consistency" to mean vastly different things. Some people say the untyped lambda-calculus is consistent, by which they mean that different derivations of a lambda-term lead to the same result (i.e. the Church-Rosser property). In that respect, most calculi are consistent.
However, consistent can also mean that the Curry-Howard isomorphic logic of the calculus is consistent. Simply-typed lambda-calculus corresponds to (the implicational fragment of) intuitionistic propositional logic, and when people say STLC is consistent they really mean that this logic is consistent. In this sense of consistency, consistency means that the bottom (void) type doesn't have an inhabitant (and, thus, a derivation). That is, every expression must produce a value with a valid type. Bottom corresponds to falsehood, so this means that falsehood can't be derived (because you can derive anything from falsehood).
Now, in this sense, to be consistent you can't have nontermination (fix), non-returning functions, or control operators (call/cc and friends). Haskell is not consistent in this sense, because you can write functions that produce arbitrary types (the function f x = f x has type a -> b; obviously, this isn't consistent). Similarly, you can return undefined from anything, making it produce something of the bottom type. So, to make sure that a type theory is consistent in this sense, you'd have to make sure that you can't return the empty type from any expression.
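To make that concrete, a small sketch (the names are mine, not from the answer): general recursion alone inhabits every type in Haskell, including the empty type, which is exactly what Curry-Howard consistency forbids.
import Data.Void (Void)

-- A "proof" of anything from anything, via nontermination.
anything :: a -> b
anything x = anything x

-- In particular, an "inhabitant" of the empty type, i.e. a derivation of falsehood.
refuted :: Void
refuted = anything ()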
But then, in the first sense of "consistency", Haskell is consistent I believe (barring some wacky features). Equational equivalence should be good.

Generalizing traversal of expressions with changes on specific nodes

I'm reading the lectures from the CIS194 course (fall 2014 version) and I wanted to get some comments on my solution to Exercise 7 of Homework 5.
Here is the question:
Here is the question:
Exercise 7 (Optional) The distribute and squashMulId functions are quite similar, in that they traverse over the whole expression to make changes to specific nodes. Generalize this notion, so that the two functions can concentrate on just the bit that they need to transform.
(You can see the whole homework here: http://www.seas.upenn.edu/~cis194/fall14/hw/05-type-classes.pdf)
Here is my squashMulId function from Exercise 6:
squashMulId :: (Eq a, Ring a) => RingExpr a -> RingExpr a
squashMulId AddId      = AddId
squashMulId MulId      = MulId
squashMulId (Lit n)    = Lit n
squashMulId (AddInv x) = AddInv (squashMulId x)
squashMulId (Add x y)  = Add (squashMulId x) (squashMulId y)
squashMulId (Mul x (Lit y))
  | y == mulId = squashMulId x
squashMulId (Mul (Lit x) y)
  | x == mulId = squashMulId y
squashMulId (Mul x y)  = Mul (squashMulId x) (squashMulId y)
And here is my solution to Exercise 7:
distribute :: RingExpr a -> RingExpr a
distribute = transform distribute'
  where distribute' (Mul x (Add y z)) = Just $ Add (Mul x y) (Mul x z)
        distribute' (Mul (Add x y) z) = Just $ Add (Mul x z) (Mul y z)
        distribute' _ = Nothing

squashMulId :: (Eq a, Ring a) => RingExpr a -> RingExpr a
squashMulId = transform simplifyMul
  where simplifyMul (Mul x (Lit y))
          | y == mulId = Just $ squashMulId x
        simplifyMul (Mul (Lit x) y)
          | x == mulId = Just $ squashMulId y
        simplifyMul _ = Nothing

transform :: (RingExpr a -> Maybe (RingExpr a)) -> RingExpr a -> RingExpr a
transform f e
  | Just expr <- f e = expr
transform _ AddId      = AddId
transform _ MulId      = MulId
transform _ e@(Lit n)  = e
transform f (AddInv x) = AddInv (transform f x)
transform f (Add x y)  = Add (transform f x) (transform f y)
transform f (Mul x y)  = Mul (transform f x) (transform f y)
Is there a better way of doing such generalization?
Your transform function is a very good start at handling general transformations of ASTs. I'm going to show you something closely related that's a little bit more general.
The uniplate library defines the following class for describing simple abstract syntax trees. An instance of the class needs only provide a definition for uniplate, which should perform a step of transformation, possibly with side effects, to the immediate descendants of the node.
import Control.Applicative
import Control.Monad.Identity

class Uniplate a where
  uniplate :: Applicative m => a -> (a -> m a) -> m a

  descend :: (a -> a) -> a -> a
  descend f x = runIdentity $ descendM (pure . f) x

  descendM :: Applicative m => (a -> m a) -> a -> m a
  descendM = flip uniplate
A postorder transformation of the entire expression is defined for any type with a Uniplate instance.
transform :: Uniplate a => (a -> a) -> a -> a
transform f = f . descend (transform f)
My guess for the definitions of RingExpr and Ring (the link to the homework exercise is broken) are
data RingExpr a
  = AddId
  | MulId
  | Lit a
  | AddInv (RingExpr a)
  | Add (RingExpr a) (RingExpr a)
  | Mul (RingExpr a) (RingExpr a)
  deriving Show

class Ring a where
  addId :: a
  mulId :: a
  addInv :: a -> a
  add :: a -> a -> a
  mul :: a -> a -> a
We can define uniplate for any RingExpr a. For the three expressions that have subexpressions, AddInv, Add, and Mul, we perform the transformation p on each subexpression, then put the expression back together again in the Applicative using <$> (an infix version of fmap), and <*>. For the remaining expressions that have no sub expressions, we just package them back up in the Applicative using pure.
instance Uniplate (RingExpr a) where
  uniplate e p = case e of
    AddInv x -> AddInv <$> p x
    Add x y  -> Add <$> p x <*> p y
    Mul x y  -> Mul <$> p x <*> p y
    _        -> pure e
The transform function we get from the Uniplate instance for RingExpr a is exactly the same as yours, except the Maybe return type wasn't necessary. A function that didn't want to change an expression could have simply returned the expression unchanged.
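For instance, here is a sketch of the distribute from the question rewritten against this transform (no Maybe; expressions that need no change are simply returned unchanged):
distribute :: RingExpr a -> RingExpr a
distribute = transform distribute'
  where
    distribute' (Mul x (Add y z)) = Add (Mul x y) (Mul x z)
    distribute' (Mul (Add x y) z) = Add (Mul x z) (Mul y z)
    distribute' e = e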
Here's my swing at writing squashMulId. I've split replacing the literals out into a separate function to make things look cleaner.
replaceIds :: (Eq a, Ring a) => RingExpr a -> RingExpr a
replaceIds (Lit n) | n == addId = AddId
replaceIds (Lit n) | n == mulId = MulId
replaceIds e = e
simplifyMul :: RingExpr a -> RingExpr a
simplifyMul (Mul x MulId) = x
simplifyMul (Mul MulId x ) = x
simplifyMul e = e
squashMulId :: (Eq a, Ring a) => RingExpr a -> RingExpr a
squashMulId = transform (simplifyMul . replaceIds)
This works for a simple example
> squashMulId . Mul (Lit True) . Add (Lit False) $ Lit True
Add AddId MulId
