Haskell data types for 3-address code

I'm trying to create a compiler into an intermediate language, and for that purpose I created the following data types:
data C = Atr Var Var
  deriving (Show)

data E = Var Op Var
  deriving (Show)

data Var = V String
         | N Int
  deriving (Show)

data Op = OpPlus
        | OpMinus
        | OpMult
        | OpDiv
  deriving (Show)
But I'm having problems when I use the E data type, for example:
compileE :: Exp -> Int -> [(String,Int)] -> (Int, Var, [C])
compileE (Plus e1 e2) k regs =
  let (v1,t1,l1) = compileE e1 k regs;
      (v2,t2,l2) = compileE e2 v1 regs;
      t = new_Var v2
  in (v2+1, V t, l1 ++ l2 ++ [Atr (V t) (t1 OpPlus t2)])
In the expression "t1 OpPlus t2", it says t1 is applied to too many arguments. That would make sense since t1 doesn't exist in the data type, but t1 is a Var and that expression should be in the Var Op Var form.
Any sugestions on a work around?
EDIT:
Apparently my English wasn't clear and I didn't explain my question well:
My intermediate language needs to be defined using this rule, among others: "E <- Var Op Var", where E is an expression, Var can be an int or a string (in the case of a variable), and Op is the operator symbol I'm using. At the end of the compileE function, I'm trying to concatenate the previous expressions (l1 and l2) with the current one to create the [C] list.
My problem is that I can't build the "Var Op Var" form (t1 OpPlus t2), because t1 is being treated as a function rather than as part of the Var data type.

It looks like you forgot to give E a data constructor of its own. data E = Var Op Var defines Var as the data constructor for E, and it takes two arguments (one Op, one Var). Because Var is also the name of a separate type, this is not a good idea. It seems that you meant to write data E = E' Var Op Var, where E' is some new name (it could even be E!), after which (assuming I'm reading your intent correctly) t1 OpPlus t2 becomes E' t1 OpPlus t2.
Note that, even after you fix this, your code still won't work, as it looks like you're trying to use an E value as the second argument to Atr, even though the second argument should be a Var. What are you really trying to do there?

data Exp = Plus Exp Exp
         | Name String
         | Const Int

data C = Atr Var E
  deriving (Show)

data E = E Var Op Var
  deriving (Show)

data Var = V String
         | N Int
  deriving (Show)

data Op = OpPlus
        | OpMinus
        | OpMult
        | OpDiv
  deriving (Show)

new_Var :: Int -> String
new_Var i = "sym" ++ show i

compileE :: Exp -> Int -> [(String,Int)] -> (Int, Var, [C])
compileE (Name s)     k regs = (k, V s, [])
compileE (Const i)    k regs = (k, N i, [])
compileE (Plus e1 e2) k regs = (v2+1, V t, l1 ++ l2 ++ [Atr (V t) (E t1 OpPlus t2)])
  where (v1,t1,l1) = compileE e1 k regs
        (v2,t2,l2) = compileE e2 v1 regs
        t          = new_Var v2
The above compiles. I made up an Exp type to match more or less the AST I thought compileE was expecting. The end result matches the signature of your function. new_Var generates symbols for your variables. At this point you probably don't need the regs argument yet; it will likely become useful at a later stage, when your compiler does its register allocation phase.
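As a quick check (the sample expression here is just an illustration), loading the above in GHCi and compiling a small expression should print something like:

ghci> compileE (Plus (Name "a") (Const 2)) 0 []
(1,V "sym0",[Atr (V "sym0") (E (V "a") OpPlus (N 2))])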

Related

Confusing Error from type matching (I think) - Haskell

Getting an error I just can't make sense of.
Basically my program is supposed to perform operations on a custom data type 'Config':
type Vname = String
type Val = Int
type State = [(Vname, Val)]

data Instr =
    LOADI Int
  | LOAD Val
  | ADD
  | STORE Val
  | JMP Int
  | JMPLESS Int
  | JMPGE Int
  deriving (Eq, Read, Show)

data Stack = Stack [Int]

type Config = (Int, State, Stack)

iexec :: Instr -> Config -> Config
iexec (LOADI n) c = do {
    (x,y,z) <- c
  ; return ( x+1, y, x )
  }
I've only tried to implement one of the instructions 'LOADI' but no matter what I seem to change I get the error:
• Couldn't match expected type ‘Stack’
with actual type ‘(c1, b0, c0)’
• In the pattern: (x, y, z)
In a stmt of a 'do' block: (x, y, z) <- c
In the expression:
do (x, y, z) <- c
return (x + 1, y, x)
|
42 | (x,y,z) <- c
| ^^^^^^^
Stack is just a list of Ints, but I have to implement it this way.
What is (c1, b0, c0), and why doesn't my third value of the config tuple work the same as the others?
Any help would be much appreciated!
Joe
iexec :: Instr -> Config -> Config
Ok, so iexec takes two arguments and returns a Config (a tuple).
iexec (LOADI n) c = do
Oh, but this says iexec takes two arguments and returns something in the form Monad m => m a (usually IO for beginners but not always).
You don't want do notation here - you're not making a monadic operation at all, just a pure non-monadic computation. For this you should use let ... in ... if you need intermediate values or just the resulting expression if not:
iexec :: Instr -> Config -> Config
iexec (LOADI n) (x,y,z) = (x+1, y, x)
Or consider the equivalent:
iexec :: Instr -> Config -> Config
iexec (LOADI n) (x,y,z) =
  let newPC    = x+1
      newState = y
      newStack = x
  in (newPC, newState, newStack)
Now, of course, that newStack value being x isn't going to work at all, because a Stack is not an Int, but that's just matching your original code.
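If LOADI is meant to push its literal onto the stack (a guess from the name; check the semantics you were given), the pattern match can simply go one level deeper:

iexec :: Instr -> Config -> Config
-- Assuming LOADI n pushes n onto the stack and advances the program counter.
iexec (LOADI n) (pc, st, Stack xs) = (pc + 1, st, Stack (n : xs))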

How can I print out the result of evaluating the expression? I can't seem to find a way to print Val

I made a pattern where, if you give it only a number, it returns the Value. I would just add deriving (Show) to Val as well, but that doesn't work because of (Val -> Val) (that's what I understood from the error messages). Does anyone know what I could do?
import GHC.Show (Show)

type Var = String

-- Expressions of source code in the form of an abstract syntax tree
data Exp = Enum Int          -- constant
         | Evar Var          -- variable
         | Elet Var Exp Exp  -- expr "let x = e1 in e2"
         | Ecall Exp Exp     -- function call
  deriving (Show)

-- Returned values
data Val = Vnum Int           -- whole number
         | Vprim (Val -> Val) -- a primitive

mkPrim :: (Int -> Int -> Int) -> Val
mkPrim f = Vprim (\(Vnum x) -> Vprim (\(Vnum y) -> Vnum (f x y)))

-- Initial environment that contains all primitives
type Env = [(Var, Val)]

pervasive :: Env
pervasive = [("+", mkPrim (+)), ("-", mkPrim (-)), ("*", mkPrim (*)), ("/", mkPrim div)]

eval :: Env -> Exp -> Val
eval pervasive (Enum n) = Vnum n

sampleExp = Elet "x" (Enum 3) (Ecall (Ecall (Evar "+") (Evar "x")) (Enum 4))

main = do print (eval pervasive (Enum 4))
Consider implementing a custom Show instance:
instance Show Val where
  show (Vnum n)  = show n
  show (Vprim _) = "<prim>"
This, however, makes all printed primitives indistinguishable. It might be convenient to equip primitives with names:
data Val = Vnum Int | Vprim String (Val -> Val)

instance Show Val where
  show (Vnum n)       = show n
  show (Vprim name _) = name
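With that change, mkPrim and pervasive from the question would pass the name along; a sketch (the extra String argument is the only difference):

mkPrim :: String -> (Int -> Int -> Int) -> Val
mkPrim name f = Vprim name (\(Vnum x) -> Vprim name (\(Vnum y) -> Vnum (f x y)))

pervasive :: Env
pervasive = [ ("+", mkPrim "+" (+)), ("-", mkPrim "-" (-))
            , ("*", mkPrim "*" (*)), ("/", mkPrim "/" div) ]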

Haskell: define interpreter for language

As part of defining a language interpreter I have the following definition:
initCtx :: Context
initCtx = (Map.empty, initEnv)
where initEnv =
Map.fromList [ ...
, ("+", undefined)
...
]
And for the sum operation I have this:
evalExpr :: Expr -> Interpreter Value
evalExpr (e1 :+: e2) = do v1 <- eval e1
                          v2 <- eval e2
                          return Interpreter $ (v1 + v2)
The evalExpr is my own attempt, but what do I need to write in place of undefined so that the expression is executed? Or perhaps I am looking at the wrong thing? I have more operations, of course, but I just need one example. I am very new to Haskell and am practically required to deal with it.
Thanks!
EDIT:
type Env = Map Ident Value
type Primitive = [Value] -> Interpreter Value
type PEnv = Map FunName Primitive
type Context = (Env, PEnv)
Here's Expr:
data Expr = Number Int
          | String String
          | Array [Expr]
          | Undefined
          | TrueConst
          | FalseConst
          | Var Ident
          | Compr ArrayFor Expr
          | Call FunName [Expr]
          | Assign Ident Expr
          | Comma Expr Expr
  deriving (Eq, Read, Show)
And Interpreter
newtype Interpreter a = Interpreter {runInterpreter :: Context -> Either Error (a, Env)}
And Value:
data Value = IntVal Int
           | UndefinedVal
           | TrueVal | FalseVal
           | StringVal String
           | ArrayVal [Value]
  deriving (Eq, Show)
Alright, I'll take a stab at this...
So it looks like a Context consists of a pair of maps. It appears the first map lets you look up a variable's value from its name. The second one lets you look up a function name and get the corresponding executable code for it.
You're using Interpreter as if it's a monad; I don't know whether it actually is, but it looks plausible.
So initCtx starts off with no variables defined (Map.empty), and presumably you're meant to put a bunch of predefined functions like + in the second map.
Looking at the definition for Primitive, it takes a list of Value and returns an Interpreter computation. So I guess it looks something like
addExpr :: [Value] -> Interpreter Value
addExpr [e1, e2] = do
  v1 <- eval e1
  v2 <- eval e2
  return (v1 + v2)
and then you could write Map.fromList [... ("+", addExpr) ...].
Except that's not quite right either. Value isn't a number type; what happens if, say, v1 = StringVal? For that matter, what if somebody calls "+" with a different number of arguments? We need some error checking here.
I'm guessing the thing you're looking for probably looks something like this:
checkInt :: Value -> Interpreter Int
checkInt (IntVal x) = return x
checkInt _ = Interpreter $ \ctx -> Left "Not an integer."

addExpr :: [Value] -> Interpreter Value
addExpr [expr1, expr2] = do
  val1 <- eval expr1
  int1 <- checkInt val1
  val2 <- eval expr2
  int2 <- checkInt val2
  return (IntVal $ int1 + int2)
addExpr _ = Interpreter $ \ctx -> error "Wrong number of arguments."
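Following the Map.fromList suggestion above, the undefined in initCtx would then be replaced by the primitive itself, something like:

initCtx :: Context
initCtx = (Map.empty, initEnv)
  where initEnv = Map.fromList [ ("+", addExpr) ]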

Neighborhood of a mathematical expression using Haskell

I'm trying to implement with Haskell an algorithm to manipulate mathematical expressions.
I have this data type :
data Exp = Var String | IVal Int | Add Exp Exp
This will be enough for my question.
Given a set of expression transformations, for example :
(Add a b) => (Add b a)
(Add (Add a b) c) => (Add a (Add b c))
And an expression, for example x = (Add (Add x y) (Add z t)), I want to find all expressions in the neighborhood of x, where the neighborhood of x is defined as follows: y is in Neighborhood(x) if y can be reached from x with a single transformation.
I am new to Haskell. I am not even sure Haskell is the right tool for this job.
The final goal is to get a function : equivalent x which returns a set of all expressions that are equivalent to x. In other words, the set of all expressions that are in the closure of the neighborhood of x (given a set of transformations).
Right now, I have the following :
import Data.List (nub)
import Data.Set

data Exp = IVal Int
         | Scalar String
         | Add Exp Exp
  deriving (Show, Eq, Ord)

commu (Add a b) = Add b a
commu x = x

assoc (Add (Add a b) c) = Add a (Add b c)
assoc (Add a (Add b c)) = Add (Add a b) c
assoc x = x

neighbors x = [commu x, assoc x]

equiv :: [Exp] -> [Exp]
equiv closure
  | closure == closureUntilNow = closure
  | otherwise = equiv closureUntilNow
  where closureUntilNow = nub $ closure ++ concat [neighbors x | x <- closure]
But it's probably slower than needed (nub is O(n^2)), and some terms are missing.
For example, if you have f = (x+y)+z, you will not get (x+z)+y, among others.
Imports, etc. below. I'll be using the multiset package.
import Control.Monad
import Data.MultiSet as M
data Exp = Var String | IVal Int | Add Exp Exp deriving (Eq, Ord, Show, Read)
A bit of paper-and-pencil work shows the following fact: expressions e1 and e2 are in the congruence closure of your relation iff their multisets of leaves are equal. By leaves, I mean the Var and IVal values, e.g. the output of the following function:
leaves :: Exp -> MultiSet Exp
leaves (Add a b) = leaves a `union` leaves b
leaves e = singleton e
So this suggests a nice clean way to generate all the elements in a particular value's neighborhood (without attempting to generate any duplicates in the first place). First, generate the multiset of leaves; then nondeterministically choose a partition of the multiset and recurse. The code to generate partitions might look like this:
partitions :: Ord k => MultiSet k -> [(MultiSet k, MultiSet k)]
partitions = go . toOccurList where
  go [] = [(empty, empty)]
  go ((k, n):bag) = do
    n' <- [0..n]
    (left, right) <- go bag
    return (insertMany k n' left, insertMany k (n-n') right)
Actually, we only want partitions where both the left and the right part are non-empty. But we'll check that after we've generated them all; it's cheap, as there are only two per invocation of partitions that aren't like that. So now we can generate the whole neighborhood in one fell swoop:
neighborhood :: Exp -> [Exp]
neighborhood = go . leaves where
  full = guard . not . M.null
  go m
    | size m == 1 = toList m
    | otherwise = do
        (leftBag, rightBag) <- partitions m
        full leftBag
        full rightBag
        left  <- go leftBag
        right <- go rightBag
        return (Add left right)
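As a quick sanity check (the ordering of the result list may differ), three distinct leaves should give all twelve possible binary additions over them:

ghci> length (neighborhood (Add (Var "x") (Add (Var "y") (IVal 1))))
12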
By the way, the reason you're not getting all the terms is that you're generating the reflexive, transitive closure but not the congruence closure: you need to apply your rewrite rules deep inside the term, not just at the top level.

LaTeX natural deduction proofs using Haskell

How can one create LaTeX source for natural deduction proof trees (like those shown here) via Haskell, e.g. using HaTeX? I'd like to emulate LaTeX .stys like bussproofs.sty or proof.sty.
I'm using your question as an excuse to improve and demo a Haskell
call-tracing library I'm working
on. In the context of
tracing, an obvious way to create a proof tree is to trace a type
checker and then format the trace as a natural-deduction proof. To
keep things simple my example logic is the simply-typed lambda
calculus
(STLC),
which corresponds to the implicational fragment of propositional
intuitionistic
logic.
I am using proof.sty, but not via HaTeX or any other Haskell
Latex library. The Latex for proof trees is very simple and using a
Haskell Latex library would just complicate things.
I've written the proof-tree generation code twice:
in a self-contained way, by writing a type checker that also
returns a proof tree;
using my tracing library, by call-tracing a type checker and then
post processing the trace into a proof tree.
Since you didn't ask about call-tracing libraries, you may be less
interested in the call-trace based version, but I think it's
interesting to compare both versions.
Examples
Let's start with some output examples first, to see what all this gets us.
The first three examples are motivated
by an axiom system for implicational propositional
calculus;
the first two also happen to correspond to S and
K:
The first axiom, K, with proof terms:
The second axiom, S, with proof terms, but with premises in the
context, not lambda bound:
The fourth axiom, modus ponens, without proof terms:
The third axiom in that Wikipedia article (Peirce's law) is
non-constructive and so we can't prove it here.
For a different kind of example, here's a failed type check of the Y
combinator:
The arrows are meant to lead you to the error, which is marked with a
bang (!).
Code
Now I'll describe the code which generated those examples. The code is
from this
file
unless otherwise noted. I'm not including every line of code here;
see that link if you want something you can actually build using GHC
7.6.3.
Most of the code -- the grammar, parser, and pretty printer -- is the
same for both versions; only the type checkers and proof-tree
generators differ. All of the common code is in the file just
referenced.
STLC grammar
The STLC grammar in ASCII:
-- Terms
e ::= x | \x:T.e | e e
-- Types
T ::= A | T -> T
-- Contexts
C ::= . | C,x:T
And the corresponding Haskell:
type TmVar = String
type TyVar = String

data Tm = Lam TmVar Ty Tm
        | TmVar TmVar
        | Tm :#: Tm
  deriving Show

data Ty = TyVar TyVar
        | Ty :->: Ty
  deriving (Eq , Show)

type Ctx = [(TmVar,Ty)]
Type checking + proof tree generation
Both versions implement the same abstract STLC type checker. In ASCII:
(x:T) in C
---------- Axiom
C |- x : T
C,x:T1 |- e : T2
--------------------- -> Introduction
C |- \x:T1.e : T1->T2
C |- e : T1 -> T2 C |- e1 : T1
--------------------------------- -> Elimination
C |- e e1 : T2
Version 1: self-contained with inline proof-tree generation
The full code for this version is
here.
The proof-tree generation happens in the type checker, but the actual
proof-tree generation code is factored out into addProof and
conclusion.
Type checking
-- The mode is 'True' if proof terms should be included.
data R = R { _ctx :: Ctx , _mode :: Bool }
type M a = Reader R a
extendCtx :: TmVar -> Ty -> M a -> M a
extendCtx x t = local extend where
  extend r = r { _ctx = _ctx r ++ [(x,t)] }
-- These take the place of the inferred type when there is a type
-- error.
here , there :: String
here = "\\,!"
there = "\\,\\uparrow"
-- Return the inferred type---or error string if type inference
-- fails---and the latex proof-tree presentation of the inference.
--
-- This produces different output than 'infer' in the error case: here
-- all premises are always computed, whereas 'infer' stops at the
-- first failing premise.
inferProof :: Tm -> M (Either String Ty , String)
inferProof tm@(Lam x t e) = do
  (et' , p) <- extendCtx x t . inferProof $ e
  let et'' = (t :->:) <$> et'
  addProof et'' [p] tm
inferProof tm@(TmVar x) = do
  mt <- lookup x <$> asks _ctx
  let et = maybe (Left here) Right mt
  addProof et [] tm
inferProof tm@(e :#: e1) = do
  (et , p)   <- inferProof e
  (et1 , p1) <- inferProof e1
  case (et , et1) of
    (Right t , Right t1) ->
      case t of
        t1' :->: t2 | t1' == t1 -> addProof (Right t2) [p , p1] tm
        _                       -> addProof (Left here) [p , p1] tm
    _ -> addProof (Left there) [p , p1] tm
Proof tree generation
The addProof corresponds to proofTree in the other version:
-- Given the inferred type, the proof-trees for all premise inferences
-- (subcalls), and the input term, annotate the inferred type with a
-- result proof tree.
addProof :: Either String Ty -> [String] -> Tm -> M (Either String Ty , String)
addProof et premises tm = do
  R { _mode , _ctx } <- ask
  let (judgment , rule) = conclusion _mode _ctx tm et
  let tex = "\\infer[ " ++ rule ++ " ]{ " ++
            judgment ++ " }{ " ++
            intercalate " & " premises ++ " }"
  return (et , tex)
The code for conclusion is common to both versions:
conclusion :: Mode -> Ctx -> Tm -> Either String Ty -> (String , String)
conclusion mode ctx tm e = (judgment mode , rule tm)
  where
    rule (TmVar _) = "\\textsc{Axiom}"
    rule (Lam {})  = "\\to \\text{I}"
    rule (_ :#: _) = "\\to \\text{E}"

    tyOrError = either id pp e

    judgment True  = pp ctx ++ " \\vdash " ++ pp tm ++ " : " ++ tyOrError
    judgment False = ppCtxOnlyTypes ctx ++ " \\vdash " ++ tyOrError
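To make the output concrete: for the identity term \x:P.x (the I.lam example in the Makefile below), with proof terms switched on, the generated string comes out roughly as follows (whitespace added here for readability):

\infer[ \to \text{I} ]{ \cdot \vdash \lambda x {:} P . x : P {\to} P }{
  \infer[ \textsc{Axiom} ]{ x {:} P \vdash x : P }{ } }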
Version 2: via call-tracing, with proof-tree generation as post processing
Here the type checker is not even aware of proof-tree generation, and
adding call-tracing is just one line.
Type checking
type Mode = Bool
type Stream = LogStream (ProofTree Mode)
type M a = ErrorT String (ReaderT Ctx (Writer Stream)) a
type InferTy = Tm -> M Ty

infer , infer' :: InferTy
infer = simpleLogger (Proxy::Proxy "infer") ask (return ()) infer'

infer' (TmVar x) = maybe err pure . lookup x =<< ask where
  err = throwError $ "Variable " ++ x ++ " not in context!"
infer' (Lam x t e) = (t :->:) <$> (local (++ [(x,t)]) . infer $ e)
infer' (e :#: e1) = do
  t  <- infer e
  t1 <- infer e1
  case t of
    t1' :->: t2 | t1' == t1 -> pure t2
    _ -> throwError $ "Can't apply " ++ show t ++ " to " ++ show t1 ++ "!"
The LogStream
type
and ProofTree
class
are from the library. The LogStream is the type of log events that
the "magic"
simpleLogger
logs. Note the line
infer = simpleLogger (Proxy::Proxy "infer") ask (return ()) infer'
which defines infer to be a logged version of infer', the actual
type checker. That's all you have to do to trace a monadic function!
I won't get into how simpleLogger actually works here, but the
result is that each call to infer gets logged, including the
context, arguments, and return value, and these data get grouped
together with all logged subcalls (here only to infer). It would be
easy to manually write such logging code for infer, but it's nice
that with the library you don't have to.
Proof-tree generation
To generate the Latex proof trees, we implement ProofTree to post
process infer's call trace. The library provides a proofTree
function that calls the ProofTree methods and assembles the proof
trees; we just need to specify how the conclusions of the typing
judgments will be formatted:
instance ProofTree Mode (Proxy (SimpleCall "infer" Ctx InferTy ())) where
  callAndReturn mode t = conclusion mode ctx tm (Right ty)
    where
      (tm , ()) = _arg t
      ty        = _ret t
      ctx       = _before t
  callAndError mode t = conclusion mode ctx tm (Left error)
    where
      (tm , ()) = _arg' t
      how       = _how t
      ctx       = _before' t
      error     = maybe "\\,!" (const "\\,\\uparrow") how
The pp calls are to a user defined pretty printer; obviously, the
library can't know how to pretty print your data types.
Because calls can be erroneous -- the library detects errors
-- we have to say how to format
successful and failing calls. Refer to the Y-combinator example above
for an example of a failing type check, corresponding to the
callAndError case here.
The library's proofTree
function
is quite simple: it builds a proof.sty proof tree with the current
call as conclusion, and the subcalls as premises:
proofTree :: mode -> Ex2T (LogTree (ProofTree mode)) -> String
proofTree mode (Ex2T t@(CallAndReturn {})) =
  "\\infer[ " ++ rule ++ " ]{ " ++ conclusion ++ " }{ " ++ intercalate " & " premises ++ " }"
  where
    (conclusion , rule) = callAndReturn mode t
    premises = map (proofTree mode) (_children t)
proofTree mode (Ex2T t@(CallAndError {})) =
  "\\infer[ " ++ rule ++ " ]{ " ++ conclusion ++ " }{ " ++ intercalate " & " premises ++ " }"
  where
    (conclusion , rule) = callAndError mode t
    premises = map (proofTree mode)
                   (_children' t ++ maybe [] (:[]) (_how t))
I use proof.sty in the library because it allows arbitrarily many
premises, although bussproofs.sty would work for this STLC example
since no rule has more than five premises (the limit for
bussproofs). Both Latex packages are described
here.
Pretty printing
Now we return to code that is common between both versions.
The pretty printer that defines the pp used above is rather long --
it handles precedence and associativity, and is written in a way that
should be extensible if more terms, e.g. products, are added to the
calculus -- but mostly straightforward. First, we set up a precedence
table and a precedence-and-associativity-aware parenthesizer:
-- Precedence: higher value means tighter binding.
type Prec = Double
between :: Prec -> Prec -> Prec
between x y = (x + y) / 2
lowest , highest , precLam , precApp , precArr :: Prec
highest = 1
lowest = 0
precLam = lowest
precApp = between precLam highest
precArr = lowest
-- Associativity: left, none, or right.
data Assoc = L | N | R deriving Eq
-- Wrap a pretty print when the context dictates.
wrap :: Pretty a => Assoc -> a -> a -> String
wrap side ctx x = if prec x `comp` prec ctx
                  then pp x
                  else parens . pp $ x
  where
    comp = if side == assoc x || assoc x == N
           then (>=)
           else (>)
    parens s = "(" ++ s ++ ")"
And then we define the individual pretty printers:
class Pretty t where
  pp :: t -> String

  prec :: t -> Prec
  prec _ = highest

  assoc :: t -> Assoc
  assoc _ = N

instance Pretty Ty where
  pp (TyVar v)      = v
  pp t@(t1 :->: t2) = wrap L t t1 ++ " {\\to} " ++ wrap R t t2

  prec (_ :->: _) = precArr
  prec _          = highest

  assoc (_ :->: _) = R
  assoc _          = N

instance Pretty Tm where
  pp (TmVar v)     = v
  pp (Lam x t e)   = "\\lambda " ++ x ++ " {:} " ++ pp t ++ " . " ++ pp e
  pp e@(e1 :#: e2) = wrap L e e1 ++ " " ++ wrap R e e2

  prec (Lam {})  = precLam
  prec (_ :#: _) = precApp
  prec _         = highest

  assoc (_ :#: _) = L
  assoc _         = N

instance Pretty Ctx where
  pp []        = "\\cdot"
  pp ctx@(_:_) =
    intercalate " , " [ x ++ " {:} " ++ pp t | (x,t) <- ctx ]
By adding a "mode" argument, it would be easy to use the same pretty
printer to print plain ASCII, which would be useful with other
call-trace post processors, such as the (unfinished) UnixTree
processor.
Parsing
A parser is not essential to the example, but of course I did not enter the
example input terms directly as Haskell Tms.
Recall the STLC grammar in ASCII:
-- Terms
e ::= x | \x:T.e | e e
-- Types
T ::= A | T -> T
-- Contexts
C ::= . | C,x:T
This grammar is ambiguous: both the term application e e
and function type T -> T have no associativity given by the
grammar. But in STLC term application is left associative and function
types are right associative, and so the corresponding disambiguated
grammar we actually parse is
-- Terms
e ::= e' | \x:T.e | e e'
e' ::= x | ( e )
-- Types
T ::= T' | T' -> T
T' ::= A | ( T )
-- Contexts
C ::= . | C,x:T
The parser is maybe too simple -- I'm not using a languageDef and
it's whitespace sensitive -- but it gets the job done:
type P a = Parsec String () a

parens :: P a -> P a
parens = Text.Parsec.between (char '(') (char ')')

tmVar , tyVar :: P String
tmVar = (:[]) <$> lower
tyVar = (:[]) <$> upper

tyAtom , arrs , ty :: P Ty
tyAtom = parens ty
     <|> TyVar <$> tyVar
arrs = chainr1 tyAtom arrOp where
  arrOp = string "->" *> pure (:->:)
ty = arrs

tmAtom , apps , lam , tm :: P Tm
tmAtom = parens tm
     <|> TmVar <$> tmVar
apps = chainl1 tmAtom appOp where
  appOp = pure (:#:)
lam = uncurry Lam <$> (char '\\' *> typing)
                  <*> (char '.' *> tm)
tm = apps <|> lam

typing :: P (TmVar , Ty)
typing = (,) <$> tmVar
             <*> (char ':' *> ty)

ctx :: P Ctx
ctx = typing `sepBy` (char ',')
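For instance, running the term parser on the K example input from the Makefile below (assuming Text.Parsec's parse is in scope, which the code above suggests):

ghci> parse tm "" "\\x:P.\\y:Q.x"
Right (Lam "x" (TyVar "P") (Lam "y" (TyVar "Q") (TmVar "x")))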
To clarify what the input terms look like, here are the examples from
the Makefile:
# OUTFILE CONTEXT TERM
./tm2latex.sh S.ctx 'x:P->Q->R,y:P->Q,z:P' 'xz(yz)'
./tm2latex.sh S.lam '' '\x:P->Q->R.\y:P->Q.\z:P.xz(yz)'
./tm2latex.sh S.err '' '\x:P->Q->R.\y:P->Q.\z:P.xzyz'
./tm2latex.sh K.ctx 'x:P,y:Q' 'x'
./tm2latex.sh K.lam '' '\x:P.\y:Q.x'
./tm2latex.sh I.ctx 'x:P' 'x'
./tm2latex.sh I.lam '' '\x:P.x'
./tm2latex.sh MP.ctx 'x:P,y:P->Q' 'yx'
./tm2latex.sh MP.lam '' '\x:P.\y:P->Q.yx'
./tm2latex.sh ZERO '' '\s:A->A.\z:A.z'
./tm2latex.sh SUCC '' '\n:(A->A)->(A->A).\s:A->A.\z:A.s(nsz)'
./tm2latex.sh ADD '' '\m:(A->A)->(A->A).\n:(A->A)->(A->A).\s:A->A.\z:A.ms(nsz)'
./tm2latex.sh MULT '' '\m:(A->A)->(A->A).\n:(A->A)->(A->A).\s:A->A.\z:A.m(ns)z'
./tm2latex.sh Y.err '' '\f:A->A.(\x:A.f(xx))(\x:A.f(xx))'
./tm2latex.sh Y.ctx 'a:A->(A->A),y:(A->A)->A' '\f:A->A.(\x:A.f(axx))(y(\x:A.f(axx)))'
Latex document generation
The ./tm2latex.sh script just calls pdflatex on the output of the
Haskell programs described above. The Haskell programs produce the proof
tree and then wrap it in a minimal Latex document:
unlines
  [ "\\documentclass[10pt]{article}"
  , "\\usepackage{proof}"
  , "\\usepackage{amsmath}"
  , "\\usepackage[landscape]{geometry}"
  , "\\usepackage[cm]{fullpage}"
    -- The most slender font I could find:
    -- http://www.tug.dk/FontCatalogue/iwonalc/
  , "\\usepackage[light,condensed,math]{iwona}"
  , "\\usepackage[T1]{fontenc}"
  , "\\begin{document}"
  , "\\tiny"
  , "\\[" ++ tex ++ "\\]"
  , "\\end{document}"
  ]
As you can see, most of the Latex is devoted to making the proof trees
as small as possible; I plan to also write an ASCII proof tree post
processor, which may be more useful in practice when the examples are
larger.
Conclusion
As always, it takes a bit of code to write a parser, type checker, and
pretty printer. On top of that, adding proof-tree generation is
pretty simple in both versions. This is a fun toy example, but I
expect to do something similar in the context of a "real"
unification-based type checker for a dependently-typed language; there
I expect call tracing and proof-tree generation (in ASCII) to provide
significant help in debugging the type checker.
