How can one create LaTeX source for natural deduction proof trees (like those shown here) from Haskell, e.g. using HaTeX? I'd like to emulate LaTeX packages like bussproofs.sty or proof.sty.
I'm using your question as an excuse to improve and demo a Haskell call-tracing library I'm working on. In the context of tracing, an obvious way to create a proof tree is to trace a type checker and then format the trace as a natural-deduction proof. To keep things simple, my example logic is the simply-typed lambda calculus (STLC), which corresponds to the implicational fragment of propositional intuitionistic logic.
I am using proof.sty, but not via HaTeX or any other Haskell LaTeX library. The LaTeX for proof trees is very simple, and using a Haskell LaTeX library would just complicate things.
I've written the proof-tree generation code twice:
1. in a self-contained way, by writing a type checker that also returns a proof tree;
2. using my tracing library, by call-tracing a type checker and then post-processing the trace into a proof tree.
Since you didn't ask about call-tracing libraries, you may be less interested in the call-trace-based version, but I think it's interesting to compare both versions.
Examples
Let's start with some output examples, to see what all this gets us. The first three examples are motivated by an axiom system for implicational propositional calculus; the first two also happen to correspond to S and K:
The first axiom, K, with proof terms:
The second axiom, S, with proof terms, but with premises in the context, not lambda bound:
The fourth axiom, modus ponens, without proof terms:
The third axiom in that Wikipedia article (Peirce's law) is non-constructive, so we can't prove it here.
For a different kind of example, here's a failed type check of the Y combinator:
The arrows are meant to lead you to the error, which is marked with a bang (!).
Code
Now I'll describe the code that generated those examples. The code is from this file unless otherwise noted. I'm not including every line of code here; see that link if you want something you can actually build using GHC 7.6.3.
Most of the code -- the grammar, parser, and pretty printer -- is the same for both versions; only the type checkers and proof-tree generators differ. All of the common code is in the file just referenced.
STLC grammar
The STLC grammar in ASCII:
-- Terms
e ::= x | \x:T.e | e e
-- Types
T ::= A | T -> T
-- Contexts
C ::= . | C,x:T
And the corresponding Haskell:
type TmVar = String
type TyVar = String
data Tm = Lam TmVar Ty Tm
        | TmVar TmVar
        | Tm :#: Tm
  deriving Show
data Ty = TyVar TyVar
        | Ty :->: Ty
  deriving (Eq , Show)
type Ctx = [(TmVar,Ty)]
Type checking + proof tree generation
Both versions implement the same abstract STLC type checker. In ASCII:
(x:T) in C
---------- Axiom
C |- x : T
C,x:T1 |- e : T2
--------------------- -> Introduction
C |- \x:T1.e : T1->T2
C |- e : T1 -> T2    C |- e1 : T1
--------------------------------- -> Elimination
C |- e e1 : T2
Version 1: self-contained with inline proof-tree generation
The full code for this version is here. The proof-tree generation happens in the type checker, but the actual proof-tree generation code is factored out into addProof and conclusion.
Type checking
-- The mode is 'True' if proof terms should be included.
data R = R { _ctx :: Ctx , _mode :: Bool }
type M a = Reader R a
extendCtx :: TmVar -> Ty -> M a -> M a
extendCtx x t = local extend where
  extend r = r { _ctx = _ctx r ++ [(x,t)] }
-- These take the place of the inferred type when there is a type
-- error.
here , there :: String
here = "\\,!"
there = "\\,\\uparrow"
-- Return the inferred type---or error string if type inference
-- fails---and the latex proof-tree presentation of the inference.
--
-- This produces different output than 'infer' in the error case: here
-- all premises are always computed, whereas 'infer' stops at the
-- first failing premise.
inferProof :: Tm -> M (Either String Ty , String)
inferProof tm@(Lam x t e) = do
  (et' , p) <- extendCtx x t . inferProof $ e
  let et'' = (t :->:) <$> et'
  addProof et'' [p] tm
inferProof tm@(TmVar x) = do
  mt <- lookup x <$> asks _ctx
  let et = maybe (Left here) Right mt
  addProof et [] tm
inferProof tm@(e :#: e1) = do
  (et , p) <- inferProof e
  (et1 , p1) <- inferProof e1
  case (et , et1) of
    (Right t , Right t1) ->
      case t of
        t1' :->: t2 | t1' == t1 -> addProof (Right t2) [p , p1] tm
        _ -> addProof (Left here) [p , p1] tm
    _ -> addProof (Left there) [p , p1] tm
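To give an idea of how this version is driven (a sketch; the linked file's actual driver may differ), you run inferProof with runReader, supplying the initial context and mode, and keep the second component:
-- Hypothetical helper, not in the answer: run 'inferProof' in context 'c'
-- with proof terms enabled and keep only the generated LaTeX.
proofTreeTex :: Ctx -> Tm -> String
proofTreeTex c tm = snd $ runReader (inferProof tm) (R { _ctx = c , _mode = True })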
Proof tree generation
The addProof corresponds to proofTree in the other version:
-- Given the inferred type, the proof-trees for all premise inferences
-- (subcalls), and the input term, annotate the inferred type with a
-- result proof tree.
addProof :: Either String Ty -> [String] -> Tm -> M (Either String Ty , String)
addProof et premises tm = do
  R { _mode , _ctx } <- ask
  let (judgment , rule) = conclusion _mode _ctx tm et
  let tex = "\\infer[ " ++ rule ++ " ]{ " ++
            judgment ++ " }{ " ++
            intercalate " & " premises ++ " }"
  return (et , tex)
The code for conclusion is common to both versions:
conclusion :: Mode -> Ctx -> Tm -> Either String Ty -> (String , String)
conclusion mode ctx tm e = (judgment mode , rule tm)
  where
    rule (TmVar _) = "\\textsc{Axiom}"
    rule (Lam {}) = "\\to \\text{I}"
    rule (_ :#: _) = "\\to \\text{E}"
    tyOrError = either id pp e
    judgment True = pp ctx ++ " \\vdash " ++ pp tm ++ " : " ++ tyOrError
    judgment False = ppCtxOnlyTypes ctx ++ " \\vdash " ++ tyOrError
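For a concrete idea of the output, running version 1 on the I.lam example \x:P.x from the Makefile below, in the empty context and with proof terms enabled, should produce LaTeX along these lines (I'm reconstructing this from the code, so treat the exact spacing as approximate):
\infer[ \to \text{I} ]{ \cdot \vdash \lambda x {:} P . x : P {\to} P }{ \infer[ \textsc{Axiom} ]{ x {:} P \vdash x : P }{ } }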
Version 2: via call-tracing, with proof-tree generation as post-processing
Here the type checker is not even aware of proof-tree generation, and adding call-tracing is just one line.
Type checking
type Mode = Bool
type Stream = LogStream (ProofTree Mode)
type M a = ErrorT String (ReaderT Ctx (Writer Stream)) a
type InferTy = Tm -> M Ty
infer , infer' :: InferTy
infer = simpleLogger (Proxy::Proxy "infer") ask (return ()) infer'
infer' (TmVar x) = maybe err pure . lookup x =<< ask where
  err = throwError $ "Variable " ++ x ++ " not in context!"
infer' (Lam x t e) = (t :->:) <$> (local (++ [(x,t)]) . infer $ e)
infer' (e :#: e1) = do
  t <- infer e
  t1 <- infer e1
  case t of
    t1' :->: t2 | t1' == t1 -> pure t2
    _ -> throwError $ "Can't apply " ++ show t ++ " to " ++ show t1 ++ "!"
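For reference, here is a sketch of how that monad stack can be unwound to recover both the result and the logged call stream (the real file may package this differently):
-- Hypothetical runner for the version-2 stack.
runInfer :: Ctx -> Tm -> (Either String Ty , Stream)
runInfer c tm = runWriter (runReaderT (runErrorT (infer tm)) c)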
The LogStream type and ProofTree class are from the library. The LogStream is the type of log events that the "magic" simpleLogger logs. Note the line
infer = simpleLogger (Proxy::Proxy "infer") ask (return ()) infer'
which defines infer to be a logged version of infer', the actual type checker. That's all you have to do to trace a monadic function! I won't get into how simpleLogger actually works here, but the result is that each call to infer gets logged, including the context, arguments, and return value, and these data get grouped together with all logged subcalls (here only to infer). It would be easy to manually write such logging code for infer, but it's nice that with the library you don't have to.
Proof-tree generation
To generate the LaTeX proof trees, we implement ProofTree to post-process infer's call trace. The library provides a proofTree function that calls the ProofTree methods and assembles the proof trees; we just need to specify how the conclusions of the typing judgments will be formatted:
instance ProofTree Mode (Proxy (SimpleCall "infer" Ctx InferTy ())) where
  callAndReturn mode t = conclusion mode ctx tm (Right ty)
    where
      (tm , ()) = _arg t
      ty = _ret t
      ctx = _before t
  callAndError mode t = conclusion mode ctx tm (Left error)
    where
      (tm , ()) = _arg' t
      how = _how t
      ctx = _before' t
      error = maybe "\\,!" (const "\\,\\uparrow") how
The pp calls are to a user-defined pretty printer; obviously, the library can't know how to pretty print your data types. Because calls can be erroneous -- the library detects errors -- we have to say how to format successful and failing calls. Refer to the Y-combinator example above for an example of a failing type check, corresponding to the callAndError case here.
The library's proofTree function is quite simple: it builds a proof.sty proof tree with the current call as the conclusion and the subcalls as premises:
proofTree :: mode -> Ex2T (LogTree (ProofTree mode)) -> String
proofTree mode (Ex2T t@(CallAndReturn {})) =
  "\\infer[ " ++ rule ++ " ]{ " ++ conclusion ++ " }{ " ++ intercalate " & " premises ++ " }"
  where
    (conclusion , rule) = callAndReturn mode t
    premises = map (proofTree mode) (_children t)
proofTree mode (Ex2T t@(CallAndError {})) =
  "\\infer[ " ++ rule ++ " ]{ " ++ conclusion ++ " }{ " ++ intercalate " & " premises ++ " }"
  where
    (conclusion , rule) = callAndError mode t
    premises = map (proofTree mode)
                   (_children' t ++ maybe [] (:[]) (_how t))
I use proof.sty in the library because it allows arbitrarily many premises, although bussproofs.sty would work for this STLC example, since no rule has more than five premises (the limit for bussproofs). Both LaTeX packages are described here.
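For comparison, here is roughly how the elimination rule would be written by hand in each package; this is just a sketch of the two styles, not output of the programs above:
% proof.sty: premises are &-separated arguments of \infer (used in math mode)
\infer[\to \text{E}]{C \vdash e\,e_1 : T_2}{C \vdash e : T_1 \to T_2 & C \vdash e_1 : T_1}
% bussproofs.sty: the same rule, built bottom-up on a stack
\begin{prooftree}
\AxiomC{$C \vdash e : T_1 \to T_2$}
\AxiomC{$C \vdash e_1 : T_1$}
\RightLabel{$\to$E}
\BinaryInfC{$C \vdash e\,e_1 : T_2$}
\end{prooftree}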
Pretty printing
Now we return to code that is common between both versions.
The pretty printer that defines the pp used above is rather long -- it handles precedence and associativity, and is written in a way that should be extensible if more terms, e.g. products, are added to the calculus -- but it is mostly straightforward. First, we set up a precedence table and a precedence-and-associativity-aware parenthesizer:
-- Precedence: higher value means tighter binding.
type Prec = Double
between :: Prec -> Prec -> Prec
between x y = (x + y) / 2
lowest , highest , precLam , precApp , precArr :: Prec
highest = 1
lowest = 0
precLam = lowest
precApp = between precLam highest
precArr = lowest
-- Associativity: left, none, or right.
data Assoc = L | N | R deriving Eq
-- Wrap a pretty print when the context dictates.
wrap :: Pretty a => Assoc -> a -> a -> String
wrap side ctx x = if prec x `comp` prec ctx
                  then pp x
                  else parens . pp $ x
  where
    comp = if side == assoc x || assoc x == N
           then (>=)
           else (>)
    parens s = "(" ++ s ++ ")"
And then we define the individual pretty printers:
class Pretty t where
  pp :: t -> String
  prec :: t -> Prec
  prec _ = highest
  assoc :: t -> Assoc
  assoc _ = N
instance Pretty Ty where
  pp (TyVar v) = v
  pp t@(t1 :->: t2) = wrap L t t1 ++ " {\\to} " ++ wrap R t t2
  prec (_ :->: _) = precArr
  prec _ = highest
  assoc (_ :->: _) = R
  assoc _ = N
instance Pretty Tm where
  pp (TmVar v) = v
  pp (Lam x t e) = "\\lambda " ++ x ++ " {:} " ++ pp t ++ " . " ++ pp e
  pp e@(e1 :#: e2) = wrap L e e1 ++ " " ++ wrap R e e2
  prec (Lam {}) = precLam
  prec (_ :#: _) = precApp
  prec _ = highest
  assoc (_ :#: _) = L
  assoc _ = N
instance Pretty Ctx where
  pp [] = "\\cdot"
  pp ctx@(_:_) =
    intercalate " , " [ x ++ " {:} " ++ pp t | (x,t) <- ctx ]
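As a quick sanity check of the precedence and associativity handling, here is the GHCi session I would expect (not taken from the linked file):
> putStrLn $ pp (TyVar "P" :->: (TyVar "Q" :->: TyVar "R"))
P {\to} Q {\to} R
> putStrLn $ pp ((TyVar "P" :->: TyVar "Q") :->: TyVar "R")
(P {\to} Q) {\to} R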
By adding a "mode" argument, it would be easy to use the same pretty printer to print plain ASCII, which would be useful with other call-trace post-processors, such as the (unfinished) UnixTree processor.
Parsing
A parser is not essential to the example, but of course I did not enter the example input terms directly as Haskell Tms.
Recall the STLC grammar in ASCII:
-- Terms
e ::= x | \x:T.e | e e
-- Types
T ::= A | T -> T
-- Contexts
C ::= . | C,x:T
This grammar is ambiguous: the grammar gives no associativity for either term application e e or the function type T -> T. But in STLC term application is left associative and function types are right associative, so the disambiguated grammar we actually parse is:
-- Terms
e ::= e' | \x:T.e | e e'
e' ::= x | ( e )
-- Types
T ::= T' | T' -> T
T' ::= A | ( T )
-- Contexts
C ::= . | C,x:T
The parser is maybe too simple -- I'm not using a languageDef and it's whitespace-sensitive -- but it gets the job done:
type P a = Parsec String () a
parens :: P a -> P a
parens = Text.Parsec.between (char '(') (char ')')
tmVar , tyVar :: P String
tmVar = (:[]) <$> lower
tyVar = (:[]) <$> upper
tyAtom , arrs , ty :: P Ty
tyAtom = parens ty
     <|> TyVar <$> tyVar
arrs = chainr1 tyAtom arrOp where
  arrOp = string "->" *> pure (:->:)
ty = arrs
tmAtom , apps , lam , tm :: P Tm
tmAtom = parens tm
     <|> TmVar <$> tmVar
apps = chainl1 tmAtom appOp where
  appOp = pure (:#:)
lam = uncurry Lam <$> (char '\\' *> typing)
                  <*> (char '.' *> tm)
tm = apps <|> lam
typing :: P (TmVar , Ty)
typing = (,) <$> tmVar
             <*> (char ':' *> ty)
ctx :: P Ctx
ctx = typing `sepBy` (char ',')
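For example, Parsec's parseTest shows the derived Show of a parsed term; this is the kind of GHCi session I would expect with the parser above loaded:
> parseTest tm "\\x:P.\\y:Q.x"
Lam "x" (TyVar "P") (Lam "y" (TyVar "Q") (TmVar "x"))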
To clarify what the input terms look like, here are the examples from the Makefile:
# OUTFILE CONTEXT TERM
./tm2latex.sh S.ctx 'x:P->Q->R,y:P->Q,z:P' 'xz(yz)'
./tm2latex.sh S.lam '' '\x:P->Q->R.\y:P->Q.\z:P.xz(yz)'
./tm2latex.sh S.err '' '\x:P->Q->R.\y:P->Q.\z:P.xzyz'
./tm2latex.sh K.ctx 'x:P,y:Q' 'x'
./tm2latex.sh K.lam '' '\x:P.\y:Q.x'
./tm2latex.sh I.ctx 'x:P' 'x'
./tm2latex.sh I.lam '' '\x:P.x'
./tm2latex.sh MP.ctx 'x:P,y:P->Q' 'yx'
./tm2latex.sh MP.lam '' '\x:P.\y:P->Q.yx'
./tm2latex.sh ZERO '' '\s:A->A.\z:A.z'
./tm2latex.sh SUCC '' '\n:(A->A)->(A->A).\s:A->A.\z:A.s(nsz)'
./tm2latex.sh ADD '' '\m:(A->A)->(A->A).\n:(A->A)->(A->A).\s:A->A.\z:A.ms(nsz)'
./tm2latex.sh MULT '' '\m:(A->A)->(A->A).\n:(A->A)->(A->A).\s:A->A.\z:A.m(ns)z'
./tm2latex.sh Y.err '' '\f:A->A.(\x:A.f(xx))(\x:A.f(xx))'
./tm2latex.sh Y.ctx 'a:A->(A->A),y:(A->A)->A' '\f:A->A.(\x:A.f(axx))(y(\x:A.f(axx)))'
LaTeX document generation
The ./tm2latex.sh script just calls pdflatex on the output of the Haskell programs described above. The Haskell programs produce the proof tree and then wrap it in a minimal LaTeX document:
unlines
  [ "\\documentclass[10pt]{article}"
  , "\\usepackage{proof}"
  , "\\usepackage{amsmath}"
  , "\\usepackage[landscape]{geometry}"
  , "\\usepackage[cm]{fullpage}"
    -- The most slender font I could find:
    -- http://www.tug.dk/FontCatalogue/iwonalc/
  , "\\usepackage[light,condensed,math]{iwona}"
  , "\\usepackage[T1]{fontenc}"
  , "\\begin{document}"
  , "\\tiny"
  , "\\[" ++ tex ++ "\\]"
  , "\\end{document}"
  ]
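To show how the pieces fit together, a hypothetical driver for version 1 might look like the following, using getArgs from System.Environment; the real script and file also handle output names and the proof-term mode, so details will differ. Here wrapDocument is a hypothetical name for the document wrapper above:
main :: IO ()
main = do
  [ctxStr , tmStr] <- getArgs  -- context and term, as in the Makefile examples
  let c   = either (error . show) id (parse ctx "" ctxStr)
      t   = either (error . show) id (parse tm  "" tmStr)
      tex = snd $ runReader (inferProof t) (R { _ctx = c , _mode = True })
  putStrLn (wrapDocument tex)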
As you can see, most of the LaTeX is devoted to making the proof trees as small as possible; I plan to also write an ASCII proof-tree post-processor, which may be more useful in practice when the examples are larger.
Conclusion
As always, it takes a bit of code to write a parser, type checker, and pretty printer. On top of that, adding proof-tree generation is pretty simple in both versions. This is a fun toy example, but I expect to do something similar in the context of a "real" unification-based type checker for a dependently-typed language; there I expect call tracing and proof-tree generation (in ASCII) to provide significant help in debugging the type checker.
Related
I have a type to represent Haskell types:
data Type
  = TApp Type Type
  | TVar Name
  | TLit Name
infixl 8 `TApp`
-- a -> b
aToB = TLit "Fun" `TApp` TVar "a" `TApp` TVar "b"
-- Maybe (IO Int)
maybeIOInt = TLit "Maybe" `TApp` (TLit "IO" `TApp` TLit "Int")
I want to print it the way Haskell does, namely, literals that are symbols are printed infix while other literals are printed prefix. Also, parentheses should be added when necessary:
show aToB = "a -> b"
show maybeIOInt = "Maybe (IO Int)"
show ast = ???
How can I implement this?
The usual way to do this is to thread a precedence parameter through your printing function. Also, you should almost always prefer a pretty-printing library to raw strings (both for performance and for ease of use). GHC ships with pretty, and I also recommend the newer prettyprinter.
Using the former (and assuming type Name = String):
import Text.PrettyPrint.HughesPJ
prettyType :: Type -> Doc
prettyType = go 0
  where
    go :: Int -> Type -> Doc
    go _ (TVar x) = text x
    go _ (TLit n) = text n
    go n (TLit "Fun" `TApp` l `TApp` r) = maybeParens (n > 0) (go 1 l <+> text "->" <+> go 0 r)
    go n (l `TApp` r) = maybeParens (n > 1) (go 1 l <+> go 2 r)
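For example, with the definitions above I would expect the following in GHCi (using render from the same module):
> render (prettyType aToB)
"a -> b"
> render (prettyType maybeIOInt)
"Maybe (IO Int)"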
I have a "public safe" that may fail with a (potentially informative) errors:
data EnigmaError = BadRotors
                 | BadWindows
                 | MiscError String
instance Show EnigmaError where
  show BadRotors = "Bad rotors"
  show BadWindows = "Bad windows"
  show (MiscError str) = str
configEnigma :: String -> String -> String -> String -> Except EnigmaError EnigmaConfig
configEnigma rots winds plug rngs = do
  unless (and $ [(>=1),(<=26)] <*> rngs') (throwError BadRotors)
  unless (and $ (`elem` letters) <$> winds') (throwError BadWindows)
  -- ...
  return EnigmaConfig {
      components = components',
      positions = zipWith (\w r -> (mod (numA0 w - r + 1) 26) + 1) winds' rngs',
      rings = rngs'
    }
  where
    rngs' = reverse $ (read <$> (splitOn "." $ "01." ++ rngs ++ ".01") :: [Int])
    winds' = "A" ++ reverse winds ++ "A"
    components' = reverse $ splitOn "-" $ rots ++ "-" ++ plug
but it is unclear how I should call this, particularly (and specifically) in implementing Read and Arbitrary (for QuickCheck).
For the former, I can get as far as
instance Read EnigmaConfig where
  readsPrec _ i = case runExcept (configEnigma c w s r) of
      Right cfg -> [(cfg, "")]
      Left err -> undefined
    where [c, w, s, r] = words i
but this seems to end up hiding error information available in err; while for the latter, I'm stuck at
instance Arbitrary EnigmaConfig where
  arbitrary = do
    nc <- choose (3,4) -- This could cover a wider range
    ws <- replicateM nc capitals
    cs <- replicateM nc (elements rotors)
    uk <- elements reflectors
    rs <- replicateM nc (choose (1,26))
    return $ configEnigma (intercalate "-" (uk:cs))
                          ws
                          "UX.MO.KZ.AY.EF.PL" -- TBD - Generate plugboard and test <<<
                          (intercalate "." $ (printf "%02d") <$> (rs :: [Int]))
which fails with a mismatch between the expected and actual types:
Expected type: Gen EnigmaConfig
Actual type: Gen (transformers-0.4.2.0:Control.Monad.Trans.Except.Except Crypto.Enigma.EnigmaError EnigmaConfig)
How do I call a ("public safe") constructor when it may fail, particularly when using it in implementing Read and Arbitrary for my class?
The Read typeclass represents parses as lists of successes (with failures being the same as no successes); so rather than undefined you should return []. As for losing information about what went wrong: that's true, and the type of readsPrec means you can't do much about that. If you really, really wanted to [note: I don't think you should want this] you could define a newtype wrapper around Except EnigmaError EnigmaConfig and give that a Read instance that had successful parses of configuration errors.
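Concretely, a sketch of the Read instance with that advice applied (also guarding the words pattern so that malformed input is a parse failure rather than a crash):
instance Read EnigmaConfig where
  readsPrec _ i = case words i of
    [c, w, s, r] -> case runExcept (configEnigma c w s r) of
      Right cfg -> [(cfg, "")]
      Left _    -> []
    _ -> []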
For Arbitrary you have a couple choices. One choice is so-called rejection sampling; e.g.
arbitrary = do
  -- ...
  case runExcept (configEnigma ...) of
    Left err -> arbitrary -- try again
    Right v  -> return v
You might also consider an Arbitrary instance to be part of your internal API, and use unsafe, internal calls rather than using the safe, public API for constructing your configuration. Other options include calling error or fail. (I consider these four options to be in roughly preference order -- rejection sampling, then unsafe internal calls, then error, then fail -- though your judgement may differ.)
So I have to build a simple compiler for a simple language. I used Haskell's Alex and Happy to build the parser, and it already prints the right ASTs, so the next step is to translate that data structure into another one that represents the program in three-address code.
I'm still a bit lost on how to do this, so: using Haskell data structures, how can I translate an AST into its three-address code? I would really appreciate some help :)
Thanks in advance!
Your parser is quite irrelevant to this question, which seems to be: how do I translate one AST into another? To address this I will use a simplified language. Also, the code below is not intended to be simple, but easily extensible and maintainable.
{-# LANGUAGE DeriveFunctor #-}
import Control.Monad.State
import Control.Monad.Reader
import Control.Monad.Free
import Data.Functor.Foldable
import qualified Data.Set as S
data Ident = Ident Int deriving (Eq)
data ExpF a = IntLitF Int
            | PlusF a a
            | IntVarF Ident deriving (Eq, Functor)
data Exp = IntLit Int
         | Plus Exp Exp
         | IntVar Ident deriving (Eq)
data Cmd
  = CmdAtrib Ident Exp
  | CmdSeq Cmd Cmd
  | CmdNone deriving (Eq)
data TAC_F r = Assign Ident (ExpF Ident) r deriving (Eq, Show, Functor)
type TAC = Free TAC_F ()
(=:) :: Ident -> ExpF Ident -> TAC
(=:) i e = Free (Assign i e (Pure ()))
Some of the above definitions might seem strange. ExpF and Exp are defined in the recursion-schemes style and use the recursion-schemes package. More info. TAC is defined in terms of Free because, as the name implies, you get monad syntax for free. The only thing that a >> b does for TAC is create the AST that contains a followed by b.
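For instance (a small illustration of the sequencing, not from the answer's test case), chaining two assignments just nests them in the Free structure:
tacExample :: TAC
tacExample = (Ident 1 =: IntLitF 5) >> (Ident 2 =: IntVarF (Ident 1))
-- tacExample == Free (Assign (Ident 1) (IntLitF 5)
--                      (Free (Assign (Ident 2) (IntVarF (Ident 1)) (Pure ()))))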
You need a way to generate fresh variables:
freshVar :: Monad m => StateT [Ident] m Ident
freshVar = do
  s <- get
  case s of
    [] -> put [Ident (-1)] >> return (Ident (-1))
    (Ident x:xs) -> put (Ident (x-1) : xs) >> return (Ident (x-1))
I use a list because it is simple, but you may want to attach more information to your identifiers, in which case you should use Data.Set.Set or Data.Map.Map. By convention, fresh variables are negative while quantified variables are positive. Not a very sophisticated method, but it works.
Now this is where the magic happens. Thanks to recursion schemes, recursion over the tree is very simple:
translateExp :: Exp -> State [Ident] (TAC, Ident)
translateExp = cata go where
  go (PlusF a b) = do
    (ae,av) <- a
    (be,bv) <- b
    t <- freshVar
    return (ae >> be >> t =: PlusF av bv, t)
  go (IntLitF i) = do
    t <- freshVar
    return (t =: IntLitF i, t)
  go (IntVarF a) = return (return (), a)
translateCmd :: Cmd -> State [Ident] TAC
translateCmd (CmdAtrib ident exp) = do
  (e,v) <- translateExp exp
  return (e >> ident =: IntVarF v)
translateCmd (CmdSeq a b) = do
  x <- translateCmd a
  y <- translateCmd b
  return (x >> y)
translateCmd CmdNone = return (return ())
Then an example:
test0 = CmdSeq (CmdAtrib (Ident 1) (IntLit 10 `Plus` IntVar (Ident 2)))
               (CmdAtrib (Ident 3) (IntVar (Ident 1) `Plus` IntVar (Ident 1) `Plus` IntVar (Ident 2)))
> putStrLn $ showTAC $ fst $ runState (translateCmd test0) []
t1 =: 10
t2 =: t1 + v2
v1 =: t2
t3 =: v1 + v1
t4 =: t3 + v2
v3 =: t4
Note that variables bound by the LHS of CmdAtrib will never collide with those found in the RHS.
Boilerplate / show instances:
instance Show Ident where
  show (Ident i) | i < 0     = "t" ++ show (abs i)
                 | otherwise = "v" ++ show i
instance Show a => Show (ExpF a) where
  show (IntLitF i) = show i
  show (PlusF a b) = show a ++ " + " ++ show b
  show (IntVarF i) = show i
type instance Base Exp = ExpF
instance Foldable Exp where
  project (IntLit i) = IntLitF i
  project (Plus a b) = PlusF a b
  project (IntVar b) = IntVarF b
instance Show Cmd where
  show (CmdAtrib i e) = show i ++ " <- " ++ show e
  show (CmdSeq a b) = show a ++ " ;\n " ++ show b
  show (CmdNone) = ""
instance Show Exp where
  show (IntLit i) = show i
  show (Plus a b) = show a ++ " + " ++ show b
  show (IntVar i) = show i
showTAC (Free (Assign i exp xs)) = show i ++ " =: " ++ show exp ++ "\n" ++ showTAC xs
showTAC (Pure a) = ""
I want to implement an imperative language interpreter in Haskell (for educational purposes). But it's difficult for me to come up with the right architecture for my interpreter: How should I store variables? How can I implement nested function calls? How should I implement variable scoping? How can I add debugging possibilities in my language? Should I use monads/monad transformers/other techniques? etc.
Does anybody know good articles/papers/tutorials/sources on this subject?
If you are new to writing this kind of processor, I would recommend putting off monads for a while and first focusing on getting a bare-bones implementation without any bells or whistles.
The following may serve as a minitutorial.
I assume that you have already tackled the issue of parsing the source text of the programs you want to write an interpreter for and that you have some types for capturing the abstract syntax of your language. The language that I use here is very simple and only consists of integer expressions and some basic statements.
Preliminaries
Let us first import some modules that we will use in just a bit.
import Data.Function
import Data.List
The essence of an imperative language is that it has some form of mutable variables. Here, variables are simply represented by strings:
type Var = String
Expressions
Next, we define expressions. Expressions are constructed from integer constants, variable references, and arithmetic operations.
infixl 6 :+:, :-:
infixl 7 :*:, :/:
data Exp
  = C Int        -- constant
  | V Var        -- variable
  | Exp :+: Exp  -- addition
  | Exp :-: Exp  -- subtraction
  | Exp :*: Exp  -- multiplication
  | Exp :/: Exp  -- division
For example, the expression that adds the constant 2 to the variable x is represented by V "x" :+: C 2.
Statements
The statement language is rather minimal. We have three forms of statements: variable assignments, while loops, and sequences.
infix 1 :=
data Stmt
  = Var := Exp      -- assignment
  | While Exp Stmt  -- loop
  | Seq [Stmt]      -- sequence
For example, a sequence of statements for "swapping" the values of the variables x and y can be represented by Seq ["tmp" := V "x", "x" := V "y", "y" := V "tmp"].
Programs
A program is just a statement.
type Prog = Stmt
Stores
Now, let us move to the actual interpreter. While running a program, we need to keep track of the values assigned to the different variables in the programs. These values are just integers and as a representation of our "memory" we just use lists of pairs consisting of a variable and a value.
type Val = Int
type Store = [(Var, Val)]
Evaluating expressions
Expressions are evaluated by mapping constants to their value, looking up the values of variables in the store, and mapping arithmetic operations to their Haskell counterparts.
eval :: Exp -> Store -> Val
eval (C n) r = n
eval (V x) r = case lookup x r of
  Nothing -> error ("unbound variable `" ++ x ++ "'")
  Just v  -> v
eval (e1 :+: e2) r = eval e1 r + eval e2 r
eval (e1 :-: e2) r = eval e1 r - eval e2 r
eval (e1 :*: e2) r = eval e1 r * eval e2 r
eval (e1 :/: e2) r = eval e1 r `div` eval e2 r
Note that if the store contains multiple bindings for a variable, lookup selects the bindings that comes first in the store.
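For example, a quick check in GHCi with the definitions above:
> eval (V "x" :+: C 2) [("x", 40)]
42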
Executing statements
While the evaluation of an expression cannot alter the contents of the store, executing a statement may in fact result in an update of the store. Hence, the function for executing a statement takes a store as an argument and produces a possibly updated store.
exec :: Stmt -> Store -> Store
exec (x := e) r = (x, eval e r) : r
exec (While e s) r | eval e r /= 0 = exec (Seq [s, While e s]) r
                   | otherwise = r
exec (Seq []) r = r
exec (Seq (s : ss)) r = exec (Seq ss) (exec s r)
Note that, in the case of assignments, we simply push a new binding for the updated variable to the store, effectively shadowing any previous bindings for that variable.
Top-level Interpreter
Running a program reduces to executing its top-level statement in the context of an initial store.
run :: Prog -> Store -> Store
run p r = nubBy ((==) `on` fst) (exec p r)
After executing the statement we clean up any shadowed bindings, so that we can easily read off the contents of the final store.
Example
As an example, consider the following program that computes the Fibonacci number of the number stored in the variable n and stores its result in the variable x.
fib :: Prog
fib = Seq
  [ "x" := C 0
  , "y" := C 1
  , While (V "n") $ Seq
      [ "z" := V "x" :+: V "y"
      , "x" := V "y"
      , "y" := V "z"
      , "n" := V "n" :-: C 1
      ]
  ]
For instance, in an interactive environment, we can now use our interpreter to compute the 25th Fibonacci number:
> lookup "x" $ run fib [("n", 25)]
Just 75025
Monadic Interpretation
Of course, here, we are dealing with a very simple and tiny imperative language. As your language gets more complex, so will the implementation of your interpreter. Think for example about what additions you need when you add procedures and need to distinguish between local (stack-based) storage and global (heap-based) storage. Returning to that part of your question, you may then indeed consider the introduction of monads to streamline the implementation of your interpreter a bit.
In the example interpreter above, there are two "effects" that are candidates for being captured by a monadic structure:
The passing around and updating of the store.
Aborting running the program when a run-time error is encountered. (In the implementation above, the interpreter simply crashes when such an error occurs.)
The first effect is typically captured by a state monad, the second by an error monad. Let us briefly investigate how to do this for our interpreter.
We prepare by importing just one more module from the standard libraries.
import Control.Monad
We can use monad transformers to construct a composite monad for our two effects by combining a basic state monad and a basic error monad. Here, however, we simply construct the composite monad in one go.
newtype Interp a = Interp { runInterp :: Store -> Either String (a, Store) }
instance Monad Interp where
  return x = Interp $ \r -> Right (x, r)
  i >>= k = Interp $ \r -> case runInterp i r of
    Left msg      -> Left msg
    Right (x, r') -> runInterp (k x) r'
  fail msg = Interp $ \_ -> Left msg
Edit 2018: The Applicative Monad Proposal
Since the Applicative Monad Proposal (AMP) every Monad must also be an instance of Functor and Applicative. To do this we can add
import Control.Applicative -- Otherwise you can't do the Applicative instance.
to the imports and make Interp an instance of Functor and Applicative like this
instance Functor Interp where
  fmap = liftM -- imported from Control.Monad
instance Applicative Interp where
  pure = return
  (<*>) = ap -- imported from Control.Monad
Edit 2018 end
For reading from and writing to the store, we introduce effectful functions rd and wr:
rd :: Var -> Interp Val
rd x = Interp $ \r -> case lookup x r of
  Nothing -> Left ("unbound variable `" ++ x ++ "'")
  Just v  -> Right (v, r)
wr :: Var -> Val -> Interp ()
wr x v = Interp $ \r -> Right ((), (x, v) : r)
Note that rd produces a Left-wrapped error message if a variable lookup fails.
The monadic version of the expression evaluator now reads
eval :: Exp -> Interp Val
eval (C n) = do return n
eval (V x) = do rd x
eval (e1 :+: e2) = do
  v1 <- eval e1
  v2 <- eval e2
  return (v1 + v2)
eval (e1 :-: e2) = do
  v1 <- eval e1
  v2 <- eval e2
  return (v1 - v2)
eval (e1 :*: e2) = do
  v1 <- eval e1
  v2 <- eval e2
  return (v1 * v2)
eval (e1 :/: e2) = do
  v1 <- eval e1
  v2 <- eval e2
  if v2 == 0
    then fail "division by zero"
    else return (v1 `div` v2)
In the case for :/:, division by zero results in an error message being produced through the Monad-method fail, which, for Interp, reduces to wrapping the message in a Left-value.
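For example, here is the GHCi interaction I would expect (assuming the instances above compile as given, which they did with the GHC of that era):
> runInterp (eval (C 1 :/: C 0)) []
Left "division by zero"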
For the execution of statements we have
exec :: Stmt -> Interp ()
exec (x := e) = do
  v <- eval e
  wr x v
exec (While e s) = do
  v <- eval e
  when (v /= 0) (exec (Seq [s, While e s]))
exec (Seq []) = do return ()
exec (Seq (s : ss)) = do
  exec s
  exec (Seq ss)
The type of exec conveys that statements do not result in values but are executed only for their effects on the store or the run-time errors they may trigger.
Finally, in the function run we perform a monadic computation and process its effects.
run :: Prog -> Store -> Either String Store
run p r = case runInterp (exec p) r of
  Left msg      -> Left msg
  Right (_, r') -> Right (nubBy ((==) `on` fst) r')
In the interactive environment, we can now revisit the interpretation of our example program:
> lookup "x" `fmap` run fib [("n", 25)]
Right (Just 75025)
> lookup "x" `fmap` run fib []
Left "unbound variable `n'"
A couple of good papers I've finally found:
Building Interpreters by Composing Monads
Monad Transformers Step by Step - how to incrementally build a tiny interpreter using monad transformers
How to build a monadic interpreter in one day
Monad Transformers and Modular Interpreters
This function generates simple .dot files for visualizing automata transition functions using Graphviz. Its primary purpose is debugging large sets of automatically generated transitions (e.g., the inflections of Latin verbs).
prepGraph :: ( ... ) => NFA c b a -> [String]
prepGraph nfa = "digraph finite_state_machine {"
              : wrapSp "rankdir = LR"
              : wrapSp ("node [shape = circle]" ++ (mapSp (states nfa \\ terminal nfa)))
              : wrapSp ("node [shape = doublecircle]" ++ (mapSp $ terminal nfa))
              : formatGraph nfa ++ ["}"]
formatGraph :: ( ... ) => NFA c b a -> [String]
formatGraph = map formatDelta . deltaTuples
  where formatDelta (a, a', bc) = wrapSp (mkArrow a a' ++ " " ++ mkLabel bc)
        mkArrow x y = show x ++ " -> " ++ show y
        mkLabel (y, z) = case z of
          (Just t) -> "[ label = \"(" ++ show y ++ ", " ++ show t ++ ")\" ]"
          Nothing -> "[ label = \"(" ++ show y ++ ", " ++ "Null" ++ ")\" ]"
where wrap, wrapSp and mapSp are formatting functions, as is deltaTuples.
The problem is that formatGraph retains double quotes around Strings, which causes errors in Graphviz. E.g., when I print unlines $ prepGraph to a file, I get things like:
0 -> 1 [ label = "('a', "N. SF")" ];
instead of
0 -> 1 [ label = "('a', N. SF)" ];
(However, "Null" seems to work fine, and outputs perfectly well). Now of course the string "N. SF" isn't the actual form I use to store inflections, but that form does include a String or two. So how can I tell Haskell: when you show a String values, don't double-quote it?
Check out how Martin Erwig handled the same problem in Data.Graph.Inductive.Graphviz:
http://hackage.haskell.org/packages/archive/fgl/5.4.2.3/doc/html/src/Data-Graph-Inductive-Graphviz.html
The function you're looking for is "sq" at the bottom:
sq :: String -> String
sq s@[c] = s
sq ('"':s) | last s == '"' = init s
           | otherwise = s
sq ('\'':s) | last s == '\'' = init s
            | otherwise = s
sq s = s
(check out the context and adapt for your own code, of course)
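For instance, one way to plug it into the question's mkLabel helper might be the following untested sketch:
mkLabel (y, z) = case z of
  Just t  -> "[ label = \"(" ++ show y ++ ", " ++ sq (show t) ++ ")\" ]"
  Nothing -> "[ label = \"(" ++ show y ++ ", " ++ "Null" ++ ")\" ]"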
Use dotgen package - it has special safeguards in place to prevent forbidden chars from sneaking into attribute values.
You could define your own type class like this:
class Show a => GShow a where
  gShow :: a -> String
  gShow = show
instance GShow String where
  gShow = id
instance GShow Integer
instance GShow Char
-- And so on for all the types you need.
The default implementation for "gShow" is "show", so you don't need a "where" clause for every instance. But you do need all the instances, which is a bit of a drag.
Alternatively you could use overlapping instances. I think (although I haven't tried it) that this will let you replace the list of instances that use the default gShow with a single line:
instance (Show a) => GShow a
The idea is that with overlapping instances the compiler will choose the most specific instance available. So for strings it will pick the String instance over the more general one, and for everything else the general one is the only one that matches.
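With a reasonably recent GHC that single line needs a few extensions and an overlap pragma; roughly the following, though the details vary by compiler version, so treat it as a sketch:
{-# LANGUAGE FlexibleInstances, UndecidableInstances #-}
instance {-# OVERLAPPABLE #-} Show a => GShow a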
It seems a little ugly, but you could apply a filter to show t
filter (/='"') (show t)