Haskell: Pretty print infix and prefix

I have a type to represent Haskell types:
data Type
  = TApp Type Type
  | TVar Name
  | TLit Name

infixl 8 `TApp`

-- a -> b
aToB = TLit "Fun" `TApp` TVar "a" `TApp` TVar "b"

-- Maybe (IO Int)
maybeIOInt = TLit "Maybe" `TApp` (TLit "IO" `TApp` TLit "Int")
I want to print it the way Haskell does: literals that are symbols are printed infix, while other literals are printed prefix.
Parentheses should also be added where necessary:
show aToB = "a -> b"
show maybeIOInt = "Maybe (IO Int)"
show ast = ???
How can I implement this?

The usual way to do this is to thread a precedence parameter through your printing function. Also, you should almost always prefer a pretty-printing library over raw strings (both for performance and for ease of use). GHC ships with pretty, and I also recommend the newer prettyprinter.
Using the former (and assuming type Name = String):
import Text.PrettyPrint.HughesPJ

prettyType :: Type -> Doc
prettyType = go 0
  where
    go :: Int -> Type -> Doc
    go _ (TVar x) = text x
    go _ (TLit n) = text n
    go n (TLit "Fun" `TApp` l `TApp` r) =
      maybeParens (n > 0) (go 1 l <+> text "->" <+> go 0 r)
    go n (l `TApp` r) = maybeParens (n > 1) (go 1 l <+> go 2 r)
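A quick way to check it (a sketch; render comes from the same Text.PrettyPrint.HughesPJ module, and aToB and maybeIOInt are the example values from the question):

main :: IO ()
main = do
  putStrLn (render (prettyType aToB))        -- prints: a -> b
  putStrLn (render (prettyType maybeIOInt))  -- prints: Maybe (IO Int)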

Related

Haskell: Exception <<loop>> on recursive data entry

So I'm trying to make a little program that can take in data captured during an experiment. For the most part I think I've figured out how to read data recursively until the user signals there is no more; however, upon termination of data entry Haskell throws Exception: <<loop>> and I can't really figure out why. Here's the code:
readData :: (Num a, Read a) => [Point a] -> IO [Point a]
readData l = do putStr "Enter Point (x,y,<e>) or (d)one: "
                entered <- getLine
                if (entered == "d" || entered == "done")
                    then return l
                    else do let l = addPoint l entered
                            nl <- readData l
                            return nl

addPoint :: (Num a, Read a) => [Point a] -> String -> [Point a]
addPoint l s = l ++ [Point (dataList !! 0) (dataList !! 1) (dataList !! 2)]
    where dataList = (map read $ checkInputData . splitOn "," $ s) :: (Read a) => [a]

checkInputData :: [String] -> [String]
checkInputData xs
    | length xs < 2  = ["0","0","0"]
    | length xs < 3  = (xs ++ ["0"])
    | length xs == 3 = xs
    | length xs > 3  = ["0","0","0"]
As far as I can tell, the exception is an indication that there is an infinite loop somewhere, but I can't figure out why it occurs. When "done" is entered, the current level should simply return l, the list it's given, which should then cascade up through the previous iterations of the function.
Thanks for any help. (And yes, checkInputData will have proper error handling once I figure out how to do that.)
<<loop>> basically means GHC has detected an infinite loop caused by a value which depends immediately on itself (cf. this question, or this one for further technical details if you are curious). In this case, that is triggered by:
else do let l = addPoint l entered
This definition, which shadows the l you passed as an argument, defines l in terms of itself. You meant to write something like...
else do let l' = addPoint l entered
... which defines a new value, l', in terms of the original l.
As Carl points out, turning on -Wall (e.g. by passing it to GHC at the command line, or with :set -Wall in GHCi) would make GHC warn you about the shadowing:
<interactive>:171:33: warning: [-Wname-shadowing]
This binding for ‘l’ shadows the existing binding
bound at <interactive>:167:10
Also, as highlighted by dfeuer, the whole do-block in the else branch can be replaced by:
readData (addPoint l entered)
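Putting both fixes together, readData might look like this (a sketch; it keeps the question's Point type and addPoint unchanged):

readData :: (Num a, Read a) => [Point a] -> IO [Point a]
readData l = do
  putStr "Enter Point (x,y,<e>) or (d)one: "
  entered <- getLine
  if entered == "d" || entered == "done"
    then return l
    else readData (addPoint l entered)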
As an unrelated suggestion, in this case it is a good idea to replace your uses of length and (!!) with pattern matching. For instance, checkInputData can be written as:
checkInputData :: [String] -> [String]
checkInputData xs = case xs of
    [_,_]   -> xs ++ ["0"]
    [_,_,_] -> xs
    _       -> ["0","0","0"]
addPoint, in its turn, might become:
addPoint :: (Num a, Read a) => [Point a] -> String -> [Point a]
addPoint l s = l ++ [Point x y z]
    where [x,y,z] = (map read $ checkInputData . splitOn "," $ s) :: (Read a) => [a]
That becomes even neater if you change checkInputData so that it returns a (String, String, String) triple, which would better express the invariant that you are reading exactly three values.
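For illustration, a minimal sketch of that triple-returning variant (my own code, not the original poster's; it assumes a three-field Point type and splitOn from Data.List.Split):

import Data.List.Split (splitOn)

data Point a = Point a a a deriving Show

checkInputData :: [String] -> (String, String, String)
checkInputData [x, y]    = (x, y, "0")
checkInputData [x, y, z] = (x, y, z)
checkInputData _         = ("0", "0", "0")

addPoint :: (Num a, Read a) => [Point a] -> String -> [Point a]
addPoint l s = l ++ [Point (read x) (read y) (read z)]
  where (x, y, z) = checkInputData (splitOn "," s)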

Haskell: Why can I load this file in GHCi, but when I try to do the same in Hugs I get a syntax error?

This is the file I am trying to load:
import Data.List (foldl')
import Text.Printf (printf)
import Data.Char (ord)

--data IntParsedStr = Int | ParsingError
--data ParsingError = ParsingError String

asInt :: String -> Either String Integer
asInt "" = Left "Empty string"
asInt xs@(x:xt) | x == '-' = either Left (Right . ((-1) *)) (asInt' xt)
                | otherwise = either Left Right (asInt' xs)
    where asInt' :: String -> Either String Integer
          asInt' "" = Left "No digits"
          asInt' xs = foldl' step (Right 0) xs
              where step :: Either String Integer -> Char -> Either String Integer
                    step (Left e) _ = Left e
                    step (Right ac) c = either Left (Right . (10 * ac + ) . fromIntegral) (charAsInt c)

charAsInt :: Char -> Either String Int
charAsInt c | between (ord '0') (ord c) (ord '9') = Right $ ord c - ord '0'
            | otherwise = Left $ printf "'%c' is not a digit" c

checkPrecision str = either error ((str == ). show) (asInt str)

between :: Ord t => t -> t -> t -> Bool
between a b c = a <= b && b <= c
It loads without any problem in GHCi, but in Hugs I get this error:
ERROR "folds.hs":17 - Syntax error in expression (unexpected `)')
Line 17 is the last line in the definition of the asInt function.
Edit:
I recently found that this is in fact a known Hugs issue, as mentioned in this question,
which links to the Hugs 98 User's Guide, which says that
Legal expressions like (a+b+) and (a*b+) are rejected.
I believe this is a bug in Hugs, not a liberality of GHC. The Haskell 98 report (appropriate in the context of Hugs usage) says
Syntactic precedence rules apply to sections as follows. (op e) is legal if and only if (x op e) parses in the same way as (x op (e)); and similarly for (e op). For example, (*a+b) is syntactically invalid, but (+a*b) and (*(a+b)) are valid. Because (+) is left associative, (a+b+) is syntactically correct, but (+a+b) is not; the latter may legally be written as (+(a+b)).
I interpret that as allowing (10 * ac + ) since both (*) and (+) are left associative, and indeed (*) has higher precedence.
As pointed out in the comments, ((10 * ac) + ) is accepted by both, so is a workaround.
Interestingly, this isn't listed on the Hugs vs Haskell 98 page, so maybe Mark P. Jones reads this section of the report differently to me. Certainly I can forgive him for this; Gofer implemented constructor classes long before Haskell allowed them, Hugs is still faster than GHCi at compilation, and it still gives better error messages.
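For concreteness, here is a tiny self-contained comparison of the two spellings (my own example; per the User's Guide quote above, Hugs rejects f, while both systems accept g):

f, g :: Integer -> Integer
f ac = (10 * ac + ) 3      -- the section form Hugs rejects
g ac = ((10 * ac) + ) 3    -- the workaround, accepted by both

main :: IO ()
main = print (f 4, g 4)    -- (43,43)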

Haskell calculator - order of operations

I am very new to Haskell and I need to make a working calculator that will give answers to expressions like 2+3*(5+12).
I have something that more or less manages to calculate, but I have a problem with the order of operations and no idea how to handle it. Here is my code:
import Text.Regex.Posix
import Data.Maybe

oblicz :: String -> Double
oblicz str = eval (Nothing, None) $ map convertToExpression $ (tokenize str)

eval :: (Maybe Double, Expression) -> [Expression] -> Double
eval (Nothing, _) ((Variable v):reszta) = eval (Just v, None) reszta
eval (Just aktualnyWynik, None) ((Operator o):reszta) = eval ((Just aktualnyWynik), (Operator o)) reszta
eval (Just aktualnyWynik, (Operator o)) ((Variable v):reszta) = eval (Just $ o aktualnyWynik v, None) reszta
eval (aktualnyWynik, operator) (LeftParenthesis:reszta)
    = eval (aktualnyWynik, operator) ((Variable (eval (Nothing, None) reszta)):(getPartAfterParentheses reszta))
eval (Just aktualnyWynik, _) [] = aktualnyWynik
eval (Just aktualnyWynik, _) (RightParenthesis:_) = aktualnyWynik

data Expression = Operator (Double -> Double -> Double)
                | Variable Double
                | LeftParenthesis
                | RightParenthesis
                | None

tokenize :: String -> [String]
tokenize expression = getAllTextMatches (expression =~ "([0-9]+|\\(|\\)|\\+|-|%|/|\\*)" :: AllTextMatches [] String)

convertToExpression :: String -> Expression
convertToExpression "-" = Operator (-)
convertToExpression "+" = Operator (+)
convertToExpression "*" = Operator (*)
convertToExpression "/" = Operator (/)
convertToExpression "(" = LeftParenthesis
convertToExpression ")" = RightParenthesis
convertToExpression variable = Variable (read variable)

getPartAfterParentheses :: [Expression] -> [Expression]
getPartAfterParentheses [] = []
getPartAfterParentheses (RightParenthesis:expressionsList) = expressionsList
getPartAfterParentheses (LeftParenthesis:expressionsList) = getPartAfterParentheses (getPartAfterParentheses expressionsList)
getPartAfterParentheses (expression:expressionsList) = getPartAfterParentheses expressionsList
I thought maybe I could create two stacks: one for numbers and one for operators. While reading the expression, I could push numbers onto one stack and operators onto the other. On reading an operator I would check whether there is already something on the stack, and if so, decide whether I should pop it from the stack and do the math or not, just as with RPN (reverse Polish notation).
Unfortunately, as I said, I am VERY new to Haskell and have no clue how to go about writing this.
Any hints or help would be nice :)
Pushing things on different stacks sure feels like a very procedural thing to do, and that's generally not nice in Haskell. (Stacks can be realised as lists, which works quite nicely in a purely functional fashion. Even real mutable state can be fine, if only as an optimisation, but if more than one object needs to be modified at a time then this isn't exactly enjoyable.)
The preferable way would be to build up a tree representing the expression.
type DInfix = Double -> Double -> Double  -- for readability's sake

data ExprTree = Op DInfix ExprTree ExprTree
              | Value Double
Evaluating this tree is basically evalTree (Op c t1 t2) = c (evalTree t1) (evalTree t2), i.e. ExprTree->Double right away.
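Spelled out as a complete function (a sketch over the ExprTree type just defined; once the Fixity field is added below, the Op case gains an extra field):

evalTree :: ExprTree -> Double
evalTree (Value v)    = v
evalTree (Op c t1 t2) = c (evalTree t1) (evalTree t2)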
To build the tree up, the crucial point is to get the operator fixities right: different operators have different fixity. I'd put that information in the Operator field:
type Fixity = Int

data Expression = Operator (Double -> Double -> Double) Fixity
                | ...
which then requires e.g.
...
convertToExpression "+" = Operator (+) 6
convertToExpression "*" = Operator (*) 7
...
(Those are the fixities that Haskell itself has for the operators. You can :i + in GHCi to see them.)
Then you'd build the tree.
toExprTree :: [Expression] -> ExprTree
Obvious base case:
toExprTree [Variable v] = Value v
You might continue with
toExprTree (Variable v : Operator c _ : exprs) = Op c (Value v) (toExprTree exprs)
But that's actually not right: for e.g. 4 * 3 + 2 it would give 4 * (3 + 2). We actually need to bring the 4 * down into the tree of the remaining expression, as deep as the fixities are lower. So the tree needs to know about fixities as well:
data ExprTree = Op DInfix Fixity ExprTree ExprTree
              | Value Double

mergeOpL :: Double -> DInfix -> Fixity -> ExprTree -> ExprTree
mergeOpL v c f t@(Op c' f' t' t'')
    | f > f' = Op c' f' (mergeOpL v c f t') t''
mergeOpL v c f t = Op c f (Value v) t
What remains to be done is handling parentheses. You'd need to take a whole matching-brackets expression and assign it a tree-fixity of, say, tight = 100 :: Fixity.
As a note: such a tokenise-then-manually-parse workflow is pretty cumbersome, regardless of how nicely functional you do it. Haskell has powerful parser-combinator libraries like parsec, which take most of the work and bookkeeping off you.
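To give a taste of that, here is a minimal sketch using parsec's Text.Parsec.Expr (my own illustration, not part of the answer above; the names expr and atom are made up):

import Text.Parsec
import Text.Parsec.Expr
import Text.Parsec.String (Parser)

expr :: Parser Double
expr = buildExpressionParser table atom
  where
    table = [ [ binary "*" (*), binary "/" (/) ]   -- binds tighter
            , [ binary "+" (+), binary "-" (-) ]
            ]
    binary name f = Infix (f <$ string name) AssocLeft
    atom =  between (char '(') (char ')') expr
        <|> (read <$> many1 digit)

-- ghci> parse expr "" "2+3*(5+12)"
-- Right 53.0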
What you need to solve this problem is the shunting-yard algorithm of Edsger Dijkstra, as described at http://www.wcipeg.com/wiki/Shunting_yard_algorithm. You can see my implementation at the bottom of this file.
If you are limiting yourself to just +, -, *, and /, you can also solve the problem with the usual trick found in most intro-to-compilers examples: parse into two different non-terminals, often called term and product, to build the correct tree. This gets unwieldy if you have to deal with a lot of operators, or if they are user-defined.
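For comparison with the operator-table sketch above, here is roughly what the term/product split looks like with parsec's chainl1 (again my own sketch; expr, term, and factor are made-up names):

import Text.Parsec
import Text.Parsec.String (Parser)

expr, term, factor :: Parser Double
expr   = term   `chainl1` ((+) <$ char '+' <|> (-) <$ char '-')
term   = factor `chainl1` ((*) <$ char '*' <|> (/) <$ char '/')
factor =  between (char '(') (char ')') expr
      <|> (read <$> many1 digit)

-- ghci> parse expr "" "2+3*(5+12)"
-- Right 53.0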

LaTeX natural deduction proofs using Haskell

How can one create LaTeX source for natural deduction proof trees (like those shown here) via Haskell, e.g. using HaTeX? I'd like to emulate LaTeX .sty files like bussproofs.sty or proof.sty.
I'm using your question as an excuse to improve and demo a Haskell
call-tracing library I'm working
on. In the context of
tracing, an obvious way to create a proof tree is to trace a type
checker and then format the trace as a natural-deduction proof. To
keep things simple my example logic is the simply-typed lambda
calculus
(STLC),
which corresponds to the implicational fragment of propositional
intuitionistic
logic.
I am using proofs.sty, but not via HaTeX or any other Haskell
Latex library. The Latex for proof trees is very simple and using a
Haskell Latex library would just complicate things.
I've written the proof-tree generation code twice:
in a self-contained way, by writing a type checker that also
returns a proof tree;
using my tracing library, by call-tracing a type checker and then
post processing the trace into a proof tree.
Since you didn't ask about call-tracing libraries, you may be less
interested in the call-trace based version, but I think it's
interesting to compare both versions.
Examples
Let's start with some output examples first, to see what all this gets us.
The first three examples are motivated
by an axiom system for implicational propositional
calculus;
the first two also happen to correspond to S and
K:
The first axiom, K, with proof terms:
The second axiom, S, with proof terms, but with premises in the
context, not lambda bound:
The fourth axiom, modus ponens, without proof terms:
The third axiom in that Wikipedia article (Peirce's law) is
non-constructive and so we can't prove it here.
For a different kind of example, here's a failed type check of the Y
combinator:
The arrows are meant to lead you to the error, which is marked with a
bang (!).
Code
Now I'll describe the code which generated those examples. The code is
from this
file
unless otherwise noted. I'm not including every line of code here;
see that link if you want something you can actually build using GHC
7.6.3.
Most of the code -- the grammar, parser, and pretty printer -- is the
same for both versions; only the type checkers and proof-tree
generators differ. All of the common code is in the file just
referenced.
STLC grammar
The STLC grammar in ASCII:
-- Terms
e ::= x | \x:T.e | e e
-- Types
T ::= A | T -> T
-- Contexts
C ::= . | C,x:T
And the corresponding Haskell:
type TmVar = String
type TyVar = String

data Tm = Lam TmVar Ty Tm
        | TmVar TmVar
        | Tm :#: Tm
  deriving Show

data Ty = TyVar TyVar
        | Ty :->: Ty
  deriving (Eq , Show)

type Ctx = [(TmVar,Ty)]
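For example, the identity term \x:A.x from the grammar above would be represented as (my own illustration, not from the linked code):

idTm :: Tm
idTm = Lam "x" (TyVar "A") (TmVar "x")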
Type checking + proof tree generation
Both versions implement the same abstract STLC type checker. In ASCII:
(x:T) in C
---------- Axiom
C |- x : T

C,x:T1 |- e : T2
--------------------- -> Introduction
C |- \x:T1.e : T1->T2

C |- e : T1 -> T2    C |- e1 : T1
--------------------------------- -> Elimination
C |- e e1 : T2
Version 1: self-contained with inline proof-tree generation
The full code for this version is
here.
The proof-tree generation happens in the type checker, but the actual
proof-tree generation code is factored out into addProof and
conclusion.
Type checking
-- The mode is 'True' if proof terms should be included.
data R = R { _ctx :: Ctx , _mode :: Bool }

type M a = Reader R a

extendCtx :: TmVar -> Ty -> M a -> M a
extendCtx x t = local extend where
  extend r = r { _ctx = _ctx r ++ [(x,t)] }

-- These take the place of the inferred type when there is a type
-- error.
here , there :: String
here  = "\\,!"
there = "\\,\\uparrow"

-- Return the inferred type---or error string if type inference
-- fails---and the latex proof-tree presentation of the inference.
--
-- This produces different output than 'infer' in the error case: here
-- all premises are always computed, whereas 'infer' stops at the
-- first failing premise.
inferProof :: Tm -> M (Either String Ty , String)
inferProof tm@(Lam x t e) = do
  (et' , p) <- extendCtx x t . inferProof $ e
  let et'' = (t :->:) <$> et'
  addProof et'' [p] tm
inferProof tm@(TmVar x) = do
  mt <- lookup x <$> asks _ctx
  let et = maybe (Left here) Right mt
  addProof et [] tm
inferProof tm@(e :#: e1) = do
  (et , p)   <- inferProof e
  (et1 , p1) <- inferProof e1
  case (et , et1) of
    (Right t , Right t1) ->
      case t of
        t1' :->: t2 | t1' == t1 -> addProof (Right t2) [p , p1] tm
        _                       -> addProof (Left here) [p , p1] tm
    _ -> addProof (Left there) [p , p1] tm
Proof tree generation
The addProof corresponds to proofTree in the other version:
-- Given the inferred type, the proof-trees for all premise inferences
-- (subcalls), and the input term, annotate the inferred type with a
-- result proof tree.
addProof :: Either String Ty -> [String] -> Tm -> M (Either String Ty , String)
addProof et premises tm = do
  R { _mode , _ctx } <- ask
  let (judgment , rule) = conclusion _mode _ctx tm et
  let tex = "\\infer[ " ++ rule ++ " ]{ " ++
            judgment ++ " }{ " ++
            intercalate " & " premises ++ " }"
  return (et , tex)
The code for conclusion is common to both versions:
conclusion :: Mode -> Ctx -> Tm -> Either String Ty -> (String , String)
conclusion mode ctx tm e = (judgment mode , rule tm)
  where
    rule (TmVar _) = "\\textsc{Axiom}"
    rule (Lam {})  = "\\to \\text{I}"
    rule (_ :#: _) = "\\to \\text{E}"

    tyOrError = either id pp e

    judgment True  = pp ctx ++ " \\vdash " ++ pp tm ++ " : " ++ tyOrError
    judgment False = ppCtxOnlyTypes ctx ++ " \\vdash " ++ tyOrError
Version 2: via call-tracing, with proof-tree generation as post processing
Here the type checker is not even aware of proof-tree generation, and
adding call-tracing is just one line.
Type checking
type Mode    = Bool
type Stream  = LogStream (ProofTree Mode)
type M a     = ErrorT String (ReaderT Ctx (Writer Stream)) a
type InferTy = Tm -> M Ty

infer , infer' :: InferTy
infer = simpleLogger (Proxy::Proxy "infer") ask (return ()) infer'

infer' (TmVar x) = maybe err pure . lookup x =<< ask where
  err = throwError $ "Variable " ++ x ++ " not in context!"
infer' (Lam x t e) = (t :->:) <$> (local (++ [(x,t)]) . infer $ e)
infer' (e :#: e1) = do
  t  <- infer e
  t1 <- infer e1
  case t of
    t1' :->: t2 | t1' == t1 -> pure t2
    _ -> throwError $ "Can't apply " ++ show t ++ " to " ++ show t1 ++ "!"
The LogStream
type
and ProofTree
class
are from the library. The LogStream is the type of log events that
the "magic"
simpleLogger
logs. Note the line
infer = simpleLogger (Proxy::Proxy "infer") ask (return ()) infer'
which defines infer to be a logged version of infer', the actual
type checker. That's all you have to do to trace a monadic function!
I won't get into how simpleLogger actually works here, but the
result is that each call to infer gets logged, including the
context, arguments, and return value, and these data get grouped
together with all logged subcalls (here only to infer). It would be
easy to manually write such logging code for infer, but it's nice
that with the library you don't have to.
Proof-tree generation
To generate the Latex proof trees, we implement ProofTree to post
process infer's call trace. The library provides a proofTree
function that calls the ProofTree methods and assembles the proof
trees; we just need to specify how the conclusions of the typing
judgments will be formatted:
instance ProofTree Mode (Proxy (SimpleCall "infer" Ctx InferTy ())) where
  callAndReturn mode t = conclusion mode ctx tm (Right ty)
    where
      (tm , ()) = _arg t
      ty        = _ret t
      ctx       = _before t
  callAndError mode t = conclusion mode ctx tm (Left error)
    where
      (tm , ()) = _arg' t
      how       = _how t
      ctx       = _before' t
      error     = maybe "\\,!" (const "\\,\\uparrow") how
The pp calls are to a user defined pretty printer; obviously, the
library can't know how to pretty print your data types.
Because calls can be erroneous -- the library detects errors -- we have to say how to format successful and failing calls. Refer to the Y-combinator example above for an example of a failing type check, corresponding to the callAndError case here.
The library's proofTree
function
is quite simple: it builds a proofs.sty proof tree with the current
call as conclusion, and the subcalls as premises:
proofTree :: mode -> Ex2T (LogTree (ProofTree mode)) -> String
proofTree mode (Ex2T t@(CallAndReturn {})) =
  "\\infer[ " ++ rule ++ " ]{ " ++ conclusion ++ " }{ " ++ intercalate " & " premises ++ " }"
  where
    (conclusion , rule) = callAndReturn mode t
    premises            = map (proofTree mode) (_children t)
proofTree mode (Ex2T t@(CallAndError {})) =
  "\\infer[ " ++ rule ++ " ]{ " ++ conclusion ++ " }{ " ++ intercalate " & " premises ++ " }"
  where
    (conclusion , rule) = callAndError mode t
    premises            = map (proofTree mode)
                              (_children' t ++ maybe [] (:[]) (_how t))
I use proofs.sty in the library because it allows arbitrarily many
premises, although bussproofs.sty would work for this STLC example
since no rule has more than five premises (the limit for
bussproofs). Both Latex packages are described
here.
Pretty printing
Now we return to code that is common between both versions.
The pretty printer that defines the pp used above is rather long --
it handles precedence and associativity, and is written in a way that
should be extensible if more terms, e.g. products, are added to the
calculus -- but mostly straightforward. First, we set up a precedence
table and a precedence-and-associativity-aware parenthesizer:
-- Precedence: higher value means tighter binding.
type Prec = Double

between :: Prec -> Prec -> Prec
between x y = (x + y) / 2

lowest , highest , precLam , precApp , precArr :: Prec
highest = 1
lowest  = 0
precLam = lowest
precApp = between precLam highest
precArr = lowest

-- Associativity: left, none, or right.
data Assoc = L | N | R deriving Eq

-- Wrap a pretty print when the context dictates.
wrap :: Pretty a => Assoc -> a -> a -> String
wrap side ctx x = if prec x `comp` prec ctx
                  then pp x
                  else parens . pp $ x
  where
    comp = if side == assoc x || assoc x == N
           then (>=)
           else (>)
    parens s = "(" ++ s ++ ")"
And then we define the individual pretty printers:
class Pretty t where
  pp :: t -> String

  prec :: t -> Prec
  prec _ = highest

  assoc :: t -> Assoc
  assoc _ = N

instance Pretty Ty where
  pp (TyVar v)      = v
  pp t@(t1 :->: t2) = wrap L t t1 ++ " {\\to} " ++ wrap R t t2

  prec (_ :->: _) = precArr
  prec _          = highest

  assoc (_ :->: _) = R
  assoc _          = N

instance Pretty Tm where
  pp (TmVar v)     = v
  pp (Lam x t e)   = "\\lambda " ++ x ++ " {:} " ++ pp t ++ " . " ++ pp e
  pp e@(e1 :#: e2) = wrap L e e1 ++ " " ++ wrap R e e2

  prec (Lam {})  = precLam
  prec (_ :#: _) = precApp
  prec _         = highest

  assoc (_ :#: _) = L
  assoc _         = N

instance Pretty Ctx where
  pp []        = "\\cdot"
  pp ctx@(_:_) =
    intercalate " , " [ x ++ " {:} " ++ pp t | (x,t) <- ctx ]
By adding a "mode" argument, it would be easy to use the same pretty
printer to print plain ASCII, which would be useful with other
call-trace post processors, such as the (unfinished) UnixTree
processor.
Parsing
A parser is not essential to the example, but of course I did not enter the
example input terms directly as Haskell Tms.
Recall the STLC grammar in ASCII:
-- Terms
e ::= x | \x:T.e | e e
-- Types
T ::= A | T -> T
-- Contexts
C ::= . | C,x:T
This grammar is ambiguous: both the term application e e
and function type T -> T have no associativity given by the
grammar. But in STLC term application is left associative and function
types are right associative, and so the corresponding disambiguated
grammar we actually parse is
-- Terms
e ::= e' | \x:T.e | e e'
e' ::= x | ( e )
-- Types
T ::= T' | T' -> T
T' ::= A | ( T )
-- Contexts
C ::= . | C,x:T
The parser is maybe too simple -- I'm not using a languageDef and
it's whitespace sensitive -- but it gets the job done:
type P a = Parsec String () a

parens :: P a -> P a
parens = Text.Parsec.between (char '(') (char ')')

tmVar , tyVar :: P String
tmVar = (:[]) <$> lower
tyVar = (:[]) <$> upper

tyAtom , arrs , ty :: P Ty
tyAtom = parens ty
     <|> TyVar <$> tyVar
arrs = chainr1 tyAtom arrOp where
  arrOp = string "->" *> pure (:->:)
ty = arrs

tmAtom , apps , lam , tm :: P Tm
tmAtom = parens tm
     <|> TmVar <$> tmVar
apps = chainl1 tmAtom appOp where
  appOp = pure (:#:)
lam = uncurry Lam <$> (char '\\' *> typing)
                  <*> (char '.' *> tm)
tm = apps <|> lam

typing :: P (TmVar , Ty)
typing = (,) <$> tmVar
             <*> (char ':' *> ty)

ctx :: P Ctx
ctx = typing `sepBy` (char ',')
To clarify what the input terms look like, here are the examples from
the Makefile:
# OUTFILE CONTEXT TERM
./tm2latex.sh S.ctx 'x:P->Q->R,y:P->Q,z:P' 'xz(yz)'
./tm2latex.sh S.lam '' '\x:P->Q->R.\y:P->Q.\z:P.xz(yz)'
./tm2latex.sh S.err '' '\x:P->Q->R.\y:P->Q.\z:P.xzyz'
./tm2latex.sh K.ctx 'x:P,y:Q' 'x'
./tm2latex.sh K.lam '' '\x:P.\y:Q.x'
./tm2latex.sh I.ctx 'x:P' 'x'
./tm2latex.sh I.lam '' '\x:P.x'
./tm2latex.sh MP.ctx 'x:P,y:P->Q' 'yx'
./tm2latex.sh MP.lam '' '\x:P.\y:P->Q.yx'
./tm2latex.sh ZERO '' '\s:A->A.\z:A.z'
./tm2latex.sh SUCC '' '\n:(A->A)->(A->A).\s:A->A.\z:A.s(nsz)'
./tm2latex.sh ADD '' '\m:(A->A)->(A->A).\n:(A->A)->(A->A).\s:A->A.\z:A.ms(nsz)'
./tm2latex.sh MULT '' '\m:(A->A)->(A->A).\n:(A->A)->(A->A).\s:A->A.\z:A.m(ns)z'
./tm2latex.sh Y.err '' '\f:A->A.(\x:A.f(xx))(\x:A.f(xx))'
./tm2latex.sh Y.ctx 'a:A->(A->A),y:(A->A)->A' '\f:A->A.(\x:A.f(axx))(y(\x:A.f(axx)))'
Latex document generation
The ./tm2latex.sh script just calls pdflatex on the output of the
Haskell programs described above. The Haskell programs produce the proof
tree and then wrap it in a minimal Latex document:
unlines
  [ "\\documentclass[10pt]{article}"
  , "\\usepackage{proof}"
  , "\\usepackage{amsmath}"
  , "\\usepackage[landscape]{geometry}"
  , "\\usepackage[cm]{fullpage}"
    -- The most slender font I could find:
    -- http://www.tug.dk/FontCatalogue/iwonalc/
  , "\\usepackage[light,condensed,math]{iwona}"
  , "\\usepackage[T1]{fontenc}"
  , "\\begin{document}"
  , "\\tiny"
  , "\\[" ++ tex ++ "\\]"
  , "\\end{document}"
  ]
As you can see, most of the Latex is devoted to making the proof trees
as small as possible; I plan to also write an ASCII proof tree post
processor, which may be more useful in practice when the examples are
larger.
Conclusion
As always, it takes a bit of code to write a parser, type checker, and
pretty printer. On top of that, adding proof-tree generation is
pretty simple in both versions. This is a fun toy example, but I
expect to do something similar in the context of a "real"
unification-based type checker for a dependently-typed language; there
I expect call tracing and proof-tree generation (in ASCII) to provide
significant help in debugging the type checker.

What can be improved on my first haskell program?

Here is my first Haskell program. What parts would you write in a better way?
-- Multiplication table
-- Returns n*n multiplication table in base b
import Text.Printf
import Data.List
import Data.Char

-- Returns n*n multiplication table in base b
mulTable :: Int -> Int -> String
mulTable n b = foldl (++) (verticalHeader n b w) (map (line n b w) [0..n])
  where
    lo = 2 * (logBase (fromIntegral b) (fromIntegral n))
    w  = 1 + fromInteger (floor lo)

verticalHeader :: Int -> Int -> Int -> String
verticalHeader n b w = (foldl (++) tableHeader columnHeaders)
                       ++ "\n"
                       ++ minusSignLine
                       ++ "\n"
  where
    tableHeader   = replicate (w+2) ' '
    columnHeaders = map (horizontalHeader b w) [0..n]
    minusSignLine = concat ( replicate ((w+1)* (n+2)) "-" )

horizontalHeader :: Int -> Int -> Int -> String
horizontalHeader b w i = format i b w

line :: Int -> Int -> Int -> Int -> String
line n b w y = (foldl (++) ((format y b w) ++ "|" )
               (map (element b w y) [0..n])) ++ "\n"

element :: Int -> Int -> Int -> Int -> String
element b w y x = format (y * x) b w

toBase :: Int -> Int -> [Int]
toBase b v = toBase' [] v where
  toBase' a 0 = a
  toBase' a v = toBase' (r:a) q where (q,r) = v `divMod` b

toAlphaDigits :: [Int] -> String
toAlphaDigits = map convert where
  convert n | n < 10    = chr (n + ord '0')
            | otherwise = chr (n + ord 'a' - 10)

format :: Int -> Int -> Int -> String
format v b w = concat spaces ++ digits ++ " "
  where
    digits = if v == 0
               then "0"
               else toAlphaDigits ( toBase b v )
    l = length digits
    spaceCount = if (l > w) then 0 else (w-l)
    spaces = replicate spaceCount " "
Here are some suggestions:
To make the tabularity of the computation more obvious, I would pass the list [0..n] to the line function rather than passing n.
I would further split out the computation of the horizontal and vertical axes so that they are passed as arguments to mulTable rather than computed there.
Haskell is higher-order, and almost none of the computation has to do with multiplication. So I would change the name of mulTable to binopTable and pass the actual multiplication in as a parameter.
Finally, the formatting of individual numbers is repetitious. Why not pass \x -> format x b w as a parameter, eliminating the need for b and w?
The net effect of the changes I am suggesting is that you build a general higher-order function for creating tables for binary operators. Its type becomes something like
binopTable :: (i -> String) -> (i -> i -> i) -> [i] -> [i] -> String
and you wind up with a much more reusable function—for example, Boolean truth tables should be a piece of cake.
Higher-order and reusable is the Haskell Way.
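Here is a minimal sketch of what such a binopTable could look like (the name and signature are the ones suggested above; the body and the parameter names fmt, op, rows, and cols are my own illustration, and the column alignment is deliberately crude):

binopTable :: (i -> String) -> (i -> i -> i) -> [i] -> [i] -> String
binopTable fmt op rows cols =
  unlines (header : map row rows)
  where
    header = unwords ("" : map fmt cols)
    row r  = unwords (fmt r : [ fmt (r `op` c) | c <- cols ])

-- For instance, a Boolean truth table:
-- ghci> putStr (binopTable show (&&) [False,True] [False,True])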
You don't use anything from import Text.Printf.
Stylistically, you use more parentheses than necessary. Haskellers tend to find code more readable when it's cleaned of extraneous stuff like that. Instead of something like h x = f (g x), write h = f . g.
Nothing here really requires Int; (Integral a) => a ought to do.
foldl (++) x xs == concat (x : xs); I trust the built-in concat to work better than your implementation.
Also, you should prefer foldr when the function is lazy in its second argument, as (++) is – because Haskell is lazy, this reduces stack space (and also works on infinite lists).
Also, unwords and unlines are shortcuts for intercalate " " and concat . map (++ "\n") respectively, i.e. "join with spaces" and "join with newlines (plus trailing newline)"; you can replace a couple things by those.
Unless you use big numbers, w = length $ takeWhile (<= n) $ iterate (* b) 1 is probably faster. Or, in the case of a lazy programmer, let w = length $ toBase b n.
concat (replicate ((w+1) * (n+2)) "-") == replicate ((w+1) * (n+2)) '-'. Not sure how you missed this one; you got it right just a couple of lines up.
You do the same thing with concat spaces, too. However, wouldn't it be easier to actually use the Text.Printf import and write printf "%*s " w digits?
Norman Ramsey gave excellent high-level (design) suggestions; Below are some low-level ones:
First, consult with HLint. HLint is a friendly program that gives you rudimentary advice on how to improve your Haskell code!
In your case HLint gives 7 suggestions (mostly about redundant brackets).
Modify your code according to HLint's suggestions until it likes what you feed it.
More HLint-like stuff:
concat (replicate i "-"). Why not replicate i '-'?
Consult with Hoogle whenever there is reason to believe that a function you need is already available in Haskell's libraries. Haskell comes with tons of useful functions so Hoogle should come in handy quite often.
Need to concatenate strings? Search for [String] -> String, and voila you found concat. Now go replace all those folds.
The previous search also suggested unlines. Actually, this even better suits your needs. It's magic!
Optional: pause and thank in your heart to Neil M for making Hoogle and HLint, and thank others for making other good stuff like Haskell, bridges, tennis balls, and sanitation.
Now, for every function that takes several arguments of the same type, make it clear which means what, by giving them descriptive names. This is better than comments, but you can still use both.
So
-- Returns n*n multiplication table in base b
mulTable :: Int -> Int -> String
mulTable n b =
becomes
mulTable :: Int -> Int -> String
mulTable size base =
To soften the blow of the extra characters introduced by the previous suggestion: when a function is only used once, and is not very useful by itself, put it inside its caller's scope in its where clause, where it can use the caller's variables, saving you the need to pass everything to it.
So
line :: Int -> Int -> Int -> Int -> String
line n b w y =
  concat
    $ format y b w
    : "|"
    : map (element b w y) [0 .. n]

element :: Int -> Int -> Int -> Int -> String
element b w y x = format (y * x) b w
becomes
line :: Int -> Int -> Int -> Int -> String
line n b w y =
  concat
    $ format y b w
    : "|"
    : map element [0 .. n]
  where
    element x = format (y * x) b w
You can even move line into mulTable's where clause; imho, you should.
If you find a where clause nested inside another where clause troubling, then I suggest changing your indentation habits. My recommendation is to use consistent indentation of always 2 or always 4 spaces. Then you can easily see, everywhere, where the where in the other where is at.
Below's what it looks like (with a few other changes in style):
import Data.List
import Data.Char

mulTable :: Int -> Int -> String
mulTable size base =
  unlines $
    [ vertHeaders
    , minusSignsLine
    ] ++ map line [0 .. size]
  where
    vertHeaders =
      concat
        $ replicate (cellWidth + 2) ' '
        : map horizontalHeader [0 .. size]
    horizontalHeader i = format i base cellWidth
    minusSignsLine = replicate ((cellWidth + 1) * (size + 2)) '-'
    cellWidth = length $ toBase base (size * size)
    line y =
      concat
        $ format y base cellWidth
        : "|"
        : map element [0 .. size]
      where
        element x = format (y * x) base cellWidth

toBase :: Integral i => i -> i -> [i]
toBase base
  = reverse
  . map (`mod` base)
  . takeWhile (> 0)
  . iterate (`div` base)

toAlphaDigit :: Int -> Char
toAlphaDigit n
  | n < 10    = chr (n + ord '0')
  | otherwise = chr (n + ord 'a' - 10)

format :: Int -> Int -> Int -> String
format v b w =
  spaces ++ digits ++ " "
  where
    digits
      | v == 0    = "0"
      | otherwise = map toAlphaDigit (toBase b v)
    spaces = replicate (w - length digits) ' '
0) add a main function :-) at least a rudimentary one:
import System.Environment (getArgs)
import Control.Monad (liftM)

main :: IO ()
main = do
  args <- liftM (map read) $ getArgs
  case args of
    (n:b:_) -> putStrLn $ mulTable n b
    _       -> putStrLn "usage: nntable n base"
1) run ghc or runhaskell with -Wall; run through hlint.
While hlint doesn't suggest anything special here (only some redundant brackets), ghc will tell you that you don't actually need Text.Printf here...
2) try running it with base = 1 or base = 0 or base = -1
If you want multiline comments use:
{- A multiline
comment -}
Also, never use foldl; use foldl' instead in cases where you are dealing with large lists that must be folded. It is more memory efficient.
A brief comment saying what each function does, along with its arguments and return value, is always good. I had to read the code pretty carefully to fully make sense of it.
Some would say if you do that, explicit type signatures may not be required. That's an aesthetic question, I don't have a strong opinion on it.
One minor caveat: if you do remove the type signatures, you'll automatically get the polymorphic Integral support ephemient mentioned, but you will still need one around toAlphaDigits because of the infamous "monomorphism restriction."
