Is there a better way to add an attribute field to an AST in Haskell?

At first, I have an original AST definition like this:
data Expr = LitI Int | LitB Bool | Add Expr Expr
And I want to generalize it so that each AST node can contain some extra attributes:
data Expr a = LitI Int a | LitB Bool a | Add (Expr a) (Expr a) a
In this way, we can easily attach an attribute to each node of the AST:
type ExprWithType = Expr TypeRep
type ExprWithSize = Expr Int
But this solution makes it hard to access the attribute field: we must use pattern matching and handle every constructor separately:
attribute :: Expr a -> a
attribute e = case e of
  LitI _ a  -> a
  LitB _ a  -> a
  Add _ _ a -> a
Imagine instead that we could define our AST as a product of the original AST and a type variable for the attribute:
type ExprWithType = (Expr, TypeRep)
type ExprWithSize = (Expr, Int)
Then we can simplify the attribute visiting function like this:
attribute = snd
But we know that the attribute from the outermost product type will not recursively appear in the subtrees.
So, is there a better solution for this problem?
Generally speaking, we run into this problem whenever we want to extract a common field from the different cases of a recursive sum type.

You could "lift" the type of the Expr for example like:
data Expr e = LitI Int | LitB Bool | Add e e
Now we can define a data type like:
data ExprAttr a = ExprAttr
  { expression :: Expr (ExprAttr a)
  , attribute  :: a
  }
Here ExprAttr has two fields: expression, which is an Expr whose recursive positions are filled with ExprAttrs, and attribute, which is an a.
You can thus process ExprAttrs, where every node of the tree carries its own annotation. If you want to use a "simple" unannotated AST, you can define a type like:
newtype SimExpr = SimExpr (Expr SimExpr)
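For illustration, here is a minimal sketch of the annotated encoding in use (names as defined above):
-- ‘1 + 2’, where each node carries the size of its subtree.
example :: ExprAttr Int
example =
  ExprAttr
    (Add (ExprAttr (LitI 1) 1)
         (ExprAttr (LitI 2) 1))
    3

-- Reading an annotation is now just the record selector:
rootAttribute :: Int
rootAttribute = attribute example -- 3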

You might want to take a look at Cofree, where f will be your recursive data type after abstracting the recursive positions into a base functor, and a will be the type of your annotation.
Nate Faubion gave a very accessible talk about this and similar approaches; you can watch it here: https://www.youtube.com/watch?v=eKkxmVFcd74
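For reference, a minimal sketch of the Cofree encoding, assuming the free package and the lifted functor from the previous answer:
{-# LANGUAGE DeriveFunctor #-}
import Control.Comonad (extract)
import Control.Comonad.Cofree (Cofree (..))

data ExprF e = LitI Int | LitB Bool | Add e e
  deriving Functor

-- Cofree ExprF a pairs every node with an annotation of type a.
type AnnExpr a = Cofree ExprF a

-- ‘1 + 2’, annotated with the size of each subtree:
example :: AnnExpr Int
example = 3 :< Add (1 :< LitI 1) (1 :< LitI 2)

-- The attribute accessor is just ‘extract’:
attribute :: AnnExpr a -> a
attribute = extract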

Related

Type design for the AST of my language remembering token locations

I wrote a parser and evaluator for a simple programming language. Here is a simplified version of the types for the AST:
data Value = IntV Int | FloatV Float | BoolV Bool
data Expr = IfE Value [Expr] | VarDefE String Value
type Program = [Expr]
I want error messages to tell the line and column of the source code in which the error occurred. For example, if the value in an If expression is not a boolean, I want the evaluator to show an error saying "expected boolean at line x, column y", with x and y referring to the location of the value.
So, what I need to do is redefine the previous types so that they can store the relevant locations of different things. One option would be to add a location to each constructor for expressions, like so:
type Location = (Int, Int)
data Expr = IfE Value [Expr] Location | VarDef String Value Location
This clearly isn't optimal, because I have to add those Location fields to every possible expression, and if for example a value contained other values, I would need to add locations to that value too:
{-
this would turn into FunctionCall String [Value] [Location],
with one location for each value in the function call
-}
data Value = ... | FunctionCall String [Value]
I came up with another solution, which allows me to add locations to everything:
data Located a = Located Location a
type LocatedExpr = Located Expr
type LocatedValue = Located Value
data Value = IntV Int | FloatV Float | BoolV Bool | FunctionCall String [LocatedValue]
data Expr = IfE LocatedValue [LocatedExpr] | VarDef String LocatedValue
type Program = [LocatedExpr]
However I don't like this that much. First of all, it clutters the definition of the evaluator and pattern matching has an extra layer every time. Also, I don't think saying that a function call takes located values as arguments is quite right. Function calls should take values as arguments, and locations should be metadata that doesn't interfere with the evaluator.
I need help redefining my types so that the solution is as clean as possible. Maybe there is a language extension or a design pattern I don't know about that could be helpful.
There are many ways to annotate an AST! This is half of what’s known as the AST typing problem, the other half being how you manage an AST that changes over the course of compilation. The problem isn’t exactly “solved”: all of the solutions have tradeoffs, and which one to pick depends on your expected use cases. I’ll go over a few that you might like to investigate at the end.
Whichever method you choose for organising the actual data types, if it makes pattern-matching ugly or unwieldy, the natural solution is PatternSynonyms.
Considering your first example:
{-# Language PatternSynonyms #-}
type Location = (Int, Int)
data Expr
= LocatedIf Value [Expr] Location
| LocatedVarDef String Value Location
-- Unidirectional pattern synonyms which ignore the location:
pattern If :: Value -> [Expr] -> Expr
pattern If val exprs <- LocatedIf val exprs _loc
pattern VarDef :: String -> Value -> Expr
pattern VarDef name expr <- LocatedVarDef name expr _loc
-- Inform GHC that matching ‘If’ and ‘VarDef’ is just as good
-- as matching ‘LocatedIf’ and ‘LocatedVarDef’.
{-# Complete If, VarDef #-}
This may be sufficiently tidy for your purposes already. But here are a few more tips that I find helpful.
Put annotations first: when adding an annotation type to an AST directly, I often prefer to place it as the first parameter of each constructor, so that it can be conveniently partially applied.
data LocatedExpr
= LocatedIf Location Value [Expr]
| LocatedVarDef Location String Value
If the annotation is a location, then this also makes it more convenient to obtain when writing certain kinds of parsers, along the lines of AnnotatedIf <$> (getSourceLocation <* ifKeyword) <*> value <*> many expr in a parser combinator library.
Parameterise your annotations: I often make the annotation type into a type parameter, so that GHC can derive some useful classes for me:
{-# Language
    DeriveFoldable,
    DeriveFunctor,
    DeriveTraversable #-}
data AnnotatedExpr a
  = AnnotatedIf a Value [Expr]
  | AnnotatedVarDef a String Value
  deriving (Functor, Foldable, Traversable)
type LocatedExpr = AnnotatedExpr Location
-- Get the annotation of an expression.
-- (Total as long as every constructor is annotated.)
exprAnnotation :: AnnotatedExpr a -> a
exprAnnotation = head . toList -- toList from Data.Foldable; the annotation is the only ‘a’ field
-- Update annotations purely.
mapAnnotations
:: (a -> b)
-> AnnotatedExpr a -> AnnotatedExpr b
mapAnnotations = fmap
-- traverse, foldMap, &c.
If you want “doesn’t interfere”, use polymorphism: you can enforce that the evaluator can’t inspect the annotation type by being polymorphic over it. Pattern synonyms still let you match on these expressions conveniently:
pattern If :: Value -> [AnnotatedExpr a] -> AnnotatedExpr a
pattern If val exprs <- AnnotatedIf _anno val exprs
-- …
eval :: AnnotatedExpr a -> Value
eval expr = case expr of
  If val exprs     -> -- …
  VarDef name expr -> -- …
Unannotated terms aren’t your enemy: a term without source locations is no good for error reporting, but I think it’s still a good idea to make the pattern synonyms bidirectional for the convenience of constructing unannotated terms with a unit () annotation. (Or something equivalent, if you use e.g. Maybe Location as the annotation type.)
The reason is that this is quite convenient for writing unit tests, where you want to check the output, but want to use Eq instead of pattern matching, and don’t want to have to compare all the source locations in tests that aren’t concerned with them. Using the derived classes, void :: (Functor f) => f a -> f () strips out all the annotations on an AST.
import Control.Monad (void)
type BareExpr = AnnotatedExpr ()
-- One way to define bidirectional synonyms, so e.g.
-- ‘If’ can be used as either a pattern or a constructor.
pattern If :: Value -> [BareExpr] -> BareExpr
pattern If val exprs = AnnotatedIf () val exprs
-- …
stripAnnotations :: AnnotatedExpr a -> BareExpr
stripAnnotations = void
Equivalently, you could use GADTs / ExistentialQuantification to say data AnyExpr where { AnyExpr :: AnnotatedExpr a -> AnyExpr } / data AnyExpr = forall a. AnyExpr (AnnotatedExpr a); that way, the annotations have exactly as much information as (), but you don’t need to fmap over the entire tree with void in order to strip it, just apply the AnyExpr constructor to hide the type.
Finally, here are some brief introductions to a few AST typing solutions.
Annotate each AST node with a tag (e.g. a unique ID), then store all metadata like source locations, types, and whatever else, separately from the AST:
import Data.IntMap (IntMap)
-- More sophisticated / more strongly typed tags are possible.
newtype Tag = Tag Int
newtype TagMap a = TagMap (IntMap a)
data Expr
  = If !Tag Value [Expr]
  | VarDef !Tag String Expr
type Span = (Location, Location)
type SourceMap = TagMap Span
type CommentMap = TagMap (Span, String)
parse
  :: String -- Input
  -> Either ParseError
       ( Expr       -- Parsed expression
       , SourceMap  -- Source locations of tags
       , CommentMap -- Sideband for comments
       -- …
       )
The advantage is that you can very easily mix in arbitrary new types of annotations anywhere, without affecting the AST itself, and avoid rewriting the AST just to change annotations. You can think of the tree and annotation tables as a kind of database, where the tags are the “foreign keys” relating them. A downside is that you must be careful to maintain these tags when you do rewrite the AST.
I don’t know if this approach has an established name; I think of it as just “tagging” or a “tagged AST”.
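As a small sketch of the lookup side (lookupTag and reportError are illustrative helpers, not from any library):
import qualified Data.IntMap as IntMap

lookupTag :: Tag -> TagMap a -> Maybe a
lookupTag (Tag k) (TagMap m) = IntMap.lookup k m

-- Reporting an error for a node then needs only its tag:
reportError :: SourceMap -> Tag -> String -> String
reportError sources tag message = case lookupTag tag sources of
  Just ((line, column), _end) ->
    message ++ " at line " ++ show line ++ ", column " ++ show column
  Nothing -> message ++ " at unknown location"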
recursion-schemes and/or Data Types à la Carte (PDF): separate out the “recursive” part of an annotated expression tree from the “annotation” part, and use Fix to tie them back together, with Compose (or Cofree) to add annotations in the middle.
data ExprF e
  = IfF Value [e]
  | VarDefF String e
  -- …
  deriving (Foldable, Functor, Traversable, …)

-- Unannotated: Expr ~ ExprF (ExprF (ExprF (…)))
type Expr = Fix ExprF

-- With a location at each recursive step:
--
-- LocatedExpr ~ Located (ExprF (Located (ExprF (…))))
type LocatedExpr = Fix (Compose Located ExprF)

data Located a = Located Location a
  deriving (Foldable, Functor, Traversable, …)
  -- or: type Located = (,) Location
A distinct advantage is that you get a bunch of nice traversal stuff like cata for free-ish, so you can avoid having to write manual traversals over your AST over and over. A downside is that it adds some pattern clutter to clean up, as does the “à la carte” approach, but they do offer a lot of flexibility.
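As a minimal sketch of what that buys you (assuming the recursion-schemes package and the derived Functor instance above), here is a node count written once as an algebra:
{-# LANGUAGE LambdaCase #-}
import Data.Functor.Foldable (cata)

-- Counts every node of an unannotated tree.
size :: Expr -> Int
size = cata $ \case
  IfF _ exprs -> 1 + sum exprs
  VarDefF _ e -> 1 + e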
Trees That Grow (PDF) is overkill for just source locations, but in a serious compiler it’s quite helpful. If you expect to have more than one annotation type (such as inferred types or other analysis results) or an AST that changes over time, then you add a type parameter for the “compilation phase” (parsed, renamed, typechecked, desugared, &c.) and select field types or enable & disable constructors based on that index.
A really unfortunate downside of this is that you often have to rewrite the tree even in places where nothing has changed, because everything depends on the “phase”. An alternative that I use is to add one type parameter for each type of phase or annotation that can vary independently, e.g. data Expr annotation termVarName typeVarName, and abstract over that with type and pattern synonyms. This lets you update indices independently and still use classes like Functor and Bitraversable.
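A hedged sketch of that multi-parameter alternative (all names here are illustrative):
-- One index per independently varying concern.
data Expr ann name
  = Var ann name
  | Add ann (Expr ann name) (Expr ann name)

-- Phases become type synonyms over the indices:
type ParsedExpr  = Expr Location String -- names are raw strings
type RenamedExpr = Expr Location Unique -- names resolved to unique IDs (Unique is illustrative)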

Clarifying Data Constructor in Haskell

In the following:
data DataType a = Data a | Datum
I understand that data constructors are value-level functions. What we do above is define their types. They can be functions of various arities, or constants. That's fine. I'm OK with saying Datum constructs Datum. What is not explicit and clear to me here is the difference between the constructor function and what it produces. Please let me know if I am getting it right:
1 - a) Basically, writing Data a defines both a data structure and its constructor function (as in Scala or Java, where the class and the constructor usually have the same name)?
2 - b) So let me unpack and make an analogy. With Data a we are defining both a structure of objects (I don't want to say "class", because a class already implies a type, though maybe we could) and the constructor function (data constructor / value constructor), and the latter returns an object of that structure. Finally, the type of that structure of objects is given by the type constructor. An object structure, in a sense, is just a tag surrounding a bunch of values of some type. Is my understanding correct?
3 - c) Can I formally say:
Nullary data constructors represent constant values -> they return the constant value itself, whose type is given by the type constructor at the definition site.
Data constructors that take an argument represent a class of values, where the class is a tag? -> They return an infinite number of objects of that class, whose type is given by the type constructor at the definition site.
Another way of writing this:
data DataType a = Data a | Datum
is with generalised algebraic data type (GADT) syntax, using the GADTSyntax extension, which lets us specify the types of the constructors explicitly:
{-# LANGUAGE GADTSyntax #-}
data DataType a where
  Data :: a -> DataType a
  Datum :: DataType a
(The GADTs extension would work too; it would also allow us to specify constructors with different type arguments in the result, like DataType Int vs. DataType Bool, but that’s a more advanced topic, and we don’t need that functionality here.)
These are exactly the types you would see in GHCi if you asked for the types of the constructor functions with :type / :t:
> :{
| data DataType a where
|   Data :: a -> DataType a
|   Datum :: DataType a
| :}
> :type Data
Data :: a -> DataType a
> :t Datum
Datum :: DataType a
With ExplicitForAll we can also specify the scope of the type variables explicitly, and make it clearer that the a in the data definition is a separate variable from the a in the constructor definitions by also giving them different names:
data DataType a where
  Data :: forall b. b -> DataType b
  Datum :: forall c. DataType c
Some more examples of this notation with standard prelude types:
data Either a b where
  Left :: forall a b. a -> Either a b
  Right :: forall a b. b -> Either a b

data Maybe a where
  Nothing :: Maybe a
  Just :: a -> Maybe a

data Bool where
  False :: Bool
  True :: Bool

data Ordering where
  LT, EQ, GT :: Ordering -- Shorthand for repeated ‘:: Ordering’
I understand that data constructors are value-level functions. What we do above is define their types. They can be functions of various arities, or constants. That's fine. I'm OK with saying Datum constructs Datum. What is not explicit and clear to me here is the difference between the constructor function and what it produces.
Datum and Data are both “constructors” of DataType a values; neither Datum nor Data is a type! These are just “tags” that select between the possible varieties of a DataType a value.
What is produced is always a value of type DataType a for a given a; the constructor selects which “shape” it takes.
A rough analogue of this is a union in languages like C or C++, plus an enumeration for the “tag”. In pseudocode:
enum Tag {
  DataTag,
  DatumTag,
}
// A single anonymous field.
struct DataFields<A> {
  A field1;
}
// No fields.
struct DatumFields<A> {};
// A union of the possible field types.
union Fields<A> {
  DataFields<A> data;
  DatumFields<A> datum;
}
// A pair of a tag with the fields for that tag.
struct DataType<A> {
  Tag tag;
  Fields<A> fields;
}
The constructors are then just functions returning a value with the appropriate tag and fields. Pseudocode:
<A> DataType<A> newData(A x) {
  DataType<A> result;
  result.tag = DataTag;
  result.fields.data.field1 = x;
  return result;
}
<A> DataType<A> newDatum() {
  DataType<A> result;
  result.tag = DatumTag;
  // No fields.
  return result;
}
Unions are unsafe, since the tag and fields can get out of sync, but sum types are safe because they couple these together.
A pattern-match like this in Haskell:
case someDT of
  Datum -> f
  Data x -> g x
is a combination of testing the tag and extracting the fields. Again, in pseudocode:
if (someDT.tag == DatumTag) {
  f();
} else if (someDT.tag == DataTag) {
  var x = someDT.fields.data.field1;
  g(x);
}
Again this is coupled in Haskell to ensure that you can only ever access the fields if you have checked the tag by pattern-matching.
So, in answer to your questions:
1 - a) Basically, writing Data a defines both a data structure and its constructor function (as in Scala or Java, where the class and the constructor usually have the same name)?
Data a in your original code is not defining a data structure, in that Data is not a separate type from DataType a, it’s just one of the possible tags that a DataType a value may have. Internally, a value of type DataType Int is one of the following:
The tag for Data (in GHC, a pointer to an “info table” for the constructor), and a reference to a value of type Int.
x = Data (1 :: Int) :: DataType Int
+----------+----------------+ +---------+----------------+
x ---->| Data tag | pointer to Int |---->| Int tag | unboxed Int# 1 |
+----------+----------------+ +---------+----------------+
The tag for Datum, and no other fields.
y = Datum :: DataType Int
+-----------+
y ----> | Datum tag |
+-----------+
In a language with unions, the size of a union is the maximum of all its alternatives, since the type must support representing any of the alternatives with mutation. In Haskell, since values are immutable, they don’t require any extra “padding” since they can’t be changed.
It’s a similar situation for standard data types, e.g., a product or sum type:
(x :: X, y :: Y) :: (X, Y)
+---------+--------------+--------------+
| (,) tag | pointer to X | pointer to Y |
+---------+--------------+--------------+
Left (m :: M) :: Either M N
+-----------+--------------+
| Left tag | pointer to M |
+-----------+--------------+
Right (n :: N) :: Either M N
+-----------+--------------+
| Right tag | pointer to N |
+-----------+--------------+
2 - b) So let me unpack and make an analogy. With Data a we are defining both a structure of objects (I don't want to say "class", because a class already implies a type, though maybe we could) and the constructor function (data constructor / value constructor), and the latter returns an object of that structure. Finally, the type of that structure of objects is given by the type constructor. An object structure, in a sense, is just a tag surrounding a bunch of values of some type. Is my understanding correct?
This is sort of correct, but again, the constructors Data and Datum aren’t “data structures” by themselves. They’re just the names used to introduce (construct) and eliminate (match) values of type DataType a, for some type a that is chosen by the caller of the constructors to fill in the forall.
data DataType a = Data a | Datum says:
If some term e has type T, then the term Data e has type DataType T
Inversely, if some value of type DataType T matches the pattern Data x, then x has type T in the scope of the match (case branch or function equation)
The term Datum has type DataType T for any type T
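For example, in GHCi:
> :t Data True
Data True :: DataType Bool
> :t Data 'x'
Data 'x' :: DataType Char
> :t Datum
Datum :: DataType a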
3 - c) Can I formally say:
Nullary data constructors represent constant values -> they return the constant value itself, whose type is given by the type constructor at the definition site.
Data constructors that take an argument represent a class of values, where the class is a tag? -> They return an infinite number of objects of that class, whose type is given by the type constructor at the definition site.
Not exactly. A type constructor like DataType :: Type -> Type, Maybe :: Type -> Type, or Either :: Type -> Type -> Type, or [] :: Type -> Type (list), or a polymorphic data type, represents an “infinite” family of concrete types (Maybe Int, Maybe Char, Maybe (String -> String), …) but only in the same way that id :: forall a. a -> a represents an “infinite” family of functions (id :: Int -> Int, id :: Char -> Char, id :: String -> String, …).
That is, the type a here is a parameter filled in with an argument value given by the caller. Usually this is implicit, through type inference, but you can specify it explicitly with the TypeApplications extension:
-- Akin to: \ (a :: Type) -> \ (x :: a) -> x
id :: forall a. a -> a
id x = x
id @Int :: Int -> Int
id @Int 1 :: Int
Data :: forall a. a -> DataType a
Data @Char :: Char -> DataType Char
Data @Char 'x' :: DataType Char
The data constructors of each instantiation don’t really have anything to do with each other. There’s nothing in common between the instantiations Data :: Int -> DataType Int and Data :: Char -> DataType Char, apart from the fact that they share the same tag name.
Another way of thinking about this in Java terms is with the visitor pattern. DataType would be represented as a function that accepts a “DataType visitor”, and then the constructors don’t correspond to separate data types, they’re just the methods of the visitor which accept the fields and return some result. Writing the equivalent code in Java is a worthwhile exercise, but here it is in Haskell:
{-# LANGUAGE RankNTypes #-}
-- (Allows passing polymorphic functions as arguments.)
type DataType a
  = forall r.   -- A visitor with a generic result type
    r           -- With one “method” for the ‘Datum’ case (no fields)
    -> (a -> r) -- And one for the ‘Data’ case (one field)
    -> r        -- Returning the result

newData :: a -> DataType a
newData field = \ _visitDatum visitData -> visitData field

newDatum :: DataType a
newDatum = \ visitDatum _visitData -> visitDatum
Pattern-matching is simply running the visitor:
matchDT :: DataType a -> b -> (a -> b) -> b
matchDT dt visitDatum visitData = dt visitDatum visitData
-- Or: matchDT dt = dt
-- Or: matchDT = id
-- case someDT of { Datum -> f; Data x -> g x }
-- f :: r
-- g :: a -> r
-- someDT :: DataType a
-- :: forall r. r -> (a -> r) -> r
someDT f (\ x -> g x)
Similarly, in Haskell, data constructors are just the ways of introducing and eliminating values of a user-defined type.
What is not explicit and clear to me here is the difference between the constructor function and what it produces.
I'm having trouble following your question, but I think you are complicating things. I would suggest not thinking too deeply about the "constructor" terminology.
But hopefully the following helps:
Starting simple:
data DataType = Data Int | Datum
The above reads "Declare a new type named DataType, which has the possible values Datum or Data <some_number> (e.g. Data 42)".
So e.g. Datum is a value of type DataType.
Going back to your example with a type parameter, I want to point out what the syntax is doing:
data DataType a = Data a | Datum
     ^^^^^^^^ ^        ^          <- these appear in type signatures (type level)
                  ^^^^     ^^^^^  <- these appear in code (value level)
There's a bit of punning happening here. So, in the data declaration you might see "Data Int", and this mixes type-level and value-level names in a way that you wouldn't see in code. In code you'd see e.g. Data 42 or Data someVal.
I hope that helps a little...

The limit set of types with new data like `Tree a`

While exploring and studying the type system in Haskell, I've found some problems.
1) Let's consider a polymorphic type such as a binary tree:
data Tree a = Leaf a | Branch (Tree a) (Tree a) deriving Show
And, for example, I want to limit my considerations to only Tree Int, Tree Bool, and Tree Char. Of course, I can make such a new type:
data TreeIWant = T1 (Tree Int) | T2 (Tree Bool) | T3 (Tree Char) deriving Show
But would it be possible to make a new restricted type (for homogeneous trees) in a more elegant way, without new tags like T1, T2, T3 (perhaps with some advanced type extensions)?
2) My second question is about trees with heterogeneous values. I can build them in ordinary Haskell, i.e. I can define a new helper type containing tagged heterogeneous values:
data HeteroValues = H1 Int | H2 Bool | H3 Char deriving Show
and then make a tree with values of this type:
type TreeH = Tree HeteroValues
But would it be possible to make a new type (for heterogeneous trees) in a more elegant way, without new tags like H1, H2, H3 (perhaps with some advanced type extensions)?
I know about heterogeneous lists; perhaps this is the same question?
For question #2, it's easy to construct a "restricted" heterogeneous type without explicit tags using a GADT and a type class:
{-# LANGUAGE GADTs #-}
data Thing where
  T :: THING a => a -> Thing

class THING a
Now, declare THING instances for the things you want to allow:
instance THING Int
instance THING Bool
instance THING Char
and you can create Things and lists (or trees) of Things:
> t1 = T 'a' -- Char is okay
> t2 = T "hello" -- but String is not
... type error ...
> tl = [T (42 :: Int), T True, T 'x']
> tt = Branch (Leaf (T 'x')) (Leaf (T False))
>
In terms of the type names in your question, you have:
type HeteroValues = Thing
type TreeH = Tree Thing
You can use the same type class with a new GADT for question #1:
data ThingTree where
  TT :: THING a => Tree a -> ThingTree
and you have:
type TreeIWant = ThingTree
and you can do:
> tt1 = TT $ Branch (Leaf 'x') (Leaf 'y')
> tt2 = TT $ Branch (Leaf 'x') (Leaf False)
... type error ...
>
That's all well and good, until you try to use any of the values you've constructed. For example, if you wanted to write a function to extract a Bool from a possibly boolish Thing:
maybeBool :: Thing -> Maybe Bool
maybeBool (T x) = ...
you'd find yourself stuck here. Without a "tag" of some kind, there's no way of determining if x is a Bool, Int, or Char.
Actually, though, you do have an implicit tag available, namely the THING type class dictionary for x. So, you can write:
maybeBool :: Thing -> Maybe Bool
maybeBool (T x) = maybeBool' x
and then implement maybeBool' in your type class:
class THING a where
  maybeBool' :: a -> Maybe Bool

instance THING Int where
  maybeBool' _ = Nothing

instance THING Bool where
  maybeBool' = Just

instance THING Char where
  maybeBool' _ = Nothing
and you're golden!
Of course, if you'd used explicit tags:
data Thing = T_Int Int | T_Bool Bool | T_Char Char
then you could skip the type class and write:
maybeBool :: Thing -> Maybe Bool
maybeBool (T_Bool x) = Just x
maybeBool _ = Nothing
In the end, it turns out that the best Haskell representation of an algebraic sum of three types is just an algebraic sum of three types:
data Thing = T_Int Int | T_Bool Bool | T_Char Char
Trying to avoid the need for explicit tags will probably lead to a lot of inelegant boilerplate elsewhere.
Update: As @DanielWagner pointed out in a comment, you can use Data.Typeable in place of this boilerplate (effectively, have GHC generate a lot of boilerplate for you), so you can write:
import Data.Typeable
data Thing where
  T :: THING a => a -> Thing

class Typeable a => THING a
instance THING Int
instance THING Bool
instance THING Char
maybeBool :: Thing -> Maybe Bool
maybeBool = cast
This perhaps seems "elegant" at first, but if you try this approach in real code, I think you'll regret losing the ability to pattern match on Thing constructors at usage sites (and so having to substitute chains of casts and/or comparisons of TypeReps).
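To illustrate that last point, here is a hedged sketch of what a usage site looks like with the Typeable approach (describe is an illustrative function, assuming the Thing above):
{-# LANGUAGE ScopedTypeVariables #-}
import Data.Typeable (cast)

-- One pattern match becomes a chain of casts.
describe :: Thing -> String
describe (T x)
  | Just (n :: Int)  <- cast x = "an Int: "  ++ show n
  | Just (b :: Bool) <- cast x = "a Bool: " ++ show b
  | Just (c :: Char) <- cast x = "a Char: " ++ show c
  | otherwise                  = "impossible, given the THING instances above"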

Recursion schemes with several types

Right now, I've got an AST for expressions that's polymorphic over the type of recursion:
data Expr a = Const Int
            | Add a a
This has been incredibly useful by allowing me to use a type for plain recursion (Fix Expr) and another one when I need to attach extra information (Cofree Expr ann).
The issue occurs when I want to introduce another type into this recursion scheme:
data Stmt a = Compound [a]
            | Print (Expr ?)
I'm not sure what to put for the Expr term without introducing additional type variables and breaking compatibility with all the general functions I've already written.
Can this be done, and if so, is it a useful pattern?
The recursion-schemes perspective is to view recursive types as fixed points of functors. The type of expressions is the fixed point of the following functor:
data ExprF expr = Const Int
                | Add expr expr
The point of changing the name of the variable is to make explicit the fact that it is a placeholder for the actual type of expressions, that would otherwise be defined as:
data Expr = Const Int | Add Expr Expr
In Stmt, there are two recursive types, Expr and Stmt itself. So we put two holes/unknowns.
data StmtF expr stmt = Compound [stmt]
                     | Print expr
When we take a fixpoint with Fix or Cofree, we are now solving a system of two equations (one for Expr, one for Stmt), and that comes with some amount of boilerplate.
Instead of applying Fix or Cofree directly, we generalize, taking those fixpoint combinators (Fix, Cofree, Free) as parameters in the construction of expressions and statements:
type Expr_ f = f ExprF
type Stmt_ f = f (StmtF (Expr_ f))
Now we can say Expr_ Fix or Stmt_ Fix for the unannotated trees, and Expr_ (Flip Cofree ann), Stmt_ (Flip Cofree ann). Unfortunately we have to pay another LOC fee to make the kinds match, and the types get ever more convoluted.
newtype Flip cofree a f = Flip (cofree f a)
(This also assumes we want to use the same Fix or Cofree everywhere at the same time.)
Another representation to consider (called HKD, for higher-kinded data, nowadays):
data Expr f = Const Int
            | Add (f (Expr f)) (f (Expr f))
data Stmt f = Compound [f (Stmt f)]
            | Print (f (Expr f))
where you only abstract from annotation/no-annotation (f = Identity or (,) ann) and not from recursion.
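A minimal sketch of those two instantiations (Identity and (,) are from base):
import Data.Functor.Identity (Identity (..))

type PlainExpr   = Expr Identity
type AnnExpr ann = Expr ((,) ann)

-- ‘1 + 2’, plain and then annotated with subtree sizes:
plain :: PlainExpr
plain = Add (Identity (Const 1)) (Identity (Const 2))

annotated :: AnnExpr Int
annotated = Add (1, Const 1) (1, Const 2)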

Evolving data structure

I'm trying to write a compiler for a C-like language in Haskell. The compiler progresses by transforming an AST. The first pass parses the input to create the AST, tying a knot with the symbol table to allow symbols to be located before they have been defined without the need for forward references.
The AST contains information about types and expressions, and there can be connections between them; e.g. sizeof(T) is an expression that depends on a type T, and T[e] is an array type that depends on a constant expression e.
Types and expressions are represented by Haskell data types like so:
data Type = TypeInt Id
          | TypePointer Id Type            -- target type
          | TypeArray Id Type Expr         -- elt type, elt count
          | TypeStruct Id [(String, Type)] -- [(field name, field type)]
          | TypeOf Id Expr
          | TypeDef Id Type

data Expr = ExprInt Int              -- literal int
          | ExprVar Var              -- variable
          | ExprSizeof Type
          | ExprUnop Unop Expr
          | ExprBinop Binop Expr Expr
          | ExprField Bool Expr String -- Bool gives true=>s.field, false=>p->field
Where Unop includes operators like address-of (&), and dereference (*), and Binop includes operators like plus (+), and times (*), etc...
Note that each type is assigned a unique Id. This is used to construct a type dependency graph in order to detect cycles which lead to infinite types. Once we are sure that there are no cycles in the type graph, it is safe to apply recursive functions over them without getting into an infinite loop.
The next step is to determine the size of each type, assign offsets to struct fields, and replace ExprFields with pointer arithmetic. In doing so, we can determine the type of expressions, and eliminate ExprSizeofs, ExprFields, TypeDefs, and TypeOfs from the type graph, so our types and expressions have evolved, and now look more like this:
data Type' = TypeInt Id
           | TypePointer Id Type'
           | TypeArray Id Type' Int               -- constant expression has been eval'd
           | TypeStruct Id [(String, Int, Type')] -- field offset has been determined

data Expr' = ExprInt Type' Int
           | ExprVar Type' Var
           | ExprUnop Type' Unop Expr'
           | ExprBinop Type' Binop Expr' Expr'
Note that we've eliminated some of the data constructors, and slightly changed some of the others. In particular, Type' no longer contains any Expr', and every Expr' has determined its Type'.
So, finally, the question: Is it better to create two almost identical sets of data types, or try to unify them into a single data type?
Keeping two separate data types makes it explicit that certain constructors can no longer appear. However, the function which performs constant folding to evaluate constant expressions will have type:
foldConstants :: Expr -> Either String Expr'
But this means that we cannot perform constant folding later on with Expr's (imagine some pass that manipulates an Expr', and wants to fold any emerged constant expressions). We would need another implementation:
foldConstants' :: Expr' -> Either String Expr'
On the other hand, keeping a single type would solve the constant folding problem, but would prevent the type checker from enforcing static invariants.
Furthermore, what do we put into the unknown fields (like field offsets, array sizes, and expression types) during the first pass? We could plug the holes with undefined, or error "*hole*", but that feels like a disaster waiting to happen (like NULL pointers that you can't even check). We could change the unknown fields into Maybes, and plug the holes with Nothing (like NULL pointers that we can check), but it would be annoying in subsequent passes to have to keep pulling values out of Maybes that are always going to be Justs.
Hopefully someone with more experience will have a more polished, battle-tested and ready answer, but here's my shot at it.
You can have your pie and eat part of it too at relatively little cost with GADTs:
{-# LANGUAGE GADTs #-}
data P0 -- phase zero
data P1 -- phase one
data Type p where
  TypeInt :: Id -> Type p
  TypePointer :: Id -> Type p -> Type p -- target type
  TypeArray :: Id -> Type p -> Expr p -> Type p -- elt type, elt count
  TypeStruct :: Id -> [(String, Type p)] -> Type p -- [(field name, field type)]
  TypeOf :: Id -> Expr P0 -> Type P0
  TypeDef :: Id -> Type P0 -> Type P0

data Expr p where
  ExprInt :: Int -> Expr p -- literal int
  ExprVar :: Var -> Expr p -- variable
  ExprSizeof :: Type P0 -> Expr P0
  ExprUnop :: Unop -> Expr p -> Expr p
  ExprBinop :: Binop -> Expr p -> Expr p -> Expr p
  ExprField :: Bool -> Expr P0 -> String -> Expr P0 -- Bool gives true=>s.field, false=>p->field
Here the things we've changed are:
The datatypes now use GADT syntax. This means that constructors are declared using their type signatures. data Foo = Bar Int Char becomes data Foo where Bar :: Int -> Char -> Foo (aside from syntax, the two are completely equivalent).
We have added a type variable to both Type and Expr. This is a so-called phantom type variable: there is no actual data stored that is of type p, it is used only to enforce invariants in the type system.
We've declared dummy types to represent the phases before and after the transformation: phase zero and phase one. (In a more elaborate system with multiple phases we could potentially use type-level numbers to denote them.)
GADTs let us store type-level invariants in the data structure. Here we have two of them. The first is that recursive positions must be of the same phase as the structure containing them. For example, looking at TypePointer :: Id -> Type p -> Type p, you pass a Type p to the TypePointer constructor and get a Type p as result, and those ps must be the same type. (If we wanted to allow different types, we could use p and q.)
The second is that we enforce the fact that some constructors can only be used in the first phase. Most of the constructors are polymorphic in the phantom type variable p, but some of them require that it be P0. This means that those constructors can only be used to construct values of type Type P0 or Expr P0, not any other phase.
GADTs work in two directions. The first is that if you have a function which returns a Type P1, and try to use one of the constructors which returns a Type P0 to construct it, you will get a type error. This is what's called "correct by construction": it's statically impossible to construct an invalid structure (provided you can encode all of the relevant invariants in the type system). The flip side of it is that if you have a value of Type P1, you can be sure that it was constructed correctly: the TypeOf and TypeDef constructors can't have been used (in fact, the compiler will complain if you try to pattern match on them), and any recursive positions must also be of phase P1. Essentially, when you construct a GADT you store evidence that the type constraints are satisfied, and when you pattern match on it you retrieve that evidence and can take advantage of it.
That was the easy part. Unfortunately, we have some differences between the two types beyond just which constructors are allowed: some of the constructor arguments are different between the phases, and some are only present in the post-transformation phase. We can again use GADTs to encode this, but it's not as low-cost and elegant. One solution would be to duplicate all of the constructors which are different, and have one each for P0 and P1. But duplication isn't nice. We can try to do it more fine-grained:
-- a couple of helper types
-- here I take advantage of the fact that of the things only present in one phase,
-- they're always present in P1 and not P0, and not vice versa
data MaybeP p a where
  NothingP :: MaybeP P0 a
  JustP :: a -> MaybeP P1 a

data EitherP p a b where
  LeftP :: a -> EitherP P0 a b
  RightP :: b -> EitherP P1 a b

data Type p where
  TypeInt :: Id -> Type p
  TypePointer :: Id -> Type p -> Type p
  TypeArray :: Id -> Type p -> EitherP p (Expr p) Int -> Type p
  TypeStruct :: Id -> [(String, MaybeP p Int, Type p)] -> Type p
  TypeOf :: Id -> Expr P0 -> Type P0
  TypeDef :: Id -> Type P0 -> Type P0

-- for brevity
type MaybeType p = MaybeP p (Type p)

data Expr p where
  ExprInt :: MaybeType p -> Int -> Expr p
  ExprVar :: MaybeType p -> Var -> Expr p
  ExprSizeof :: Type P0 -> Expr P0
  ExprUnop :: MaybeType p -> Unop -> Expr p -> Expr p
  ExprBinop :: MaybeType p -> Binop -> Expr p -> Expr p -> Expr p
  ExprField :: Bool -> Expr P0 -> String -> Expr P0
Here with some helper types we've enforced the fact that some constructor arguments can only be present in phase one (MaybeP) and some are different between the two phases (EitherP). While this makes us completely type-safe, it feels a bit ad-hoc and we still have to wrap things in and out of the MaybePs and EitherPs all the time. I don't know if there's a better solution in that respect. Complete type safety is something, though: we could write fromJustP :: MaybeP P1 a -> a and be sure that it is completely and utterly safe.
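That extractor is a one-liner; the single equation is exhaustive because NothingP can only build a value of type MaybeP P0 a:
fromJustP :: MaybeP P1 a -> a
fromJustP (JustP a) = a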
Update: An alternative is to use TypeFamilies:
data Proxy a = Proxy

class Phase p where
  type MaybeP p a
  type EitherP p a b
  maybeP :: Proxy p -> MaybeP p a -> Maybe a
  eitherP :: Proxy p -> EitherP p a b -> Either a b

phase :: Proxy p
phase = Proxy

instance Phase P0 where
  type MaybeP P0 a = ()
  type EitherP P0 a b = a
  maybeP _ _ = Nothing
  eitherP _ a = Left a

instance Phase P1 where
  type MaybeP P1 a = a
  type EitherP P1 a b = b
  maybeP _ a = Just a
  eitherP _ a = Right a
The only change to Expr and Type relative the previous version is that the constructors need to have an added Phase p constraint, e.g. ExprInt :: Phase p => MaybeType p -> Int -> Expr p.
Here if the type of p in a Type or Expr is known, you can statically know whether the MaybePs will be () or the given type and which of the types the EitherPs are, and can use them directly as that type without explicit unwrapping. When p is unknown you can use maybeP and eitherP from the Phase class to find out what they are. (The Proxy arguments are necessary, because otherwise the compiler wouldn't have any way to tell which phase you meant.) This is analogous to the GADT version where, if p is known, you can be sure of what MaybeP and EitherP contains, while otherwise you have to pattern match both possibilities. This solution isn't perfect either in the respect that the 'missing' arguments become () rather than disappearing entirely.
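For instance, when the phase is statically known the families reduce, and the field can be used directly (a small sketch against the types above; arraySize is illustrative):
-- At P1, EitherP P1 (Expr P1) Int reduces to Int, so no unwrapping is needed.
arraySize :: Type P1 -> Maybe Int
arraySize (TypeArray _ _ n) = Just n
arraySize _                 = Nothing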
Constructing Exprs and Types also seems to be broadly similar between the two versions: if the value you're constructing has anything phase-specific about it then it must specify that phase in its type. The trouble seems to come when you want to write a function polymorphic in p but still handling phase-specific parts. With GADTs this is straightforward:
asdf :: MaybeP p a -> MaybeP p a
asdf NothingP = NothingP
asdf (JustP a) = JustP a
Note that if I had merely written asdf _ = NothingP the compiler would have complained, because the type of the output wouldn't be guaranteed to be the same as the input. By pattern matching we can figure out what type the input was and return a result of the same type.
With the TypeFamilies version, though, this is a lot harder. Just using maybeP and the resulting Maybe you can't prove anything to the compiler about types. You can get part of the way there by, instead of having maybeP and eitherP return Maybe and Either, making them deconstructor functions like maybe and either which also make a type equality available:
maybeP :: Proxy p -> (p ~ P0 => r) -> (p ~ P1 => a -> r) -> MaybeP p a -> r
eitherP :: Proxy p -> (p ~ P0 => a -> r) -> (p ~ P1 => b -> r) -> EitherP p a b -> r
(Note that we need Rank2Types for this, and note also that these are essentially the CPS-transformed versions of the GADT versions of MaybeP and EitherP.)
Then we can write:
asdf :: Phase p => MaybeP p a -> MaybeP p a
asdf a = maybeP phase () id a
But that's still not enough, because GHC says:
data.hs:116:29:
    Could not deduce (MaybeP p a ~ MaybeP p0 a0)
    from the context (Phase p)
      bound by the type signature for
                 asdf :: Phase p => MaybeP p a -> MaybeP p a
      at data.hs:116:1-29
    NB: `MaybeP' is a type function, and may not be injective
    In the fourth argument of `maybeP', namely `a'
    In the expression: maybeP phase () id a
    In an equation for `asdf': asdf a = maybeP phase () id a
Maybe you could solve this with a type signature somewhere, but at that point it seems like more bother than it's worth. So pending further information from someone else I'm going to recommend using the GADT version, being the simpler and more robust one, at the cost of a little syntactic noise.
Update again: The problem here was that because MaybeP p a is a type function and there is no other information to go by, GHC has no way to know what p and a should be. If I pass in a Proxy p and use that instead of phase that solves p, but a is still unknown.
There's no ideal solution to this problem, since each one has different pros and cons.
I would personally go with using a single data type "tree" and add separate data constructors for things that need to be differentiated. I.e.:
data Type
= ...
| TypeArray Id Type Expr
| TypeResolvedArray Id Type Int
| ...
This has the advantage that you can run the same phase multiple times on the same tree, as you say, but the reasoning goes deeper than that. Let's say you are implementing a syntax element that generates more AST (something like include, or a C++-template kind of deal) and it can depend on constant exprs like your TypeArray, so you can't evaluate it in the first iteration. With the unified data type approach, you can just insert the new AST into your existing tree, and not only can you run the same phases as before on that tree directly, but you will get caching for free: if the new AST references an array by using sizeof(typeof(myarr)) or something, you don't have to determine the constant size of myarr again, because its type is already a TypeResolvedArray from your previous resolution phase.
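A hedged sketch of such an idempotent pass over the unified type (evalConstExpr is a hypothetical constant evaluator; only the array cases are shown):
resolveArrays :: Type -> Either String Type
resolveArrays (TypeArray i t e) =
  TypeResolvedArray i <$> resolveArrays t <*> evalConstExpr e
resolveArrays (TypeResolvedArray i t n) =
  -- Already resolved: re-running the pass leaves it unchanged.
  TypeResolvedArray i <$> resolveArrays t <*> pure n
resolveArrays t = pure t -- the other constructors would recurse similarly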
You could use a different representation when you are past all of your compilation phases, and it is time to interpret the code (or something); then you are certain of the fact that no more AST changes will be necessary, and a more streamlined representation might be a good idea.
By the way, you should use Data.Word.Word instead of Data.Int.Int for array sizes. It is such a common error in C to use ints for indexing arrays, while C pointers actually are unsigned. Please don't make this mistake in your language, too, unless you really want to support arrays with negative sizes.
