Restrictions of unboxed types

I wonder why unboxed types in Haskell have these restrictions:
You cannot define a newtype for an unboxed type:
newtype Vec = Vec (# Float#, Float# #)
but you can define a type synonym:
type Vec = (# Float#, Float# #)
Type families can't return unboxed type:
type family Unbox (a :: *) :: # where
  Unbox Int = Int#
  Unbox Word = Word#
  Unbox Float = Float#
  Unbox Double = Double#
  Unbox Char = Char#
Are there some fundamental reasons behind this, or is it just that no one has asked for these features?

Parametric polymorphism in Haskell relies on the fact that all values of types t :: * are uniformly represented as a pointer to a runtime object. Thus, the same machine code works for all instantiations of polymorphic values.
Contrast this with polymorphic functions in Rust or C++. For example, the identity function there still has a type analogous to forall a. a -> a, but since values of different a types may have different sizes, the compilers have to generate different code for each instantiation. This also means that we can't pass polymorphic functions around in runtime boxes:
data Id = Id (forall a. a -> a)
since such a function would have to work correctly for arbitrary-sized objects. Supporting this feature would require some additional infrastructure; for example, we could require that a runtime forall a. a -> a function take extra implicit arguments that carry information about the size and constructors/destructors of a values.
Now, the problem with newtype Vec = Vec (# Float#, Float# #) is that even though Vec has kind *, runtime code that expects values of some t :: * can't handle it. It's a stack-allocated pair of floats, not a pointer to a Haskell object, and passing it to code expecting Haskell objects would result in segfaults or errors.
In general (# a, b #) isn't necessarily pointer-sized, so we can't copy it into pointer-sized data fields.
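Note that an ordinary data declaration with unboxed fields is fine, because the resulting value is itself a pointer to a heap object again; only the fields are unboxed. A small sketch (requires MagicHash):

{-# LANGUAGE MagicHash #-}
import GHC.Exts (Float#, plusFloat#)

-- Allowed: Vec is a normal lifted type of kind *, represented by a
-- pointer to a heap object whose payload is two unboxed floats.
data Vec = Vec Float# Float#

addX :: Vec -> Vec -> Float#
addX (Vec x1 _) (Vec x2 _) = plusFloat# x1 x2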
Type families returning # types are disallowed for related reasons. Consider the following:
type family Foo (a :: *) :: # where
  Foo Int = Int#
  Foo a = (# Int#, Int# #)
data Box = forall (a :: *). Box (Foo a)
Our Box is not representable at runtime, since Foo a has different sizes for different a-s. Generally, polymorphism over # would require generating different code for different instantiations, as in Rust, but this interacts badly with regular parametric polymorphism and makes runtime representation of polymorphic values difficult, so GHC doesn't bother with any of this.
(Not saying though that a usable implementation couldn't possibly be devised)

A newtype would allow one to define class instances
instance C Vec where ...
which cannot be defined for unboxed tuples. Type synonyms, by contrast, do not offer any such functionality.
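For boxed types, this is exactly what newtype buys you; a quick sketch (the class C here is made up for illustration):

class C a where
  describe :: a -> String

newtype Meters = Meters Double

-- Fine: Meters is a distinct type, so it can carry its own instance
-- without clashing with any instance for Double.
instance C Meters where
  describe (Meters d) = show d ++ " m"

-- A type synonym introduces no new type, so an instance for
-- type MetersSyn = Double would really just be an instance for Double.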
Also, Vec would not be a boxed type. This means that you could no longer instantiate type variables with Vec in general, unless their kind allowed it; for instance, [Vec] should be disallowed. The compiler would have to keep track of "regular" newtypes and "unboxed" newtypes in some way. The only benefit, I think, would be allowing the data constructor Vec to wrap unboxed values at compile time (since it is removed at runtime). That would probably not be useful enough to justify the necessary changes to the type inference engine.

Related

Clarification of Terms around Haskell Type system

The type system in Haskell seems to be very important, and I wanted to clarify some terms revolving around the Haskell type system.
Some type classes
Functor
Applicative
Monad
After using :info I found that Functor is a type class, Applicative is a type class with a => Functor constraint (deriving?), and Monad likewise "derives" the Applicative type class.
I've read that Maybe is a Monad, does that mean Maybe is also Applicative and Functor?
-> operator
When I define a type
data Maybe a = Just a | Nothing
and check :t Just I get Just :: a -> Maybe a. How to read this -> operator?
It confuses me, because for functions a -> b means evaluating an a to a b (so Just sort of returns a Maybe). I tend to read it with left-to-right association, but does it turn around when defining types?
The term type is used in ambiguous ways: Type, Type Class, Type Constructor, Concrete Type, etc. I would like to know exactly what they mean.
Indeed the word “type” is used in somewhat ambiguous ways.
The perhaps most practical way to look at it is that a type is just a set of values. For example, Bool is the finite set containing the values True and False. Mathematically, there are subtle differences between the concepts of set and type, but they aren't really important for a programmer to worry about. But you should in general consider the sets to be potentially infinite; for example, Integer contains arbitrarily big numbers.
The most obvious way to define a type is with a data declaration, which in the simplest case just lists all the values:
data Colour = Red | Green | Blue
There we have a type which, as a set, contains three values.
Concrete type is basically what we say to make it clear that we mean the above: a particular type that corresponds to a set of values. Bool is a concrete type, that can easily be understood as a data definition, but also String, Maybe Integer and Double -> IO String are concrete types, though they don't correspond to any single data declaration.
What a concrete type can't have is type variables†, nor can it be an incompletely applied type constructor. For example, Maybe is not a concrete type.
So what is a type constructor? It's the type-level analogue to value constructors. What we mean mathematically by “constructor” in Haskell is an injective function, i.e. a function f where if you're given f(x) you can clearly identify what was x. Furthermore, any different constructors are assumed to have disjoint ranges, which means you can also identify f.‡
Just is an example of a value constructor, but it complicates the discussion that it also has a type parameter. Let's consider a simplified version:
data MaybeInt = JustI Int | NothingI
Now we have
JustI :: Int -> MaybeInt
That's how JustI is a function. Like any function of the same signature, it can be applied to argument values of the right type; for example, you can write JustI 5. What it means for this function to be injective is that I can define a variable, say,
quoxy :: MaybeInt
quoxy = JustI 9328
and then I can pattern match with the JustI constructor:
> case quoxy of { JustI n -> print n }
9328
This would not be possible with a general function of the same signature:
foo :: Int -> MaybeInt
foo i = JustI $ negate i
> case quoxy of { foo n -> print n }
<interactive>:5:17: error: Parse error in pattern: foo
Note that constructors can be nullary, in which case the injective property is meaningless because there is no contained data / arguments of the injective function. Nothing and True are examples of nullary constructors.
Type constructors are the same idea as value constructors: type-level functions that can be pattern-matched. Any type-name defined with data is a type constructor, for example Bool, Colour and Maybe are all type constructors. Bool and Colour are nullary, but Maybe is a unary type constructor: it takes a type argument and only the result is then a concrete type.
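You can check these kinds in GHCi, for example:
> :kind Bool
Bool :: *
> :kind Maybe
Maybe :: * -> *
> :kind Maybe Int
Maybe Int :: *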
So unlike value-level functions, type-level functions are, by default, type constructors. There are also type-level functions that aren't constructors, but they require the -XTypeFamilies extension.
A type class may be understood as a set of types, in the same vein as a type can be seen as a set of values. This is not quite accurate (it's closer to the truth to say a class is a set of type constructors), but again it's not that useful to ponder the mathematical details; better to look at examples.
There are two main differences between type-as-set-of-values and class-as-set-of-types:
How you define the “elements”: when writing a data declaration, you need to immediately describe what values are allowed. By contrast, a class is defined “empty”, and then the instances are defined later on, possibly in a different module.
How the elements are used. A data type basically enumerates all the values so they can be identified again. Classes meanwhile aren't generally concerned with identifying types, rather they specify properties that the element-types fulfill. These properties come in the form of methods of a class. For example, the instances of the Num class are types that have the property that you can add elements together.
You could say, Haskell is statically typed on the value level (fixed sets of values in each type), but duck-typed on the type level (classes just require that somebody somewhere implements the necessary methods).
A simplified version of the Num example:
class Num a where
  (+) :: a -> a -> a

instance Num Int where
  0 + x = x
  x + y = ...
If the + operator weren't already defined in the prelude, you would now be able to use it with Int numbers. Then later on, perhaps in a different module, you could also make it usable with new, custom number types:
data MyNumberType = BinDigits [Bool]
instance Num MyNumberType where
  BinDigits [] + BinDigits l = BinDigits l
  BinDigits (False:ds) + BinDigits (False:es)
    = BinDigits (False : ...)
Unlike Num, the Functor...Monad type classes are not classes of types, but of 1-ary type constructors. I.e. every functor is a type constructor taking one argument to make it a concrete type. For instance, recall that Maybe is a 1-ary type constructor.
class Functor f where
  fmap :: (a->b) -> f a -> f b

instance Functor Maybe where
  fmap f (Just a) = Just (f a)
  fmap _ Nothing = Nothing
As you have concluded yourself, Applicative is a subclass of Functor. D being a subclass of C means basically that D is a subset of the set of type constructors in C. Therefore, yes, if Maybe is an instance of Monad it also is an instance of Functor.
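For illustration, here is roughly what the standard Maybe instances look like (a simplified sketch of the Prelude definitions):

instance Applicative Maybe where
  pure = Just
  Just f <*> Just x = Just (f x)
  _      <*> _      = Nothing

instance Monad Maybe where
  Just x  >>= f = f x
  Nothing >>= _ = Nothing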
†That's not quite true: if you consider the universal quantifier explicitly as part of the type, then a concrete type can contain variables. This is a bit of an advanced subject though.
‡This is not guaranteed to be true if the -XPatternSynonyms extension is used.

Clarification on Existential Types in Haskell

I am trying to understand Existential types in Haskell and came across a PDF http://www.ii.uni.wroc.pl/~dabi/courses/ZPF15/rlasocha/prezentacja.pdf
Please correct the understandings below that I have gathered so far.
Existential types don't seem to be interested in the type they contain; pattern matching on them says there exists some type, but we don't know what type it is until and unless we use Typeable or Data.
We use them when we want to hide types (e.g., for heterogeneous lists) or when we don't really know what the types are at compile time.
GADTs provide a clearer and better syntax for code using existential types by providing implicit foralls.
My Doubts
On page 20 of the above PDF it is mentioned, for the code below, that it is impossible for a function to demand a specific Buffer. Why is that? When I am drafting a function I know exactly what kind of buffer I am going to use, even though I may not know what data I am going to put into it.
What's wrong with having :: Worker MemoryBuffer Int? If they really want to abstract over Buffer they can have a sum type data Buffer = MemoryBuffer | NetBuffer | RandomBuffer and a type like :: Worker Buffer Int
data Worker x = forall b. Buffer b => Worker {buffer :: b, input :: x}
data MemoryBuffer = MemoryBuffer
memoryWorker = Worker MemoryBuffer (1 :: Int)
memoryWorker :: Worker Int
As Haskell is a full-type-erasure language like C, how does it know at runtime which function to call? Is it something like maintaining a bit of information and passing around a huge v-table of functions, with the right one figured out from the v-table at runtime? If so, what sort of information does it store?
GADTs provide a clearer and better syntax for code using existential types by providing implicit foralls.
I think there's general agreement that the GADT syntax is better. I wouldn't say that it's because GADTs provide implicit foralls, but rather because the original syntax, enabled with the ExistentialQuantification extension, is potentially confusing/misleading. That syntax, of course, looks like:
data SomeType = forall a. SomeType a
or with a constraint:
data SomeShowableType = forall a. Show a => SomeShowableType a
and I think the consensus is that the use of the keyword forall here allows the type to be easily confused with the completely different type:
data AnyType = AnyType (forall a. a) -- need RankNTypes extension
A better syntax might have used a separate exists keyword, so you'd write:
data SomeType = SomeType (exists a. a) -- not valid GHC syntax
The GADT syntax, whether used with implicit or explicit forall, is more uniform across these types, and seems to be easier to understand. Even with an explicit forall, the following definition gets across the idea that you can take a value of any type a and put it inside a monomorphic SomeType':
data SomeType' where
  SomeType' :: forall a. (a -> SomeType') -- parentheses optional
and it's easy to see and understand the difference between that type and:
data AnyType' where
  AnyType' :: (forall a. a) -> AnyType'
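To make the contrast concrete (a sketch): SomeType' accepts a value of any particular type, whereas AnyType' demands a single value that is itself polymorphic in a, and bottom is essentially the only such value:

someInt :: SomeType'
someInt = SomeType' (42 :: Int)  -- fine: any concrete type will do

anyBottom :: AnyType'
anyBottom = AnyType' undefined   -- forall a. a has no non-bottom inhabitants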
Existential types don't seem to be interested in the type they contain; pattern matching on them says there exists some type, but we don't know what type it is until and unless we use Typeable or Data.
We use them when we want to hide types (e.g., for heterogeneous lists) or when we don't really know what the types are at compile time.
I guess these aren't too far off, though you don't have to use Typeable or Data to use existential types. I think it would be more accurate to say an existential type provides a well-typed "box" around an unspecified type. The box does "hide" the type in a sense, which allows you to make a heterogeneous list of such boxes, ignoring the types they contain. It turns out that an unconstrained existential, like SomeType' above, is pretty useless, but a constrained type:
data SomeShowableType' where
  SomeShowableType' :: forall a. (Show a) => a -> SomeShowableType'
allows you to pattern match to peek inside the "box" and make the type class facilities available:
showIt :: SomeShowableType' -> String
showIt (SomeShowableType' x) = show x
Note that this works for any type class, not just Typeable or Data.
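For example, such boxes can be collected into a heterogeneous list and shown uniformly (a sketch using the types above):

showables :: [SomeShowableType']
showables = [SomeShowableType' (42 :: Int), SomeShowableType' "hello", SomeShowableType' True]

-- map showIt showables == ["42","\"hello\"","True"]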
With regard to your confusion about page 20 of the slide deck, the author is saying that it's impossible for a function that takes an existential Worker to demand a Worker having a particular Buffer instance. You can write a function to create a Worker using a particular type of Buffer, like MemoryBuffer:
class Buffer b where
  output :: String -> b -> IO ()
data Worker x = forall b. Buffer b => Worker {buffer :: b, input :: x}
data MemoryBuffer = MemoryBuffer
instance Buffer MemoryBuffer
memoryWorker = Worker MemoryBuffer (1 :: Int)
memoryWorker :: Worker Int
but if you write a function that takes a Worker as argument, it can only use the general Buffer type class facilities (e.g., the function output):
doWork :: Worker Int -> IO ()
doWork (Worker b x) = output (show x) b
It can't try to demand that b be a particular type of buffer, even via pattern matching:
doWorkBroken :: Worker Int -> IO ()
doWorkBroken (Worker b x) = case b of
  MemoryBuffer -> error "try this" -- type error
  _ -> error "try that"
Finally, runtime information about existential types is made available through implicit "dictionary" arguments for the type classes that are involved. The Worker type above, in addition to having fields for the buffer and input, also has an invisible implicit field that points to the Buffer dictionary (somewhat like a v-table, though it's hardly huge, as it just contains a pointer to the appropriate output function).
Internally, the type class Buffer is represented as a data type with function fields, and instances are "dictionaries" of this type:
data Buffer' b = Buffer' { output' :: String -> b -> IO () }
dBuffer_MemoryBuffer :: Buffer' MemoryBuffer
dBuffer_MemoryBuffer = Buffer' { output' = undefined }
The existential type has a hidden field for this dictionary:
data Worker' x = forall b. Worker' { dBuffer :: Buffer' b, buffer' :: b, input' :: x }
and a function like doWork that operates on existential Worker' values is implemented as:
doWork' :: Worker' Int -> IO ()
doWork' (Worker' dBuf b x) = output' dBuf (show x) b
For a type class with only one function, the dictionary is actually optimized to a newtype, so in this example, the existential Worker type includes a hidden field that consists of a function pointer to the output function for the buffer, and that's the only runtime information needed by doWork.
On page 20 of the above PDF it is mentioned, for the code below, that it is impossible for a function to demand a specific Buffer. Why is that?
Because Worker, as defined, takes only one argument, the type of the "input" field (type variable x). E.g., Worker Int is a type. The type variable b, on the other hand, is not a parameter of Worker, but a sort of "local variable", so to speak. It cannot be passed in, as in Worker Int String -- that would trigger a type error.
If we instead defined:
data Worker x b = Worker {buffer :: b, input :: x}
then Worker Int String would work, but the type is no longer existential -- we now always have to pass the buffer type as well.
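With the parametric version, a function can demand a specific buffer in its signature, which is exactly what the existential version forbids (a sketch reusing the types above):

doMemWork :: Worker Int MemoryBuffer -> IO ()
doMemWork (Worker MemoryBuffer x) = print x  -- fine: b is fixed to MemoryBuffer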
As Haskell is a full-type-erasure language like C, how does it know at runtime which function to call? Is it something like maintaining a bit of information and passing around a huge v-table of functions, with the right one figured out from the v-table at runtime? If so, what sort of information does it store?
This is roughly correct. Briefly put, each time you apply the constructor Worker, GHC infers the b type from the arguments of Worker and then searches for an instance Buffer b. If one is found, GHC includes an additional pointer to the instance in the object. In its simplest form, this is not too different from the "pointer to vtable" which is added to each object in OOP languages when virtual functions are present.
In the general case, it can be much more complex, though. The compiler might use a different representation and add more pointers instead of a single one (say, directly adding the pointers to all the instance methods), if that speeds up code. Also, sometimes the compiler needs to use multiple instances to satisfy a constraint. E.g., if we need to store the instance for Eq [Int] ... then there is not one but two: one for Int and one for lists, and the two need to be combined (at run time, barring optimizations).
It is hard to guess exactly what GHC does in each case: that depends on a ton of optimizations which might or might not trigger.
You could try googling for the "dictionary based" implementation of type classes to see more about what's going on. You can also ask GHC to print the internal optimized Core with -ddump-simpl and observe the dictionaries being constructed, stored, and passed around. I have to warn you: Core is rather low level, and can be hard to read at first.

Bind type parameter

I am currently working with the quite nice haskell-eigen library and stumbled upon a fundamental, yet probably basic, problem (I am quite new to practical Haskell development).
I use their basic matrix type
data Matrix a b :: * -> * -> *
where a denotes the Haskell type and b the internal C type. This is realized via the constraint
Elem a b
with
Elem Double CDouble
Elem Float CFloat
-- more for complex types...
Although this is not really the question I want to ask here, I kind of don't understand why it is done this way. Since it is obviously a kind of functional mapping, I don't understand why it is formulated as a relation, but anyway...
I now want to define (as a simple example; I have several) an instance of Key from the keys package. It defines the index key for a given container, for example:
type instance Key [] = Int
So instances of Key are defined over types of kind * -> *.
However due to that requirement, this won't work:
type instance Key Matrix = (Int, Int)
I have to somehow make Matrix be of kind * -> *. So (coming from C++, where I would do this using traits classes) I tried this:
type family CType a where
  CType Double = CDouble
  CType Float = CFloat
type MatX a = Matrix a (CType a)
In other words, I tried to use type synonyms as a means of realizing the above-mentioned functional type map.
Now I tried the following:
type instance Key MatX = (Int, Int)
which gives me "The type synonym ‘MatX’ should have 1 argument, but has been given none" and I even tried the obviously wrong
type instance Key (MatX a) = (Int, Int)
which gives me "Expected kind * -> *, but MatX a has kind *". This sounds to me like "I the compiler expect a type with more than 0 but - being a type synonym - less than 1 argument".
So my question is: how does one commonly map types in Haskell in order to solve such a kind mismatch, or otherwise get rid of it?
P.S.: I am well aware that the eigen matrix has an indexing function, but
I want it to be a common one with other data types
I have this problem in other variants for other type instances.
Edit: Added reference links to mentioned packages.
You're nearly there. The one missing piece is that type synonyms must be used saturated - that is, you have to supply all of their arguments. MatX on its own is not a valid type, only MatX a is. The reason is that type synonyms are just synonyms: they're expanded at compile time, so the compiler needs to see all of a synonym's arguments in order to produce a valid type after expansion.
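A minimal sketch of the saturation rule, independent of the eigen types:

type Pair a = (a, a)

usePair :: Pair Int  -- fine: fully applied, expands to (Int, Int)
usePair = (1, 2)

-- But Pair on its own is not a type, so something like
--   instance Functor Pair where ...
-- is rejected: the synonym has nothing to expand to without its argument.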
The fix is to change your type synonym to a newtype.
newtype MatX a = MatX { getMatX :: Matrix a (CType a) }
newtypes can be partially applied, because MatX a is now a type in its own right, distinct from Matrix a (CType a).
type instance Key MatX = (Int, Int)
The other answer shows the general case for converting type synonyms into things that can be used in instance declarations. But in this specific case it can be much simpler: since the index type is the same for all different matrices, you can supply just the arguments needed to get the kind correct. Thus:
type instance Key (Matrix a) = (Int, Int)
No extra type families relating Haskell and C types needed, no new types needed. This will also make working with the keys library's API much simpler, as you won't need to do any newtype wrapping and unwrapping around each call.

What exactly is the kind "*" in Haskell?

In Haskell, (value-level) expressions are classified into types, which can be notated with :: like so: 3 :: Int, "Hello" :: String, (+ 1) :: Num a => a -> a. Similarly, types are classified into kinds. In GHCi, you can inspect the kind of a type expression using the command :kind or :k:
> :k Int
Int :: *
> :k Maybe
Maybe :: * -> *
> :k Either
Either :: * -> * -> *
> :k Num
Num :: * -> Constraint
> :k Monad
Monad :: (* -> *) -> Constraint
There are definitions floating around that * is the kind of "concrete types" or "values" or "runtime values." See, for example, Learn You A Haskell. How true is that? We've had a few questions about kinds that address the topic in passing, but it'd be nice to have a canonical and precise explanation of *.
What exactly does * mean? And how does it relate to other more complex kinds?
Also, do the DataKinds or PolyKinds extensions change the answer?
First off, * is not a wildcard! It's also typically pronounced "star."
Bleeding edge note: There is as of Feb. 2015 a proposal to simplify GHC's subkind system (in 7.12 or later). That page contains a good discussion of the GHC 7.8/7.10 story. Looking forward, GHC may drop the distinction between types and kinds, with * :: *. See Weirich, Hsu, and Eisenberg, System FC with Explicit Kind Equality.
The Standard: A description of type expressions.
The Haskell 98 report defines * in this context as:
The symbol * represents the kind of all nullary type constructors.
In this context, "nullary" simply means that the constructor takes no parameters. Either is binary; it can be applied to two parameters: Either a b. Maybe is unary; it can be applied to one parameter: Maybe a. Int is nullary; it can be applied to no parameters.
This definition is a little bit incomplete on its own. An expression containing a fully-applied unary, binary, etc. type constructor also has kind *, e.g. Maybe Int :: *.
In GHC: Something that contains values?
If we poke around the GHC documentation, we get something closer to the "can contain a runtime value" definition. The GHC Commentary page "Kinds" states that "'*' is the kind of boxed values. Things like Int and Maybe Float have kind *." The GHC user's guide for version 7.4.1, on the other hand, stated that * is the kind of "lifted types". (That passage wasn't retained when the section was revised for PolyKinds.)
Boxed values and lifted types are a bit different. According to the GHC Commentary page "TypeType",
A type is unboxed iff its representation is other than a pointer. Unboxed types are also unlifted.
A type is lifted iff it has bottom as an element. Closures always have lifted types: i.e. any let-bound identifier in Core must have a lifted type. Operationally, a lifted object is one that can be entered. Only lifted types may be unified with a type variable.
So ByteArray#, the type of raw blocks of memory, is boxed because it is represented as a pointer, but unlifted because bottom is not an element.
> undefined :: ByteArray#
Error: Kind incompatibility when matching types:
a0 :: *
ByteArray# :: #
Therefore it appears that the old User's Guide definition is more accurate than the GHC Commentary one: * is the kind of lifted types. (And, conversely, # is the kind of unlifted types.)
Note that if types of kind * are always lifted, for any type t :: * you can construct a "value" of sorts with undefined :: t or some other mechanism to create bottom. Therefore even "logically uninhabited" types like Void can have a value, i.e. bottom.
So it seems that, yes, * represents the kind of types that can contain runtime values, if undefined is your idea of a runtime value. (Which isn't a totally crazy idea, I don't think.)
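A quick illustration with Data.Void (a sketch):

import Data.Void (Void)

-- Void has no "proper" values, yet, being a lifted type of kind *,
-- it is still inhabited by bottom:
bottomVoid :: Void
bottomVoid = undefined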
GHC Extensions?
There are several extensions which liven up the kind system a bit. Some of these are mundane: KindSignatures lets us write kind annotations, like type annotations.
ConstraintKinds adds the kind Constraint, which is, roughly, the kind of the left-hand side of =>.
DataKinds lets us introduce new kinds besides * and #, just as we can introduce new types with data, newtype, and type.
With DataKinds every data declaration (terms and conditions may apply) generates a promoted kind declaration. So
data Bool = True | False
introduces the usual value constructor and type name; additionally, it produces a new kind, Bool, and two types: True :: Bool and False :: Bool.
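You can observe the promotion in GHCi; the tick ' explicitly selects the promoted, type-level constructor:
> :set -XDataKinds
> :kind 'True
'True :: Bool
> :kind 'False
'False :: Bool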
PolyKinds introduces kind variables. This is just a way to say "for any kind k", just like we say "for any type t" at the type level. As regards our friend * and whether it still means "types with values", I suppose you could say a type t :: k where k is a kind variable could contain values, if k ~ * or k ~ #.
In the most basic form of the kind language, where there are only the kind * and the kind constructor ->, then * is the kind of things that can stand in a type-of relationship to values; nothing with a different kind can be a type of values.
Types exist to classify values. All values with the same type are interchangeable for the purpose of type-checking, so the type checker only has to care about types, not specific values. So we have the "value level" where all the actual values live, and the "type level" where their types live. The "type-of" relationship forms links between the two levels, with a single type being the type of (usually) many values. Haskell makes these two levels quite explicit; it's why you can have declarations like data Foo = Foo Int Char Bool where you've declared a type-level thing Foo (a type with kind *) and a value-level thing Foo (a constructor with type Int -> Char -> Bool -> Foo). The two Foos involved simply refer to different entities on different levels, and Haskell separates these so completely that it can always tell which level you're referring to and thus can allow (sometimes confusingly) things on the different levels to have the same name.
But as soon as we introduce types that themselves have structure (like Maybe Int, which is a type constructor Maybe applied to a type Int), then we have things that exist at the type level which do not actually stand in a type-of relationship to any values. There are no values whose type is just Maybe, only values with type Maybe Int (and Maybe Bool, Maybe (), even Maybe Void, etc). So we need to classify our type-level things for the same reason we need to classify our values; only certain type-expressions actually represent something that can be the type of values, but many of them work interchangeably for the purpose of "kind-checking" (whether it's a correct type for the value-level thing it's declared to be the type of is a problem for a different level).1
So * (which is often stated to be pronounced "type") is the basic kind; it's the kind of all type-level things that can be stated to be the type of values. Int has values; therefore its type is *. Maybe does not have values, but it takes an argument and produces a type that has values; this gets us a kind like ___ -> *. We can fill in the blank by observing that Maybe's argument is used as the type of the value appearing in Just a, so its argument must also be a type of values (with kind *), and so Maybe must have kind * -> *. And so on.
When you're dealing with kinds that only involve stars and arrows, then only type-expressions of kind * are types of values. Any other kind (e.g. * -> (* -> * -> *) -> (* -> *)) only contains other "type-level entities" that are not actual types that contain values.
PolyKinds, as I understand it, doesn't really change this picture at all. It just allows you to make polymorphic declarations at the kind-level, meaning it adds variables to our kind language (in addition to stars and arrows). So now I can contemplate type-level things of kind k -> *; this could be instantiated to work as either kind * -> * or (* -> *) -> * or (* -> (* -> *)) -> *. We've gained exactly the same kind of power as having (a -> b) -> [a] -> [b] at the type level gained us; we can write one map function with a type that contains variables, instead of having to write every possible map function separately. But there's still only one kind that contains type-level things that are the types of values: *.
DataKinds also introduces new things to the kind language. Effectively what it does is let us declare arbitrary new kinds, which contain new type-level entities (just as ordinary data declarations allow us to declare arbitrary new types, which contain new value-level entities). But it doesn't let us declare things with a correspondence of entities across all 3 levels; if I have data Nat = Z | S Nat and use DataKinds to lift it to the kind level, then we have two different things named Nat that exist on the type level (as the type of value-level Z, S Z, S (S Z), etc.) and at the kind level (as the kind of type-level Z, S Z, S (S Z)). The type-level Z is not the type of any values, though; the value Z inhabits the type-level Nat (which in turn is of kind *), not the type-level Z. So DataKinds adds new user-defined things to the kind language, which can be the kind of new user-defined things at the type level, but it remains the case that the only type-level things that can be the types of values are of kind *.
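A sketch of the three levels (the ticks pick out the promoted constructors; the phantom-indexed Counter type is made up for illustration):

{-# LANGUAGE DataKinds, KindSignatures #-}

data Nat = Z | S Nat  -- values Z, S Z, ... inhabit the type Nat (of kind *)

-- 'Z is a type of kind Nat, but no value has type 'Z; it can only be
-- used as an index, e.g. as a phantom parameter:
newtype Counter (n :: Nat) = Counter Int

start :: Counter 'Z
start = Counter 0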
The only addition to the kind language that I'm aware of which truly does change this is the set of kinds mentioned in ChristianConkle's answer, such as # (I believe there are a couple more now too? I'm not really terribly knowledgeable about "low level" types such as ByteArray#). These are the kinds of types that have values that GHC needs to know to treat differently (such as not assuming they can be boxed and lazily evaluated), even when polymorphic functions are involved, so we can't just attach the knowledge that they need to be treated differently to these values' types, or it would be lost when calling polymorphic functions on them.
1 The word "type" can thus be a little confusing. Sometimes it is used to refer to things that actually stand in a type-of relationship to things on the value level (this is the interpretation used when people say "Maybe is not a type, it's a type-constructor"). And sometimes it's used to refer to anything that exists at the type-level (under this interpretation Maybe is in fact a type). In this post I'm trying to very explicitly refer to "type-level things" rather than use "type" as a short-hand.
For beginners who are trying to learn about kinds (you can think of them as the type of a type), I recommend this chapter of the Learn You a Haskell book.
I personally think of kinds in this way:
You have concrete types, e.g. Int, Bool, String, [Int], Maybe Int or Either Int String.
All of these have the kind *. Why? Because they can't take any more types as parameters; an Int is an Int; a Maybe Int is a Maybe Int. What about Maybe or [] or Either, though?
When you say Maybe, you do not have a concrete type, because you haven't specified its parameter. Maybe Int and Maybe String are different types, but both have kind *, while Maybe itself is waiting for a type of kind * in order to return a kind *. To clarify, let's look at what GHCi's :kind command can tell us:
Prelude> :kind Maybe Int
Maybe Int :: *
Prelude> :kind Maybe
Maybe :: * -> *
With lists it's the same:
Prelude> :k [String]
[String] :: *
Prelude> :k []
[] :: * -> *
What about Either?
Prelude> :k Either Int String
Either Int String :: *
Prelude> :k Either Int
Either Int :: * -> *
You could intuitively think of Either as a function that takes parameters, except that its parameters are types: Either Int :: * -> * means that Either Int is still waiting for one more type parameter.

Relationship between TypeRep and "Type" GADT

In Scrap Your Boilerplate Reloaded, the authors describe a new presentation of Scrap Your Boilerplate, which is supposed to be equivalent to the original.
However, one difference is that they assume a finite, closed set of "base" types, encoded with a GADT
data Type :: * -> * where
  Int :: Type Int
  List :: Type a -> Type [a]
  ...
In the original SYB, type-safe cast is used, implemented using the Typeable class.
My questions are:
What is the relationship between these two approaches?
Why was the GADT representation chosen for the "SYB Reloaded" presentation?
[I am one of the authors of the "SYB Reloaded" paper.]
TL;DR We really just used it because it seemed more beautiful to us. The class-based Typeable approach is more practical. The Spine view can be combined with the Typeable class and does not depend on the Type GADT.
The paper states this in its conclusions:
Our implementation handles the two central ingredients of generic programming differently from the original SYB paper: we use overloaded functions with explicit type arguments instead of overloaded functions based on a type-safe cast [1] or a class-based extensible scheme [20]; and we use the explicit spine view rather than a combinator-based approach. Both changes are independent of each other, and have been made with clarity in mind: we think that the structure of the SYB approach is more visible in our setting, and that the relations to PolyP and Generic Haskell become clearer. We have revealed that while the spine view is limited in the class of generic functions that can be written, it is applicable to a very large class of data types, including GADTs.

Our approach cannot be used easily as a library, because the encoding of overloaded functions using explicit type arguments requires the extensibility of the Type data type and of functions such as toSpine. One can, however, incorporate Spine into the SYB library while still using the techniques of the SYB papers to encode overloaded functions.
So, the choice of using a GADT for type representation is one we made mainly for clarity. As Don states in his answer, there are some obvious advantages in this representation, namely that it maintains static information about what type a type representation is for, and that it allows us to implement cast without any further magic, and in particular without the use of unsafeCoerce. Type-indexed functions can also be implemented directly by using pattern matching on the type, and without falling back to various combinators such as mkQ or extQ.
Fact is that I (and I think the co-authors) simply were not very fond of the Typeable class. (In fact, I'm still not, although it is finally becoming a bit more disciplined now in that GHC adds auto-deriving for Typeable, makes it kind-polymorphic, and will ultimately remove the possibility to define your own instances.) In addition, Typeable wasn't quite as established and widely known as it is perhaps now, so it seemed appealing to "explain" it by using the GADT encoding. And furthermore, this was the time when we were also thinking about adding open datatypes to Haskell, thereby alleviating the restriction that the GADT is closed.
So, to summarize: If you actually need dynamic type information only for a closed universe, I'd always go for the GADT, because you can use pattern matching to define type-indexed functions, and you do not have to rely on unsafeCoerce nor advanced compiler magic. If the universe is open, however, which is quite common, certainly for the generic programming setting, then the GADT approach might be instructive, but isn't practical, and using Typeable is the way to go.
However, as we also state in the conclusions of the paper, the choice of Type over Typeable isn't a prerequisite for the other choice we're making, namely to use the Spine view, which I think is more important and really the core of the paper.
The paper itself shows (in Section 8) a variation inspired by the "Scrap your Boilerplate with Class" paper, which uses a Spine view with a class constraint instead. But we can also do a more direct development, which I show in the following. For this, we'll use Typeable from Data.Typeable, but define our own Data class which, for simplicity, just contains the toSpine method:
class Typeable a => Data a where
  toSpine :: a -> Spine a
The Spine datatype now uses the Data constraint:
data Spine :: * -> * where
  Constr :: a -> Spine a
  (:<>:) :: (Data a) => Spine (a -> b) -> a -> Spine b
The function fromSpine is as trivial as with the other representation:
fromSpine :: Spine a -> a
fromSpine (Constr x) = x
fromSpine (c :<>: x) = fromSpine c x
Instances for Data are trivial for flat types such as Int:
instance Data Int where
  toSpine = Constr
And they're still entirely straightforward for structured types such as binary trees:
data Tree a = Empty | Node (Tree a) a (Tree a)
instance Data a => Data (Tree a) where
  toSpine Empty = Constr Empty
  toSpine (Node l x r) = Constr Node :<>: l :<>: x :<>: r
The paper then goes on and defines various generic functions, such as mapQ. These definitions hardly change. We only get class constraints for Data a => where the paper has function arguments of Type a ->:
mapQ :: Query r -> Query [r]
mapQ q = mapQ' q . toSpine
mapQ' :: Query r -> (forall a. Spine a -> [r])
mapQ' q (Constr c) = []
mapQ' q (f :<>: x) = mapQ' q f ++ [q x]
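(These definitions use the paper's Query type; in this class-based setting it would be the following rank-2 synonym, a sketch:)

{-# LANGUAGE RankNTypes #-}

-- A generic query: a function applicable to a value of any Data type.
type Query r = forall a. Data a => a -> r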
Higher-level functions such as everything also just lose their explicit type arguments (and then actually look exactly the same as in original SYB):
everything :: (r -> r -> r) -> Query r -> Query r
everything op q x = foldl op (q x) (mapQ (everything op q) x)
As I said above, if we now want to define a generic sum function summing up all Int occurrences, we cannot pattern match anymore, but have to fall back to mkQ. But mkQ is defined purely in terms of Typeable and is completely independent of Spine:
mkQ :: (Typeable a, Typeable b) => r -> (b -> r) -> a -> r
(r `mkQ` br) a = maybe r br (cast a)
And then (again exactly as in original SYB):
sum :: Query Int
sum = everything (+) sumQ
sumQ :: Query Int
sumQ = mkQ 0 id
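With the Tree instance above, this works as expected (a sketch of a GHCi session, assuming the Prelude's sum is hidden to avoid the name clash):

> sum (Node (Node Empty 1 Empty) 2 (Node Empty 3 Empty) :: Tree Int)
6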
For some of the stuff later in the paper (e.g., adding constructor information), a bit more work is needed, but it can all be done. So using Spine really does not depend on using Type at all.
Well, obviously the Typeable use is open -- new variants can be added after the fact, and without modifying the original definitions.
The important difference, though, is that TypeRep is untyped. That is, there is no connection between the runtime representation TypeRep and the static type it encodes. With the GADT approach we can encode the mapping between a type a and its representation, given by the GADT Type a.
We thus bake in evidence for the type rep being statically linked to its origin type, and can write statically typed dynamic application (for example) using Type a as evidence that we have a runtime a.
In the older TypeRep case, we have no such evidence, and it comes down to runtime string equality and a coerce-and-hope-for-the-best through fromDynamic.
Compare the signatures:
toDyn :: Typeable a => a -> TypeRep -> Dynamic
versus the GADT style:
toDyn :: a -> Type a -> Dynamic
I can't fake my type evidence, and I can use that evidence later when reconstructing things, e.g. to look up the type class instances for a when all I have is a Type a.
