Data families vs injective type families

Now that we have injective type families, is there any remaining use case for using data families over type families?
Looking at past StackOverflow questions about data families, there is this question from a couple years ago discussing the difference between type families and data families, and this answer about use cases of data families. Both say that the injectivity of data families is their greatest strength.
Looking at the docs on data families, I see no reason not to rewrite all uses of data families using injective type families.
For example, say I have a data family (I've merged some examples from the docs to try to squeeze in all the features of data families)
data family G a b
data instance G Int Bool = G11 Int | G12 Bool deriving (Eq)
newtype instance G () a = G21 a
data instance G [a] b where
  G31 :: c -> G [Int] b
  G32 :: G [a] Bool
I might as well rewrite it as
type family G a b = g | g -> a b
type instance G Int Bool = G_Int_Bool
type instance G () a = G_Unit_a a
type instance G [a] b = G_lal_b a b
data G_Int_Bool = G11 Int | G12 Bool deriving (Eq)
newtype G_Unit_a a = G21 a
data G_lal_b a b where
  G31 :: c -> G_lal_b Int b
  G32 :: G_lal_b a Bool
It goes without saying that associated data family instances correspond to associated type family instances in the same way. Is the only remaining difference, then, that we have fewer things in the type namespace?
As a follow-up, is there any benefit to having fewer things in the type namespace? All I can think of is that this will become debugging hell for someone playing with this in GHCi - the types of the constructors all seem to indicate that they all belong to one GADT...

type family T a = r | r -> a
data family D a
An injective type family T satisfies the injectivity axiom
if T a ~ T b then a ~ b
But a data family satisfies the much stronger generativity axiom
if D a ~ g b then D ~ g and a ~ b
(If you like: Because the instances of D define new types that are different from any existing types.)
In fact D itself is a legitimate type in the type system, unlike a type family such as T, which can only ever appear in a fully saturated application like T a. This means
D can be the argument to another type constructor, like MaybeT D. (MaybeT T is illegal.)
You can define instances for D, like instance Functor D. (You can't define instances for a type family Functor T, and it would be unusable anyway because instance selection for, e.g., map :: Functor f => (a -> b) -> f a -> f b relies on the fact that from the type f a you can determine both f and a; for this to work f cannot be allowed to vary over type families, even injective ones.)
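To illustrate both points (a minimal sketch of my own, not from the original answer; MaybeT is from the transformers package, and the Tagged class is a made-up example):
{-# LANGUAGE TypeFamilies #-}
import Control.Monad.Trans.Maybe (MaybeT)

data family D a
type family T a

-- D is a legitimate unsaturated type, so both of these are accepted:
type M = MaybeT D

class Tagged f where
  tagName :: f a -> String

instance Tagged D where
  tagName _ = "D"

-- Neither works for T: GHC rejects MaybeT T and instance Tagged T,
-- because a type family may only ever appear fully saturated.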

You're missing one other detail - data families create new types. Type families can only refer to other types. In particular, every instance of a data family declares new constructors. And it's nicely generic. You can create a data instance with newtype instance if you want newtype semantics. Your instance can be a record. It can have multiple constructors. It can even be a GADT if you want.
It's exactly the difference between the type and data/newtype keywords. Injective type families don't give you new types, rendering them useless in the case where you need that.
I understand where you're coming from. I had this same issue with the difference initially. Then I finally ran into a use case where they're useful, even without a type class getting involved.
I wanted to write an api for dealing with mutable cells in a few different contexts, without using classes. I knew I wanted to do it with a free monad with interpreters in IO, ST, and maybe some horrible hacks with unsafeCoerce to even go so far as shoehorning it into State. This wasn't for any practical purpose, of course - I was just exploring API designs.
So I had something like this:
data MutableEnv (s :: k) a ...
newRef :: a -> MutableEnv s (Ref s a)
readRef :: Ref s a -> MutableEnv s a
writeRef :: Ref s a -> a -> MutableEnv s ()
The definition of MutableEnv wasn't important. Just standard free/operational monad stuff with constructors matching the three functions in the api.
But I was stuck on what to define Ref as. I didn't want some sort of class, I wanted it to be a concrete type as far as the type system was concerned.
Then late one night I was out for a walk and it hit me - what I essentially want is a type whose constructors are indexed by an argument type. But it had to be open, unlike a GADT - new interpreters could be added at will. And then it hit me. That's exactly what a data family is. An open, type-indexed family of data values. I could complete the api with just the following:
data family Ref (s :: k) :: * -> *
Then, dealing with the underlying representation for a Ref was no big deal. Just create a data instance (or newtype instance, more likely) whenever an interpreter for MutableEnv is defined.
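For concreteness, here's roughly what those instances could look like (my sketch; the IOEnv/STEnv tag types are hypothetical, since the original only specifies the data family itself):
{-# LANGUAGE TypeFamilies, PolyKinds #-}
import Data.IORef (IORef)
import Data.STRef (STRef)

data family Ref (s :: k) :: * -> *

-- One hypothetical tag type per interpreter:
data IOEnv
data STEnv st

-- Each interpreter picks its own concrete representation:
newtype instance Ref IOEnv a      = IORefRef (IORef a)
newtype instance Ref (STEnv st) a = STRefRef (STRef st a)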
This exact example isn't really useful. But it clearly illustrates something data families can do that injective type families can't.

The answer by Reid Barton explains the distinction between my two examples perfectly. It has reminded me of something I read in Richard Eisenberg's thesis about adding dependent types to Haskell and I thought that since the heart of this question is injectivity and generativity, it would be worth mentioning how DependentHaskell will deal with this (when it eventually gets implemented, and if the quantifiers proposed now are the ones eventually implemented).
What follows is based on pages 56 and 57 (4.3.4 Matchability) of the aforementioned thesis:
Definition (Generativity). If f and g are generative, then f a ~ g b implies f ~ g
Definition (Injectivity). If f is injective, then f a ~ f b implies a ~ b
Definition (Matchability). A function f is matchable iff it is generative and injective
In Haskell as we know it now (8.0.1) the matchable (type-level) functions consist exactly of newtype, data, and data family type constructors. In the future, under DependentHaskell, one of the new quantifiers we will get will be '-> and this will be used to denote matchable functions. In other words, there will be a way to inform the compiler a type-level function is generative (which currently can only be done by making sure that function is a type constructor).

Related

Is anything generative?

In the paper "Higher-order Type-level Programming in Haskell", an f :: Type -> Type is defined to be "generative" in the following way:
Definition (Generativity). f is generative ⇔ f a ~ g b ⇒ f ~ g
I'm going to explicitly write out the intended quantification as I understand it:
type IsGenerative :: (Type -> Type) -> Constraint
class (forall g a b. f a ~ g b => f ~ g) => IsGenerative f
Equivalently, in words:
F :: Type -> Type is generative if there is no G :: Type -> Type besides F such that there exist A, B :: Type for which F A ~ G B
The paper goes on to make a statement about the generativity of unsaturated type-families (they're not generative). To my understanding, in order to be able to form the proposition of whether or not unsaturated type-families are generative, the variables f, g :: Type -> Type should range over type-families as well as type constructors. Note that this means the ~ in f ~ g must represent some more abstract sense of definitional equality than GHC's (~) :: (Type -> Type) -> (Type -> Type) -> Constraint, which cannot be applied to unsaturated type families.
Now here's the problem: it doesn't seem like anything is generative. You'd expect that a datatype constructor like Maybe :: Type -> Type would be generative, but I can easily construct a distinct type family G :: Type -> Type and A, B :: Type for which Maybe A ~ G B (despite Maybe /~ G).
type G :: Type -> Type
type family G a where
  G _ = Maybe Int

data Dict c where
  Dict :: c => Dict c

lhs :: Dict (Maybe Int ~ G String)
lhs = Dict
As I said before, we can't actually form the proposition Maybe ~ G within GHC (because G is not saturated), but if F ~ G is taken to mean "F is definitionally equal to G", it's pretty obvious that Maybe /~ G. So it seems like Maybe is not actually generative in the sense defined in the paper. And it seems to me that any data/newtype is susceptible to a similar sequence of reasoning.
So where am I going wrong?
Is my assumption that F, G are allowed to range over type-families as well as type constructors justified? If not, generativity seems like a rather trivial property: "we cannot form the proposition of whether type families are generative, so type families are not generative".
Am I misunderstanding how the variables are quantified in the statement of generativity?
Are there actually any type-level expressions f :: Type -> Type that satisfy the formal property of being generative?
Eh, you're overthinking it. The ~ really is the one from GHC. If you prefer, replace the claim "unsaturated type families are not generative" with "if we expanded ~ to allow unsaturated type families¹, then they would not be guaranteed generative²". This latter fact is (part of) the reason we don't bother expanding ~ to allow unsaturated type families -- it would be much less useful for them than it is for other type expressions.
If they were not precise about this divide in the paper, it's just a bit of slightly sloppy writing, such as we've all done at one point or another.
¹ You can probably deal with the G/Maybe situation by simply allowing type families on one side of ~ but not the other.
² In fact, I believe it's even stronger: they would be guaranteed not to be generative.

Type constructors and existential types

Only polymorphic functions can be applied to values of existential types.
Those properties can be expressed by the corresponding quantifiers for expressions, and characterized by natural transformations.
Similarly, when we define a type constructor
data List a = Nil | Cons a (List a)
This type constructor works for all a, whereas type families allow non-uniform type constructors:
type family TRes i
type instance TRes Bool = String
type instance TRes String = Bool
What natural transformation characterizes precisely this idea of "uniformity" at type level ?
Is there an equivalent of forcing naturality like we have at value level with rank-n types ?
ApplyNat :: (forall a. a -> F a) -> b -> F b
I think you've confused a couple of different ideas here.
This type constructor works for all a.
That's totality. List :: * -> * produces a valid type of kind * given any argument a of kind *. Haskell 98 datatypes are always total, but, as you point out, in modern Haskell you can write type families which don't cover all possible cases. TRes Int is not a "real" type, in the sense that it contains no values, it doesn't reduce to any other type, and it's not equal to any type other than TRes Int.
Haskell has no totality checker at the value level or the type level (apart from the rules about undecidable instances, which are a blunt instrument), so, just as there is no way to rule out undefined values, there is no way to rule out "stuck" type families like TRes Int. (For more on "stuck" type families see this blog post by Richard Eisenberg, the designer of TypeInType.)
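For instance (a tiny sketch of "stuckness", reusing TRes from the question):
{-# LANGUAGE TypeFamilies #-}

type family TRes i
type instance TRes Bool = String
type instance TRes String = Bool

-- TRes Int is accepted as a type expression, but it is stuck:
-- it never reduces, so nothing (except bottom) inhabits it.
stuck :: TRes Int
stuck = undefined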
Naturality is an altogether different idea. In value-level Haskell, a natural transformation between f and g is a polymorphic function mapping values of type f x to values of type g x, without knowing anything about x.
type f ~> g = forall x. f x -> g x
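For example (a small sketch; Data.Maybe's maybeToList has exactly this shape):
{-# LANGUAGE RankNTypes, TypeOperators #-}

type f ~> g = forall x. f x -> g x

-- A natural transformation must treat every x uniformly,
-- so it can only rearrange the container structure:
maybeToList' :: Maybe ~> []
maybeToList' Nothing  = []
maybeToList' (Just x) = [x]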
With GHC 8 and TypeInType we can talk about kinds using the same language we use to talk about types, because kinds are types. The type expression forall x. f x -> g x has kind * ((~>) :: forall k. (k -> *) -> (k -> *) -> *), so it's a perfectly valid classifier for types as well. A type with that kind is a polymorphic type function mapping types of kind f x to types of kind g x.
What would you use a type-level natural transformation for, in the real world? I dunno. You wouldn't, probably.

Practical applications of Rank 2 polymorphism?

I'm covering polymorphism and I'm trying to see the practical uses of such a feature.
My basic understanding of Rank 2 is:
type MyType = ∀ a. a -> a
subFunction :: a -> a
subFunction el = el
mainFunction :: MyType -> Int
mainFunction func = func 3
I understand that this is allowing the user to use a polymorphic function (subFunction) inside mainFunction and strictly specify its output (Int). This seems very similar to GADTs:
data Example a where
  ExampleInt :: Int -> Example Int
  ExampleBool :: Bool -> Example Bool
1) Given the above, is my understanding of Rank 2 polymorphism correct?
2) What are the general situations where Rank 2 polymorphism can be used, as opposed to GADTs, for example?
If you pass a polymorphic function as an argument to a Rank2-polymorphic function, you're essentially passing not just one function but a whole family of functions – for all possible types that fulfill the constraints.
Typically, those forall quantifiers come with a class constraint. For example, I might wish to do number arithmetic with two different types simultaneously (for comparing precision or whatever).
data FloatCompare = FloatCompare
  { singlePrecision :: Float
  , doublePrecision :: Double
  }
Now I might want to modify those numbers through some maths operation. Something like
modifyFloat :: (Num -> Num) -> FloatCompare -> FloatCompare
But Num is not a type, only a type class. I could of course pass a function that would modify any particular number type, but I couldn't use that to modify both a Float and a Double value, at least not without some ugly (and possibly lossy) converting back and forth.
Solution: Rank-2 polymorphism!
modifyFloat :: (∀ n . Num n => n -> n) -> FloatCompare -> FloatCompare
modifyFloat f (FloatCompare single double)
  = FloatCompare (f single) (f double)
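A quick usage sketch (assuming the definitions above, plus a derived Show instance for FloatCompare):
demo :: FloatCompare
demo = modifyFloat (* 2) (FloatCompare 1.5 2.5)
-- FloatCompare {singlePrecision = 3.0, doublePrecision = 5.0}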
The best single example of how this is useful in practice is probably lenses. A lens is a “smart accessor function” to a field in some larger data structure. It allows you to access fields, update them, gather results... while at the same time composing in a very simple way. How it works: Rank2-polymorphism; every lens is polymorphic, with the different instantiations corresponding to the “getter” / “setter” aspects, respectively.
The go-to example of an application of rank-2 types is runST as Benjamin Hodgson mentioned in the comments. This is a rather good example and there are a variety of examples using the same trick. For example, branding to maintain abstract data type invariants across multiple types, avoiding confusion of differentials in ad, a region-based version of ST.
But I'd actually like to talk about how Haskell programmers are implicitly using rank-2 types all the time. Every type class whose methods have universally quantified types desugars to a dictionary with a field with a rank-2 type. In practice, this is virtually always a higher-kinded type class* like Functor or Monad. I'll use a simplified version of Alternative as an example. The class declaration is:
class Alternative f where
  empty :: f a
  (<|>) :: f a -> f a -> f a
The dictionary representing this class would be:
data AlternativeDict f = AlternativeDict {
  empty :: forall a. f a,
  (<|>) :: forall a. f a -> f a -> f a }
Sometimes such an encoding is nice as it allows one to use different "instances" for the same type, perhaps only locally. For example, Maybe has two obvious instances of Alternative depending on whether Just a <|> Just b is Just a or Just b. Languages without type classes, such as Scala, do indeed use this encoding.
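As a sketch of that flexibility (my example; field names differ from the dictionary above to avoid clashing with the Prelude):
{-# LANGUAGE RankNTypes #-}

data AltDict f = AltDict
  { emptyA :: forall a. f a
  , orElse :: forall a. f a -> f a -> f a
  }

-- Two equally valid "instances" for Maybe, chosen locally:
leftBiased :: AltDict Maybe
leftBiased = AltDict
  { emptyA = Nothing
  , orElse = \x y -> case x of { Just _ -> x; Nothing -> y }
  }

rightBiased :: AltDict Maybe
rightBiased = AltDict
  { emptyA = Nothing
  , orElse = \x y -> case y of { Just _ -> y; Nothing -> x }
  }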
To connect to leftaroundabout's reference to lenses, you can view the hierarchy there as a hierarchy of type classes and the lens combinators as simply tools for explicitly building the relevant type class dictionaries. Of course, the reason it isn't actually a hierarchy of type classes is that we usually will have multiple "instances" for the same type. E.g. _head and _head . _tail are both "instances" of Traversal' s a.
* A higher-kinded type class doesn't necessarily lead to this, and it can also happen for a type class whose parameter is of kind *. For example:
-- Higher-kinded but doesn't require universal quantification.
class Sum c where
  sum :: c Int -> Int

-- Not higher-kinded but does require universal quantification.
class Length l where
  length :: [a] -> l
If you are using modules in Haskell, you are already using Rank-2 types. Theoretically speaking, modules are records with rank-2 type properties.
For example, the Foo module below in Haskell ...
-- Foo.hs
module Foo (id) where

id :: forall a. a -> a
id x = x

-- Main.hs
import qualified Foo

main = do
  putStrLn (Foo.id "hello")
  return ()
... can actually be thought as a record as follows:
data FooType = FooType {
  id :: forall a. a -> a
}

foo :: FooType
foo = FooType {
  id = \x -> x
}
P.S. (unrelated to this question): from a language design perspective, if you are going to support a module system, then you might as well support higher-rank types (i.e. allow arbitrary quantification of type variables at any level) to reduce duplication of effort (i.e. type checking a module should be almost the same as type checking a record with higher-rank types).

Any advantage of using type constructors in type classes?

Take for example the class Functor:
class Functor a
instance Functor Maybe
Here Maybe is a type constructor.
But we can do this in two other ways:
Firstly, using multi-parameter type classes:
class MultiFunctor a e
instance MultiFunctor (Maybe a) a
Secondly using type families:
class MonoFunctor a
instance MonoFunctor (Maybe a)
type family Element c
type instance Element (Maybe a) = a
Now there's one obvious advantage of the two latter methods, namely that it allows us to do things like this:
instance MultiFunctor Text Char
Or:
instance MonoFunctor Text
type instance Element Text = Char
So we can work with monomorphic containers.
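Here's a minimal self-contained sketch of the monomorphic-container idea (my example, mirroring the design of the mono-traversable library; Data.Text is from the text package):
{-# LANGUAGE TypeFamilies #-}
import qualified Data.Text as T

type family Element c
type instance Element T.Text = Char

class MonoFunctor c where
  omap :: (Element c -> Element c) -> c -> c

instance MonoFunctor T.Text where
  omap = T.map  -- T.map :: (Char -> Char) -> T.Text -> T.Text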
The second advantage is that we can make instances of types that don't have the type parameter as the final parameter. Let's say we make an Either-style type but put the types the wrong way around:
data Silly t errorT = Silly t errorT
instance Functor Silly -- oh no we can't do this without a newtype wrapper
Whereas
instance MultiFunctor (Silly t errorT) t
works fine and
instance MonoFunctor (Silly t errorT)
type instance Element (Silly t errorT) = t
is also good.
Given these flexibility advantages of only using complete types (not unapplied type constructors) in type class definitions, is there any reason to use the original style of definition, assuming you're using GHC and don't mind using the extensions? That is, is there anything special you can do by putting a type constructor, rather than a full type, in a type class that you can't do with multi-parameter type classes or type families?
Your proposals ignore some rather important details about the existing Functor definition because you didn't work through the details of writing out what would happen with the class's member function.
class MultiFunctor a e where
  mfmap :: (e -> ??) -> a -> ????

instance MultiFunctor (Maybe a) a where
  mfmap = ???????
An important property of fmap at the moment is that its first argument can change types. fmap show :: (Functor f, Show a) => f a -> f String. You can't just throw that away, or you lose most of the value of fmap. So really, MultiFunctor would need to look more like...
class MultiFunctor s t a b | s -> a, t -> b, s b -> t, t a -> s where
  mfmap :: (a -> b) -> s -> t

instance (a ~ c, b ~ d) => MultiFunctor (Maybe a) (Maybe b) c d where
  mfmap _ Nothing = Nothing
  mfmap f (Just a) = Just (f a)
Note just how incredibly complicated this has become to try to make inference at least close to possible. All the functional dependencies are in place to allow instance selection without annotating types all over the place. (I may have missed a couple possible functional dependencies in there!) The instance itself grew some crazy type equality constraints to allow instance selection to be more reliable. And the worst part is - this still has worse properties for reasoning than fmap does.
Supposing my previous instance didn't exist, I could write an instance like this:
instance MultiFunctor (Maybe Int) (Maybe Int) Int Int where
  mfmap _ Nothing = Nothing
  mfmap f (Just a) = Just (if f a == a then a else f a * 2)
This is broken, of course - but it's broken in a new way that wasn't even possible before. A really important part of the definition of Functor is that the types a and b in fmap don't appear anywhere in the instance definition. Just looking at the class is enough to tell the programmer that the behavior of fmap cannot depend on the types a and b. You get that guarantee for free. You don't need to trust that instances were written correctly.
Because fmap gives you that guarantee for free, you don't even need to check both Functor laws when defining an instance. It's sufficient to check the law fmap id x == x. The second law comes along for free when the first law is proven. But with that broken mfmap I just provided, mfmap id x == x is true, even though the second law is not.
As the implementer of mfmap, you have more work to do to prove your implementation is correct. As a user of it, you have to put more trust in the implementation's correctness, since the type system can't guarantee as much.
If you work out more complete examples for the other systems, you find that they have just as many issues if you want to support the full functionality of fmap. And this is why they aren't really used. They add a lot of complexity for only a small gain in utility.
Well, for one thing the traditional functor class is just much simpler. That alone is a valid reason to prefer it, even though this is Haskell and not Python. And it also better represents the mathematical idea of what a functor is supposed to be: a mapping from objects to objects (f :: * -> *), with the extra property (-> Constraint) that each (forall (a :: *) (b :: *)) morphism (a -> b) is lifted to a morphism on the corresponding mapped objects (-> f a -> f b). None of that can be seen very clearly in the * -> * -> Constraint version of the class, or its TypeFamilies equivalent.
On a more practical account, yes, there are also things you can only do with the (*->*)->Constraint version.
In particular, what this constraint guarantees you right away is that all Haskell types are valid objects you can put into the functor, whereas for MultiFunctor you need to check every possible contained type, one by one. Sometimes that's just not possible (or is it?), like when you're mapping over infinitely many types:
data Tough f a = Doable (f a)
               | Tough (f (Tough f (a, a)))

instance (Applicative f) => Semigroup (Tough f a) where
  Doable x <> Doable y = Tough . pure . Doable $ (,) <$> x <*> y
  Tough xs <> Tough ys = Tough $ (<>) <$> xs <*> ys
  -- The following actually violates the semigroup associativity law.
  -- Hardly matters here I suppose...
  xs <> Doable y = xs <> (Tough . pure . Doable $ fmap twice y)
  Doable x <> ys = (Tough . pure . Doable $ fmap twice x) <> ys

twice x = (x, x)
Note that this uses the Applicative instance of f not just on the a type, but also on arbitrary tuples thereof. I can't see how you could express that with a MultiParamTypeClasses- or TypeFamilies-based applicative class. (It might be possible if you make Tough a suitable GADT, but without that... probably not.)
BTW, this example is perhaps not as useless as it may look – it basically expresses read-only vectors of length 2^n in a monadic state.
The expanded variant is indeed more flexible. It was used e.g. by Oleg Kiselyov to define restricted monads. Roughly, you can have
class MN2 m a where
  ret2 :: a -> m a

class (MN2 m a, MN2 m b) => MN3 m a b where
  bind2 :: m a -> (a -> m b) -> m b
allowing monad instances to be parametrized over a and b. This is useful because you can restrict those types to members of some other class:
import qualified Data.Set as Set

instance MN2 Set.Set a where
  -- does not require Ord
  ret2 = Set.singleton

instance Prelude.Ord b => MN3 Set.Set a b where
  -- Set.union requires Ord
  bind2 m f = Set.fold (Set.union . f) Set.empty m
Note that because of that Ord constraint, we are unable to define Monad Set.Set using unrestricted monads. Indeed, the monad class requires the monad to be usable at all types.
Also see: parameterized (indexed) monad.

What does Functor's fmap tell about types?

What do f a and f b tell me about a type?
class Functor f where
  fmap :: (a -> b) -> f a -> f b
I think I get the idea behind standard instances of a functor. However, I'm having a hard time understanding what f a and f actually represent.
I understand that f a and f b are just types, and that they must carry information about which type constructor was used to create them and which type arguments were used.
Is f a type constructor of kind * -> *? Is (->) r a type constructor just like Maybe is?
I understand that f a and f b are just types, and that they must carry information about which type constructor was used to create them and which type arguments were used.
Good explanation.
Is f a type constructor of kind * -> *?
In effect.
Is (->) r a type constructor just like Maybe is?
In effect, yes:
Yes in the sense that you can apply it to a type like String and get r -> String, just like you can apply Maybe to String to get Maybe String. You can use for f anything that gives you a type from any other type.
..but no...
No, in the sense that Daniel Wagner points out; To be precise, Maybe and [] are type constructors, but (->) r and Either a are sort of like partially applied type constructors. Nevertheless they make good functors, because you can freely apply functions "inside" them and change the type of "the contents".
(Stuff in inverted commas is very hand-wavy imprecise terminology.)
My (possibly mildly tortured) reading of chapter 4 of the Haskell 2010 Report is that Maybe and (->) r are both types, of kind * -> *. Alternatively, the Report also labels them as type expressions—but I can't discern a firm difference in how the Report uses the two terms, except perhaps for surface syntax details. (->) and Maybe are type constructors; type expressions are assembled from type constructors and type variables.
For example, section 4.1.1 ("Kinds") of the 2010 report says (my boldface):
To ensure that they are valid, type expressions are classified into different kinds, which take one of two possible forms:
The symbol ∗ represents the kind of all nullary type constructors.
If κ₁ and κ₂ are kinds, then κ₁ → κ₂ is the kind of types that take a type of kind κ₁ and return a type of kind κ₂.
Section 4.3.2, "Instance Declarations" (my boldface):
An instance declaration that makes the type T to be an instance of class C is called a C-T instance declaration and is subject to these static restrictions:
A type may not be declared as an instance of a particular class more than once in the program.
The class and type must have the same kind; this can be determined using kind inference as described in Section 4.6.
So going by that language, the following instance declaration makes the type (->) r to be an instance of the class Functor:
instance Functor ((->) r) where
  fmap f g = f . g
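A quick sketch of what this instance means in practice: fmap for functions is just composition.
example :: Int
example = fmap (* 2) (+ 10) 1
-- (* 2) ((+ 10) 1) = 22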
The funny thing about this terminology is that we call (->) r a "type" even though there are no expressions in Haskell that have that type—not even undefined:
foo :: (->) r
foo = undefined
{-
[1 of 1] Compiling Main ( ../src/scratch.hs, interpreted )
../src/scratch.hs:1:8:
Expecting one more argument to `(->) r'
In the type signature for `foo': foo :: (->) r
-}
But I think that's not a big deal. Basically, any value in Haskell must have a type of kind *.
As a side note, from my limited understanding of dependently typed languages, many of these lack Haskell's firm distinction between terms and types, so that something like (->) Boolean is an expression whose value is a function that takes a type as its argument and produces a type as its result.
