List of existentially quantified values in Haskell

I'm wondering why this piece of code doesn't type-check:
{-# LANGUAGE ScopedTypeVariables, Rank2Types, RankNTypes #-}
{-# OPTIONS -fglasgow-exts #-}
module Main where
foo :: [forall a. a]
foo = [1]
ghc complains:
Could not deduce (Num a) from the context ()
arising from the literal `1' at exist5.hs:7:7
Given that:
Prelude> :t 1
1 :: (Num t) => t
Prelude>
it seems that the (Num t) context can't match the () context of the argument. The point I can't understand is that since () is more general than (Num t), the latter should be an inclusion of the former. Has this anything to do with Haskell's lack of support for sub-typing?
Thank you for any comment on this.

You're not using existential quantification here. You're using rank N types.
Here [forall a. a] means that every element must have every possible type (not any, every). So [undefined, undefined] would be a valid list of that type and that's basically it.
To expand on that a bit: if a list has type [forall a. a] that means that all the elements have type forall a. a. That means that any function that takes any kind of argument, can take an element of that list as argument. This is no longer true if you put in an element which has a more specific type than forall a. a, so you can't.
To get a list which can contain any type, you need to define your own list type with existential quantification. Like so:
{-# LANGUAGE ExistentialQuantification #-}
data MyList = Nil | forall a. Cons a MyList
foo :: MyList
foo = Cons 1 Nil
Of course, unless you constrain the element types to at least be instances of Show, you can't do anything useful with a list of that type.
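For instance, here is a minimal sketch (the ShowList type and showAll function are made-up names for illustration) of a variant that packs a Show constraint into the existential, so the elements can at least be printed:
{-# LANGUAGE ExistentialQuantification #-}
-- Every element carries a Show dictionary along with its hidden type.
data ShowList = ShowNil | forall a. Show a => ShowCons a ShowList
showAll :: ShowList -> [String]
showAll ShowNil = []
showAll (ShowCons x xs) = show x : showAll xs
-- showAll (ShowCons (1 :: Int) (ShowCons "hi" (ShowCons True ShowNil)))
--   == ["1","\"hi\"","True"]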

First, your example doesn't even get that far with me on current GHC, because you need to enable ImpredicativeTypes as well. Doing so results in a warning that ImpredicativeTypes will be simplified or removed in the next GHC. So we're not in good territory here. Nonetheless, adding the proper Num constraint (foo :: [forall a. Num a => a]) does allow your example to compile.
Let's leave aside impredicative types and look at a simpler example:
data Foo = Foo (forall a. a)
foo = Foo 1
This also doesn't compile with the error Could not deduce (Num a) from the context ().
Why? Well, the type promises that you're going to give the Foo constructor something with the quality that for any type a, it produces an a. The only thing that satisfies this is bottom. An integer literal, on the other hand, promises that for any type a that is of class Num it produces an a. So the types are clearly incompatible. We can however pull the forall a bit further out, to get what you probably want:
data Foo = forall a. Foo a
foo = Foo 1
So that compiles. But what can we do with it? Well, let's try to define an extractor function:
unFoo (Foo x) = x
Oops! Quantified type variable 'a' escapes. So we can define the type, but we can't do much of interest with its contents. If we gave a class context, then we could at least use some of the class functions on it.
There is a time and place for existentials, including ones without class context, but it's fairly rare, especially when you're getting started. When you do end up using them, often it will be in the context of GADTs, which are a superset of existential types, but in which the way that existentials arise feels quite natural.
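To illustrate how an existential arises naturally in a GADT, here is a small sketch (the Pipeline type is invented for illustration): the intermediate type b appears in the constructor's fields but not in its result type, so it is existentially quantified.
{-# LANGUAGE GADTs #-}
-- 'b' is hidden: a Pipeline a c stores two composable stages without
-- exposing the intermediate type between them.
data Pipeline a c where
  Pipeline :: (a -> b) -> (b -> c) -> Pipeline a c
runPipeline :: Pipeline a c -> a -> c
runPipeline (Pipeline f g) = g . f
-- runPipeline (Pipeline show length) (12345 :: Int)  ==  5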

Because the type [forall a. a] is (in meaning) the equivalent of saying, "I have a list, and if you (i.e. the computer) pick a type, I guarantee that the elements of said list will have that type."
The compiler is "calling your bluff", so to speak, by complaining: "I know that if you give me a 1, its type is in the Num class, but you said that I could pick any type I wanted for that list."
Basically, you're trying to use the value of a universal type as if it were the type of a universal value. Those aren't the same thing, though.
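For comparison, what the question was probably after is a list that is polymorphic as a whole, i.e. with the forall outside the list brackets; a minimal sketch:
-- Quantifying over the whole list (rather than each element) is ordinary
-- rank-1 polymorphism and needs no extensions.
foo :: Num a => [a]
foo = [1]
-- Each caller picks the element type: foo :: [Int], foo :: [Double], ...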

Related

Indirectly reference specific AND unspecific objects in Haskell without monads

I want to store a bunch of elements of different "sorts" in a container datastructure (list, set,..) and indirectly reference them from somewhere else. Normally, you would probably use a sum type for the elements and probably a list as container or similar:
data Element = Sort1 | Sort2 String | ...
type ElementList = List Element
Now I want to use (and save) some sort of references to the elements of the list (e.g. using an index to its position in the list - let's imagine for now that the list does not change and that this is thus a good idea). I want to be able to reference elements where I don't care about the "sort" AND, here's the catch, I also want to be able to reference elements and state in the type that they are of some specific sort. This rules out the simple sum type solution above (because the sort is not in the type!). I tried a solution with GADTs:
{-# LANGUAGE GADTs, EmptyDataDecls, RankNTypes, ScopedTypeVariables #-}
data Sort1
data Sort2
data Element t where
  Sort1Element :: Element Sort1
  Sort2Element :: String -> Element Sort2
newtype ElementRef a = ElementRef Int
type UnspecificElementRefs = [forall t. ElementRef (Element t)]
type OneSpecificSort1ElementRef = ElementRef (Element Sort1)
main = do
  let unspecificElementRefs :: UnspecificElementRefs = [ElementRef 1, ElementRef 2]
  let oneSpecificElementRef :: OneSpecificSort1ElementRef = ElementRef 1
  putStrLn "hello"
But this gives me the error:
- Illegal polymorphic type: forall t. ElementRef (Element t) - GHC doesn't yet support impredicative polymorphism
- In the type synonym declaration for ‘UnspecificElementRefs’
|
11 | type UnspecificElementRefs = [forall t. ElementRef (Element t)]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
How can I solve this? I'm not looking for a solution to this specific error (I guess this is a dead end?) or specifically with GADTs or even using an int index for referencing or so, I simply want a solution which:
solves the basic problem of indirectly referencing something unspecific and also something specific
does not use STRef or similar, which would force my whole program to be in a monad (since this is quite a central data structure)
“Impredicative polymorphism” means you have a ∀ that is wrapped in a type constructor other than ->; in this case, in []. As the error message tells you, GHC Haskell does not support this (it's not that it isn't sensible in principle, but as far as I understand it makes type checking quite a nightmare).
This can be solved though by wrapping the ∀ in something that forms a “type checker barrier”: in a newtype.
newtype UnspecificElementRef = UnspecificElementRef (∀ t. ElementRef (Element t))
type UnspecificElementRefs = [UnspecificElementRef]
In fact, this can be simplified seeing as ElementRef is itself just a newtype wrapper with a phantom type argument:
newtype UnspecificElementRef = UnspecificElementRef Int
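Putting the newtype barrier together with the question's GADT gives a self-contained sketch that compiles without ImpredicativeTypes (the helper names here are illustrative):
{-# LANGUAGE GADTs, RankNTypes #-}
data Sort1
data Sort2
data Element t where
  Sort1Element :: Element Sort1
  Sort2Element :: String -> Element Sort2
newtype ElementRef a = ElementRef Int
-- The forall is hidden behind the newtype, so the list element type is monomorphic.
newtype UnspecificElementRef = UnspecificElementRef (forall t. ElementRef (Element t))
type UnspecificElementRefs = [UnspecificElementRef]
unspecificRefs :: UnspecificElementRefs
unspecificRefs = [UnspecificElementRef (ElementRef 1), UnspecificElementRef (ElementRef 2)]
oneSpecificRef :: ElementRef (Element Sort1)
oneSpecificRef = ElementRef 1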

What is the Ord type?

Isn't every class a type in Haskell? For example:
Prelude> :t max
max :: Ord a => a -> a -> a
Prelude> :t Ord
<interactive>:1:1: Not in scope: data constructor ‘Ord’
Prelude>
Why does this not print Ord's type signature?
Okay, there's a couple of things going on here.
First when you write :t Ord you're looking for something called Ord in the value namespace; specifically it would have to be a constructor, since the name starts with a capital letter.
Haskell keeps types and values completely separate; there is no relationship between the name of a type and the names of a type's constructors. Often when there's only one constructor, people will use the same name as the type. An example being data Foo = Foo Int. This declares two new named entities: the type Foo and the constructor Foo :: Int -> Foo.
It's not really a good idea to think of it as just making a type Foo that can be used both in type expressions and to construct Foos, because declarations like data Maybe a = Nothing | Just a are also common. Here there are two different constructors for Maybe a, and Maybe isn't the name of anything at all at the value level.
So just because you've seen Ord in a type expression doesn't mean that there is a name Ord at the value level for you to ask the type of with :t. Even if there were, it wouldn't necessarily be related to the type-level name Ord.
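A tiny illustration of the two namespaces:
data Colour = Red | Green | Blue
-- ghci> :t Red          -- works: Red is a value-level name (a constructor)
-- Red :: Colour
-- ghci> :t Colour       -- error: there is no value-level name 'Colour'
-- ghci> :k Colour       -- but there is a type-level one
-- Colour :: *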
The second point that needs clarifying is that no, classes are not in fact types. A class is a set of types (which all support the interface defined in the class), but it is not a type itself.
In vanilla Haskell type classes are just "extra" things. You can declare them with a class declaration, instantiate them with an instance declaration, and use them in special syntax attached to types (the stuff left of the => arrow) as constraints on type variables. But they don't really interact with the rest of the language, and you cannot use them in the main part of a type signature (the stuff right of the => arrow).
However, with the ConstraintKinds extension on, type classes do become ordinary things that exist in the type namespace, just like Maybe. They are still not types in the sense that there can never be any values that have them as types, so you can't use Ord or Ord Int as an argument or return type in a function, or have a [Ord a] or anything like that.
In that they are a bit like type constructors like Maybe. Maybe is a name bound in the type namespace, but it is not a type as such; there are no values whose type is just Maybe, but Maybe can be used as part of an expression defining a type, as in Maybe Int.
If you're not familiar with kinds, probably ignore everything I've said from ConstraintKinds onwards; you'll probably learn about kinds eventually, but they're not a feature you need to know much about as a beginner. If you are, however, what ConstraintKinds does is make a special kind Constraint and have type class constraints (left of the => arrow) just be ordinary type-level things of kind Constraint instead of special-purpose syntax. This means that Ord is a type-level thing, and we can ask its kind with the :k command in GHCi:
Prelude> :k Ord
Ord :: * -> Constraint
Which makes sense; max had type Ord a => a -> a -> a, so Ord a must have kind Constraint. If Ord can be applied to an ordinary type to yield a constraint, it must have kind * -> Constraint.
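As a small sketch of what ConstraintKinds buys you (the Sortable synonym is a made-up example): a constraint synonym is an ordinary type-level definition of kind * -> Constraint, just like Ord itself.
{-# LANGUAGE ConstraintKinds #-}
-- Sortable has kind * -> Constraint; it can be used anywhere left of =>.
type Sortable a = (Ord a, Show a)
smallest :: Sortable a => [a] -> String
smallest = show . minimum
-- smallest [3, 1, 2 :: Int]  ==  "1"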
Ord isn't a type; it's a typeclass. Typeclasses allow you to associate supported operations with a given type (somewhat similar to interfaces in Java or protocols in Objective-C). A type (e.g. Int) being an "instance" of a typeclass (e.g. Ord) means that the type supports the functions of the Ord typeclass (e.g. compare, <, > etc.).
You can get most info about a typeclass using :i in ghci, which shows you the functions associated with the typeclass and which types are instances of it:
ghci > :i Ord
class Eq a => Ord a where
  compare :: a -> a -> Ordering
  (<) :: a -> a -> Bool
  (>=) :: a -> a -> Bool
  (>) :: a -> a -> Bool
  (<=) :: a -> a -> Bool
  max :: a -> a -> a
  min :: a -> a -> a
  -- Defined in ‘GHC.Classes’
instance Ord a => Ord (Maybe a) -- Defined in ‘Data.Maybe’
instance (Ord a, Ord b) => Ord (Either a b)
-- Defined in ‘Data.Either’
instance Ord Integer -- Defined in ‘integer-gmp:GHC.Integer.Type’
instance Ord a => Ord [a] -- Defined in ‘GHC.Classes’
...
Ord is not a type, but a typeclass. It does not have a type, but a kind:
Prelude> :k Ord
Ord :: * -> Constraint
Typeclasses are one of the wonderful things about Haskell. Check 'em out :-)
Not quite. You can impose type constraints, so Ord a => a is a type, but Ord a isn't. Ord a => a means "any type a with the constraint that it is an instance of Ord".
The error is because :t expects an expression. When GHCi tries to interpret Ord as an expression, the closest it can get to is a data constructor, since these are the only functions in Haskell that can start with capital letters.

How can I find out which (concrete) types satisfy a set of typeclass constraints?

Given a number of typeclass constraints:
{-# LANGUAGE ConstraintKinds, MultiParamTypeClasses #-}
import Data.Array.Unboxed(Ix,IArray,UArray)
type IntLike a = (Ord a, Num a, Enum a, Show a, Ix a, IArray UArray a)
How can I find out which types satisfy IntLike, i.e. all the mentioned constraints jointly?
I can puzzle together the information needed from the output of ghci's :info command, and then doublecheck my work by calling (or having ghci typecheck)
isIntLike :: IntLike a => a -> Bool
isIntLike = const True
at various types, e.g. isIntLike (3::Int).
Is there a way to get ghci to do this for me?
I'm currently interested in concrete types, but wouldn't mind having a more general solution which also does clever stuff with unifying contexts!
Community Wiki answer based on the comments:
You can do this using Template Haskell:
main = print $(reify ''Show >>= stringE . show)
This won't work for type synonyms - rather, reify returns the AST representing the type synonym itself, without expanding it. You can check for type synonyms which are constraints, extract the constraints of which that type synonym consists, and continue reifying those.
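Spelled out as a complete module (a sketch; the one-liner needs the TemplateHaskell extension and the Language.Haskell.TH import), it looks roughly like:
{-# LANGUAGE TemplateHaskell #-}
module Main where
import Language.Haskell.TH
-- At compile time, reify the Show class and splice in a string rendering of
-- its Info, which includes the instances in scope.
main :: IO ()
main = print $(reify ''Show >>= stringE . show)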

Associated Parameter Restriction using Functional Dependency

The function f below, for a given type 'a', takes a parameter of type 'c'. For different types 'a', 'c' is restricted in different ways. Concretely, when 'a' is any Integral type, 'c' should be allowed to be any 'Real' type. When 'a' is Float, 'c' can ONLY be Float.
One attempt is:
{-# LANGUAGE
  MultiParamTypeClasses,
  FlexibleInstances,
  FunctionalDependencies,
  UndecidableInstances #-}
class AllowedParamType a c | a -> c
class Foo a where
  f :: (AllowedParamType a c) => c -> a
fIntegral :: (Integral a, Real c) => c -> a
fIntegral = error "implementation elided"
instance (Integral i, AllowedParamType i d, Real d) => Foo i where
  f = fIntegral
For some reason, GHC 7.4.1 complains that it "could not deduce (Real c) arising from a use of fIntegral". It seems to me that the functional dependency should allow this deduction. In the instance, a is unified with i, so by the functional dependency, d should be unified with c, which in the instance is declared to be 'Real'. What am I missing here?
Functional dependencies aside, will this approach be expressive enough to enforce the restrictions above, or is there a better way? We are only working with a few different values for 'a', so there will be instances like:
instance (Integral i, Real c) => AllowedParamType i c
instance AllowedParamType Float Float
Thanks
A possibly better way is to use constraint kinds and type families (GHC extensions; requires GHC 7.4, I think). This allows you to specify the constraint as part of the class instance.
{-# LANGUAGE ConstraintKinds, TypeFamilies, FlexibleInstances, UndecidableInstances #-}
import GHC.Exts (Constraint)
class Foo a where
  type ParamConstraint a b :: Constraint
  f :: ParamConstraint a b => b -> a
instance Integral i => Foo i where
  type ParamConstraint i b = Real b
  f = fIntegral
EDIT: Upon further experimentation, there are some subtleties that mean that this doesn't work as expected, specifically, type ParamConstraint i b = Real b is too general. I don't know a solution (or if one exists) right now.
OK, this one's been nagging at me. Given the wide variety of instances, let's go the whole hog and get rid of any relationship between the source and target type other than the presence of an instance:
{-# LANGUAGE FlexibleInstances, TypeSynonymInstances, MultiParamTypeClasses #-}
class Foo a b where f :: a -> b
Now we can match up pairs of types with an f between them however we like, for example:
instance Foo Int Int where f = (+1)
instance Foo Int Integer where f = toInteger.((7::Int) -)
instance Foo Integer Int where f = fromInteger.(^ (2::Integer))
instance Foo Integer Integer where f = (*100)
instance Foo Char Char where f = id
instance Foo Char String where f = (:[]) -- requires TypeSynonymInstances
instance (Foo a b,Functor f) => Foo (f a) (f b) where f = fmap f -- requires FlexibleInstances
instance Foo Float Int where f = round
instance Foo Integer Char where f n = head $ show n
This does mean a lot of explicit type annotation to avoid No instance for... and Ambiguous type error messages.
For example, you can't do main = print (f 6), but you can do main = print (f (6::Int)::Int)
You could list all of the instances with the standard types that you want,
which could lead to an awful lot of repetition, or you could light the blue touchpaper and do:
instance Integral i => Foo Double i where f = round -- requires FlexibleInstances
instance Real r => Foo Integer r where f = fromInteger -- requires FlexibleInstances
Beware: this does not mean "Hey, if you've got an integral type i,
you can have an instance Foo Double i for free using this handy round function",
it means: "every time you have any type i, it's definitely an instance
Foo Double i. By the way, I'm using round for this, so unless your type i is Integral,
we're going to fall out." That's a big issue for the Foo Integer Char instance, for example.
This can easily break your other instances, so if you now type f (5::Integer) :: Integer you get
Overlapping instances for Foo Integer Integer
  arising from a use of `f'
Matching instances:
  instance Foo Integer Integer
  instance Real r => Foo Integer r
You can change your pragmas to include OverlappingInstances:
{-# LANGUAGE OverlappingInstances, FlexibleInstances, TypeSynonymInstances, MultiParamTypeClasses #-}
So now f (5::Integer) :: Integer returns 500, so clearly it's using the more specific Foo Integer Integer instance.
I think this sort of approach might work for you, defining many instances by hand, carefully considering when to go completely wild
making instances out of standard type classes. (Alternatively, there aren't all that many standard types, and as we all know, notMany choose 2 = notIntractablyMany, so you could just list them all.)
Here's a suggestion to solve a more general problem, not yours specifically (I need more detail yet first - I promise to check later). I'm writing it in case other people are searching for a solution to a similar problem to you, I certainly was in the past, before I discovered SO. SO is especially great when it helps you try a radically new approach.
I used to have the work habit:
Introduce a multi-parameter type class (Types hanging out all over the place, so...)
Introduce functional dependencies (Should tidy it up but then I end up needing...)
Add FlexibleInstances (Alarm bells start ringing. There's a reason the compiler has this off by default...)
Add UndecidableInstances (GHC is telling you you're on your own, because it's not convinced it's up to the challenge you're setting it.)
Everything blows up. Refactor somehow.
Then I discovered the joys of type families (functional programming for types (hooray) - multi-parameter type classes are (a bit like) logic programming for types). My workflow changed to:
Introduce a type class including an associated type, i.e. replace
class MyProblematicClass a b | a -> b where
  thing :: a -> b
  thang :: b -> a -> b
with
class MyJustWorksClass a where
  type Thing a :: *   -- Thing a is a type (*), not a type constructor (* -> *)
  thing :: a -> Thing a
  thang :: Thing a -> a -> Thing a
Nervously add FlexibleInstances. Nothing goes wrong at all.
Sometimes fix things by using constraints like (MyJustWorksClass j,j~a)=> instead of (MyJustWorksClass a)=> or (Show t,t ~ Thing a,...)=> instead of (Show (Thing a),...) => to help ghc out. (~ essentially means 'is the same type as')
Nervously add FlexibleContexts. Nothing goes wrong at all.
Everything works.
The reason "Nothing goes wrong at all" is that ghc calculates the type Thing a using my type function Thang rather than trying to deduce it using a merely a bunch of assertions that there's a function there and it ought to be able to work it out.
Give it a go! Read Fun with Type Functions before reading the manual!
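As a concrete illustration of the associated-type style (a hypothetical instance, repeating the class for self-containedness, just to show how Thing is filled in per instance):
{-# LANGUAGE TypeFamilies #-}
class MyJustWorksClass a where
  type Thing a :: *
  thing :: a -> Thing a
  thang :: Thing a -> a -> Thing a
-- For Int we choose Thing Int = String:
instance MyJustWorksClass Int where
  type Thing Int = String
  thing n = show n
  thang s n = s ++ show n
-- thang (thing (1 :: Int)) 2  ==  "12"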

Polymorphic signature for non-polymorphic function: why not?

As an example, consider the trivial function
f :: (Integral b) => a -> b
f x = 3 :: Int
GHC complains that it cannot deduce (b ~ Int). The definition matches the signature in the sense that it returns something that is Integral (namely an Int). Why would/should GHC force me to use a more specific type signature?
Thanks
Type variables in Haskell are universally quantified, so Integral b => b doesn't just mean some Integral type, it means any Integral type. In other words, the caller gets to pick which concrete types should be used. Therefore, it is obviously a type error for the function to always return an Int when the type signature says I should be able to choose any Integral type, e.g. Integer or Word64.
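For contrast, a minimal sketch of a definition that does satisfy the polymorphic signature, because the literal 3 is itself overloaded:
-- 3 desugars to fromInteger 3, so it inhabits every Integral type; the caller
-- still picks b.
f :: Integral b => a -> b
f _ = 3
-- f "anything" :: Word     ==  3
-- f True      :: Integer   ==  3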
There are extensions which allow you to use existentially quantified type variables, but they are more cumbersome to work with, since they require a wrapper type (in order to store the type class dictionary). Most of the time, it is best to avoid them. But if you did want to use existential types, it would look something like this:
{-# LANGUAGE ExistentialQuantification #-}
data SomeIntegral = forall a. Integral a => SomeIntegral a
f :: a -> SomeIntegral
f x = SomeIntegral (3 :: Int)
Code using this function would then have to be polymorphic enough to work with any Integral type. We also have to pattern match using case instead of let to keep GHC's brain from exploding.
> case f True of SomeIntegral x -> toInteger x
3
> :t toInteger
toInteger :: Integral a => a -> Integer
In the above example, you can think of x as having the type exists b. Integral b => b, i.e. some unknown Integral type.
The most general type of your function is
f :: a -> Int
With a type annotation, you can only demand that you want a more specific type, for example
f :: Bool -> Int
but you cannot declare a less specific type.
The Haskell type system does not allow you to make promises that are not warranted by your code.
As others have said, in Haskell if a function returns a result of type x, that means that the caller gets to decide what the actual type is. Not the function itself. In other words, the function must be able to return any possible type matching the signature.
This is different to most OOP languages, where a signature like this would mean that the function gets to choose what it returns. Apparently this confuses a few people...
