Understanding multiple types/typeclasses in Haskell declarations - haskell

I'm trying to learn Haskell with Learn You A Haskell... but I got impatient and wanted to implement a favorite algorithm of mine to see if I could.
I'm working on the tortoise/hare algorithm (Floyd's algorithm) for cycle detection.
Here's the code I have so far:
idx :: (Eq a) => (a -> a) -> a -> a -> a
idx f tortoise hare
| (f tortoise) == (f (f hare)) = (f f hare)
| otherwise = (idx f) (f tortoise) (f f hare)
mu :: (Eq a) => (a -> a) -> a -> a -> Integer -> (Integer, a)
mu f tortoise hare cntr
| (f tortoise) == (f hare) = (cntr+1, f tortoise)
| otherwise = (mu f) (f tortoise) (f hare) (cntr+1)
lam :: (Eq a) => (a -> a) -> a -> a -> Integer -> Integer
lam f tortoise hare cntr
| tortoise == hare = cntr+1
| otherwise = (lam f) tortoise (f hare) (cntr+1)
floyd :: (Eq a) => (a -> a) -> a -> (Integer, Integer)
floyd f x0 =
let z = (idx f) x0 x0
(y1, t) = (mu f) x0 z 0
y2 = (lam f) t (f t) 0
in (y1, y2)
tester :: (Integer a) => a -> a
tester a
| a == 0 = 2
| a == 2 = 6
| a == 6 = 1
| a == 1 = 3
| a == 3 = 6
| a == 4 = 0
| a == 5 = 1
| otherwise = error "Input must be between 0 and 6"
(floyd tester) 0
This tries to break the logic up into three steps. First get the index where f_idx == f_{2*idx}, then move from the start to get the parameter mu (distance from first element to start of the cycle), then move until you hit a repeat (length of the cycle).
The function floyd is my hacky attempt to put these together.
Aside from this being somewhat un-functional, I am also having issues loading the module and I'm not sure why:
Prelude> :load M:\papers\programming\floyds.hs
[1 of 1] Compiling Main ( M:\papers\programming\floyds.hs, interpreted )
M:\papers\programming\floyds.hs:23:12:
`Integer' is applied to too many type arguments
In the type signature for `tester': tester :: Integer a => a -> a
Failed, modules loaded: none.
Changing all occurrences of Integer to Int or Num doesn't make it any better.
I'm not understanding the mis-application of Int. Following along in the tutorial, most type declarations for functions always have the form
function_name :: (Some_Type a) => <stuff involving a and possibly other types>
But when I replace the (Eq a) with (Num a) or (Int a) I get a similar error (type applied to too many arguments).
I tried reading this, but it disagrees with the tutorial's notation (e.g. almost every function defined in these examples).
I must be badly misunderstanding Types vs. TypeClasses, but that's precisely what I thought I did understand to lead me to make the type declarations as in my code above.
A follow up might be: what is the syntax for having multiple TypeClasses in the function type declaration? Something like:
mu :: (Eq a, Int b) => (a -> a) -> a -> a -> b -> (b, a)
(but this also gave compile errors saying Int was applied to too many arguments).
Added
Cleaned up and with changes based on the answer, the code below appears to be working:
idx :: (Eq a) => (a -> a) -> a -> a -> a
idx f tortoise hare
| (f tortoise) == (f (f hare)) = (f (f hare))
| otherwise = (idx f) (f tortoise) (f (f hare))
mu :: (Eq a) => (a -> a) -> a -> a -> Integer -> (Integer, a)
mu f tortoise hare cntr
| (f tortoise) == (f hare) = (cntr+1, (f tortoise))
| otherwise = (mu f) (f tortoise) (f hare) (cntr+1)
lam :: (Eq a) => (a -> a) -> a -> a -> Integer -> Integer
lam f tortoise hare cntr
| tortoise == hare = cntr+1
| otherwise = (lam f) tortoise (f hare) (cntr+1)
floyd :: (Eq a) => (a -> a) -> a -> (Integer, Integer)
floyd f x0 =
let z = (idx f) x0 x0
(y1, t) = (mu f) x0 z 0
y2 = (lam f) t (f t) 0
in (y1, y2)
tester :: (Integral a) => a -> a
tester a
| a == 0 = 2
| a == 2 = 6
| a == 6 = 1
| a == 1 = 3
| a == 3 = 6
| a == 4 = 0
| a == 5 = 1
| otherwise = error "Input must be between 0 and 6"
Then I see
*Main> floyd tester 2
(1,3)
and given this test function (essentially like the one from the Wikipedia example), this makes sense. If you start at x0 = 2 then the sequence is 2 -> 6 -> 1 -> 3 -> 6..., so mu is 1 (you have to move in one element to hit the start of the cycle) and lam is 3 (the sequence repeats every three entries).
I suppose there's some question about whether to always consider the first point as burn-in before you can possibly "repeat".
If anyone has advice on this, I'd be grateful. In particular, my cntr construct seems un-functional to me.. it's a way of counting how many repeated calls are made. I'm not sure if there's a better/different way that's less like saving the state of a variable.
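One possible answer to the counter question (a sketch, not the only idiomatic approach): lazy lists can replace the explicit counter, so the "state" becomes the length of a list built with iterate. Here tester mirrors the question's function, and cycleLength is a made-up helper computing lam for a point already known to lie on the cycle:

```haskell
-- The question's test function, rewritten with pattern matching.
tester :: Integral a => a -> a
tester 0 = 2
tester 2 = 6
tester 6 = 1
tester 1 = 3
tester 3 = 6
tester 4 = 0
tester 5 = 1
tester _ = error "Input must be between 0 and 6"

-- Length of the cycle through t (t is assumed to lie on the cycle).
-- tail (iterate f t) is the lazy list [f t, f (f t), ...]; counting the
-- elements until t reappears replaces the explicit cntr argument.
cycleLength :: Eq a => (a -> a) -> a -> Int
cycleLength f t = 1 + length (takeWhile (/= t) (tail (iterate f t)))

main :: IO ()
main = print (cycleLength tester 6)  -- the cycle 6 -> 1 -> 3 -> 6 has length 3
```

The counter never appears as an argument; the recursion is hidden inside iterate, takeWhile and length.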

You can't say Integer a or Int a. You probably mean Integral a. Integral encompasses all types that are integers of some kind, including Integer and Int.
The thing before => is not a type but a type class. SomeTypeClass a => a means "any type a that is a member of the type class SomeTypeClass".
You can do this:
function :: Int -> String
which is a function that takes an Int and returns a String. You can also do this:
function :: Integer -> String
which is a function that takes an Integer and returns a String. You can also do this:
function :: Integral i => i -> String
which is a function that takes either an Int, or an Integer, or any other integer-like type and returns a String.
About your second question, your guess at the syntax is right. You could do
mu :: (Eq a, Integral b) => (a -> a) -> a -> a -> b -> (b, a)
Your commented questions:
1. what do you do if you want to ensure something has a Type that is a member of multiple TypeClasses?
You could do something like
function :: (Show a, Integral a) => a -> String
That will restrict a to be any type that is both a member of Show and Integral.
2. Suppose you only want to restrict the Type to reside in a TypeClass for some of the arguments, and you want other arguments to be of specific Types?
Then you just write out the other arguments as specific types. You could do
function :: (Integral a) => a -> Int -> String
which takes any integer-like type a, and then an Int and returns a String.
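Putting both points together, here is a small compilable sketch (the name describe is made up for illustration): one type variable constrained by two classes at once, usable at any concrete Integral type:

```haskell
-- 'a' must be a member of both Show and Integral.
describe :: (Show a, Integral a) => a -> String
describe n
  | even n    = show n ++ " is even"
  | otherwise = show n ++ " is odd"

main :: IO ()
main = do
  putStrLn (describe (4 :: Int))      -- works at Int
  putStrLn (describe (7 :: Integer))  -- and at Integer
```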

The general form of a (Rank-1) type declaration is
x :: [forall a b … . ] Cᴏɴꜱᴛʀᴀɪɴᴛ(a, b, …) => Sɪɢɴᴀᴛᴜʀᴇ(a, b, …)
where
The forall a b … brings type variables in scope. This is usually omitted because Haskell98 implicitly uses all lowercase symbols in type-level expressions as type variables. Type variables are kind of like implicit parameters to a function: the caller gets to choose what particular type will be used, though they'll have to obey the...
Cᴏɴꜱᴛʀᴀɪɴᴛ(a, b, …). This is most often either
a type class identifier together with some of the type variables (e.g. Integral a) which means "the caller has to make sure we can use a as some kind of integral number – add other numbers to it etc.",
a tuple of such type class constraints, e.g. (Eq a, Show a), which is basically a constraint-level and: all of the constraints need to be fulfilled, i.e. the caller needs to make sure the variables are members of all the required type classes.
Sɪɢɴᴀᴛᴜʀᴇ(a, b, …) is often some sort of function expression where the type variables may turn up on either side of an arrow. There can also be fixed types: much like you can mix literals and variables in (value-level) Haskell code, you can mix built-in types with local type variables. For example,
showInParens :: Show a => a -> String
showInParens x = "(" ++ show x ++ ")"
These are by far not the most general forms, though. In terms of modern Haskell,
Cᴏɴꜱᴛʀᴀɪɴᴛ(a, b, …) is any type-level expression of kind Constraint, wherein the type variables may turn up, but also any suitable type constructors.
Sɪɢɴᴀᴛᴜʀᴇ(a, b, …) is, quite similarly, any type-level expression of kind * (the kind of actual types), wherein the type variables may turn up, but also any suitable type constructors.
Now what is a type constructor? It's a lot like what values and functions are on the value level, but obviously on the type level. For instance,
GHCi> :k Maybe
Maybe :: * -> *
which basically means: Maybe acts as a type-level function. It has the kind of a function which takes a type (*) and spits out another one (*), so, since Int is a type, Maybe Int is also a type.
This is a very general concept, and though it may take some time to fully grasp it I think the following explains quite well everything that might still need to be said:
GHCi> :k (->)
(->) :: * -> * -> *
GHCi> :k (,)
(,) :: * -> * -> *
GHCi> :k Eq
Eq :: * -> Constraint
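A small concrete consequence of these kinds (a sketch): since (,) :: * -> * -> *, the partial application ((,) a) has kind * -> *, which is exactly the kind a Functor instance requires, and such an instance exists for pairs in base:

```haskell
-- ((,) Int) :: * -> * is a type-level partial application, analogous to
-- partially applying a value-level function. Its Functor instance maps
-- over the second component of the pair.
main :: IO ()
main = print (fmap (+ 1) (0 :: Int, 41 :: Int))
```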

Related

Haskell function with type (num -> num) -> num

I am struggling with an exercise in R. Bird's functional programming book that asks for an example of a function with type (num -> num) -> num
The best I can come up with is a polymorphic type
func1 f = f 3
:t func1
func1 :: Num t1 => (t1 -> t2) -> t2
The problem I am having is that I can't specify the return type of f, so the type remains (num -> t2) -> t2.
My attempt to force the return type of f is as follows:
square x = x * x
:t func1 square
func1 square :: Num t2 => t2 -> t2
Because of course if I try to find the type of func1 square it will just be num -> num
If it is enough to give a function which can be assigned that type, then yours is already enough. That is, the following type-checks just fine:
func1 :: Num a => (a -> a) -> a
func1 f = f 3
If, on the other hand, you want a function which is inferred to have that type, then you need to do some trickery. What we want to do here is to specify that the result of f 3 and the 3 that we fed in have the same type. The standard way to force two terms to have the same type is to use asTypeOf, which is implemented this way:
asTypeOf :: a -> a -> a
asTypeOf x _ = x
So let's try:
> :t \f -> f 3 `asTypeOf` 3
(Num a, Num t) => (t -> a) -> a
Unfortunately for us, this doesn't work, because the 3 in f 3 and the standalone 3 are inferred to be using potentially different instances of Num. Still, it is a bit closer than \f -> f 3 was -- note the new Num a constraint on the output that we didn't have before. An obvious next idea is to let-bind a variable to 3 and reuse that variable as the argument to both f and asTypeOf; surely then GHC will get the picture that f's argument and result have the same type, right?
> :t \f -> let x = 3 in f x `asTypeOf` x
(Num a, Num t) => (t -> a) -> a
Drat. Turns out that lets do what's called "let generalization"; the x will be just as polymorphic as the 3 was, and can be specialized to different types at different use sites. Usually this is a nice feature, but because we're doing an unnatural exercise we need to do unnatural things...
Okay, next idea: some lambda calculi do not include a let, and when you need one, instead of writing let a = b in c, you write (\a -> c) b. This is especially interesting for us because Haskell uses a specially-restricted kind of polymorphism that means that inside c, the type of a is monomorphic. So:
> :t \f -> (\x -> f x `asTypeOf` x) 3
Num a => (a -> a) -> a
And now you complain that asTypeOf is cheating, because it uses a type declaration that doesn't match its inferred type, and the whole point of the exercise was to get the right type through inference alone. (If we were okay with using type declarations that don't match the inferred type, we could have stopped at func1 :: Num a => (a -> a) -> a; func1 f = f 3 from way back at the beginning!) Okay, no problem: there's another standardish way to force the types of two expressions to unify, namely, by putting them in a list together. So:
> :t \f -> (\x -> head [f x, x]) 3
Num a => (a -> a) -> a
Phew, now we're finally at a place where we could in principle build, from the ground up, all the tools needed to get a term of the right type without any type declarations.
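As a quick check (a sketch under my reading of the answer) that the final trick works, we can define the term with no signature at all and use it; GHC should infer Num a => (a -> a) -> a on its own:

```haskell
-- No type signature on purpose: the lambda-as-let forces f's argument
-- and result to unify, so inference alone gives Num a => (a -> a) -> a.
func1 f = (\x -> head [f x, x]) 3

main :: IO ()
main = print (func1 (* 2))  -- head [6, 3] = 6
```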
func1 f = let x = f x in x also technically has the type you want, but it is a partial function (its result is undefined for most f), so be aware of what partial functions are and how they behave in Haskell.

Pattern matching on types

Is there nice way to write the following "x is of type t" parts? (I suspect I should be using Data.Type.Equality but I'm not sure exactly how)
f :: a -> Int
g :: b -> Int
h :: Typeable t => t -> Maybe Int
h x = case x of
(x is of type a) -> Just (f x)
(x is of type b) -> Just (g x)
_ -> Nothing
Follow up question
This is a job for the "type safe cast" bits of Data.Typeable. cast :: (Typeable a, Typeable b) => a -> Maybe b pulls the runtime type information out of the Typeable dictionaries for a and b and compares them; if a and b are the same type then it returns Just its argument, otherwise it fails.
So, with cast and Maybe's Alternative instance in hand, we have:
h x = f <$> cast x
<|> g <$> cast x
As far as I know, there's no way to avoid the repetitious calls to cast, since they occur at different types.
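Making the answer self-contained requires picking concrete types for f and g (the choices Int and String below are mine, for illustration):

```haskell
import Control.Applicative ((<|>))
import Data.Typeable (Typeable, cast)

f :: Int -> Int
f = (* 2)

g :: String -> Int
g = length

-- Each cast compares x's runtime type to the type the surrounding
-- context demands (Int for f, String for g); <|> takes the first hit.
h :: Typeable t => t -> Maybe Int
h x = f <$> cast x
  <|> g <$> cast x

main :: IO ()
main = do
  print (h (21 :: Int))  -- matches f's branch
  print (h "hello")      -- matches g's branch
  print (h True)         -- matches neither
```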

What are the values of a polymorphically encoded recursive algebraic data type?

The following question relates to Recursive algebraic data types via polymorphism in Haskell.
Recursive algebraic data types can be realized in any language with the capabilities of System F using universal parametric polymorphism. For example, the type of natural numbers can be introduced (in Haskell) as
newtype Nat = Nat { runNat :: forall t. (t -> (t -> t) -> t) }
with the 'usual' natural number n being realized as
\ x0 f -> f(f(...(f x0)...))
with n iterations of f used.
Similarly, the type of Booleans can be introduced as
newtype Bool = Bool { runBool :: forall t. t -> t -> t }
with the expected values 'true' and 'false' being realized as
true = \ t f -> t
false = \ t f -> f
Q: Are all terms of type Bool or Nat or any other potentially recursive algebraic data type (encoded in this way) of the above form, up to some reduction rules of operational semantics?
Example 1 (Natural numbers): Is any term of type forall t. t -> (t -> t) -> t 'equivalent' in some sense to a term of the form \ x0 f -> f (f ( ... (f x0) ... ))?
Example 2 (Booleans): Is any term of type forall t. t -> t -> t 'equivalent' in some sense to either \ t f -> t or \ t f -> f?
Addendum (internal version): In case the language under consideration is even capable of expressing propositional equality, this meta-mathematical question could be internalized as follows, and I would be very happy if someone would come up with a solution for it:
For any functor m we can define the universal module and some decoding-encoding function on it as follows:
type ModStr m t = m t -> t
newtype UnivMod m = UnivMod { univProp :: forall t. (ModStr m t) -> t }
classifyingMap :: forall m. forall t. (ModStr m t) -> (UnivMod m -> t)
classifyingMap f = \ x -> (univProp x) f
univModStr :: (Functor m) => ModStr m (UnivMod m)
univModStr = \ f -> UnivMod $ \ g -> g (fmap (classifyingMap g) f)
dec_enc :: (Functor m) => UnivMod m -> UnivMod m
dec_enc x = (univProp x) univModStr
Q: In case the language is capable of expressing this: is the equality type dec_enc = id inhabited?
In System F (AKA λ2), all inhabitants of ∀α.α→α→α are indeed λ-equal to K or K*.
First, if M : ∀α.α→α→α then it has normal form N (since System F is normalizing) and by subject reduction theorem (see Barendregt: Lambda calculi with types) also N : ∀α.α→α→α.
Let's examine how these normal forms can look like. (We'll be using Generation lemma for λ2, see the Barendregt's book for formal details.)
If N is a normal form, then N (and any of its subexpressions) must be in head normal form, that is, an expression of the form λx1 ... xn. y P1 ... Pk, where n and/or k can also be 0.
For the case of N, there must be at least one λ, because initially we don't have any variable bound in the typing context that would take the place of y. So N = λx.U and x:α |- U:α→α.
Now again there must be at least one λ in the case of U, because if U were just y P1 ... Pk then y would have a function type (even for k=0 we'd need y:α→α), but we have just x:α in the context. So N = λxy.V and x:α, y:α |- V:α.
But V can't be λ.., because then it'd have function type τ→σ. So V must be just of the form z P1 ... Pk, but since we don't have any variable of function type in the context, k must be 0 and therefore V can be only x or y.
So there are only two terms in normal form of type ∀α.α→α→α: λxy.x and λxy.y and all other terms of this type are β-equal to one of these.
Using similar reasoning we can prove that all inhabitants of ∀α.α→(α→α)→α are β-equal to a Church numeral. (And I think that for type ∀α.(α→α)→α→α the situation is slightly worse; we also need η-equality, as λf.f and λfx.fx correspond to 1, but aren't β-equal, just βη-equal.)
If we disregard bottoms and unsafe stuff, then the only thing you can do universally with functions a -> a is compose them. However, that doesn't quite stop us at finite f (f ( ... (f x0) ... )) expressions: we also have the infinite composition infty x f = f $ infty x f.
Similarly, the only non-recursive boolean values are indeed \t _ -> t and \_ f -> f, but you can also tie knots here, like
blarg t f = blarg (blarg t f) (blarg f t)

Understanding parameters in Haskell

I am new to Haskell and am trying to call a function which I got from:
http://www.haskell.org/haskellwiki/Functional_differentiation
derive :: (Fractional a) => a -> (a -> a) -> (a -> a)
derive h f x = (f (x+h) - f x) / h
I am having trouble understanding the parameters of the method and what h f x correspond to.
From what I understand:
h is a fractional
f is a function which takes in a fractional and returns a fractional
x ?? where does that come from?
however when I type in GHCi:
Prelude> let derive h f x = (f (x+h) - f x) / h
Prelude> :t derive
derive :: Fractional a => a -> (a -> a) -> a -> a
Prelude>
I get a different type out of it.
What is going on? Is this some kind of currying?
It is indeed currying. (Fractional a) => a -> (a -> a) -> (a -> a) and Fractional a => a -> (a -> a) -> a -> a are the same type because -> is right associative.
Take add x y = x + y. Its type is Int -> Int -> Int ~ Int -> (Int -> Int). So add 5 is a function which takes an Int and adds 5 to it.
The reason that one might write the first form may be to put the emphasis on the usage of the curried form of a function.
Because -> is right associative, the type of derive could be written as
derive :: (Fractional a) => a -> (a -> a) -> a -> a
In other words,
derive :: (Fractional a) => a -> (a -> a) -> (a -> a)
equals
derive :: (Fractional a) => a -> (a -> a) -> a -> a
I think it makes what x means quite clear :-)
Ok, so the differentiation can be approximated as:
df(x)/dx = (f(x+h) - f(x)) / h , in the limit of h -> 0 at point x
where h is a small number. In Haskell, f(x) is written as f x. It takes an x and returns a number, just like f(x) takes a number and returns another. Your function for the derivative is a direct translation. Here, f is the function you want to differentiate at the point x, with the small number h.
So for the derivative, you provide the small number h, the function f and the point at which you want to calculate the derivative x. In Haskell,
derive h f x = ...
Not exactly. h is of type a, which could be anything, but it needs an instance of Fractional. Fractional by itself is not a type, but a type class, i.e., an interface the type must support.
f is a function that takes something of type a and returns something of the same type a. It should be the same a as before. Not some other instance of Fractional; the same one.

How to express existential types using higher rank (rank-N) type polymorphism?

We're used to having universally quantified types for polymorphic functions. Existentially quantified types are used much less often. How can we express existentially quantified types using universal type quantifiers?
It turns out that existential types are just a special case of Σ-types (sigma types). What are they?
Sigma types
Just as Π-types (pi types) generalise our ordinary function types, allowing the resulting type to depend on the value of its argument, Σ-types generalise pairs, allowing the type of second component to depend on the value of the first one.
In a made-up Haskell-like syntax, Σ-type would look like this:
data Sigma (a :: *) (b :: a -> *)
= SigmaIntro
{ fst :: a
, snd :: b fst
}
-- special case is a non-dependent pair
type Pair a b = Sigma a (\_ -> b)
Assuming * :: * (i.e. the inconsistent Set : Set), we can define exists a. a as:
Sigma * (\a -> a)
The first component is a type and the second one is a value of that type. Some examples:
foo, bar :: Sigma * (\a -> a)
foo = SigmaIntro Int 4
bar = SigmaIntro Char 'a'
exists a. a is fairly useless - we have no idea what type is inside, so the only operations that can work with it are type-agnostic functions such as id or const. Let's extend it to exists a. F a or even exists a. Show a => F a. Given F :: * -> *, the first case is:
Sigma * F -- or Sigma * (\a -> F a)
The second one is a bit trickier. We cannot just take a Show a type class instance and put it somewhere inside. However, if we are given a Show a dictionary (of type ShowDictionary a), we can pack it with the actual value:
Sigma * (\a -> (ShowDictionary a, F a))
-- inside is a pair of "F a" and "Show a" dictionary
This is a bit inconvenient to work with and assumes that we have a Show dictionary around, but it works. Packing the dictionary along is actually what GHC does when compiling existential types, so we could define a shortcut to have it more convenient, but that's another story. As we will learn soon enough, the encoding doesn't actually suffer from this problem.
Digression: thanks to constraint kinds, it's possible to reify the type class into concrete data type. First, we need some language pragmas and one import:
{-# LANGUAGE ConstraintKinds, GADTs, KindSignatures #-}
import GHC.Exts -- for Constraint
GADTs already give us the option to pack a type class along with the constructor, for example:
data BST a where
Nil :: BST a
Node :: Ord a => a -> BST a -> BST a -> BST a
However, we can go one step further:
data Dict :: Constraint -> * where
D :: ctx => Dict ctx
It works much like the BST example above: pattern matching on D :: Dict ctx gives us access to the whole context ctx:
show' :: Dict (Show a) -> a -> String
show' D = show
(.+) :: Dict (Num a) -> a -> a -> a
(.+) D = (+)
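Assembling the digression's pieces into one compilable module (a sketch; pattern matching on D is what brings the Show a context into scope for show'):

```haskell
{-# LANGUAGE ConstraintKinds, GADTs, KindSignatures #-}
import GHC.Exts (Constraint)

-- A reified type class dictionary: holding a Dict ctx is proof that
-- the constraint ctx is satisfied.
data Dict :: Constraint -> * where
  D :: ctx => Dict ctx

show' :: Dict (Show a) -> a -> String
show' D = show

main :: IO ()
main = putStrLn (show' D (42 :: Int))
```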
We also get quite natural generalisation for existential types that quantify over more type variables, such as exists a b. F a b.
Sigma * (\a -> Sigma * (\b -> F a b))
-- or we could use Sigma just once
Sigma (*, *) (\(a, b) -> F a b)
-- though this looks a bit strange
The encoding
Now, the question is: can we encode Σ-types with just Π-types? If yes, then the existential type encoding is just a special case. In all glory, I present you the actual encoding:
newtype SigmaEncoded (a :: *) (b :: a -> *)
= SigmaEncoded (forall r. ((x :: a) -> b x -> r) -> r)
There are some interesting parallels. Since dependent pairs represent existential quantification and from classical logic we know that:
(∃x)R(x) ⇔ ¬(∀x)¬R(x) ⇔ (∀x)(R(x) → ⊥) → ⊥
forall r. r is almost ⊥, so with a bit of rewriting we get:
(∀x)(R(x) → r) → r
And finally, representing universal quantification as a dependent function:
forall r. ((x :: a) -> R x -> r) -> r
Also, let's take a look at the type of Church-encoded pairs. We get a very similar looking type:
Pair a b ~ forall r. (a -> b -> r) -> r
We just have to express the fact that b may depend on the value of a, which we can do by using dependent function. And again, we get the same type.
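The non-dependent special case is plain Haskell already; as a sketch, here is the Church-encoded pair from the parallel above, with made-up accessor names fstP and sndP:

```haskell
{-# LANGUAGE RankNTypes #-}

-- Pair a b ~ forall r. (a -> b -> r) -> r
newtype Pair a b = Pair (forall r. (a -> b -> r) -> r)

pair :: a -> b -> Pair a b
pair a b = Pair (\f -> f a b)

-- Projections: hand the stored components to a selector function.
fstP :: Pair a b -> a
fstP (Pair p) = p (\a _ -> a)

sndP :: Pair a b -> b
sndP (Pair p) = p (\_ b -> b)

main :: IO ()
main = print (fstP (pair (1 :: Int) "x"), sndP (pair (1 :: Int) "x"))
```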
The corresponding encoding/decoding functions are:
encode :: Sigma a b -> SigmaEncoded a b
encode (SigmaIntro a b) = SigmaEncoded (\f -> f a b)
decode :: SigmaEncoded a b -> Sigma a b
decode (SigmaEncoded f) = f SigmaIntro
-- recall that SigmaIntro is a constructor
The special case actually simplifies things enough that it becomes expressible in Haskell, let's take a look:
newtype ExistsEncoded (F :: * -> *)
= ExistsEncoded (forall r. ((x :: *) -> (ShowDictionary x, F x) -> r) -> r)
-- simplify a bit
= ExistsEncoded (forall r. (forall x. (ShowDictionary x, F x) -> r) -> r)
-- curry (ShowDictionary x, F x) -> r
= ExistsEncoded (forall r. (forall x. ShowDictionary x -> F x -> r) -> r)
-- and use the actual type class
= ExistsEncoded (forall r. (forall x. Show x => F x -> r) -> r)
Note that we can view f :: (x :: *) -> x -> x as f :: forall x. x -> x. That is, a function with an extra * argument behaves as a polymorphic function.
And some examples:
showEx :: ExistsEncoded [] -> String
showEx (ExistsEncoded f) = f show
someList :: ExistsEncoded []
someList = ExistsEncoded $ \f -> f [1]
showEx someList == "[1]"
Notice that someList is actually constructed via encode, but we dropped the a argument. That's because Haskell will infer what x in the forall x. part you actually mean.
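The last line of the derivation is legal Haskell as it stands, modulo spelling the type variable in lowercase; a self-contained sketch:

```haskell
{-# LANGUAGE RankNTypes #-}

-- The simplified encoding from the derivation above
-- (type variables must be lowercase in real Haskell, hence f not F).
newtype ExistsEncoded f
  = ExistsEncoded (forall r. (forall x. Show x => f x -> r) -> r)

showEx :: ExistsEncoded [] -> String
showEx (ExistsEncoded f) = f show

someList :: ExistsEncoded []
someList = ExistsEncoded $ \f -> f [1 :: Int]

main :: IO ()
main = putStrLn (showEx someList)
```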
From Π to Σ?
Strangely enough (although out of the scope of this question), you can encode Π-types via Σ-types and regular function types:
newtype PiEncoded (a :: *) (b :: a -> *)
= PiEncoded (forall r. Sigma a (\x -> b x -> r) -> r)
-- \x -> is lambda introduction, b x -> r is a function type
-- a bit confusing, I know
encode :: ((x :: a) -> b x) -> PiEncoded a b
encode f = PiEncoded $ \sigma -> case sigma of
SigmaIntro a bToR -> bToR (f a)
decode :: PiEncoded a b -> (x :: a) -> b x
decode (PiEncoded f) x = f (SigmaIntro x (\b -> b))
I found an answer in Proofs and Types by Jean-Yves Girard, Yves Lafont and Paul Taylor.
Imagine we have some one-argument type t :: * -> * and construct an existential type that holds t a for some a: exists a. t a. What can we do with such a type? In order to compute something out of it we need a function that can accept t a for arbitrary a, that means a function of type forall a. t a -> b. Knowing this, we can encode an existential type simply as a function that takes functions of type forall a. t a -> b, supplies the existential value to them and returns the result b:
{-# LANGUAGE RankNTypes #-}
newtype Exists t = Exists (forall b. (forall a. t a -> b) -> b)
Creating an existential value is now easy:
exists :: t a -> Exists t
exists x = Exists (\f -> f x)
And if we want to unpack the existential value, we just apply its content to a function that produces the result:
unexists :: (forall a. t a -> b) -> Exists t -> b
unexists f (Exists e) = e f
However, purely existential types are of very little use. We cannot do anything reasonable with a value we know nothing about. More often we need an existential type with a type class constraint. The procedure is just the same, we just add a type class constraint for a. For example:
newtype ExistsShow t = ExistsShow (forall b. (forall a. Show a => t a -> b) -> b)
existsShow :: Show a => t a -> ExistsShow t
existsShow x = ExistsShow (\f -> f x)
unexistsShow :: (forall a. Show a => t a -> b) -> ExistsShow t -> b
unexistsShow f (ExistsShow e) = e f
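A small usage sketch: with the Show constraint packed inside, values of different element types can live in one list and still be displayed (the list contents below are my own example):

```haskell
{-# LANGUAGE RankNTypes #-}

newtype ExistsShow t = ExistsShow (forall b. (forall a. Show a => t a -> b) -> b)

existsShow :: Show a => t a -> ExistsShow t
existsShow x = ExistsShow (\f -> f x)

unexistsShow :: (forall a. Show a => t a -> b) -> ExistsShow t -> b
unexistsShow f (ExistsShow e) = e f

-- A heterogeneous list: element types differ ([Int], String, [Bool]),
-- but each is packed with its Show instance, so all can be printed.
main :: IO ()
main = mapM_ (putStrLn . unexistsShow show)
             [existsShow [1, 2, 3 :: Int], existsShow "abc", existsShow [True]]
```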
Note: Using existential quantification in functional programs is often considered a code-smell. It can indicate that we haven't liberated ourselves from OO thinking.

Resources