We can have a polymorphic function f :: a -> b implemented for different pairs of a and b. How can we make
twice :: (a -> b) -> a -> c
twice f x = f (f x)
type check? That is, how can I write a function that applies a polymorphic function twice?
With Rank2Types we can get a bit closer but not quite there:
{-# LANGUAGE Rank2Types #-}
twice1 :: (forall a. a -> (m a)) -> b -> (m (m b))
twice1 f = f . f
twice2 :: (forall a. m a -> a) -> m (m b) -> b
twice2 f = f . f
so then some polymorphic functions can be applied twice:
> twice1 (:[]) 1
[[1]]
> twice2 head [[1]]
1
Can we go further?
The question was asked on Haskell-cafe 10 years ago but wasn't quite answered (with type classes it becomes a lot of boilerplate).
{-# LANGUAGE TypeFamilies, RankNTypes, UnicodeSyntax #-}
type family Fundep a :: *
type instance Fundep Bool = Int
type instance Fundep Int = String
...
twice :: ∀ a . (∀ c . c -> Fundep c) -> a -> Fundep (Fundep a)
twice f = f . f
Now, that won't be much use actually because you can't define a (meaningful) polymorphic function that works with any c. One possibility is to toss in a class constraint, like
class Showy a where
  type Fundep a :: *
  showish :: a -> Fundep a

instance Showy Bool where
  type Fundep Bool = Int
  showish = fromEnum

instance Showy Int where
  type Fundep Int = String
  showish = show

twice :: ∀ a b . (Showy a, b ~ Fundep a, Showy b) =>
         (∀ c . Showy c => c -> Fundep c) -> a -> Fundep b
twice f = f . f
main = print $ twice showish False
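For example, twice showish False first maps False to fromEnum False = 0 :: Int, then maps that to show 0 = "0", so main prints "0" (with the quotes, since print shows the String).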
You can't make twice generic enough even in a dependently typed setting, but it's possible with intersection types:
twice :: (a -> b /\ b -> c) -> a -> c
twice f x = f (f x)
Now whenever f :: a -> b and f :: b -> c typecheck, twice will typecheck too.
There is also a beautiful spell in Benjamin Pierce's thesis (I changed the syntax slightly):
self : (A /\ (A -> B)) -> B
self f = f f
So self-application is typeable with intersection types as well.
Related
In general, I'm wondering if there's a way to write a generic fold that generalizes over a universally quantified (forall) function, like:
f :: forall a. Data (D a) => D a -> b
given some datatype D for which instance Data (D a) (possibly with constraints on a). To be concrete, consider something even as simple as False `mkQ` isJust, or generally, a query on the constructor of a higher-kinded datatype. Similarly, consider a transformation mkT (const Nothing) that only affects one particular higher-kinded type.
Without explicit type signatures, they fail with No instance for Typeable a0, which is probably the monomorphism restriction at work. Fair enough. However, if we add explicit type signatures:
t :: GenericT
t = mkT (const Nothing :: forall a. Data a => Maybe a -> Maybe a)
q :: GenericQ Bool
q = False `mkQ` (isJust :: forall a. Data a => Maybe a -> Bool)
instead we are told that the forall type in the signatures is ambiguous:
Could not deduce (Typeable a0)
arising from a use of ‘mkT’
from the context: Data a
bound by the type signature for:
t :: GenericT
The type variable ‘a0’ is ambiguous
I can't wrap my head around this. If I'm really understanding correctly that a0 is the variable in t :: forall a0. Data a0 => a0 -> a0, how is it any more ambiguous than in, say, mkT not? If anything, I would've expected mkT to complain, because it is the one that interacts with isJust. Additionally, these functions are more polymorphic than ones that branch on concrete types.
I'm curious whether this is a limitation of proving the inner constraint in isJust :: Data a => ...; my understanding is that any Maybe a with a Data (Maybe a) instance must also satisfy Data a, by the instance constraint instance Data a => Data (Maybe a).
tldr: You need to create a different function.
mkT has the following signature:
mkT :: (Typeable a, Typeable b) => (a -> a) -> (b -> b)
And you want to apply it to a polymorphic function of type (forall x. Maybe x -> Maybe x). It is not possible: there is no way to instantiate a in (a -> a) to obtain (forall x. Maybe x -> Maybe x).
It's not just a limitation of the type system, the implementation of mkT wouldn't support such an instantiation either.
mkT simply compares concrete types a and b for equality at run time. But what you want is to be able to test whether b is equal to Maybe x for some x. The logic this requires is fundamentally more involved. But it is certainly still possible.
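For intuition, here is a minimal sketch of my own (equivalent in spirit to what the library does, not its actual source) of mkT in terms of cast; the name mkT' is made up:

import Data.Maybe (fromMaybe)
import Data.Typeable (Typeable, cast)

-- Apply f when the two concrete types coincide at run time; otherwise do nothing.
mkT' :: (Typeable a, Typeable b) => (a -> a) -> (b -> b)
mkT' f = fromMaybe id (cast f)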
Below, mkT1 first matches the type b against the App pattern to know whether b is some type application g y, and then tests equality of g and f:
{-# LANGUAGE ScopedTypeVariables, RankNTypes, TypeApplications, GADTs #-}
import Type.Reflection

-- N.B.: You can add constraints on (f x), but you must do the same for b.
mkT1 :: forall f b. (Typeable f, Typeable b) => (forall x. f x -> f x) -> (b -> b)
mkT1 h =
  case typeRep @b of
    App g y ->
      case eqTypeRep g (typeRep @f) of
        Just HRefl -> h
        _ -> id
    _ -> id
Compilable example with mkQ1 as well:
{-# LANGUAGE ScopedTypeVariables, RankNTypes, TypeApplications, GADTs #-}
import Type.Reflection

mkT1 :: forall f b. (Typeable f, Typeable b) => (forall x. f x -> f x) -> (b -> b)
mkT1 h =
  case typeRep @b of
    App g y ->
      case eqTypeRep g (typeRep @f) of
        Just HRefl -> h
        _ -> id
    _ -> id

mkQ1 :: forall f b q. (Typeable f, Typeable b) => (forall x. f x -> q) -> (b -> q) -> (b -> q)
mkQ1 h =
  case typeRep @b of
    App g y ->
      case eqTypeRep g (typeRep @f) of
        Just HRefl -> const h
        _ -> id
    _ -> id

f :: Maybe x -> String
f _ = "matches"

main :: IO ()
main = do
  print (mkQ1 f (\_ -> "doesn't match") (Just 3 :: Maybe Int)) -- matches
  print (mkQ1 f (\_ -> "doesn't match") (3 :: Int)) -- doesn't match
Data.Constraint.Forall provides some quantification over constraints; however, I fail to see how it can be used. Consider the following:
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE MultiParamTypeClasses #-}
module Forall where
import Prelude
import Data.Constraint.Forall
class Monoid (f a) => MonoidalFunctor f a
testfun :: Forall (MonoidalFunctor f) => (a -> f a) -> [a] -> f a
testfun = foldMap
testfun' :: Monoid (f a) => (a -> f a) -> [a] -> f a
testfun' = foldMap
I thought testfun would typecheck because Forall (MonoidalFunctor f) would work like forall a. MonoidalFunctor f a, implying forall a. Monoid (f a) because of the superclass constraint, but it doesn't.
Why does it not work and is there any workaround? I want to avoid having to write a lot of constraints like MyClass (f MyData) for different MyData types in my function where I know that any useful f will have instances for any f MyData anyway.
Use inst
inst :: forall p a. Forall p :- p a
inst witnesses that, if you have forall a. p a, then you can instantiate a to whatever you please and get p a out.
An entailment (:-) is
newtype a :- b = Sub (a => Dict b)
data Dict a = a => Dict
so, by pattern-matching on it, you can reveal the instance within it:
testfun :: forall f a. Forall (MonoidalFunctor f) => (a -> f a) -> [a] -> f a
testfun = case inst @(MonoidalFunctor f) @a of Sub Dict -> foldMap
(type applications/signatures necessary) or you can use (\\):
(\\) :: a => (b => r) -> a :- b -> r
testfun = foldMap \\ inst @(MonoidalFunctor f) @a
which reads "given that a is true, and a value of r that also needs b to be true, and a value that can prove b is true given a, make an r". If you rearrange it a bit
(\\) :: (b => c) -> (a :- b) -> (a => c)
it looks quite a bit like function composition.
The reason for this dance is simply that it's beyond the reach of GHC to infer that Forall c means it can derive c a for any a; after all, that's why the constraints library exists. So, you have to be a bit more explicit about it.
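For completeness, here is a usage sketch of my own (not part of the question or answer). It assumes a blanket instance, which additionally requires UndecidableInstances, and uses the testfun defined above:

-- Assumed blanket instance: any f such that Monoid (f a) holds for every a
-- is a MonoidalFunctor (needs UndecidableInstances).
instance Monoid (f a) => MonoidalFunctor f a

-- With that instance in scope, GHC should be able to discharge
-- Forall (MonoidalFunctor []), so testfun works at any element type:
singletons :: [Int]
singletons = testfun (: []) [1, 2, 3]   -- [1,2,3]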
I am an intermediate Schemer, but only a Haskell beginner. Here is my problem:
Suppose you have an associative binary operation, say (>>=). Is there a polyvariadic function p such that p (>>=) h g f e = h >>= g >>= f >>= e?
I am asking because a related question says it is possible if the binary operation takes inputs of the same type. I wonder if this can be generalized.
EDIT-1: I tried to modify the code in http://okmij.org/ftp/Haskell/vararg-fn.lhs (the section on variable numbers of variably typed arguments) with little progress.
EDIT-2: Simplified the code a bit.
{-# LANGUAGE FunctionalDependencies, FlexibleInstances #-}
module Main where

class Lfold f a b | b -> a where
  lfold :: (a -> (f a) -> (f a)) -> (f a) -> a -> b

instance Lfold f a (f a) where
  lfold op rid x = op x rid

instance Lfold f a b => Lfold f a (a -> b) where
  lfold op rid x y = lfold op (op x rid) y

test :: [String]
test = lfold (:) [] "a" "b" "c"

main :: IO ()
main = putStrLn $ show test
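(With these instances, the call threads the accumulator from left to right, consing each argument onto it, so test evaluates to (:) "c" ((:) "b" ((:) "a" [])) = ["c","b","a"], which is what main prints.)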
Yes, you can create such a function. It is very ugly however, and you will need to explicitly type every argument you are going to pass to make the compiler find the correct instance.
Starting from the polyvariadic function template you linked, I arrived at
{-# LANGUAGE FlexibleInstances, InstanceSigs, MultiParamTypeClasses #-}

class ImplicitChain m a r where
  p :: m a -> r

instance Monad m => ImplicitChain m a (m a) where
  p :: m a -> m a
  p x = x

instance (Monad m, ImplicitChain m b r) => ImplicitChain m a ((a -> m b) -> r) where
  p :: m a -> (a -> m b) -> r
  p x f = p (x >>= f)

h :: Int -> [Int]
h = replicate 2

g :: Int -> [Int]
g = (:[])

f :: Int -> [Int]
f = flip enumFromTo 2

test :: [Int]
test = p [1::Int] h g f
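(Here test elaborates to [1] >>= h >>= g >>= f, i.e. [1,1] >>= g >>= f = [1,1] >>= f = [1,2,1,2].)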
But you were asking whether we can be more generic, so that the binary operation is an argument as well. Yes:
{-# LANGUAGE FlexibleInstances, InstanceSigs, MultiParamTypeClasses, UndecidableInstances #-}

class ImplicitVariadic a b r where
  p :: (a -> b -> a) -> r

instance ImplicitVariadic a b (a -> a) where
  p :: (a -> b -> a) -> a -> a
  p _ x = x

instance (ImplicitVariadic a b (a -> r)) => ImplicitVariadic a b (a -> b -> r) where
  p :: (a -> b -> a) -> a -> b -> r
  p f x y = p f (f x y)
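A usage sketch of my own (not from the original answer): with every argument annotated and the result type fixed, instance resolution can pick the chain of instances, and the operation is folded from the left:

testSum :: Int
testSum = p (+) (1 :: Int) (2 :: Int) (3 :: Int)   -- ((1 + 2) + 3) = 6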
You can't (at least, not easily), because you need to know how many arguments you are getting ahead of time. Because all functions in Haskell are automatically curried, every function takes exactly one argument and returns one value. Even a simple binary operator takes one argument (the first operand) and returns a function that takes one argument (the second operand) and returns a result. That is,
a + b == (+) a b
== ((+) a) b
There is no way for your imaginary function p to know from its first argument how many other arguments are going to be given. That is, what should the type of p be?
p :: (a -> a -> a) -> a -- zero arguments?
p :: (a -> a -> a) -> a -> a -- one argument?
p :: (a -> a -> a) -> a -> a -> a -- two arguments?
p :: (a -> a -> a) -> a -> a -> a -> a -- three arguments?
Instead, the best you can do is use a fold, which takes an operation and a list of operands.
foldr (+) 0 [h, g, f, e] == h + g + f + e + 0 -- explicit first argument of 0
foldr1 (+) [h, g, f, e] == h + g + f + e -- assumes a list of at least one value
To see what I mean by "not easily", look at the implementation of printf in the Text.Printf module. Even that is not a good example, because the first argument carries information (the number of placeholders in the format string) that a binary operation alone does not.
I have defined a simple algebraic (concrete) data type, MyType:
data MyTpe = MyBool Bool | MyInt Int
... and I am trying to find a way to "convert" arbitrary functions (a->b), where a and b are either Bool or Int, into the associated (MyType->MyType) functions.
This does the job; it converts (a->b) into Maybe (MyType->MyType) (see [1] below):
import Data.Typeable
data MyTpe = MyBool Bool | MyInt Int deriving Show
liftMyType :: (Typeable a, Typeable b) => (a -> b) -> Maybe (MyTpe -> MyTpe)
liftMyType f = case castIntInt f of
  Just g -> Just $ liftIntInt g
  Nothing ->
    case castIntBool f of
      Just g -> Just $ liftIntBool g
      Nothing ->
        case castBoolInt f of
          Just g -> Just $ liftBoolInt g
          Nothing ->
            case castBoolBool f of
              Just g -> Just $ liftBoolBool g
              Nothing -> Nothing
castIntInt :: (Typeable a, Typeable b) => (a -> b) -> Maybe (Int -> Int)
castIntInt f = cast f :: Maybe (Int -> Int)
castIntBool :: (Typeable a, Typeable b) => (a -> b) -> Maybe (Int -> Bool)
castIntBool f = cast f :: Maybe (Int -> Bool)
castBoolInt :: (Typeable a, Typeable b) => (a -> b) -> Maybe (Bool -> Int)
castBoolInt f = cast f :: Maybe (Bool -> Int)
castBoolBool :: (Typeable a, Typeable b) => (a -> b) -> Maybe (Bool -> Bool)
castBoolBool f = cast f :: Maybe (Bool -> Bool)
liftIntInt :: (Int -> Int) -> (MyTpe -> MyTpe)
liftIntInt f (MyInt x) = MyInt (f x)
liftIntBool :: (Int -> Bool) -> (MyTpe -> MyTpe)
liftIntBool f (MyInt x) = MyBool (f x)
liftBoolInt :: (Bool -> Int) -> (MyTpe -> MyTpe)
liftBoolInt f (MyBool x) = MyInt (f x)
liftBoolBool :: (Bool -> Bool) -> (MyTpe -> MyTpe)
liftBoolBool f (MyBool x) = MyBool (f x)
However, that's quite ugly and does not scale well: what if I want to extend MyType this way?
data MyTpe = MyBool Bool | MyInt Int | MyString String
... Or what if I also want to convert (a1 -> a2 -> b), where a1, a2 and b are Bool or Int, into the associated (MyType->MyType->MyType) function?...
My question: is there a simple, more elegant and more Haskell-like way to handle this issue?
[1]: The liftIntInt function and the like are not defined over all MyType values (e.g. liftIntInt is not defined for a (MyBool x) element). The code is just a reduced example; in real life I handle this properly.
You're looking for a type
goal :: (a -> b) -> (MyType -> MyType)
for some "suitable" choices of a and b. These "suitable" choices are known statically as the definition of MyType is known statically.
What you're looking for is a typeclass. In particular, we'll want the MultiParamTypeClasses pragma
{-# LANGUAGE MultiParamTypeClasses #-}
class MapMyType a b where
  liftMyType :: (a -> b) -> (MyType -> MyType)
so now the full type for liftMyType is
liftMyType :: MapMyType a b => (a -> b) -> (MyType -> MyType)
and we can use the typeclass machinery to store the various instantiations of liftMyType having it be usable only and exactly when a and b can be resolved to be types where liftMyType is inhabited.
instance MapMyType Int Int where liftMyType f (MyInt x) = MyInt (f x)
instance MapMyType Int Bool where liftMyType f (MyInt x) = MyBool (f x)
instance MapMyType Bool Int where liftMyType f (MyBool x) = MyInt (f x)
instance MapMyType Bool Bool where liftMyType f (MyBool x) = MyBool (f x)
-- (as a side note, this is a dangerous function to instantiate since it
-- has incomplete pattern matching on its `MyType` typed argument)
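A small usage sketch of my own (assuming the MyType definition from the question): with both the argument type and the result type of the lifted function concrete, instance selection succeeds:

example :: MyType
example = liftMyType (even :: Int -> Bool) (MyInt 3)   -- MyBool False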
Now, it's worth mentioning that MultiParamTypeClasses often damages inference when used like this. In particular, if we're looking at a fragment of code liftMyType a b, we have to be able to infer the types of a and b on their own (e.g., without help from hints being passed "down" from the call to liftMyType); otherwise we'll get an ambiguous-instance compilation failure. Actually, what makes this especially bad is that we'll get that compilation failure if either a or b cannot be directly inferred.
In many circumstances, you would want to control this issue using FunctionalDependencies allowing a little more inference to "flow" between the two parameters and making ambiguity errors less common.
But in this case, I'd consider it to be a code smell. While the code above works (with caveat to the commented note) it has the feeling of a fragile solution.
To answer your question: "is there a simple, more elegant and more Haskell-like way to handle this issue?" There is no elegant or Haskell-like way to solve this problem. Haskell is not a dynamically typed language, and while the designers have managed to fake dynamic typing, you really should avoid it. This question makes it seem like you are trying to fix bad design somewhere with dynamic typing.
You can, however, write a simplified version of your code which is also extensible, using generics:
{-# LANGUAGE DeriveGeneric, ScopedTypeVariables #-} -- ScopedTypeVariables allows the explicit forall below
import GHC.Generics
import Data.Typeable

liftFun :: forall a b c . (Generic c, GLiftFun (Rep c), Typeable a, Typeable b)
        => (a -> b) -> c -> Maybe c
liftFun f x = do
  a <- gGet (from x)
  b <- gPut (f a)
  return (to b)

class GLiftFun f where
  gPut :: Typeable a => a -> Maybe (f q)
  gGet :: Typeable a => f q -> Maybe a

instance Typeable a => GLiftFun (K1 i a) where
  gPut = fmap K1 . cast
  gGet = cast . unK1

instance GLiftFun f => GLiftFun (M1 i c f) where
  gPut = fmap M1 . gPut
  gGet = gGet . unM1

instance (GLiftFun f, GLiftFun g) => GLiftFun (f :+: g) where
  gPut a | Just r <- gPut a = Just (L1 r)
         | Just r <- gPut a = Just (R1 r)
         | otherwise = Nothing
  gGet (L1 a) = gGet a
  gGet (R1 a) = gGet a
liftFun will work for any type which is a simple sum type, like Either or any type you define which is isomorphic to a series of nested Eithers. It probably has a sensible extension to product types as well. For example any of the following will work:
data MyType = MyBool Bool | MyInt Int deriving (Show, Generic)
data MyType2 = B2 Bool | I2 Int | S2 String deriving (Show, Generic)
type MyType3 = Either String Int
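A brief usage sketch of my own for the code above, with the expected results in comments:

main :: IO ()
main = do
  print (liftFun not (MyBool True))                          -- Just (MyBool False)
  print (liftFun not (MyInt 3))                              -- Nothing: the MyInt constructor has no Bool field
  print (liftFun (show :: Int -> String) (I2 5 :: MyType2))  -- Just (S2 "5")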
Here's how you could do it in a scalable way:
{-# LANGUAGE DeriveDataTypeable #-}
import Data.Typeable
data MyTpe = MyBool Bool | MyInt Int deriving (Show,Typeable)
d :: (Typeable a, Typeable b) => (a->b) -> Maybe (a -> MyTpe)
d f = case (cast f :: (Typeable a) => Maybe (a->Int)) of
  Just f -> Just $ MyInt . f
  _ -> case (cast f :: (Typeable a) => Maybe (a->Bool)) of
    Just f -> Just $ MyBool . f
    _ -> Nothing -- add more constructor matching here
e :: (Typeable a, Typeable b) => a -> Maybe (b->MyTpe) -> Maybe MyTpe
e x = (>>= \f -> fmap ($ x) (cast f :: (Typeable a => Maybe (a->MyTpe))))
liftMyType :: (Typeable a, Typeable b) => (a->b) -> MyTpe -> Maybe MyTpe
liftMyType f (MyInt x) = e x $ d f
liftMyType f (MyBool x) = e x $ d f
-- add more constructor matching here
...
> liftMyType ((+1) :: Int->Int) (MyInt 100)
> Just (MyInt 101)
You could even get the type you wanted, i.e. Maybe (MyTpe->MyTpe), so you don't need to pattern-match on the argument; just be aware that you won't get a total function MyTpe -> MyTpe even when the result is a Just.
liftMyType :: (Typeable a, Typeable b) => (a->b) -> Maybe (MyTpe -> MyTpe)
liftMyType = (>>= h) . d where
  h g = case (cast g :: Maybe (Int->MyTpe)) of
    Just g -> Just $ \(MyInt x) -> g x
    _ -> case (cast g :: Maybe (Bool->MyTpe)) of
      Just g -> Just $ \(MyBool x) -> g x
      _ -> Nothing -- add more type matching here
...
> fmap ($ MyInt 100) $ liftMyType ((+1) :: Int->Int)
> Just (MyInt 101)
We're used to having universally quantified types for polymorphic functions. Existentially quantified types are used much less often. How can we express existentially quantified types using universal type quantifiers?
It turns out that existential types are just a special case of Σ-types (sigma types). What are they?
Sigma types
Just as Π-types (pi types) generalise our ordinary function types, allowing the resulting type to depend on the value of its argument, Σ-types generalise pairs, allowing the type of second component to depend on the value of the first one.
In a made-up Haskell-like syntax, Σ-type would look like this:
data Sigma (a :: *) (b :: a -> *)
  = SigmaIntro
    { fst :: a
    , snd :: b fst
    }

-- special case is a non-dependent pair
type Pair a b = Sigma a (\_ -> b)
Assuming * :: * (i.e. the inconsistent Set : Set), we can define exists a. a as:
Sigma * (\a -> a)
The first component is a type and the second one is a value of that type. Some examples:
foo, bar :: Sigma * (\a -> a)
foo = SigmaIntro Int 4
bar = SigmaIntro Char 'a'
exists a. a is fairly useless - we have no idea what type is inside, so the only operations that can work with it are type-agnostic functions such as id or const. Let's extend it to exists a. F a or even exists a. Show a => F a. Given F :: * -> *, the first case is:
Sigma * F -- or Sigma * (\a -> F a)
The second one is a bit trickier. We cannot just take a Show a type class instance and put it somewhere inside. However, if we are given a Show a dictionary (of type ShowDictionary a), we can pack it with the actual value:
Sigma * (\a -> (ShowDictionary a, F a))
-- inside is a pair of "F a" and "Show a" dictionary
This is a bit inconvenient to work with and assumes that we have a Show dictionary around, but it works. Packing the dictionary along is actually what GHC does when compiling existential types, so we could define a shortcut to make it more convenient, but that's another story. As we will learn soon enough, the encoding doesn't actually suffer from this problem.
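The text treats ShowDictionary a as a given; one possible hand-rolled reification (my own illustration, not something defined above) is simply a record holding the class method:

-- A reified Show dictionary: the class method stored as a field.
newtype ShowDictionary a = ShowDictionary { showFun :: a -> String }

-- Capture the real instance into the hand-rolled dictionary.
showDict :: Show a => ShowDictionary a
showDict = ShowDictionary show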
Digression: thanks to constraint kinds, it's possible to reify a type class into a concrete data type. First, we need some language pragmas and one import:
{-# LANGUAGE ConstraintKinds, GADTs, KindSignatures #-}
import GHC.Exts -- for Constraint
GADTs already give us the option to pack a type class along with the constructor, for example:
data BST a where
  Nil :: BST a
  Node :: Ord a => a -> BST a -> BST a -> BST a
However, we can go one step further:
data Dict :: Constraint -> * where
  D :: ctx => Dict ctx
It works much like the BST example above: pattern matching on D :: Dict ctx gives us access to the whole context ctx:
show' :: Dict (Show a) -> a -> String
show' D = show
(.+) :: Dict (Num a) -> a -> a -> a
(.+) D = (+)
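For example, constructing D at a concrete type captures the instance, and the pattern match inside show' releases it again:

fortyTwo :: String
fortyTwo = show' (D :: Dict (Show Int)) 42   -- "42"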
We also get quite natural generalisation for existential types that quantify over more type variables, such as exists a b. F a b.
Sigma * (\a -> Sigma * (\b -> F a b))
-- or we could use Sigma just once
Sigma (*, *) (\(a, b) -> F a b)
-- though this looks a bit strange
The encoding
Now, the question is: can we encode Σ-types with just Π-types? If yes, then the existential type encoding is just a special case. In all glory, I present you the actual encoding:
newtype SigmaEncoded (a :: *) (b :: a -> *)
= SigmaEncoded (forall r. ((x :: a) -> b x -> r) -> r)
There are some interesting parallels. Since dependent pairs represent existential quantification and from classical logic we know that:
(∃x)R(x) ⇔ ¬(∀x)¬R(x) ⇔ (∀x)(R(x) → ⊥) → ⊥
forall r. r is almost ⊥, so with a bit of rewriting we get:
(∀x)(R(x) → r) → r
And finally, representing universal quantification as a dependent function:
forall r. ((x :: a) -> R x -> r) -> r
Also, let's take a look at the type of Church-encoded pairs. We get a very similar looking type:
Pair a b ~ forall r. (a -> b -> r) -> r
We just have to express the fact that b may depend on the value of a, which we can do by using dependent function. And again, we get the same type.
The corresponding encoding/decoding functions are:
encode :: Sigma a b -> SigmaEncoded a b
encode (SigmaIntro a b) = SigmaEncoded (\f -> f a b)
decode :: SigmaEncoded a b -> Sigma a b
decode (SigmaEncoded f) = f SigmaIntro
-- recall that SigmaIntro is a constructor
The special case actually simplifies things enough that it becomes expressible in Haskell, let's take a look:
newtype ExistsEncoded (F :: * -> *)
= ExistsEncoded (forall r. ((x :: *) -> (ShowDictionary x, F x) -> r) -> r)
-- simplify a bit
= ExistsEncoded (forall r. (forall x. (ShowDictionary x, F x) -> r) -> r)
-- curry (ShowDictionary x, F x) -> r
= ExistsEncoded (forall r. (forall x. ShowDictionary x -> F x -> r) -> r)
-- and use the actual type class
= ExistsEncoded (forall r. (forall x. Show x => F x -> r) -> r)
Note that we can view f :: (x :: *) -> x -> x as f :: forall x. x -> x. That is, a function with extra * argument behaves as a polymorphic function.
And some examples:
showEx :: ExistsEncoded [] -> String
showEx (ExistsEncoded f) = f show
someList :: ExistsEncoded []
someList = ExistsEncoded $ \f -> f [1]
showEx someList == "[1]"
Notice that someList is actually constructed via encode, but we dropped the a argument. That's because Haskell will infer what x in the forall x. part you actually mean.
From Π to Σ?
Strangely enough (although out of the scope of this question), you can encode Π-types via Σ-types and regular function types:
newtype PiEncoded (a :: *) (b :: a -> *)
= PiEncoded (forall r. Sigma a (\x -> b x -> r) -> r)
-- \x -> is lambda introduction, b x -> r is a function type
-- a bit confusing, I know
encode :: ((x :: a) -> b x) -> PiEncoded a b
encode f = PiEncoded $ \sigma -> case sigma of
  SigmaIntro a bToR -> bToR (f a)
decode :: PiEncoded a b -> (x :: a) -> b x
decode (PiEncoded f) x = f (SigmaIntro x (\b -> b))
I found an answer in Proofs and Types by Jean-Yves Girard, Yves Lafont and Paul Taylor.
Imagine we have some one-argument type t :: * -> * and construct an existential type that holds t a for some a: exists a. t a. What can we do with such a type? In order to compute something out of it we need a function that can accept t a for arbitrary a, that means a function of type forall a. t a -> b. Knowing this, we can encode an existential type simply as a function that takes functions of type forall a. t a -> b, supplies the existential value to them and returns the result b:
{-# LANGUAGE RankNTypes #-}
newtype Exists t = Exists (forall b. (forall a. t a -> b) -> b)
Creating an existential value is now easy:
exists :: t a -> Exists t
exists x = Exists (\f -> f x)
And if we want to unpack the existential value, we just apply its content to a function that produces the result:
unexists :: (forall a. t a -> b) -> Exists t -> b
unexists f (Exists e) = e f
However, purely existential types are of very little use. We cannot do anything reasonable with a value we know nothing about. More often we need an existential type with a type class constraint. The procedure is just the same, we just add a type class constraint for a. For example:
newtype ExistsShow t = ExistsShow (forall b. (forall a. Show a => t a -> b) -> b)
existsShow :: Show a => t a -> ExistsShow t
existsShow x = ExistsShow (\f -> f x)
unexistsShow :: (forall a. Show a => t a -> b) -> ExistsShow t -> b
unexistsShow f (ExistsShow e) = e f
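For example (a small usage sketch of my own), we can pack a Maybe Int and consume it with show, since Show (Maybe a) follows from the packed Show a constraint:

someShowable :: ExistsShow Maybe
someShowable = existsShow (Just (3 :: Int))

demo :: String
demo = unexistsShow show someShowable   -- "Just 3"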
Note: Using existential quantification in functional programs is often considered a code-smell. It can indicate that we haven't liberated ourselves from OO thinking.