How to encode the axiom of choice in Haskell/Functional programming? - haskell

> {-# LANGUAGE RankNTypes #-}
I was wondering if there is a way to represent the axiom of choice in Haskell and/or some other functional programming language.
As we know, false is represented by the type with no values (Void in Haskell).
> import Data.Void
We can represent negation like so
> type Not a = a -> Void
We can express the law of excluded middle for a type a like so
> type LEM a = Either a (Not a)
This means we can make classical logic into a Reader monad
> type Classical a = (forall r. LEM r) -> a
We can, for example, do double negation elimination in it
> doubleneg :: Classical (Not (Not a) -> a)
> doubleneg = \lem nna -> either id (absurd . nna) lem
We can also have a monad where the law of excluded middle fails
> type AntiClassical a = Not (forall r. LEM r) -> a
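As a sanity check (not part of the original question), the double negation of LEM at any single type is constructively provable, so the hypothesis of AntiClassical can only be negated for the quantified ∀r version, never pointwise. A minimal sketch:

```haskell
import Data.Void

type Not a = a -> Void
type LEM a = Either a (Not a)

-- Not (Not (LEM a)): feed the refutation k a Right whose payload
-- invokes k again on a Left once an `a` shows up.
dnLEM :: Not (Not (LEM a))
dnLEM k = k (Right (\a -> k (Left a)))
```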
Now the question is, how can we make a type that represents the axiom of choice? The axiom of choice talks about sets of sets. This implies we would need types of types or something. Is there something equivalent to the axiom of choice that could be encoded? (If you can encode the negation, just combine it with the law of excluded middle). Maybe trickery would allow us to have types of types.
Note: Ideally, it should be a version of the axiom of choice that works with Diaconescu's theorem.

This is just a hint.
The axiom of choice can be expressed as:
If for every x : A there's a y : B such that the property P x y holds, then there's a choice function f : A -> B such that, for all x : A we have P x (f x).
More precisely
choice : {A B : Set} (P : A -> B -> Set) ->
         ((x : A) -> Σ B (λ y -> P x y)) ->
         Σ (A -> B) (λ f -> (x : A) -> P x (f x))
choice P h = ?
given
data Σ (A : Set) (P : A -> Set) : Set where
  _,_ : (x : A) -> P x -> Σ A P
Above, choice is indeed provable. Indeed, h assigns to each x a (dependent) pair whose first component y is an element of B and whose second component is a proof that y indeed satisfies P x y. By contrast, the f in the conclusion must assign to x only y, without any proof.
As you see, obtaining the choice function f from h is just a matter of discarding the proof component in the pair.
There's no need to extend Agda with LEM or any other axiom to prove this. The above proof is entirely constructive.
If we were using Coq, note that Coq forbids eliminating a proof (such as h : ... -> Prop) to construct a non-proof (f), so translating this into Coq directly fails. (This restriction is what makes program extraction possible.) However, if we avoid Coq's Prop sort and use Type directly, then the above can be translated.
You may want to use the following projections for this exercise:
pr1 : ∀ {A : Set} {P} -> Σ A P -> A
pr1 (x , _) = x
pr2 : ∀ {A : Set} {P} -> (p : Σ A P) -> P (pr1 p)
pr2 (_ , y) = y
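The same proof-discarding move can be sketched in plain Haskell if we flatten the dependent pair Σ to an ordinary pair; the names below are illustrative, not from the original answer:

```haskell
-- Non-dependent shadow of the Agda proof: the hypothesis hands us a
-- pair (witness, proof) for every x, and `choice` just projects.
choice :: (a -> (b, p)) -> (a -> b, a -> p)
choice h = (fst . h, snd . h)
```

Erasing the second component is exactly the step that Coq's Prop/Type distinction blocks.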

Related

Can one simplify the Codensity monad on Maybe?

The codensity monad on a type constructor f is defined by:
newtype C f a = C { unC ∷ forall r. (a → f r) → f r }
It is well known that C f is a monad for any type constructor f (not necessarily covariant). The codensity monad is useful in several ways but it is a complicated type that contains a higher-order function under a universal type quantifier.
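For reference, the monad instance really does need no constraint on f; this is the standard construction, not specific to this question:

```haskell
{-# LANGUAGE RankNTypes #-}

newtype C f a = C { unC :: forall r. (a -> f r) -> f r }

instance Functor (C f) where
  fmap g (C m) = C (\k -> m (k . g))

instance Applicative (C f) where
  pure a = C (\k -> k a)
  C mf <*> C ma = C (\k -> mf (\g -> ma (k . g)))

instance Monad (C f) where
  C m >>= g = C (\k -> m (\a -> unC (g a) k))
```

Note that f is never mapped over, only mentioned in the callback type, which is why covariance is not required.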
My question is, for what f can one show that C f is equivalent to a simpler monad that is defined without type quantifiers?
Some examples where a simplification is possible:
f a = a (the identity functor), for which C f a = a.
f a = r -> a (the Reader monad), for which C f is the State monad (C f a = r -> (a, r)).
f a = (w, a) (the Writer monad), for which C f a = ((a -> w) -> a, (a -> w) -> w)
f a = a -> s (a contravariant functor) and then C f a = (a -> s) -> s (the continuation monad).
f a = a -> a (neither covariant nor contravariant), and then C f a = List a
In the first four of those cases, the type equivalence can be derived from the Yoneda identity: forall r. (a -> r) -> F r = F a when F is a covariant functor. The last case is derived via the Church encoding of the inductive type List.
I looked at some other examples and found that in most cases C f does not seem to be equivalent to anything simpler.
Even if we just take f a = Maybe a the resulting type does not seem to be equivalent to a simpler type expression:
newtype CMaybe a = CMaybe { unCMaybe ∷ forall r. (a → Maybe r) → Maybe r }
The Yoneda identity cannot be used here. My best guess (I have no proof so far) is that CMaybe a = (a -> Bool) -> Bool with some additional laws imposed on the functions of that type. Imposing equations on values can be adequately expressed only within a dependently-typed language.
Can one simplify the codensity monad on Maybe?
Are there other examples of type constructors f where C f can be simplified to a type without type quantifiers?
As mentioned in the comments, a value f of type C Maybe a gives back a bit more information than a boolean, because the r it returns identifies a single input to the callback: f k chooses an x such that k x is Just.
Simplifying the callback type from a -> Maybe r to a -> Bool, we obtain the following dependent function type, written in Agda and in Coq respectively for reference:
-- Agda
(∀ {r} → (a → Maybe r) → Maybe r)
≡
((k : a → Bool) → Maybe (∃[ x ] k x ≡ true))
(* Coq *)
(forall r, (a -> option r) -> option r)
=
(forall (k : a -> bool), option { x : a | k x = true })
Proof of equivalence in Agda: https://gist.github.com/Lysxia/79846cce777f0394a6f69d84576a325b
This proves the equivalence of ∀ {r} → (a → Maybe r) → Maybe r and a type without a quantifier: (k : a → Bool) → Maybe (∃[ x ] k x ≡ true), which is equivalent to q :: (a → Bool) → Maybe a with the restriction that q p equals Just x only if p x = true.
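The two directions of that equivalence can be sketched in Haskell (toSearch and fromSearch are my names; the linked Agda gist is what actually proves they are mutually inverse once the restriction on q is imposed):

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.Maybe (isJust)

newtype CMaybe a = CMaybe { unCMaybe :: forall r. (a -> Maybe r) -> Maybe r }

-- one direction: a CMaybe is at least a "search" function
toSearch :: CMaybe a -> (a -> Bool) -> Maybe a
toSearch (CMaybe m) p = m (\x -> if p x then Just x else Nothing)

-- back: find a witness for the boolean shadow of k, then replay it
-- through the real callback k
fromSearch :: ((a -> Bool) -> Maybe a) -> CMaybe a
fromSearch s = CMaybe (\k -> s (isJust . k) >>= k)
```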
Note that if a is finite, then C Maybe a is also finite. One approach to the problem then is to compute the corresponding cardinality function.
You can reinterpret the expression of the cardinality as a type, giving a solution to your problem for types of the form Finite a -> C f a.
You can look it up in the On-Line Encyclopedia of Integer Sequences (OEIS) to find alternative combinatorial interpretations. Sadly, the relevant sequence doesn't have much information:
Product_{j=1..n} j^C(n-1,j-1)
-- https://oeis.org/A064320
If you could find a simpler type for C f a, with only sums, products (not indexed by the cardinality of a), and exponentials, this may correspond to a non-trivial combinatorial identity. Conversely, difficulty in finding such a combinatorial identity provides compelling evidence for the non-existence of simple solutions. It also gives a quick way to test a candidate simplification for validity, by comparing its cardinality with the expected one.
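The quoted formula is easy to implement for spot checks; cardCMaybe is an illustrative name, and the formula itself is taken on trust from the OEIS entry:

```haskell
-- cardinality of C Maybe a for |a| = n, per the formula quoted above:
-- Product_{j=1..n} j^C(n-1, j-1)
cardCMaybe :: Integer -> Integer
cardCMaybe n = product [ j ^ binom (n - 1) (j - 1) | j <- [1 .. n] ]
  where
    -- binomial coefficient C(m, k)
    binom m k = product [m - k + 1 .. m] `div` product [1 .. k]
```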

Why won't this simple Morte program typecheck?

I'm trying to better understand the Calculus of Constructions through Morte. My first attempt was to apply the identity function to itself. However,
(
λ (idType : *) →
λ (id : idType) →
(id idType))
(∀(t : *) → ∀(x : t) → t)
(λ(a : *) → λ(x : a) → x)
That program fails to compile with the error:
Context:
idType : *
id : idType
Expression: id idType
Error: Only functions may be applied to values
That doesn't make sense to me, since id is the function (λ(a : *) → λ(x : a) → x), of type idType == (∀(t : *) → t → t). Why am I getting this error?
Your
T = (λ (idType : *) →
λ (id : idType) →
(id idType))
is ill-typed. Otherwise T nat 4 would also type check (pretending we have naturals to help intuition).
If you want to write an application function (like Haskell's $) you can use
apply =
(λ (a b : *) →
λ (f : a -> b) →
λ (x : a) →
f x)
Note that the above only applies to non-dependent fs. In the dependent case, b can depend on the actual value of type a, making things rather more complex, since b is now a function.
applyDep =
(λ (a : *) →
λ (b : a -> *) →
λ (f : ∀(x : a) -> b x) →
λ (x : a) →
f x)
An example (simplified syntax):
applyDep
Bool
(λ (x : Bool) -> if x then Int else Char)
(λ (x : Bool) -> if x then 4 else 'd')
True
Above I am quite sloppy with the dependent function (the last lambda), since the if is ill-typed (the branches have different types), but you might get the rough idea. To write it precisely, I would need something like the dependent match/case Coq has (or to rely on a dependent eliminator for Bool):
fun x: Bool =>
match x as y return (if y then Int else Char) with
| True => 3
| False => 'a'
end
In the above "if", I had to make it clear that the types of the two branches differ (Int vs Char), yet the expression can be typed if we take its type to be the result of g x, where g = fun y => if y then Int else Char. Basically, the result type now depends on the value x.
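In Haskell, the same dependent match can be imitated with a singleton type and a type family; this is a sketch with made-up names (SBool, If, choose), not part of the original answer:

```haskell
{-# LANGUAGE GADTs, DataKinds, TypeFamilies #-}
import Data.Kind (Type)

-- a singleton connecting the runtime Bool to its type-level copy
data SBool (b :: Bool) where
  STrue  :: SBool 'True
  SFalse :: SBool 'False

-- the `if y then Int else Char` return type from the Coq match
type family If (b :: Bool) (t :: Type) (e :: Type) :: Type where
  If 'True  t e = t
  If 'False t e = e

-- the dependent match: the result type depends on the scrutinee
choose :: SBool b -> If b Int Char
choose STrue  = 3
choose SFalse = 'a'
```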
The problem here is that with Church-style typing (here is a nice blogpost and some discussion) everything must be well-typed from the beginning: if you have a well-typed f and a well-typed x, then you can apply f to x (if types match). If f is not well-typed, then it's not a legal term and you have an error, even if it is possible to assign f x a type.
Your λ (idType : *) → λ (id : idType) → (id idType) is not well-typed: id is a term of type idType and it's not a function that receives *, so you can't apply it to idType.

Generalization of Exponential Type

How (if at all) does the exponential interpretation of (->) (a -> b as b^a) generalize to categories other than Hask/Set? For example, it would appear that the interpretation for the category of non-deterministic functions is roughly Kleisli [] a b as 2^(a×b) (a -> b -> Bool).
The notion of exponential can be defined in general terms, beyond Hask/Set. A category with exponentials and products is called a cartesian closed category. This is a key notion in theoretical computer science since each c.c. category is essentially a model of the typed lambda calculus.
Roughly, in a cartesian closed category for any pair of objects a,b there exist:
a product object (a * b), and
an exponential object (b^a)
with morphisms
eval : (b^a)*a -> b (in Haskell: \(f,x) -> f x, AKA apply)
for any f : (a*b)->c, there exists Lf : a -> (c^b) (in Haskell: curry f)
satisfying the equation they enjoy in the lambda calculus (essentially the β-rule), i.e., if f : (a*b)->c, then:
f = (Lf * id_b) ; eval
In Haskell, the last equation is:
f = \(x :: a, y :: b) -> apply (curry f x, id y) where apply (g,z) = g z
or, using arrows,
f = (curry f *** id) >>> apply where apply (g,z) = g z
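The arrow form of the equation can be checked pointwise on any sample f; everything below uses only Control.Arrow, with f chosen arbitrarily for illustration:

```haskell
import Control.Arrow ((***), (>>>))

-- the eval morphism, specialized to Hask
apply :: (b -> c, b) -> c
apply (g, z) = g z

-- an arbitrary sample f : (a*b) -> c
f :: (Int, Int) -> Int
f (x, y) = 10 * x + y

-- the right-hand side of the equation f = (curry f *** id) >>> apply
f' :: (Int, Int) -> Int
f' = (curry f *** id) >>> apply
```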

Type signatures that never make sense

Consider
(a->a) -> [a] -> Bool
Is there any meaningful definition for this signature? That is, a definition that does not simply ignore the argument?
x -> [a] -> Bool
It seems there are many such signatures that can be ruled out immediately.
Carsten König suggested in a comment to use the free theorem. Let's try that.
Prime the cannon
We start by generating the free theorem corresponding to the type (a->a) -> [a] -> Bool. This is a property that every function with that type must satisfy, as established in Wadler's famous paper Theorems for Free!.
forall t1,t2 in TYPES, R in REL(t1,t2).
  forall p :: t1 -> t1.
  forall q :: t2 -> t2.
    (forall (x, y) in R. (p x, q y) in R)
    ==> (forall (z, v) in lift{[]}(R). f_{t1} p z = f_{t2} q v)

lift{[]}(R) = {([], [])}
            u {(x : xs, y : ys) | ((x, y) in R) && ((xs, ys) in lift{[]}(R))}
An example
To better understand the theorem above, let's run over a concrete example. To use the theorem, we need to take any two types t1,t2, so we can pick t1=Bool and t2=Int.
Then we need to choose a function p :: Bool -> Bool (say p=not), and a function q :: Int -> Int (say q = \x -> 1-x).
Now, we need to define a relation R between Bools and Ints. Let's take the standard boolean <-> integer correspondence, i.e.:
R = {(False,0),(True,1)}
(the above is a one-one correspondence, but it does not have to be, in general).
Now we need to check that (forall (x, y) in R. (p x, q y) in R). We only have two cases to check for (x,y) in R:
Case (x,y) = (False,0): we verify that (not False, 1-0) = (True, 1) in R (ok!)
Case (x,y) = (True ,1): we verify that (not True , 1-1) = (False,0) in R (ok!)
So far so good. Now we need to "lift" the relation so to work on lists: e.g.
[True,False,False,False] is in relation with [1,0,0,0]
This extended relation is the one named lift{[]}(R) above.
Finally, the theorem states that, for any function f :: (a->a) -> [a] -> Bool we must have
f_Bool not [True,False,False,False] = f_Int (\x->1-x) [1,0,0,0]
where above f_Bool simply makes it explicit that f is used in the specialised case in which a=Bool.
The power of this lies in that we do not know what the code of f actually is. We are deducing what f must satisfy by only looking at its polymorphic type.
Since we get types from type inference, and we can turn types into theorems, we really get "theorems for free!".
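To see the concrete instance in action, pick any f of the right type; the sample below is mine, chosen to inspect only the list's length (which, as the rest of the answer proves, is all such an f can do):

```haskell
-- a sample f with the type in question; it inspects only the length
f :: (a -> a) -> [a] -> Bool
f _ xs = even (length xs)

-- the two sides of the free-theorem instance from the example
lhs, rhs :: Bool
lhs = f not [True, False, False, False]
rhs = f (\x -> 1 - x) [1, 0, 0, 0 :: Int]
```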
Back to the original goal
We want to prove that f does not use its first argument, and that it does not care about its second list argument, either, except for its length.
For this, take R to be the universally true relation. Then lift{[]}(R) is a relation which relates two lists iff they have the same length.
The theorem then implies:
forall t1,t2 in TYPES.
forall p :: t1 -> t1.
forall q :: t2 -> t2.
forall z :: [t1].
forall v :: [t2].
length z = length v ==> f_{t1} p z = f_{t2} q v
Hence, f ignores the first argument and only cares about the length of the second one.
QED
You can't do anything interesting with x on its own.
You can do stuff with [x]; for example, you can check whether the list is empty. So, for example,
foo :: (a -> a) -> [a] -> Bool
foo _ [] = True
foo _ (_:_) = False
bar :: x -> [a] -> Bool
bar _ [] = True
bar _ (_:_) = False
If you have an x and a function that turns an x into something else, you can do interesting stuff:
big :: (x -> Int) -> x -> Bool
big f n = f n > 10
If x belongs to some type class, then you can use all the methods of that class on it. (This is really a special-case of the previous one.)
double :: Num x => x -> x
double = (2*)
On the other hand, there are plenty of type signatures for which no valid functions exist:
magic :: x -> y
magic = -- erm... good luck with that!
I read somewhere that the type signatures involving only type variables for which a real function exists are exactly the theorems provable in intuitionistic logic. (This is the Curry–Howard correspondence, and it's quite interesting.)
f1 :: (x -> y) -> x -> y
-- Given that X implies Y, and given that X is true, then Y is true.
-- Well, duh.
f2 :: Either (x -> y) (x -> z) -> x -> Either y z
-- Given that X implies Y or X implies Z, and given X, then either Y or Z is true.
-- Again, duh.
f3 :: x -> y
-- Given that X is true, then any Y is true.
-- Erm, no. Just... no.

What are the values of a polymorphically encoded recursive algebraic data type?

The following question relates to Recursive algebraic data types via polymorphism in Haskell.
Recursive algebraic data types can be realized in any language with the capabilities of System F using universal parametric polymorphism. For example, the type of natural numbers can be introduced (in Haskell) as
newtype Nat = Nat { runNat :: forall t. (t -> (t -> t) -> t) }
with the 'usual' natural number n being realized as
\ x0 f -> f(f(...(f x0)...))
with n iterations of f used.
Similarly, the type of Booleans can be introduced as
newtype Bool = Bool { runBool :: forall t. t -> t -> t }
with the expected values 'true' and 'false' being realized as
true = \ t f -> t
false = \ t f -> f
Q: Are all terms of type Bool or Nat or any other potentially recursive algebraic data type (encoded in this way) of the above form, up to some reduction rules of operational semantics?
Example 1 (Natural numbers): Is any term of type forall t. t -> (t -> t) -> t 'equivalent' in some sense to a term of the form \ x0 f -> f (f ( ... (f x0) ... ))?
Example 2 (Booleans): Is any term of type forall t. t -> t -> t 'equivalent' in some sense to either \ t f -> t or \ t f -> f?
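For concreteness, the encoding and the 'usual' numerals can be written out, with a decoder obtained by instantiating t = Int (zero, suc, and toInt are illustrative names):

```haskell
{-# LANGUAGE RankNTypes #-}

newtype Nat = Nat { runNat :: forall t. t -> (t -> t) -> t }

-- zero applies f no times at all
zero :: Nat
zero = Nat (\x0 _ -> x0)

-- successor applies f once more
suc :: Nat -> Nat
suc n = Nat (\x0 f -> f (runNat n x0 f))

-- decode by instantiating t = Int, counting the f applications
toInt :: Nat -> Int
toInt n = runNat n 0 (+ 1)
```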
Addendum (internal version): In case the language under consideration is even capable of expressing propositional equality, this meta-mathematical question could be internalized as follows, and I would be very happy if someone would come up with a solution for it:
For any functor m we can define the universal module and some decoding-encoding function on it as follows:
type ModStr m t = m t -> t
newtype UnivMod m = UnivMod { univProp :: forall t. (ModStr m t) -> t }
classifyingMap :: forall m. forall t. (ModStr m t) -> (UnivMod m -> t)
classifyingMap f = \ x -> (univProp x) f
univModStr :: (Functor m) => ModStr m (UnivMod m)
univModStr = \ f -> UnivMod $ \ g -> g (fmap (classifyingMap g) f)
dec_enc :: (Functor m) => UnivMod m -> UnivMod m
dec_enc x = (univProp x) univModStr
Q: In case the language is capable of expressing this: is the equality type dec_enc = id inhabited?
In System F (AKA λ2), all inhabitants of ∀α.α→α→α are indeed λ-equal to K or K*.
First, if M : ∀α.α→α→α then it has a normal form N (since System F is normalizing), and by the subject reduction theorem (see Barendregt: Lambda Calculi with Types) also N : ∀α.α→α→α.
Let's examine what these normal forms can look like. (We'll be using the Generation Lemma for λ2; see Barendregt's book for formal details.)
If N is a normal form, then N (and each of its subexpressions) must be in head normal form, that is, an expression of the form λx1 ... xn. y P1 ... Pk, where n and/or k can also be 0.
For the case of N, there must be at least one λ, because initially we don't have any variable bound in the typing context that would take the place of y. So N = λx.U and x:α |- U:α→α.
Now again there must be at least one λ in the case of U, because if U were just y P1 ... Pk then y would have a function type (even for k=0 we'd need y:α→α), but we have just x:α in the context. So N = λxy.V and x:α, y:α |- V:α.
But V can't be a λ-abstraction, because then it would have a function type τ→σ. So V must be of the form z P1 ... Pk, but since we don't have any variable of function type in the context, k must be 0, and therefore V can only be x or y.
So there are only two terms in normal form of type ∀α.α→α→α: λxy.x and λxy.y and all other terms of this type are β-equal to one of these.
Using similar reasoning we can prove that all inhabitants of ∀α.α→(α→α)→α are β-equal to a Church numeral. (And I think that for type ∀α.(α→α)→α→α the situation is slightly worse; we also need η-equality, as λf.f and λfx.fx correspond to 1, but aren't β-equal, just βη-equal.)
If we disregard bottoms and unsafe stuff, then the only thing you can do universally with functions a -> a is compose them. However, that doesn't quite stop us at finite f (f ( ... (f x0) ... )) expressions: we also have the infinite composition infty x f = f $ infty x f.
Similarly, the only non-recursive boolean values are indeed \t _ -> t and \_ f -> f, but you can also tie knots here, like
blarg t f = blarg (blarg t f) (blarg f t)
