Coq - Rewriting an FMap Within a Relation

I am new to Coq, and was hoping that someone with more experience could help me with a problem I am facing.
I have defined a relation to represent the evaluation of a program in an imaginary programming language. The goal of the language is to unify function calls and a constrained subset of macro invocations under a single semantics. Here is the definition of the relation, with its first constructor (I am omitting the rest to save space and avoid unnecessary details).
Inductive EvalExpr:
  store ->            (* Store, mapping L-values to R-values *)
  environment ->      (* Local environment, mapping function-local variable names to L-values *)
  environment ->      (* Global environment, mapping global variable names to L-values *)
  function_table ->   (* Mapping function names to function definitions *)
  macro_table ->      (* Mapping macro names to macro definitions *)
  expr ->             (* The expression to evaluate *)
  Z ->                (* The value the expression evaluates to *)
  store ->            (* The final state of the program store after evaluation *)
  Prop :=
  (* Numerals evaluate to their integer representation and do not
     change the store *)
  | E_Num : forall S E G F M z,
      EvalExpr S E G F M (Num z) z S
...
The mappings are defined as follows:
Module Import NatMap := FMapList.Make(OrderedTypeEx.Nat_as_OT).
Module Import StringMap := FMapList.Make(OrderedTypeEx.String_as_OT).
Definition store : Type := NatMap.t Z.
Definition environment : Type := StringMap.t nat.
Definition function_table : Type := StringMap.t function_definition.
Definition macro_table : Type := StringMap.t macro_definition.
I do not think the definitions of the other types are relevant to this question, but I can add them if needed.
Now when trying to prove the following lemma, which seems intuitively obvious, I get stuck:
Lemma S_Equal_EvalExpr_EvalExpr : forall S1 S2,
  NatMap.Equal S1 S2 ->
  forall E G F M e v S',
    EvalExpr S1 E G F M e v S' <-> EvalExpr S2 E G F M e v S'.
Proof.
  intros. split.
  (* -> *)
  - intros. induction H0.
    + (* Num *)
      Fail constructor.
Abort.
If I were able to rewrite S2 for S1 in the goal, the proof would be trivial; however, if I try to do this, I get the following error:
H : NatMap.Equal S S2
(* Other premises *)
---------------------
EvalExpr S2 E G F M (Num z) z S
rewrite <- H.
Found no subterm matching "NatMap.find (elt:=Z) ?M2433 S2" in the current goal.
I think this has to do with finite mappings being abstract types, and thus not being rewritable like concrete types are. However, I noticed that I can rewrite mappings within other equations/relations found in Coq.FSets.FMapFacts. How would I tell Coq to let me rewrite mapping types inside my EvalExpr relation?
Update: Here is a gist containing a minimal working example of my problem. The definitions of some of the mapping types have been altered for brevity, but the problem is the same.

The issue here is that the relation NatMap.Equal, which says that two maps have the same bindings, is not the same as the notion of equality in Coq's logic, =. While it is always possible to rewrite with =, rewriting with some other relation R is only possible if you can prove that the property you are trying to show is compatible with it. This is already done for the relations in FMap, which is why rewriting there works.
You have two options:
Replace FMap with an implementation for which the intended map equality coincides with =, a property usually known as extensionality. There are many libraries that provide such data structures, including my own extructures, but also finmap and std++. Then, you never need to worry about a custom equality relation; all the important properties of maps work with =.
Keep FMap, but use the generalized rewriting mechanism to allow rewriting with FMap.Equal. To do this, you probably need to modify the definition of your execution relation so that it is compatible with FMap.Equal. Unfortunately, I believe the only way to do this is by explicitly adding equality hypotheses everywhere, e.g.
Definition EvalExpr' S E G F M e v S' :=
  exists S0 S0',
    NatMap.Equal S S0 /\
    NatMap.Equal S' S0' /\
    EvalExpr S0 E G F M e v S0'.
Since this will pollute your definitions, I would not recommend this approach.

Arthur's answer explains the problem very well.
One other (?) way to do it could be to modify your Inductive definition of EvalExpr to explicitly use the equality that you care about (NatMap.Equal instead of Coq's built-in =). You will have to say in each rule that it is enough for two maps to be Equal.
For example:
| E_Num : forall S E G F M z,
    EvalExpr S E G F M (Num z) z S
becomes
| E_Num : forall S1 S2 E G F M z,
    NatMap.Equal S1 S2 ->
    EvalExpr S1 E G F M (Num z) z S2
Then when you want to prove your lemma and apply the constructor, you will have to provide a proof that S1 and S2 are Equal (you'll have to reason a little, using the fact that NatMap.Equal is an equivalence relation).

Related

Free theorem for fmap

Consider the following wrapper:
newtype F a = Wrap { unwrap :: Int }
I want to disprove (as an exercise to wrap my head around this interesting post) that there's a legitimate Functor F instance which allows us to apply functions of Int -> Int type to the actual contents and to ignore all other functions (i.e. fmap nonIntInt = id).
I believe this should be done with a free theorem for fmap (which I read here):
for given f, g, h and k, such that g . f = k . h: $map g . fmap f = fmap k . $map h, where $map is the natural map for the given constructor.
What defines a natural map? Am I right to assume that it is a simple flip const for F?
As far as I get it: $map f is what we denote as Ff in category theory. Thus, in a categorical sense, we simply want something along the lines of the following diagram to commute:
Yet, I do not know what to put instead of ???s (that is, what functor do we apply to get such a diagram and how do we denote this almost-fmap?).
So, what is a natural map in general, and for F? What is the proper diagram for fmap's free theorem?
Where am I going with this?
Consider:
f = const 42
g = id
h = const ()
k () = 42
It is easy to see that g . f is k . h. And yet, the non-existent fmap will execute only f, not k, giving different results. If my intuition about the naturality is correct, such a proof would work. That's what I am trying to figure out.
#leftaroundabout proposed a simpler piece of proof: fmap show . fmap (+1) alters the contents, unlike fmap $ show . (+1). It is a nice piece of proof, and yet I would still like to work with free theorems as an exercise.
So we are entertaining a function m :: forall a b . (a->b) -> F a -> F b such that (among other things)
m (1 +) (Wrap x) = (Wrap (1+x))
m (show) (Wrap x) = (Wrap x)
There are two somewhat related questions here.
Can a well-behaved fmap do this?
Can a parametric function do this?
The answer to both questions is "no".
A well-behaved fmap can't do this because fmap has to obey the axioms of Functor. Whether our environment is parametric or not is irrelevant. The axiom of Functor says that for all functions a and b, fmap (a . b) = fmap a . fmap b must hold, and this fails for a = show and b = (1 +). So m cannot be a well-behaved fmap.
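To make that failure concrete, here is a stand-in for m written with Typeable (the Typeable trick and the names below are mine; the question's m is supposed to be Typeable-free, which is exactly what cannot exist):
{-# LANGUAGE ScopedTypeVariables #-}
import Data.Typeable (Typeable, cast)

newtype F a = Wrap { unwrap :: Int } deriving (Eq, Show)

-- Behaves as the question stipulates: applies Int -> Int functions, ignores the rest.
m :: forall a b. (Typeable a, Typeable b) => (a -> b) -> F a -> F b
m f (Wrap x) = case (cast f :: Maybe (Int -> Int)) of
  Just g  -> Wrap (g x)
  Nothing -> Wrap x

-- The composition law fails for show and (+ 1):
lawHolds :: Bool
lawHolds = m (show . (+ 1)) w == (m show . m (+ 1)) w   -- False: Wrap 5 vs Wrap 6
  where w = Wrap 5 :: F Int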
A parametric function can't do this because that is what the parametricity theorem says. When viewing types as relations between terms, related functions take related arguments to related results. It is easy to see that m fails parametricity, but it is slightly easier to look at m' :: forall a b. (a -> b) -> (Int -> Int) (the two can be trivially converted to each other).
(1 +) is related to show because m' is polymorphic in its argument, so different values of the argument can be related by any relation. Functions are relations, and there exists a function that sends (1 +) to show. However, the result type of m' has no type variables, so it corresponds to the constant relation (its values are only related to themselves).
Since every value, including m', is related to itself, it follows that all parametric functions m' :: forall a b. (a -> b) -> (Int -> Int) must obey m' f = m' g, i.e. they must ignore their first argument. Which is intuitively obvious, since there is nothing to apply it to.
One can in fact deduce the first statement from the second by observing that a well-behaved fmap must be parametric. So even if the language allows non-parametricity, fmap cannot make any non-trivial use of it.

Polymorphic reasoning

I am learning Haskell and on the internet I found this paper by Philip Wadler.
I read it and did not understand it at all, but it somehow connects to polymorphic functions.
For example:
polyfunc :: a -> a -> a
It is a function that is polymorphic in the type a.
What is the free theorem associated with the example polyfunc?
I feel like if I actually understood that paper then any code I wrote would be coauthored by God.
My best guess for this problem though is that all polyfunc can do is either always return the first argument or always return the second argument. So there are actually only two implementations of polyfunc,
polyfuncA a _ = a
polyfuncB _ b = b
The paper gives you a way to prove that claim.
This is a very important concept. For example, I've been involved in data quality research previously. This free theorem says that there is no function which can select the best data from two arbitrary pieces of data. We have to know something more. It's actually a no-brainer that I was surprised to find some people willing to overlook.
I've never really understood the algorithm laid out in that paper either, so I thought I would try to figure it out.
(1) Type of function in question
f :: a -> a -> a
(2) Rephrasing as a relation
f : ∀X. X -> X -> X
(3) By parametricity
(f, f) ∈ ∀X. X -> X -> X
(4) By definition of ∀ on relations
for all Q : A <=> A',
(fA, fA') ∈ Q -> Q -> Q
(5) Applying definition of -> on relations to the first -> in (4)
for all Q : A <=> A',
for all (x, x') ∈ Q,
(fA x, fA' x') ∈ Q -> Q
(6) Applying definition of -> on relations to (5)
for all Q : A <=> A',
for all (x, x') ∈ Q,
for all (y, y') ∈ Q,
(fA x y, fA' x' y') ∈ Q
At this point I was done expanding the relational definition, but wasn't sure how to get this back from relations into terms of functions and types, so I went and found a webapp that will automatically derive the free theorem for a type. I won't spoil (yet) what result it gives, but looking at it did help me figure out the next step in my proof.
The next step is to get back into function-land from relation-land, by noting that Q can be the type of any function at all and this will still hold.
(7) Specializing Q to a function g :: p -> q
for all p, q,
for all g :: p -> q,
for all x, y :: p,
where g x = x'
and g y = y',
g (f x y) = f x' y'
(8) By the definitions of x' and y'
for all p, q,
for all g :: p -> q,
for all x, y :: p,
g (f x y) = f (g x) (g y)
That looks true, right? It is equivalent to use g to transform both elements and then let f choose between them, or to let f choose an element and then transform it with g. By parametricity, f can't change whether it chooses the left or right element based on anything that g does.
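As a quick sanity check (my code, not part of the original derivation), both of the only parametric inhabitants of a -> a -> a satisfy the derived theorem, here with g specialized to show :: Int -> String:
polyfuncA, polyfuncB :: a -> a -> a
polyfuncA a _ = a
polyfuncB _ b = b

checkA, checkB :: Bool
checkA = show (polyfuncA (1 :: Int) 2) == polyfuncA (show (1 :: Int)) (show (2 :: Int))
checkB = show (polyfuncB (1 :: Int) 2) == polyfuncB (show (1 :: Int)) (show (2 :: Int))
-- both evaluate to True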
Of course the claim given in trevor cook's answer is true: f must either always choose its first argument, or always choose its second. I'm not sure whether the free theorem I derived is equivalent to that, or is a weaker version of it.
And incidentally, this is a special case of something that is already covered explicitly in the paper. It gives the theorem for
k :: x -> y -> x
which of course is the same as your function f, where x ~ a and y ~ a. The result that it gives is the same as the one I described:
for all a, b, x, y
a (k x y) = k (a x) (b y)
if we choose b=a to make the two results equivalent.
Wadler's "Theorems for free" (TFF) paper is not a good reference for learning about relational parametricity. The TFF paper is focused on abstract theory but, if you need a practical algorithm, the paper does not actually give it to you. The explanations in TFF have a number of important omissions that will confuse anyone who does not already know a great deal about relational parametricity.
The first thing TFF does not explain is that a function will satisfy a "free theorem" only if the code of the function is fully parametric (restricted in a number of ways).
"Fully parametric" code is a function with type parameters, whose arguments are typed using only those type parameters, and whose code is purely functional and does not try to examine at run time what types are assigned to the type parameters. The code must treat all types as totally unknown, arbitrary type parameters. The code must work with all types in the same way.
With these restrictions, the code will satisfy a certain law, in most cases this will be the "naturality" law, but in some cases the law will be more complicated. The paper "Theorems for free" shows many examples of such laws but does not explain a general algorithm for deriving those theorems.
To prove that those laws always hold, one uses the technique of relational parametricity. This is a complicated and powerful technique where one replaces functions (viewed as many-to-one binary relations) by arbitrary (many-to-many) binary relations and then reformulates the naturality law in terms of relations. The result is a "relational naturality law". At the end, one replaces relations again by functions and tries to derive an equation.
I recently recorded a tutorial about relational parametricity, with code examples in Scala. https://www.youtube.com/watch?v=Jf2VFB90Q0s&list=PLcoadSpY7rHUO0I1zdcbu9EeByYbwPSQ6
My tutorial does not follow Wadler's TFF paper but instead explains a simple and straightforward approach focused on practical results: how to derive the free theorem and reason about relations effectively. In this approach, it becomes easier to derive the "free theorem" for a given type and also to prove the "parametricity theorem": fully parametric functions will always satisfy one "free theorem" per type parameter.
Of course, for practical usage you don't necessarily need to go through the proof of the parametricity theorem, but you do need to be able to write the free theorem itself.
The key element of my tutorial is the idea of "lifting a relation to a type constructor". If you have a relation r: A <=> B and a type constructor F A then you can lift r to a relation of type F A <=> F B.
This lifting operation is denoted by rmap_F. The relation rmap_F r has type F A <=> F B.
The lifting operation rmap_F is defined by induction on the type structure of F. The details of that definition are somewhat technical (and are not adequately explained in the TFF paper!). The most important step in learning about relational parametricity is to understand the practical technique for lifting relations to type constructors. This is explained in my tutorial, and it's too long to write out here. The definition explains how to lift r to a trivial type constructor F A = C where C is a fixed type, to F A = A, to F A = Either (G A) (H A), to F A = (G A, H A), to F A = G A -> H A, etc.
The definition of rmap is analogous to the functor lifting fmap that lifts a function of type A -> B to a function of type F A -> F B. However, the functor lifting works only for covariant F, while the relational lifting works for any F, even if it is neither covariant nor contravariant, such as F A = A -> A -> A. This is the crucial feature that shows why relational technique is useful at all.
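For intuition, here is a small executable rendering of that lifting for the example F A = A -> A -> A, with relations between finite types modelled as Boolean-valued predicates (the Rel type, the name rmapF, and the enumeration trick are mine, not the tutorial's):
type Rel a b = a -> b -> Bool

-- Lift r : A <=> B to F A <=> F B for F A = A -> A -> A.
-- "For all arguments" is rendered as enumeration over finite types.
rmapF :: (Enum a, Bounded a, Enum b, Bounded b)
      => Rel a b -> Rel (a -> a -> a) (b -> b -> b)
rmapF r t1 t2 =
  and [ r (t1 a1 a2) (t2 b1 b2)
      | a1 <- allA, b1 <- allB, r a1 b1
      , a2 <- allA, b2 <- allB, r a2 b2 ]
  where
    allA = [minBound .. maxBound]
    allB = [minBound .. maxBound]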
Let us apply the relational technique to the type forall A. A -> A -> A.
We define F A = A -> A -> A. We take an arbitrary fully parametric function t of type forall A. F A. The relational naturality law says: for any relation r: A <=> B between any types A and B the function t must be in the relation (t, t) ∈ rmap_F r.
Now we need to do two things: 1) select r to be the graph relation of some function f: A -> B, denoted r = graph f, and 2) use the definition of rmap_F to compute rmap_F (graph f) explicitly.
The definition of rmap_F gives:
(t1, t2) ∈ rmap_F r ===
for all a1: A, a2: A, b1: B, b2: B,
if (a1, b1) ∈ r and (a2, b2) ∈ r
then (t1 a1 a2, t2 b1 b2) ∈ r
Translating this with r = graph f, we get:
(a1, b1) ∈ r === b1 = f a1
(a2, b2) ∈ r === b2 = f a2
(t a1 a2, t b1 b2) ∈ r === t b1 b2 = f (t a1 a2)
So, we obtain the following law:
for all a1: A, a2: A,
t (f a1) (f a2) = f (t a1 a2)
This is actually a naturality law. This is the "free theorem" satisfied by t.

How can a function be "transparently augmented" in Haskell?

Situation
I have function f, which I want to augment with function g, resulting in function named h.
Definitions
By "augment", in the general case, I mean: transform either input (one or more arguments) or output (return value) of function f.
By "augment", in the specific case, (specific to my current situation) I mean: transform only the output (return value) of function f while leaving all the arguments intact.
By "transparent", in the context of "augmentation", (both the general case and the specific case) I mean: To couple g's implementation as loosely to f's implementation as possible.
Specific case
In my current situation, this is what I need to do:
h a b c = g $ f a b c
I am interested in rewriting it to something like this:
h = g . f -- Doesn't type-check.
Because from the perspective of h and g, it doesn't matter what arguments f takes; they only care about the return value, hence it would be tight coupling to mention the arguments in any way. For instance, if f's argument count changes in the future, h will also need to be changed.
So far
I asked lambdabot on the #haskell IRC channel: #pl h a b c = g $ f a b c to which I got the response:
h = ((g .) .) . f
Which is still not good enough since the number of (.)'s is dependent on the number of f's arguments.
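For what it's worth, the lambdabot output does compose g after a three-argument f; here is a throwaway check (the concrete f and g below are mine, purely for illustration):
f :: Int -> Int -> Int -> [Int]
f a b c = [a, b, c]

g :: [Int] -> Int
g = sum

h :: Int -> Int -> Int -> Int
h = ((g .) .) . f
-- h 1 2 3 == 6, i.e. h a b c == g (f a b c)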
General case
I haven't done much research in this direction, but erisco on #haskell pointed me towards http://matt.immute.net/content/pointless-fun which hints to me that a solution for the general case could be possible.
So far
Using the functions defined by Luke Palmer in the above article this seems to be an equivalent of what we have discussed so far:
h = f $. id ~> id ~> id ~> g
However, it seems that this method sadly also suffers from being dependent on the number of arguments of f if we want to transform the return value of f -- just as the previous methods.
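For reference, here is a sketch of combinators in the spirit of that article; the definitions below are my reconstruction and may differ from the article's actual names, fixities, or argument order:
infixr 1 ~>
(~>) :: (a' -> a) -> (b -> b') -> (a -> b) -> (a' -> b')
(i ~> o) fn = o . fn . i   -- pre-compose on the input, post-compose on the output

infixl 0 $.
($.) :: a -> (a -> b) -> b
x $. fn = fn x             -- reverse application

-- With these, h = f $. id ~> id ~> id ~> g typechecks for a three-argument f,
-- but it still spells out one id per argument of f, which is exactly the
-- arity-dependence the question objects to.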
Working example
In JavaScript, for instance, it is possible to achieve transparent augmentation like this:
function h () { return g(f.apply(this, arguments)) }
Question
How can a function be "transparently augmented" in Haskell?
I am mainly interested in the specific case, but it would be also nice to know how to handle the general case.
You can sort-of do it, but since there is no way to specify a behavior for everything that isn't a function, you'll need a lot of trivial instances for all the other types you care about.
{-# LANGUAGE TypeFamilies, DefaultSignatures #-}

class Augment a where
  -- Result a: the final, non-function result type buried under a's arrows.
  type Result a
  type Result a = a
  -- Augmented a r: a with that final result type replaced by r.
  type Augmented a r
  type Augmented a r = r
  augment :: (Result a -> r) -> a -> Augmented a r
  default augment :: (a -> r) -> a -> r
  augment g x = g x

-- Functions: recurse through the arguments and augment the final result.
instance Augment b => Augment (a -> b) where
  type Result (a -> b) = Result b
  type Augmented (a -> b) r = a -> Augmented b r
  augment g f x = augment g (f x)

-- Base cases: non-function types are their own result.
instance Augment Bool
instance Augment Char
instance Augment Integer
instance Augment [a]
-- and so on for every result type of every function you want to augment...
Example:
> let g n x ys = replicate n x ++ ys
> g 2 'a' "bc"
"aabc"
> let g' = augment length g
> g' 2 'a' "bc"
4
> :t g
g :: Int -> a -> [a] -> [a]
> :t g'
g' :: Int -> a -> [a] -> Int
Well, technically, with just enough IncoherentInstances you can do pretty much anything:
{-# LANGUAGE MultiParamTypeClasses, TypeFamilies,
             FlexibleInstances, UndecidableInstances, IncoherentInstances #-}

class Augment a b f h where
  augment :: (a -> b) -> f -> h

-- Base case: f is not (treated as) a function, so apply g to it directly.
instance (a ~ c, h ~ b) => Augment a b c h where
  augment = ($)

-- Recursive case: peel off one argument of f and keep augmenting its result.
instance (Augment a b d h', h ~ (c -> h')) => Augment a b (c -> d) h where
  augment g f = augment g . f

-- Usage
t1 = augment not not
r1 = t1 True
t2 = augment (+1) (+)
r2 = t2 2 3
t3 = augment (+1) foldr
r3 = t3 (+) 0 [2,3]
The problem is that the real return value of something like a -> b -> c isn't c, but b -> c. What you want requires some kind of test that tells you whether or not a type is a function type. You could enumerate the types you are interested in, but that's not so nice. I think HList solves this problem somehow; look at the paper. I managed to understand a bit of the solution with overlapping instances, but the rest goes a bit over my head I'm afraid.
JavaScript works, because its arguments are a sequence, or a list, so there is just one argument, really. In that sense it is the same as a curried version of the functions with a tuple representing the collection of arguments.
In a strongly typed language you need a lot more information to do that "transparently" for a function type - for example, dependent types can express this idea, but they require the functions to be of specific types, not an arbitrary function type.
I think I saw a workaround in Haskell that can do this, too, but, again, that works only for specific types, which capture the arity of the function, not any function.

Do Hask or Agda have equalisers?

I was somewhat undecided as to whether this was a math.SE question or an SO one, but I suspect that mathematicians in general are fairly unlikely to know or care much about this category in particular, whereas Haskell programmers might well do.
So, we know that Hask has products, more or less (I'm working with idealised-Hask, here, of course). I'm interested in whether or not it has equalisers (in which case it would have all finite limits).
Intuitively it seems not, since you can't do separation like you can on sets, and so subobjects seem hard to construct in general. But for any specific case you'd like to come up with, it seems like you'd be able to hack it by working out the equaliser in Set and counting it (since after all, every Haskell type is countable and every countable set is isomorphic either to a finite type or the naturals, both of which Haskell has). So I can't see how I'd go about finding a counterexample.
Now, Agda seems a bit more promising: there it is relatively easy to form subobjects. Is the obvious sigma type Σ A (λ x → f x == g x) an equaliser? If the details don't work, is it morally an equaliser?
tl;dr the proposed candidate is not quite an equaliser, but its irrelevant counterpart is
The candidate for an equaliser in Agda looks good. So let's just try it. We'll need some basic kit. Here are my refusenik ASCII dependent pair type and homogeneous intensional equality.
record Sg (S : Set)(T : S -> Set) : Set where
  constructor _,_
  field
    fst : S
    snd : T fst
open Sg

data _==_ {X : Set}(x : X) : X -> Set where
  refl : x == x
Here's your candidate for an equaliser for two functions
Q : {S T : Set}(f g : S -> T) -> Set
Q {S}{T} f g = Sg S \ s -> f s == g s
with the fst projection sending Q f g into S.
What it says: an element of Q f g is an element s of the source type, together with a proof that f s == g s. But is this an equaliser? Let's try to make it so.
To say what an equaliser is, I should define function composition.
_o_ : {R S T : Set} -> (S -> T) -> (R -> S) -> R -> T
(f o g) x = f (g x)
So now I need to show that any h : R -> S which identifies f o h and g o h must factor through the candidate fst : Q f g -> S. I need to deliver both the other component, u : R -> Q f g and the proof that indeed h factors as fst o u. Here's the picture: (Q f g , fst) is an equalizer if whenever the diagram commutes without u, there is a unique way to add u with the diagram still commuting.
Here goes existence of the mediating u.
mediator : {R S T : Set}(f g : S -> T)(h : R -> S) ->
           (q : (f o h) == (g o h)) ->
           Sg (R -> Q f g) \ u -> h == (fst o u)
Clearly, I should pick the same element of S that h picks.
mediator f g h q = (\ r -> (h r , ?0)) , ?1
leaving me with two proof obligations
?0 : f (h r) == g (h r)
?1 : h == (\ r -> h r)
Now, ?1 can just be refl as Agda's definitional equality has the eta-law for functions. For ?0, we are blessed by q. Equal functions respect application
funq : {S T : Set}{f g : S -> T} -> f == g -> (s : S) -> f s == g s
funq refl s = refl
so we may take ?0 = funq q r.
But let us not celebrate prematurely, for the existence of a mediating morphism is not sufficient. We require also its uniqueness. And here the wheel is likely to go wonky, because == is intensional, so uniqueness means there's only ever one way to implement the mediating map. But then, our assumptions are also intensional...
Here's our proof obligation. We must show that any other mediating morphism is equal to the one chosen by mediator.
mediatorUnique :
  {R S T : Set}(f g : S -> T)(h : R -> S) ->
  (qh : (f o h) == (g o h)) ->
  (m : R -> Q f g) ->
  (qm : h == (fst o m)) ->
  m == fst (mediator f g h qh)
We can immediately substitute via qm and get
mediatorUnique f g .(fst o m) qh m refl = ?
? : m == (\ r -> (fst (m r) , funq qh r))
which looks good, because Agda has eta laws for records, so we know that
m == (\ r -> (fst (m r) , snd (m r)))
but when we try to make ? = refl, we get the complaint
snd (m _) != funq qh _ of type f (fst (m _)) == g (fst (m _))
which is annoying, because identity proofs are unique (in the standard configuration). Now, you can get out of this by postulating extensionality and using a few other facts about equality
postulate ext : {S T : Set}{f g : S -> T} -> ((s : S) -> f s == g s) -> f == g
sndq : {S : Set}{T : S -> Set}{s : S}{t t' : T s} ->
       t == t' -> _==_ {Sg S T} (s , t) (s , t')
sndq refl = refl
uip : {X : Set}{x y : X}{q q' : x == y} -> q == q'
uip {q = refl}{q' = refl} = refl
? = ext (\ s -> sndq uip)
but that's overkill, because the only problem is the annoying equality proof mismatch: the computable parts of the implementations match on the nose. So the fix is to work with irrelevance. I replace Sg by the Existential quantifier, whose second component is marked as irrelevant with a dot. Now it matters not which proof we use that the witness is good.
record Ex (S : Set)(T : S -> Set) : Set where
  constructor _,_
  field
    fst : S
    .snd : T fst
open Ex
and the new candidate equaliser is
Q : {S T : Set}(f g : S -> T) -> Set
Q {S}{T} f g = Ex S \ s -> f s == g s
The entire construction goes through as before, except that in the last obligation
? = refl
is accepted!
So yes, even in the intensional setting, eta laws and the ability to mark fields as irrelevant give us equalisers.
No undecidable typechecking was involved in this construction.
Hask
Hask doesn't have equalizers. An important thing to remember is that thinking about a type (or the objects in any category) and their isomorphism classes really requires thinking about the arrows. What you say about the underlying sets is true, but types with isomorphic underlying sets certainly aren't necessarily isomorphic. One difference between Hask and Set is that Hask's arrows must be computable, and in fact for idealized Hask, they must be total.
I spent a while trying to come up with a real defensible counterexample, and found some references suggesting it cannot be done, but without proofs. However, I do have some "moral" counterexamples if you will; I cannot prove that no equalizer exists in Haskell, but it certainly seems impossible!
Example 1
f, g :: ([Int], Int) -> Int
f (p, v) = sum (zipWith (*) p (iterate (* v) 1))  -- treat p as the coefficients of a polynomial and evaluate it at v
g _ = 0
The equalizer "should" be the type of all pairs (p,n) where p(n) = 0, along with a function injecting these pairs into ([Int], Int). By Hilbert's 10th problem, this set is undecidable. It seems to me that this should exclude the possibility of it being a Haskell type, but I can't prove that (is it possible that there's some bizarre way to construct this type that nobody has discovered?). It maybe I haven't connected a dot or two -- perhaps proving this is impossible isn't hard?
Example 2
Say you have a programming language. You have a compiler that takes the source code and an input and produces a function, for which the fixed point of the function is the output. (While we don't have compilers like this, specifying semantics sort of like this isn't unheard of). So, you have
compiler : String -> Int -> (Int -> Int)
(Un)curry that into a function
compiler' : (String, Int, Int) -> Int
and add a function
id' : (String, Int, Int) -> Int
id' (_,_,x) = x
Then the equalizer of compiler', id' would be the collection of triplets of source program, input, output -- and this is uncomputable because the programming language is fully general.
More Examples
Pick your favorite undecidable problem: it generally involves deciding if an object is a member of some set. You often have a total function that can be used to check this property for a particular object. You can use this function to create an equalizer where the type should be all the items in your undecidable set. That's where the first two examples came from, and there are tons more.
Agda
I'm not as familiar with Agda. My intuition is that your sigma-type should be an equalizer: you can write the type down, along with the necessary injection function, and it looks like it satisfies the definition entirely. However, as someone who doesn't use Agda, I don't think I'm really qualified to check the details.
The real practical issue though is that typechecking that sigma type won't always be computable, so it's not always useful to do this. In all the examples above, you can write down the Sigma type you provided, but you won't be able to readily check whether something is a member of that type without a proof.
Incidentally, this is why Haskell shouldn't be able to have equalizers: if it did, typechecking would be undecidable! Dependent types are what make everything tick. Dependently typed languages should be able to express interesting mathematical structures in their types, while Haskell can't, since its type system is decidable. So, I would naturally expect idealized Agda to have all finite limits (I would be disappointed otherwise). The same goes for other dependently typed languages; Coq, for example, should definitely have all limits.

Algebraically interpreting polymorphism

So I understand the basic algebraic interpretation of types:
Either a b ~ a + b
(a, b) ~ a * b
a -> b ~ b^a
() ~ 1
Void ~ 0 -- from Data.Void
... and that these relations are true for concrete types, like Bool, as opposed to polymorphic types like a. I also know how to translate type signatures with polymorphic types into their concrete type representations by just translating the Church encoding according to the following isomorphism:
(forall r . (a -> r) -> r) ~ a
So if I have:
id :: forall a . a -> a
I know that it does not mean id ~ a^a, but it actually means:
id :: forall a . (() -> a) -> a
id ~ ()
~ 1
Similarly:
pair :: forall r . (a -> b -> r) -> r
pair ~ forall r . ((a, b) -> r) -> r
     ~ (a, b)
     ~ a * b
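That Church-pair step can be written out in Haskell directly (RankNTypes; the names CPair, toPair, and fromPair are mine, just to illustrate the isomorphism):
{-# LANGUAGE RankNTypes #-}

type CPair a b = forall r. (a -> b -> r) -> r

toPair :: CPair a b -> (a, b)
toPair p = p (,)

fromPair :: (a, b) -> CPair a b
fromPair (x, y) = \k -> k x y
-- toPair and fromPair are mutually inverse, witnessing pair ~ (a, b) ~ a * b.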
Which brings me to my question. What is the "algebraic" interpretation of this rule:
(forall r . (a -> r) -> r) ~ a
For every concrete type isomorphism I can point to an equivalent algebraic rule, such as:
(a, (b, c)) ~ ((a, b), c)
a * (b * c) = (a * b) * c
a -> (b -> c) ~ (a, b) -> c
(c^b)^a = c^(b * a)
But I don't understand the algebraic equality that is analogous to:
(forall r . (a -> r) -> r) ~ a
This is the famous Yoneda lemma for the identity functor.
Check this post for a readable introduction, and any category theory textbook for more.
Briefly, given f :: forall r. (a -> r) -> r you can apply f id to get an a, and conversely, given x :: a you can take ($x) to get forall r. (a -> r) -> r.
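In code, the two directions look like this (RankNTypes; the function names are mine):
{-# LANGUAGE RankNTypes #-}

toA :: (forall r. (a -> r) -> r) -> a
toA f = f id           -- apply f to the identity continuation

fromA :: a -> (forall r. (a -> r) -> r)
fromA x = \k -> k x    -- i.e. ($ x)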
These operations are mutually inverse. Proof:
Obviously ($x) id == x. I will show that
($(f id)) == f,
since functions are equal when they are equal on all arguments, let's take x :: a -> r and show that
($(f id)) x == f x i.e.
x (f id) == f x.
Since f is polymorphic, it works as a natural transformation; this is the naturality diagram for f:
            f_A
Hom(A, A) ------> A
    |             |
  (x.)            x
    |             |
    v             v
Hom(A, R) ------> R
            f_R
So x . f == f . (x.).
Plugging identity, (x . f) id == f x. QED
(Rewritten for clarity)
There seem to be two parts to your question. One is implied and is asking what the algebraic interpretation of forall is, and the other is asking about the cont/Yoneda transformation, which sdcvvc's answer already covered pretty well.
I'll try to address the algebraic interpretation of forall for you. You mention that A -> B is B^A but I'd like to take that a step further and expand it out to B * B * B * ... * B (|A| times). Although we do have exponentiation as a notation for repeated multiplication like that, there's a more flexible notation, ∏ (uppercase Pi) representing arbitrary indexed products. There are two components to a Pi: the range of values we want to multiply over, and the expression that we're multiplying out. For example, at the value level, you might express the factorial function as fact i = ∏ [1..i] (λx -> x).
Going back to the world of types, we can view the exponentiation operator in the A -> B ~ B^A correspondence as a Pi: B^A ~ ∏ A (λ_ -> B). This says that we're defining an A-ary product of Bs, such that the Bs cannot depend on the particular A we've chosen. Sure, it's equivalent to plain exponentiation, but it lets us move up to cases in which there is a dependence.
In the most general case, we get dependent types, like what you see in Agda or Coq: in Agda syntax, replicate : Bool -> ((n : Nat) -> Vec Bool n) is one possible application of a Pi type, which could be expressed more explicitly as replicate : Bool -> ∏ Nat (Vec Bool), or further as replicate : ∏ Bool (λ_ -> ∏ Nat (Vec Bool)).
Note that as you might expect from the underlying algebra, you can fuse both of the ∏s in the definition of replicate above into a single ∏ ranging over the cartesian product of the domains: ∏ Bool (\_ -> ∏ Nat (Vec Bool)) is equivalent to ∏ (Bool, Nat) (λ(_, n) -> Vec Bool n) just like it would be at the "value level". This is simply uncurrying from the perspective of type theory.
I do realize your question was about polymorphism, so I'll stop going on about dependent types, but they are relevant: forall in Haskell is roughly equivalent to a ∏ with a domain over the type (kind) of types, *. Indeed, the function-like behavior of polymorphism can be observed directly in GHC core, which types them as capital lambdas (Λ). As such, a polymorphic type like forall a. a -> a is actually just ∏ * (Λ a -> (a -> a)) (using the Λ notation now that we distinguish between types and values), which can be expanded out to the infinite product (Bool -> Bool, Int -> Int, () -> (), (Int -> Bool) -> (Int -> Bool), ...) for every possible type. Instantiation of the type variable is simply projecting out the suitable element from the *-ary product (or applying the type function).
Now, for the big piece I missed in my original version of this answer: parametricity. Parametricity can be described in several different ways, but none of the ones I know of (viewing types as relations, or (di)naturality in category theory) really has a very algebraic interpretation. For our purposes, though, it boils down to something fairly simple: you can't pattern-match on *. I know that GHC lets you do that at the type level with type families, but you can only cover a finite chunk of * when doing that, so there are necessarily always points at which your type family is undefined.
What this means, from the point of view of polymorphism, is that any type function F we write in ∏ * F must either be constant (i.e., completely ignore the type it was polymorphic over) or pass the type through unchanged. Thus, ∏ * (Λ _ -> B) is valid because it ignores its argument, and corresponds to forall a. B. The other case is something like ∏ * (Λ x -> Maybe x), which corresponds to forall a. Maybe a, which doesn't ignore the type argument, but only "passes it through". As such, a ∏ A that has an irrelevant domain A (such as when A = *) can be seen as more of an A-ary indexed intersection (picking the common elements across all instantiations of the index), rather than a product.
Crucially, at the value level, the rules of parametricity prevent any funny behavior that might suggest the types are larger than they really are. Because we don't have typecase, we can't construct a value of type forall a. B that does something different based on what a was instantiated to. Thus, although the type is technically a function * -> B, it is always a constant function, and is thus equivalent to a single value of B. Using the ∏ interpretation, it is indeed equivalent to an infinite *-ary product of Bs, but those B values must always be identical, so the infinite product is effectively as big as a single B.
Similarly, although ∏ * (Λ x -> (x -> x)) (a.k.a., forall a. a -> a) is technically equivalent to an infinite product of functions, none of those functions can inspect the type, so all are constrained to only return their input value and not do any funny business like (+1) : Int -> Int when instantiated to Int. Because there is only one (assuming a total language) function that can't inspect the type of its argument but must return a value of that same type, the infinite product is thus just as large as a single value.
Now, about your direct question on (forall r . (a -> r) -> r) ~ a. First, let's express your ~ operator more formally. It's really isomorphism, so we need two functions going back and forth, and an argument that they're inverses.
data Iso a b = Iso
{ to :: a -> b
, from :: b -> a
-- proof1 :: forall x. to (from x) == x
-- proof2 :: forall x. from (to x) == x
}
and now we express your original question in more formal terms. Your question amounts to constructing a term of the following (impredicative, so GHC has trouble with it, but we'll survive) type:
forall a. Iso (forall r. (a -> r) -> r) a
Which, using my earlier terminology, amounts to ∏ * (Λ a -> Iso (∏ * (Λ r -> ((a -> r) -> r))) a). Once again we have an infinite product that can't inspect its type argument. By handwaving, we can argue that the only possible values considering the parametricity rules (the other two proofs are respected automatically) for to and from are ($ id) and flip id.
If this feels unsatisfying, it's probably because the algebraic interpretation of forall didn't really add anything to the proof. It's really just plain old type theory, but I hope I was able to provide something that feels a little less categorical than the Yoneda form of it. It's worth noting that we don't actually need to use parametricity to write proof1 and proof2 above, though. Parametricity only enters the picture when we want to state that ($ id) and flip id are our only options for to and from (which we can't prove in Agda or Coq, for that reason).
To (attempt to) answer the actual question (which is less interesting than the answers to the broader issues raised): the question is ill-formed because of a "type error".
Either ~ (+)
(,) ~ (*)
(->) b ~ flip (^)
() ~ 1
Void ~ 0
These all map types to integers, and type constructors to functions on naturals. In a sense, you have a functor from the category of types to the category of naturals. In the other direction, you "forget" stuff, since the types preserve algebraic structure while the naturals throw it away. I.e. given Either () () you can get a unique natural, but given that natural, you can get many types.
But this is different:
(forall r . (a -> r) -> r) ~ a
It maps a type to another type! It is not part of the above functor. It's just an isomorphism within the category of types. So let's give that a different symbol, <=>
Now we have
(forall r . (a -> r) -> r) <=> a
Now you note that we can not only send types to nats and arrows to arrows, but also some isomorphisms to other isomorphisms:
(a, (b, c)) <=> ((a, b), c) ~ a * (b * c) = (a * b) * c
But something subtle is going on here. In a sense, the latter isomorphism on pairs is true because the algebraic identity is true. This is to say that the "isomorphism" in the latter simply means that the two types are equivalent under the image of our functor to the nats.
The former isomorphism we need to prove directly, which is where we start to get to the underlying question: given our functor to the nats, what does forall r. map to? But the answer is that forall r. is neither a type, nor a meaningful arrow between types.
By introducing forall, we have moved away from first-order types. There's no reason to expect that forall should fit in our above functor, and indeed, it doesn't.
So we can explore, as others have above, why the isomorphism holds (which is itself very interesting) -- but in doing so we've abandoned the algebraic core of the question. A question which can be answered, I think, is: given the category of higher-order types and constructors as arrows between them, what is there a meaningful functor to?
Edit:
So now I have another approach which shows why adding polymorphism makes things go nuts. We start by asking a simpler question -- does a given polymorphic type have zero or more than zero inhabitants? This is the type inhabitation problem, and winds up being, via Curry-Howard, a problem in modified realizability, since it's the same thing as asking if a formula in some logic is realizable in an appropriate computational model. Now as that page explains, this is decidable in the simply typed lambda calculus but is PSPACE-complete. But once we move to anything more complicated, by adding polymorphism for example and going to System F, then it becomes undecidable!
So, if we can't decide if an arbitrary type is inhabited at all, then we clearly can't decide how many inhabitants it has!
It's an interesting question. I don't have a full answer, but this was too long for a comment.
The type signature (forall r. (a -> r) -> r) can be expressed as me saying
For any type r that you care to name, if you give me a function that takes a and produces an r, then I will give you back an r.
Now, this has to work for any type r, but it can be a specific type a. So the way for me to pull off this neat trick is to have an a sitting around somewhere, that I feed to the function (which produces an r for me) and then I hand that r back to you.
But if I have an a sitting around, I could give it to you:
If you give me a 1, I'll give you an a.
which corresponds to the type signature 1 -> a or simply a. By this informal argument we have
(forall r. (a -> r) -> r) ~ a
The next step would be to generate the corresponding algebraic expression, but I'm not clear on how the algebraic quantities interact with the universal quantification. We may need to wait for an expert!
A few links to the nLab:
Universal quantifier, corresponds to dependent product.
Existential quantifier, corresponds to dependent sum (dependent coproduct).
Thus, in settings of category theory:
Type               | Modeled¹ as               | In category
-------------------+---------------------------+-------------
Unit               | Terminal object           | CCC
Bottom             | Initial object            |
Record             | Product                   |
Union              | Sum (coproduct)           |
Function           | Exponential               |
-------------------+---------------------------+-------------
Dependent product² | Right adjoint to pullback | LCCC
Dependent sum      | Left adjoint to pullback  |
¹) in appropriate category ─ CCC for total and non-polymorphic subset of Haskell (link), CPO for non-total traits of Haskell (link), LCCC for dependently typed languages.
²) forall quantification is a special case of dependent product:
∀(x :: *). y[x] ~ ∏(x : Set)y[x]
where Set is the universe of all small types.
