Interesting operators in Haskell that obey modal axioms - haskell

I was just looking at the type of map :: (a -> b) -> [a] -> [b] and just the shape of this function made me wonder whether we could see the list-forming operator [ ] as obeying various axioms common to normal modal logics (e.g., T, S4, S5, B), since we seem to have at least the K-axiom of normal modal logics, with [(a -> b)] -> [a] -> [b].
This leads to my question: are there familiar, interesting operators or functors in Haskell which have the syntax of modal operators of a certain kind, and which obey the axioms common to normal modal logics (i.e., K, T, S4, S5, and B)?
This question can be sharpened and made more specific. Consider an operator L, and its dual M. Now the question becomes: are there any familiar, interesting operators in Haskell with some of the following properties:
(1) L(a -> b) -> La -> Lb
(2) La -> a
(3) Ma -> L(M a)
(4) La -> L(L a)
(5) a -> L(M a)
It would be very interesting to see some nice examples.
I've thought of a potential example, but it would be good to know whether I am correct: the double negation translation with L as not not and M as not. This translation takes every formula a to its double negation translation (a -> ⊥) -> ⊥ and, crucially, validates axioms (1)-(4), but not axiom (5). I asked a question here https://math.stackexchange.com/questions/2347437/continuations-in-mathematics-nice-examples and it seems the double negation translation can be simulated via the continuation monad, the endofunctor taking every formula a to its double negation translation (a -> ⊥) -> ⊥. There Derek Elkins notes the existence of a couple of double negation translations corresponding, via the Curry-Howard isomorphism, to different continuation-passing style transforms, e.g. Kolmogorov's corresponds to the call-by-name CPS transform.
Perhaps there are other operations that can be done in the continuation monad via Haskell which can validate axioms (1)-(5).
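To make the continuation reading concrete, here is a small sketch (my own illustrative names; the answer type r plays the role of ⊥, and taking r = Void gives the literal double negation translation). Axioms (1), (3) and (4) come out as ordinary Haskell programs, while (2) is only available when the answer type is the formula itself:

type L r a = (a -> r) -> r   -- the continuation / double-negation functor
type M r a = a -> r          -- single negation

ax1 :: L r (a -> b) -> L r a -> L r b   -- (1), the K axiom; essentially (<*>) for Cont
ax1 lf la k = lf (\f -> la (k . f))

ax3 :: M r a -> L r (M r a)             -- (3), an instance of return
ax3 na k = k na

ax4 :: L r a -> L r (L r a)             -- (4)
ax4 la k = k la

runL :: L a a -> a                      -- (2) only when r is the formula a itself
runL la = la id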
(And just to eliminate one example: there are clear relations between so-called Lax logic https://www.sciencedirect.com/science/article/pii/S0890540197926274 and Monads in Haskell, with the return operation obeying the laws of the modal operator of this logic (which is an endofunctor). I am not interested so much in those examples, but in examples of Haskell operators which obey some of the axioms of modal operators in classical normal modal logics)

Preliminary note: I apologise for spending a good chunk of this answer talking about Propositional Lax Logic, a topic you are very familiar with and not too interested in as far as this question is concerned. In any case, I do feel this theme is worthy of broader exposure -- and thanks for making me aware of it!
The modal operator in propositional lax logic (PLL) is the Curry-Howard counterpart of Monad type constructors. Note the correspondence between its axioms...
DT: x -> D x
D4: D (D x) -> D x
DF: (x -> y) -> D x -> D y
... and the types of return, join and fmap, respectively.
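Spelled out as Haskell types for an arbitrary Monad m, the correspondence reads:

return :: Monad m   => x -> m x                 -- DT
join   :: Monad m   => m (m x) -> m x           -- D4
fmap   :: Functor m => (x -> y) -> m x -> m y   -- DF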
There are a number of papers by Valeria de Paiva discussing intuitionistic modal logics and, in particular, PLL. The remarks about PLL here are largely based on Alechina et al., Categorical and Kripke Semantics for Constructive S4 Modal Logic (2001). Interestingly, that paper makes a case for PLL being less weird than it might seem at first (cf. Fairtlough and Mendler, Propositional Lax Logic (1997): "As a modal logic it is special because it features a single modal operator [...] that has a flavour both of possibility and necessity"). Starting with CS4, a version of intuitionistic S4 without distribution of possibility over disjunction...
B stands for box, and D for diamond
BK: B (x -> y) -> (B x -> B y)
BT: B x -> x
B4: B x -> B (B x)
DK: B (x -> y) -> (D x -> D y)
DT: x -> D x
D4: D (D x) -> D x
... and adding x -> B x to it causes B to become trivial (or, in Haskell parlance, Identity), simplifying the logic to PLL. That being so, PLL can be regarded as a special case of a variant of intuitionistic S4. Furthermore, it frames PLL's D as a possibility-like operator. That is intuitively appealing if we take D as the counterpart to Haskell Monads, which often do have a possibility flavour (consider Maybe Integer -- "There might be an Integer here" -- or IO Integer -- "I will get an Integer when the program is executed").
A few other possibilities:
At a glance, it seems the symmetrical move of making D trivial leads us to something very much like ComonadApply. I say "very much like" largely due to the functorial strength of Haskell Functors, the matter being that x /\ B y -> B (x /\ y) is an awkward thing to have if you are looking for a conventional necessity modality.
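To make the comonadic reading concrete, here is a hedged sketch using streams (infinite sequences), a standard Comonad; the names below are illustrative rather than taken from a library:

data Stream x = Cons x (Stream x)

extractS :: Stream x -> x                        -- BT: B x -> x
extractS (Cons x _) = x

duplicateS :: Stream x -> Stream (Stream x)      -- B4: B x -> B (B x)
duplicateS s@(Cons _ xs) = Cons s (duplicateS xs)

apS :: Stream (x -> y) -> Stream x -> Stream y   -- BK: B (x -> y) -> B x -> B y
apS (Cons f fs) (Cons x xs) = Cons (f x) (apS fs xs)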
Kenneth Foner's Functional Pearl: Getting a Quick Fix on Comonads (thanks to dfeuer for the reference) works towards expressing intuitionistic K4 in Haskell, covering some of the difficulties along the way (including the functorial strength issue mentioned above).
Matt Parsons' Distributed Modal Logic offers a Haskell-oriented presentation of intuitionistic S5 and an interpretation of it, originally by Tom Murphy VII, in terms of distributed computing: B x as an x-producing computation that can be run anywhere on a network, and D x as an address to an x somewhere on it.
Temporal logics can be related via Curry-Howard to functional reactive programming (FRP). Suggestions of jumping-off points include de Paiva and Eades III, Constructive Temporal Logic, Categorically (2017), as well as this blog post by Philip Schuster alongside this interesting /r/haskell thread about it.

Related

How to use a refutation to direct the type checker in Haskell?

Does filling the hole in the following program necessarily require non-constructive means? If yes, is it still the case if x :~: y is decidable?
More generally, how do I use a refutation to guide the type checker?
(I am aware that I can work around the problem by defining Choose as a GADT, I'm asking specifically for type families)
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeOperators #-}
module PropositionalDisequality where
import Data.Type.Equality
import Data.Void
type family Choose x y where
  Choose x x = 1
  Choose x _ = 2
lem :: (x :~: y -> Void) -> Choose x y :~: 2
lem refutation = _
If you try hard enough to implement a function, you can convince yourself
that it is not possible. If you're not convinced, the argument can be made
more formal: we enumerate programs exhaustively to find that none is possible. It turns out there are only half a dozen meaningful cases to consider.
I wonder why this argument is not made more often.
Totally not accurate summary:
Act I: proof search is easy.
Act II: dependent types too.
Act III: Haskell is still fine for writing dependently typed programs.
I. The proof search game
First we define the search space.
We can reduce any Haskell definition to one of the form
lem = (exp)
for some expression (exp). Now we only need to find a single expression.
Look at all possible ways of making an expression in Haskell:
https://www.haskell.org/onlinereport/haskell2010/haskellch3.html#x8-220003
(this doesn't account for extensions, exercise for the reader).
It fits a page in a single column, so it's not that big to start with.
Moreover, most of them are sugar for some form of function application or
pattern-matching; we can also desugar away type classes with dictionary
passing, so we're left with a ridiculously small lambda calculus:
lambdas, \x -> ...
pattern-matching, case ... of ...
function application, f x
constructors, C (including integer literals)
constants, c (for primitives that cannot be written in terms of the constructs above, so various built-ins (seq) and maybe FFI if that counts)
variables (bound by lambdas and cases)
We can exclude every constant on the grounds that I think the question is
really about pure lambda calculus (or the reader can enumerate the constants,
to exclude black-magic constants like undefined, unsafeCoerce,
unsafePerformIO that make everything collapse (any type is inhabited and, for
some of those, the type system is unsound), and to be left with white-magic
constants to which the present theoretical argument can be generalized via a
well-funded thesis).
We can also reasonably assume that we want a solution with no recursion involved
(to get rid of noise like lem = lem, and fix if you felt like you couldn't
part with it before), and which actually has a normal form, or preferably, a
canonical form with respect to βη-equivalence. In other words, we refine and
examine the set of possible solutions as follows.
lem :: _ -> _ has a function type, so we can assume WLOG that its definition starts with a lambda:
-- Any solution
lem = (exp)
-- is η-equivalent to a lambda
lem = \refutation -> (exp) refutation
-- so let's assume we do have a lambda
lem = \refutation -> _hole
Now enumerate what could be under the lambda.
It could be a constructor,
which then has to be Refl, but there is no proof that Choose x y ~ 2 in
the context (here we could formalize and enumerate the type equalities the
typechecker knows about and can derive, or make the syntax of coercions
(proofs of equalities) explicit and keep playing this proof search game
with them), so this doesn't type check:
lem = \refutation -> Refl
Maybe there is some way of constructing that equality proof, but then the
expression would start with something else, which is going to be another
case of the proof.
It could be some application of a constructor C x1 x2 ..., or the
variable refutation (applied or not); but there's no possible way that's
well-typed, it has to somehow produce a (:~:), and Refl is really the
only way.
Or it could be a case. WLOG, there is no nested case on the left, nor any
constructor, because the expression could be simplified in both cases:
-- Any left-nested case expression
case (case (e) of { C x1 x2 -> (f) }) of { D y1 y2 -> (g) }
-- is equivalent to a right-nested case
case (e) of { C x1 x2 -> case (f) of { D y1 y2 -> (g) } }
-- Any case expression with a nested constructor
case (C u v) of { C x1 x2 -> f x1 x2 }
-- reduces to
f u v
So the last subcase is the variable case:
lem = \refutation -> case refutation (_hole :: x :~: y) of {}
and we have to construct an x :~: y. We enumerate ways of filling the
_hole again. It's either Refl, but no proof is available, or
(skipping some steps) case refutation (_anotherHole :: x :~: y) of {},
and we have an infinite descent on our hands, which is also absurd.
A different possible argument here is that we can pull out the case
from the application, to remove this case from consideration WLOG.
-- Any application to a case
f (case e of C x1 x2 -> g x1 x2)
-- is equivalent to a case with the application inside
case e of C x1 x2 -> f (g x1 x2)
There are no more cases. The search is complete, and we didn't find an
implementation of (x :~: y -> Void) -> Choose x y :~: 2. QED.
To read more on this topic, I guess a course/book about lambda calculus up
until the normalization proof of the simply-typed lambda calculus should give
you the basic tools to start with. The following thesis contains an
introduction on the subject in its first part, but admittedly I'm a poor judge
of the difficulty of such material: Which types have a unique inhabitant?
Focusing on pure program equivalence,
by Gabriel Scherer.
Feel free to suggest more adequate resources and literature.
II. Fixing the proposition and proving it with dependent types
Your initial intuition that this should encode a valid proposition
is definitely valid. How might we fix it to make it provable?
Technically, the type we are looking at is quantified with forall:
forall x y. (x :~: y -> Void) -> Choose x y :~: 2
An important feature of forall is that it is an irrelevant quantifier.
The variables it introduces cannot be used "directly" in a term of this type. Although that aspect becomes more prominent in the presence of dependent types,
it still pervades Haskell today, providing another intuition for why this (and many other examples) is not "provable" in Haskell: if you think about why you
think that proposition is valid, you will naturally start with a case split about whether x is equal to y, but to even do such a case split you need a way to
decide which side you're on, which will of course have to look at x and y,
so they cannot be irrelevant. forall in Haskell is not at all like what most people mean by "for all".
Some discussion on the matter of relevance can be found in the thesis Dependent Types in Haskell, by Richard Eisenberg (in particular, Section 3.1.1.5 for an initial example, Section 4.3 for relevance in Dependent Haskell and Section 8.7 for comparison with other languages with dependent types).
Dependent Haskell will need a relevant quantifier to complement forall, which would get us closer to proving this:
foreach x y. (x :~: y -> Void) -> Choose x y :~: 2
Then we could probably write this:
lem :: foreach x y. (x :~: y -> Void) -> Choose x y :~: 2
lem x y p = case x ==? y of
  Left r -> absurd (p r)  -- x ~ y, that's absurd!
  Right Irrefl -> Refl    -- x /~ y, so Choose x y = 2
That also assumes a first-class notion of disequality /~, complementing ~,
to help Choose reduce when it is in the context, and a decision function
(==?) :: foreach x y. Either (x :~: y) (x :/~: y).
Actually, that machinery isn't necessary, that just makes for a shorter
answer.
At this point I'm making stuff up because Dependent Haskell does not exist yet,
but that is easily doable in related dependently typed languages (Coq, Agda,
Idris, Lean), modulo an adequate replacement of the type family Choose
(type families are in some sense too powerful to be translated as mere
functions, so may be cheating, but I digress).
Here is a comparable program in Coq, showing also that lem applied to 1 and 2
and a suitable proof does reduce to a proof by reflexivity of choose 1 2 = 2.
https://gist.github.com/Lysxia/5a9b6996a3aae741688e7bf83903887b
III. Without dependent types
A critical source of difficulty here is that Choose is a closed type
family with overlapping instances. It is problematic because there is
no proper way to express the fact that x and y are not equal in Haskell,
to know that the first clause Choose x x does not apply.
A more fruitful avenue if you're into Pseudo-Dependent Haskell is to use a
boolean type equality:
-- In the base library
import Data.Type.Bool (If)
import Data.Type.Equality (type (==))
type Choose x y = If (x == y) 1 2
An alternative encoding of equality constraints becomes useful for this style:
type x ~~ y = ((x == y) ~ 'True)
type x /~ y = ((x == y) ~ 'False)
with that, we can get another version of the type-proposition above,
expressible in current Haskell (where SBool is the singleton type of Bool),
which essentially can be read as adding the assumption that the equality of x
and y is decidable. This does not contradict the earlier claim about "irrelevance" of forall, the function is inspecting a boolean (or rather an SBool), which postpones the inspection of x and y to whoever calls lem.
lem :: forall x y. SBool (x == y) -> ((x ~~ y) => Void) -> Choose x y :~: 2
lem decideEq p = case decideEq of
  STrue -> absurd p
  SFalse -> Refl
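For reference, here is one way to assemble the above into a self-contained module; it is a sketch under the assumption that SBool is defined locally rather than pulled from a singletons-style library (the module name and local definitions are illustrative):

{-# LANGUAGE ConstraintKinds #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeOperators #-}
module PropositionalDisequalityBool where

import Data.Type.Bool (If)
import Data.Type.Equality (type (==), (:~:)(..))
import Data.Void (Void, absurd)

-- The singleton type of Bool, defined locally for this sketch.
data SBool (b :: Bool) where
  STrue  :: SBool 'True
  SFalse :: SBool 'False

type Choose x y = If (x == y) 1 2
type x ~~ y = ((x == y) ~ 'True)

lem :: forall x y. SBool (x == y) -> ((x ~~ y) => Void) -> Choose x y :~: 2
lem decideEq p = case decideEq of
  STrue  -> absurd p   -- here (x == y) ~ 'True holds, so p :: Void is usable
  SFalse -> Refl       -- here If (x == y) 1 2 reduces to 2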

How to think about the lack of laws

I am a mathematician who works a lot with category theory, and I've been using Haskell for a while to perform certain computations etc., but I am definitely not a programmer. I really love Haskell and want to become much more fluent in it, and the type system is something that I find especially great to have in place when writing programs.
However, I've recently been trying to implement category theoretic things, and am running into problems concerning the fact that you seemingly can't have class method laws in Haskell. In case my terminology here is wrong, what I mean is that I can write
class Monoid c where
  id :: c -> c
  m :: c -> c -> c
but I can't write some law along the lines of
m (m x y) z == m x $ m y z
From what I gather, this is due to the lack of dependent types in Haskell, but I'm not sure how exactly this is the case (having now read a bit about dependent types). It also seems that the convention is just to include laws like this in comments and hope that you don't accidentally cook up some instance that doesn't satisfy them.
How should I change my approach to Haskell to deal with this problem? Is there a nice mathematical/type-theoretic solution (for example, require the existence of an associator that is an isomorphism (though then the question is, how do we encode isomorphisms without a law?)); is there some 'hack' (using extensions such as DataKinds); should I be drastic and switch to using something like Idris instead; or is the best response to just change the way I think about using Haskell (i.e. accept that these laws can't be implemented in a Haskelly way)?
(bonus) How exactly does the lack of laws come from not supporting dependent types?
You want to require that:
m (m x y) z = m x (m y z) -- (1)
But to require this you need a way to check it. So you, or your compiler (or proof assistant), need to construct a proof of this. And the question is, what type is a proof of (1)?
One could imagine some Proof type but then maybe you could just construct a proof that 0 = 0 instead of a proof of (1) and both would have type Proof. So you’d need a more general type. I can’t decide how to break up the rest of the question so I’ll go for a super brief explanation of the Curry-Howard isomorphism followed by an explanation of how to prove two things are equal and then how dependent types are relevant.
The Curry-Howard isomorphism says that propositions are isomorphic to types and proofs are isomorphic to programs: a type corresponds to a proposition and a proof of that proposition corresponds to a program constructing a value inhabiting that type. Ignoring how many propositions might be expressed as types, an example would be that the type A * B (written (A, B) in Haskell) corresponds to the proposition “A and B,” while the type A + B (written Either A B in Haskell) corresponds to the proposition “A or B.” Finally the type A -> B corresponds to “A implies B,” as a proof of this is a program which takes evidence of A and gives you evidence of B. One should note that there isn’t a way to express not A but one could imagine adding a type Not A with builtins of type Either a (Not a) for the law of the excluded middle as well as Not (Not a) -> a, and a * Not a -> Void (where Void is a type which cannot be inhabited and therefore corresponds to false), but then one can’t really run these programs to get constructivist proofs.
Now we will ignore some realities of Haskell and imagine that there aren’t ways round these rules (in particular undefined :: a says everything is true, and unsafeCoerce :: a -> b says that anything implies anything else, or just other functions that don’t return where their existence does not imply the corresponding proof).
So we know how to combine propositions but what might a proposition be? Well one could be to say that two types are equal. In Haskell this corresponds to the GADT
data Eq a b where Refl :: Eq c c
Where this constructor corresponds to the reflexive property of equality.
[side note: if you’re still interested so far, you may be interested to look up Voevodsky’s univalent foundations, depending on how much the idea of “Homotopy type theory” interests you]
So can we prove something now? How about the transitive property of equality:
trans :: Eq a b -> Eq b c -> Eq a c
trans x y =
  case x of
    Refl -> -- by this match being successful, the compiler now knows that a = b
      case y of
        Refl -> -- and now b = c and so the compiler knows a = c
          Refl -- the compiler knows that this is of type Eq d d, and as it knows a = c, this typechecks as Eq a c
This feels like one hasn’t really proven anything (especially as this mainly relies on the compiler knowing the transitive and symmetric properties), but one gets a similar feeling when proving simple things in logic as well.
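For comparison, symmetry can be proven in the same style, reusing the Eq GADT above (a one-liner once the match teaches the compiler that a = b):

symm :: Eq a b -> Eq b a
symm x =
  case x of
    Refl -> Refl -- matching Refl reveals a = b, so Eq b a is just Eq a a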
So now how might you prove the original proposition (1)? Well, let's imagine we want a type c to be a monoid; then we should also prove that $\forall x,y,z:c, m (m x y) z = m x (m y z).$ So we need a way to express m (m x y) z as a type. Strictly speaking this isn't dependent types (this can be done with DataKinds to promote values and type families instead of functions). But you do need dependent types to have types depend on values. Specifically, if you have a type Nat of natural numbers and a type family Vec :: Nat -> * (* is the kind, read "type", of all types) of fixed-length vectors, you could define a dependently typed function mkVec :: (n::Nat) -> Vec n. Observe how the type of the output depends on the value of the input.
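As a point of comparison, here is roughly how current Haskell simulates this with DataKinds and GADTs; mkVec has to take a singleton SNat instead of a plain Nat value, and all names below are illustrative rather than taken from a library:

{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

data Nat = Z | S Nat

data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

data SNat (n :: Nat) where      -- the singleton mirroring Nat at the type level
  SZ :: SNat 'Z
  SS :: SNat n -> SNat ('S n)

mkVec :: a -> SNat n -> Vec n a -- replicate a value to a statically known length
mkVec _ SZ     = VNil
mkVec x (SS n) = VCons x (mkVec x n)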
So your law needs to have functions promoted to type level (skipping the questions about how one defines type equality and value equality), as well as dependent types (made up syntax):
class Monoid c where
  e :: c
  (*) :: c -> c -> c
  idl :: (x::c) -> Eq x (e * x)
  idr :: (x::c) -> Eq x (x * e)
  assoc :: (x::c) -> (y::c) -> (z::c) -> Eq ((x * y) * z) (x * (y * z))
Observe how types tend to become large with dependent types and proofs. In a language missing typeclasses one could put such values into a record.
Final note on the theory of dependent types and how these correspond to the Curry-Howard isomorphism.
Dependent types can be considered an answer to the question: what types correspond to the propositions $\forall x\in S\quad P(x)$ and $\exists y\in T\quad Q(y)?$
The answer is that you create new ways to make types: the dependent product and the dependent sum (coproduct). The dependent product expresses "for all values $x$ of type $S,$ there is a value of type $P(x).$" A normal product would be a dependent product with $S=2,$ a type inhabited by two values. A dependent product might be written (x:T) -> P x. A dependent sum says "some value $y$ of type $T$, paired with a value of type $Q(y).$" This might be written (y:T) * Q y.
One can think of these as a generalisation of arbitrarily indexed (co)products from Set to general categories, where one might sensibly write e.g. $\prod_\Lambda X(\lambda),$ and sometimes such notation is used in type theory.

How are free objects constructed?

So I understand that a free object is defined as being the left-hand side of an adjunction. But how does that lead you to the Haskell definition of such objects?
More concretely: given a "forgetful functor" from the category of monads to the category of endofunctors,
newtype Forget m a = Forget (m a)
instance Monad m => Functor (Forget m) where
  fmap f (Forget x) = Forget (liftM f x)
then the free monad Free :: (* -> *) -> (* -> *) is a type admitting (a Monad instance and) the following isomorphism:
type f ~> g = forall x. f x -> g x
fwd :: (Functor f, Monad m) => (f ~> Forget m) -> (Free f ~> m)
bwd :: (Functor f, Monad m) => (Free f ~> m) -> (f ~> Forget m)
fwd . bwd = id = bwd . fwd
If we drop the Forgets, for the free monad in Control.Monad.Free we have fwd = foldFree and bwd = (. liftF) (I think?)
But how do these laws lead to the construction found in Control.Monad.Free? How do you come up with data Free f a = Return a | Free (f (Free f a))? Surely you don't just guess until you come up with something that satisfies the laws? Same question goes for the free category of a graph, the free monoid of a set, and any other free object you care to name.
I don't think the notion of "free" is as well-defined as you seem to believe. While I do think the general consensus is that it is indeed a left adjoint of a forgetful functor, the issue lies in what "forgetful" means. There are clear definitions in some broad-ranging cases, particularly for concrete categories.
Universal algebra provides a broad-ranging approach which covers almost all "algebraic" structures (over sets). The upshot is that, given a "signature" consisting of sorts, operations, and equations, you build a term algebra (i.e. an AST) of the operations and then quotient it by the equivalence relation generated by the equations. This is the free algebra generated from that signature. For example, we usually talk about monoids as being a set equipped with an associative multiplication and unit. In code, the free algebra before quotienting would be:
data PreFreeMonoid a
  = Unit
  | Var a
  | Mul (PreFreeMonoid a) (PreFreeMonoid a)
We would then quotient by the equivalence relation generated from the equations:
Mul Unit x = x
Mul x Unit = x
Mul (Mul x y) z = Mul x (Mul y z)
But you can show that the resulting quotient type is isomorphic to lists. In the multi-sorted case, we'd have a family of term algebras, one for each sort.
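One way to see this is via the function that computes a canonical (list) representative of each equivalence class, respecting the three equations above:

toList :: PreFreeMonoid a -> [a]
toList Unit      = []
toList (Var a)   = [a]
toList (Mul x y) = toList x ++ toList y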
One way to recast this categorically is to use the notion of a (slightly generalized) Lawvere theory. Given a signature with a set of sorts, S, we can build a small category, call it T, whose objects are lists of elements of S. These small categories will be called theories in general. Operations get mapped to arrows whose source and target correspond to the appropriate arities. We freely add "tupling" and "projection" arrows so that e.g. [A,B,A] becomes the product [A]×[B]×[A]. Finally, we add commutative diagrams (i.e. equations between arrows) corresponding to each equation in the signature. At this point, T essentially represents the term algebra(s). In fact, an actual interpretation or model of this term algebra is just a finite product preserving functor T → Set; write Mod(T) for the category of finite product preserving functors T → Set. In the single sorted case, we'd have an underlying set functor, but in general we get an S-indexed family of sets, i.e. we have a functor U : Mod(T) → Set^S where we're viewing S as a discrete category here. U is simply U(m)(s) = m([s]). We can actually calculate the left adjoint. First, we have a family of sets indexed by elements of S, call it G. Then we need to build a finite product preserving functor T → Set, but any functor into Set (i.e. copresheaf) is a colimit of representables which, in this case, means it's a quotient of the following (dependent) sum type:
Free(G)(s) = Σt:T.T(t,s)×Free(G)(t)
If Free(G) is finite product preserving then in the t = [A,B] case, for example, we'd have:
T([A,B],s)×Free(G)([A,B]) = T([A,B],s)×Free(G)([A])×Free(G)([B])
and we simply define Free(G)([A]) = G(A) for each A in S producing:
T([A,B],s)×Free(G)([A])×Free(G)([B]) = T([A,B],s)×G(A)×G(B)
Altogether this says that an element of Free(G)([A]) consists of an arrow of T into [A] and a list of elements of the appropriate sets corresponding to the source of that arrow, i.e. the arity of the term, modulo equations that make it behave sensibly and obey the equations from the signature, but which I'm not going to elaborate on. For the multiplication of a monoid, we'd have an arrow m : [A,A] → [A] and this would lead to tuples (m, x, y) where x and y are elements of G(A) corresponding to a term like m(x, y). Recasting this definition as a recursive one requires looking at the equations we're quotienting by.
There are other things to verify to show that Free ⊣ U but it isn't too hard. Once that's done, U∘Free is a monad on Set^S.
The nice thing about the Lawvere theory approach is that it is easy to generalize in multiple ways. One straightforward way is to replace Set by some other topos E. It's actually the case that the category of directed multigraphs forms a topos, but I don't believe you can (easily) view categories as theories over Graph. A different direction to extend Lawvere theories is to consider doctrines other than finite product preserving functors; in particular, finite limit preserving functors, aka left exact or lex functors, are an interesting case. Both small categories and directed multigraphs (which categorists sometimes call quivers) can be viewed as models of a category with finite limits. There's a straightforward inclusion of the theory of directed multigraphs into the theory of small categories. This, contravariantly, induces a functor Cat → Graph simply by precomposition. The left adjoint of this is then (almost) the left Kan extension along that inclusion. These left Kan extensions will occur in Set so ultimately they are just colimits which are just quotients of (dependent) sum types. (Technically, you need to verify that the resulting Kan extensions are finite limit preserving. We're also helped by the fact that the models of the theory of graphs are essentially arbitrary functors from the theory of graphs. This happens because the theory of graphs consists only of unary operations.)
None of this helps for free monads though. However, it turns out that one construction subsumes all of these, including free monads. Returning to universal algebra, it's the case that every signature with no equations gives rise to a (polynomial) functor whose initial algebra is the free term algebra. Lambek's lemma suggests, and it's easy to prove, that the initial algebra is just the colimit of repeated applications of the functor. The above general result is based on a similar approach and the relevant case for free monads is the unpointed endofunctor case, in which you start to see the definition of Free that you gave, but actually working it out fully requires unfolding many constructions.
Frankly, though, what I'm pretty sure actually happened in the FP world is the following. If you look at PreFreeMonoid, it's actually a free monad. PreFreeMonoid Void is the initial algebra for the functor the monoid signature (minus the equations) would give rise to. If you are familiar with using functors for initial algebras and you even start thinking about universal algebra, you are almost certainly going to end up defining a type like data Term f a = Var a | Op (f (Term f a)). It's easy to verify this is a monad once you think to ask the question. If you're even vaguely familiar with the relationship monads have to algebraic structures or to term substitution, then you may ask the question quite quickly. The same construction can be stumbled upon from a programming language implementation perspective. If you just directly set your goal to be deriving the free monad construction in Haskell, there are several intuitive ways to arrive at the right definition especially combined with some equational/parametricity-driven reasoning. In fact, the "monoid object in the category of endofunctors" one is quite suggestive.
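For completeness, here is that construction written out, together with the monoid signature functor; this is a sketch of the folklore definition rather than the exact code in Control.Monad.Free (which names the constructors Pure and Free):

data Term f a = Var a | Op (f (Term f a))

instance Functor f => Functor (Term f) where
  fmap g (Var a) = Var (g a)
  fmap g (Op t)  = Op (fmap (fmap g) t)

instance Functor f => Applicative (Term f) where
  pure = Var
  tf <*> ta = tf >>= \f -> fmap f ta

instance Functor f => Monad (Term f) where
  Var a >>= k = k a                 -- substitution at the variables
  Op t  >>= k = Op (fmap (>>= k) t)

data MonoidSig x = UnitSig | MulSig x x   -- the monoid signature, minus equations

instance Functor MonoidSig where
  fmap _ UnitSig      = UnitSig
  fmap g (MulSig x y) = MulSig (g x) (g y)

-- Term MonoidSig a is PreFreeMonoid a up to constructor names,
-- and Term MonoidSig Void is the initial algebra mentioned above.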
(I really wish this StackExchange had MathJax support.)

Curry Howard correspondence and equality

A while ago I read that the function type a -> b corresponds to the relation a ≤ b, or is it a ≥ b? This makes sense to me because two types are isomorphic if we have a bijection between them (i.e. (a ≈ b) ≡ (a -> b, b -> a)). Similarly, (a = b) ≡ (a ≤ b) ∧ (a ≥ b).
I know that this is not the Curry-Howard-Lambek correspondence (i.e. the correspondence between type theory, logic and category theory). It's the correspondence between type theory and something else. I want to learn more about this correspondence. Could somebody point me in the right direction?
I know that this doesn't seem like a programming question but it is related to programming and I'm hoping that some functional programmer knows more about it and can point me in the right direction.
Every pre-ordered set forms a category. Let (S, «) be a pre-ordered set. Define a category C whose objects are the elements of S and with Hom(a, b) inhabited by (a, b) if a « b and uninhabited otherwise. Define composition the only way you possibly can. The category laws follow immediately from the transitivity and reflexivity of the pre-order.
A lattice, in particular, will form a category admitting finite products and coproducts. A bounded lattice will form one with initial and final objects.
Types and functions in a sufficiently well-behaved functional language also form a category with finite products and coproducts, and initial and final objects. So if you squint out to a categorical blur, these things will start to look vaguely similar.
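A hedged illustration of that squint in Haskell terms: read a -> b as a ≤ b, (a, b) as a meet, Either a b as a join, Void as a least element and () as a greatest one. The witnesses are ordinary library functions (the names of the bindings below are mine):

import Data.Void (Void, absurd)

meetBelowLeft :: (a, b) -> a      -- a meet is below its components
meetBelowLeft = fst

joinAboveLeft :: a -> Either a b  -- a join is above its components
joinAboveLeft = Left

leastBelowAll :: Void -> a        -- the least element is below everything
leastBelowAll = absurd

greatestAboveAll :: a -> ()       -- the greatest element is above everything
greatestAboveAll _ = ()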
(This is more a comment than an answer, but I need more space.)
The type a -> b corresponds to a <= b. This is useful, e.g., to speak about fixed points at the type level, which are needed to properly define recursive types (lists, trees, ...).
Recall how recursion is solved, without categories. In domain theory, given a function f :: a -> a we look for a least x satisfying f x = x (least fixed point). This turns out to also be the least x satisfying f x <= x (least prefixed point). We then get the induction principle
f y <= y ==> fix f <= y
which basically states that, if we have any prefixed point y, then the least (pre)fixed point fix f must be less than y -- indeed, it is the least!
Now, let's sprinkle some category powder over that. Implication becomes the function arrow ->, and <= also becomes ->. We get
(f y -> y) -> fix f -> y
Looks familiar, where did I see that...? Ah!
newtype Fix f = Fix { unFix :: f (Fix f) }
cata :: Functor f => (f y -> y) -> Fix f -> y
cata g = g . fmap (cata g) . unFix
Hence, cata, the general eliminator/catamorphism, is just a category-empowered version of the good old induction principle.
Note how domain points y are now objects in our category (i.e. types). Also, functions f must be applicable to y, so these are not morphisms in our category (which would be function values :: A -> B, from some type to some type), but correspond to functors in the category of types (mapping types to types :: * -> *).
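A tiny usage sketch of cata, with an illustrative base functor for the natural numbers (NatF and toInt are made-up names, not library ones):

data NatF x = ZeroF | SuccF x

instance Functor NatF where
  fmap _ ZeroF     = ZeroF
  fmap g (SuccF x) = SuccF (g x)

toInt :: Fix NatF -> Int
toInt = cata alg
  where
    alg ZeroF     = 0
    alg (SuccF n) = n + 1

-- e.g. toInt (Fix (SuccF (Fix (SuccF (Fix ZeroF))))) == 2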

Is Milner let polymorphism a rank 2 feature?

let a = b in c can be thought of as syntactic sugar for (\a -> c) b, but in a typed setting this is in general not the case. For example, in the Milner calculus let a = \x -> x in (a True, a 1) is typable, but the seemingly equivalent (\a -> (a True, a 1)) (\x -> x) is not.
However, the latter is typable in System F with a rank 2 type for the first lambda.
My questions are:
Is let polymorphism a rank 2 feature that sneaked secretly into the otherwise rank 1 world of the Milner calculus?
The purpose of having a separate let construct seems to be to specify which types should be generalized by the type checker, and which should not. Does it serve any other purposes? Are there any reasons to extend more powerful systems, e.g. System F, with a separate let which is not sugar? Are there any papers on the rationale behind the design of the Milner calculus, which no longer seems obvious to me?
Is there the most general type for \a -> (a True, a 1) in System F?
Are there type systems closed under beta expansion? I.e. if P is typable and M N = P then M is typable?
Some clarifications:
By equivalence I mean equivalence modulo type annotations. Is 'System F a la Church' the correct term for that?
I know that in general the principal typing property doesn't hold in F, but a principal type could exist for my particular term.
By let I mean the non-recursive flavour of let. Extension of system F with recursive let is obviously useful as it allows for non-termination.
W.r.t. the four questions asked:
A key insight in this matter is that rather than just typing a
lambda-abstraction with a potentially polymorphic argument type, we
are typing a (sugared) abstraction that is (1) applied exactly once
and, moreover, that is (2) applied to a statically known argument.
That is, we can first subject the "argument" (i.e. the definiens of
the local definition) to type reconstruction to find its
(polymorphic) type; then assign the found type to the "parameter"
(the definiendum); and then, finally, type the body in the extended
type context.
Note that that is considerably easier than general rank-2 type inference.
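A small Haskell illustration of the contrast, assuming GHC with RankNTypes (the names okLet, okRank2 and apply are mine):

{-# LANGUAGE RankNTypes #-}

okLet :: (Bool, Char)
okLet = let a = \x -> x in (a True, a 'c')   -- a is generalized to forall t. t -> t

-- The desugared form (\a -> (a True, a 'c')) (\x -> x) is rejected by plain
-- Hindley-Milner inference, but is accepted once the parameter is annotated
-- with a rank-2 type:
okRank2 :: (Bool, Char)
okRank2 = apply (\x -> x)
  where
    apply :: (forall t. t -> t) -> (Bool, Char)
    apply a = (a True, a 'c')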
Note that, strictly speaking, let .. = .. in .. is only syntactic sugar in System F if you demand that the definiendum carries a type annotation: let .. : .. = .. in .. .
Here are two solutions for T in (\a :: T -> (a True, a 1)) in System F: forall b. (forall a. a -> b) -> (b, b) and forall c d. (forall a b. a -> b) -> (c, d). Note that neither one of them is more general than the other. In general, System F does not admit principal types.
I suppose this holds for the simply typed lambda-calculus?
Types are not preserved under beta-expansion in any calculus that can express the concept of "dead code". You can probably figure out how to write something similar to this in any usable language:
if True then something typable else utter nonsense
For example, let M = (\x y -> x) (something typable) and N = (utter nonsense) and P = (something typable), so that M N = P, and P is typable, but M N isn't.
...rereading your question, I see that you only demand that M be typable, but that seems like a very strange meaning to give to "preserved under beta-expansion" to me. Anyway, I don't see why some argument like the above couldn't apply: simply let M have some untypable dead code in it.
You could type (\a -> (a True, a 1)) (\x -> x) if instead of generalizing only let expressions, you generalized all lambda abstractions. Having done so, one also needs to instantiate type schemas at every use point, not simply at the point where the binder which refers to them is actually used. I don't think there's any problem with this actually, outside of the fact that it's vastly less efficient. I recall some discussion of this in TAPL, in fact, making similar points.
I recall many years ago seeing in a book about lambda calculus (possibly Barendregt) a type system preserved by beta expansion. It had no quantification, but it had conjunction to express that a term needed to be of more than one type simultaneously, as well as a special type omega which every term inhabited. As I recall, the latter avoids Daniel Wagner's dead code objection. While every expression was well-typed, restricting the position of omega in the type allowed you to characterize which expressions had (weak?) head normal forms.
Also if I recall correctly, fully normal form expressions had principal types, which did not contain omega.
For example the principal type of \f x -> f (f x) (the Church numeral 2) would be something like ((A -> B) /\ (B -> C)) -> A -> C
Not able to answer all your very specialized questions, but no, it's not a rank 2 feature. As you write, it's just that let definitions are being quantified, which yields a fully polymorphic rank-1 type unless the definition depends on some monomorphic value in an outer scope.
Please also note that Haskell let is known as let rec in other languages and allows definition of mutually recursive functions and values. This is something you would not want to code manually with lambda-expressions and Y-combinators.
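For illustration, mutually recursive local definitions in Haskell's recursive let (a small sketch, assuming a non-negative argument; parity is my own name):

parity :: Int -> (Bool, Bool)
parity n =
  let isEven 0 = True
      isEven k = isOdd (k - 1)
      isOdd 0 = False
      isOdd k = isEven (k - 1)
  in (isEven n, isOdd n)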

Resources