Can one simplify the Codensity monad on Maybe?

The codensity monad on a type constructor f is defined by:
newtype C f a = C { unC ∷ forall r. (a → f r) → f r }
It is well known that C f is a monad for any type constructor f (not necessarily covariant). The codensity monad is useful in several ways but it is a complicated type that contains a higher-order function under a universal type quantifier.
My question is, for what f can one show that C f is equivalent to a simpler monad that is defined without type quantifiers?
Some examples where a simplification is possible:
f a = a (the identity functor), for which C f a = a.
f a = r -> a (the Reader monad), for which C f is the State monad (C f a = r -> (a, r)).
f a = (w, a) (the Writer monad), for which C f a = ((a -> w) -> a, (a -> w) -> w)
f a = a -> s (a contravariant functor) and then C f a = (a -> s) -> s (the continuation monad).
f a = a -> a (neither covariant nor contravariant), and then C f a = List a
In the first four of those cases, the type equivalence can be derived from the Yoneda identity: forall r. (a -> r) -> F r = F a when F is a covariant functor. The last case is derived via the Church encoding of the inductive type List.
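For instance, the Reader case can be checked directly in Haskell. Below is a minimal sketch (the names toState and fromState are mine); the two round trips hold by parametricity, i.e. by the Yoneda identity:

{-# LANGUAGE RankNTypes #-}

newtype C f a = C { unC :: forall r. (a -> f r) -> f r }

-- Specializing f to the Reader functor ((->) s) yields State:
toState :: C ((->) s) a -> (s -> (a, s))
toState (C q) = q (\a s -> (a, s))

fromState :: (s -> (a, s)) -> C ((->) s) a
fromState st = C (\k s -> let (a, s') = st s in k a s')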
I looked at some other examples and found that in most cases C f does not seem to be equivalent to anything simpler.
Even if we just take f a = Maybe a the resulting type does not seem to be equivalent to a simpler type expression:
newtype CMaybe a = CMaybe { unCMaybe ∷ forall r. (a → Maybe r) → Maybe r }
The Yoneda identity cannot be used here. My best guess (I have no proof so far) is that CMaybe a = (a -> Bool) -> Bool with some additional laws imposed on the functions of that type. Imposing equations on values can be adequately expressed only within a dependently-typed language.
Can one simplify the codensity monad on Maybe?
Are there other examples of type constructors f where C f can be simplified to a type without type quantifiers?

As mentioned in the comments, a value f :: C Maybe a gives back a bit more information than a boolean, because the r it returns identifies a single input of the callback: f k chooses an x such that k x is Just.
Simplifying the callback type from a -> Maybe r to a -> Bool, we obtain the following dependent function type, written in Agda and in Coq respectively for reference:
-- Agda
(∀ {r} → (a → Maybe r) → Maybe r)
≡
((k : a → Bool) → Maybe (∃[ x ] k x ≡ true))
(* Coq *)
(forall r, (a -> option r) -> option r)
=
(forall (k : a -> bool), option { x : a | k x = true })
Proof of equivalence in Agda: https://gist.github.com/Lysxia/79846cce777f0394a6f69d84576a325b
This proves the equivalence of ∀ {r} → (a → Maybe r) → Maybe r and a type without a quantifier: ((f : a → Bool) → Maybe (∃[ x ] f x ≡ true)), which is equivalent to q :: (a → Bool) → Maybe a with the restriction that q p equals Just x only if p x = True.
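The same equivalence can be sketched in Haskell, which cannot enforce the restriction; the names toSimple and fromSimple are mine, and fromSimple is only an inverse when its argument obeys the law above:

{-# LANGUAGE RankNTypes #-}
import Data.Maybe (isJust)

newtype CMaybe a = CMaybe { unCMaybe :: forall r. (a -> Maybe r) -> Maybe r }

toSimple :: CMaybe a -> ((a -> Bool) -> Maybe a)
toSimple (CMaybe f) p = f (\x -> if p x then Just x else Nothing)

-- Lawful only when q p = Just x implies p x = True.
fromSimple :: ((a -> Bool) -> Maybe a) -> CMaybe a
fromSimple q = CMaybe (\k -> q (isJust . k) >>= k)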
Note that if a is finite, then C Maybe a is also finite. One approach to the problem then is to compute the corresponding cardinality function.
You can reinterpret the expression of the cardinality as a type, giving a solution to your problem for types of the form Finite a -> C f a.
You can look it up in the On-Line Encyclopedia of Integer Sequences to find alternative combinatorial interpretations. Sadly, the relevant sequence doesn't have much information.
Product_{j=1..n} j^C(n-1,j-1)
-- https://oeis.org/A064320
If you could find a simpler type for C f a, with only sums, products (not indexed by the cardinality of a), and exponentials, this may correspond to a non-trivial combinatorial identity. Conversely, difficulty in finding such a combinatorial identity provides compelling evidence for the non-existence of simple solutions. It also gives a quick way to test a candidate simplification for validity, by comparing its cardinality with the expected one.
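As a quick sanity check, the cardinality can be brute-forced for small a from the dependent-type characterization above: for each predicate, an independent choice among Nothing and the elements the predicate accepts. A sketch:

import Control.Monad (replicateM)

-- Number of inhabitants of C Maybe a when a has n elements.
cardCMaybe :: Int -> Integer
cardCMaybe n = product
  [ fromIntegral (length (filter id bs) + 1)   -- choices: Nothing, or an accepted element
  | bs <- replicateM n [False, True] ]         -- bs ranges over all predicates on a

main :: IO ()
main = print (map cardCMaybe [0 .. 4])
-- [1,2,12,864,14929920]: cardCMaybe n agrees with the quoted formula at n+1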

Related

How to understand the universal quantification in Yoneda's natural isomorphism?

While learning about the Yoneda lemma, I came across the following encoding of the underlying natural isomorphism in Haskell:
forward :: Functor f => (forall r . (a -> r) -> f r) -> f a
forward f = f id
backward :: Functor f => f a -> (forall r. (a -> r) -> f r)
backward x f = fmap f x
I tried to simplify the implementation of backward to flip fmap but failed as the latter has type f a -> (a -> r) -> f r.
From here on I'm stuck in pinpointing what precisely are the differences between the two implementations. More so as applying either function to a concrete functor yields the same result type:
ghci> :t backward (Just "")
backward (Just "") :: (String -> r) -> Maybe r
ghci> :t (flip fmap) (Just "")
(flip fmap) (Just "") :: (String -> r) -> Maybe r
Questions:
What exactly is the difference between the two?
Is there something I can do with one that can't be done with the other?
Why is the universal quantification needed at all for backward?
In Haskell we write lambdas for the values we pass:
id :: forall a . a->a
id = \x -> x
but when we compile to the lower-level Core language, which is close to System F, we also see type lambdas. The code becomes more like
id = \ #a (x :: #a) -> x
which means: the caller should choose and pass a type #a, then a value of that type x :: #a and finally receive that value back x.
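With the TypeApplications extension the type argument can be passed explicitly, which makes the hidden type-lambda visible; a quick GHCi illustration:

ghci> :set -XTypeApplications
ghci> :t id @Int
id @Int :: Int -> Int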
Now, your issue arises from the two (isomorphic) types
forall a . T -> U a
-- vs
T -> forall a . U a
Here, T does not depend on a. The two types are indeed isomorphic, and at the lower level it's just a matter of "flipping" the term-lambda with the type-lambda:
\ #a (x :: T) -> ... -- has the former type
-- vs
\ (x :: T) #a -> ... -- has the latter
Since in Haskell we do not usually write or see the type-lambdas, it is hard to distinguish between these. Further, when the compiler performs type inference it will (as far as I can understand) infer the first one, where the type-lambdas are all at the beginning.
What exactly is the difference between the two?
Not much. They have isomorphic types, after all. One has the forall a at the topmost level, while the other has it in another position.
Once you apply them to arguments as you did, GHC will automatically choose the right place to add the inferred #a argument, so you won't easily notice the difference. The two calls are of these forms:
f #r x
-- vs
f x #r
On top of that, GHC can re-generalize the type, hence we obtain
(\#r -> f #r x) :: forall r . ....
(\#r -> f x #r) :: forall r . ....
Is there something I can do with one that can't be done with the other?
Not really.
Why is the universal quantification needed at all for backward?
Usually, when defining an isomorphism between A and B we want
forw :: A -> B
back :: B -> A
If either type has a forall (say, A = forall r . V r) then we get a forall in that position. Note that the type for back in this case is also isomorphic to
forall r . B -> V r
which is what we would get if we omitted the quantifier (and let Haskell re-add it implicitly at the top-level).
(By contrast, the quantifier in the type of forw :: (forall r . V r) -> B can not be moved to the top level.)
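A minimal Haskell witness of that isomorphism (the names shift and unshift are mine; since GHC 9.0's simplified subsumption the conversion is an explicit eta-expansion):

{-# LANGUAGE RankNTypes #-}

shift :: (forall a. t -> u a) -> (t -> forall a. u a)
shift f x = f x

unshift :: (t -> forall a. u a) -> (forall a. t -> u a)
unshift f x = f x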

`refold :: Functor s => (a -> s a, a) -> (s b -> b) -> b` as a morphism between universal types

Various recursion schemes boil down to specific instantiations of refold:
refold :: Functor s => (s b -> b) -> (a -> s a) -> a -> b
refold f g = go where go a = f (fmap go (g a))
What is the meaningful interpretation of refold?
The data types data Nu f = forall a. Nu (a -> f a) a and newtype Mu f = Mu {unMu :: forall b. (f b -> b) -> b} can be seen as the colimit and limit of the forgetful functors from coalgebras and algebras, and refold is a morphism between those, but does it shed light on refold?
refold' :: forall s. Functor s => Nu s -> Mu s
refold' (Nu g (a :: a)) = Mu mu
  where
    mu :: forall b. (s b -> b) -> b
    mu f = go a
      where
        go :: a -> b
        go a = f (fmap go (g a))
I guess it depends what you mean by "meaningful interpretation".
If s is a base functor for a recursive data type and a corecursive codata type, like the following functor s ~ ListF e for the recursive list data type [e] (which, in Haskell, is also a corecursive stream codata type):
{-# LANGUAGE DeriveFunctor #-}
data ListF e b = Nil | Cons e b deriving (Show, Functor)
then an s-coalgebra of type a -> s a together with a starting seed a can generate a value of codata type [e] by unfolding from that seed, while an s-algebra of type s b -> b can consume a value of data type [e] by folding into a value of type b. The refold function just combines the operation of unfolding from a and folding into b, without actually creating an intermediate codata/data type.
For example, you can generate the (finite) codata stream [10,9..1] by unfolding from an Integer seed using the starting value / coalgebra pair (a,g) as follows:
a :: Integer
a = 10
g :: Integer -> (ListF Integer) Integer
g 0 = Nil
g n = Cons n (n-1)
and fold a list to calculate its Int length using the algebra:
f :: (ListF Integer) Int -> Int
f Nil = 0
f (Cons _ b) = 1 + b
The refold function just combines these operations:
main = print $ refold f g a
In this particular case, it calculates the length 10 of the stream/list [10,9..1] without actually creating any intermediate stream/list.
I guess the intuition is that if an operation can be imagined as an F-recursion applied to an F-corecursion for the same functor F, then it's a refold. Or, maybe more practically, if an algorithm has an internal recursive structure that matches the functor F, it can be expressed as a refold. The documentation for refold in recursion-schemes gives the example of quicksort having a recursive structure that matches a binary tree, though you've presumably already seen that example.
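For concreteness, here is a sketch of that quicksort shape (the base functor TreeF and all names are mine, following the idea in the recursion-schemes documentation):

{-# LANGUAGE DeriveFunctor #-}

data TreeF e b = LeafF | NodeF b e b deriving Functor

refold :: Functor s => (s b -> b) -> (a -> s a) -> a -> b
refold f g = go where go a = f (fmap go (g a))

-- Unfold a (virtual) binary search tree, fold it back into a sorted
-- list; no intermediate tree is ever built.
qsort :: Ord e => [e] -> [e]
qsort = refold merge split
  where
    split []       = LeafF
    split (x : xs) = NodeF (filter (< x) xs) x (filter (>= x) xs)
    merge LeafF         = []
    merge (NodeF l x r) = l ++ [x] ++ r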
Note: What follows is wrong or at best imprecise, but I'll try to think a little more about it.
In practice, refold isn't only used as a morphism between universal data types, but if you have a final s-coalgebra for a codata type C associated with the functor s:
eatC :: C -> ListF Integer C
and an initial s-algebra for a data type D also associated with the functor s:
makeD :: ListF Integer D -> D
then refold makeD eatC should be a natural morphism from codata type C to data type D. That is, it should be the unique morphism satisfying:
fmap h . refold makeD eatC = refold makeD eatC . fmap h
I'm not sure that aspect is tremendously interesting...
A few remarks (which I imagine to be valid; don't hesitate to correct, I am not an expert on semantics):
Non-termination allows us to write anything, as @chi suggested.
Take s to be the identity functor, and refold reads refold :: (b -> b) -> (a -> a) -> a -> b, which certainly looks like a paradox. So for any Haskell type to be read "logically", we might need hidden side conditions.
We don't even need recursion to encounter paradox / non-termination:
-- N. P. Mendler. Recursive types and type constraints in second-order lambda calculus. In LICS, pages 30–36. IEEE Computer Society, 1987
data T a = C (T a -> ())
p :: T a -> (T a -> ())
p (C f) = f
w :: T a -> ()
w x = (p x) x
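Indeed, these definitions already yield a looping term by self-application:

omega :: ()
omega = w (C w)  -- w (C w) reduces to p (C w) (C w) = w (C w): no normal form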
Initial algebras, like monads and other concepts, appear at two levels: once in the semantics of the language, and once explicitly in the programs we write. For instance, the semantics of data ListInt = Nil | Cons Int ListInt is an initial algebra, and in Haskell it is, semantically, also a final coalgebra. This is the "Mu = Nu" that you might hear about, and this equation, which is paradoxical, lives in the semantics of Haskell. Mu here has nothing to do with data Mu f = ....
The same happens when we write type ListInt = Fix ListIntF where data Fix f = Fix (f (Fix f)): we are emulating that semantics in our program, but this is itself subject to the semantics of Haskell (and indeed this initial algebra is the same as the semantic Mu and Nu, and equal to both, because Haskell equates them). In a way, by writing data Mu f = ... and data Nu f = ... we are "stealing" (and are obliged to steal) part of Haskell's semantics, and mixing it with our own (properly expressing the universal cocone Nu f and the universal cone Mu f), in an attempt to provide an embedding for recursion (just as we do with HOAS, where we steal bindings from Haskell). But we can't get away with no paradox, as we are obliged to steal that "Mu = Nu".
This leads to very useful and yet very "illogical" functions like refold.
By writing fold :: Functor s => (s a -> a) -> Fix s -> a we are pretending the initial algebra of s always exists, which might translate into non-termination.
Categorically we can view refold in two different ways. It's a bit of a mouthful, but here we go:
refold can be seen as a proper function refold :: Functor s => (s b -> b) -> (a -> s a) -> a -> b to be detailed later
refold' can be seen as the carrier refold' :: forall s. Functor s => Nu s -> Mu s of an algebra in Twisted(Hask), whose objects are morphisms in Hask. So refold' is an object, not a morphism, of this category. Now every functor s on a category C (here Hask) induces a functor s' on Twisted(C) by applying to arrows. In the end the morphism in Twisted
`s' refold' -(out,in)-> refold'`
is the initial s'-algebra, where out is the "final" coalgebra Nu s -> s (Nu s) and in is the "initial" algebra s (Mu s) -> Mu s
Now the action of the function refold is, given a coalgebra and an algebra (here in Hask but could be elsewhere), to return the unique morphism from the carrier of the coalgebra, followed by refold', followed by the unique morphism from the initial algebra. This is a proper function that comes from selecting the components of the universal (co)cone at the given components.
This explains why, when we feed the final coalgebra out and the initial algebra in to refold, we get back refold' itself: the unique morphisms to pre- and post-compose refold' with are the identities.
It's a bit hard to see what's what, because we work in Hask, where everything is a function. Some morphisms are really about the category we work in (which could be something other than Hask), and some morphisms really are functions and would remain so even if we chose another category.
Because of non-termination, pinning down what refold really is requires being truthful to the semantics of Haskell and using complete partial orders (or restricting s in some way).
So I imagine the real meaning of refold can be deduced from the real meaning of refold', which is just an initial algebra, with all the standard caveats coming from Haskell's semantics threaded through.

Can a function be parametrically polymorphic over a non-nullary type constructor?

In Haskell, there are many examples of higher kinded polymorphism when dealing with ad hoc polymorphism, such as Monad and Functor. However, I cannot think of any examples of this for parametric polymorphism.
Is this possible, and if so, can I have an example of one which is useful?
If you still allow typeclass constraints, then the answer is, sure! E.g. I'd still call something like
normalise :: (Foldable f, Functor f, Fractional n) => f n -> f n
normalise v = fmap (/ sum v) v
parametric polymorphism. But I suppose that's not what you have in mind.
Another thing that's obviously possible is to just constrain types to have a particular form _ _, like
hmap :: (f a -> f b) -> [f a] -> [f b]
hmap = map
This isn't exactly remarkable, but it could possibly be useful in some applications as a subtle hint to the type checker. In fact, this is one way you can solve the phantom argument problem: instead of
class LengthyList l where
  minimumLength :: l a -> Int

instance LengthyList [] where
  minimumLength _ = 0

instance LengthyList NonEmpty where
  minimumLength _ = 1
you might make the signature
minimumLength :: p (l a) -> Int
aka
minimumLength :: proxy (l a) -> Int
Thereby you still pass in the type-information of l, but guarantee that the implementation cannot try to evaluate the argument at runtime.
The standard way to do this is however
minimumLength :: Tagged (l a) Int
or
minimumLength :: Proxy (l a) -> Int
Generally though, there's nothing you could do with f a that couldn't also be done with a plain type variable fa standing for the whole application, so essentially you could rewrite the type of any such higher-order parametric function to a first-order parametrically polymorphic one.
You totally can do this. A type synonym pigworker is fond of:
type f ~> g = forall a . f a -> g a
This, for reasons I don't actually know, represents a natural transformation (whatever exactly that is) between functors f and g. But a function can take an argument of type f ~> g and apply it to as many types f a as it likes. Using non-regular types (specifically, higher-order nested data types, as Ralf Hinze, for example, explored in Numerical Representations as Higher-Order Nested Datatypes), it could be applied to an unbounded number of different types.
A contrived example:
{-# LANGUAGE RankNTypes, TypeOperators #-}
type f ~> g = forall a . f a -> g a
data Two g a = Two (g a) (g a)
data Foo f a = This (f a)
             | That (Foo (Two f) a)

hello :: (f ~> g) -> Foo f a -> Foo g a
hello t (This fa) = This (t fa)
hello t (That buh) =
  That (hello (\(Two x y) -> Two (t x) (t y)) buh)
hello is polymorphic in the types f and g, each of which has kind * -> *. † I believe that converting this to use only types of kind * could require non-trivial refactoring.
†In fact, if you enable PolyKinds, f and g will each have the poly-kinded kind k -> *.
One example I'm quite fond of is the foldr operator for lists indexed by their length: it is parametrically polymorphic over a predicate p of kind Nat -> * and guarantees that if you apply it to a list of length m then you get back a proof of p m.
This corresponds to this type:
foldr :: forall a (p :: Nat -> *).
         (forall n. a -> p n -> p ('Succ n)) ->
         p 'Zero ->
         forall n. Vec a n -> p n
This extra precision makes it possible to implement e.g. append using foldr rather than having to proceed by pattern-matching.
append :: forall a m n. Vec a m -> Vec a n -> Vec a (m :+: n)
I've uploaded a complete gist with all the right language extensions turned on and the code corresponding to these types in case you want to peek.
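Along the lines of that gist, here is a self-contained sketch (all names are mine; foldr is renamed vfoldr to avoid the Prelude clash, and the newtype AppendTo plays the role of the predicate p):

{-# LANGUAGE DataKinds, GADTs, RankNTypes, ScopedTypeVariables, TypeFamilies, TypeOperators #-}

data Nat = Zero | Succ Nat

data Vec a n where
  VNil  :: Vec a 'Zero
  VCons :: a -> Vec a n -> Vec a ('Succ n)

type family (m :: Nat) :+: (n :: Nat) :: Nat where
  'Zero   :+: n = n
  'Succ m :+: n = 'Succ (m :+: n)

vfoldr :: forall a (p :: Nat -> *).
          (forall n. a -> p n -> p ('Succ n)) ->
          p 'Zero ->
          forall n. Vec a n -> p n
vfoldr c z = go
  where
    go :: forall n. Vec a n -> p n
    go VNil         = z
    go (VCons x xs) = c x (go xs)

-- The "predicate": AppendTo a n m holds a function Vec a n -> Vec a (m :+: n).
newtype AppendTo a n m = AppendTo { runAppendTo :: Vec a n -> Vec a (m :+: n) }

append :: forall a m n. Vec a m -> Vec a n -> Vec a (m :+: n)
append xs = runAppendTo (vfoldr step (AppendTo id) xs)
  where
    step :: forall k. a -> AppendTo a n k -> AppendTo a n ('Succ k)
    step x (AppendTo f) = AppendTo (VCons x . f)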

How does lifting (in a functional programming context) relate to category theory?

Looking at the Haskell documentation, lifting seems to be basically a generalization of fmap, allowing for the mapping of functions with more than one argument.
The Wikipedia article on lifting gives a different view however, defining a "lift" in terms of a morphism in a category, and how it relates to the other objects and morphisms in the category (I won't give the details here). I suppose that could be relevant to the Haskell situation if we are considering Cat (the category of categories, thus making our morphisms functors), but I can't see how this category-theoretic notion of a lift relates to the one in Haskell, based on the linked article, if it does at all.
If the two concepts aren't really related, and just have a similar name, are the lifts (category theory) used in Haskell at all?
Lifts, and the dual notion of extensions, are absolutely used in Haskell, perhaps most prominently in the guise of comonadic extend and monadic bind. (Confusingly, extend is a lift, not an extension.) A comonad w's extend lets us take a function w a -> b and lift it along extract :: w b -> b to get a map w a -> w b. In ASCII art, given the diagram
        w b
         |
         V
w a ---> b
where the vertical arrow is extract, extend gives us a diagonal arrow (making the diagram commute):
      -> w b
     /    |
    /     V
w a ---> b
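A concrete instance of this picture, hand-rolled for the pair comonad (,) e so as not to depend on the comonad package:

-- extract and extend for the pair comonad w = (,) e
extract' :: (e, a) -> a
extract' = snd

extend' :: ((e, a) -> b) -> (e, a) -> (e, b)
extend' f w@(e, _) = (e, f w)

-- The diagram commutes: extract' (extend' f w) == f w,
-- so extend' f is a lift of f along extract'.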
More familiar to most Haskellers is the dual notion of bind (>>=) for a monad m. Given a function a -> m b and return :: a -> m a, we can "extend" our function along return to get a function m a -> m b. In ASCII art:
a ---> m b
|
V
m a
gives us
a ---> m b
|       __A
V      /
m a ---
(That A is an arrowhead!)
So yes, extend could have been called lift, and bind could have been called extend. As for Haskell's lifts, I have no idea why they're called that!
EDIT: Actually, I think that again, Haskell's lifts are actually extensions. If f is applicative, and we have a function a -> b -> c, we can compose this function with pure :: c -> f c to get a function a -> b -> f c. Uncurrying, this is the same as a function (a, b) -> f c. Now we can also hit (a, b) with pure to get a function (a, b) -> f (a, b). Then, by fmapping fst and snd, we get functions f (a, b) -> f a and f (a, b) -> f b, which we can combine to get a function f (a, b) -> (f a, f b). Composing with our pure from before gives (a, b) -> (f a, f b). Phew! So to recap, we have the ASCII art diagram
(a, b) ---> f c
   |
   V
(f a, f b)
Now liftA2 gives us a function (f a, f b) -> f c, which I won't draw because I'm sick of making terrible diagrams. But the point is, the diagram commutes, so liftA2 actually gives us an extension of the horizontal arrow along the vertical one.
"Lifting" comes up many times in functional programming, not only in fmap but in many other contexts. Examples of "liftings" include:
fmap :: (a -> b) -> F a -> F b where F is a functor
cmap :: (b -> a) -> F a -> F b where F is a contrafunctor
bind :: (a -> M b) -> M a -> M b where M is a monad
ap :: F (a -> b) -> F a -> F b where F is an applicative functor
point :: (_ -> a) -> _ -> F a where F is a pointed functor
filtMap :: (a -> Maybe b) -> F a -> F b where F is a filterable functor
extend :: (M a -> b) -> M a -> M b where M is a comonad
Other examples include applicative contrafunctor, filterable contrafunctor, and co-pointed functor.
All these type signatures are similar in one way: they map one kind of function between a and b into another kind of function between a and b.
In these different cases, the function types are not simply a -> b but have some kind of "twisted" types: e.g. a -> M b or F (a -> b) or M a -> b or F a -> F b and so on. However, each time the laws are very similar: twisted function types need to have identity and composition laws, and twisted composition needs to be associative.
For example, for applicative functors, we need to be able to compose functions of type F (a -> b). So we need to define a special "twisted" identity function (pure id :: F (a -> a) ) and a "twisted" composition operation, call it apcomp, with type signature F (a -> b) -> F (b -> c) -> F (a -> c). This operation needs to have identity and associativity laws. The ap operation needs to have identity and composition laws ("twisted identity maps to twisted identity" and "twisted composition maps to twisted composition").
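In Haskell this twisted composition can be written down directly; a sketch (apcomp as named in the text, one possible definition):

-- Twisted identity and twisted composition for an applicative functor
apid :: Applicative f => f (a -> a)
apid = pure id

apcomp :: Applicative f => f (a -> b) -> f (b -> c) -> f (a -> c)
apcomp fab fbc = (.) <$> fbc <*> fab

-- "Twisted composition maps to composition": by the applicative laws,
-- (apcomp fab fbc <*>) behaves as (fbc <*>) . (fab <*>).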
Once we go through all these examples and derive the laws, we can prove that the laws turn out to be the same in all cases, if we formulate the laws via the "twisted" operations.
This is because we can formulate all these operations as functors in the sense of category theory. For example, for the applicative functor, we define two categories: the F-applicative category (objects a, b, ..., morphisms F(a -> b)) and the F-lifted category (objects F a, F b, ..., morphisms F a -> F b). A functor between these two categories requires us to have a lifting of morphisms, ap :: F(a -> b) -> F a -> F b. The laws of ap are completely equivalent to the standard laws of that functor.
Similar arguments hold for other typeclasses. We need to define categories, morphisms, composition operations, identity morphisms, and functors in each case. Once we verify that the laws hold, we will see that each of these typeclasses has an associated pair of categories and a functor between them, such that the laws of the typeclass are equivalent to the laws of these categories and the functor.
What have we gained? We have formulated the laws of many typeclasses in the same way (as the laws of categories and functors). This is a great economy of thought: we don't need to memorize all these laws each time; we can just memorize which categories and which functors need to be written down for each typeclass, as long as the methods of the typeclass can be reduced to some kind of "twisted lifting".
In this way, we can say that "liftings" are important and provide an application of category theory in functional programming.
I have made a presentation about this (https://www.youtube.com/watch?v=Zau8CxsfxOo), and I'm writing a new free book where all derivations will be shown: https://github.com/winitzki/sofp

What are the values of a polymorphically encoded recursive algebraic data type?

The following question relates to Recursive algebraic data types via polymorphism in Haskell.
Recursive algebraic data types can be realized in any language with the capabilities of System F using universal parametric polymorphism. For example, the type of natural numbers can be introduced (in Haskell) as
newtype Nat = Nat { runNat :: forall t. (t -> (t -> t) -> t) }
with the 'usual' natural number n being realized as
\ x0 f -> f(f(...(f x0)...))
with n iterations of f used.
Similarly, the type of Booleans can be introduced as
newtype Bool = Bool { runBool :: forall t. t -> t -> t }
with the expected values 'true' and 'false' being realized as
true = \ t f -> t
false = \ t f -> f
Q: Are all terms of type Bool or Nat or any other potentially recursive algebraic data type (encoded in this way) of the above form, up to some reduction rules of operational semantics?
Example 1 (Natural numbers): Is any term of type forall t. t -> (t -> t) -> t 'equivalent' in some sense to a term of the form \ x0 f -> f (f ( ... (f x0) ... ))?
Example 2 (Booleans): Is any term of type forall t. t -> t -> t 'equivalent' in some sense to either \ t f -> t or \ t f -> f?
Addendum (internal version): In case the language under consideration is even capable of expressing propositional equality, this meta-mathematical question could be internalized as follows, and I would be very happy if someone would come up with a solution for it:
For any functor m we can define the universal module and some decoding-encoding function on it as follows:
type ModStr m t = m t -> t
newtype UnivMod m = UnivMod { univProp :: forall t. (ModStr m t) -> t }
classifyingMap :: forall m. forall t. (ModStr m t) -> (UnivMod m -> t)
classifyingMap f = \ x -> (univProp x) f
univModStr :: (Functor m) => ModStr m (UnivMod m)
univModStr = \ f -> UnivMod $ \ g -> g (fmap (classifyingMap g) f)
dec_enc :: (Functor m) => UnivMod m -> UnivMod m
dec_enc x = (univProp x) univModStr
Q: In case the language is capable of expressing this: is the equality type dec_enc = id inhabited?
In System F (AKA λ2), all inhabitants of ∀α.α→α→α are indeed λ-equal to K or K*.
First, if M : ∀α.α→α→α then it has a normal form N (since System F is normalizing), and by the subject reduction theorem (see Barendregt: Lambda Calculi with Types) also N : ∀α.α→α→α.
Let's examine what these normal forms can look like. (We'll be using the Generation lemma for λ2; see Barendregt's book for formal details.)
If N is a normal form, then N (and any of its subexpressions) must be in head normal form, that is, an expression of the form λx1 ... xn. y P1 ... Pk, where n and/or k can also be 0.
For the case of N, there must be at least one λ, because initially we don't have any variable bound in the typing context that would take the place of y. So N = λx.U and x:α |- U:α→α.
Now again there must be at least one λ in the case of U, because if U were just y P1 ... Pk then y would have a function type (even for k=0 we'd need y:α→α), but we have just x:α in the context. So N = λxy.V and x:α, y:α |- V:α.
But V can't be λ.., because then it'd have function type τ→σ. So V must be just of the form z P1 ... Pk, but since we don't have any variable of function type in the context, k must be 0 and therefore V can be only x or y.
So there are only two terms in normal form of type ∀α.α→α→α: λxy.x and λxy.y and all other terms of this type are β-equal to one of these.
Using similar reasoning we can prove that all inhabitants of ∀α.α→(α→α)→α are β-equal to a Church numeral. (And I think that for type ∀α.(α→α)→α→α the situation is slightly worse; we also need η-equality, as λf.f and λfx.fx correspond to 1, but aren't β-equal, just βη-equal.)
If we disregard bottoms and unsafe stuff, then the only thing you can do universally with functions a -> a is compose them. However, that doesn't quite stop us at finite f (f ( ... (f x0) ... )) expressions: we also have the infinite composition infty x f = f $ infty x f.
Similarly, the only non-recursive boolean values are indeed \t _ -> t and \_ f -> f, but you can also tie knots here, like
blarg t f = blarg (blarg t f) (blarg f t)
