I tried to write the following code:
{-# LANGUAGE GADTs #-}
module V where
data V a where
  V :: (c -> a) -> V a

down :: V (V a) -> V a
down (V f) = V $ \(c,d) -> case f c of
  V f' -> f' d
GHC then answered that the type variable `c' would escape its scope.
I understand why it doesn't compile: it uses the hidden type of the existential outside the case expression.
But the type is still hidden by V afterwards, so I think the down function is essentially fine.
Is there a way to write a compilable down function?
Here's the fundamental problem: f can have a peek at c and use the value of c to determine which type to hide in its existential. For example:
v :: V (V Int)
v = V $ \p -> case p of
  False -> V (id :: Int -> Int)
  True -> V (fromEnum :: Char -> Int)
So d would need to be both a valid Int and a valid Char if we called down v! To be able to feed an existential that can be so variable, you'll need to ensure that its argument can take on all the types it may demand.
newtype Forall = Forall (forall a. a)
down :: V (V a) -> V a
down (V f) = V $ \(c, d_) -> case f c of
  V f' -> case d_ of
    Forall d -> f' d
In Haskell, I can't find a simple way to make your code work.
I find it interesting though, that your idea does work in a language with full dependent types like Coq (and likely Agda, Idris, etc.).
The main crux, as Daniel Wagner points out, is that the type resulting from f can depend on the value of c, so the pair (c,d) in the original code should be a dependent pair.
For what it is worth, here's how we can do it in Coq.
Note that this does not involve an uninhabited type like forall a. a.
(* An existential type, under an impredicative encoding *)
Inductive V (A: Type): Type :=
Vk : forall (B: Type), (B -> A) -> V A
.
(* The usual "identity to equivalence" *)
Definition subst {A B: Type} (p: A = B) (x: A): B :=
  match p with
  | eq_refl => x
  end.
(* The main function.
Essentially, we want to turn
Vk B (fun b => Vk C g)
into
Vk (B*C) (fun (b,c) => g c)
but both C and g can depend on (b:B), so (B*C)
should be a Sigma type {b:B & ty b}.
*)
Definition down (A: Type) (x: V (V A)): V A :=
  match x with
  | Vk B f =>
      let ty (z: V A): Type := match z with | Vk C g => C end
      in Vk A {b:B & ty (f b)} (fun w =>
           match w with
           | existT b y =>
               match f b as o return ty (f b) = ty o -> A with
               | Vk C g => fun (h: ty (f b) = C) => g (subst h y)
               end eq_refl
           end)
  end.
Thanks for another great answer, chi!
I rewrote the code in Agda, and it does indeed compile. As an addendum to the answer above, I place my code here.
module down where
open import Level
open import Data.Product
data V {ℓ} (A : Set ℓ) : Set (suc ℓ) where
  Vk : {B : Set} → (B → A) → V A

down : ∀ {ℓ} {A : Set ℓ} → V (V A) → V A
down {ℓ} {A} (Vk {B} f) = Vk go where
  ty : V A → Set
  ty (Vk {C} _) = C
  go : Σ B (λ b → ty (f b)) → A
  go (b , c) with f b
  go (b , c) | Vk {C} g = g c
I'm learning Haskell by working through the fp-course exercises. One question is blocking my way: I don't know how Haskell infers the type of lift2 (<$>) (,), which turns out to be Functor k => (a1 -> k a2) -> a1 -> k (a1, a2).
I have worked out lift2 (<$>)'s type myself and verified it with GHCi's :t lift2 (<$>) command. The steps are as follows.
I know lift2 :: Applicative k => (a -> b -> c) -> k a -> k b -> k c
I also know (<$>) :: Functor f => (m -> n) -> (f m) -> (f n)
Then, by beta conversion from lambda calculus, I can figure out that lift2 (<$>)'s type is
(Applicative k, Functor f) => k (m -> n) -> k (f m) -> k (f n), by replacing a with (m -> n), b with (f m), and c with (f n).
When I go on to figure out lift2 (<$>) (,)'s type, I get stuck.
I know (,) :: a -> b -> (a,b)
And lift2 (<$>) :: (Applicative k, Functor f) => k (m -> n) -> k (f m) -> k (f n).
How does Haskell apply lift2 (<$>) to (,)?
The first argument of lift2 (<$>) has type Applicative k => k (m -> n).
The value to be applied is (,) :: a -> b -> (a, b).
How do k, m, n get replaced in terms of a and b?
GHCi's answer to :t lift2 (<$>) (,) is lift2 (<$>) (,) :: Functor k => (a1 -> k a2) -> a1 -> k (a1, a2). I cannot derive this answer by myself.
So I have 2 questions.
1. Could someone show me the inference step by step?
2. In this case the conversion doesn't seem to be beta conversion from lambda calculus (maybe I am wrong). What conversion is it?
Type derivation is a mechanical affair.(*) The key is that the function arrow -> is actually a binary operator here, associating on the right (while the application / juxtaposition associates on the left).
Thus A -> B -> C is actually A -> (B -> C) is actually (->) A ((->) B C) is actually ((->) A) (((->) B) C). In this form it is clear that it consists of two parts so can match up with e.g. f t, noting the equivalences f ~ ((->) A) and t ~ (((->) B) C) (or in pseudocode f ~ (A ->), and also t ~ (B -> C) in normal notation).
When "applying" two type terms a structural unification is performed. The structures of two terms are matched up, their sub-parts are matched up, and the resulting equivalences are noted as "substitutions" (... ~ ...) available to be performed and ensured in further simplifications of the resulting type terms (and if some incompatibility were to be thus discovered, the type would be then rejected).
This follows a general structure / type derivation rule rooted in the logical rule of Modus Ponens:
A -> B      C
-------------
      B      , where A ~ C
And thus,
liftA2 :: A f => ( a -> b -> c ) -> f a -> f b -> f c
(<$>) :: F h => (d -> e) -> h d -> h e
(,) :: s -> (t -> (s, t))
---------------------------------------------------------------------------------
liftA2 (<$>) (,) :: f b -> f c
---------------------------------------------------------------------------------
b ~ h d f ~ (s->)
a ~ d->e c ~ h e a ~ t->(s,t)
\_ _ _ _ _ _ _ _ _ _ _ _ _ _ a ~ d->e
----------------------------------------------------
d ~ t e ~ (s,t)
liftA2 (<$>) (,) :: f b -> f c
~ (s -> b ) -> (s -> c )
~ F h => (s -> h d) -> (s -> h e )
~ F h => (s -> h t) -> (s -> h (s,t))
(writing A for Applicative and F for Functor, as an abbreviation). The substitutions stop when there are no more type variables to substitute.
There's some freedom as to which type variables are chosen to be substituted on each step, but the resulting terms will be equivalent up to consistent renaming of the type variables, anyway. For example we could choose
~ F h => (s -> h d) -> (s -> h e )
~ F h => (s -> h d) -> (s -> h (s,t))
~ F h => (s -> h d) -> (s -> h (s,d))
The Applicative ((->) s) constraint was discovered in the process. It checks out since this instance exists for all s. We can see it by typing :i Applicative at the prompt in GHCi. Looking through the list of instances it prints, we find instance Applicative ((->) a) -- Defined in `Control.Applicative'.
If there were no such instance the type derivation would stop and report the error, it wouldn't just skip over it. But since the constraint holds, it just disappears as it does not constrain the derived type, Functor h => (s -> h t) -> (s -> h (s,t)). It's already "baked in".
The instance defines (f <*> g) x = f x $ g x but the definition itself is not needed in type derivations, only the fact that it exists. As for the liftA2, it is defined as
liftA2 h f g x = (h <$> f <*> g) x -- for any Applicative (sans the `x`)
= (h . f <*> g) x -- for functions
= (h . f) x (g x)
= f x `h` g x -- just another combinator
(yes, (<*>) = liftA2 ($) ), so
liftA2 (<$>) (,) g s = (,) s <$> g s
= do { r <- g s -- in pseudocode, with
; return (s, r) -- "Functorial" Do
}
Or in other words, liftA2 (<$>) (,) = \ g s -> (s ,) <$> g s.
With the type Functor m => (s -> m t) -> s -> m (s,t). Which is what we have derived.
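As a sanity check, the derived type and behaviour can be reproduced in a small runnable module (using the standard liftA2 from Control.Applicative; the fp-course lift2 behaves the same way, and pairUp is just an ad-hoc name):

```haskell
import Control.Applicative (liftA2)

-- the derived type, spelled out; GHC accepts this signature for liftA2 (<$>) (,)
pairUp :: Functor m => (s -> m t) -> s -> m (s, t)
pairUp = liftA2 (<$>) (,)

main :: IO ()
main = do
  print (pairUp Just 'a')                           -- Just ('a','a')
  print (pairUp (\x -> [x + 1, x + 2]) (10 :: Int)) -- [(10,11),(10,12)]
```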
(*) See also:
Haskell: how to infer the type of an expression manually
Prism is like Iso except one of the two conversions is partial. Is there an optic where both conversions are partial?
Of course one can create a type (s -> Maybe a, b -> Maybe t), but I'm wondering if such a thing could be expressed as an Optic _ _?
You can generalize Isos (i.e., (s -> a, b -> t)) into (s -> m a, b -> m t) by replacing profunctors over Hask (i.e., Profunctor, that's the constraint in the definition of Iso as an Optic) with profunctors over Kleisli categories (here for the Maybe monad).
class Monad m => KProfunctor m p where
  dimapM :: (s -> m a) -> (b -> m t) -> p a b -> p s t
  -- dimapM pure pure = id
  -- dimapM f g . dimapM h i = dimapM (f >=> h) (i >=> g)
type Optic p s t a b = p a b -> p s t
type KIso m s t a b = forall p. KProfunctor m p => Optic p s t a b
To construct one example of such a profunctor, take the type of pseudo-isos (s -> m a, b -> m t) that we are trying to work with in the first place, and put s and t as the main indices:
data PseudoIso m a b s t = MkPseudoIso
  { toM :: s -> m a
  , fromM :: b -> m t
  }
instance Monad m => KProfunctor m (PseudoIso m a b) where
  -- exercise for the reader
To go from PseudoIso to KIso, use dimapM (the fields of PseudoIso are exactly the right type for the arguments of dimapM)
To go from KIso to PseudoIso, partially apply to the identity PseudoIso a b a b
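For the record, the instance left as an exercise can be sketched as follows. The profunctor here is PseudoIso m a b, varying in its last two indices; everything below is a self-contained sketch, and demo is a made-up example in the Maybe monad:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}
import Control.Monad ((>=>))
import Text.Read (readMaybe)

class Monad m => KProfunctor m p where
  dimapM :: (s -> m a) -> (b -> m t) -> p a b -> p s t

data PseudoIso m a b s t = MkPseudoIso
  { toM   :: s -> m a
  , fromM :: b -> m t
  }

-- pre-/post-compose the stored partial conversions, using Kleisli composition
instance Monad m => KProfunctor m (PseudoIso m u v) where
  dimapM f g (MkPseudoIso to from) = MkPseudoIso (f >=> to) (from >=> g)

-- made-up demo in Maybe: start from the identity pseudo-iso on Int,
-- then parse Strings on the way in and print on the way out
demo :: PseudoIso Maybe Int Int String String
demo = dimapM readMaybe (Just . show) (MkPseudoIso Just Just)

main :: IO ()
main = do
  print (toM demo "42")    -- Just 42
  print (toM demo "nope")  -- Nothing
  print (fromM demo 7)     -- Just "7"
```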
Actually, it doesn't have to be a Kleisli category. A profunctor over any category (:->) :: k -> k -> Type will give you a class of optics of the form (s :-> a, b :-> t).
Note: you can define an instance of Choice from KProfunctor Maybe, so maybe everything should really be specialized to Maybe; then Choice could reasonably be added as a superclass of KProfunctor, and KIso would be a subtype of Prism.
Let us look at the profunctor encoding; it's simpler.
Choice is the class for Prisms; we are making a subclass of Prisms, so Choice is a natural choice for a superclass:
class Choice p => Weird p where
  weird :: (s -> Maybe a) -> (b -> Maybe t) -> p a b -> p s t
If we try to write an instance for p = (->), it won't work.
So our new kind of optic is not a superclass of Setter.
The hierarchy would at best probably look something like this (can a Traversal, and thus a Lens, be turned into the new Weird optic, maybe?):
Lens
/ \
Iso Traversal -> Setter
\ /
Prism
\
Weird
Let's try it with other concrete Profunctors.
I'll use the types as defined in my blog post: Glassery.
ForgetM is used to implement preview:
type Optic' p s a = p a a -> p s s
preview :: Optic' (ForgetM a) s a -> s -> Maybe a
preview o = runForgetM (o (ForgetM Just))
newtype ForgetM r a b = ForgetM { runForgetM :: a -> Maybe r }

instance Profunctor (ForgetM r) where
  dimap f _ (ForgetM p) = ForgetM (p . f)

instance Choice (ForgetM r) where
  right' (ForgetM p) = ForgetM (either (const Nothing) p)

instance Weird (ForgetM r) where
  weird sa _bt (ForgetM ab) = ForgetM $ \s -> sa s >>= ab
TaggedM can be used to define something in the opposite direction (not in Glassery):
repreview :: Optic' TaggedM s a -> a -> Maybe s
repreview o a = unTaggedM (o (TaggedM (Just a)))
newtype TaggedM a b = TaggedM { unTaggedM :: Maybe b }

instance Profunctor TaggedM where
  dimap _sa bt (TaggedM b) = TaggedM (fmap bt b)

instance Choice TaggedM where
  right' (TaggedM b) = TaggedM (fmap Right b)

instance Weird TaggedM where
  weird _sa bt (TaggedM b) = TaggedM (b >>= bt)
We can now try this. The simple cases work:
*Main> preview (weird Just Just) 'x'
Just 'x'
*Main> repreview (weird Just Just) 'x'
Just 'x'
Prisms can be used as the new thing (here right' = _Right):
*Main> preview right' (Left 'x')
Nothing
*Main> preview right' (Right 'x')
Just 'x'
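For reference, the session above can be reproduced in a self-contained module that declares minimal Profunctor and Choice classes locally instead of importing the profunctors package (class methods trimmed to just what the demo needs):

```haskell
class Profunctor p where
  dimap :: (s -> a) -> (b -> t) -> p a b -> p s t

class Profunctor p => Choice p where
  right' :: p a b -> p (Either c a) (Either c b)

class Choice p => Weird p where
  weird :: (s -> Maybe a) -> (b -> Maybe t) -> p a b -> p s t

type Optic' p s a = p a a -> p s s

newtype ForgetM r a b = ForgetM { runForgetM :: a -> Maybe r }

instance Profunctor (ForgetM r) where
  dimap f _ (ForgetM p) = ForgetM (p . f)

instance Choice (ForgetM r) where
  right' (ForgetM p) = ForgetM (either (const Nothing) p)

instance Weird (ForgetM r) where
  weird sa _bt (ForgetM ab) = ForgetM $ \s -> sa s >>= ab

preview :: Optic' (ForgetM a) s a -> s -> Maybe a
preview o = runForgetM (o (ForgetM Just))

main :: IO ()
main = do
  print (preview (weird Just Just) 'x')                  -- Just 'x'
  print (preview right' (Left 'x' :: Either Char Char))  -- Nothing
  print (preview right' (Right 'x' :: Either Char Char)) -- Just 'x'
```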
There's also a nice symmetric Profunctor:
newtype Re p s t a b = Re { runRe :: p b a -> p t s }

instance Profunctor p => Profunctor (Re p s t) where
  dimap f g (Re p) = Re (p . dimap g f)

instance Cochoice p => Choice (Re p s t) where
  right' (Re p) = Re (p . unright)

instance Choice p => Cochoice (Re p s t) where
  unright (Re p) = Re (p . right')
and we can write Weird instance for it:
instance Weird p => Weird (Re p s t) where
  weird sa bt (Re p) = Re (p . weird bt sa)
And we notice that we need to add Cochoice as a superclass of Weird:
class (Choice p, Cochoice p) => Weird p where
  weird :: (s -> Maybe a) -> (b -> Maybe t) -> p a b -> p s t
That starts to look promising.
The van Laarhoven encoding: that's tricky.
Compare Prism in the profunctor and VL encodings:
type PrismVL s t a b = forall f p. (Choice p, Applicative f) => p a (f b) -> p s (f t)
type PrismP s t a b = forall p. (Choice p) => p a b -> p s t
A good start is to assume that this Weird thing would look like
type WeirdOptic s t a b = forall p f. (Weird p, Applicative f) => Optic p f s t a b
Maybe we would also need to strengthen the f constraint a bit.
But I'll leave something for you to experiment with.
An open question is also what the intuition behind this new Weird optic is,
and what laws it should have (to be an optic, and not just two functions
smashed together). It feels like trying to do a similar Monad / Kleisli optic
with anything fancier than Maybe would be difficult, but maybe it would work out too.
Here's a solution in lens-style optics (rather than profunctor optics as in the other answers) using the Filterable type-class:
-- A partial variant of (#) for partial reviews
infixr 8 #?
(#?) :: Optic Tagged Maybe s t a b -> b -> Maybe t
f #? b = Just b & Tagged & f & unTagged
-- A Prism "turned around", i.e. a getter but a partial review
type InvPrism s t a b =
  forall p f. (Profunctor p, Filterable f) => Optic p f s t a b

-- A partial isomorphism, i.e. a partial getter and a partial review
type PartialIso s t a b =
  forall p f. (Choice p, Applicative f, Filterable f) => Optic p f s t a b

-- Turn APrism around
invPrism :: APrism b a t s -> InvPrism s t a b
invPrism p =
  dimap
    (review (reviewing (clonePrism p)))
    (mapMaybe (^? getting (clonePrism p)))

-- Create a PartialIso from two partial conversions
partialIso :: (s -> Maybe a) -> (b -> Maybe t) -> PartialIso s t a b
partialIso sma ams =
  dimap
    (maybe (Left ()) Right . sma)
    (catMaybes . either (const (pure Nothing)) (fmap ams)) .
  right'

-- Coerce APrism to an Optic'
reviewing ::
  (Choice p, Bifunctor p, Functor f, Settable f) =>
  APrism s t a b -> Optic' p f t b
reviewing p =
  bimap f (fmap f)
  where
    f = runIdentity . unTagged . clonePrism p . Tagged . Identity
Consider the function
f g h x y = g (g x) (h y)
What is its type? Obviously I can just use :t f to find out, but if I need to deduce this manually, what's the best way to go about this?
The method I have been shown is to assign types to parameters and deduce from there - e.g. x :: a, y :: b gives us that g :: a -> c and h :: b -> d for some c,d (from g x, h y) and then we keep on making deductions from there (c = a from g (g x) (h y) etc.).
However this sometimes just turns into a huge mess and often I'm not sure how to make further deductions or work out when I'm done. Other problems sometimes happen - for instance, in this case x will turn out to be a function, but that was not obvious to me before cheating and looking up the type.
Is there a specific algorithm that will always work (and is reasonable for a human to execute quickly)? Otherwise, are there some heuristics or tips that I am missing?
Let's inspect the function at the top level:
f g h x y = g (g x) (h y)
We will begin by assigning names to types, then going along and specialising them as we learn more about the function.
Firstly, let's assign a type to the top expression. Let's call it a:
g (g x) (h y) :: a
Let's take the first argument out and assign types respectively:
-- 'expanding' (g (g x)) (h y) :: a
h y :: b
g (g x) :: b -> a
And again
-- 'expanding' g (g x) :: b -> a
g x :: c
g :: c -> b -> a
And again
-- 'expanding' g x :: c
x :: d
g :: d -> c
But hold on: we now have that g :: c -> b -> a and that g :: d -> c. So by inspection, we know that c and d are equivalent (written c ~ d) and also that c ~ b -> a.
This can be inferred by simply comparing the two types for g that we have inferred. Note that this is not a type contradiction, since the type variables are general enough to fit their equivalents. This would be a contradiction if we had inferred, for instance, that Int ~ Bool somewhere.
So we now have the following information in total: (a little work omitted)
y :: e
h :: e -> b
x :: b -> a -- Originally d, applied d ~ b -> a.
g :: (b -> a) -> b -> a -- Originally c -> b -> a, applied c ~ b -> a
This was done by substituting the most specific form of each type variable, that is substituting c and d for the more specific b -> a.
So, simply inspecting which arguments go where, we see that
f :: ((b -> a) -> b -> a) -> (e -> b) -> (b -> a) -> e -> a
This is confirmed by GHC.
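To double-check, we can paste the inferred signature back in and exercise it on concrete arguments; the instantiation g = ($), h = id, x = (+ 1) below is arbitrary:

```haskell
f :: ((b -> a) -> b -> a) -> (e -> b) -> (b -> a) -> e -> a
f g h x y = g (g x) (h y)

main :: IO ()
main = print (f ($) id (+ 1) (5 :: Int))  -- 6
```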
Well the function is:
f g h x y = g (g x) (h y)
or more verbose:
f g h x y = (g (g x)) (h y)
Initially we assume that all four parameters (g, h, x, and y) have different types. We also introduce an output type for our function (here t):
g :: a
h :: b
x :: c
y :: d
f g h x y :: t
But now we are going to perform some inference. We see for example g x, so this means that there is a function application with g the function, and x the parameter. This means that g is a function, with as input type c, so we redefine the type of g to:
g :: a ~ (c -> e)
h :: b
x :: c
y :: d
f g h x y :: t
(here the tilde ~ means that two types are the same, so a is the same as c -> e).
Since g has type g :: c -> e, and x has type c, this thus means that the result of the function application g x has type g x :: e.
We see another function application, g as function, and g x as argument. So this means that the input type of g (which is c), should be equal to the type of g x (which is e), hence we know that c ~ e, so the types now are:
c ~ e
g :: a ~ (c -> c)
h :: b
x :: c
y :: d
f g h x y :: t
Now we see a function application with h the function, and y the argument. So that means that h is a function, and the input type is the same as the type of y :: d, so h has type d -> f, so that means:
c ~ e
g :: a ~ (c -> c)
h :: b ~ (d -> f)
x :: c
y :: d
f g h x y :: t
Finally we see a function application with g (g x) the function, and h y the argument, so that means that the output type of g (g x) :: c should be a function, with f as input type, and t as output type, so that means that c ~ (f -> t), and therefore:
c ~ e
c ~ (f -> t)
g :: a ~ (c -> c) ~ ((f -> t) -> (f -> t))
h :: b ~ (d -> f)
x :: (f -> t)
y :: d
f g h x y :: t
So that means that since f has those parameters g, h, x and y, the type of f is:
f :: ((f -> t) -> (f -> t)) -> (d -> f) -> (f -> t) -> d -> t
--   \____________________/    \______/    \______/    |
--             g                  h           x        y
You already described how to do it, but maybe you missed the unification step. That is, sometimes we know that two variables are the same:
x :: a
y :: b
g :: a -> b -- from g x
h :: c -> d -- from h y
a ~ b -- from g (g x)
We know that a and b are the same, because we passed g x, which is a b, to g, which expects an a. So now we replace all the bs with a, and keep going until we have considered all subexpressions...
With regard to your "huge mess" comment, I have a couple things to say:
This is the way to do it. If it's too hard, you just need to practice, and it will get easier. You will start to develop an intuition and it will come more easily.
This particular function is not an easy function to do. I've been programming Haskell for 12 years and I still need to go through the unification algorithm on paper for this one. The fact that it is so abstract doesn't help -- if I knew what this function's purpose was it would be much easier.
Simply write down all the entities' types under them:
f g h x y = g (g x) (h y)
x :: x y :: y
h :: y -> a , h y :: a
g :: x -> b , g x :: b
g :: b -> (a -> t) , x ~ b , b ~ (a -> t)
f :: (x -> b) -> (y -> a) -> x -> y -> t , x ~ b , b ~ (a -> t)
f :: (b -> b) -> (y -> a) -> b -> y -> t , b ~ (a -> t)
-- g h x y
Thus f :: ((a -> t) -> (a -> t)) -> (y -> a) -> (a -> t) -> y -> t. That's all.
Indeed,
~> :t let f g h x y = g (g x) (h y) in f
:: ((t1 -> t) -> t1 -> t) -> (t2 -> t1) -> (t1 -> t) -> t2 -> t
This goes like this:
x must have some type, let's call it x: x :: x.
y must have some type, let's call it y: y :: y.
h y must have some type, let's call it a: h y :: a. hence h :: y -> a.
g x must have some type, let's call it b: g x :: b. hence g :: x -> b.
g _ _ must have some type, let's call it t. hence g :: b -> a -> t.
which is the same as g :: b -> (a -> t).
the two type signatures for g must unify, i.e. be the same under some substitution of type variables involved, since the two signatures describe the same entity, g.
thus we must have x ~ b, b ~ (a -> t). This is the substitution.
Having all the types of the arguments to f, we know it produces what g produces, i.e. t. So we can write down its type, (x -> b) -> (y -> a) -> x -> y -> t.
Lastly, we substitute the types according to the substitution, to reduce the number of type variables involved. Thus we substitute b for x first, and then a -> t for b, each time removing the eliminated type variable from the substitution.
When the substitution is empty, we are DONE.
Of course we could have chosen to replace b with x at first, ending up with the substitution x ~ (a -> t), and then we'd end up with the same type in the end, if we always replace the simpler types with the more complex ones (like, replacing b with (a -> t), and not vice versa).
Simple steps, guaranteed results.
Here's another attempt at shorter / clearer derivation. We focus on the fact that g x serves as g's argument, thus g x :: x (and the trivial part still remains, h y :: a):
f g h x y = g (g x) (h y) {- g :: g , h :: h , x :: x , y :: y
g h x y x y , g x :: x -- !
x a t , g x a :: t
x a :: t ... x ~ a->t
f :: g ->h ->x ->y->t
f :: (x ->x )->(y->a)->x ->y->t
f :: ((a->t)->a->t)->(y->a)->(a->t)->y->t -}
Pretty simple after all.
The last argument in the definition can be elided, as f g h x = (g . g) x . h.
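A quick check that the point-reduced form agrees with the original definition, on an arbitrary instantiation:

```haskell
f, f' :: ((b -> a) -> b -> a) -> (e -> b) -> (b -> a) -> e -> a
f  g h x y = g (g x) (h y)   -- original definition
f' g h x   = (g . g) x . h   -- last argument elided

main :: IO ()
main = print (f ($) length (+ 1) "abc", f' ($) length (+ 1) "abc")  -- (4,4)
```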
Recently I've finally started to feel like I understand catamorphisms. I wrote some about them in a recent answer, but briefly I would say a catamorphism for a type abstracts over the process of recursively traversing a value of that type, with the pattern matches on that type reified into one function for each constructor the type has. While I would welcome any corrections on this point or on the longer version in the answer of mine linked above, I think I have this more or less down and that is not the subject of this question, just some background.
Once I realized that the functions you pass to a catamorphism correspond exactly to the type's constructors, and the arguments of those functions likewise correspond to the types of those constructors' fields, it all suddenly feels quite mechanical and I don't see where there is any wiggle room for alternate implementations.
For example, I just made up this silly type, with no real concept of what its structure "means", and derived a catamorphism for it. I don't see any other way I could define a general-purpose fold over this type:
data X a b f = A Int b
             | B
             | C (f a) (X a b f)
             | D a

xCata :: (Int -> b -> r)
      -> r
      -> (f a -> r -> r)
      -> (a -> r)
      -> X a b f
      -> r
xCata a b c d v = case v of
  A i x -> a i x
  B     -> b
  C f x -> c f (xCata a b c d x)
  D x   -> d x
My question is, does every type have a unique catamorphism (up to argument reordering)? Or are there counterexamples: types for which no catamorphism can be defined, or types for which two distinct but equally reasonable catamorphisms exist? If there are no counterexamples (i.e., the catamorphism for a type is unique and trivially derivable), is it possible to get GHC to derive some sort of typeclass for me that does this drudgework automatically?
The catamorphism associated to a recursive type can be derived mechanically.
Suppose you have a recursively defined type, having multiple constructors, each one with its own arity. I'll borrow OP's example.
data X a b f = A Int b
             | B
             | C (f a) (X a b f)
             | D a
Then, we can rewrite the same type by forcing each arity to be one, uncurrying everything. Arity zero (B) becomes one if we add a unit type ().
data X a b f = A (Int, b)
             | B ()
             | C (f a, X a b f)
             | D a
Then, we can reduce the number of constructors to one, exploiting Either instead of multiple constructors. Below, we just write infix + instead of Either for brevity.
data X a b f = X ((Int, b) + () + (f a, X a b f) + a)
At the term level, we know we can rewrite any recursive definition
in the form x = f x where f w = ..., i.e. as an explicit fixed point equation x = f x. At the type level, we can use the same method
to refactor recursive types.
data X a b f = X (F a b f (X a b f)) -- fixed point equation
data F a b f w = F ((Int, b) + () + (f a, w) + a)
Now, we note that we can auto-derive a Functor instance.
deriving instance Functor (F a b f)
This is possible because in the original type each recursive reference only occurred in positive position. If this does not hold, making F a b f not a functor, then we can't have a catamorphism.
Finally, we can write the type of cata as follows:
cata :: (F a b f w -> w) -> X a b f -> w
Is this the OP's xCata type? It is. We only have to apply a few type isomorphisms. We use the following algebraic laws:
1) (a,b) -> c ~= a -> b -> c (currying)
2) (a+b) -> c ~= (a -> c, b -> c)
3) () -> c ~= c
By the way, it's easy to remember these isomorphisms if we write (a,b) as a product a*b, unit () as 1, and a -> b as a power b^a. Indeed, they become
c^(a*b) = (c^a)^b
c^(a+b) = c^a*c^b
c^1 = c
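Each of the three laws can be witnessed directly in Haskell; the fwd/bwd names below are ad hoc:

```haskell
-- 1) (a,b) -> c  ~=  a -> b -> c   (currying)
fwd1 :: ((a, b) -> c) -> a -> b -> c
fwd1 = curry
bwd1 :: (a -> b -> c) -> (a, b) -> c
bwd1 = uncurry

-- 2) (a+b) -> c  ~=  (a -> c, b -> c)
fwd2 :: (Either a b -> c) -> (a -> c, b -> c)
fwd2 f = (f . Left, f . Right)
bwd2 :: (a -> c, b -> c) -> Either a b -> c
bwd2 = uncurry either

-- 3) () -> c  ~=  c
fwd3 :: (() -> c) -> c
fwd3 f = f ()
bwd3 :: c -> () -> c
bwd3 = const

main :: IO ()
main = do
  print (fwd1 fst (1 :: Int) (2 :: Int))                          -- 1
  print (bwd2 (show, show) (Left (3 :: Int) :: Either Int Bool))  -- "3"
  print (fwd3 (bwd3 'x'))                                         -- 'x'
```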
Anyway, let's start by rewriting only the F a b f w -> w part:
F a b f w -> w
=~ (def F)
((Int, b) + () + (f a, w) + a) -> w
=~ (2)
((Int, b) -> w, () -> w, (f a, w) -> w, a -> w)
=~ (3)
((Int, b) -> w, w, (f a, w) -> w, a -> w)
=~ (1)
(Int -> b -> w, w, f a -> w -> w, a -> w)
Let's consider the full type now:
cata :: (F a b f w -> w) -> X a b f -> w
~= (above)
(Int -> b -> w, w, f a -> w -> w, a -> w) -> X a b f -> w
~= (1)
(Int -> b -> w)
-> w
-> (f a -> w -> w)
-> (a -> w)
-> X a b f
-> w
Which is indeed (renaming w=r) the wanted type
xCata :: (Int -> b -> r)
      -> r
      -> (f a -> r -> r)
      -> (a -> r)
      -> X a b f
      -> r
The "standard" implementation of cata is
cata g = g . fmap (cata g) . unwrap
  where unwrap (X y) = y
  -- note: the algebra g itself builds the result, so nothing
  -- is wrapped back into X
It takes some effort to understand due to its generality, but this is indeed the intended one.
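Putting the pieces together, here is a runnable sketch of the construction; it uses named constructors for the base functor instead of the Either encoding (same shape, easier to read), and the depth example is made up for illustration:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- the base functor: one constructor per case, recursion replaced by w
data F a b f w = FA Int b | FB | FC (f a) w | FD a
  deriving Functor

-- tie the knot: X is the fixed point of F
newtype X a b f = MkX (F a b f (X a b f))

cata :: (F a b f w -> w) -> X a b f -> w
cata g (MkX y) = g (fmap (cata g) y)

-- example: count the C-nodes along the spine
depth :: X a b f -> Int
depth = cata alg
  where
    alg (FC _ n) = 1 + n
    alg _        = 0

sample :: X Int () []
sample = MkX (FC [1, 2] (MkX (FC [3] (MkX FB))))

main :: IO ()
main = print (depth sample)  -- 2
```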
About automation: yes, this can be automated, at least in part.
There is the recursion-schemes package on Hackage which allows
one to write something like:
type X a b f = Fix (F a b f)
data F a b f w = ... -- you can use the actual constructors here
  deriving Functor
-- use cata here
Example:
{-# LANGUAGE DeriveFunctor, LambdaCase, PatternSynonyms, ScopedTypeVariables #-}
import Data.Functor.Foldable hiding (Nil, Cons)

data ListF a k = NilF | ConsF a k deriving Functor
type List a = Fix (ListF a)

-- helper patterns, so that we can avoid matching the Fix
-- newtype constructor explicitly
pattern Nil = Fix NilF
pattern Cons a as = Fix (ConsF a as)

-- normal recursion
sumList1 :: Num a => List a -> a
sumList1 Nil = 0
sumList1 (Cons a as) = a + sumList1 as

-- with cata
sumList2 :: forall a. Num a => List a -> a
sumList2 = cata h
  where
    h :: ListF a a -> a
    h NilF = 0
    h (ConsF a s) = a + s

-- with LambdaCase
sumList3 :: Num a => List a -> a
sumList3 = cata $ \case
  NilF -> 0
  ConsF a s -> a + s
A catamorphism (if it exists) is unique by definition. In category theory a catamorphism denotes the unique homomorphism from an initial algebra into some other algebra. To the best of my knowledge, in Haskell all catamorphisms exist because Haskell's types form a Cartesian closed category where terminal objects, all products, sums, and exponentials exist. See also Bartosz Milewski's blog post about F-algebras, which gives a good introduction to the topic.
In Haskell, it is easy to write functions that act on or return tuples of things, e.g. the prelude function splitAt:
splitAt :: Int -> [a] -> ([a], [a])
but is there an easy, convenient way of writing functions that act on or result in cotuples of things? E.g. a function that returns an Int or a Double. As a concrete example, let's say I want to write a function
MyDivision :: Int -> Int -> (Int + Double)
where + is my symbol for cotupling, so MyDivision x y returns x/y as an Int if the division results in an integer but as a Double if the division does not result in an integer.
So far, it seems that I have two choices, either declare a new datatype
data IntOrDouble = AnInt Int | ADouble Double
or use
Either Int Double
where the first alternative requires a lot of typing and thinking of names and the second alternative quickly gets messy when you have larger cotuples and get types looking like
Either (Either a (Either b c)) (Either (Either d f) g)
Now, if I had a cotuple type, say
a + b + c + d
I would like to be able to form functions
f :: (a + b + c + d) -> e
g :: (a + b + c + d) -> (e + f + g + h)
by just supplying functions
f1 :: a -> e, f2 :: b -> e, f3 :: c -> e, f4 :: d -> e
g1 :: a -> e, g2 :: b -> f, g3 :: c -> g, g4 :: d -> h
and setting
f = f1 + f2 + f3 + f4
g = g1 <+> g2 <+> g3 <+> g4
or something of the like.
Is this possible?
Well, co-tuples are properly called coproducts, which in Haskell is just Either.
So, let's go ahead and do something like
{-# LANGUAGE TypeOperators #-}
type (+) = Either
This is left associative by the way. Now we have pretty syntax like
foo :: Int + Bool + Char
foo = Right 'c'
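As a quick aside, the MyDivision example from the question can already be written against this alias (renamed to myDivision, since Haskell function names must start with a lowercase letter; the exactness test via mod is one possible reading):

```haskell
{-# LANGUAGE TypeOperators #-}

type (+) = Either

-- Left for exact division, Right for a fractional result
myDivision :: Int -> Int -> Int + Double
myDivision x y
  | x `mod` y == 0 = Left (x `div` y)
  | otherwise      = Right (fromIntegral x / fromIntegral y)

main :: IO ()
main = do
  print (myDivision 6 3)  -- Left 2
  print (myDivision 7 2)  -- Right 3.5
```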
Now, what you seem to want is in fact very similar to a Church representation of Either, flattened out. We can build this up with the either combinator:
-- note: this value-level (+) clashes with Prelude's numeric (+);
-- in a real module you would write: import Prelude hiding ((+))
(+) :: (a -> c) -> (b -> c) -> (a + b) -> c
l + r = either l r

(<+>) :: (a -> c) -> (b -> d) -> (a + b) -> (c + d)
l <+> r = either (Left . l) (Right . r)

infixl 4 <+>, +
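A small usage demo of these combinators in a self-contained module; describe and step are made-up examples:

```haskell
{-# LANGUAGE TypeOperators #-}
import Prelude hiding ((+))  -- make room for the value-level (+)

type (+) = Either

(+) :: (a -> c) -> (b -> c) -> (a + b) -> c
l + r = either l r

(<+>) :: (a -> c) -> (b -> d) -> (a + b) -> (c + d)
l <+> r = either (Left . l) (Right . r)

infixl 4 <+>, +

-- collapse a coproduct to a common result type
describe :: Int + Bool -> String
describe = show + show

-- map over both sides, keeping the shape
step :: Int + Bool -> Int + Bool
step = succ <+> not

main :: IO ()
main = do
  putStrLn (describe (Left 3))  -- 3
  print (step (Right False))    -- Right True
```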
A fun challenge would be to create a generic inject function which takes something like Proxy k, where k is some type-level representation of natural numbers, and returns a great nested mess of Eithers for you.
Update:
I got bored, here's the code for generic inj
data Nat = S Nat | Z

type NatRep (n :: Nat) = Proxy n

type family Tuplish (l :: Nat) (r :: Nat) t
type instance Tuplish Z Z t = t
type instance Tuplish (S n) Z t = (Tuplish n Z ()) + t
type instance Tuplish l (S n) t = (Tuplish l n t) + ()

predP :: Proxy (S n) -> Proxy n
predP = reproxy

class Inject (l :: Nat) (r :: Nat) v where
  inj :: NatRep l -> NatRep r -> v -> Tuplish l r v

instance Inject Z Z v where
  inj _ _ = id

instance Inject (S n) Z v where
  inj _ _ v = Right v

instance Inject n m v => Inject n (S m) v where
  inj l r v = Left (inj l (predP r) v)
I renamed your + to >+< and your <+> to >*<, but you could do something like this:
type a + b = Either a b
(>+<) :: (a -> c) -> (b -> c) -> a + b -> c
(>+<) = either
(>*<) :: (a -> e) -> (b -> f) -> a + b -> e + f
(f >*< _) (Left a) = Left (f a)
(_ >*< g) (Right b) = Right (g b)
I tried to name the operators to be more suggestive of their operation.
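A quick usage sketch of these renamed combinators, in a self-contained module:

```haskell
{-# LANGUAGE TypeOperators #-}

type a + b = Either a b

(>+<) :: (a -> c) -> (b -> c) -> a + b -> c
(>+<) = either

(>*<) :: (a -> e) -> (b -> f) -> a + b -> e + f
(f >*< _) (Left a)  = Left (f a)
(_ >*< g) (Right b) = Right (g b)

main :: IO ()
main = do
  -- collapse both sides to a String
  print ((show >+< id) (Left 42 :: Int + String))           -- "42"
  -- map each side independently
  print ((negate >*< length) (Right "abc" :: Int + String)) -- Right 3
```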
Here's another way to implement >*<:
import Control.Arrow ((+++))
(>*<) :: (a -> e) -> (b -> f) -> a + b -> e + f
(>*<) = (+++)
As a side note: "tuples" are often called product types, and this is what's called a coproduct type (or sum type). The most basic coproduct type is Either, and all other coproduct types are isomorphic to Either A B for some types A and B.