Which Haskell Functors are equivalent to the Reader functor

Some Haskell functors F a are obviously isomorphic to T -> a for some type T, e.g.
data Pair a = Pair a a -- isomorphic to Bool -> a
data Reader r a = Reader (r -> a) -- isomorphic to r -> a (duh!)
data Identity a = Identity a -- isomorphic to () -> a
data Phantom a = Phantom -- isomorphic to Void -> a
(These isomorphisms hold only up to strictness, and considering only finite data structures.)
So in general, how can we characterize functors where this is possible?
And is the question “Which Haskell Functors are representable?” the same question?

And Noah said unto the animals "Go forth and multiply!", but the snakes said
"We cannot multiply, for we are adders.", so Noah took wood from the Ark and,
shaping it, said "I am building you a table of logs.".
Representable functors are sometimes also called "Naperian" functors (it's Peter Hancock's term: Hank's a denizen of the same part of Edinburgh as John Napier, of logarithmic fame) because when F x ~= T -> x, and remembering that, combinatorially, T -> x is "x to the power T", we see that T is in some sense Log F.
The first thing to note is that F () ~= T -> () ~= (). That tells us there is only one shape. Functors which offer us a choice of shape cannot be Naperian, because they don't give a uniform presentation of the positions for data. That means [] is not Naperian, because different-length lists have positions represented by different types. However, an infinite Stream has positions given by the natural numbers.
Correspondingly, given any two F structures, their shapes are bound to match, so they have a sensible zip, giving us the basis for an Applicative F instance.
Indeed, we have
a -> p x
=====================
(Log p, a) -> x
making p a right adjoint, so p preserves all limits, hence unit and products in particular, making it a monoidal functor, not just a lax monoidal functor. That is, the alternative presentation of Applicative has operations which are isomorphisms.
unit :: () ~= p ()
mult :: (p x, p y) ~= p (x, y)
Let's have a type class for the things. I cook it a bit differently from the Representable class.
class Applicative p => Naperian p where
  type Log p
  logTable :: p (Log p)
  project  :: p x -> Log p -> x
  tabulate :: (Log p -> x) -> p x
  tabulate f = fmap f logTable

-- LAW1: project logTable = id
-- LAW2: project px <$> logTable = px
We have a type Log f, representing at least some of the positions inside an f; we have a logTable, storing in each position the representative of that position, acting like a 'map of an f' with placenames in each place; we have a project function extracting the data stored at a given position.
The first law tells us that the logTable is accurate for all the positions which are represented. The second law tells us that we have represented all the positions. We may deduce that
tabulate (project px)
= {definition}
fmap (project px) logTable
= {LAW2}
px
and that
project (tabulate f)
= {definition}
project (fmap f logTable)
= {free theorem for project}
f . project logTable
= {LAW1}
f . id
= {composition absorbs identity}
f
We could imagine a generic instance for Applicative
instance Naperian p => Applicative p where
  pure x    = fmap (pure x) logTable
  pf <*> ps = fmap (project pf <*> project ps) logTable
which is as much as to say that p inherits its own K and S combinators from the usual K and S for functions.
Of course, we have
instance Naperian ((->) r) where
  type Log ((->) r) = r  -- log_x (x^r) = r
  logTable = id
  project = ($)
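As a concrete finite example (a sketch, not from the original answer), the Pair functor from the question fits the class with Bool naming its two positions:

data Pair a = Pair a a

instance Functor Pair where
  fmap f (Pair x y) = Pair (f x) (f y)

instance Applicative Pair where  -- the zippy instance
  pure x = Pair x x
  Pair f g <*> Pair x y = Pair (f x) (g y)

instance Naperian Pair where
  type Log Pair = Bool        -- log_x (x * x) = 2
  logTable = Pair False True  -- each place names itself
  project (Pair x _) False = x
  project (Pair _ y) True = y

Both laws are easy to check: projecting from logTable is the identity on Bool, and rebuilding a Pair from its projections gives back the same Pair.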
Now, all the limit-like constructions preserve Naperianity. Log maps limity things to colimity things: it calculates left adjoints.
We have the terminal object and products.
data K1 x = K1

instance Applicative K1 where
  pure x = K1
  K1 <*> K1 = K1

instance Functor K1 where fmap = (<*>) . pure

instance Naperian K1 where
  type Log K1 = Void  -- "log of 1 is 0"; Void and absurd come from Data.Void
  logTable = K1
  project K1 nonsense = absurd nonsense
data (p * q) x = p x :*: q x

instance (Applicative p, Applicative q) => Applicative (p * q) where
  pure x = pure x :*: pure x
  (pf :*: qf) <*> (ps :*: qs) = (pf <*> ps) :*: (qf <*> qs)

instance (Functor p, Functor q) => Functor (p * q) where
  fmap f (px :*: qx) = fmap f px :*: fmap f qx

instance (Naperian p, Naperian q) => Naperian (p * q) where
  type Log (p * q) = Either (Log p) (Log q)  -- log (p * q) = log p + log q
  logTable = fmap Left logTable :*: fmap Right logTable
  project (px :*: qx) (Left i)  = project px i
  project (px :*: qx) (Right i) = project qx i
We have identity and composition.
data I x = I x

instance Applicative I where
  pure x = I x
  I f <*> I s = I (f s)

instance Functor I where fmap = (<*>) . pure

instance Naperian I where
  type Log I = ()  -- log_x x = 1
  logTable = I ()
  project (I x) () = x
data (p << q) x = C (p (q x))

instance (Applicative p, Applicative q) => Applicative (p << q) where
  pure x = C (pure (pure x))
  C pqf <*> C pqs = C (pure (<*>) <*> pqf <*> pqs)

instance (Functor p, Functor q) => Functor (p << q) where
  fmap f (C pqx) = C (fmap (fmap f) pqx)

instance (Naperian p, Naperian q) => Naperian (p << q) where
  type Log (p << q) = (Log p, Log q)  -- log (q ^ log p) = log p * log q
  logTable = C (fmap (\ i -> fmap (i ,) logTable) logTable)
  project (C pqx) (i, j) = project (project pqx i) j
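One classic payoff of this uniformity of shape (a sketch, not from the original answer; compare Gibbons' "APLicative Programming with Naperian Functors") is zippy transposition: for each q-position, read off the p-structure of the entries at that position.

transpose :: (Naperian p, Naperian q) => p (q x) -> q (p x)
transpose pqx = fmap (\ j -> fmap (`project` j) pqx) logTable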
Naperian functors are closed under greatest fixpoints, with their logarithms being the corresponding least fixpoints. E.g., for streams, we have
log_x (Stream x)
=
log_x (nu y. x * y)
=
mu log_xy. log_x (x * y)
=
mu log_xy. log_x x + log_x y
=
mu log_xy. 1 + log_xy
=
Nat
It's a bit fiddly to render that in Haskell without introducing Naperian bifunctors (which have two sets of positions for two sorts of things), or (better) Naperian functors on indexed types (which have indexed positions for indexed things). What's easy, though, and hopefully gives the idea, is the cofree comonad.
data {-codata-} CoFree p x = x :- p (CoFree p x)
-- i.e., (I * (p << CoFree p)) x

instance Applicative p => Applicative (CoFree p) where
  pure x = x :- pure (pure x)
  (f :- pcf) <*> (s :- pcs) = f s :- (pure (<*>) <*> pcf <*> pcs)

instance Functor p => Functor (CoFree p) where
  fmap f (x :- pcx) = f x :- fmap (fmap f) pcx

instance Naperian p => Naperian (CoFree p) where
  type Log (CoFree p) = [Log p]  -- meaning finite lists only
  logTable = [] :- fmap (\ i -> fmap (i :) logTable) logTable
  project (x :- pcx) []       = x
  project (x :- pcx) (i : is) = project (project pcx i) is
We may take Stream = CoFree I, giving
Log Stream = [Log I] = [()] ~= Nat
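For instance (a sketch to see the table of logs at work), tabulate for streams turns a function from positions into the stream of its values, and project is stream indexing:

type Stream = CoFree I

nats :: Stream Int       -- 0 :- I (1 :- I (2 :- ...))
nats = tabulate length   -- the position [(),(),()] holds 3

-- project nats (replicate 5 ()) == 5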
Now, the derivative D p of a functor gives its type of one-hole context, telling us i) the shape of a p, ii) the position of the hole, iii) the data that are not in the hole. If p is Naperian, there is no choice of shape, so putting trivial data in the non-hole positions, we find that we just get the position of the hole.
D p () ~= Log p
More on that connection can be found in this answer of mine about tries.
Anyhow, Naperian is indeed a funny local Scottish name for Representable: the Representable functors are exactly the things for which you can build a table of logs, the constructions characterized entirely by projection, offering no choice of 'shape'.

Related

Is the only `FlippyFloppyMorphism` `const mempty`?

In Haskell, we have the interesting fact that any type constructor f :: * -> * which is simultaneously a Functor and a Contravariant is phantom in its type parameter:
phantom :: (Functor f, Contravariant f) => f x -> f y
Another way to put this is that every type constructor that is simultaneously a Functor and a Contravariant is naturally isomorphic to Const x, for some x.
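(For reference, one standard way to realize phantom, sketched here: squash the parameter covariantly to (), then reintroduce an arbitrary parameter contravariantly.)

import Data.Functor.Contravariant (Contravariant (..))

phantom' :: (Functor f, Contravariant f) => f x -> f y
phantom' = contramap (const ()) . fmap (const ())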
This implies that the "only" way (up to isomorphism) to instantiate the class:
class FlippyFloppyFunctor f where
  ffmap :: Either (y -> x) (x -> y) -> f x -> f y
so that it obeys the functor laws:
ffmap (Left id) = id
ffmap (Right id) = id
ffmap (Left (g . f)) = ffmap (Left f) . ffmap (Left g)
ffmap (Right (f . g)) = ffmap (Right f) . ffmap (Right g)
is:
weirdmap :: Either (y -> x) (x -> y) -> Const r x -> Const r y
weirdmap = const $ \(Const x) -> Const x
i.e. modulo newtypes, const id.
I find it difficult to understand why this is the only function of its type that satisfies the constraints, although I can sort of understand various informal arguments involving absurd :: Void -> a/discard :: a -> () as to why the existence of such a map implies the functor "is phantom" in its type parameter.
To understand it better, I tried to simplify the problem. Instead of thinking about FlippyFloppyFunctor, let's think about:
class (Monoid a, Monoid b) => FlippyFloppyMorphism a b where
  ffmorph :: Either a a -> b
with analogous laws:
ffmorph (Left mempty) = mempty
ffmorph (Right mempty) = mempty
ffmorph (Left (y <> x)) = ffmorph (Left x) <> ffmorph (Left y)
ffmorph (Right (x <> y)) = ffmorph (Right x) <> ffmorph (Right y)
Assuming that a and b are non-commutative monoids, is it still true that the only lawful implementation of FlippyFloppyMorphism is const mempty? Is it still possible to explain why the morphism must be "phantom" in the input monoids, without having a Void or a () to refer to?
It seems to me that the answer in the general case is "no", because monoids can be commutative.
If the monoid is commutative, then Dual a is the same monoid as a, and Either a a is then no more informative than a, so we degenerate to asking whether const mempty is the only monoid homomorphism a -> b. The answer is "no".
For example, for the commutative monoid of addition, we have replicateA :: Either (Sum Int) (Sum Int) -> String, which replicates the character 'a' and satisfies:
replicateA (Left 0) = ""
replicateA (Right 0) = ""
replicateA (Left (y + x)) = replicateA (Left x) ++ replicateA (Left y)
replicateA (Right (x + y)) = replicateA (Right x) ++ replicateA (Right y)
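One concrete definition satisfying these equations (a sketch: it ignores which side of the Either it is given, and is a homomorphism for non-negative counts, since replicate truncates negative ones to the empty list):

import Data.Monoid (Sum (..))

replicateA :: Either (Sum Int) (Sum Int) -> String
replicateA = flip replicate 'a' . getSum . either id id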
However, I think it might be the case that for non-commutative monoids, the only possible implementation is const mempty (which I still don't have a proof for).

Is it possible to write join down for Arrows, not ArrowApply?

I tried writing down joinArr :: ??? a => a r (a r b) -> a r b.
I came up with a solution which uses app, therefore narrowing the a down to ArrowApply's:
joinArr :: ArrowApply a => a r (a r b) -> a r b
joinArr g = g &&& Control.Category.id >>> app
Is it possible to have this function written down for arrows?
My guess is no.
Control.Monad.join could have been a good stand-in for >>= in the definition of the Monad type class: m >>= k = join $ k <$> m.
Having joinArr :: Arrow a => a r (a r b) -> a r b on our hands, it would be possible to write down instance Arrow a => Monad (ArrowMonad a):
m >>= k = joinArr (k <$> m)
Please note that joinArr should be slightly tweaked to be able to deal with the wrapper. If we speak of ArrowApply:
joinArr :: ArrowApply a => ArrowMonad a (ArrowMonad a b) -> ArrowMonad a b
joinArr (ArrowMonad m) = ArrowMonad $
  m &&& Control.Category.id >>>
  first (arr (\x -> let ArrowMonad h = x in h)) >>>
  app
instance ArrowApply a => Monad (ArrowMonad a) is already implemented in the source file.
I reckon this argument is not the best one (if it is right at all).
Am I right? What is the more formal way to back this up (or disprove it)?
I think the formal reason that you can’t implement a x (a x y) -> a x y using only Arrow is that this requires a notion of either application (as you tried) or currying, or rather uncurrying in this case:
uncurry :: a x (a y z) -> a (x, y) z
With that, joinArr is simply:
joinArr :: a x (a x y) -> a x y
joinArr f = dup >>> uncurry f
where dup = id &&& id
But if we can’t implement this without apply, curry, or uncurry, that means that a must be a Cartesian closed category (CCC) because we need some notion of “exponential” or higher-order arrow, which ArrowApply gives us, but Arrow only gives us a Cartesian category. (And I believe ArrowApply is equivalent to Monad because Monad is a strong monad in a CCC.)
The closest you can get with only Arrow is an Applicative, as you saw in the definition of instance (Arrow a) => Applicative (ArrowMonad a). That is equivalent in power to join in the Reader monad (since there join = (<*> id)), but not to the stronger monadic join:
joinArr' :: a x (x -> y) -> a x y
joinArr' f = (f &&& id) >>> arr (uncurry ($))
Note that instead of a higher-order arrow here, a x (a x y), we just reuse the (->) type.
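As a quick sanity check (my own specialization, not from the answer): at a = (->), where arr = id, joinArr' computes exactly join for the reader monad.

import Control.Arrow ((&&&), (>>>), arr)

joinReader :: (x -> (x -> y)) -> x -> y
joinReader f = (f &&& id) >>> arr (uncurry ($))
-- joinReader f x == f x x, i.e. Control.Monad.join for ((->) x)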

Derive positional Show

Notice how T 5 shows in
> newtype T = T { getT :: Int } deriving Show
> T 5
T {getT = 5}
Is there some way to derive the positional, non-record-syntax variant of Show for a type that was declared with record syntax?
(btw T is only a simple example to explain the question, I'm looking for a general answer for any type defined with record syntax)
Some options I would be satisfied with:
TH generation for it provided by a library
A Generic based derivation (where the manual instance refers to an existing function)
An easy way / guide to manually implement Show instances
Any other idea I didn't think about
For a more complicated example I have this hand-written instance:
instance ... where
  showsPrec p (FuncType i o) =
    showParen (p > 0)
      (("FuncType " <>) . showsPrec 1 i . (" " <>) . showsPrec 1 o)
I would like the answer to be able to avoid this boilerplate.
Implementing Show by hand
The default way of implementing Show requires a fair amount of boilerplate. That is taken care of by show-combinators, reducing the code needed to the bare essentials:
instance Show ... where
  showsPrec = flip (\(FuncType i o) -> showCon "FuncType" #| i #| o)
I think this solution is the simplest possible. No extensions, no typeclass magic under the hood. Just plain functional programming.
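To see what such combinators amount to, here is a dependency-free sketch of the same idea (hypothetical names, not the library's API; prefix application binds at precedence 10 and associates to the left):

type PrecShowS' = Int -> ShowS

showCon' :: String -> PrecShowS'
showCon' con _ = showString con

showApp' :: Show a => PrecShowS' -> a -> PrecShowS'
showApp' f x d = showParen (d > 10) (f 10 . showChar ' ' . showsPrec 11 x)

-- showsPrec p (FuncType i o) = showApp' (showApp' (showCon' "FuncType") i) o p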
(Disclaimer: I wrote the two libraries mentioned in this post.)
With GHC Generics
There is a generic implementation of Show in generic-data: gshowsPrec (link to source). But it shows types declared with record syntax as records.
Redoing the implementation
One way of course is to copy the implementation and remove the special handling of records.
{- 1. The usual boilerplate -}
class GShow p f where
  gPrecShows :: p (ShowsPrec a) -> f a -> PrecShowS

instance GShow p f => GShow p (M1 D d f) where
  gPrecShows p (M1 x) = gPrecShows p x

instance (GShow p f, GShow p g) => GShow p (f :+: g) where
  gPrecShows p (L1 x) = gPrecShows p x
  gPrecShows p (R1 y) = gPrecShows p y
{- 2. A simplified instance for (M1 C), that shows all constructors
using positional syntax. The body mostly comes from the instance
(GShowC p ('MetaCons s y 'False) f). -}
instance (Constructor c, GShowFields p f) => GShow p (M1 C c f) where
  gPrecShows p x = gPrecShowsC p (conName x) (conFixity x) x
    where
      gPrecShowsC p name fixity (M1 x)
        | Infix _ fy <- fixity, k1 : k2 : ks <- fields =
            foldl' showApp (showInfix name fy k1 k2) ks
        | otherwise = foldl' showApp (showCon cname) fields
        where
          cname = case fixity of
            Prefix -> name
            Infix _ _ -> "(" ++ name ++ ")"
          fields = gPrecShowsFields p x
Type surgery
(Section named after my blogpost but this thread is a much simpler situation.)
Another way is to transform the generic representation of our type to pretend that it's not declared using record syntax. Fortunately, the only difference is in a phantom type parameter, so that transformation can be as simple as coerce at run time.
unsetIsRecord ::
  Coercible (f p) (UnsetIsRecord f p) => Data f p -> Data (UnsetIsRecord f) p
unsetIsRecord = coerce
-- UnsetIsRecord defined at the end
The Data newtype basically creates a data type out of a generic representation (which is the inverse of what Generic does, in some sense). We can map a normally declared type to a Data type using toData :: a -> Data (Rep a) p.
Finally, we can directly apply the gshowsPrec function from the generic-data library to the output of unsetIsRecord.
instance Show T where
  showsPrec n = gshowsPrec n . unsetIsRecord . toData
UnsetIsRecord should ideally be in generic-data, but since it's not yet there, here is a possible implementation:
type family UnsetIsRecord (f :: * -> *) :: * -> *
type instance UnsetIsRecord (M1 D m f) = M1 D m (UnsetIsRecord f)
type instance UnsetIsRecord (f :+: g) = UnsetIsRecord f :+: UnsetIsRecord g
type instance UnsetIsRecord (M1 C ('MetaCons s y _isRecord) f) = M1 C ('MetaCons s y 'False) f

Computing a term of a list depending on all previous terms

I have the following identity, which implicitly defines P(n), the number of partitions of the positive integer n (that is, the number of ways you can write n as a sum of positive integers, disregarding order):

n * P(n) = sum_{k = 1 .. n} sigma(k) * P(n - k)
Some notes:
This is studied in the book Analytic Combinatorics by Flajolet and Sedgewick; the formula is taken from there (rendered as plain text above, since Stack Overflow doesn't support LaTeX).
sigma is the sum of the divisors of a number.
I want to write a Haskell program that computes the list of P coefficients.
The i-th term depends on all previous terms (it is the sum of the list obtained by zipping the sigmas with the previous Ps).
This problem is a good example of how you can "calculate" a program from its specification, as Gibbons does in his paper.
The question is: is there a known recursion scheme that captures this kind of computation, where every term in the list depends on a computation over all the previous terms? (Note that each result bears no simple relation to the previous one; you have to do a new traversal for every term.)
Statutory Calculus Warning. The basic answer to this question involves specialising a standard recursion scheme. But I got a bit carried away pulling at the thread of it. Things take a more abstract turn as I seek to apply the same method to structures other than lists. I end up reaching for Isaac Newton and Ralph Fox, and in the process devise the alopegmorphism, which may be something new.
But anyway, something of the sort ought to exist. It looks like a special case of the anamorphism or "unfold". Let's start with what's called unfoldr in the library.
unfoldr :: (seed -> Maybe (value, seed)) -> seed -> [value]
It shows how to grow a list of values from a seed, repeatedly using a function called a coalgebra. At each step, the coalgebra says whether to stop with [] or to carry on by consing a value onto a list grown from a new seed.
unfoldr coalg s = case coalg s of
  Nothing -> []
  Just (v, s') -> v : unfoldr coalg s'
Here, the seed type can be whatever you like — whatever local state is appropriate to the unfolding process. One entirely sensible notion of seed is simply "the list so far", perhaps in reverse order, so that the most recently added elements are nearest.
growList :: ([value] -> Maybe value) -> [value]
growList g = unfoldr coalg [] where
  coalg vz = case g vz of  -- I say "vz", not "vs", to remember it's reversed
    Nothing -> Nothing
    Just v -> Just (v, v : vz)
At each step, our g operation looks at the context of values we already have and decides whether to add another: if so, the new value becomes both the head of the list and the most recent value in the new context.
So, this growList hands you at each step your list of previous results, ready for zipWith (*). The reversal is rather handy for the convolution, so perhaps we're looking at something like
ps = growList $ \ pz -> Just (sum (zipWith (*) sigmas pz) `div` (length pz + 1))
sigmas = [sigma j | j <- [1..]]
perhaps?
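To make that runnable, a naive divisor sum will do for sigma (a sketch; a serious implementation would factorize):

sigma :: Int -> Int
sigma n = sum [ d | d <- [1 .. n], n `mod` d == 0 ]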
A recursion scheme? For lists, we have a special case of the anamorphism, where the seed is the context of what we've built so far, and once we've said how to build a bit more, we know how to grow the context by the same token. It's not hard to see how that works for lists. But how does it work for anamorphisms in general? Here's where things get hairy.
We build up possibly infinite values whose node shape is given by some functor f (whose parameter turns out to be "substructures" when we "tie the knot").
newtype Nu f = In (f (Nu f))
In an anamorphism, the coalgebra uses the seed to choose a shape for the outermost node, populated with seeds for the substructures. (Co)recursively, we map the anamorphism across, growing those seeds into substructures.
ana :: Functor f => (seed -> f seed) -> seed -> Nu f
ana coalg s = In (fmap (ana coalg) (coalg s))
Let's reconstruct unfoldr from ana. We can build lots of ordinary recursive structures from Nu and a few simple parts: the polynomial Functor kit.
newtype K1 a x = K1 a -- constants (labels)
newtype I x = I x -- substructure places
data (f :+: g) x = L1 (f x) | R1 (g x) -- choice (like Either)
data (f :*: g) x = f x :*: g x -- pairing (like (,))
with Functor instances
instance Functor (K1 a) where fmap f (K1 a) = K1 a

instance Functor I where fmap f (I s) = I (f s)

instance (Functor f, Functor g) => Functor (f :+: g) where
  fmap h (L1 fs) = L1 (fmap h fs)
  fmap h (R1 gs) = R1 (fmap h gs)

instance (Functor f, Functor g) => Functor (f :*: g) where
  fmap h (fx :*: gx) = fmap h fx :*: fmap h gx
For lists of value, the node shape functor is
type ListF value = K1 () :+: (K1 value :*: I)
meaning "either a boring label (for nil) or the (cons) pair of a value label and a sublist". The type of a ListF value coalgebra becomes
seed -> (K1 () :+: (K1 value :*: I)) seed
which is isomorphic (by "evaluating" the polynomial ListF value at seed) to
seed -> Either () (value, seed)
which is but a hair's breadth from the
seed -> Maybe (value, seed)
that unfoldr expects. You can recover an ordinary list like so
list :: Nu (ListF a) -> [a]
list (In (L1 _)) = []
list (In (R1 (K1 a :*: I as))) = a : list as
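Putting those pieces together (a sketch), unfoldr itself falls out of ana, by translating the Maybe-coalgebra into a ListF-coalgebra:

toListF :: Maybe (a, seed) -> ListF a seed
toListF Nothing = L1 (K1 ())
toListF (Just (a, s)) = R1 (K1 a :*: I s)

unfoldr' :: (seed -> Maybe (a, seed)) -> seed -> [a]
unfoldr' coalg = list . ana (toListF . coalg)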
Now, how do we grow some general Nu f? A good start is to choose the shape for the outermost node. A value of type f () gives just the shape of a node, with trivial stubs in the substructure positions. Indeed, to grow our trees, we basically need some way to choose the "next" node shape given some idea where we've got to and what we've done so far. We should expect
grow :: (..where I am in a Nu f under construction.. -> f ()) -> Nu f
Note that for growing lists, our step function returns a ListF value (), which is isomorphic to Maybe value.
But how do we express where we are in a Nu f so far? We're going to be so-many-nodes-in from the root of the structure, so we should expect a stack of layers. Each layer should tell us (1) its shape, (2) which position we're currently at, and (3) the structures already built to the left of that position, but we expect still to have stubs in the positions at which we have not yet arrived. In other words, it's an example of the dissection structure from my 2008 POPL paper about Clowns and Jokers.
The dissection operator turns a functor f (seen as a container of elements) into a bifunctor Diss f with two different sorts of elements, those on the left (clowns) and those on the right (jokers) of a "cursor position" within an f structure. First, let's have the Bifunctor class and some instances.
class Bifunctor b where
  bimap :: (c -> c') -> (j -> j') -> b c j -> b c' j'

newtype K2 a c j = K2 a
data (f :++: g) c j = L2 (f c j) | R2 (g c j)
data (f :**: g) c j = f c j :**: g c j
newtype Clowns f c j = Clowns (f c)
newtype Jokers f c j = Jokers (f j)

instance Bifunctor (K2 a) where
  bimap h k (K2 a) = K2 a

instance (Bifunctor f, Bifunctor g) => Bifunctor (f :++: g) where
  bimap h k (L2 fcj) = L2 (bimap h k fcj)
  bimap h k (R2 gcj) = R2 (bimap h k gcj)

instance (Bifunctor f, Bifunctor g) => Bifunctor (f :**: g) where
  bimap h k (fcj :**: gcj) = bimap h k fcj :**: bimap h k gcj

instance Functor f => Bifunctor (Clowns f) where
  bimap h k (Clowns fc) = Clowns (fmap h fc)

instance Functor f => Bifunctor (Jokers f) where
  bimap h k (Jokers fj) = Jokers (fmap k fj)
Note that Clowns f is the bifunctor which amounts to an f structure containing only clowns, whilst Jokers f has only jokers. If you feel bothered by the repetition of all the Functor paraphernalia to get the Bifunctor paraphernalia, you're right to be bothered: it gets less laborious if we abstract away the arity and work with functors between indexed sets, but that's a whole other story.
Let's define dissection as a class which associates a bifunctor with a functor.
class (Functor f, Bifunctor (Diss f)) => Dissectable f where
  type Diss f :: * -> * -> *
  rightward :: Either (f j) (Diss f c j, c) ->
               Either (j, Diss f c j) (f c)
The type Diss f c j represents an f-structure with a "hole" or "cursor position" at one element position; in the positions to the left of the hole we have "clowns" in c, and to the right we have "jokers" in j. (The terminology is lifted from the Stealers Wheel song "Stuck in the Middle with You".)
The key operation in the class is the isomorphism rightward which tells us how to move one place to the right, starting from either
left of a whole structure full of jokers, or
a hole in the structure, together with a clown to put in the hole
and arriving at either
a hole in the structure, together with the joker which came out of it, or
right of a whole structure full of clowns.
Isaac Newton was fond of dissections, but he called them divided differences and defined them on real-valued functions to get the slope between two points on a curve, thus
divDiff f c j = (f c - f j) / (c - j)
and he used them to make best polynomial approximations to any old functions, and the like. Multiply up and multiply out
divDiff f c j * c - j * divDiff f c j = f c - f j
then get rid of the subtraction by adding to both sides
f j + divDiff f c j * c = f c + j * divDiff f c j
and you've got the rightward isomorphism.
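As a sanity check (my illustration, not from the answer): for the "pair" functor f x = x * x, the divided difference is c + j, which is exactly Diss (I :*: I) c j, the hole sitting in one of the two positions with a clown or a joker left in the other.

divDiff :: (Double -> Double) -> Double -> Double -> Double
divDiff f c j = (f c - f j) / (c - j)
-- divDiff (^ 2) c j == (c*c - j*j) / (c - j) == c + j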
We might build a bit more intuition for these things if we look at the instances, and then we can get back to our original problem.
A boring old constant has zero as its divided difference.
instance Dissectable (K1 a) where
  type Diss (K1 a) = K2 Void
  rightward (Left (K1 a)) = Right (K1 a)
  rightward (Right (K2 v, _)) = absurd v
If we start to the left and go to the right, we jump over the whole structure, because there are no element positions. If we start in an element position, someone is lying!
The identity functor has just one position.
instance Dissectable I where
  type Diss I = K2 ()
  rightward (Left (I j)) = Left (j, K2 ())
  rightward (Right (K2 (), c)) = Right (I c)
If we start to the left, we arrive in the position and out pops the joker; push in a clown and we finish on the right.
For sums, the structure is inherited: we just have to get the detagging and retagging correct.
instance (Dissectable f, Dissectable g) => Dissectable (f :+: g) where
  type Diss (f :+: g) = Diss f :++: Diss g
  rightward x = case x of
      Left (L1 fj) -> ll (rightward (Left fj))
      Right (L2 df, c) -> ll (rightward (Right (df, c)))
      Left (R1 gj) -> rr (rightward (Left gj))
      Right (R2 dg, c) -> rr (rightward (Right (dg, c)))
    where
      ll (Left (j, df)) = Left (j, L2 df)
      ll (Right fc) = Right (L1 fc)
      rr (Left (j, dg)) = Left (j, R2 dg)
      rr (Right gc) = Right (R1 gc)
For products, we must be somewhere in a pair of structures: either we're on the left between clowns and jokers with the right structure all jokers, or the left structure is all clowns and we're on the right between clowns and jokers.
instance (Dissectable f, Dissectable g) => Dissectable (f :*: g) where
  type Diss (f :*: g) = (Diss f :**: Jokers g) :++: (Clowns f :**: Diss g)
  rightward x = case x of
      Left (fj :*: gj) -> ll (rightward (Left fj)) gj
      Right (L2 (df :**: Jokers gj), c) -> ll (rightward (Right (df, c))) gj
      Right (R2 (Clowns fc :**: dg), c) -> rr fc (rightward (Right (dg, c)))
    where
      ll (Left (j, df)) gj = Left (j, L2 (df :**: Jokers gj))
      ll (Right fc) gj = rr fc (rightward (Left gj))  -- (!)
      rr fc (Left (j, dg)) = Left (j, R2 (Clowns fc :**: dg))
      rr fc (Right gc) = Right (fc :*: gc)
The rightward logic ensures that we work our way through the left structure, then once we're done with it, we start work on the right. The line marked (!) is the key moment in the middle, where we emerge from the right of the left structure and then enter the left of the right structure.
Huet's notion of "left" and "right" cursor movements in data structures arises from dissectability (if you complete the rightward isomorphism with its leftward counterpart). The derivative of f is just the limit as the difference between clowns and jokers tends to zero, or for us, what you get when you have the same sort of stuff on either side of the cursor.
Moreover, if you take clowns to be zero, you get
rightward :: Either (f x) (Diss f Void x, Void) -> Either (x, Diss f Void x) (f Void)
but we can remove the impossible input case to get
type Quotient f x = Diss f Void x
leftmost :: f x -> Either (x, Quotient f x) (f Void)
leftmost = rightward . Left
which tells us that every f structure either has a leftmost element or none at all, a result we learn in school as the "Remainder Theorem". The multivariate version of the Quotient operator is the "derivative" which Brzozowski applied to regular expressions.
But our special case is Fox's derivative (about which I learned from Dan Piponi):
type Fox f x = Diss f x ()
That's the type of f-structures with stubs to the right of a cursor. Now we can give the type of our general grow operator.
grow :: Dissectable f => ([Fox f (Nu f)] -> f ()) -> Nu f
Our "context" is a stack of layers, each of which has fully grown data to the left and stubs to the right. We can implement grow directly as follows:
grow g = go [] where
  go stk = In (walk (rightward (Left (g stk)))) where
    walk (Left ((), df)) = walk (rightward (Right (df, go (df : stk))))
    walk (Right fm) = fm
As we arrive at each position, the joker we extract is just a stub, but its context tells us how to extend the stack in order to grow a substructure of the tree, which gives us the clown that we need to move right. Once we've filled in all the stubs with trees, we're done!
But here's the twist: grow is not so easy to express as an anamorphism. It's easy to give the "seed" for the leftmost child of each node, because we have only stubs to the right of us. But to give the next seed to the right, we need more than the leftmost seed — we need the tree that grows from it! The anamorphism pattern requires us to give all the seeds for substructures before growing any of them. Our growList is an anamorphism only because list nodes have at most one child.
So it's something new, after all, growing from nothing, but allowing later growth at a given layer to depend on earlier trees, with the Fox derivative capturing the idea of "stubs where we have yet to work". Perhaps we should call it an alopegmorphism, from the Greek αλωπηξ for "fox".
How about using self-reference and laziness?
Assume the values of σ are in the infinite list sigmas; then
p = [sum (zipWith (*) sigmas (reverse ps)) | ps <- inits p]
would implement this recursion quite neatly.
I am ignoring the factor of n here, for simplicity of the code, and also because I’m not sure what P_0 should be.

How can holes and contexts be implemented for higher-kinded types in a lens style uniplate library?

András Kovács proposed this question in response to an answer to a previous question.
In a lens-style uniplate library for types of kind * -> * based on the class
class Uniplate1 f where
  uniplate1 :: Applicative m => f a -> (forall b. f b -> m (f b)) -> m (f a)
analogous to the class for types of kind *
class Uniplate on where
  uniplate :: Applicative m => on -> (on -> m on) -> m on
is it possible to implement analogs to contexts and holes, which both have the type Uniplate on => on -> [(on, on -> on)] without requiring Typeable1?
It's clear that this could be implemented in the old-style of the uniplate library which used Str to represent the structure of the data by returning a structure with a type-level list of the types of the children.
A hole could be represented by the following data type, which would replace (on, on -> on) in the signatures for contexts and holes
data Hole f a where
  Hole :: f b -> (f b -> f a) -> Hole f a

holes :: Uniplate1 f => f a -> [Hole f a]
...
However, it is unclear if there is an implementation for holes which doesn't require Typeable1.
The suggested type Hole is needlessly restrictive in the return type of the function. The following type can represent everything the former Hole represents, and more, without loss of any type information.
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE GADTs #-}
data Hole f a where
  Hole :: f b -> (f b -> a) -> Hole f a
If we need to have a return type of f a, we can use Hole f (f a) to represent it. Since we will be using Holes a lot, it'd be nice to have a few utility functions. Because the return type of the function in Hole is no longer constrained to be in f, we can make a Functor instance for it
instance Functor (Hole f) where
  fmap f (Hole b g) = Hole b (f . g)
contexts1 can be written for either version of Hole by replacing the constructors for tuples in the uniplate library's contexts with Hole:
contexts1 :: Uniplate1 f => f a -> [Hole f (f a)]
contexts1 x = Hole x id : f (holes1 x)
  where
    f xs = [ Hole y (ctx . context)
           | Hole child ctx <- xs
           , Hole y context <- contexts1 child ]
holes1 is trickier, but can still be made by modifying holes from the uniplate library. It requires a new Replace1 Applicative functor that uses Hole instead of a tuple. Everywhere the second field of the tuple was modified by second (f .), we replace it with fmap f for the Hole.
data Replace1 f a = Replace1 { replaced1 :: [Hole f a], replacedValue1 :: a }

instance Functor (Replace1 f) where
  fmap f (Replace1 xs v) = Replace1 (map (fmap f) xs) (f v)

instance Applicative (Replace1 f) where
  pure v = Replace1 [] v
  Replace1 xs1 f <*> Replace1 xs2 v = Replace1 (ys1 ++ ys2) (f v)
    where ys1 = map (fmap ($ v)) xs1
          ys2 = map (fmap f) xs2

holes1 :: Uniplate1 f => f a -> [Hole f (f a)]
holes1 x = replaced1 $ descendM1 (\v -> Replace1 [Hole v id] v) x
descendM1 is defined in the preceding answer. Replace and Replace1 can be unified; how to do so is described after the examples.
Let's try some examples in terms of the code in the previous question. The following utility functions on Holes will be useful.
onHole :: (forall b. f b -> c) -> Hole f a -> c
onHole f (Hole x _) = f x
inHole :: (forall b. f b -> f b) -> Hole f a -> a
inHole g (Hole x f) = f . g $ x
Examples
We'll use the following example data and function, based on the code from the preceding questions:
example = If (B True) (I 2 `Mul` I 3) (I 1)

zero :: Expression b -> Expression b
zero x = case x of
  I _ -> I 0
  B _ -> B False
  Add _ _ -> I 0
  Mul _ _ -> I 0
  Eq _ _ -> B False
  And _ _ -> B False
  Or _ _ -> B False
  If _ a _ -> zero a
Holes
sequence_ . map (onHole print) . holes1 $ example
B True
Mul (I 2) (I 3)
I 1
Contexts
sequence_ . map (onHole print) . contexts1 $ example
If (B True) (Mul (I 2) (I 3)) (I 1)
B True
Mul (I 2) (I 3)
I 2
I 3
I 1
Replacement of each context
sequence_ . map print . map (inHole zero) . contexts1 $ example
I 0
If (B False) (Mul (I 2) (I 3)) (I 1)
If (B True) (I 0) (I 1)
If (B True) (Mul (I 0) (I 3)) (I 1)
If (B True) (Mul (I 2) (I 0)) (I 1)
If (B True) (Mul (I 2) (I 3)) (I 0)
Unifying Replace
The Replace Applicative Functor can be refactored so that it doesn't know about the type of holes for either Uniplate or Uniplate1, and instead only knows that the hole is a Functor. Holes for Uniplate were using the type (on, on -> a) and essentially using fmap f = second (f .); this is the composition of the ((,) on) and ((->) on) functors.
Instead of grabbing Compose from the transformers library, we'll make a new type for a Hole for Uniplate, which will make the example code here be more consistent and self-contained.
data Hole on a = Hole on (on -> a)

instance Functor (Hole on) where
  fmap f (Hole on g) = Hole on (f . g)
We'll rename our Hole from before to Hole1.
data Hole1 f a where
  Hole1 :: f b -> (f b -> a) -> Hole1 f a

instance Functor (Hole1 f) where
  fmap f (Hole1 b g) = Hole1 b (f . g)
Replace can drop all knowledge of either type of hole.
data Replace f a = Replace { replaced :: [f a], replacedValue :: a }

instance Functor f => Functor (Replace f) where
  fmap f (Replace xs v) = Replace (map (fmap f) xs) (f v)

instance Functor f => Applicative (Replace f) where
  pure v = Replace [] v
  Replace xs1 f <*> Replace xs2 v = Replace (ys1 ++ ys2) (f v)
    where ys1 = map (fmap ($ v)) xs1
          ys2 = map (fmap f) xs2
Both holes and holes1 can be implemented in terms of the new Replace.
holes :: Uniplate on => on -> [Hole on on]
holes x = replaced $ descendM (\v -> Replace [Hole v id] v) x
holes1 :: Uniplate1 f => f a -> [Hole1 f (f a)]
holes1 x = replaced $ descendM1 (\v -> Replace [Hole1 v id] v) x
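For symmetry (a sketch, mirroring contexts1 above), the first-order contexts can be rebuilt the same way with the simple Hole:

contexts :: Uniplate on => on -> [Hole on on]
contexts x = Hole x id : f (holes x)
  where
    f xs = [ Hole y (ctx . context)
           | Hole child ctx <- xs
           , Hole y context <- contexts child ]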
