Applicative without a functor - Haskell

I have a type Image which is essentially a C array of floats. It is easy to create functions
such as map :: (Float -> Float) -> Image -> Image, or zipWith :: (Float -> Float -> Float) -> Image -> Image -> Image.
However, I have a feeling that it should also be possible to provide something that looks like an Applicative instance on top of these functions, allowing more flexible pixel-level manipulations such as ((+) <$> image1 <*> image2) or ((\x y z -> (x+y)/z) <$> i1 <*> i2 <*> i3). The naive approach fails, though: the Image type cannot contain anything other than floats, which makes it impossible to implement fmap as such.
How could this be implemented?

Reading the comments, I'm a little worried that size is being swept under the carpet here. Is there a sensible behaviour when sizes mismatch?
Meanwhile, there may be something you can sensibly do along the following lines. Even if your arrays aren't easy to make polymorphic, you can make an Applicative instance like this.
data ArrayLike x = MkAL {sizeOf :: Int, eltOf :: Int -> x}
instance Applicative ArrayLike where
  pure x                = MkAL maxBound (pure x)
  MkAL i f <*> MkAL j g = MkAL (min i j) (f <*> g)
(Enthusiasts will note that I've taken the product of the (Int ->) applicative with that induced by the (maxBound, min) monoid.)
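For the f <$> ... examples below to typecheck, ArrayLike also needs the evident Functor instance, which just post-composes:
instance Functor ArrayLike where
  fmap f (MkAL n g) = MkAL n (f . g)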
Could you make a clean correspondence
imAL :: Image -> ArrayLike Float
alIm :: ArrayLike Float -> Image
by projection and tabulation? If so, you can write code like this.
alIm $ (f <$> imAL a1 <*> ... <*> imAL an)
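For illustration, one possible shape for that correspondence; imageSize, pixelAt, and makeImage are hypothetical accessors and a hypothetical constructor for the opaque Image type, not anything given in the question:
-- A sketch only: imageSize, pixelAt, and makeImage are made-up names.
imAL :: Image -> ArrayLike Float
imAL img = MkAL (imageSize img) (pixelAt img)        -- projection

alIm :: ArrayLike Float -> Image
alIm (MkAL n f) = makeImage n (map f [0 .. n - 1])   -- tabulation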
Moreover, if you then want to wrap that pattern up as an overloaded operator,
imapp :: (Float -> ... -> Float) -> (Image -> ... -> Image)
it's a standard exercise in typeclass programming! (Ask if you need more of a hint.)
The crucial point, though, is that the wrapping strategy means you don't need to monkey with your array structures in order to put functional superstructure on top.

How would you expect to perform operations on pixels in an image? That is, for ((+) <$> image1 <*> image2), would you want to perform all the operations in Haskell and construct a new resulting image, or would you have to call C functions to do all the processing?
If it's the former, pigworker's answer is the approach I would take.
If instead it's required that all image manipulations be handled via C, how about creating a small DSL to represent the operations?
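A minimal sketch of what such a DSL might look like (all names here are made up): reify the per-pixel expressions as data, and let a C backend evaluate the whole expression in a single pass over the images.
-- Hypothetical expression type for per-pixel arithmetic.
data PixelExpr
  = Input Int                   -- read from the n-th input image
  | Lit Float
  | Add PixelExpr PixelExpr
  | Div PixelExpr PixelExpr

-- (x + y) / z over three images:
example :: PixelExpr
example = Div (Add (Input 0) (Input 1)) (Input 2)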

You'll get a much more compositional Image type if you generalize the "pixel" type from Float and extend from finite & discrete domain (arrays) to infinite & continuous domain.
As a demonstration of these generalizations, see the paper Functional Images and a corresponding gallery of (finite samplings of) example images.
As a result, you get instances of Monoid, Functor, Applicative, Monad, and Comonad.
Moreover, the meanings of these instances are entirely determined by the corresponding instances for functions, satisfying the principle of semantic type class morphisms, as described in the paper Denotational design with type class morphisms.
Section 13.2 of that paper briefly describes imagery.
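A minimal sketch in the spirit of Functional Images, reusing the name Image for the continuous version: an image is a function from 2-D points to pixel values, so the instances are inherited directly from the function instances ((->) Point).
type Point = (Float, Float)

newtype Image a = Image (Point -> a)

instance Functor Image where
  fmap f (Image g) = Image (f . g)

instance Applicative Image where
  pure a = Image (const a)                 -- the constant image
  Image f <*> Image g = Image (\p -> f p (g p))  -- pointwise application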

Related

How do I get a handle on deep stacks of functors in Haskell?

Now and then I find myself mapping over a deep stack of functors, e.g. a parser for some collection of optional values:
-- parse a rectangular block of characters to a map of
-- coordinate to the character, or Nothing for whitespace
parseRectangle :: Parser (Map (Int, Int) (Maybe Char))
data Class = Letter | Digit | Other
classify :: Char -> Class
parseClassifiedRectangle :: Parser (Map (Int, Int) (Maybe Class))
parseClassifiedRectangle = fmap (fmap (fmap classify)) parseRectangle
What are some good ways around the nested fmaps? Oftentimes it's not as clear as here, and I end up adding fmaps until the code type checks. Simple code ends up as a mess of fmap boilerplate, where what I really want to express is "lift this function to the appropriate depth and apply it to the contained type".
Some ideas, none of which I've found particularly satisfactory so far:
define fmap2 :: (Functor f, Functor g) => (a -> b) -> g (f a) -> g (f b) and friends (see the sketch after this list)
define concrete helpers, like mapMaybeMap :: (a -> b) -> Map k (Maybe a) -> Map k (Maybe b)
introduce newtype wrappers for the functor stack, and make those instances of Functor, like newtype MaybeMapParser a = Parser (Map (Int, Int) (Maybe a))
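For reference, the first idea needs no new machinery, since nested fmaps compose:
fmap2 :: (Functor f, Functor g) => (a -> b) -> g (f a) -> g (f b)
fmap2 = fmap . fmap

fmap3 :: (Functor f, Functor g, Functor h)
      => (a -> b) -> h (g (f a)) -> h (g (f b))
fmap3 = fmap . fmap . fmap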
Do others run into this problem in large codebases? Is it even a problem? How do you deal with it?
Let me break the ice on this interesting question that people seem shy about answering. This question probably comes down to more of a matter of style than anything, hence the lack of answers.
My approach would be something like the following:
parseClassifiedRectangle :: Parser (Map (Int, Int) (Maybe Class))
parseClassifiedRectangle = doClassify <$> parseRectangle
  where
    doClassify = Map.map (fmap classify)
I try to use <$> for the top level Functor, and save fmap for interior functors; although that doesn't always work too well in practice.
I've used a local named binding. But even if doClassify were left as f, it sometimes helps clarify a high-level view of what's happening: "on the parsed value we are doing a thing; see below for what the thing does." I don't know what the efficiency concerns are for making a binding.
I've also used Map.map, the Map-specific version of fmap. This helps orient me within the stack and gives a signpost for the final fmap.
Hope this helps.

Why is this Functor instance incorrect?

I've written this code:
newtype Pixel a = Pixel (a,a,a) deriving (Show)
instance Functor [Pixel Int] where
  fmap f [] = []
  fmap f [Pixel (a, b, c)] = [Pixel (f a, b, c)]
I want the functor to apply to the first element in the Pixel type, but I keep getting this error:
New.hs:17:18: error:
• Expecting one fewer arguments to ‘[Pixel Int]’
Expected kind ‘* -> *’, but ‘[Pixel Int]’ has kind ‘*’
• In the first argument of ‘Functor’, namely ‘[Pixel Int]’
In the instance declaration for ‘Functor [Pixel Int]’
I'm pretty lost on this issue, is there any way to apply a functor on an entire list? Or do I need to set up a functor for an individual Pixel type and then iterate through a list?
From what I understand, you're given a list of pixels and you want to change the first component (i.e. the red component) of every pixel. Hence, you want the following function:
changeAllPixels :: [Pixel Int] -> [Pixel Int]
Q: How do we change every element of a list? A: We use map:
changeAllPixels = map changeOnePixel
changeOnePixel :: Pixel Int -> Pixel Int
We only want to change the red component. Hence, we have:
changeOnePixel = changeRedComponent doSomething
changeRedComponent :: (a -> a) -> Pixel a -> Pixel a
changeRedComponent f (Pixel (r, g, b)) = Pixel (f r, g, b)
doSomething :: Int -> Int
Now you only need to implement doSomething. For example, if you want to invert the red component then you could implement doSomething as follows:
doSomething x = 255 - x
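Assembled into one runnable module, the pieces above look like this:
newtype Pixel a = Pixel (a, a, a) deriving (Show)

changeRedComponent :: (a -> a) -> Pixel a -> Pixel a
changeRedComponent f (Pixel (r, g, b)) = Pixel (f r, g, b)

changeAllPixels :: [Pixel Int] -> [Pixel Int]
changeAllPixels = map (changeRedComponent (\x -> 255 - x))

main :: IO ()
main = print (changeAllPixels [Pixel (0, 10, 20), Pixel (255, 0, 0)])
-- prints [Pixel (255,10,20),Pixel (0,0,0)]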
Note that we didn't make Pixel an instance of Functor. This is because we only want to change the red component and leave the green and blue components alone. We did however use map which is the fmap for lists.
I think the biggest problem you have is that you don't understand functors well. You should probably spend some time getting acquainted with them.
Actually, [Pixel Int] already has a Functor instance, because it is a list ([]). The Functor instance for [] is defined in GHC's base library (its fmap is map). Now you just need a function that can be applied to each element of that list, e.g.
fmap show [Pixel (0, 0, 0), Pixel (1, 0, 0), Pixel (0, 1, 0)]
Functor is generally defined for some container type. It takes a function and applies it to the contents of the container. Then when you call fmap on a container that has an instance of Functor, the compiler will check that the function can be applied to the elements of that container.
If you are still confused about Functors, I recommend this tutorial: Functors, Applicatives, And Monads In Pictures.
Your syntax is a bit off: fmap applies a function to a data type, and the instance tells it how. To change values inside a list of pixels you would need to map (fmap f) over the list.
Give this implementation a try.
instance Functor Pixel where
  fmap f (Pixel (a, b, c)) = Pixel (f a, b, c)
Edit: this won't work, because a, b, c need to be of the same type, while Functor must allow functions of type a -> b.
As @AlexisKing commented, you shouldn't use fmap here; instead, write a function like mapPixelFirst :: (a -> a) -> Pixel a -> Pixel a, then map that function over your list.

Are Monad instances uniquely determined by their Applicative instances? [duplicate]

As described in this question and its answers, Functor instances are uniquely determined, if they exist.
For lists, there are two well-known Applicative instances: [] and ZipList. So Applicative isn't unique (see also Can GHC derive Functor and Applicative instances for a monad transformer? and Why is there no -XDeriveApplicative extension?). However, ZipList needs infinite lists, as its pure repeats a given element indefinitely.
Are there other, perhaps better examples of data structures that have at least two Applicative instances?
Are there any such examples that only involve finite data structures? That is, if hypothetically Haskell's type system distinguished inductive and coinductive data types, would it be possible to uniquely determine Applicative?
Going further, if we could extend both [] and ZipList to a Monad, we'd have an example where a monad isn't uniquely determined by the data type and its Functor. Alas, ZipList has a Monad instance only if we restrict ourselves to infinite lists (streams).
And return for [] creates a single-element list, so it requires finite lists. Therefore:
Are Monad instances uniquely determined by the data type? Or is there an example of a data type that can have two distinct Monad instances?
In the case there is an example with two or more distinct instances, an obvious question arises: must/can they have the same Applicative instance?
Are Monad instances uniquely determined by the Applicative instance, or is there an example of an Applicative that can have two distinct Monad instances?
Is there an example of a data type with two distinct Monad instances, each having a different Applicative super-instance?
And finally we can ask the same question for Alternative/MonadPlus. This is complicated by the fact that there are two distinct sets of MonadPlus laws. Assuming we accept one of the sets of laws (and for Applicative we accept right/left distributivity/absorption, see also this question),
is Alternative uniquely determined by Applicative, and MonadPlus by Monad, or are there any counter-examples?
If any of the above are unique, I'd be interested in knowing why, to have a hint of a proof. If not, a counter-example.
First, since Monoids are not unique, neither are Writer Monads or Applicatives. Consider
data M a = M Int a
then you can give it Applicative and Monad instances isomorphic to either of:
Writer (Sum Int)
Writer (Product Int)
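A sketch of the two choices; only one of them can be the instance for M itself, so the Product variant is shown in comments:
data M a = M Int a deriving Show

instance Functor M where
  fmap f (M n a) = M n (f a)

-- Like Writer (Sum Int):
instance Applicative M where
  pure = M 0
  M m f <*> M n a = M (m + n) (f a)

-- The Writer (Product Int) variant would instead use
--   pure = M 1
--   M m f <*> M n a = M (m * n) (f a)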
Given a Monoid instance for a type s, another isomorphic pair with different Applicative/Monad instances is:
ReaderT s (Writer s)
State s
As for having one Applicative instance extend to two different Monads, I cannot remember any example. However, back when I tried to convince myself completely about whether ZipList really cannot be made a Monad, I found the following pretty strong restriction that holds for any Monad:
join (fmap (\x -> fmap (\y -> f x y) ys) xs) = f <$> xs <*> ys
That doesn't give join for all values though: in the case of lists the restricted values are the ones where all elements have the same length, i.e. lists of lists with "rectangular" shape.
(For Reader monads, where the "shape" of monadic values doesn't vary, these are in fact all the m (m x) values, so those do have unique extension. EDIT: Come to think of it, Either, Maybe and Writer also have only "rectangular" m (m x) values, so their extension from Applicative to Monad is also unique.)
I wouldn't be surprised if an Applicative with two Monads exists, though.
For Alternative/MonadPlus, I cannot recall any law for instances using the Left Distribution law instead of Left Catch, and I see nothing preventing you from just swapping (<|>) with flip (<|>). I don't know if there's a less trivial variation.
ADDENDUM: I suddenly remembered I had found an example of an Applicative with two Monads. Namely, finite lists. There's the usual Monad [] instance, but you can then replace its join by the following function (essentially making empty lists "infectious"):
ljoin :: [[a]] -> [a]
ljoin xs
  | any null xs = []
  | otherwise   = concat xs
(Alas, the lists need to be finite because otherwise the null check will never finish, and that would ruin the join . fmap return == id monad law.)
This has the same value as join/concat on rectangular lists of lists, so will give the same Applicative. As I recall, it turns out that the first two monad laws are automatic from that, and you just need to check ljoin . ljoin == ljoin . fmap ljoin.
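Concretely, the two joins can be told apart only on ragged inputs:
-- Both joins agree on "rectangular" lists of lists:
--   concat [[1,2],[3,4]] == [1,2,3,4] == ljoin [[1,2],[3,4]]
-- but differ on ragged ones containing an empty list:
--   concat [[1],[]] == [1]
--   ljoin  [[1],[]] == []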
Given that every Applicative has a Backwards counterpart,
newtype Backwards f x = Backwards {backwards :: f x}
instance Applicative f => Applicative (Backwards f) where
  pure x = Backwards (pure x)
  Backwards ff <*> Backwards fs = Backwards (flip ($) <$> fs <*> ff)
it's unusual for Applicative to be uniquely determined, just as (and this is very far from unrelated) many sets extend to monoids in multiple ways.
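For lists the difference is observable. Assuming the evident Functor instance, fmap f (Backwards fx) = Backwards (fmap f fx), the two instances order the effects differently:
--   (,) <$> [1,2] <*> "ab"
--     == [(1,'a'),(1,'b'),(2,'a'),(2,'b')]
--   backwards ((,) <$> Backwards [1,2] <*> Backwards "ab")
--     == [(1,'a'),(2,'a'),(1,'b'),(2,'b')]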
In this answer, I set the exercise of finding at least four distinct valid Applicative instances for nonempty lists: I won't spoil it here, but I will give a big hint on how to hunt.
Meanwhile, in some wonderful recent work (which I saw at a summer school a few months ago), Tarmo Uustalu showed a rather neat way to get a handle on this problem, at least when the underlying functor is a container, in the sense of Abbott, Altenkirch and Ghani.
Warning: Dependent types ahead!
What is a container? If you have dependent types to hand, you can present container-like functors F uniformly, as being determined by two components
a set of shapes, S : Set
an S-indexed set of positions, P : S -> Set
Up to isomorphism, container data structures in F X are given by the dependent pair of some shape s : S, and some function e : P s -> X, which tells you the element located at each position. That is, we define the extension of a container
(S <| P) X = (s : S) * (P s -> X)
(which, by the way, looks a lot like a generalized power series if you read -> as reversed exponentiation). The triangle is supposed to remind you of a tree node sideways, with an element s : S labelling the apex, and the baseline representing the position set P s. We say that some functor is a container if it is isomorphic to some S <| P.
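Without dependent types, a rough non-dependent Haskell approximation for the list container (the position type cannot actually depend on the length, so this is only a sketch; compare ArrayLike from the first answer above):
-- shapes are lengths; positions are indices 0 .. shapeOf - 1
data ListC x = ListC { shapeOf :: Int, posOf :: Int -> x }

toListC :: [x] -> ListC x
toListC xs = ListC (length xs) (xs !!)   -- partial beyond the bound

fromListC :: ListC x -> [x]
fromListC (ListC n e) = map e [0 .. n - 1]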
In Haskell, you can easily take S = F (), but constructing P can take quite a bit of type-hackery. But that is something you can try at home. You'll find that containers are closed under all the usual polynomial type-forming operations, as well as identity,
Id ~= () <| \ _ -> ()
composition, where a whole shape is made from just one outer shape and an inner shape for each outer position,
(S0 <| P0) . (S1 <| P1) ~= ((S0 <| P0) S1) <| \ (s0, e0) -> (p0 : P0 s0) * P1 (e0 p0)
and some other things, notably the tensor, where there is one outer and one inner shape (so "outer" and "inner" are interchangeable)
(S0 <| P0) (X) (S1 <| P1) = ((S0, S1) <| \ (s0, s1) -> (P0 s0, P1 s1))
so that F (X) G means "F-structures of G-structures-all-the-same-shape", e.g., [] (X) [] means rectangular lists-of-lists. But I digress.
Polymorphic functions between containers
Every polymorphic function
m : forall X. (S0 <| P0) X -> (S1 <| P1) X
can be implemented by a container morphism, constructed from two components in a very particular way.
a function f : S0 -> S1 mapping input shapes to output shapes;
a function g : (s0 : S0) -> P1 (f s0) -> P0 s0 mapping output positions to input positions.
Our polymorphic function is then
\ (s0, e0) -> (f s0, e0 . g s0)
where the output shape is computed from the input shape, then the output positions are filled up by picking elements from input positions.
(If you're Peter Hancock, you have a whole other metaphor for what's going on. Shapes are Commands; Positions are Responses; a container morphism is a device driver, translating commands one way, then responses the other.)
Every container morphism gives you a polymorphic function, but the reverse is also true. Given such an m, we may take
(f s, g s) = m (s, id)
That is, we have a representation theorem, saying that every polymorphic function between two containers is given by such an f, g-pair.
What about Applicative? We kind of got a bit lost along the way, building all this machinery. But it has been worth it. When the underlying functors for monads and applicatives are containers, the polymorphic functions pure and <*>, return and join must be representable by the relevant notion of container morphism.
Let's take applicatives first, using their monoidal presentation. We need
unit : () -> (S <| P) ()
mult : forall X, Y. ((S <| P) X, (S <| P) Y) -> (S <| P) (X, Y)
The left-to-right maps for shapes require us to deliver
unitS : () -> S
multS : (S, S) -> S
so it looks like we might need a monoid. And when you check the applicative laws, you find we need exactly a monoid. Equipping a container with applicative structure is exactly refining the monoid structures on its shapes with suitable position-respecting operations. There's nothing to do for unit (because there is no choice of source position), but for mult, we need that whenever
multS (s0, s1) = s
we have
multP (s0, s1) : P s -> (P s0, P s1)
satisfying appropriate identity and associativity conditions. If we switch to Hancock's interpretation, we're defining a monoid (skip, semicolon) for commands, where there is no way to look at the response to the first command before choosing the second, like commands are a deck of punch cards. We have to be able to chop up responses to combined commands into the individual responses to the individual commands.
So, every monoid on the shapes gives us a potential applicative structure. For lists, shapes are numbers (lengths), and there are a great many monoids from which to choose. Even if shapes live in Bool, we have quite a bit of choice.
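To make the list case concrete, different monoids on the lengths give different Applicatives:
--   (1, *)          -- the usual [] instance:
--                   --   length (fs <*> xs) == length fs * length xs
--   (infinity, min) -- ZipList: pure repeats forever, (<*>) zips
--                   --   down to the shorter length
--                   --   (the (maxBound, min) monoid of ArrayLike above)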
What about Monad? Meanwhile, for monads M with M ~= S <| P, we need
return : Id -> M
join : M . M -> M
Looking at shapes first, that means we need a sort-of lopsided monoid.
return_f : () -> S
join_f : (S <| P) S -> S -- (s : S, P s -> S) -> S
It's lopsided because we get a bunch of shapes on the right, not just one. If we switch to Hancock's interpretation, we're defining a kind of sequential composition for commands, where we do let the second command be chosen on the basis of the first response, like we're interacting at a teletype. More geometrically, we're explaining how to glom two layers of a tree into one. It would be very surprising if such compositions were unique.
Again, for the positions, we have to map single output positions to pairs in a coherent way. This is trickier for monads: we first choose an outer position (response), then we have to choose an inner position (response) appropriate to the shape (command) found at the first position (chosen after the first response).
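For example, for the ordinary list monad the shape-level structure works out as follows (concat's length is the sum of the inner lengths):
--   return_f ()   = 1                            -- a singleton list
--   join_f (n, f) = sum [ f i | i <- [0..n-1] ]  -- concat sums the inner lengths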
I'd love to link to Tarmo's work for the details, but it doesn't seem to have hit the streets yet. He has actually used this analysis to enumerate all possible monad structures for several choices of underlying container. I'm looking forward to the paper!
Edit. By way of doing honour to the other answer, I should observe that when everywhere P s = (), then (S <| P) X ~= (S, X) and the monad/applicative structures coincide exactly with each other and with the monoid structures on S. That is, for writer monads, we need only choose the shape-level operations, because there is exactly one position for a value in every case.

Generalizing mapAndUnzip

I'm looking for an easy way to write the function
mapAndUnzip :: (Functor f) => (a -> (b,c)) -> f a -> (f b, f c)
I'm not entirely convinced that Functor is a strong enough constraint, but I'll use it for concreteness. I want to be able to apply this function when f is (to name a few) [], Data.Vector.Unboxed.Vector, or one of my own wrapper types around [a] and Vector a. (Other possible types include Array, Repa vectors, etc.)
My key requirement is that I should not need a constraint like (Unbox (b,c)), only (Unbox b,Unbox c). Subrequirement: compute the function only once for each input element.
I see a way to do this for Vector by building two mutable vectors as I map over the input, but I'm hoping there's a better way than making a new class and my own instances for different types. The list-specific way that GHC.Util defines mapAndUnzip makes me think a generic solution might not be possible, but I figured I'd get a second opinion before hacking my own solution.
Functor is enough; you can do:
(Edit: compute g only once.)
mapAndUnzip :: Functor f => (a -> (b, c)) -> f a -> (f b, f c)
mapAndUnzip g fa = (fmap fst fbc, fmap snd fbc)
  where
    fbc = fmap g fa
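A quick check of the behaviour, for lists:
--   mapAndUnzip (\x -> (x + 1, x * 2)) [1, 2, 3]  ==  ([2,3,4], [2,4,6])
-- The shared fbc binding means g is applied once per element; the two
-- projections then just take fst and snd of the cached pairs.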
