Does a joined Bitraversable require Monad? - haskell

Despite the jargon-filled title, I don't think this question is very complex.
Introducing the characters
There are two important Functor combinators at play here. Flip is equivalent to the Haskell function flip, but operating on types:
newtype Flip p a b = Flip { unFlip :: p b a }
and Join, equivalent to the W combinator on types: it takes a bifunctor and produces a functor by identifying both of its arguments:
newtype Join p a = W { unW :: p a a }
Traversable
Now for Foldable it is possible to make the following instance:
instance
  ( forall a . Foldable (p a)
  , forall a . Foldable (Flip p a)
  ) => Foldable (Join p) where
  foldr g x (W xs) = foldr g (foldr g x xs) (Flip xs)
That is to say, if p is foldable across both of its arguments, then Join p is foldable. This is done by folding across the left and then the right.
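As a quick sanity check of this pattern, pairs give a small instance (my own sketch, assuming the definitions above plus the QuantifiedConstraints, FlexibleInstances and UndecidableInstances extensions):
-- The left argument of (,) becomes foldable once flipped; the right
-- argument already has the standard Foldable ((,) a) instance.
instance Foldable (Flip (,) a) where
  foldr g z (Flip (x, _)) = g x z

-- With that in scope: foldr (+) 0 (W (1, 2)) == 3 -- both components fold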
Now I would like to make an analogous instance for Traversable; however, I run into a problem. I can write sequence easily enough:
sequence (W xs) = (fmap W . join . fmap (sequenceA . unFlip) . sequenceA . Flip) xs
However, this uses join, which needs a Monad rather than just an Applicative, so I am having trouble writing sequenceA. In fact it very much seems impossible to write sequenceA.
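Following the types through that pipeline (my own annotations) shows exactly where the Monad requirement enters:
-- xs                             :: p (m a) (m a)
-- Flip xs                        :: Flip p (m a) (m a)
-- sequenceA (Flip xs)            :: m (Flip p (m a) a)  -- left side traversed
-- fmap (sequenceA . unFlip) ...  :: m (m (p a a))       -- right side traversed
-- join ...                       :: m (p a a)           -- needs Monad m!
-- fmap W ...                     :: m (Join p a)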
However, I struggle to come up with a counterexample: that is, a p which is traversable in both of its arguments but not traversable when joined.
So far I've tried all the basics, but none are counterexamples. Join (,) is traversable:
sequenceA (W (x, y)) = liftA2 (curry W) x y
Higher-arity tuples such as Join ((,,) a) are fine:
sequenceA (W (x, y, z)) = liftA2 (\b c -> W (x, b, c)) y z
Join Either is also traversable:
sequenceA (W (Left x)) = fmap (W . Left) x
sequenceA (W (Right x)) = fmap (W . Right) x
I've come up with more examples by composing types, which I will leave out for brevity; needless to say, they all ended up being traversable.
Is there a counterexample? Or can this instance be written?

Related

Natural map derivation algorithm

This Reddit post by Edward Kmett provides a constructive definition of a natural map, the one from the free theorem for fmap (which I read in yet another of Edward Kmett's posts):
For given f, g, h and k, such that f . g = h . k: $map f . fmap g = fmap h . $map k, where $map is the natural map for the given constructor.
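As a concrete instance of the theorem at lists, where $map is just map (my own example):
-- With f = (+2), g = (*2), h = (*2), k = (+1) we have f . g == h . k
-- (both send x to 2*x + 2), and indeed both sides agree:
--   map (+2) (map (*2) [1,2,3]) == map (*2) (map (+1) [1,2,3]) -- [4,6,8]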
I do not fully understand the algorithm. Let us approach it step-by-step:
We can define a "natural map" by induction over any particular concrete choice of F you give. Ultimately any such ADT is made out of sums, products, (->)'s, 1s, 0s, a's, invocations of other functors, etc.
Consider:
data Smth a = A a a a | B a (Maybe a) | U | Z Void deriving ...
No arrows. Let us see how fmap (which I reckon to be the natural choice for any ADT without (->)s in it) would operate here:
instance Functor Smth where
  fmap xy (A x x1 x2) = A (xy x) (xy x1) (xy x2)
  fmap xy (B x xPlus1) = B (xy x) (fmap xy xPlus1)
  -- One can pattern-match 'xPlus1' as 'Just x1' and 'Nothing'.
  -- From my point of view, 'fmap' suits better here. Reasons below.
  fmap _xy U = U
  fmap _xy (Z z) = absurd z
Which seems natural. To put this more formally, we apply xy to every x, apply fmap xy to every T x, where T is a Functor, we leave every unit unchanged, and we pass every Void onto absurd. This works for recursive definitions too!
data Lst a = Unit | Prepend a (Lst a) deriving ...
instance Functor Lst where
  fmap xy Unit = Unit
  fmap xy (Prepend x lstX) = Prepend (xy x) (fmap xy lstX)
And for the non-inductive types (detailed explanation in this answer under the linked post):
data Graph a = Node a [Graph a]
instance Functor Graph where
  fmap xy (Node x children) = Node (xy x) (fmap (fmap xy) children)
This part is clear.
When we allow (->) we now have the first thing that mixes variance up. The left-hand type argument of (->) is in contravariant position, the right-hand side is in covariant position. So you need to track the final type variable through the entire ADT and see if it occurs in positive and/or negative position.
Now we include (->)s. Let us try to keep this induction going:
We somehow derived natural maps for T a and S a. Thus, we want to consider the following:
data T2S a = T2S (T a -> S a)

instance ?Class? T2S where
  ?map? ?? (T2S tx2sx) = T2S $ \ty -> ???
And I believe this to be the point where we start choosing. We have the following options:
(Phantom) a occurs neither in T nor in S. Then a in T2S is phantom, and thus we can implement both fmap and contramap as const phantom.
(Covariant) a occurs in S a and does not occur in T a. Thus, this is something along the lines of ReaderT, with T a (which does not actually depend on a) as the environment. This substitutes ?Class? with Functor, ?map? with fmap, ?? with xy, and ??? with:
let tx = phantom ty
    sx = tx2sx tx
    sy = fmap xy sx
in sy
(Contravariant) a occurs in T a and does not occur in S a. I do not see a way to make this thing a covariant functor, so we implement a Contravariant instance here, which substitutes ?Class? with Contravariant, ?map? with contramap, ?? with yx, and ??? with:
let tx = fmap yx ty
    sx = tx2sx tx
    sy = phantom sx
in sy
(Invariant) a occurs both in T a and S a. We can no longer use phantom, which came in quite handy. There is a module Data.Functor.Invariant by Edward Kmett. It provides the following class with a map:
class Invariant f where
  invmap :: (a -> b) -> (b -> a) -> f a -> f b
  -- and some generic stuff...
And yet, I cannot see a way to plug this into the free theorem for fmap - the type requires an additional function argument, which we can't brush off as id or something. Anyway, we put invmap instead of ?map?, xy yx instead of ??, and the following instead of ???:
let tx = fmap yx ty
    sx = tx2sx tx
    sy = fmap xy sx
in sy
So, is my understanding of such an algorithm correct? If so, how are we to properly process the Invariant case?
I think your algorithm is too complex, because you are trying to write one algorithm. Writing two algorithms instead makes things much simpler. One algorithm will build the natural fmap, and the other will build the natural contramap. BUT both algorithms need to be nondeterministic in the following sense: there will be types where they cannot succeed, and so do not return an implementation; and there will be types where there are multiple ways they can succeed, but they're all equivalent.
To start, let's carefully define what it means to be a parameterized type. Here are the different kinds of parameterized types we can have:
F ::= F + F'
    | F * F'
    | F -> F'
    | F . F'
    | Id
    | Const X
In Const X, the X ranges over all concrete, non-parameterized types, like Int and Bool and so forth. And here's their interpretation, i.e. the concrete type they are isomorphic to once given a parameter:
[[F + F']] a = Either ([[F]] a) ([[F']] a)
[[F * F']] a = ([[F]] a, [[F']] a)
[[F -> F']] a = [[F]] a -> [[F']] a
[[F . F']] a = [[F]] ([[F']] a)
[[Id]] a = a
[[Const X]] a = X
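To see how some familiar types fit this grammar (my own decompositions, using names from base where they exist):
-- Maybe      ~  Const () + Id       -- Nothing | Just a
-- Pair       ~  Id * Id             -- (a, a)
-- Endo       ~  Id -> Id            -- a -> a: both variances at once
-- Predicate  ~  Id -> Const Bool    -- a -> Bool: purely contravariant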
Now we can give our two algorithms. The first bit you've already written yourself:
fmap #(F + F') f (Left x) = Left (fmap #F f x)
fmap #(F + F') f (Right x) = Right (fmap #F' f x)
fmap #(F * F') f (x, y) = (fmap #F f x, fmap #F' f y)
fmap #(Id) f x = f x
fmap #(Const X) f x = x
These correspond to the clauses you gave in your first instance. Then, in your [Graph a] example, you gave a clause corresponding to this:
fmap #(F . F') f x = fmap #F (fmap #F' f) x
That's fine, but this is also the first moment where we get some nondeterminism. One way to make this a functor is indeed nested fmaps; but another way is nested contramaps.
fmap #(F . F') f x = contramap #F (contramap #F' f) x
If both clauses are possible, then there are no Ids in either F or F', so both instances will return x unchanged.
The only thing left now is the arrow case, the one you ask about. But it turns out it's very easy in this formalism; there is only one choice:
fmap #(F -> F') f x = fmap #F' f . x . contramap #F f
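To make the arrow case concrete, here is the algorithm run on a small mixed-variance type (my own example, not from the answer):
-- G = (Id -> Const Int) -> Id, i.e. G a = (a -> Int) -> a
newtype G a = G ((a -> Int) -> a)

instance Functor G where
  -- The arrow clause gives fmap #Id f . x . contramap #(Id -> Const Int) f,
  -- and the contramap clauses below give contramap #(Id -> Const Int) f k = k . f, so:
  fmap f (G x) = G (\k -> f (x (k . f)))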
That's the whole algorithm, in full detail, for defining the natural fmap. ...except one detail, which is the algorithm for the natural contramap. But hopefully if you followed all of the above, you can reproduce that algorithm yourself. I encourage you to give it a shot, then check your answers against mine below.
contramap #(F + F') f (Left x) = Left (contramap #F f x)
contramap #(F + F') f (Right x) = Right (contramap #F' f x)
contramap #(F * F') f (x, y) = (contramap #F f x, contramap #F' f y)
contramap #(F -> F') f x = contramap #F' f . x . fmap #F f
contramap #(F . F') f x = contramap #F (fmap #F' f) x
-- OR
contramap #(F . F') f x = fmap #F (contramap #F' f) x
-- contramap #(Id) fails
contramap #(Const X) f x = x
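Running the contramap clauses on a familiar contravariant type (my own example; this mirrors the Predicate type from Data.Functor.Contravariant):
import Data.Functor.Contravariant (Contravariant (..))

-- Predicate = Id -> Const Bool in the grammar
newtype Predicate a = Predicate (a -> Bool)

instance Contravariant Predicate where
  -- contramap #(Id -> Const Bool) f x = contramap #(Const Bool) f . x . fmap #Id f
  --                                   = id . x . f
  contramap f (Predicate p) = Predicate (p . f)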
One thing of interest to me personally: it turns out that contramap #(Id) is the only leaf case that fails. All further failures are inductive failures ultimately deriving from this one -- a fact I had never thought of before! (The dual statement is that it turns out that fmap #(Id) is the only leaf case that actually uses its first function argument.)

Is `data PoE a = Empty | Pair a a` a monad?

This question comes from this answer to "example of a functor that is Applicative but not a Monad".
It is claimed there that the type
data PoE a = Empty | Pair a a deriving (Functor,Eq)
cannot have a Monad instance, but I fail to see why, given:
instance Applicative PoE where
  pure x = Pair x x
  Pair f g <*> Pair x y = Pair (f x) (g y)
  _ <*> _ = Empty

instance Monad PoE where
  Empty >>= _ = Empty
  Pair x y >>= f = case (f x, f y) of
    (Pair x' _, Pair _ y') -> Pair x' y'
    _ -> Empty
The actual reason why I believe this to be a monad is that it is isomorphic to Maybe (Pair a) with Pair a = P a a. They are both monads and both traversable, so their composition should form a monad, too. Oh, I just found out that is not always the case.
Which counterexample fails which monad law? (And how can one find that out systematically?)
Edit: I did not expect such interest in this question. Now I have to make up my mind whether to accept the best example or the best answer to the "systematically" part.
Meanwhile, I want to visualize how join works for the simpler Pair a = P a a:
          P
     ____/ \____
    /           \
   P             P
  / \           / \
 1   2         3   4
It always takes the outer path, yielding P 1 4, more commonly known as the diagonal in a matrix representation. For monad associativity I need three dimensions, so a tree visualization works better. Taken from chi's answer, this is the failing example for join, and how I can comprehend it:
                Pair
        _______/    \_______
       /                    \
     Pair                  Pair
     /  \                  /  \
    /    \                /    \
 Pair    Empty       Empty    Pair
 /  \                         /  \
1    2                       3    4
Now you compute join . fmap join by collapsing the lower levels first; for join . join, collapse from the root.
Apparently, it is not a monad. One of the monad "join" laws is
join . join = join . fmap join
Hence, according to the law above, these two outputs should be equal, but they are not.
import Control.Monad (join)  -- plus the PoE definitions from the question

main :: IO ()
main = do
  let x = Pair (Pair (Pair 1 2) Empty) (Pair Empty (Pair 7 8))
  print (join . join $ x)
  -- output: Pair 1 8
  print (join . fmap join $ x)
  -- output: Empty
The problem is that
join x = Pair (Pair 1 2) (Pair 7 8)
fmap join x = Pair Empty Empty
Performing an additional join on those does not make them equal.
how to find that out systematically?
join . join has type m (m (m a)) -> m a, so I started with a triple-nested Pair-of-Pair-of-Pair, using numbers 1..8. That worked fine. Then, I tried to insert some Empty inside, and quickly found the counterexample above.
This approach was possible since an m (m (m Int)) value only contains a finite number of integers, and we only have the constructors Pair and Empty to try.
For these checks, I find the join law easier to test than, say, associativity of >>=.
QuickCheck immediately finds a counterexample to associativity.
{-# LANGUAGE DeriveFunctor #-}
import Test.QuickCheck

data PoE a = Empty | Pair a a deriving (Functor, Eq, Show)

instance Applicative PoE where
  pure x = Pair x x
  Pair f g <*> Pair x y = Pair (f x) (g y)
  _ <*> _ = Empty

instance Monad PoE where
  Empty >>= _ = Empty
  Pair x y >>= f = case (f x, f y) of
    (Pair x' _, Pair _ y') -> Pair x' y'
    _ -> Empty

instance Arbitrary a => Arbitrary (PoE a) where
  arbitrary = oneof [pure Empty, Pair <$> arbitrary <*> arbitrary]

prop_assoc :: PoE Bool -> (Bool -> PoE Bool) -> (Bool -> PoE Bool) -> Property
prop_assoc m k h =
  ((m >>= k) >>= h) === (m >>= (\a -> k a >>= h))

main = do
  quickCheck $ \m (Fn k) (Fn h) -> prop_assoc m k h
Output:
*** Failed! Falsifiable (after 35 tests and 3 shrinks):
Pair True False
{False->Pair False False, True->Pair False True, _->Empty}
{False->Pair False True, _->Empty}
Pair False True /= Empty
Since you are interested in how to do it systematically, here's how I found a counterexample with quickcheck:
{-# LANGUAGE DeriveFunctor #-}
import Control.Monad ((>=>))
import Test.QuickCheck
-- <your code>
Defining an Arbitrary instance to generate random PoEs:
instance (Arbitrary a) => Arbitrary (PoE a) where
  arbitrary = do
    emptyq <- arbitrary
    if emptyq
      then return Empty
      else Pair <$> arbitrary <*> arbitrary
And tests for the monad laws:
prop_right_id m = (m >>= return) == m
  where
    _types = m :: PoE Int

prop_left_id fun x = (return x >>= f) == f x
  where
    _types = fun :: Fun Int (PoE Int)
    f = applyFun fun

prop_assoc fun gun hun x = (f >=> (g >=> h)) x == ((f >=> g) >=> h) x
  where
    _types = ( fun :: Fun Int (PoE Int)
             , gun :: Fun Int (PoE Int)
             , hun :: Fun Int (PoE Int)
             , x :: Int )
    f = applyFun fun
    g = applyFun gun
    h = applyFun hun
I don't get any failures for the identity laws, but prop_assoc does generate a counterexample:
ghci> quickCheck prop_assoc
*** Failed! Falsifiable (after 7 tests and 36 shrinks):
{6->Pair 1 (-1), _->Empty}
{-1->Pair (-3) (-4), 1->Pair (-1) (-2), _->Empty}
{-3->Empty, _->Pair (-2) (-4)}
6
While it's not terribly helpful for understanding why the failure occurs, it does give you a place to start. If we look carefully, we see that we are passing (-3) and (-2) to the third function; (-3) maps to Empty and (-2) maps to a Pair, so we can't defer to the laws of either of the two monads PoE is composed of.
This kind of potential Monad instance can be concisely described as "taking the diagonal". It is easier to see why if we use the join presentation. Here is join for the Pair type you mention:
join (P (P a00 a01) (P a10 a11)) = P a00 a11
Taking the diagonal, however, is only guaranteed to give a lawful join for fixed length (or infinite) lists. That's because of the associativity law:
join . join = join . fmap join
If the n-th list in a list of lists doesn't have an n-th element, it will lead to the diagonal being trimmed: it will end before its n-th element. join . join takes the outer diagonal (of a list of lists of lists) first, while join . fmap join takes the inner diagonals first. It may be possible for an insufficiently long innermost list which is not in the outer diagonal to trim join . fmap join, but it can't possibly affect join . join. (This would be easier to show with a picture instead of words.)
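In place of that picture, here is a sketch in code, using a hand-rolled "diagonal" on lists (my own definition; as noted above, it is only a lawful join for fixed-length or infinite lists):
-- diag takes the i-th element of the i-th list, stopping at the
-- first list that is too short (the "trimming").
diag :: [[a]] -> [a]
diag = go 0
  where
    go i (xs : xss) | i < length xs = (xs !! i) : go (i + 1) xss
    go _ _ = []

x :: [[[Int]]]
x = [ [[1,2], [3,4]]
    , [[]   , [7,8]] ]

-- diag (diag x)     == [1,8] -- the empty innermost list is never consulted,
-- diag (map diag x) == [1]   -- but it trims the inner diagonal of row two.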
Your PoE is a list-like type that doesn't have fixed length (the length is either zero or two). It turns out that taking its diagonal doesn't give us a monad, as the potential issue discussed above actually gets in the way (as illustrated in chi's answer).
Additional notes:
This is precisely the reason ZipList is not a monad: the zippy behaviour amounts to taking the diagonal.
Infinite lists are isomorphic to functions from the naturals, and fixed length lists are isomorphic to functions from the naturals up to an appropriate value. This means you can get a Monad instance for them out of the instance for functions -- and the instance you get, again, amounts to taking the diagonal.
Once upon a time I got confused about this exact issue.
(Posting this as a separate answer, as it has little overlap with my other one.)
The actual reason why I believe this to be a monad is that it is isomorphic to Maybe (Pair a) with Pair a = P a a. They are both monads and both traversable, so their composition should form a monad, too. Oh, I just found out that is not always the case.
The conditions for the composition of monads m-over-n with n traversable are:
-- Using TypeApplications notation to make the layers easier to track.
sequenceA #n #m . pure #n = fmap #m (pure #n)

sequenceA #n #m . fmap #n (join #m)
  = join #m . fmap #m (sequenceA #n #m) . sequenceA #n #m

sequenceA #n #m . join #n
  = fmap #m (join #n) . sequenceA #n #m . fmap #n (sequenceA #n #m)
(There is also sequenceA #n #m . fmap #n (pure #m) = pure #m, but that always holds.)
In our case, we have m ~ Maybe and n ~ Pair. The relevant method definitions for Pair would be:
fmap f (P x y) = P (f x) (f y)
pure x = P x x
P f g <*> P x y = P (f x) (g y)
join (P (P a00 a01) (P a10 a11)) = P a00 a11 -- Let's pretend join is a method.
sequenceA (P x y) = P <$> x <*> y
Let's check the third property:
sequenceA #n #m . join #n
= fmap #m (join #n) . sequenceA #n #m . fmap #n (sequenceA #n #m)
-- LHS
sequenceA . join $ P (P a00 a01) (P a10 a11)
sequenceA $ P a00 a11
P <$> a00 <*> a11 -- Maybe (Pair a)
-- RHS
fmap join . sequenceA . fmap sequenceA $ P (P a00 a01) (P a10 a11)
fmap join . sequenceA $ P (P <$> a00 <*> a01) (P <$> a10 <*> a11)
fmap join $ P <$> (P <$> a00 <*> a01) <*> (P <$> a10 <*> a11)
fmap join $ (\x y z w -> P (P x y) (P z w)) <$> a00 <*> a01 <*> a10 <*> a11
(\x _ _ w -> P x w) <$> a00 <*> a01 <*> a10 <*> a11 -- Maybe (Pair a)
These are clearly not the same: while any a values will be drawn exclusively from a00 and a11, the effects of a01 and a10 are ignored in the left-hand side, but not in the right-hand side (in other words, if a01 or a10 are Nothing, the RHS will be Nothing, but the LHS won't necessarily be so). The LHS corresponds exactly to the vanishing Empty in chi's answer, and the RHS corresponds to the inner diagonal trimming described in my other answer.
P.S.: I forgot to show that the would-be instance we are talking about here is the same one being discussed in the question:
join' :: m (n (m (n a))) -> m (n a)
join' = fmap #m (join #n) . join #m . fmap #m (sequenceA #n #m)
With m ~ Maybe and n ~ Pair, we have:
join' :: Maybe (Pair (Maybe (Pair a))) -> Maybe (Pair a)
join' = fmap #Maybe (join #Pair) . join #Maybe . fmap #Maybe (sequenceA #Pair #Maybe)
join #Maybe . fmap #Maybe (sequenceA #Pair #Maybe) means the join' will result in Nothing unless there are no Nothings anywhere:
join' = \case
  Just (P (Just (P a00 a01)) (Just (P a10 a11))) -> _
  _ -> Nothing
Working out the non-Nothing case is straightforward:
fmap join . join . fmap sequenceA $ Just (P (Just (P a00 a01)) (Just (P a10 a11)))
fmap join . join $ Just (Just (P (P a00 a01) (P a10 a11)))
fmap join $ Just (P (P a00 a01) (P a10 a11))
Just (P a00 a11)
Therefore...
join' = \case
  Just (P (Just (P a00 _)) (Just (P _ a11))) -> Just (P a00 a11)
  _ -> Nothing
... which is essentially the same as:
join = \case
  Pair (Pair a00 _) (Pair _ a11) -> Pair a00 a11
  _ -> Empty

Is there a nicer way to apply a function to both elements of a pair in a list than a list comprehension?

I use this a fair bit:
a' = [ (f x, f y) | (x, y) <- a ]
Is there a better way to do that?
You can use the (***) operator from Control.Arrow
> map (f *** f) a
or define your own helper function
> let both f (x, y) = (f x, f y)
> map (both f) a
Alternative solution:
import Data.Bifunctor
bimap f f pair
Bifunctor.bimap is basically the same as Arrow.(***), but works for other bifunctors (like Either a b), too.
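For example, a couple of evaluations (Data.Bifunctor ships with GHC since base 4.8):
import Data.Bifunctor (bimap)

-- the same function on both components of a pair:
--   bimap negate negate (1, 2)                     == (-1, -2)
-- different functions, and another bifunctor:
--   bimap negate show (Left 5 :: Either Int Bool)  == Left (-5)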
Digression:
The reason why there is nothing predefined for your case is that you can't write instances of Functor, Applicative etc. for (,) with the same element type twice. With a "vector-like" type of your own you wouldn't have this problem:
data Pair a = Pair a a deriving Show

instance Functor Pair where
  fmap f (Pair x y) = Pair (f x) (f y)
Now you can write things like map (fmap (+1)) [Pair 12 14, Pair 17 18]. Or if you want to use different operations on your Pair, you can go one step further:
instance Applicative Pair where
  pure x = Pair x x
  (Pair f g) <*> (Pair x y) = Pair (f x) (g y)
If you work a lot with same-element-type pairs, it could be useful to switch from (,) to such a type.
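For instance, a couple of evaluations assuming the instances above:
-- pointwise combination via the Applicative instance:
--   (+) <$> Pair 1 2 <*> Pair 10 20       ==  Pair 11 22
-- and the question's original use case:
--   map (fmap (*2)) [Pair 1 2, Pair 3 4]  ==  [Pair 2 4, Pair 6 8]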
If you use lens, you can use over both f, or both %~ f. This has the advantage of being more composable -- for example, if you have a pair of lists, you can use something like both . mapped %~ toUpper (:: ([Char],[Char]) -> ([Char],[Char])).

Applying multiple functions to the same value point-free style in Haskell

I was bored one day and wanted to exercise my brain, so I decided to do the 99 Haskell Problems but restricted myself to doing them in point-free style. A problem that seems to crop up a lot when I'm doing things in point-free style is this: How do you apply multiple functions to the same value while keeping each result as an independent entity? Using pointed notation:
foobar x = [id x, reverse x]
And what I've come up with so far in point-free notation:
foobar' = `map` [id, reverse] ($ x)
I can't seem to get that x off the end of there.
Others have already posted how you can do this using the Reader monad, but that's not the only way. It turns out that your second function is pretty close. I think you meant to post
foobar' x = (`map` [id, reverse]) ($ x)
Since the x is already near a rightmost position, you're almost there. First, transform the section ($ x) into a function, because it's a bit easier to work with:
-- by the definition of a right operator section
foobar'2 x = (`map` [id, reverse]) (\y -> ($) y x)
Next, remove the x from the lambda body by bringing a new variable into scope and applying the function to x:
-- lambda abstraction I think...
foobar'2 x = (`map` [id, reverse]) $ (\z y -> ($) y z) x
Rewrite this application as a function composition, and then you can eta reduce:
-- by definition of '.'
foobar'3 x = (`map` [id, reverse]) . (\z y -> ($) y z) $ x
-- eta reduction
foobar'4 = (`map` [id, reverse]) . (\z y -> ($) y z)
Finally, notice that we can replace the lambda with a function
-- by definition of `flip`
foobar'5 = (`map` [id,reverse]) . flip ($)
and you have a point-free form.
You will be interested in the Applicative instance of the reader monad:
instance Applicative ((->) e)
Using it you can easily distribute an argument:
liftA2 (+) sin cos 3
Here sin and cos are functions, which both receive the value 3. The individual results are then combined using (+). You can further combine this with the Category instance of (->), but of course specialized versions of (.) and id are already defined in the Prelude.
Background: The Applicative instance for (e ->) really represents the SKI calculus, where (<*>) is the S combinator and pure is the K combinator. S is precisely used to distribute an argument to two functions:
S f g x = f x (g x)
It takes a function application (f g) and makes both dependent on the value x ((f x) (g x)).
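Applied to the question's foobar, that looks like this (my own spelling of it):
import Control.Applicative (liftA2)

-- for functions, liftA2 h f g x = h (f x) (g x)
foobar :: [a] -> [[a]]
foobar = liftA2 (\x y -> [x, y]) id reverse
-- foobar "abcde" == ["abcde","edcba"]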
Use sequence:
> let foobar' = sequence [id, reverse]
> foobar' "abcde"
["abcde","edcba"]
There are a few basic idiomatic combinators which pop up repeatedly, and are reimplemented with various higher concepts and libraries, but which are essentially very simple. Names may vary, and some are implementable in terms of others:
fork (f,g) x = (f x, g x) -- == (f &&& g)
prod (f,g) x = (f $ fst x, g $ snd x) -- == (f *** g)
pmap f (x,y) = (f x, f y) -- == (f *** f)
dup x = (x,x)
etc. Of course uncurry f (x,y) == f x y gets used a lot with these, too.
&&& and *** are defined in Control.Arrow, as well as first and second. Then prod (f,id) == first f, prod (id,g) == second g, etc.
So your foobar becomes
foobar = (\(a,b) -> [a,b]) . fork (id, reverse)
       = (\(a,b) -> [a,b]) . (id &&& reverse)
       = (\(a,b) -> [a,b]) . (id *** reverse) . dup
       = join $ curry ((\(a,b) -> [a,b]) . second reverse)
For the last one you need to also import Control.Monad and Control.Monad.Instances. See also this question.
Late edit: also, using Control.Applicative as hinted at in the answer by ertes:
= (:) <*> ((:[]) . reverse)

Trick for "reusing" arguments in Haskell?

From time to time I stumble over the problem that I want to express "please use the last argument twice", e.g. in order to write pointfree style or to avoid a lambda. E.g.
sqr x = x * x
could be written as
sqr = doubleArgs (*) where
  doubleArgs f x = f x x
Or consider this slightly more complicated function (taken from this question):
ins x xs = zipWith (\ a b -> a ++ (x:b)) (inits xs) (tails xs)
I could write this code pointfree if there were a function like this:
ins x = dup (zipWith (\a b -> a ++ (x:b))) inits tails where
  dup f f1 f2 x = f (f1 x) (f2 x)
But I can't find anything like doubleArgs or dup on Hoogle, so I guess I might be missing a trick or idiom here.
From Control.Monad:
join :: (Monad m) => m (m a) -> m a
join m = m >>= id

instance Monad ((->) r) where
  return = const
  m >>= f = \x -> f (m x) x
Expanding:
join :: (a -> a -> b) -> (a -> b)
join f = f >>= id
       = \x -> id (f x) x
       = \x -> f x x
So, yeah, Control.Monad.join.
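So the squaring example from the question becomes:
import Control.Monad (join)

sqr :: Num a => a -> a
sqr = join (*)  -- join (*) x = (*) x x = x * x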
Oh, and for your pointfree example, have you tried using applicative notation (from Control.Applicative):
ins x = zipWith (\a b -> a ++ (x:b)) <$> inits <*> tails
(I also don't know why people are so fond of a ++ (x:b) instead of a ++ [x] ++ b... it's not faster -- the inliner will take care of it -- and the latter is so much more symmetrical! Oh well)
What you call 'doubleArgs' is more often called dup - it is the W combinator (called warbler in To Mock a Mockingbird) - "the elementary duplicator".
What you call 'dup' is actually the 'starling-prime' combinator.
Haskell has a fairly small "combinator basis" see Data.Function, plus some Applicative and Monadic operations add more "standard" combinators by virtue of the function instances for Applicative and Monad (<*> from Applicative is the S - starling combinator for the functional instance, liftA2 & liftM2 are starling-prime). There doesn't seem to be much enthusiasm in the community for expanding Data.Function, so whilst combinators are good fun, pragmatically I've come to prefer long-hand in situations where a combinator is not directly available.
Here is another solution for the second part of my question: Arrows!
import Control.Arrow
ins x = inits &&& tails >>> second (map (x:)) >>> uncurry (zipWith (++))
The &&& ("fanout") distributes an argument to two functions and returns the pair of the results. >>> ("and then") reverses the function application order, which allows to have a chain of operations from left to right. second works only on the second part of a pair. Of course you need an uncurry at the end to feed the pair in a function expecting two arguments.
