How do you use the latest version (0.8.1.2 at time of writing) of the iteratee library? - haskell

I've read tutorials on the iteratee and enumerator concepts, and have implemented a sample version as a way of learning how they work. However, the types used in the iteratee package are very different from those in any of the tutorials I have found. For example, Iteratee is defined as:
newtype Iteratee s m a = Iteratee{
  runIter :: forall r.
             (a -> Stream s -> m r) ->
             ((Stream s -> Iteratee s m a) -> Maybe SomeException -> m r) ->
             m r
  }
I really don't understand what I'm meant to do with that. Are there any tutorials on using this version, and why was it written this way (i.e. what benefits does it have over the original way Oleg did it)?

Disclaimer: I'm the current maintainer of iteratee.
You may find some of the files in the iteratee Examples directory useful for understanding how to use the library; word.hs is probably the easiest to follow.
Basically, you shouldn't need to use runIter unless you're creating custom enumeratees. Iteratees can be created by combining the provided primitives (and also with the liftI, idone, and icont functions), then enumerated over, and finally run with run or tryRun.
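A minimal sketch of that workflow, assuming the enumPure1Chunk, stream2list, and run exports of Data.Iteratee (treat the Examples directory as the authoritative reference):
import qualified Data.Iteratee as I

-- build an iteratee from provided primitives, enumerate a pure chunk over it,
-- then run it to extract the final value
sumOfInput :: IO Int
sumOfInput = do
  i  <- I.enumPure1Chunk [1, 2, 3 :: Int] I.stream2list
  xs <- I.run i
  return (sum xs)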
Oleg has two "original" versions and a CPS version (and possibly others too). The original versions are both in http://okmij.org/ftp/Haskell/Iteratee/IterateeM.hs: the first is the actual code, and the second is in the comments. The first requires some special functions, >>== and $$, in place of the usual >>= and $. The second can use the standard functions, but unfortunately it's very difficult to reason about monadic ordering with this version, and it has a few other drawbacks as well. The CPS version avoids all of these issues, which is why I switched iteratee over to it. I also find that iteratees defined in this style are shorter and more readable. Unfortunately I'm not aware of any tutorials specific to the CPS version, but Oleg's comments may be useful.

Disclaimer: I don't know the iteratee library well and have never used it, so take my answer with a grain of salt.
This definition is equivalent to Oleg's (more precisely, it's a CPS-style sum), with a twist: it guarantees that accessing the iteratee always returns a monadic value.
Here is Oleg's definition:
data Iteratee el m a
  = IE_done a
  | IE_cont (Maybe ErrMsg)
            (Stream el -> m (Iteratee el m a, Stream el))
So it's a sum: either IE_done a, meaning "we're done, and a is the result", or IE_cont (Maybe ErrMsg) (Stream el -> ...), a way to continue the iteration given another chunk of input, possibly carrying an error (in which case calling the continuation amounts to restarting the computation).
It is well known that Either a b is equivalent to forall r. (a -> r) -> (b -> r) -> r: giving you either an a or a b is equivalent to promising that, for any result type r and any transformation of a and transformation of b into r that you may come up with, I will be able to produce such an r (to do that I must have an a or a b). In a sense, Either a b introduces a piece of data, and forall r. (a -> r) -> (b -> r) -> r eliminates that data: if such a function is named case_ab, then case_ab (\a -> foo_a) (\b -> foo_b) is equivalent to the pattern match case ab of { Left a -> foo_a; Right b -> foo_b } for some ab :: Either a b.
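To make this concrete, here is a small self-contained sketch of that equivalence (case_ab and toEither are just illustrative names, not from any library):
{-# LANGUAGE RankNTypes #-}

-- the CPS / Church encoding of Either
type EitherCPS a b = forall r. (a -> r) -> (b -> r) -> r

-- eliminating the data: pattern matching packaged as a function
case_ab :: Either a b -> EitherCPS a b
case_ab (Left  a) = \onA _   -> onA a
case_ab (Right b) = \_   onB -> onB b

-- and we can go back, so no information is lost
toEither :: EitherCPS a b -> Either a b
toEither k = k Left Right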
So here is the continuation equivalent of Oleg's definition (we talk of continuations here because (a -> r) represents "what will happen to the value once we know it's an a"):
data Iteratee el m a =
  forall r.
    (a -> r) ->
    (Maybe ErrMsg -> (Stream el -> m (Iteratee el m a, Stream el)) -> r) ->
    r
But there is a twist in the iteratee definition (modulo some innocuous currying): the result is not r but m r. In a sense, we force the result of pattern matching on our iteratee to always live in the monad m.
data Iteratee el m a =
  forall r.
    (a -> m r) ->
    (Maybe ErrMsg -> (Stream el -> m (Iteratee el m a, Stream el)) -> m r) ->
    m r
Finally, notice that the "continue the iteration" data in Oleg's definition is Stream el -> m (Iteratee el m a, Stream el), while in the iteratee package it's only Stream s -> Iteratee s m a. I assume they removed the monad here because they enforce it at the outer level (if you apply the iteratee, you are forced to live in the monad, so why also force the subsequent computation to live in the monad?). I don't know why the Stream output is gone; I suppose it means that those Iteratees have to consume all the input when it is available (or encode a "not finished yet" logic in the return type a). Perhaps this is for efficiency reasons.

Related

Generic Pattern in Haskell

I was hoping that Haskell's compiler would understand that f v is type-safe given UnFold v f (although that is a tall order).
data Sequence a = FirstThen a (Sequence a) | Repeating a | UnFold b (b -> b) (b -> a)
Is there some way that I can encapsulate a generic pattern for a datatype without adding extra type parameters?
(I am aware of a solution for this specific case using lazy maps but I am after a more general solution)
You can use existential quantification to get there:
data Sequence a = ... | forall b. UnFold b (b -> b) (b -> a)
I'm not sure this buys you much over the simpler solution of storing the result of unfolding directly, though:
data Stream a = Cons a (Stream a)
data Sequence' a = ... | Explicit (Stream a)
In particular, if somebody hands you a Sequence a, you can't pattern match on the b's contained within, even if you think you know what type they are.
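For illustration, here is a minimal sketch of the existential version together with a consumer (toStream is a hypothetical helper, not part of the question):
{-# LANGUAGE ExistentialQuantification #-}

data Stream a = Cons a (Stream a)

data Sequence a
  = FirstThen a (Sequence a)
  | Repeating a
  | forall b. UnFold b (b -> b) (b -> a)

-- You can still consume the hidden b by running the unfold; you just
-- can't inspect it directly.
toStream :: Sequence a -> Stream a
toStream (FirstThen x rest)     = Cons x (toStream rest)
toStream (Repeating x)          = let s = Cons x s in s
toStream (UnFold seed step out) = go seed
  where go b = Cons (out b) (go (step b))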

How could we know that an applicative can't be a Monad?

From the example of Validation (https://hackage.haskell.org/package/Validation), I'm trying to get an intuition for how/why an applicative might not be able to be a Monad (why can AccValidation not have a Monad instance?).
Could you challenge my reasoning?
I think about a monad in terms of what happens behind join (m (m b) -> m b); let me develop my understanding with an example like Validation:
In data Validation err a, the functor structure is (Validation err). When you look at the definition of bind for Monad and specialize the types for Validation, you get the following:
(>>=) :: m a -> (a -> m b) -> m b
(>>=) :: (Validation err) a -> ( a -> (Validation err) b) -> (Validation err) b
If you beta-reduce (>>=) you get:
m a -> (a -> m b) -> m b  -- if we apply the monadic function to the contents of (m a)
m (m b) -> m b
then to get the result of (>>=) which is m b, you'll use join :
join :: (Monad m) => m (m a) -> m a
join x = x >>= id
If you play with the types you'll get:
join (m (m b)) = m (m b) >>= id, which gives m b
So join just drops the outermost structure; only the value in the innermost type (the value of the innermost functor) is kept/transmitted through the sequence.
In a monad we can't pass information from the functor structure (e.g. Validation err) to the next 'action'; the only thing we can pass is the value. The only thing you can do with that structure is short-circuit the sequence to get information out of it.
You can't perform a sequence of actions on the information in the functor structure (e.g. accumulating something like errors).
So I would say that an applicative that squashes its structure, with some logic applied to that structure, is suspect of not being able to become a Monad?
This isn't really an answer, but it's too long for a comment.
This and other referenced discussions in that thread are relevant. I think the question is posed sort of backwards: all Monads naturally give rise to an Applicative (where pure = return, etc); the issue is that most users expect/assume that (where a type is instance Monad) the Applicative instance is semantically equivalent to the instance to which the Monad gives rise.
This is documented in the Applicative class as a sort of law, but I'm not totally convinced it's justified. The argument seems to be that having an Applicative and Monad that don't agree in this way is confusing.
My experience using Validation is that it's a nightmare to do anything large with it, both because the notation becomes a mess and because you find you have some data dependencies (e.g. you need to parse and validate one section based on the parse of a previous section). You end up defining a bindV which behaves like an Error monad's >>=, since a proper Monad instance is considered dubious.
And yet using a Monad/Applicative pair like this does what you want: especially when using ApplicativeDo (I imagine; haven't tried this), the effect of writing your parser (e.g.) in Monadic style is that you can accumulate as many errors as possible at every level, based on the data dependencies of your parsing code. Haxl arguably fudges this "law" in a similar way.
I don't have enough experience with other types that are Applicative but not Monad to know whether there's a sensible rule for when it's "okay" for the Applicative to disagree in this way. Maybe it's totally arbitrary that Validation seems to work sensibly.
In any case...
I'm not sure how to directly answer your question. I think you start by taking the laws documented at the bottom of Applicative class docs, and flip them, so you get:
return = pure
ap m1 m2 = m1 <*> m2
If ap were a method of Monad and the above was a minimal complete definition then you'd simply have to test whether the above passed the Monad laws to answer your question for any Applicative, but that's not the case of course.
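For what it's worth, here is a minimal sketch of the mismatch in question, using a stripped-down Validation-like type rather than the actual package:
data Validation e a = Failure e | Success a

instance Functor (Validation e) where
  fmap _ (Failure e) = Failure e
  fmap f (Success a) = Success (f a)

instance Semigroup e => Applicative (Validation e) where
  pure = Success
  Failure e1 <*> Failure e2 = Failure (e1 <> e2)  -- both errors accumulate
  Failure e1 <*> Success _  = Failure e1
  Success _  <*> Failure e2 = Failure e2
  Success f  <*> Success a  = Success (f a)

-- Any lawful Monad instance would have to stop at the first Failure: (>>=)
-- cannot even produce the second action without a Success value, so ap could
-- only ever report one error, and ap would therefore disagree with (<*>).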

Monads: Determining if an arbitrary transformation is possible

There are quite a few of questions here about whether or not certain transformations of types that involve Monads are possible.
For instance, it's possible to make a function of type f :: Monad m => [m a] -> m [a], but impossible to make a function of type g :: Monad m => m [a] -> [m a] as a proper inverse of the former (i.e., f . g = id).
I want to understand what rules one can use to determine if a function of that type can or cannot be constructed, and why these types cannot be constructed if they disobey these rules.
The way that I've always thought about monads is that a value of type Monad m => m a is some program of type m that executes and produces an a. The monad laws reinforce this notion by thinking of composition of these programs as "do thing one then do thing two", and produce some sort of combination of the results.
Right unit Taking a program and just returning its value should
be the same as just running the original program.
m >>= return = m
Left unit If you create a simple program that just returns a value,
and then pass that value to a function that creates a new program, then
the resulting program should just be as if you called the function on the
value.
return x >>= f = f x
Associativity If you execute a program m, feed its result into a function f that produces another program, and then feed that result into a third function g that also produces a program, then this is identical to creating a new function that returns a program based on feeding the result of f into g, and feeding the result of m into it.
(m >>= f) >>= g = m >>= (\x -> f x >>= g)
Using this intuition about a "program that creates a value", we can come to some conclusions about what it means for the functions that you've provided in your examples.
Monad m => [m a] -> m [a] Deviating from the intuitive definition of what this function should do is hard: Execute each program in sequence and collect the results. This produces another program that produces a list of results.
Monad m => m [a] -> [m a] This doesn't really have a clear intuitive definition, since it's a program that produces a list. You can't create a list without getting access to the resulting values which in this case means executing a program. Certain monads, that have a clear way to extract a value from a program, and provide some variant of m a -> a, like the State monad, can have sane implementations of some function like this. It would have to be application specific though. Other monads, like IO, you cannot escape from.
Monad m => (a -> m b) -> m (a -> b) This also doesn't really have a clear intuitive definition. Here you have a function f that produces a program of type m b, but you're trying to return a function of type m (a -> b). Based on the a, f creates completely different programs with different executing semantics. You cannot encompass these variations in a single program of type m (a -> b), even if you can provide a proper mapping of a -> b in the programs resulting value.
This intuition doesn't really encompass the idea behind monads completely. For example, the monadic context of a list doesn't really behave like a program.
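For the first signature, the intuitive definition is short enough to write out; this is essentially the Prelude's sequence, sketched here for concreteness:
collect :: Monad m => [m a] -> m [a]
collect []       = return []
collect (p : ps) = do
  x  <- p           -- execute the first program
  xs <- collect ps  -- then the rest, in order
  return (x : xs)   -- and collect all the results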
Something easy to remember is: "you can't escape from a Monad" (it's kind of designed that way). Transforming m [a] to [m a] is a form of escape, so you can't.
On the other hand, you can easily create a Monad from something (using return), so traversing ([m a] -> m [a]) is usually possible.
If you take a look at the "Monad laws", a monad only constrains you to define a composition function, not a reverse function.
In the first example you can compose the list elements.
In the second one, Monad m => m [a] -> [m a], you cannot split an action into multiple actions (action composition is not reversible).
Example:
Let's say you have to read 2 values.
s1 <- action
s2 <- action
Doing so, the result s2 depends on the side effect made by s1.
You can bind these 2 actions into 1 action to be executed in that order, but you cannot split them and execute the action for s2 without s1 having made the side effect needed by the second one.
Not really an answer, and much too informal for my liking, but nevertheless I have a few interesting observations that won't fit into a comment. First, let's consider this function you refer to:
f :: Monad m => [m a] -> m [a]
This signature is in fact stronger than it needs to be. The current generalization of this is the sequenceA function from Data.Traversable:
sequenceA :: (Traversable t, Applicative f) => t (f a) -> f (t a)
...which doesn't need the full power of Monad, and can work with any Traversable and not just lists.
Second: the fact that Traversable only requires Applicative is I think really significant to this question, because applicative computations have a "list-like" structure. Every applicative computation can be rewritten to have the form f <$> a1 <*> ... <*> an for some f. Or, informally, every applicative computation can be seen as a list of actions a1, ... an (heterogeneous on the result type, homogeneous in the functor type), plus an n-place function to combine their results.
If we look at sequenceA through this lens, all it does is choose an f built out of the appropriate nested number of list constructors:
sequenceA [a1, ..., an] == f <$> a1 <*> ... <*> an
where f v1 ... vn = v1 : ... : vn : []
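A concrete instance of that normal form, for a three-element list:
-- sequenceA [a1, a2, a3] is the same computation as
--   (\v1 v2 v3 -> [v1, v2, v3]) <$> a1 <*> a2 <*> a3
example :: Maybe [Int]
example = sequenceA [Just 1, Just 2, Just 3]  -- Just [1,2,3]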
Now, I haven't had the chance to try and prove this yet, but my conjectures would be the following:
Mathematically speaking at least, sequenceA has a left inverse in free applicative functors. If you have a Functor f => [FreeA f a] and you sequenceA it, what you get is a list-like structure that contains those computations and a combining function that makes a list out of their results. I suspect however that it's not possible to write such a function in Haskell (unSequenceFreeA :: (Traversable t, Functor f) => FreeA f (t a) -> Maybe (t (FreeA f a))), because you can't pattern match on the structure of the combining function in the FreeA to tell that it's of the form f v1 ... vn = v1 : ... : vn : [].
sequenceA doesn't have a right inverse in a free applicative, however, because the combining function that produces a list out of the results from the a1, ... an actions may do anything; for example, return a constant list of arbitrary length (unrelated to the computations that the free applicative value performs).
Once you move to non-free applicative functors, there will no longer be a left inverse for sequenceA, because the non-free applicative functor's equations translate into cases where you can no longer tell apart which of two t (f a) "action lists" was the source for a given f (t a) "list-producing action."

What is indexed monad?

What is an indexed monad, and what is the motivation for it?
I have read that it helps to keep track of side effects, but the type signature and documentation don't lead me anywhere.
What would be an example of how it can help to keep track of side effects (or any other valid example)?
As ever, the terminology people use is not entirely consistent. There's a variety of inspired-by-monads-but-strictly-speaking-isn't-quite notions. The term "indexed monad" is one of a number (including "monadish" and "parameterised monad" (Atkey's name for them)) of terms used to characterize one such notion. (Another such notion, if you're interested, is Katsumata's "parametric effect monad", indexed by a monoid, where return is indexed neutrally and bind accumulates in its index.)
First of all, let's check kinds.
IxMonad (m :: state -> state -> * -> *)
That is, the type of a "computation" (or "action", if you prefer, but I'll stick with "computation"), looks like
m before after value
where before, after :: state and value :: *. The idea is to capture the means to interact safely with an external system that has some predictable notion of state. A computation's type tells you what the state must be before it runs, what the state will be after it runs and (like with regular monads over *) what type of values the computation produces.
The usual bits and pieces are *-wise like a monad and state-wise like playing dominoes.
ireturn :: a -> m i i a      -- returning a pure value preserves state
ibind   :: m i j a ->        -- we can go from i to j and get an a, thence
           (a -> m j k b) -> -- we can go from j to k and get a b, therefore
           m i k b           -- we can indeed go from i to k and get a b
The notion of "Kleisli arrow" (function which yields computation) thus generated is
a -> m i j b -- values a in, b out; state transition i to j
and we get a composition
icomp :: IxMonad m => (b -> m j k c) -> (a -> m i j b) -> a -> m i k c
icomp f g = \ a -> ibind (g a) f
and, as ever, the laws exactly ensure that ireturn and icomp give us a category
ireturn `icomp` g = g
f `icomp` ireturn = f
(f `icomp` g) `icomp` h = f `icomp` (g `icomp` h)
or, in comedy fake C/Java/whatever,
g(); skip = g()
skip; f() = f()
{h(); g()}; f() = h(); {g(); f()}
Why bother? To model "rules" of interaction. For example, you can't eject a dvd if there isn't one in the drive, and you can't put a dvd into the drive if there's one already in it. So
data DVDDrive :: Bool -> Bool -> * -> * where  -- Bool is "drive full?"
  DReturn :: a -> DVDDrive i i a
  DInsert :: DVD ->                  -- you have a DVD
             DVDDrive True k a ->    -- you know how to continue full
             DVDDrive False k a      -- so you can insert from empty
  DEject  :: (DVD ->                 -- once you receive a DVD
              DVDDrive False k a) -> -- you know how to continue empty
             DVDDrive True k a       -- so you can eject when full
instance IxMonad DVDDrive where  -- put these methods where they need to go
  ireturn = DReturn              -- so this goes somewhere else
  ibind (DReturn a)     k = k a
  ibind (DInsert dvd j) k = DInsert dvd (ibind j k)
  ibind (DEject j)      k = DEject $ \ dvd -> ibind (j dvd) k
With this in place, we can define the "primitive" commands
dInsert :: DVD -> DVDDrive False True ()
dInsert dvd = DInsert dvd $ DReturn ()

dEject :: DVDDrive True False DVD
dEject = DEject $ \ dvd -> DReturn dvd
from which others are assembled with ireturn and ibind. Now, I can write (borrowing do-notation)
discSwap :: DVD -> DVDDrive True True DVD
discSwap dvd = do dvd' <- dEject; dInsert dvd ; ireturn dvd'
but not the physically impossible
discSwap :: DVD -> DVDDrive True True DVD
discSwap dvd = do dInsert dvd; dEject -- ouch!
Alternatively, one can define one's primitive commands directly
data DVDCommand :: Bool -> Bool -> * -> * where
  InsertC :: DVD -> DVDCommand False True ()
  EjectC  :: DVDCommand True False DVD
and then instantiate the generic template
data CommandIxMonad :: (state -> state -> * -> *) ->
                       state -> state -> * -> * where
  CReturn :: a -> CommandIxMonad c i i a
  (:?)    :: c i j a -> (a -> CommandIxMonad c j k b) ->
             CommandIxMonad c i k b

instance IxMonad (CommandIxMonad c) where
  ireturn = CReturn
  ibind (CReturn a) k = k a
  ibind (c :? j)    k = c :? \ a -> ibind (j a) k
In effect, we've said what the primitive Kleisli arrows are (what one "domino" is), then built a suitable notion of "computation sequence" over them.
Note that for every indexed monad m, the "no change diagonal" m i i is a monad, but in general, m i j is not. Moreover, values are not indexed but computations are indexed, so an indexed monad is not just the usual idea of monad instantiated for some other category.
Now, look again at the type of a Kleisli arrow
a -> m i j b
We know we must be in state i to start, and we predict that any continuation will start from state j. We know a lot about this system! This isn't a risky operation! When we put the dvd in the drive, it goes in! The dvd drive doesn't get any say in what the state is after each command.
But that's not true in general, when interacting with the world. Sometimes you might need to give away some control and let the world do what it likes. For example, if you are a server, you might offer your client a choice, and your session state will depend on what they choose. The server's "offer choice" operation does not determine the resulting state, but the server should be able to carry on anyway. It's not a "primitive command" in the above sense, so indexed monads are not such a good tool to model the unpredictable scenario.
What's a better tool?
type f :-> g = forall state. f state -> g state

class MonadIx (m :: (state -> *) -> (state -> *)) where
  returnIx   :: x :-> m x
  flipBindIx :: (a :-> m b) -> (m a :-> m b)  -- tidier than bindIx
Scary biscuits? Not really, for two reasons. One, it looks rather more like what a monad is, because it is a monad, but over (state -> *) rather than *. Two, if you look at the type of a Kleisli arrow,
a :-> m b = forall state. a state -> m b state
you get the type of computations with a precondition a and postcondition b, just like in Good Old Hoare Logic. Assertions in program logics have taken under half a century to cross the Curry-Howard correspondence and become Haskell types. The type of returnIx says "you can achieve any postcondition which holds, just by doing nothing", which is the Hoare Logic rule for "skip". The corresponding composition is the Hoare Logic rule for ";".
Let's finish by looking at the type of bindIx, putting all the quantifiers in.
bindIx :: forall i. m a i -> (forall j. a j -> m b j) -> m b i
These foralls have opposite polarity. We choose initial state i, and a computation which can start at i, with postcondition a. The world chooses any intermediate state j it likes, but it must give us the evidence that postcondition b holds, and from any such state, we can carry on to make b hold. So, in sequence, we can achieve condition b from state i. By releasing our grip on the "after" states, we can model unpredictable computations.
Both IxMonad and MonadIx are useful. Both model validity of interactive computations with respect to changing state, predictable and unpredictable, respectively. Predictability is valuable when you can get it, but unpredictability is sometimes a fact of life. Hopefully, then, this answer gives some indication of what indexed monads are, predicting both when they start to be useful and when they stop.
There are at least three ways to define an indexed monad that I know.
I'll refer to these options as indexed monads à la X, where X ranges over the computer scientists Bob Atkey, Conor McBride and Dominic Orchard, as that is how I tend to think of them. Parts of these constructions have a much longer, more illustrious history and nicer interpretations through category theory, but I first learned of them associated with these names, and I'm trying to keep this answer from getting too esoteric.
Atkey
Bob Atkey's style of indexed monad is to work with 2 extra parameters to deal with the index of the monad.
With that you get the definitions folks have tossed around in other answers:
class IMonad m where
  ireturn :: a -> m i i a
  ibind   :: m i j a -> (a -> m j k b) -> m i k b
We can also define indexed comonads à la Atkey as well. I actually get a lot of mileage out of those in the lens codebase.
McBride
The next form of indexed monad is Conor McBride's definition from his paper "Kleisli Arrows of Outrageous Fortune". He instead uses a single parameter for the index. This makes the indexed monad definition have a rather clever shape.
If we define a natural transformation using parametricity as follows
type a ~> b = forall i. a i -> b i
then we can write down McBride's definition as
class IMonad m where
  ireturn :: a ~> m a
  ibind   :: (a ~> m b) -> (m a ~> m b)
This feels quite different from Atkey's, but it feels more like a normal Monad: instead of building a monad on (m :: * -> *), we build it on (m :: (k -> *) -> (k -> *)).
Interestingly you can actually recover Atkey's style of indexed monad from McBride's by using a clever data type, which McBride in his inimitable style chooses to say you should read as "at key".
data (:=) a i j where
  V :: a -> (a := i) i
Now you can work out that
ireturn :: IMonad m => (a := j) ~> m (a := j)
which expands to
ireturn :: IMonad m => (a := j) i -> m (a := j) i
can only be invoked when j = i, and then a careful reading of ibind can get you back the same as Atkey's ibind. You need to pass around these (:=) data structures, but they recover the power of the Atkey presentation.
On the other hand, the Atkey presentation isn't strong enough to recover all uses of McBride's version. Power has been strictly gained.
Another nice thing is that McBride's indexed monad is clearly a monad, it is just a monad on a different functor category. It works over endofunctors on the category of functors from (k -> *) to (k -> *) rather than the category of functors from * to *.
A fun exercise is figuring out how to do the McBride-to-Atkey conversion for indexed comonads. I personally use a data type 'At' for the "at key" construction in McBride's paper. I actually walked up to Bob Atkey at ICFP 2013 and mentioned that I'd turned him inside out and made him into a "Coat". He seemed visibly disturbed. The line played out better in my head. =)
Orchard
Finally, a third far-less-commonly-referenced claimant to the name of "indexed monad" is due to Dominic Orchard, where he instead uses a type level monoid to smash together indices. Rather than go through the details of the construction, I'll simply link to this talk:
https://github.com/dorchard/effect-monad/blob/master/docs/ixmonad-fita14.pdf
As a simple scenario, assume you have a state monad. The state type is a complex large one, yet all these states can be partitioned into two sets: red and blue states. Some operations in this monad make sense only if the current state is a blue state. Among these, some will keep the state blue (blueToBlue), while others will make the state red (blueToRed). In a regular monad, we could write
blueToRed :: State S ()
blueToBlue :: State S ()
foo :: State S ()
foo = do blueToRed
         blueToBlue
This triggers a runtime error, since the second action expects a blue state. We would like to prevent this statically. An indexed monad fulfills this goal:
data Red
data Blue
-- assume a new indexed State monad
blueToRed :: State S Blue Red ()
blueToBlue :: State S Blue Blue ()
foo :: State S ?? ?? ()
foo = blueToRed `ibind` \_ ->
      blueToBlue  -- type error
A type error is triggered because the second index of blueToRed (Red) differs from the first index of blueToBlue (Blue).
As another example, with indexed monads you can allow a state monad to change the type for its state, e.g. you could have
data State old new a = State (old -> (new, a))
You could use the above to build a state which is a statically-typed heterogeneous stack. Operations would have type
push :: a -> State old (a,old) ()
pop :: State (a,new) new a
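Here is a small self-contained sketch of that type-changing state monad and the heterogeneous stack (IState, ibindS and friends are illustrative names, not from any particular library):
newtype IState old new a = IState { runIState :: old -> (new, a) }

ireturnS :: a -> IState s s a
ireturnS a = IState $ \s -> (s, a)

ibindS :: IState i j a -> (a -> IState j k b) -> IState i k b
ibindS m f = IState $ \i ->
  let (j, a) = runIState m i
  in  runIState (f a) j

-- the statically-typed heterogeneous stack from the text
push :: a -> IState old (a, old) ()
push a = IState $ \old -> ((a, old), ())

pop :: IState (a, new) new a
pop = IState $ \(a, new) -> (new, a)

-- the stack's type tracks exactly what is on it at every step
example :: IState () () (Int, Bool)
example =
  push (1 :: Int) `ibindS` \_ ->
  push True       `ibindS` \_ ->
  pop             `ibindS` \b ->
  pop             `ibindS` \n ->
  ireturnS (n, b)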
As another example, suppose you want a restricted IO monad which does not
allow file access. You could use e.g.
openFile :: IO any FilesAccessed ()
newIORef :: a -> IO any any (IORef a)
-- no operation of type :: IO any NoAccess _
In this way, an action having type IO ... NoAccess () is statically guaranteed to be file-access-free, while an action of type IO ... FilesAccessed () can access files. Having an indexed monad means you don't have to build a separate type for the restricted IO, which would require duplicating every non-file-related function in both IO types.
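A sketch of how such a restricted IO could be encoded, wrapping real IO in an index-carrying newtype (the names and the DataKinds encoding are my own illustration, and openFileR returns a Handle rather than () for realism):
{-# LANGUAGE DataKinds, KindSignatures #-}

import Data.IORef (IORef, newIORef)
import System.IO  (Handle, IOMode, openFile)

data Access = NoAccess | FilesAccessed

newtype RIO (i :: Access) (j :: Access) a = RIO { runRIO :: IO a }

ireturnR :: a -> RIO i i a
ireturnR = RIO . return

ibindR :: RIO i j a -> (a -> RIO j k b) -> RIO i k b
ibindR (RIO m) f = RIO (m >>= runRIO . f)

-- opening a file forces the second index to 'FilesAccessed ...
openFileR :: FilePath -> IOMode -> RIO any 'FilesAccessed Handle
openFileR path mode = RIO (openFile path mode)

-- ... while index-preserving operations keep whatever index they started with
newIORefR :: a -> RIO any any (IORef a)
newIORefR = RIO . newIORef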
An indexed monad isn't a specific monad like, for example, the state monad but a sort of generalization of the monad concept with extra type parameters.
Whereas a "standard" monadic value has the type Monad m => m a a value in an indexed monad would be IndexedMonad m => m i j a where i and j are index types so that i is the type of the index at the beginning of the monadic computation and j at the end of the computation. In a way, you can think of i as a sort of input type and j as the output type.
Using State as an example, a stateful computation State s a maintains a state of type s throughout the computation and returns a result of type a. An indexed version, IndexedState i j a, is a stateful computation where the state can change to a different type during the computation. The initial state has the type i and state and the end of the computation has the type j.
Using an indexed monad over a normal monad is rarely necessary but it can be used in some cases to encode stricter static guarantees.
It may be helpful to take a look at how indexing is used in dependent types (e.g. in Agda). That can explain how indexing helps in general; then you can translate this experience to monads.
Indexing permits to establish relationships between particular instances of types. Then you can reason about some values to establish whether that relationship holds.
For example (in Agda) you can specify that some natural numbers are related by _<_, and the type tells which numbers they are. Then you can require that some function is given a witness that m < n, because only then does the function work correctly; without providing such a witness, the program will not compile.
As another example, given enough perseverance and compiler support for your chosen language, you could encode that the function assumes that a certain list is sorted.
Indexed monads permit to encode some of what dependent type systems do, to manage side effects more precisely.

Converting monads

Let's say I have a function
(>>*=) :: (Show e') => Either e' a -> (a -> Either e b) -> Either e b
which converts errors of different types so that functions can be chained cleanly. I am pretty happy about this.
BUT
Could there possibly be a function <*- that would do a similar job in place of the <- keyword, so that it would not look too disturbing?
Well, my answer is really the same as Toxaris' suggestion of a foo :: Either e a -> Either e' a function, but I'll try to motivate it a bit more.
A function like foo is what we call a monad morphism: a natural transformation from one monad into another one. You can informally think of this as a function that sends any action in the source monad (irrespective of result type) to a "sensible" counterpart in the target monad. (The "sensible" bit is where it gets mathy, so I'll skip those details...)
Monad morphisms are a more fundamental concept here than your suggested >>*= function for handling this sort of situation in Haskell. Your >>*= is well-behaved if it's equivalent to the following:
(>>*=) :: Monad m => n a -> (a -> m b) -> m b
na >>*= k = morph na >>= k
  where
    -- Must be a monad morphism:
    morph :: n a -> m a
    morph = ...
So it's best to factor your >>*= out into >>= and case-specific monad morphisms. If you read the link from above, and the tutorial for the mmorph library, you'll see examples of generic utility functions that use user-supplied monad morphisms to "edit" monad transformer stacks: for example, use a monad morphism morph :: Error e a -> Error e' a to convert StateT s (ErrorT e IO) a into StateT s (ErrorT e' IO) a.
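As a hedged illustration of that last point, using ExceptT in place of the older ErrorT and mmorph's hoist to lift the morphism through the outer layer (the use of show for the error conversion is my own assumption):
import Control.Monad.Morph        (hoist)  -- from the mmorph package
import Control.Monad.Trans.Except (ExceptT, withExceptT)
import Control.Monad.Trans.State  (StateT)

-- a monad morphism between the inner error monads
morphErr :: (Functor m, Show e) => ExceptT e m a -> ExceptT String m a
morphErr = withExceptT show

-- lifted through the outer StateT layer to "edit" the whole stack
convert :: (Monad m, Show e)
        => StateT s (ExceptT e m) a -> StateT s (ExceptT String m) a
convert = hoist morphErr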
It is not possible to write a function that you can use instead of the <- in do notation. The reason is that to the left of <-, there is a pattern, but functions take values. But maybe you can write a function
foo :: (Show e') => Either e' a -> Either e a
that converts the error messages and then use it like this:
do x <- foo $ code that creates e1 errors
   y <- foo $ code that creates e2 errors
While this is not as good as the <*- you're asking for, it should allow you to use do notation.
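For instance, a concrete foo where the target error type is simply String (an assumption made for illustration; any type you can convert e' into would do):
foo :: Show e' => Either e' a -> Either String a
foo (Left  e) = Left (show e)
foo (Right a) = Right a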
