unbound variables in monad associativity law - haskell

Using ghci I have computed:
Prelude> let m = [1,2]
Prelude> let ys = [4, 5, 6]
Prelude> m >>= (\x -> ys >>= (\y -> return (x, y)))
[(1,4),(1,5),(1,6),(2,4),(2,5),(2,6)]
The monadic expression above doesn't seem to correspond to either side of the monad associativity law:
(m >>= f) >>= g ≡ m >>= (\x -> f x >>= g)
I would like to know how monad associativity can be applied to the expression:
m >>= (\x -> ys >>= (\y -> return (x, y)))
Because return (x, y) closes over the variables of both the inner lambda and the one surrounding it, it seems that an intermediate monadic value, like the one on the left side of the associativity law (m >>= f), cannot exist in this example.

I think that you're confusing the monad laws with the structure of a monadic expression. The monad associativity law states that the expression (m >>= f) >>= g must be equivalent to the expression m >>= (\x -> f x >>= g) for the data type of m to be considered a monad.
This does not imply that every monadic expression must be of the form (m >>= f) >>= g.
For example, m >>= f is a perfectly valid monadic expression even though it's not of the form (m >>= f) >>= g. However, it still obeys the associativity law because m can be expanded to m >>= return (by the monadic right identity law, m >>= return ≡ m). Hence:
m >>= f
-- is equivalent to
(m >>= return) >>= f
-- is of the form
(m >>= f) >>= g
In your example, m >>= (\x -> ys >>= (\y -> return (x, y))) is of the form m >>= f where f is \x -> ys >>= (\y -> return (x, y)).
Although \x -> ys >>= (\y -> return (x, y)) is not of the form \x -> f x >>= g (from the right-hand side of the associativity law), that doesn't mean it breaks the monad laws.
The expression m >>= (\x -> ys >>= (\y -> return (x, y))) can be expanded into the monadic associative form by substituting m >>= return for m:
(m >>= return) >>= (\x -> ys >>= (\y -> return (x, y)))
-- is of the form
(m >>= f) >>= g
-- and can be written as
m >>= (\x -> return x >>= (\x -> ys >>= (\y -> return (x, y))))
Hope that clarifies things.

Indeed, it's not possible to apply the associativity law directly, because of the scope of x in the original expression:
import Control.Monad (liftM)
test = let m  = [1,2]
           ys = [4, 5, 6]
       in m >>= (\x -> ys >>= (\y -> return (x, y)))
However, we can reduce the scope of x if we include it in the result of the first monadic computation. Instead of a first function that produces [Int], we'll use \x -> liftM ((,) x) ys, which produces [(Int,Int)], where the first number in each pair is always x and the second is one of the ys. (Note that for lists, liftM is the same as map.) The second function then reads the value of x from its input:
test1 = let m  = [1,2]
            ys = [4, 5, 6]
        in m >>= (\x -> liftM ((,) x) ys >>= (\(x', y) -> return (x', y)))
(The monadic function \(x', y) -> return (x', y) could be now simplified just to return, and subsequently >>= return removed completely, but let's keep it there for the sake of the argument.)
Now each monadic function is self-contained and we can apply the associativity law:
test2 = let m  = [1,2]
            ys = [4, 5, 6]
        in (m >>= \x -> liftM ((,) x) ys) >>= (\(x, y) -> return (x, y))

The monad laws apply to functions of one argument only. The expression
xs >>= (\x -> ys >>= (\y -> return (x, y)))
is really equivalent to:
xs >>= \x -> fmap ($ x) $ ys >>= \y -> return (\x -> (x,y))
(if we were to avoid capturing x)
So you can't apply the same laws directly - we have the fmap standing in for f, and no g from the associativity law.
The above is of course the same as:
xs >>= \x -> fmap ($ x) $ fmap (\y x -> (x,y)) ys
or
xs >>= \x -> fmap (\y -> (x,y)) ys
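A quick way to convince yourself that this final rewrite agrees with the original expression (my own check, not part of the answer):

check :: Bool
check = (xs >>= \x -> fmap (\y -> (x, y)) ys)
     == (xs >>= \x -> ys >>= \y -> return (x, y))
  where
    xs = [1, 2]
    ys = [4, 5, 6]
-- check evaluates to True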

Related

proving monad laws of a new monad instance (list of maybe)

I made a new List-of-Maybe Monad instance and tried to prove that the implementation satisfies the Monad laws. Am I doing it right, or is the implementation incorrect? Any pointer is appreciated. Thanks!
{-# LANGUAGE DeriveFunctor, InstanceSigs #-}
import Control.Monad (liftM2)

newtype Test a = Test { getTest :: [Maybe a] }
  deriving Functor

instance Applicative Test where
  pure = return
  (<*>) = liftM2 ($)

instance Monad Test where
  return :: a -> Test a
  return a = Test $ [Just a]

  (>>=) :: Test a -> (a -> Test b) -> Test b
  Test [Nothing] >>= f = Test [Nothing]
  Test [Just x]  >>= f = f x
{-
1. return x >>= f = f x
return x >>= f = [Just x] >>= f = f x
2. m >>= return = m
[Nothing] >>= return = [Nothing]
[Just x] >>= return = return x = [Just x]
3. (m >>= f) >>= g == m >>= (\x -> (f x >>= g))
m = [Nothing]
L.H.S. = ([Nothing] >>= f ) >>= g = Nothing >>= g = Nothing
R.H.S. = [Nothing] >>= (\x -> (f x >>= g)) = Nothing
m = [Just x]
L.H.S. = ([Just x] >>= f) >>= g = f x >>= g
R.H.S. = [Just x] >>= (\v -> (f v >>= g)) = (\v -> (f v >>= g)) x
= f x >>= g
-}
The bits of the proof you have written are only incorrect in unimportant ways. Specifically, in these two lines:
([Nothing] >>= f ) >>= g = Nothing >>= g = Nothing
[Nothing] >>= (\x -> (f x >>= g)) = Nothing
The three bare Nothings should be [Nothing]s.
However, the proof is incomplete, because there are values of type Test a that are neither of the form [Just (x :: a)] nor [Nothing]. This makes the proof as a whole incorrect in an important way.
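For instance, Test [Just 1, Nothing] and Test [] match neither pattern, so (>>=) is partial. A sketch of a total bind (my own guess at the intent: each Nothing passes through, each Just x is expanded by f):

bindTest :: Test a -> (a -> Test b) -> Test b
bindTest (Test xs) f = Test (concatMap step xs)
  where
    step Nothing  = [Nothing]        -- a Nothing passes through unchanged
    step (Just x) = getTest (f x)    -- a Just x runs f and splices in its results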

Can Recursive-Do Be Desugared

Can a recursive-do statement be desugared to a series of >>= statements? If so, what does the reverse state monad's definition for >>= look like when it's desugared?
instance MonadFix m => Monad (StateT s m) where
  return x = ...
  m >>= f = StateT $ \s -> do
    rec
      (x, s'') <- runStateT m s'
      (x', s') <- runStateT (f x) s
    return (x', s'')
Recursive-do desugars not only to a series of >>= calls, but also, as long as there is actually recursion, into an mfix call. It is in the mfix call that the whole recursiveness happens, via what is technically termed "magic fairy dust".
Seriously though, how it happens is different for every monad, and that's why it's a class MonadFix rather than just a function. But the important point is that it can "magically" pass you your own result as a parameter, which is only possible due to Haskell's laziness, and therefore must be handled with care.
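To get a feel for that, here is a minimal knot-tying example of mine (not from the answer), using the MonadFix instance for Maybe; the lambda receives its own eventual result as its argument:

import Control.Monad.Fix (mfix)
import Data.Maybe (fromJust)

ones :: Maybe [Int]
ones = mfix (\xs -> Just (1 : xs))        -- xs is the very list being defined

main :: IO ()
main = print (take 5 (fromJust ones))     -- [1,1,1,1,1]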
In general, something like this:
do
  rec
    x <- f y
    y <- g x
  return $ h x y
Desugars into this:
mfix (\ ~(x, y) -> do
        x' <- f y
        y' <- g x'
        return (x', y'))
  >>= (\(x, y) -> return (h x y))
So applying to the reverse state definition, it would look like this:
m >>= f = StateT $ \s ->
  mfix (\ ~((x, s''), (x', s')) -> do
          (x0, s0'') <- runStateT m s'
          (x0', s0') <- runStateT (f x0) s
          return ((x0, s0''), (x0', s0')))
    >>= (\((x, s''), (x', s')) -> return (x', s''))
And from here, we can just desugar the regular do as usual:
m >>= f = StateT $ \s ->
  mfix (\ ~((x, s''), (x', s')) ->
          runStateT m s' >>= \(x0, s0'') ->
          runStateT (f x0) s >>= \(x0', s0') ->
          return ((x0, s0''), (x0', s0')))
    >>= (\((x, s''), (x', s')) -> return (x', s''))
(that is, unless I messed up something - a lot of ticks flying around :-)

Two implementations for List Monad bind in the literature: why are they equivalent?

Reading the Monad chapter in "Programming in Haskell", 2nd ed., by Graham Hutton, I found this example on page 167 illustrating the behaviour of the list monad:
> pairs [1,2] [3,4]
[(1,3),(1,4),(2,3),(2,4)]
With pairs defined like this:
pairs :: [a] -> [b] -> [(a,b)]
pairs xs ys = do x <- xs
                 y <- ys
                 return (x,y)
And this implementation of bind:
instance Monad [] where
  -- (>>=) :: [a] -> (a -> [b]) -> [b]
  xs >>= f = [y | x <- xs, y <- f x]
I tried to understand with pencil and paper how the example worked out, but didn't get it.
Then I found, that in other books the bind operation is defined differently:
...
xs >>= f = concat (fmap f xs)
With this definition I understand why the example works.
But the first definition is the one I found in the Prelude on Hackage, so I trust it's correct.
My question:
Can anybody explain why the first definition is equivalent to the second? (Where does the concat-stuff happen in the first one?)
List comprehensions are just syntactic sugar. Basically, [f x y | x<-l, y<-m] is sugar for
concatMap (\x -> concatMap (\y -> return $ f x y) m) l
or equivalently
concat $ fmap (\x -> concat $ fmap (\y -> return $ f x y) m) l
thus the two implementations are indeed equivalent by definition.
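You can check the two definitions against each other directly; a small sketch of mine, not from the answer:

bindComp, bindConcat :: [a] -> (a -> [b]) -> [b]
bindComp   xs f = [y | x <- xs, y <- f x]
bindConcat xs f = concat (fmap f xs)

-- bindComp [1,2] (\x -> [(x,3),(x,4)]) == bindConcat [1,2] (\x -> [(x,3),(x,4)])
-- both evaluate to [(1,3),(1,4),(2,3),(2,4)]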
Anyway, you can of course manually evaluate the example from the comprehension-based definition, using “intuitive set comprehension” evaluation:
pairs [1,2] [3,4]
≡ do { x <- [1,2]; y <- [3,4]; return (x,y) }
≡ [1,2] >>= \x -> [3,4] >>= \y -> return (x,y)
≡ [p | x<-[1,2], p <- (\ξ -> [3,4] >>= \y -> return (ξ,y)) x]
≡ [p | x<-[1,2], p <- ([3,4] >>= \y -> return (x,y))]
≡ [p | x<-[1,2], p <- [q | y<-[3,4], q <- (\υ -> return (x,υ)) y]]
≡ [p | x<-[1,2], p <- [q | y<-[3,4], q <- return (x,y)]]
≡ [p | x<-[1,2], p <- [q | y<-[3,4], q <- [(x,y)]]]
≡ [p | x<-[1,2], p <- [(x,3), (x,4)]]
≡ [(1,3), (1,4)] ++ [(2,3), (2,4)]
≡ [(1,3), (1,4), (2,3), (2,4)]
[y | x <- xs, y <- f x] works like this:
- it takes each x in xs one by one and applies f to it; f is a monadic function a -> [b], so f x is a list of values,
- the comprehension then proceeds to address each y in f x one by one,
- each y is sent to the output list.
This is equivalent to first mapping f over each element of the input list, resulting in a list of nested lists that are then concatenated. Notice that fmap is map for lists, and you could use concatMap f xs as the definition of xs >>= f.

Applicative functors: why can fmap take a function with more than one argument?

I am getting into Haskell and have found the book "Learn You a Haskell" most helpful. I am up to the section on applicative functors.
I am puzzled by the following as it appears in the book:
(\x y z -> [x, y, z]) <$> (+3) <*> (*2) <*> (/2) $ 5
which yields the output:
[8.0,10.0,2.5]
First of all, I have confirmed my suspicion in ghci in regards to precedence of the operators, so that the above equals the following ugly statement:
(((\x y z -> [x,y,z]) <$> (+3)) <*> (*2) <*> (/2)) $ 5
So from that it becomes clear that the first thing that happens is the fmap call via the (<$>) infix operator.
And this is the core of what boggles my mind currently.
The definition of fmap (here shown as infix (<$>)) is:
(<$>) :: (Functor f) => (a -> b) -> f a -> f b
But in the equation I am struggling with, (\x y z -> [x, y, z]) takes three arguments, not just one. So how could the first argument of type (a -> b) be satisfied?
I think it might have to do with partial application / currying but I cannot figure it out. I would greatly appreciate an explanation. Hope I have formulated the question well enough.
Simple answer: there are no functions with multiple arguments in Haskell!
There are two candidates for what you might call a "dyadic function": a function that takes a (single!) tuple, and – by far the more prevalent in Haskell – curried functions. Those take just one argument, but the result is a function again.
So, to figure out what e.g. fmap (+) does, let's write
type IntF = Int -> Int
-- (+) :: Int -> IntF
-- fmap :: ( a -> b ) -> f a -> f b
-- e.g.:: (Int->IntF) -> f Int->f IntF
Test it yourself in GHCi:
Prelude> type IntF = Int -> Int
Prelude> let (#) = (+) :: Int -> IntF
Prelude> :t fmap (#)
fmap (#) :: Functor f => f Int -> f IntF
Consider a function of type
f :: a -> b -> c -> d
where d is any other type. Due to currying, this can be thought of as a function with the following type
f :: a -> (b -> c -> d)
i.e. a function that takes an a and returns function of type b -> c -> d. If you apply fmap, you have
-- the type of fmap, which is also :: (a -> r) -> (f a -> f r)
fmap :: Functor f => (a -> r) -> f a -> f r
-- the type of f
f :: a -> (b -> c -> d)
-- so, setting r = b -> c -> d
fmap f :: f a -> f (b -> c -> d)
Which is now of the right type to be used as the left-hand argument to (<*>).
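Concretely, for the function functor fmap is composition, so the partial application described above can be seen directly (a small example of mine, not from the answer):

twoMore :: Num a => a -> (a -> a -> [a])
twoMore = fmap (\x y z -> [x, y, z]) (+3)   -- i.e. (\x y z -> [x, y, z]) . (+3)
-- twoMore 5 10 20 == [8, 10, 20]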
Because you can take a 3-argument function, feed it just one argument, and this results in a 2-argument function. So you're going to end up with a functorful of 2-argument functions (here, a function producing 2-argument functions). You can then apply one more argument, ending up with a functorful of 1-argument functions, and finally apply the last argument, whereupon you end up with a list of ordinary numbers.
Incidentally, this is why Haskell has curried functions. It makes it easy to write constructs like this one which work for any number of function arguments. :-)
I personally find the applicative functor instance for functions a bit strange. I'll walk you through this example to try to understand intuitively what's going on:
>>> :t (\x y z -> [x, y, z]) <$> (+3)
... :: Num a => a -> a -> a -> [a]
>>> ((\x y z -> [x, y, z]) <$> (+3)) 1 2 3
[4,2,3]
This applies (+3) to the first parameter of the inner function. The other 2 outer parameters are passed to the inner function unmodified.
Let's add an applicative:
>>> :t (\x y z -> [x, y, z]) <$> (+3) <*> (*2)
... :: Num a => a -> a -> [a]
>>> ((\x y z -> [x, y, z]) <$> (+3) <*> (*2)) 1 2
[4,2,2]
This applies (+3) to the first argument as before. With the applicative, the first outer parameter (1) has (*2) applied to it, and the result is passed as the second argument of the inner function. The second outer parameter is passed unmodified to the inner function as its third argument.
Guess what happens when we use another applicative:
>>> :t (\x y z -> [x, y, z]) <$> (+3) <*> (*2) <*> (/2)
... :: Fractional a => a -> [a]
>>> (\x y z -> [x, y, z]) <$> (+3) <*> (*2) <*> (/2) $ 1
[4.0,2.0,0.5]
Three functions applied to the same parameter, with the results passed as the three arguments of the inner function.
It's not a theoretically solid explanation, but it can give an intuition about how the applicative instance for functions works.
Background
Let's start with the definitions of <*> and pure for functions as an instance of Applicative. pure x builds a function that ignores whatever value it is given and returns x. For <*>, you can think of it as applying x to f, getting a new function out of it, and then applying that function to the result of g x.
instance Applicative ((->) r) where
  pure x = (\_ -> x)
  f <*> g = \x -> f x (g x)
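As a quick sanity check of these definitions (my own example, not from the answer), f <*> g feeds the same argument to both functions:

triple :: Int -> Int
triple = (+) <*> (*2)   -- \x -> (+) x ((*2) x) = \x -> x + x * 2
-- triple 4 == 12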
Now, let's look at the definition of <$>. It is just an infix version of fmap.
(<$>) :: (Functor f) => (a -> b) -> f a -> f b
f <$> x = fmap f x
Recall that fmap has the following implementation:
instance Functor ((->) r) where
  fmap f g = (\x -> f (g x))
Proving that f <$> x is just pure f <*> x
Let's start with pure f <*> x. Replace pure f with (\_ -> f).
pure f <*> x
= (\_ -> f) <*> x
Now, let's apply the definition of <*>, which is f <*> g = \q -> f q (g q).
(\_ -> f) <*> x
= \q -> (\_ -> f) q (x q)
Notice we can simplify (\_ -> f) q as just f. The function takes in whatever value we give it, and returns f.
\q -> (\_ -> f) q (x q)
= \q -> f (x q)
That looks just like our definition of fmap! And the <$> operator is just infix fmap.
\q -> f (x q)
= fmap f x
= f <$> x
Let's keep this in mind: f <$> g is just pure f <*> g.
Understanding (\x y z -> [x, y, z]) <$> (+3) <*> (*2) <*> (/2) $ 5
The first step is to rewrite the left side of the expression to use <*> instead of <$>, using what we just proved in the previous section:
(\x y z -> [x, y, z]) <$> (+3)
= pure (\x y z -> [x, y, z]) <*> (+3)
So the full expression becomes
pure (\x y z -> [x, y, z]) <*> (+3) <*> (*2) <*> (/2) $ 5
Let's simplify the first operator using the definition of <*>
pure (\x y z -> [x, y, z]) <*> (+3)
= \a -> f a (g a) --substitute f and g
= \a -> pure (\x y z -> [x, y, z]) a ((+3) a)
Now let's substitute pure x with (\_ -> x). Observe that a becomes the garbage value that's used as _, and is consumed to return the function (\x y z -> [x, y, z]).
\a -> (\_-> (\x y z -> [x, y, z])) a ((+3) a)
= \a -> (\x y z -> [x, y, z]) ((+3) a)
Now let's look back at the full expression, and tackle the next <*>. Again, let's apply the definition of <*>.
(\a -> (\x y z -> [x, y, z]) ((+3) a)) <*> (*2)
= \b -> (\a -> (\x y z -> [x, y, z]) ((+3) a)) b ((*2) b)
Finally, let's repeat this one last time for the final <*>.
(\b -> (\a -> (\x y z -> [x, y, z]) ((+3) a)) b ((*2) b)) <*> (/2)
= \c -> (\b -> (\a -> (\x y z -> [x, y, z]) ((+3) a)) b ((*2) b)) c ((/2) c)
Notice that it's a function that takes a single value. We'll feed it 5.
(\c -> (\b -> (\a -> (\x y z -> [x, y, z]) ((+3) a)) b ((*2) b)) c ((/2) c)) 5
= (\b -> (\a -> (\x y z -> [x, y, z]) ((+3) a)) b ((*2) b)) 5 ((/2) 5)
= (\b -> (\a -> (\x y z -> [x, y, z]) ((+3) a)) b ((*2) b)) 5 2.5
= (\a -> (\x y z -> [x, y, z]) ((+3) a)) 5 ((*2) 5) 2.5
= (\a -> (\x y z -> [x, y, z]) ((+3) a)) 5 10 2.5
= (\x y z -> [x, y, z]) ((+3) 5) 10 2.5
= (\x y z -> [x, y, z]) 8 10 2.5
= [8, 10, 2.5]
And that's how we get the final answer.

Associativity of monads

newtype Set a = Set [a]
A newtype Set that wraps a list.
empty :: Set a
empty = Set []
sing :: a -> Set a
sing x = Set [x]
Functions to create sets.
memSet :: (Eq a) => a -> Set a -> Bool
memSet _ (Set []) = False
memSet x (Set xs)
  | elem x xs = True
  | otherwise = False
{-
makeSet :: (Eq a) => [a] -> Set a
makeSet [] = empty
makeSet (x:xs) = union (sing x) (makeSet xs)
-- etc
-- we need the obvious stuff:
union :: Set a -> Set a -> Set a
unionMult :: [Set a] -> Set a
intersection :: Set a -> Set a -> Set a
subSet :: Set a -> Set a -> Bool
mapSet :: (a -> b) -> Set a -> Set b
mapSet f (Set xs) = makeSet (map f xs)
-}
-- now making it a monad:
instance Monad Set where
  return = sing
  (Set x) >>= f = unionMult (map f x)
Verification:
Left identity:
return a >>= f ≡ f a
Right identity:
m >>= return ≡ m
Associativity:
(m >>= f) >>= g ≡ m >>= (\x -> f x >>= g)
left:
return x >>= f
(Set [x]) >>= f
unionMult (map f [x])
unionMult [ (f x) ] = f x
right:
Set xs >>= return
unionMult (map return xs)
unionMult [Set [x] | x <- xs]
Set xs
Need help with the last one.
Since Set a is just a newtype around [a], let's use [] directly. The proofs will be similar so long as we use Set's instances; we'll be able to use []'s constructors (somewhat) directly. That's nice because then we can prove things inductively.
We want to show that, for all xs :: [a], xs >>= return == xs. (I'll write unionConcat for the question's unionMult.) Let's first assume that xs == [].
[] >>= return
unionConcat (map return [])
unionConcat []
[]
Without defining unionConcat, we can use this to show that unless unionConcat [] = [] holds, we can't get the right identity law. We'll keep that in mind for later.
Now we'll do the inductive step: assuming that we have some particular xs :: [a] where xs >>= return == xs, can we show that (x:xs) >>= return == x:xs?
(x:xs) >>= return
unionConcat (map return (x:xs))
unionConcat (return x : map return xs)
...
x : unionConcat (map return xs)
x : (xs >>= return)
x:xs -- working upward from the bottom here
provided unionConcat satisfies yet another property:
unionConcat (return x : xs) = x : unionConcat xs
So even before we have a definition of unionConcat we can already say that our properties will hold contingent on it following certain properties of its own. We ought to translate the (:) constructor back into a notion for sets, though.
unionConcat (return x : xs) = insert x (unionConcat xs)
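For concreteness, here is one definition that plausibly satisfies both properties (my own sketch; note it needs an Eq constraint, which is exactly what keeps Set from being a real Monad instance):

import Data.List (nub)

unionConcat :: Eq a => [Set a] -> Set a
unionConcat sets = Set (nub (concat [xs | Set xs <- sets]))
-- unionConcat []              == Set []
-- unionConcat (return x : ss) == x inserted into unionConcat ss, up to ordering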
unionConcat is already defined in Data.Set.... To be concrete, I will use the following definitions in this proof:
unionConcat = Data.Set.unions
return a = Data.Set.fromList [a]
(I will use other functions defined in Data.Set here, some may require "Ord a", presumably that won't be a problem).
I also make use of the following properties
union x y = fromList (toList x ++ toList y)
concat . map (:[]) = id
The first states that the union of two sets can be obtained by taking the lists of items in each set, concatenating them, then removing the repeats.... This follows from the definition of what a set is.
The second property just states that concat and map (:[]) are inverses of each other. This should also be obvious from the definition of concat
map (:[]) [a, b, c, ....] = [[a], [b], [c], ....]
concat [[a], [b], [c], ....] = [a, b, c, ....]
(In order to really finish this proof, I would have to show that these properties follow from the Haskell definitions of (:[]), concat and union, but this is more detail that I think you want, and the actual definitions might change from version to version, so we will just have to assume that the writers of these functions followed the spirit of how sets and concat should work).
(In case it isn't obvious, remember the monkey operator (:[]) wraps single elements in brackets- (:[]) x = [x]).
Since "unions" is just a multiple appliction of "union", and "concat" is just a multiple application of (++), the first propterty can be generalized to
unions sets = fromList (concat $ map toLists sets)
Now for the proof-
y >>= return
= unions $ map return (toList y)
= unions $ map (fromList . (:[])) (toList y)
= unions $ map fromList $ map (:[]) (toList y)
= fromList $ concat $ map toList $ map fromList $ map (:[]) (toList y)
= fromList $ concat $ map (:[]) (toList y)
= fromList $ toList y
= y
QED
Edit- See discussion below, I made a mistake and proved the wrong law (d'oh, I should have just read the title of the question :) ), so I am adding the correct one (associativity) below.
To prove associativity, we need to use two properties....
property 1 - toList (x >>= f) = su (toList x >>=' toList . f)
property 2 - su (x >>=' f) = su (su x >>=' f)
where su sorts and uniqs a list, i.e.
su [4,2,4,1] = [1,2,4],
and >>=' is the array bind operator,
x >>=' f = concat (map f x)
The first property should be obvious.... It just states that you can get the result of x >>= f in two different ways: either by applying f to the values in the set x and taking the union, or by applying it to the exact same values in the corresponding list and concatenating the results. The only hitch is that you might get repeated values in the list (the set wouldn't even allow that), so you apply the su function on the right side to canonicalize the result (note that toList also outputs in the same form).
The second property states that if you sort/uniq a result at the end of a pipeline of binds, you can also perform it earlier in the pipeline without changing the answer. Again, this should be obvious.... Adding/removing duplicates or reordering the values in the initial list only adds/removes duplicates or reorders the final result. But we are going to remove the duplicates and reorder at the end anyway, so it doesn't matter.
(A more rigorous proof of these two properties could be given based on the definitions of map/concat, toList, etc, but it would blow up the size of this posting.... I'll assume that everyone's intuition is strong enough and continue....)
Using these, I can now show you the proof. The general plan is to use the known associativity of the array bind operator, and the relationship of arrays with sets to show that the set bind operator must also be associative.
Since
toList set1 == toList set2
implies that
set1 == set2
I can prove
toList ((y >>= f) >>= g) = toList (y >>= (\x -> f x >>= g))
to get the desired result.
toList ((y >>= f) >>= g)
su (toList (y >>= f) >>=' toList . g) --by property 1
su (su (toList y >>=' toList . f) >>=' toList . g) --by property 1
su ((toList y >>=' toList . f) >>=' toList . g) --by property 2
su (toList y >>=' (\x -> (toList . f) x >>=' toList . g)) --by the associativity of the array bind operator
su (toList y >>=' (\x -> su (toList (f x) >>=' toList . g))) --by property 2 and the definition of (.)
su (toList y >>=' (\x -> toList (f x >>= g))) --by property 1
su (toList y >>=' toList . (\x -> f x >>= g)) --by the definition of (.)
su (su (toList y >>=' toList (\x -> f x >>= g))) --by property 2
su (toList (y >>= (\x -> f x >>= g))) --by property 1
toList (y >>= (\x -> f x >>= g)) --because toList is already sorted/uniqued
QED
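To back the proof up empirically, here is a quick check of associativity for Data.Set using the unions-based bind from the proof (my own sketch, not part of the answer):

import qualified Data.Set as S

bindS :: Ord b => S.Set a -> (a -> S.Set b) -> S.Set b
bindS x f = S.unions (map f (S.toList x))

checkAssoc :: Bool
checkAssoc = ((m `bindS` f) `bindS` g) == (m `bindS` (\x -> f x `bindS` g))
  where
    m   = S.fromList [1, 2, 3 :: Int]
    f x = S.fromList [x, x + 1]
    g x = S.fromList [x * 2, x * 3]
-- checkAssoc == True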
> U :: [Set x] -> Set x
>
> (m >>= f) >>= g ≡ m >>= (\x -> f x >>= g)
> left side:
> (m >>= f) >>= g
> (Set x >>= f) >>= g <=>
> (U (map f x)) >>= g <=> (with U (map f x) = Set y)
> Set y >>= g <=>
>
>
> right side:
> m >>= (\x -> f x >>= g) <=>
> Set x >>= (\x -> f x >>= g) (the function \x -> f x >>= g gives a Set y; it will consume the value of x.)
But this proof is wrong. (U = unionMult.)
I was told that I should try to create a function composition for both the left side and the right side; it will help in showing that they are equal.
RHS: right side
LHS: left side
want to show LHS == RHS
