Can you determine the min or max of a list using only the list monad? - haskell

Trying to understand the relation between Monad and Foldable. I am aware that part of the value of the Monad, Applicative and Functor typeclasses is their ability to lift functions over structure, but what if I wanted to generate a summary value (e.g. min or max) for the values contained in a Monad?
This would be impossible without an accumulator, right (like in Foldable)? And to have an accumulator you have to inject or destroy structure?
min :: Ord a => a -> a -> a

foldMin :: (Foldable t, Ord a) => t a -> Maybe a
foldMin t = foldr go Nothing t
  where
    go x Nothing  = Just x
    go x (Just y) = Just (min x y)
Here, the Nothing value is the accumulator. So it would not be possible to do an operation that produces a summary value like this within the confines of a do block?

I'm not entirely sure I understand the question, so forgive me if this isn't a useful answer, but as I understand it, the core of the question is this:
So it would not be possible to do an operation that produces a summary value like this within the confines of a do block?
Correct, that would not be possible. Haskell's do notation is syntactic sugar over Monad, so basically syntactic sugar over >>= and return.
return, as you know, doesn't let you 'access' the contents of the Monad, so the only access to the contents you have is via >>=, and in the case of the list monad, for instance, that only gives you one value at a time.
Notice that Foldable doesn't even require that the data container is a Functor (much less a Monad). Famously, Set isn't a Functor instance, but it is a Foldable instance.
You can, for example, find the minimum value in a set:
Prelude Data.Foldable Set> foldr (\x -> Just . maybe x (min x)) Nothing $ Set.fromList [42, 1337, 90125, 2112]
Just 42

The contrived and inefficient code below is the closest I can get to "using only the list monad". It is probably not what the OP is looking for, but here it is.
It also exploits head (which you can replace with listToMaybe if you want totality), null, and empty (which you can replace with []).
The code works by non-deterministically picking an element m and then checking that no greater element exists. This has quadratic complexity.
import Control.Applicative

maximum :: Ord a => [a] -> a
maximum xs = head maxima
  where
    isMax m = null $ do
      x <- xs
      if x > m
        then return x
        else empty
    maxima = do
      m <- xs  -- non-deterministically pick a maximum
      if isMax m
        then return m
        else empty
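For reference, this behaves like the standard maximum on non-empty lists (a small sketch; since this definition shadows Prelude's maximum, you would hide the Prelude one, e.g. with import Prelude hiding (maximum)):

λ> maximum [2, 7, 3, 7]
7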

I'm also not sure what the actual question is, but the need for an accumulator can be hidden with a Monoid instance. Then, for your minimum example, you can use foldMap from Data.Foldable to map and merge all the values of your Foldable. E.g.:
data Min a = Min { getMin :: Maybe a } deriving Show

-- On current GHC, Monoid requires a Semigroup instance,
-- so the combining operation goes there rather than in mappend.
instance Ord a => Semigroup (Min a) where
  a <> Min Nothing = a
  Min Nothing <> b = b
  Min (Just a) <> Min (Just b) = Min (Just (min a b))

instance Ord a => Monoid (Min a) where
  mempty = Min Nothing

foldMin :: (Foldable t, Ord a) => t a -> Maybe a
foldMin = getMin . foldMap (Min . Just)
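Used the same way as the foldr-based version from the question, for example (values picked arbitrarily):

λ> foldMin [3, 1, 2]
Just 1
λ> foldMin (Just 5)
Just 5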

Related

What is the main difference between Free Monoid and Monoid?

It looks like I have a pretty clear understanding of what a Monoid is in Haskell, but recently I heard about something called a free monoid.
What is a free monoid and how does it relate to a monoid?
Can you provide an example in Haskell?
As you already know, a monoid is a set with an element e and an operation <> satisfying
e <> x = x <> e = x (identity)
(x<>y)<>z = x<>(y<>z) (associativity)
Now, a free monoid, intuitively, is a monoid which satisfies only those equations above, and, obviously, all their consequences.
For instance, the Haskell list monoid ([a], [], (++)) is free.
By contrast, the Haskell sum monoid (Sum Int, Sum 0, \(Sum x) (Sum y) -> Sum (x+y)) is not free, since it also satisfies additional equations. For instance, it's commutative
x<>y = y<>x
and this does not follow from the first two equations.
Note that it can be proved, in maths, that all the free monoids are isomorphic to the list monoid [a]. So, "free monoid" in programming is only a fancy term for any data structure which 1) can be converted to a list, and back, with no loss of information, and 2) vice versa, a list can be converted to it, and back, with no loss of information.
In Haskell, you can mentally substitute "free monoid" with "list-like type".
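To make that concrete, here is a minimal sketch (SnocList and the conversion functions are illustrative names, not standard ones) of a list-like type that converts to and from [a] with no loss of information:

-- A "snoc list" grows at the right end instead of the left.
data SnocList a = Lin | Snoc (SnocList a) a

toList' :: SnocList a -> [a]
toList' Lin = []
toList' (Snoc xs x) = toList' xs ++ [x]

fromList' :: [a] -> SnocList a
fromList' = foldl Snoc Lin

-- toList' . fromList' and fromList' . toList' are both identities.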
In a programming context, I usually translate free monoid to [a]. In his excellent series of articles about category theory for programmers, Bartosz Milewski describes free monoids in Haskell as the list monoid (assuming one ignores some problems with infinite lists).
The identity element is the empty list, and the binary operation is list concatenation:
Prelude Data.Monoid> mempty :: [Int]
[]
Prelude Data.Monoid> [1..3] <> [7..10]
[1,2,3,7,8,9,10]
Intuitively, I think of this monoid as being 'free' because it is a monoid that you can always apply, regardless of the type of value you want to work with (just like the free monad is a monad you can always create from any functor).
Additionally, when more than one monoid exists for a type, the free monoid defers the decision on which specific monoid to use. For example, for integers, infinitely many monoids exist, but the most common are addition and multiplication.
If you have two (or more) integers, and you know that you may want to aggregate them, but you haven't yet decided which type of aggregation you want to apply, you can instead 'aggregate' them using the free monoid - practically, this means putting them in a list:
Prelude Data.Monoid> [3,7]
[3,7]
If you later decide that you want to add them together, then that's possible:
Prelude Data.Monoid> getSum $ mconcat $ Sum <$> [3,7]
10
If, instead, you wish to multiply them, you can do that as well:
Prelude Data.Monoid> getProduct $ mconcat $ Product <$> [3,7]
21
In these two examples, I've deliberately chosen to elevate each number to a type (Sum, Product) that embodies a more specific monoid, and then use mconcat to perform the aggregation.
For addition and multiplication, there are more succinct ways to do this, but I did it that way to illustrate how you can use a more specific monoid to interpret the free monoid.
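For instance, two of those more succinct alternatives are the standard sum and product functions, which express the same aggregations directly:

Prelude Data.Monoid> sum [3,7]
10
Prelude Data.Monoid> product [3,7]
21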
A free monoid is a specific type of monoid. Specifically, it’s the monoid you get by taking some fixed set of elements as characters and then forming all possible strings from those elements. Those strings, with the underlying operation being string concatenation, form a monoid, and that monoid is called the free monoid.
A monoid (M,•,1) is a mathematical structure such that:
M is a set
1 is a member of M
• : M * M -> M
a•1 = a = 1•a
Given elements a, b and c in M, we have a•(b•c) = (a•b)•c.
A free monoid on a set M is a monoid (M',•,0) together with a function e : M -> M' such that, for any monoid (N,*,1), given a (set) map f : M -> N we can extend this to a monoid morphism f' : (M',•,0) -> (N,*,1), i.e.
f a = f' (e a)
f' 0 = 1
f' (a•b) = (f' a) * (f' b)
In other words, it is a monoid that does nothing special.
An example monoid is the integers with the operation being addition and the identity being 0. Another monoid is sequences of integers with the operation being concatenation and the identity being the empty sequence. Now the integers under addition are not a free monoid on the integers. Consider the map into sequences of integers taking n to (n). For this to be free, we would need to extend it to a morphism taking n + m to (n,m); but since 0 = 0 + 0 = 0 + 0 + 0 = ..., it would have to take 0 to (0) and to (0,0) and to (0,0,0) and so on, which is impossible.
On the other hand if we try to look at sequences of integers as a free monoid on the integers, we see that it seems to work in this case. The extension of the map into the integers with addition is one that takes the sum of a sequence (with the sum of () being 0).
So what is the free monoid on a set S? Well one thing we could try is just arbitrary binary trees of S. In a Haskell type this would look like:
data T a = Unit | Single a | Conc (T a) (T a)
And it would have an identity of Unit, e = Single and (•) = Conc.
And we can write a function to show how it is free:
-- here the second argument represents a monoid structure on b
free :: (a -> b) -> (b -> b -> b, b) -> T a -> b
free f ((*), zero) = f' where
  f' (Single a) = f a
  f' Unit = zero
  f' (Conc a b) = f' a * f' b
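For instance, here is a quick sketch of interpreting a small tree in the additive monoid on numbers (the example tree is made up):

λ> free id ((+), 0) (Conc (Single 1) (Conc Unit (Single 2)))
3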
It should be quite obvious that this satisfies the required laws for a free monoid on a. Except for one: T a is not a monoid because it does not quite satisfy laws 4 or 5.
So now we should ask if we can make this into a simpler free monoid, ie one that is an actual monoid. The answer is yes. One way is to observe that Conc Unit a and Conc a Unit and Single a should be the same. So let’s make the first two types unrepresentable:
data TInner a = Single a | Conc (TInner a) (TInner a)
data T a = Unit | Inner (TInner a)
A second observation we can make is that there should be no difference between Conc (Conc a b) c and Conc a (Conc b c). This is due to law 5 above. We can then flatten our tree:
data TInner a = Single a | Conc (a,TInner a)
data T a = Unit | Inner (TInner a)
The strange construction with Conc forces us to only have a single way to represent Single a and Unit. But we see we can merge these all together: change the definition of Conc to Conc [a] and then we can change Single x to Conc [x], and Unit to Conc [] so we have:
data T a = Conc [a]
Or we can just write:
type T a = [a]
And the operations are:
unit = []
e a = [a]
(•) = (++)

free f ((*), zero) = f' where
  f' []     = zero
  f' (x:xs) = f x * f' xs
So in Haskell, the list type is called the free monoid.

Maybe monad and a list

Ok, so I am trying to learn how to use monads, starting out with maybe. I've come up with an example that I can't figure out how to apply it to in a nice way, so I was hoping someone else could:
I have a list containing a bunch of values. Depending on these values, my function should return the list itself, or a Nothing. In other words, I want to do a sort of filter, but with the consequence of a hit being the function failing.
The only way I can think of is to use a filter, then comparing the size of the list I get back to zero. Is there a better way?
This looks like a good fit for traverse:
traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)
That's a bit of a mouthful, so let's specialise it to your use case, with lists and Maybe:
GHCi> :set -XTypeApplications
GHCi> :t traverse @[] @Maybe
traverse @[] @Maybe :: (a -> Maybe b) -> [a] -> Maybe [b]
It works like this: you give it an a -> Maybe b function, which is applied to all elements of the list, just like fmap does. The twist is that the Maybe b values are then combined in a way that only gives you a modified list if there aren't any Nothings; otherwise, the overall result is Nothing. That fits your requirements like a glove:
noneOrNothing :: (a -> Bool) -> [a] -> Maybe [a]
noneOrNothing p = traverse (\x -> if p x then Nothing else Just x)
(allOrNothing would have been a more euphonic name, but then I'd have to flip the test with respect to your description.)
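For instance (the predicate and values here are made up):

GHCi> noneOrNothing even [1,3,5]
Just [1,3,5]
GHCi> noneOrNothing even [1,2,3]
Nothing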
There are a lot of things we might discuss about the Traversable and Applicative classes. For now, I will talk a bit more about Applicative, in case you haven't met it yet. Applicative is a superclass of Monad with two essential methods: pure, which is the same thing as return, and (<*>), which is not entirely unlike (>>=) but crucially different from it. For the Maybe example...
GHCi> :t (>>=) @Maybe
(>>=) @Maybe :: Maybe a -> (a -> Maybe b) -> Maybe b
GHCi> :t (<*>) @Maybe
(<*>) @Maybe :: Maybe (a -> b) -> Maybe a -> Maybe b
... we can describe the difference like this: in mx >>= f, if mx is a Just-value, (>>=) reaches inside of it to apply f and produce a result, which, depending on what was inside mx, will turn out to be a Just-value or a Nothing. In mf <*> mx, though, if mf and mx are Just-values you are guaranteed to get a Just value, which will hold the result of applying the function from mf to the value from mx. (By the way: what will happen if mf or mx are Nothing?)
traverse involves Applicative because the combining of values I mentioned at the beginning (which, in your example, turns a number of Maybe a values into a Maybe [a]) is done using (<*>). As your question was originally about monads, it is worth noting that it is possible to define traverse using Monad rather than Applicative. This variation goes by the name mapM:
mapM :: (Traversable t, Monad m) => (a -> m b) -> t a -> m (t b)
We prefer traverse to mapM because it is more general -- as mentioned above, Applicative is a superclass of Monad.
On a closing note, your intuition about this being "a sort of filter" makes a lot of sense. In particular, one way to think about Maybe a is that it is what you get when you pick booleans and attach values of type a to True. From that vantage point, (<*>) works as an && for these weird booleans, which combines the attached values if you happen to supply two of them (cf. DarthFennec's suggestion of an implementation using any). Once you get used to Traversable, you might enjoy having a look at the Filterable and Witherable classes, which play with this relationship between Maybe and Bool.
duplode's answer is a good one, but I think it is also helpful to learn to operate within a monad in a more basic way. It can be a challenge to learn every little monad-general function, and see how they could fit together to solve a specific problem. So, here's a DIY solution that shows how to use do notation and recursion, tools which can help you with any monadic question.
forbid :: (a -> Bool) -> [a] -> Maybe [a]
forbid _ [] = Just []
forbid p (x:xs) = if p x
  then Nothing
  else do
    remainder <- forbid p xs
    Just (x : remainder)
Compare this to an implementation of remove, the opposite of filter:
remove :: (a -> Bool) -> [a] -> [a]
remove _ [] = []
remove p (x:xs) = if p x
  then remove p xs
  else
    let remainder = remove p xs
    in x : remainder
The structure is the same, with just a couple differences: what you want to do when the predicate returns true, and how you get access to the value returned by the recursive call. For remove, the returned value is a list, and so you can just let-bind it and cons to it. With forbid, the returned value is only maybe a list, and so you need to use <- to bind to that monadic value. If the return value was Nothing, bind will short-circuit the computation and return Nothing; if it was Just a list, the do block will continue, and cons a value to the front of that list. Then you wrap it back up in a Just.
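For example (predicate and values made up):

λ> forbid even [1,3,5]
Just [1,3,5]
λ> forbid even [1,2,3]
Nothing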

Why is toList (1, 2) == [2]

As the question says, why is toList (1, 2) == [2]?
I remember something similar happening when fmapping on tuples, but I do not remember why or if it is related.
(1,2) does not correspond to the list [1,2]. That wouldn't make sense: what would (True, 3.14) then correspond to? You can't have the list [True, 3.14], because a list can only contain elements of a single type. (Haskell is different from e.g. Python here.)
The only way to pick elements of a guaranteed single type from any tuple is, well, to take only a single element. Hence toList, as generated from the Foldable ((,) a) instance, takes tuples (a,b) and yields lists [b]. Obviously there's always exactly one b element in such a tuple†.
You could in principle consider (Int, Int) as a special case where the elements have the same type and hence you can pick two instead of one, but such a special handling would require some highly awkward type-equality checking. And generally, special-case handling is not a good idea.
Arguably, it would have been better not to define the Foldable (a,) instance at all, to avoid this confusing behaviour. Then again, sometimes it's handy to use fold to just get rid of the first tuple element (e.g. some index).
†Why b and not a? That's not completely arbitrary: (a,b) is actually syntactic sugar for (,) a b, hence you can partially apply the tuple constructor as (,) a and consider that a functor (whose elements have type b), but there is no way to partially apply (,) to only its second argument, so you can't get a functor whose elements would have type a.
If you are planning to use homogeneous pairs heavily, you may want to declare a new type which will precisely correspond to them. This way you'll be able to have access to the toList you were expecting.
newtype Pair a = Pair { pair :: (a, a) }

instance Functor Pair where
  fmap f (Pair (x, y)) = Pair (f x, f y)

instance Foldable Pair where
  foldr f z (Pair (x, y)) = f x $ f y z
(a, b) is fundamentally different from Pair a or Constant (a, a) b and it is important to clearly document which one you mean in your code if you want typeclass resolution to pick the right instance.
newtype Constant a b = Constant a

instance Functor (Constant a) where
  fmap f (Constant a) = Constant a

instance Foldable (Constant a) where
  foldr f z _ = z
Examples:
length (Constant (1, 2)) == 0
length (1, 2) == 1
length (Pair (1, 2)) == 2
The results do make more sense when you interpret the function names like this:
length - how many values will be touched by fmap, fold etc.
toList - what elements will be touched by fmap, fold etc.
As long as
length x == length (toList x)
the world is fine.
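And the corresponding toList results (based on the instances above; Data.Foldable's toList is derived from foldr) mirror the lengths:

toList (Constant (1, 2)) == []
toList (1, 2) == [2]
toList (Pair (1, 2)) == [1, 2]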

Choosing the non-empty Monoid

I need a function which will choose a non-empty monoid. For a list this will mean the following behaviour:
> [1] `mor` []
[1]
> [1] `mor` [2]
[1]
> [] `mor` [2]
[2]
Now, I've actually implemented it, but I am wondering whether some standard alternative exists, because it seems like a fairly common case. Unfortunately Hoogle doesn't help.
Here's my implementation:
mor :: (Eq a, Monoid a) => a -> a -> a
mor a b = if a /= mempty then a else b
If your lists contain at most one element, they're isomorphic to Maybe, and for that there's the "first non empty" monoid: First from Data.Monoid. It's a wrapper around Maybe a values, and mappend returns the first Just value:
import Data.Monoid

main = do
  print $ (First $ Just 'a') <> (First $ Just 'b')
  print $ (First $ Just 'a') <> (First Nothing)
  print $ (First Nothing) <> (First $ Just 'b')
  print $ (First Nothing) <> (First Nothing :: First Char)
==> Output:
First {getFirst = Just 'a'}
First {getFirst = Just 'a'}
First {getFirst = Just 'b'}
First {getFirst = Nothing}
Conversion [a] -> Maybe a is achieved using Data.Maybe.listToMaybe.
On a side note: this one does not put a typeclass constraint on the wrapped type; in your question, you need an Eq instance to compare for equality with mempty. This comes at the cost of having to work with the Maybe type, of course.
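Putting those pieces together for the list case, under the same at-most-one-element caveat, could look like this sketch (morFirst is a hypothetical name; it keeps only the head of the winning list, so longer lists lose their tails):

import Data.Maybe (listToMaybe, maybeToList)
import Data.Monoid (First(..))

-- Only faithful for lists with at most one element.
morFirst :: [a] -> [a] -> [a]
morFirst xs ys = maybeToList . getFirst $ First (listToMaybe xs) <> First (listToMaybe ys)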
[This is really a long comment rather than an answer]
In my comment, when I said "monoidal things have no notion of introspection" - I meant that you can't perform analysis (pattern matching, equality, <, >, etc.) on monoids. This is obvious of course - the API of Monoid is only a unit (mempty) and an operation mappend (more abstractly <>) that takes two monoidal things and returns one. The definition of mappend for a type is free to use case analysis, but afterwards all you can do with monoidal things is use the Monoid API.
It's something of a folklore in the Haskell community to avoid inventing things, preferring instead to use objects from mathematics and computer science (including functional programming history). Combining Eq (which needs to analyse its arguments) and Monoid introduces a new class of things - monoids that support enough introspection to allow equality; and at this point there is a reasonable argument that an Eq-Monoid thing goes against the spirit of its Monoid superclass (Monoids are opaque). As this is both a new class of objects and potentially contentious - a standard implementation won't exist.
First, your mor function looks rather suspicious because it requires a Monoid but never uses mappend, and so it is significantly more constrained than necessary.
mor :: (Eq a, Monoid a) => a -> a -> a
mor a b = if a /= mempty then a else b
You could accomplish the same thing with merely a Default constraint:
import Data.Default
mor :: (Eq a, Default a) => a -> a -> a
mor a b = if a /= def then a else b
and I think that any use of Default should also be viewed warily because, as I believe many Haskellers complain, it is a class without principles.
My second thought is that it seems that the data type you're really dealing with here is Maybe (NonEmpty a), not [a], and the Monoid you're actually talking about is First.
import Data.Monoid
morMaybe :: Maybe a -> Maybe a -> Maybe a
morMaybe x y = getFirst (First x <> First y)
And so then we could use that with lists, as in your example, under the (nonEmpty, maybe [] toList) isomorphism between [a] and Maybe (NonEmpty a):
import Data.List.NonEmpty

morList :: [t] -> [t] -> [t]
morList x y = maybe [] toList (nonEmpty x `morMaybe` nonEmpty y)

λ> morList [1] []
[1]
λ> morList [] [2]
[2]
λ> morList [1] [2]
[1]
(I'm sure that somebody more familiar with the lens library could provide a more impressive concise demonstration here, but I don't immediately know how.)
You could extend Monoid with a predicate to test whether an element is an identity.
class Monoid a => TestableMonoid a
  where
    isMempty :: a -> Bool

morTestable :: TestableMonoid a => a -> a -> a
morTestable x y = if isMempty x then y else x
Not every monoid can have an instance of TestableMonoid, but plenty (including list) can.
instance TestableMonoid [a]
  where
    isMempty = null
We could even then write a newtype wrapper with a Monoid:
newtype Mor a = Mor { unMor :: a }

instance TestableMonoid a => Semigroup (Mor a)
  where
    Mor x <> Mor y = Mor (morTestable x y)

instance TestableMonoid a => Monoid (Mor a)
  where
    mempty = Mor mempty
λ> unMor (Mor [1] <> Mor [])
[1]
λ> unMor (Mor [] <> Mor [2])
[2]
λ> unMor (Mor [1] <> Mor [2])
[1]
So that leaves open the question of whether the TestableMonoid class deserves to exist. It certainly seems like a more "algebraically legitimate" class than Default, at least, because we can give it a law that relates it to Monoid:
isMempty x iff mappend x = id
But I do question whether this actually has any common use cases. As I said earlier, the Monoid constraint is superfluous for your use case because you never mappend. So we should ask, then, can we envision a situation in which one might need both mappend and isMempty, and thus have a legitimate need for a TestableMonoid constraint? It's possible I'm being shortsighted here, but I can't envision a case.
I think this is because of something Stephen Tetley touched on when he said that this "goes against the spirit of its Monoid." Tilt your head at the type signature of mappend with a slightly different parenthesization:
mappend :: a -> (a -> a)
mappend is a mapping from members of a set a to functions a -> a. A monoid is a way of viewing values as functions over those values. The monoid is a view of the world of a only through the window of what these functions let us see. And functions are very limited in what they let us see. The only thing we ever do with them is apply them. We never ask anything else of a function (as Stephen said, we have no introspection into them). So although, yes, you can bolt anything you want onto a subclass, in this case the thing we're bolting on feels very different in character from the base class we are extending, and it feels unlikely that there would be much intersection between the use cases of functions and the use cases of things that have general equality or a predicate like isMempty.
So finally I want to come back around to the simple and precise way to write this: write code at the value level and stop worrying about classes. You don't need Monoid and you don't need Eq; all you need is an additional argument:
morSimple :: (t -> Bool)  -- ^ Determine whether a value should be discarded
          -> t -> t -> t
morSimple f x y = if f x then y else x
λ> morSimple null [1] []
[1]
λ> morSimple null [1] [2]
[1]
λ> morSimple null [] [2]
[2]

Can someone explain the traverse function in Haskell?

I am trying and failing to grok the traverse function from Data.Traversable. I am unable to see its point. Since I come from an imperative background, can someone please explain it to me in terms of an imperative loop? Pseudo-code would be much appreciated. Thanks.
traverse is the same as fmap, except that it also allows you to run effects while you're rebuilding the data structure.
Take a look at the example from the Data.Traversable documentation.
data Tree a = Empty | Leaf a | Node (Tree a) a (Tree a)
The Functor instance of Tree would be:
instance Functor Tree where
  fmap f Empty = Empty
  fmap f (Leaf x) = Leaf (f x)
  fmap f (Node l k r) = Node (fmap f l) (f k) (fmap f r)
It rebuilds the entire tree, applying f to every value.
instance Traversable Tree where
  traverse f Empty = pure Empty
  traverse f (Leaf x) = Leaf <$> f x
  traverse f (Node l k r) = Node <$> traverse f l <*> f k <*> traverse f r
The Traversable instance is almost the same, except the constructors are called in applicative style. This means that we can have (side-)effects while rebuilding the tree. Applicative is almost the same as monads, except that effects cannot depend on previous results. In this example it means that you could not do something different to the right branch of a node depending on the results of rebuilding the left branch for example.
For historical reasons, the Traversable class also contains a monadic version of traverse called mapM. For all intents and purposes mapM is the same as traverse - it exists as a separate method because Applicative only later became a superclass of Monad.
If you would implement this in an impure language, fmap would be the same as traverse, as there is no way to prevent side-effects. You can't implement it as a loop, as you have to traverse your data structure recursively. Here's a small example how I would do it in Javascript:
Node.prototype.traverse = function (f) {
  return new Node(this.l.traverse(f), f(this.k), this.r.traverse(f));
};
Implementing it like this limits you to the effects that the language allows, though. If you want, for example, non-determinism (which the list instance of Applicative models) and your language doesn't have it built in, you're out of luck.
traverse turns things inside a Traversable into a Traversable of things "inside" an Applicative, given a function that makes Applicatives out of things.
Let's use Maybe as Applicative and list as Traversable. First we need the transformation function:
half x = if even x then Just (x `div` 2) else Nothing
So if a number is even, we get half of it (inside a Just), else we get Nothing. If everything goes "well", it looks like this:
traverse half [2,4..10]
--Just [1,2,3,4,5]
But...
traverse half [1..10]
-- Nothing
The reason is that the <*> function is used to build the result, and when one of the arguments is Nothing, we get Nothing back.
Another example:
rep x = replicate x x
This function generates a list of length x with the content x, e.g. rep 3 = [3,3,3]. What is the result of traverse rep [1..3]?
We get the partial results of [1], [2,2] and [3,3,3] using rep. Now the semantics of lists as Applicatives is "take all combinations", e.g. (+) <$> [10,20] <*> [3,4] is [13,14,23,24].
"All combinations" of [1] and [2,2] are two times [1,2]. All combinations of two times [1,2] and [3,3,3] are six times [1,2,3]. So we have:
traverse rep [1..3]
--[[1,2,3],[1,2,3],[1,2,3],[1,2,3],[1,2,3],[1,2,3]]
I think it's easiest to understand in terms of sequenceA, as traverse can be defined as follows.
traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)
traverse f = sequenceA . fmap f
sequenceA sequences together the elements of a structure from left to right, returning a structure with the same shape containing the results.
sequenceA :: (Traversable t, Applicative f) => t (f a) -> f (t a)
sequenceA = traverse id
You can also think of sequenceA as reversing the order of two functors, e.g. going from a list of actions into an action returning a list of results.
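A few standard examples of that reversal:

λ> sequenceA [Just 1, Just 2, Just 3]
Just [1,2,3]
λ> sequenceA [Just 1, Nothing, Just 3]
Nothing
λ> sequenceA [[1,2],[3,4]]
[[1,3],[1,4],[2,3],[2,4]]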
So traverse takes some structure, and applies f to transform every element in the structure into some applicative, it then sequences up the effects of those applicatives from left to right, returning a structure with the same shape containing the results.
You can also compare it to Foldable, which defines the related function traverse_.
traverse_ :: (Foldable t, Applicative f) => (a -> f b) -> t a -> f ()
So you can see that the key difference between Foldable and Traversable is that the latter allows you to preserve the shape of the structure, whereas the former requires you to fold the result up into some other value.
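To see that difference concretely (a small sketch using Maybe as the applicative):

λ> traverse Just [1,2,3]    -- shape of the list is preserved
Just [1,2,3]
λ> import Data.Foldable (traverse_)
λ> traverse_ Just [1,2,3]   -- results are folded away
Just ()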
A simple example of its usage is using a list as the traversable structure, and IO as the applicative:
λ> import Data.Traversable
λ> let qs = ["name", "quest", "favorite color"]
λ> traverse (\thing -> putStrLn ("What is your " ++ thing ++ "?") *> getLine) qs
What is your name?
Sir Lancelot
What is your quest?
to seek the holy grail
What is your favorite color?
blue
["Sir Lancelot","to seek the holy grail","blue"]
While this example is rather unexciting, things get more interesting when traverse is used on other types of containers, or using other applicatives.
It's kind of like fmap, except that you can run effects inside the mapper function, which also changes the result type.
Imagine a list of integers representing user IDs in a database: [1, 2, 3]. If you want to fmap these user IDs to usernames, you can't use a traditional fmap, because inside the function you need to access the database to read the usernames (which requires an effect -- in this case, using the IO monad).
The signature of traverse is:
traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)
With traverse, you can do effects, therefore, your code for mapping user IDs to usernames looks like:
mapUserIDsToUsernames :: (Int -> IO String) -> [Int] -> IO [String]
mapUserIDsToUsernames fn ids = traverse fn ids
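A hypothetical end-to-end sketch (lookupUsername and its behaviour are made up; any Int -> IO String action, such as a real database query, would do):

-- Stand-in for a real database lookup.
lookupUsername :: Int -> IO String
lookupUsername uid = pure ("user" ++ show uid)

main :: IO ()
main = do
  names <- mapUserIDsToUsernames lookupUsername [1, 2, 3]
  print names  -- prints ["user1","user2","user3"]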
There's also a function called mapM:
mapM :: (Traversable t, Monad m) => (a -> m b) -> t a -> m (t b)
Any use of mapM can be replaced with traverse, but not the other way around. mapM only works for monads, whereas traverse is more generic.
If you just want to achieve an effect and not return any useful value, there are traverse_ and mapM_ versions of these functions, both of which ignore the return value from the function and are slightly faster.
traverse is the loop. Its implementation depends on the data structure to be traversed. That might be a list, a tree, a Maybe, a Seq(uence), or anything that has a generic way of being traversed via something like a for-loop or a recursive function. An array would use a for-loop, a list a while-loop, a tree either something recursive or a stack combined with a while-loop; but in functional languages you do not need these cumbersome loop constructs: you combine the inner part of the loop (in the shape of a function) with the data structure in a more direct and less verbose manner.
With the Traversable typeclass, you can write your algorithms in a more independent and versatile way. But my experience says that Traversable is usually only used to glue algorithms to existing data structures. It is also quite nice not to need to write similar functions for different data types.

Resources