I am trying and failing to grok the traverse function from Data.Traversable. I am unable to see its point. Since I come from an imperative background, can someone please explain it to me in terms of an imperative loop? Pseudo-code would be much appreciated. Thanks.
traverse is the same as fmap, except that it also allows you to run effects while you're rebuilding the data structure.
Take a look at the example from the Data.Traversable documentation.
data Tree a = Empty | Leaf a | Node (Tree a) a (Tree a)
The Functor instance of Tree would be:
instance Functor Tree where
  fmap f Empty        = Empty
  fmap f (Leaf x)     = Leaf (f x)
  fmap f (Node l k r) = Node (fmap f l) (f k) (fmap f r)
It rebuilds the entire tree, applying f to every value.
instance Traversable Tree where
  traverse f Empty        = pure Empty
  traverse f (Leaf x)     = Leaf <$> f x
  traverse f (Node l k r) = Node <$> traverse f l <*> f k <*> traverse f r
The Traversable instance is almost the same, except the constructors are called in applicative style. This means that we can have (side-)effects while rebuilding the tree. Applicative is almost the same as Monad, except that effects cannot depend on previous results. In this example that means, for instance, that you could not process the right branch of a node differently depending on the results of rebuilding the left branch.
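To see the effects in action, here is a quick GHCi demo, assuming the Tree type and instances above (printAndKeep is just a throwaway helper for this example):

printAndKeep :: Show a => a -> IO a
printAndKeep x = print x >> return x

-- ghci> _ <- traverse printAndKeep (Node (Leaf 1) 2 (Leaf 3))
-- 1
-- 2
-- 3

The left subtree, the node value and the right subtree are visited in that order, so the prints come out 1, 2, 3 while the tree is rebuilt unchanged.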
For historical reasons, the Traversable class also contains a monadic version of traverse called mapM. For all intents and purposes mapM is the same as traverse - it exists as a separate method because Applicative only later became a superclass of Monad.
If you were to implement this in an impure language, fmap would be the same as traverse, as there is no way to prevent side effects. You can't implement it as a simple loop, as you have to traverse your data structure recursively. Here's a small example of how I would do it in JavaScript:
Node.prototype.traverse = function (f) {
  return new Node(this.l.traverse(f), f(this.k), this.r.traverse(f));
};
Implementing it like this limits you to the effects that the language allows, though. If you, for example, want non-determinism (which the list instance of Applicative models) and your language doesn't have it built in, you're out of luck.
traverse turns things inside a Traversable into a Traversable of things "inside" an Applicative, given a function that makes Applicatives out of things.
Let's use Maybe as Applicative and list as Traversable. First we need the transformation function:
half x = if even x then Just (x `div` 2) else Nothing
So if a number is even, we get half of it (inside a Just), else we get Nothing. If everything goes "well", it looks like this:
traverse half [2,4..10]
--Just [1,2,3,4,5]
But...
traverse half [1..10]
-- Nothing
The reason is that the <*> function is used to build the result, and when one of the arguments is Nothing, we get Nothing back.
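Unfolding the failing case makes this concrete:

-- traverse half [1,2]
--   = (:) <$> half 1 <*> traverse half [2]
--   = (:) <$> Nothing <*> ...
--   = Nothing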
Another example:
rep x = replicate x x
This function generates a list of length x with the content x, e.g. rep 3 = [3,3,3]. What is the result of traverse rep [1..3]?
We get the partial results of [1], [2,2] and [3,3,3] using rep. Now the semantics of lists as Applicatives is "take all combinations", e.g. (+) <$> [10,20] <*> [3,4] is [13,14,23,24].
"All combinations" of [1] and [2,2] are two times [1,2]. All combinations of two times [1,2] and [3,3,3] are six times [1,2,3]. So we have:
traverse rep [1..3]
--[[1,2,3],[1,2,3],[1,2,3],[1,2,3],[1,2,3],[1,2,3]]
I think it's easiest to understand in terms of sequenceA, as traverse can be defined as follows.
traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)
traverse f = sequenceA . fmap f
sequenceA sequences together the elements of a structure from left to right, returning a structure with the same shape containing the results.
sequenceA :: (Traversable t, Applicative f) => t (f a) -> f (t a)
sequenceA = traverse id
You can also think of sequenceA as reversing the order of two functors, e.g. going from a list of actions into an action returning a list of results.
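For example, in GHCi (an illustrative session):

-- ghci> sequenceA [Just 1, Just 2]          -- a list of Maybes...
-- Just [1,2]                                -- ...becomes a Maybe of a list
-- ghci> sequenceA [Just 1, Nothing]
-- Nothing
-- ghci> xs <- sequenceA [getLine, getLine]  -- one IO action reading two lines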
So traverse takes some structure, and applies f to transform every element in the structure into some applicative, it then sequences up the effects of those applicatives from left to right, returning a structure with the same shape containing the results.
You can also compare it to Foldable, which defines the related function traverse_.
traverse_ :: (Foldable t, Applicative f) => (a -> f b) -> t a -> f ()
So you can see that the key difference between Foldable and Traversable is that the latter allows you to preserve the shape of the structure, whereas the former requires you to fold the result up into some other value.
A simple example of its usage is using a list as the traversable structure, and IO as the applicative:
λ> import Data.Traversable
λ> let qs = ["name", "quest", "favorite color"]
λ> traverse (\thing -> putStrLn ("What is your " ++ thing ++ "?") *> getLine) qs
What is your name?
Sir Lancelot
What is your quest?
to seek the holy grail
What is your favorite color?
blue
["Sir Lancelot","to seek the holy grail","blue"]
While this example is rather unexciting, things get more interesting when traverse is used on other types of containers, or using other applicatives.
It's kind of like fmap, except that you can run effects inside the mapper function, which also changes the result type.
Imagine a list of integers representing user IDs in a database: [1, 2, 3]. If you want to fmap these user IDs to usernames, you can't use a traditional fmap, because inside the function you need to access the database to read the usernames (which requires an effect -- in this case, using the IO monad).
The signature of traverse is:
traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)
With traverse, you can do effects, therefore, your code for mapping user IDs to usernames looks like:
mapUserIDsToUsernames :: (Int -> IO String) -> [Int] -> IO [String]
mapUserIDsToUsernames fn ids = traverse fn ids
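A minimal runnable sketch (lookupUsername here is a made-up stand-in for a real database query):

lookupUsername :: Int -> IO String
lookupUsername uid = return ("user" ++ show uid)

main :: IO ()
main = traverse lookupUsername [1, 2, 3] >>= print
-- prints ["user1","user2","user3"]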
There's also a function called mapM:
mapM :: (Traversable t, Monad m) => (a -> m b) -> t a -> m (t b)
Any use of mapM can be replaced with traverse, but not the other way around. mapM only works for monads, whereas traverse is more generic.
If you just want to achieve an effect and not return any useful value, there are traverse_ and mapM_ versions of these functions, both of which ignore the return value from the function and are slightly faster.
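For example, with IO:

import Data.Foldable (traverse_)

main :: IO ()
main = traverse_ print [1, 2, 3] -- prints 1, 2, 3; the result is (), not [(),(),()]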
traverse is the loop. Its implementation depends on the data structure to be traversed. That might be a list, tree, Maybe, Seq(uence), or anything that has a generic way of being traversed, via something like a for loop or a recursive function. An array would use a for loop, a list a while loop, a tree either recursion or a stack combined with a while loop; but in functional languages you do not need these cumbersome loop constructs: you combine the inner part of the loop (in the shape of a function) with the data structure in a more direct and less verbose way.
With the Traversable typeclass, you can write your algorithms more independently and more generally. But in my experience, Traversable is usually only used to glue algorithms to existing data structures. It is also quite nice not to have to write a similar function for every new data type.
Related
Consider the following Haskell statement:
mapM print ["1", "2", "3"]
Indeed, this prints "1", "2", and "3" in order.
Question: How do you know that mapM will first print "1", and then print "2", and finally print "3". Is there any guarantee that it will do this? Or is it a coincidence of how it is implemented deep within GHC?
If you evaluate mapM print ["1", "2", "3"] by expanding the definition of mapM you will arrive at (ignoring some irrelevant details)
print "1" >> print "2" >> print "3"
You can think of print and >> as abstract constructors of IO actions that cannot be evaluated any further, just as a data constructor like Just cannot be evaluated any further.
The interpretation of print s is the action of printing s, and the interpretation of a >> b is the action that first performs a and then performs b. So, the interpretation of
mapM print ["1", "2", "3"] = print "1" >> print "2" >> print "3"
is to first print 1, then print 2, and finally print 3.
How this is actually implemented in GHC is entirely a different matter which you shouldn't worry about for a long time.
There is no guarantee on the order of the evaluation but there is a guarantee on the order of the effects. For more information see this answer that discusses forM.
You need to learn to make the following, tricky distinction:
The order of evaluation
The order of effects (a.k.a. "actions")
What forM, sequence and similar functions promise is that the effects will be ordered from left to right. So, for example, the following is guaranteed to print characters in the same order that they occur in the string...
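A minimal illustration of that guarantee (not the quoted answer's original snippet):

import Data.Foldable (forM_)

main :: IO ()
main = forM_ "abc" print -- always prints 'a', 'b', 'c' in that order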
Note: "forM is mapM with its arguments flipped. For a version that ignores the results see forM_."
Preliminary note: The answers by Reid Barton and Dair are entirely correct and fully cover your practical concerns. I mention that because partway through this answer one might have the impression that it contradicts them, which is not the case, as will be clear by the time we get to the end. That being clear, it is time to indulge in some language lawyering.
Is there any guarantee that [mapM print] will [print the list elements in order]?
Yes, there is, as explained by the other answers. Here, I will discuss what might justify this guarantee.
In this day and age, mapM is, by default, merely traverse specialised to monads:
traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)
mapM     :: (Traversable t, Monad m)       => (a -> m b) -> t a -> m (t b)
That being so, in what follows I will be primarily concerned with traverse, and how our expectations about the sequencing of effects relate to the Traversable class.
As far as the production of effects is concerned, traverse generates an Applicative effect for each value in the traversed container and combines all such effects through the relevant Applicative instance. This second part is clearly reflected by the type of sequenceA, through which the applicative context is, so to say, factored out of the container:
sequenceA :: (Traversable t, Applicative f) => t (f a) -> f (t a)
-- sequenceA and traverse are interrelated by:
traverse f = sequenceA . fmap f
sequenceA = traverse id
The Traversable instance for lists, for example, is:
instance Traversable [] where
  {-# INLINE traverse #-} -- so that traverse can fuse
  traverse f = List.foldr cons_f (pure [])
    where cons_f x ys = (:) <$> f x <*> ys
It is plain to see that the combining, and therefore the sequencing, of effects is done through (<*>), so let's focus on it for a moment. Picking the IO applicative functor as an illustrative example, we can see (<*>) sequencing effects from left to right:
GHCi> -- Superfluous parentheses added for emphasis.
GHCi> ((putStrLn "Type something:" >> return reverse) <*> getLine) >>= putStrLn
Type something:
Whatever
revetahW
(<*>), however, sequences effects from left-to-right by convention, and not for any inherent reason. As witnessed by the Backwards wrapper from transformers, it is, in principle, always possible to implement (<*>) with right-to-left sequencing and still get a lawful Applicative instance. Without using the wrapper, it is also possible to take advantage of (<**>) from Control.Applicative to invert the sequencing:
(<**>) :: Applicative f => f a -> f (a -> b) -> f b
GHCi> import Control.Applicative
GHCi> (getLine <**> (putStrLn "Type something:" >> return reverse)) >>= putStrLn
Whatever
Type something:
revetahW
Given that it is so easy to flip the sequencing of Applicative effects, one might wonder whether this trick might transfer to Traversable. For instance, let's say we implement...
esrevart :: Applicative f => (a -> f b) -> [a] -> f [b]
... so that it is just like traverse for lists save for using Backwards or (<**>) to flip the sequencing of effects (I will leave that as an exercise for the reader). Would esrevart be a legal implementation of traverse? While we might figure it out by trying to prove the identity and composition laws of Traversable hold, that is actually not necessary: given that Backwards f for any applicative f is also applicative, an esrevart patterned after any lawful traverse will also follow the Traversable laws. The Reverse wrapper, also part of transformers, offers a general implementation of this reversal.
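For the record, here is one way to fill in that exercise (a sketch; Backwards lives in Control.Applicative.Backwards in the transformers package):

import Control.Applicative.Backwards (Backwards (..), forwards)

esrevart :: Applicative f => (a -> f b) -> [a] -> f [b]
esrevart f = forwards . traverse (Backwards . f)

-- ghci> _ <- esrevart print [1,2,3] -- prints 3, 2, 1; the result list keeps its order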
We have thus concluded that there can be legal Traversable instances that differ in the sequencing of effects. In particular, a list traverse that sequences effects from tail to head is conceivable. That doesn't make the possibility any less strange, though. To avoid utter bewilderment, Traversable instances are conventionally implemented with plain (<*>) and following the natural order in which the constructors are used to build the traversable container, which in the case of lists amounts to the expected head-to-tail sequencing of effects. One place where this convention shows up is in the automatic generation of instances by the DeriveTraversable extension.
A final, historical note. Couching this discussion, which is ultimately about mapM, in terms of the Traversable class would be a move of dubious relevance in a not so distant past. mapM was effectively subsumed by traverse only last year, but it has existed for much longer. For instance, the Haskell Report 1.3 from 1996, years before Applicative and Traversable came into being (not even ap is there, in fact), provides the following specification for mapM:
accumulate :: Monad m => [m a] -> m [a]
accumulate = foldr mcons (return [])
  where mcons p q = p >>= \x -> q >>= \y -> return (x:y)

mapM :: Monad m => (a -> m b) -> [a] -> m [b]
mapM f as = accumulate (map f as)
The sequencing of effects, here enforced through (>>=), is left-to-right, for no other reason than it being the sensible thing to do.
P.S.: It is worth emphasising that, while it is possible to write a right-to-left mapM in terms of the Monad operations (in the Report 1.3 implementation quoted here, for instance, it merely requires exchanging p and q in the right-hand side of mcons), there is no such thing as a general Backwards for monads. Since f in x >>= f is a Monad m => a -> m b function which creates effects from values, the effects associated with f depend on x. As a consequence, a simple inversion of sequencing like that possible with (<*>) is not even guaranteed to be meaningful, let alone lawful.
There are quite a few questions here about whether or not certain transformations of types that involve Monads are possible.

For instance, it's possible to write a function of type f :: Monad m => [m a] -> m [a], but impossible to write a function of type g :: Monad m => m [a] -> [m a] that is a proper inverse of the former (i.e., f . g = id).
I want to understand what rules one can use to determine if a function of that type can or cannot be constructed, and why these types cannot be constructed if they disobey these rules.
The way that I've always thought about monads is that a value of type Monad m => m a is some program of type m that executes and produces an a. The monad laws reinforce this notion by thinking of composition of these programs as "do thing one then do thing two", and produce some sort of combination of the results.
Right unit: Taking a program and just returning its value should be the same as just running the original program.

m >>= return = m

Left unit: If you create a simple program that just returns a value, and then pass that value to a function that creates a new program, then the resulting program should be just as if you called the function on the value.

return x >>= f = f x
Associativity If you execute a program m, feed its result into a function f that produces another program, and then feed that result into a third function g that also produces a program, then this is identical to creating a new function that returns a program based on feeding the result of f into g, and feeding the result of m into it.
(m >>= f) >>= g = m >>= (\x -> f x >>= g)
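A quick GHCi sanity check of the three laws with the Maybe monad:

-- ghci> let f x = Just (x + 1); g x = Just (x * 2)
-- ghci> (Just 3 >>= return) == Just 3                          -- right unit
-- True
-- ghci> (return 3 >>= f) == f 3                                -- left unit
-- True
-- ghci> ((Just 3 >>= f) >>= g) == (Just 3 >>= \x -> f x >>= g) -- associativity
-- True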
Using this intuition about a "program that creates a value", we can come to some conclusions about what it means for the functions that you've provided in your examples.
Monad m => [m a] -> m [a]: It's hard to deviate from the intuitive definition of what this function should do: execute each program in sequence and collect the results. This produces another program that produces a list of results.

Monad m => m [a] -> [m a]: This doesn't really have a clear intuitive definition, since it's a program that produces a list. You can't create a list without getting access to the resulting values, which in this case means executing a program. Certain monads that have a clear way to extract a value from a program, providing some variant of m a -> a, like the State monad, can have sane implementations of some function like this. It would have to be application-specific, though. Other monads, like IO, you cannot escape from.

Monad m => (a -> m b) -> m (a -> b): This also doesn't really have a clear intuitive definition. Here you have a function f that produces a program of type m b, but you're trying to return a program of type m (a -> b). Based on the a, f creates completely different programs with different execution semantics. You cannot encompass these variations in a single program of type m (a -> b), even if you can provide a proper mapping of a -> b in the program's resulting value.
This intuition doesn't really encompass the idea behind monads completely. For example, the monadic context of a list doesn't really behave like a program.
Something easy to remember is: "you can't escape from a Monad" (it's kind of by design). Transforming m [a] to [m a] is a form of escape, so you can't.
On the other hand you can easily create a Monad from something (using return) so traversing ([m a] -> m [a]) is usually possible.
If you take a look at the monad laws, a monad only requires you to define a composition function, not an inverse.
In the first example you can compose the list elements.
In the second one, Monad m => m [a] -> [m a], you cannot split an action into multiple actions (action composition is not reversible).
Example:
Let's say you have to read 2 values.
s1 <- action
s2 <- action
Doing so, the result s2 depends on the side effect made by s1.

You can bind these two actions into one action to be executed in the same order, but you cannot split them and execute the action producing s2 without s1 first having made the side effect the second one needs.
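Concretely, with IO (a sketch):

readTwo :: IO (String, String)
readTwo = do
  s1 <- getLine -- consumes the first line of input
  s2 <- getLine -- only meaningful after the first read has happened
  return (s1, s2)

The two reads are bound into one action whose effects are ordered; there is no way to recover the second read as an independent action.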
Not really an answer, and much too informal for my liking, but nevertheless I have a few interesting observations that won't fit into a comment. First, let's consider this function you refer to:
f :: Monad m => [m a] -> m [a]
This signature is in fact stronger than it needs to be. The current generalization of this is the sequenceA function from Data.Traversable:
sequenceA :: (Traversable t, Applicative f) => t (f a) -> f (t a)
...which doesn't need the full power of Monad, and can work with any Traversable and not just lists.
Second: the fact that Traversable only requires Applicative is I think really significant to this question, because applicative computations have a "list-like" structure. Every applicative computation can be rewritten to have the form f <$> a1 <*> ... <*> an for some f. Or, informally, every applicative computation can be seen as a list of actions a1, ... an (heterogeneous on the result type, homogeneous in the functor type), plus an n-place function to combine their results.
If we look at sequenceA through this lens, all it does is choose an f built out of the appropriate nested number of list constructors:
sequenceA [a1, ..., an] == f <$> a1 <*> ... <*> an
  where f v1 ... vn = v1 : ... : vn : []
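A concrete instance of that pattern, with Maybe:

-- ghci> sequenceA [Just 1, Just 2, Just 3]
-- Just [1,2,3]
-- which unfolds to
-- ghci> (\v1 v2 v3 -> v1 : v2 : v3 : []) <$> Just 1 <*> Just 2 <*> Just 3
-- Just [1,2,3]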
Now, I haven't had the chance to try and prove this yet, but my conjectures would be the following:
Mathematically speaking at least, sequenceA has a left inverse in free applicative functors. If you have a Functor f => [FreeA f a] and you sequenceA it, what you get is a list-like structure that contains those computations and a combining function that makes a list out of their results. I suspect, however, that it's not possible to write such a function in Haskell (unSequenceFreeA :: (Traversable t, Functor f) => FreeA f (t a) -> Maybe (t (FreeA f a))), because you can't pattern match on the structure of the combining function in the FreeA to tell that it's of the form f v1 ... vn = v1 : ... : vn : [].
sequenceA doesn't have a right inverse in a free applicative, however, because the combining function that produces a list out of the results from the a1, ... an actions may do anything; for example, return a constant list of arbitrary length (unrelated to the computations that the free applicative value performs).
Once you move to non-free applicative functors, there will no longer be a left inverse for sequenceA, because the non-free applicative functor's equations translate into cases where you can no longer tell apart which of two t (f a) "action lists" was the source for a given f (t a) "list-producing action."
Monads are usually explained in terms of return and bind. However, I gather you can also implement bind in terms of join (and fmap?)
In programming languages lacking first-class functions, bind is excruciatingly awkward to use. join, on the other hand, looks quite easy.
I'm not completely sure I understand how join works, however. Obviously, it has the [Haskell] type
join :: Monad m => m (m x) -> m x
For the list monad, this is trivially and obviously concat. But for a general monad, what, operationally, does this method actually do? I see what it does to the type signatures, but I'm trying to figure out how I'd write something like this in, say, Java or similar.
(Actually, that's easy: I wouldn't. Because generics is broken. ;-) But in principle the question still stands...)
Oops. It looks like this has been asked before:
Monad join function
Could somebody sketch out some implementations of common monads using return, fmap and join? (I.e., not mentioning >>= at all.) I think perhaps that might help it to sink in to my dumb brain...
Without plumbing the depths of metaphor, might I suggest to read a typical monad m as "strategy to produce a", so the type m value is a first class "strategy to produce a value". Different notions of computation or external interaction require different types of strategy, but the general notion requires some regular structure to make sense:
if you already have a value, then you have a strategy to produce a value (return :: v -> m v) consisting of nothing other than producing the value that you have;
if you have a function which transforms one sort of value into another, you can lift it to strategies (fmap :: (v -> u) -> m v -> m u) just by waiting for the strategy to deliver its value, then transforming it;
if you have a strategy to produce a strategy to produce a value, then you can construct a strategy to produce a value (join :: m (m v) -> m v) which follows the outer strategy until it produces the inner strategy, then follows that inner strategy all the way to a value.
Let's have an example: leaf-labelled binary trees...
data Tree v = Leaf v | Node (Tree v) (Tree v)
...represent strategies to produce stuff by tossing a coin. If the strategy is Leaf v, there's your v; if the strategy is Node h t, you toss a coin and continue by strategy h if the coin shows "heads", t if it's "tails".
instance Monad Tree where
  return = Leaf
A strategy-producing strategy is a tree with tree-labelled leaves: in place of each such leaf, we can just graft in the tree which labels it...
join (Leaf tree) = tree
join (Node h t) = Node (join h) (join t)
...and of course we have fmap which just relabels leaves.
instance Functor Tree where
  fmap f (Leaf x)   = Leaf (f x)
  fmap f (Node h t) = Node (fmap f h) (fmap f t)
Here's a strategy to produce a strategy to produce an Int.
Toss a coin: if it's "heads", toss another coin to decide between two strategies (producing, respectively, "toss a coin for producing 0 or producing 1" or "produce 2"); if it's "tails" produce a third ("toss a coin for producing 3 or tossing a coin for 4 or 5").
That clearly joins up to make a strategy producing an Int.
What we're making use of is the fact that a "strategy to produce a value" can itself be seen as a value. In Haskell, the embedding of strategies as values is silent, but in English, I use quotation marks to distinguish using a strategy from just talking about it. The join operator expresses the strategy "somehow produce then follow a strategy", or "if you are told a strategy, you may then use it".
(Meta. I'm not sure whether this "strategy" approach is a suitably generic way to think about monads and the value/computation distinction, or whether it's just another crummy metaphor. I do find leaf-labelled tree-like types a useful source of intuition, which is perhaps not a surprise as they're the free monads, with just enough structure to be monads at all, but no more.)
PS The type of "bind"
(>>=) :: m v -> (v -> m w) -> m w
says "if you have a strategy to produce a v, and for each v a follow-on strategy to produce a w, then you have a strategy to produce a w". How can we capture that in terms of join?
mv >>= v2mw = join (fmap v2mw mv)
We can relabel our v-producing strategy by v2mw, producing instead of each v value the w-producing strategy which follows on from it — ready to join!
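For reference, here is one way (a sketch, not the only packaging) to assemble the pieces above into code that compiles with current GHC, where Applicative sits between Functor and Monad and join is an ordinary function rather than a class method:

data Tree v = Leaf v | Node (Tree v) (Tree v)

instance Functor Tree where
  fmap f (Leaf x)   = Leaf (f x)
  fmap f (Node h t) = Node (fmap f h) (fmap f t)

instance Applicative Tree where
  pure = Leaf
  -- derived from join and fmap, like (>>=) below
  fs <*> xs = join (fmap (\f -> fmap f xs) fs)

instance Monad Tree where
  mv >>= v2mw = join (fmap v2mw mv)

-- graft tree-labelled leaves back into the tree
join :: Tree (Tree v) -> Tree v
join (Leaf t)   = t
join (Node h t) = Node (join h) (join t)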
join = concat -- []
join f = \x -> f x x -- (e ->)
join f = \s -> let (f', s') = f s in f' s' -- State
join (Just (Just a)) = Just a; join _ = Nothing -- Maybe
join (Identity (Identity a)) = Identity a -- Identity
join (Right (Right a)) = Right a; join (Right (Left e)) = Left e;
join (Left e) = Left e -- Either
join ((a, m), m') = (a, m' `mappend` m) -- Writer
-- N.B. there is a non-newtype-wrapped Monad instance for tuples that
-- behaves like the Writer instance, but with the tuple order swapped
join f = \k -> f (\f' -> f' k) -- Cont
Calling fmap (f :: a -> m b) (x :: m a) produces values (y :: m (m b)), so it is a very natural thing to use join to get back values (z :: m b).
Then bind is defined simply as bind ma f = join (fmap f ma), thus achieving the Kleisli compositionality of functions of the (:: a -> m b) variety, which is what it is really all about:
ma `bind` (f >=> g) = (ma `bind` f) `bind` g -- bind = (>>=)
= (`bind` g) . (`bind` f) $ ma
= join . fmap g . join . fmap f $ ma
And so, with flip bind = (=<<), we have
((g <=< f) =<<) = (g =<<) . (f =<<) = join . (g <$>) . join . (f <$>)
OK, so it's not really good form to answer your own question, but I'm going to note down my thinking in case it enlightens anybody else. (I doubt it...)
If a monad can be thought of as a "container", then both return and join have pretty obvious semantics. return generates a 1-element container, and join turns a container of containers into a single container. Nothing hard about that.
So let us focus on monads which are more naturally thought of as "actions". In that case, m x is some sort of action which yields a value of type x when you "execute" it. return x does nothing special, and then yields x. fmap f takes an action that yields an x, and constructs an action that computes x and then applies f to it, and returns the result. So far, so good.
It's fairly obvious that if f itself generates an action, then what you end up with is m (m x). That is, an action that computes another action. In a way, that's maybe even simpler to wrap your mind around than the >>= function which takes an action and a "function that produces an action" and so on.
So, logically speaking, it seems join would run the first action, take the action it produces, and then run that. (Or rather, join would return an action that does what I just described, if you want to split hairs.)
That seems to be the central idea. To implement join, you want to run an action, which then gives you another action, and then you run that. (Whatever "run" happens to mean for this particular monad.)
Given this insight, I can take a stab at writing some join implementations:
join Nothing = Nothing
join (Just mx) = mx
If the outer action is Nothing, return Nothing, else return the inner action. Then again, Maybe is more of a container than an action, so let's try something else...
newtype Reader s x = Reader (s -> x)
join (Reader f) = Reader (\ s -> let Reader g = f s in g s)
That was... painless. A Reader is really just a function that takes a global state and only then returns its result. So to unstack, you apply the global state to the outer action, which returns a new Reader. You then apply the state to this inner function as well.
In a way, it's perhaps easier than the usual way:
Reader f >>= g = Reader (\ s -> let Reader h = g (f s) in h s)
Now, which one is the reader function, and which one is the function that computes the next reader...?
Now let's try the good old State monad. Here every function takes an initial state as input but also returns a new state along with its output.
data State s x = State (s -> (s, x))
join (State f) = State (\ s0 -> let (s1, State g) = f s0 in g s1)
That wasn't too hard. It's basically run followed by run.
I'm going to stop typing now. Feel free to point out all the glitches and typos in my examples... :-/
I've found many explanations of monads that say "you don't have to know anything about category theory, really, just think of monads as burritos / space suits / whatever".
Really, the article that demystified monads for me just said what categories were, described monads (including join and bind) in terms of categories, and didn't bother with any bogus metaphors:
http://en.wikibooks.org/wiki/Haskell/Category_theory
I think the article is very readable without much math knowledge required.
Asking what a type signature in Haskell does is rather like asking what an interface in Java does.
It, in some literal sense, "doesn't". (Though, of course, you will typically have some sort of purpose associated with it, that's mostly in your mind, and mostly not in the implementation.)
In both cases you are declaring legal sequences of symbols in the language which will be used in later definitions.
Of course, in Java, I suppose you could say that an interface corresponds to a type signature which is going to be implemented literally in the VM. You can get some polymorphism this way -- you can define a name that accepts an interface, and you can provide a different definition for the name which accepts a different interface. Something similar happens in Haskell, where you can provide a declaration for a name which accepts one type and then another declaration for that name which treats a different type.
This is Monad explained in one picture: the two functions in the green category are not composable; when they are mapped into the blue category with join . fmap (strictly speaking, the two are one category), they become composable. Monad is about turning a function of type T -> Monad<U> into a function of type Monad<T> -> Monad<U>.
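In code, that turning is just bind with its arguments flipped; a minimal sketch (promote is my name for it, not a standard one):

import Control.Monad (join)

-- Turn a function a -> m b into a function m a -> m b.
-- This is exactly (=<<), i.e. flip (>>=).
promote :: Monad m => (a -> m b) -> (m a -> m b)
promote f = join . fmap f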
I am looking for an elegant solution to the following problem. I have two lists of the following types:
[Float] and [[Float]]
The first list contains an infinite number of random values. The second list contains values I no longer care about; its structure is finite and must be preserved. The values of the first list need to replace those of the second.
Obviously, since the first list contains random values, I do not want to use them twice. Can anyone help me do this in a clear, concise, and terse way?
scramble :: [Float] -> [[Float]] -> [[Float]]
Give me your best shot
Using the split package for splitting:
import Data.List.Split (splitPlaces)
scramble x y = splitPlaces (map length y) x
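A quick GHCi check (assuming the split package is installed):

-- ghci> scramble [1..] [[0,0],[0],[0,0,0]] :: [[Float]]
-- [[1.0,2.0],[3.0],[4.0,5.0,6.0]]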
Will this do?
flip . (evalState .) . traverse . traverse . const . state $ head &&& tail
EDIT: let me expand on the construction...
The essential centre of it is traverse . traverse. If you stare at the problem with sufficiently poor spectacles, you can see that it's "do something with the elements of a container of containers". For that sort of thing, traverse (from Data.Traversable) is a very useful gadget (ok, I'm biased).
traverse :: (Traversable f, Applicative a) => (s -> a t) -> f s -> a (f t)
or, if I change to longer but more suggestive type variables
traverse :: (Traversable containerOf, Applicative doingSomethingToGet) =>
            (s -> doingSomethingToGet t) ->
            containerOf s -> doingSomethingToGet (containerOf t)
Crucially, traverse preserves the structure of the container it operates on, whatever that might be. If you view traverse as a higher-order function, you can see that it gives back an operator on containers whose type fits with the type of operators on elements it demands. That's to say (traverse . traverse) makes sense, and gives you structure-preserving operations on two layers of container.
traverse . traverse ::
  (Traversable g, Traversable f, Applicative a) => (s -> a t) -> g (f s) -> a (g (f t))
So we've got the key gadget for structure-preserving "do something" operations on lists of lists. The length and splitAt approach works fine for lists (the structure of a list is given by its length), but the essential characteristic of lists which enables that approach is already pretty much bottled by the Traversable class.
Now we need to figure out how to "do something". We want to replace the old elements with new things drawn successively from a supply stream. If we were allowed the side effect of updating the supply, we could say what to do at each element: "return the head of the supply, updating the supply with its tail". The State s monad (from Control.Monad.State; it is an instance of Applicative, via Control.Applicative) lets us capture that idea. The type State s a represents computations which deliver a value of type a whilst mutating a state of type s. Typical such computations are made by this gadget:
state :: (s -> (a, s)) -> State s a
That's to say, given an initial state, just compute the value and the new state. In our case, s is a stream, head gets the value, tail gets the new state. The &&& operator (from Control.Arrow) is a nice way to glue two functions on the same data to get a function making a pair. So
head &&& tail :: [x] -> (x, [x])
which makes
state $ head &&& tail :: State [x] x
and thus
const . state $ head &&& tail :: u -> State [x] x
explains what to "do" with each element of the old container, namely ignore it and take a new element from the head of the supply stream.
Feeding that into (traverse . traverse) gives us a big mutatey traversal of type
f (g u) -> State [x] (f (g x))
where f and g are any Traversable structures (e.g. lists).
Now, to extract the function we want, taking the initial supply stream, we need to unpack the state-mutating computation as a function from initial state to final value. That's what this does:
evalState :: State s a -> s -> a
So we end up with something in
f (g u) -> [x] -> f (g x)
which had better get flipped if it's to match the original spec.
tl;dr The State [x] monad is a readymade tool for describing computations which read and update an input stream. The Traversable class captures a readymade notion of structure-preserving operation on containers. The rest is plumbing (and/or golf).
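Spelled out as a compilable module (a sketch using the mtl State; it is the same function as the point-free one-liner above):

import Control.Arrow ((&&&))
import Control.Monad.State (State, evalState, state)

scramble :: [Float] -> [[Float]] -> [[Float]]
scramble supply structure =
  evalState (traverse (traverse draw) structure) supply
  where
    -- ignore the old element; pop the next value off the supply
    draw :: Float -> State [Float] Float
    draw _ = state (head &&& tail)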
This is the obvious way to do it, but I take it this isn't terse enough?
scramble :: [a] -> [[a]] -> [[a]]
scramble _ [] = []
scramble xs (y : ys) = some : scramble rest ys
  where (some, rest) = splitAt (length y) xs
I'm trying to understand what a functor is. I found this tutorial/example:
http://en.wikibooks.org/wiki/Haskell/Solutions/Applicative_Functors
data Tree a = Node a [Tree a]
The Functor instance for the above type would be:
instance Functor Tree where
  fmap f (Node a ts) = Node (f a) (map (fmap f) ts)
Could someone help explain what exactly they have done and why? My understanding is that a functor allows you to iterate over a data type. I can't seem to understand the syntax used, though.
A Functor is useful for mapping between two data representations. Sometimes that might resemble iteration, sometimes not. Having this common Functor typeclass allows us to ignore the actual structure of the data type (Maybe, List, Tree) and focus only on the data it contains. The author of that data type should know how the structure is traversed/iterated, so they should provide the implementation (in the form of a Functor instance for that data type). All we have to provide is the function f which takes an a and maps it to a b. For example:
import Data.Char (toLower)
data Tree a = Node a [Tree a]
  deriving Show

instance Functor Tree where
  fmap f (Node a ts) = Node (f a) (map (fmap f) ts)

main :: IO ()
main = do
  print (toLower `fmap` (Node 'F' [])) -- Node 'f' []
  print (toLower `fmap` (Just 'F'))    -- Just 'f'
  print (toLower `fmap` "FOO")         -- "foo"
We were able to lowercase those chars using the same toLower function combined with fmap.
So, what you should do when defining a Functor instance is to extract the inner data using pattern matching, and then apply the received function f to each of those results.
An infix synonym for fmap can be found in the Control.Applicative module, called <$>.
main :: IO ()
main = do
  print (toLower <$> (Node 'F' [])) -- Node 'f' []
  print (toLower <$> (Just 'F'))    -- Just 'f'
  print (toLower <$> "FOO")         -- "foo"
I've never seen Haskell before in my life, but I'm guessing that it's defining a data type (called Tree) that consists of a Node that contains a value, and an array of Trees (which would be the branches of the original tree). It then defines a function that operates on a function and a tree, and creates a new tree by applying the function to the Node's value, and applying itself recursively to all the branches in the array (using the map function as a shortcut).
Basically, in Haskell, you can think of a functor as:
a box that contains a value in a special context (IO, Maybe, Either a)
a structure that holds multiple values (Tree, Map a, List)
Additionally, a functor has an operation -- fmap -- that understands its specific structure.
Using fmap, you can easily apply a structure-preserving transformation to a functor. For example, fmap (+ 1) is a function that adds 1 to any functor:
Prelude> fmap (+ 1) [1,2] -- using a list functor
[2,3]
Prelude> fmap (+ 1) (Just 2) -- using a maybe functor
Just 3
In the example you've given, Tree is given a Functor instance -- an implementation of fmap -- that understands the structure of a tree, and abstracts that away for you.
A great resource for Functors, Applicative Functors, Monads, Monoids is Learn You A Haskell.