How can Data.IntMap.Strict be kept strict after a traversal? - haskell

Data.IntMap.Strict docs say:
Be aware that the Functor, Traversable and Data instances are the same
as for the Data.IntMap.Lazy module, so if they are used on strict
maps, the resulting maps will be lazy.
I use IntMap.traverseWithKey (which runs in an Applicative f) because I want both a mapWithKey and a mapRWithKey, and the latter doesn't exist; instead I traverse a strict map with the Backwards applicative. How can I keep the map strict after an Applicative traversal like this?

You can use mapAccumWithKey and mapAccumRWithKey and simply ignore the accumulator argument. After optimization this is most likely exactly as fast as a dedicated mapWithKey would be.
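For example, a strict mapWithKey could be recovered along these lines (a sketch; strictMapWithKey is just an illustrative name, and it relies on the Strict module's mapAccumWithKey forcing the new values):
import qualified Data.IntMap.Strict as IM

-- mapAccumWithKey with a dummy () accumulator behaves like a strict mapWithKey.
strictMapWithKey :: (IM.Key -> a -> b) -> IM.IntMap a -> IM.IntMap b
strictMapWithKey f = snd . IM.mapAccumWithKey (\acc k v -> (acc, f k v)) ()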
EDIT: if you're doing monadic traversals and would like the resulting IntMap-s to be strict, you can achieve that by returning strict values inside the monadic action.
import Data.IntMap.Strict
import Control.Applicative
m :: IntMap Int
m = fromList $ zip [0..] (replicate 10 0)
traverse (\n -> Just (n + 100)) m returns a Just-wrapped map whose values are n + 100 thunks, while traverse (\n -> Just $! n + 100) m returns a map containing evaluated Int-s. Likewise, use return $! x in other monads to get strict results.
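The same trick works with traverseWithKey, which the question uses; a minimal sketch in IO, reusing the unqualified import above (the function name and the k + v arithmetic are only illustrative):
strictTraverseWithKeyIO :: IntMap Int -> IO (IntMap Int)
strictTraverseWithKeyIO = traverseWithKey (\k v -> return $! k + v)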

Related

Can you determine the min or max of a list using only the list monad?

I'm trying to understand the relation between Monad and Foldable. I am aware that part of the value of the Monad, Applicative and Functor typeclasses is their ability to lift functions over structure, but what if I wanted to generate a summary value (e.g. min or max) for the values contained in a Monad?
This would be impossible without an accumulator, right (as in Foldable)? And to have an accumulator, you have to inject or destroy structure?
min :: Ord a => a -> a -> a

foldMin :: (Foldable t, Ord a) => t a -> Maybe a
foldMin t = foldr go Nothing t
  where
    go x Nothing = Just x
    go x (Just y) = Just (min x y)
Here, the Nothing value is the accumulator. So it would not be possible to do an operation that produces a summary value like this within the confines of a do block?
I'm not entirely sure I understand the question, so forgive me if this isn't a useful answer, but as I understand it, the core of the question is this:
So it would not be possible to do an operation that produces a summary value like this within the confines of a do block?
Correct, that would not be possible. Haskell's do notation is syntactic sugar over Monad, so basically syntactic sugar over >>= and return.
return, as you know, doesn't let you 'access' the contents of the Monad, so the only access to the contents you have is via >>=, and in the case of the list monad, for instance, that only gives you one value at a time.
Notice that Foldable doesn't even require that the data container is a Functor (much less a Monad). Famously, Set has no Functor instance, but it does have a Foldable instance.
You can, for example, find the minimum value in a set:
Prelude Data.Foldable Set> foldr (\x -> Just . maybe x (min x)) Nothing $ Set.fromList [42, 1337, 90125, 2112]
Just 42
The contrived and inefficient code below is the closest I can get to "using only the list monad". This is probably not what the OP is looking for, but here it is.
I also use head (which you could replace with listToMaybe for totality) and null, as well as empty (which you could replace with []).
The code works by non-deterministically picking an element m and then checking that no greater element exists. This has quadratic complexity.
import Control.Applicative

maximum :: Ord a => [a] -> a
maximum xs = head maxima
  where
    isMax m = null $ do
      x <- xs
      if x > m
        then return x
        else empty
    maxima = do
      m <- xs -- non-deterministically pick a maximum
      if isMax m
        then return m
        else empty
I'm also not sure what the actual question is, but the need for an accumulator can be hidden behind a Monoid instance. Then, for your minimum example, you can use foldMap from Data.Foldable to map and merge all values of your Foldable. E.g.:
data Min a = Min { getMin :: Maybe a } deriving Show

-- On modern GHC (8.4+), Monoid requires a Semigroup instance.
instance Ord a => Semigroup (Min a) where
  a <> Min Nothing = a
  Min Nothing <> b = b
  Min (Just a) <> Min (Just b) = Min (Just (min a b))

instance Ord a => Monoid (Min a) where
  mempty = Min Nothing

foldMin :: (Foldable t, Ord a) => t a -> Maybe a
foldMin = getMin . foldMap (Min . Just)
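For instance, with the definitions above in scope:
foldMin [3, 1, 2]      -- Just 1
foldMin ([] :: [Int])  -- Nothing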

Haskell's (<-) in Terms of the Natural Transformations of Monad

So I'm playing around with the hasbolt module in GHCi and I had a curiosity about some desugaring. I've been connecting to a Neo4j database by creating a pipe as follows
ghci> pipe <- connect $ def {credentials}
and that works just fine. However, I'm wondering what the type of the (<-) operator is (GHCi won't tell me). Most desugaring explanations describe that
do x <- a
return x
desugars to
a >>= (\x -> return x)
but what about just the line x <- a?
It doesn't help me to add in the return because I want pipe :: Pipe not pipe :: Control.Monad.IO.Class.MonadIO m => m Pipe, but (>>=) :: Monad m => m a -> (a -> m b) -> m b so trying to desugar using bind and return/pure doesn't work without it.
Ideally it seems like it'd be best to just make a Comonad instance to enable using extract :: Monad m => m a -> a as pipe = extract $ connect $ def {creds} but it bugs me that I don't understand (<-).
Another oddity is that, treating (<-) as a Haskell function, its first argument is an out-of-scope variable, but that wouldn't mean that
(<-) :: a -> m b -> b
because not just anything can be used as a free variable. For instance, you couldn't bind the pipe to a Num type or a Bool. The variable has to be a "String"ish thing, except it never actually is a String; and you definitely can't try binding to an actual String. So it seems as if it isn't a Haskell function in the usual sense (unless there is a class of functions that take values from the free-variable namespace... unlikely). So what is (<-) exactly? Can it be replaced entirely by using extract? Is that the best way to desugar/circumvent it?
I'm wondering what the type of the (<-) operator is ...
<- doesn't have a type, it's part of the syntax of do notation, which as you know is converted to sequences of >>= and return during a process called desugaring.
but what about just the line x <- a ...?
That's a syntax error in normal Haskell code, and the compiler would complain. The reason the line:
ghci> pipe <- connect $ def {credentials}
works in GHCi is that the REPL is a sort of do block; you can think of each entry as a line in your main function (it's a bit more hairy than that, but that's a good approximation). That's also why (until recently) you needed to say let foo = bar in GHCi to declare a binding.
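A self-contained analogy (a sketch, not the hasbolt code from the question): binding a name at the GHCi prompt behaves much like a binding inside main's do block.
main :: IO ()
main = do
  line <- getLine   -- like typing: ghci> line <- getLine
  putStrLn line     -- later prompt lines can refer to 'line', just as this one does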
Ideally it seems like it'd be best to just make a Comonad instance to enable using extract :: Monad m => m a -> a as pipe = extract $ connect $ def {creds} but it bugs me that I don't understand (<-).
Comonad has nothing to do with Monads. In fact, most Monads don't have any valid Comonad instance. Consider the [] Monad:
instance Monad [] where
  return x = [x]
  xs >>= f = concat (map f xs)
If we try to write a Comonad instance, we can't define extract :: m a -> a
instance Comonad [] where
  extract (x:_) = x
  extract [] = ???
This tells us something interesting about Monads, namely that we can't write a general function with the type Monad m => m a -> a. In other words, we can't "extract" a value from a Monad without additional knowledge about it.
So how does the do-notation syntax do {x <- [1,2,3]; return [x,x]} work?
Since <- is just syntactic sugar, much like how [1,2,3] means 1 : 2 : 3 : [], the above expression means [1,2,3] >>= (\x -> return [x,x]), which in turn evaluates to concat (map (\x -> [[x,x]]) [1,2,3]), which comes out to [[1,1],[2,2],[3,3]].
Notice how the arrow transformed into a >>= and a lambda. This uses only built-in (in the typeclass) Monad functions, so it works for any Monad in general.
We can pretend to extract a value by using (>>=) :: Monad m => m a -> (a -> m b) -> m b and working with the "extracted" a inside the function we provide, like in the lambda in the list example above. However, it is impossible to actually get a value out of a Monad in a generic way, which is why the result type of >>= is m b, still inside the Monad.
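To make the correspondence concrete, here is that do block and its hand-desugared form as compilable definitions (the names sugared and desugared are only illustrative):
sugared :: [[Int]]
sugared = do { x <- [1,2,3]; return [x,x] }

desugared :: [[Int]]
desugared = [1,2,3] >>= (\x -> return [x,x])

-- both evaluate to [[1,1],[2,2],[3,3]]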
So what is (<-) exactly? Can it be replaced entirely by using extract? Is that the best way to desugar/circumvent it?
Note that the do-block <- and extract mean very different things even for types that have both Monad and Comonad instances. For instance, consider non-empty lists. They have instances of both Monad (which is very much like the usual one for lists) and Comonad (with extend/=>> applying a function to all suffixes of the list). If we write a do-block such as...
import qualified Data.List.NonEmpty as N
import Data.List.NonEmpty (NonEmpty(..))
import Data.Function ((&))

alternating :: NonEmpty Integer
alternating = do
  x <- N.fromList [1..6]
  -x :| [x]
... the x in x <- N.fromList [1..6] stands for the elements of the non-empty list; however, this x must be used to build a new list (or, more generally, to set up a new monadic computation). That, as others have explained, reflects how do-notation is desugared. It becomes easier to see if we make the desugared code look like the original one:
alternating :: NonEmpty Integer
alternating =
  N.fromList [1..6] >>= \x ->
    -x :| [x]
GHCi> alternating
-1 :| [1,-2,2,-3,3,-4,4,-5,5,-6,6]
The lines below x <- N.fromList [1..6] in the do-block amount to the body of a lambda. x <- in isolation is therefore akin to a lambda without a body, which is not a meaningful thing.
Another important thing to note is that x in the do-block above does not correspond to any one single Integer, but rather to all Integers in the list. That already gives away that <- does not correspond to an extraction function. (With other monads, the x might even correspond to no values at all, as in x <- Nothing or x <- []. See also Lazersmoke's answer.)
On the other hand, extract does extract a single value, with no ifs or buts...
GHCi> extract (N.fromList [1..6])
1
... however, it is really a single value: the tail of the list is discarded. If we want to use the suffixes of the list, we need extend/(=>>)...
GHCi> N.fromList [1..6] =>> product =>> sum
1956 :| [1236,516,156,36,6]
If we had a co-do-notation for comonads (cf. this package and the links therein), the example above might get rewritten as something in the vein of:
-- codo introduces a function: x & f = f x
N.fromList [1..6] & codo xs -> do
  ys <- product xs
  sum ys
The statements would correspond to plain values; the bound variables (xs and ys), to comonadic values (in this case, to list suffixes). That is exactly the opposite of what we have with monadic do-blocks. All in all, as far as your question is concerned, switching to comonads just swaps which things we can't refer to outside of the context of a computation.

Haskell -- Is there a monad sequence function for tuples?

Suppose I have a value of type Monad m => (m a, m a), and I want to "sequence" the pair to create a value of type Monad m => m (a, a) that combines the monadic context of the two values in the same way the "sequence" function does. Is there some standard function or standard way of doing this? And does this operation even make sense?
ghci> import Control.Lens
ghci> sequenceOf both (getLine, getLine)
Apples
Bananas
("Apples","Bananas")
There wouldn't be a single function for all the different tuple types, since it wouldn't have a single type.
You could define a family of functions like:
ts0 = return
ts2 = uncurry $ liftM2 (,)
ts3 = uncurr3 $ liftM3 (,,)
{- ... -}
uncurr3 f (x, y, z) = f x y z
Of course, sequence in general is better applied to Applicatives than to Monads, which is why it is part of the Traversable typeclass. It would be possible to make homogeneous tuples [(a,a,a) but not (a,b,a)] an instance of MonoTraversable, I believe.
You should also see another answer which indicates there is already a library containing this family of functions.
The tuple package has Data.Tuple.Sequence.sequenceT which is overloaded for up to 32-tuples.
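A hedged usage sketch (check the tuple package's documentation for the exact instances it provides; pairExample is just an illustrative name):
import Data.Tuple.Sequence (sequenceT)

pairExample :: Maybe (Int, Char)
pairExample = sequenceT (Just 1, Just 'a')   -- Just (1,'a')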

Haskell - Is effect order deterministic in case of Applicative?

When executing the IO action defined by someFun <$> (a :: IO ()) <*> (b :: IO ()), is the execution of the a and b actions ordered? That is, can I count on a being executed before b?
For GHC, I can see that IO is implemented using a state-passing type, and that it has an Applicative instance, but I can't find the source of the actual instance declaration. Being implemented through state passing suggests that different IO effects need to be sequenced, but it doesn't necessarily define their ordering.
Playing around in GHCi suggests that Applicative retains effect order, but is that a universal guarantee, or GHC-specific? I would be interested in the details.
import System.Time
import Control.Concurrent
import Data.Traversable
let prec (TOD a b) = b
fmap (map prec) (sequenceA $ replicate 5 (threadDelay 1000 >> getClockTime))
[641934000000,642934000000,643934000000,644934000000,645934000000]
Thanks!
It's certainly deterministic, yes. It will always do the same thing for any specific instance. However, there's no inherent reason to choose left-to-right over right-to-left for the order of effects.
However, from the documentation for Applicative:
If f is also a Monad, it should satisfy pure = return and (<*>) = ap (which implies that pure and <*> satisfy the applicative functor laws).
The definition of ap is this, from Control.Monad:
ap :: (Monad m) => m (a -> b) -> m a -> m b
ap = liftM2 id
And liftM2 is defined in the obvious way:
liftM2 f m1 m2 = do { x1 <- m1; x2 <- m2; return (f x1 x2) }
What this means is that, for any type that is both a Monad and an Applicative, it is expected (by specification, since this can't be enforced in the code) that the Applicative instance will work left-to-right, so that the do block in liftM2 does the same thing as liftA2 f x y = f <$> x <*> y.
Because of the above, even for Applicative instances without a corresponding Monad, by convention the effects are usually ordered left-to-right as well.
More broadly, because the structure of an Applicative computation is necessarily independent of the "effects", you can usually analyze the meaning of a program independently of how Applicative effects are sequenced. For example, if the instance for [] were changed to sequence right-to-left, any code using it would give the same results, just with the list elements in a different order.
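A quick way to observe the left-to-right ordering for IO in GHCi (the printed order is the point of the example):
GHCi> (,) <$> putStrLn "first" <*> putStrLn "second"
first
second
((),())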
Yes, the order is predefined by the Monad-Applicative correspondence. This is easy to see: The (*>) combinator needs to correspond to the (>>) combinator in a well-behaved Applicative instance for a monad, and its definition is:
a *> b = liftA2 (const id) a b
In other words, if b were executed before a, the Applicative instance would be ill-behaved.
Edit: As a side note: This is not explicitly specified anywhere, but you can find many other similar correspondences like liftM2 = liftA2, etc.
For the IO Applicative, this is certainly the case. But check out the async package for an example of an Applicative where in f <$> a <*> b the effects of a and b happen in parallel.
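For reference, a minimal sketch of that async example, assuming the Concurrently wrapper from Control.Concurrent.Async (the String results are placeholders):
import Control.Concurrent.Async (Concurrently(..))

-- The two wrapped actions run concurrently rather than strictly left to right.
inParallel :: IO (String, String)
inParallel = runConcurrently $
  (,) <$> Concurrently (return "a")
      <*> Concurrently (return "b")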

Can someone explain the traverse function in Haskell?

I am trying and failing to grok the traverse function from Data.Traversable. I am unable to see its point. Since I come from an imperative background, can someone please explain it to me in terms of an imperative loop? Pseudo-code would be much appreciated. Thanks.
traverse is the same as fmap, except that it also allows you to run effects while you're rebuilding the data structure.
Take a look at the example from the Data.Traversable documentation.
data Tree a = Empty | Leaf a | Node (Tree a) a (Tree a)
The Functor instance of Tree would be:
instance Functor Tree where
  fmap f Empty = Empty
  fmap f (Leaf x) = Leaf (f x)
  fmap f (Node l k r) = Node (fmap f l) (f k) (fmap f r)
It rebuilds the entire tree, applying f to every value.
instance Traversable Tree where
  traverse f Empty = pure Empty
  traverse f (Leaf x) = Leaf <$> f x
  traverse f (Node l k r) = Node <$> traverse f l <*> f k <*> traverse f r
The Traversable instance is almost the same, except that the constructors are applied in applicative style. This means that we can have (side-)effects while rebuilding the tree. Applicative is almost the same as Monad, except that effects cannot depend on previous results. In this example it means that you could not, say, do something different to the right branch of a node depending on the result of rebuilding the left branch.
For historical reasons, the Traversable class also contains a monadic version of traverse called mapM. For all intents and purposes mapM is the same as traverse - it exists as a separate method because Applicative only later became a superclass of Monad.
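For instance, using the Tree instance above with Maybe as the applicative (a small sketch; halve is just an illustrative helper):
halve :: Int -> Maybe Int
halve x = if even x then Just (x `div` 2) else Nothing

-- traverse halve (Node (Leaf 2) 4 (Leaf 6))  gives  Just (Node (Leaf 1) 2 (Leaf 3))
-- traverse halve (Node (Leaf 2) 3 (Leaf 6))  gives  Nothing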
If you were to implement this in an impure language, fmap would be the same as traverse, as there is no way to prevent side effects. You can't implement it as a simple loop, as you have to traverse your data structure recursively. Here's a small example of how I would do it in JavaScript:
Node.prototype.traverse = function (f) {
  return new Node(this.l.traverse(f), f(this.k), this.r.traverse(f));
};
Implementing it like this limits you to the effects that the language allows though. If you f.e. want non-determinism (which the list instance of Applicative models) and your language doesn't have it built-in, you're out of luck.
traverse turns things inside a Traversable into a Traversable of things "inside" an Applicative, given a function that makes Applicatives out of things.
Let's use Maybe as Applicative and list as Traversable. First we need the transformation function:
half x = if even x then Just (x `div` 2) else Nothing
So if a number is even, we get half of it (inside a Just), else we get Nothing. If everything goes "well", it looks like this:
traverse half [2,4..10]
--Just [1,2,3,4,5]
But...
traverse half [1..10]
-- Nothing
The reason is that the <*> function is used to build the result, and when one of the arguments is Nothing, we get Nothing back.
Another example:
rep x = replicate x x
This function generates a list of length x with the content x, e.g. rep 3 = [3,3,3]. What is the result of traverse rep [1..3]?
We get the partial results of [1], [2,2] and [3,3,3] using rep. Now the semantics of lists as Applicatives is "take all combinations", e.g. (+) <$> [10,20] <*> [3,4] is [13,14,23,24].
"All combinations" of [1] and [2,2] are two times [1,2]. All combinations of two times [1,2] and [3,3,3] are six times [1,2,3]. So we have:
traverse rep [1..3]
--[[1,2,3],[1,2,3],[1,2,3],[1,2,3],[1,2,3],[1,2,3]]
I think it's easiest to understand in terms of sequenceA, as traverse can be defined as follows.
traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)
traverse f = sequenceA . fmap f
sequenceA sequences together the elements of a structure from left to right, returning a structure with the same shape containing the results.
sequenceA :: (Traversable t, Applicative f) => t (f a) -> f (t a)
sequenceA = traverse id
You can also think of sequenceA as reversing the order of two functors, e.g. going from a list of actions into an action returning a list of results.
So traverse takes some structure, and applies f to transform every element in the structure into some applicative, it then sequences up the effects of those applicatives from left to right, returning a structure with the same shape containing the results.
You can also compare it to Foldable, which defines the related function traverse_.
traverse_ :: (Foldable t, Applicative f) => (a -> f b) -> t a -> f ()
So you can see that the key difference between Foldable and Traversable is that the latter allows you to preserve the shape of the structure, whereas the former requires you to fold the result up into some other value.
A simple example of its usage is using a list as the traversable structure, and IO as the applicative:
λ> import Data.Traversable
λ> let qs = ["name", "quest", "favorite color"]
λ> traverse (\thing -> putStrLn ("What is your " ++ thing ++ "?") *> getLine) qs
What is your name?
Sir Lancelot
What is your quest?
to seek the holy grail
What is your favorite color?
blue
["Sir Lancelot","to seek the holy grail","blue"]
While this example is rather unexciting, things get more interesting when traverse is used on other types of containers, or using other applicatives.
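For example, with Maybe as the Traversable and Either as the Applicative (a quick sketch):
GHCi> traverse (\x -> if x > 0 then Right x else Left "not positive") (Just 3)
Right (Just 3)
GHCi> traverse (\x -> if x > 0 then Right x else Left "not positive") Nothing
Right Nothing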
It's kind of like fmap, except that you can run effects inside the mapper function, which also changes the result type.
Imagine a list of integers representing user IDs in a database: [1, 2, 3]. If you want to fmap these user IDs to usernames, you can't use a traditional fmap, because inside the function you need to access the database to read the usernames (which requires an effect -- in this case, using the IO monad).
The signature of traverse is:
traverse :: (Traversable t, Applicative f) => (a -> f b) -> t a -> f (t b)
With traverse, you can do effects, therefore, your code for mapping user IDs to usernames looks like:
mapUserIDsToUsernames :: (Int -> IO String) -> [Int] -> IO [String]
mapUserIDsToUsernames fn ids = traverse fn ids
There's also a function called mapM:
mapM :: (Traversable t, Monad m) => (a -> m b) -> t a -> m (t b)
Any use of mapM can be replaced with traverse, but not the other way around. mapM only works for monads, whereas traverse is more generic.
If you just want the effects and don't need the resulting values, there are traverse_ and mapM_ versions of these functions, both of which ignore the return value from the function and are slightly faster because they don't build up a result structure.
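A minimal sketch of the effect-only variant (printAll is just an illustrative name):
import Data.Foldable (traverse_)

printAll :: [Int] -> IO ()
printAll = traverse_ print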
traverse is the loop. Its implementation depends on the data structure to be traversed: that might be a list, a tree, Maybe, a Seq(uence), or anything that has a generic way of being traversed, the kind of thing you would otherwise do with a for loop or a recursive function. An array would use a for loop, a linked list a while loop, a tree either recursion or a stack combined with a while loop; in functional languages you don't need these cumbersome loop constructs: you combine the inner part of the loop (in the shape of a function) with the data structure in a more direct and less verbose manner.
With the Traversable typeclass, you can write your algorithms in a more independent and versatile way. But in my experience, Traversable is usually just used to glue algorithms onto existing data structures. It is also quite nice not to have to write similar functions for different data types.
