fmap print value doesn't print anything - haskell

Why doesn't the following print anything:
λ> fmap print (pure 2)
Whereas something like this works:
λ> fmap id (pure 2)
2

Follow the types:
fmap print (pure 2) :: Applicative f => f (IO ())
fmap id (pure 2) :: (Num b, Applicative f) => f b
Let's replace f with IO:
fmap print (pure 2) :: IO (IO ()) -- #1
fmap id (pure 2) :: (Num b) => IO b -- #2
Now you can clearly see that #2 is an action with a numeric result, whereas #1 is an action with another action as a result.
Also, GHCi has the following rules about interactive evaluation:
2.4. Interactive evaluation at the prompt
When you type an expression at the prompt, GHCi immediately evaluates and prints the result:
2.4.1. I/O actions at the prompt
GHCi does more than simple expression evaluation at the prompt. If you type something of type IO a for some a, then GHCi executes it as an IO-computation.
Furthermore, GHCi will print the result of the I/O action if (and only if):
The result type is an instance of Show.
The result type is not ().
Since the result type here is IO (), which is not an instance of Show, GHCi won't print the result of the IO action.
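If you actually want the number printed, you have to collapse the nested action yourself; a minimal sketch using join (or, equivalently, >>= id):
import Control.Monad (join)

main :: IO ()
main = do
  join (fmap print (pure 2))   -- collapses IO (IO ()) to IO (), printing 2
  fmap print (pure 2) >>= id   -- the same thing spelled with (>>=)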

Your expression returns a computation (it does not perform it).
You want
fmap print (pure 2) >>= id
with that, we take your computation and run it inside the monad.
Look at your types:
fmap print (pure 2) :: Applicative f => f (IO ())
Note
fmap print (pure 2) >>= id -- #1
is very different from (though with the same "visual" result)
fmap id (pure 2) -- #2
with #1 we take a computation and run it inside the monad, printing a number;
with #2 we take a number and map id over it with fmap, returning a number
(looking at the types again:
fmap id (pure 2) :: (Num b, Applicative f) => f b)
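Both spellings indeed look the same at the GHCi prompt, though for different reasons (a hypothetical session):
λ> fmap print (pure 2) >>= id
2
λ> fmap id (pure 2)
2
The first 2 is printed by print; the second is GHCi showing the numeric result.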

Related

Standard combinator to get first "non-empty" value from a set of monadic actions

I'm sure I am missing something very obvious here. Here's what I'm trying to achieve at a conceptual level:
action1 :: (MonadIO m) => m [a]
action1 = pure []
action2 :: (MonadIO m) => m [a]
action2 = pure [1, 2, 3]
action3 :: (MonadIO m) => m [a]
action3 = error "should not get evaluated"
someCombinator [action1, action2, action3] == m [1, 2, 3]
Does this hypothetical someCombinator exist? I have tried playing around with <|> and msum but couldn't get what I want.
I guess, this could be generalised in two ways:
-- Will return the first monadic value that is NOT an mempty
-- (should NOT blindly execute all monadic actions)
-- This is something like the msum function
someCombinator :: (Monoid a, Monad m, Traversable t, Eq a) => t (m a) -> m a
-- OR
-- this is something like the <|> operator
someCombinator :: (Monad m, Alternative f) => m (f a) -> m (f a) -> m (f a)
I'm not aware of a library that provides this, but it's not hard to implement:
someCombinator :: (Monoid a, Monad m, Foldable t, Eq a) => t (m a) -> m a
someCombinator = foldr f (pure mempty)
  where
    f item next = do
      a <- item
      if a == mempty then next else pure a
Note that you don't even need Traversable: Foldable is enough.
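As a quick check of the combinator above, here is a sketch with the actions from the question, specialised to IO and a concrete element type so that they compile:
action1, action2, action3 :: IO [Int]
action1 = pure []
action2 = pure [1, 2, 3]
action3 = error "should not get evaluated"

main :: IO ()
main = someCombinator [action1, action2, action3] >>= print
-- prints [1,2,3]; action3 is never executed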
On an abstract level, the first non-empty value is a Monoid called First. It turns out, however, that if you just naively lift your IO values into First, you'll have a problem with action3, since the default monoidal append operation is strict under IO.
You can get lazy monoidal computation using the FirstIO type from this answer. It's not going to be better than Fyodor Soikin's answer, but it highlights (I hope) how you can compose behaviour from universal abstractions.
Apart from the above-mentioned FirstIO wrapper, you may find this function useful:
guarded :: Alternative f => (a -> Bool) -> a -> f a
guarded p x = if p x then pure x else empty
I basically just copied it from Protolude since I couldn't find one in base that has the desired functionality. You can use it to wrap your lists in Maybe so that they'll fit with FirstIO:
> guarded (not . null) [] :: Maybe [Int]
Nothing
> guarded (not . null) [1, 2, 3] :: Maybe [Int]
Just [1,2,3]
Do this for each action in your list of actions and wrap them in FirstIO.
> :t (firstIO . fmap (guarded (not . null))) <$> [action1, action2, action3]
(firstIO . fmap (guarded (not . null))) <$> [action1, action2, action3]
:: Num a => [FirstIO [a]]
In the above GHCi snippet, I'm only showing the type with :t. I can't show the value, since FirstIO has no Show instance. The point, however, is that you now have a list of FirstIO values from which mconcat will pick the first non-empty value:
> getFirstIO $ mconcat $ (firstIO . fmap (guarded (not . null))) <$> [action1, action2, action3]
Just [1,2,3]
If you want to unpack the Maybe, you can use fromMaybe from Data.Maybe:
answer :: IO [Integer]
answer =
  fromMaybe [] <$>
    (getFirstIO $ mconcat $ (firstIO . fmap (guarded (not . null))) <$> [action1, action2, action3])
This is clearly more complex than Fyodor Soikin's answer, but I'm fascinated by how Haskell enables you to assemble the desired functionality by sort of 'clicking together' existing things, almost like Lego bricks.
So, to the question of does this combinator already exist? the answer is that it sort of does, but some assembly is required.

Keeping IO lazy under append

I may have been under the false impression that Haskell is lazier than it is, but I wonder if there's a way to get the best of both worlds...
Data.Monoid and Data.Semigroup define two variations of First. The monoidal version models the leftmost, non-empty value, whereas the semigroup version simply models the leftmost value.
This works fine for pure values, but consider impure values:
x = putStrLn "x" >> return 42
y = putStrLn "y" >> return 1337
Both of these values have the type Num a => IO a. IO a is a Semigroup instance when a is:
instance Semigroup a => Semigroup (IO a)
-- Defined in `Data.Orphans'
This means that it's possible to combine two IO (First a) values:
Prelude Data.Semigroup Data.Orphans> fmap First x <> fmap First y
x
y
First {getFirst = 42}
As we can see, though, both x and y produce their respective side-effects, even though y is never required.
The same applies for Data.Monoid:
Prelude Data.Monoid> fmap (First . Just) x <> fmap (First . Just) y
x
y
First {getFirst = Just 42}
I think I understand why this happens, given that both the Semigroup and Monoid instances use liftA2, which seems to ultimately be based on IO bind, which is strict, as far as I understand it.
If I dispense with the First abstraction(s), however, I can get lazier evaluation:
first x _ = x
mfirst x y = do
  x' <- x
  case x' of
    (Just _) -> return x'
    Nothing -> y
Using both of those ignores y:
Prelude> first x y
x
42
Prelude> mfirst (fmap Just x) (fmap Just y)
x
Just 42
In both of these cases, y isn't printed.
My question is, then:
Can I get the best of both worlds? Is there a way that I can retain the Semigroup or Monoid abstraction, while still getting lazy IO?
Is there, for example, some sort of LazyIO container that I can wrap First values in, so that I get the lazy IO I'd like to have?
The actual scenario I'm after is that I'd like to query a prioritised list of IO resources for data, and use the first one that gives me a useful response. I don't, however, want to perform redundant queries (for performance reasons).
The Alternative instance for the MaybeT monad transformer returns the first successful result, and does not execute the rest of the operations. In combination with the asum function, we can write something like:
import Data.Foldable (asum)
import Control.Applicative
import Control.Monad.IO.Class (liftIO)
import Control.Monad.Trans.Maybe

action :: Char -> IO Char
action c = putChar c *> return c

main :: IO ()
main = do
  result <- runMaybeT $ asum $ [ empty
                               , MaybeT $ action 'x' *> return Nothing
                               , liftIO $ action 'v'
                               , liftIO $ action 'z'
                               ]
  print result
where the final action 'z' won't be executed.
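For reference, running main should print x and then v (from the putChar calls), followed by Just 'v': the empty alternative contributes nothing, the 'x' action runs but yields Nothing so asum moves on, 'v' succeeds, and 'z' is never reached.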
We can also write a newtype wrapper with a Monoid instance which mimics the Alternative:
newtype FirstIO a = FirstIO (MaybeT IO a)

firstIO :: IO (Maybe a) -> FirstIO a
firstIO ioma = FirstIO (MaybeT ioma)

getFirstIO :: FirstIO a -> IO (Maybe a)
getFirstIO (FirstIO (MaybeT ioma)) = ioma

-- On GHC 8.4 and later, Monoid requires a Semigroup instance, so we add one:
instance Semigroup (FirstIO a) where
    FirstIO m1 <> FirstIO m2 = FirstIO $ m1 <|> m2

instance Monoid (FirstIO a) where
    mempty = FirstIO empty
    mappend = (<>)
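As a quick sanity check of the wrapper (a sketch with made-up actions, not from the answer):
demo :: IO (Maybe Int)
demo = getFirstIO $ mconcat
  [ firstIO (putStrLn "first"  >> pure Nothing)
  , firstIO (putStrLn "second" >> pure (Just 2))
  , firstIO (error "never reached")
  ]
-- prints "first" and "second", returns Just 2; the third action is never run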
The relationship between Alternative and Monoid is explained in this other SO question.
Is there a way that I can retain the Semigroup or Monoid abstraction, while still getting lazy IO?
Somewhat, but there are drawbacks. The underlying problem for our instances is that a generic instance for an Applicative will look like
instance Semigroup a => Semigroup (SomeApplicative a) where
    x <> y = (<>) <$> x <*> y
We're here at the mercy of (<*>), and usually the second argument y will be at least in WHNF. For example in Maybe's implementation the first line will work fine and the second line will error:
liftA2 (<>) (Just (First 10)) (Just (error "never shown")) -- fine: Just (First {getFirst = 10})
liftA2 (<>) (Just (First 10)) (error "fire!")              -- errors: the second argument is forced to WHNF
IO's (<*>) is implemented in terms of ap, so the second action will always be executed before <> is applied.
A First-like variant is possible with ExceptT or similar: essentially any data type with a case like Left k >>= _ = Left k, so that we can stop the computation at that point. Although ExceptT is meant for exceptions, it may work well for your use case. Alternatively, one of the Alternative transformers (MaybeT, ExceptT) together with <|> instead of <> might suffice.
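To make that concrete, here is a rough sketch of the ExceptT idea (the name firstJustE and its shape are mine, not the answer's):
import Control.Monad.Trans.Except (ExceptT (..), runExceptT)

firstJustE :: Monad m => [m (Maybe a)] -> m (Maybe a)
firstJustE actions = either Just (const Nothing) <$> runExceptT (mapM_ check actions)
  where
    -- A Just result becomes Left, which short-circuits the remaining actions.
    check act = ExceptT (maybe (Right ()) Left <$> act)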
An almost completely lazy IO type is also possible, but it must be handled with care:
import Control.Applicative (liftA2)
import System.IO.Unsafe (unsafeInterleaveIO)

newtype LazyIO a = LazyIO { runLazyIO :: IO a }

instance Functor LazyIO where
    fmap f = LazyIO . fmap f . runLazyIO

instance Applicative LazyIO where
    pure = LazyIO . pure
    f <*> x = LazyIO $ do
        f' <- unsafeInterleaveIO (runLazyIO f)
        x' <- unsafeInterleaveIO (runLazyIO x)
        return $ f' x'

instance Monad LazyIO where
    return = pure
    f >>= k = LazyIO $ runLazyIO f >>= runLazyIO . k

instance Semigroup a => Semigroup (LazyIO a) where
    (<>) = liftA2 (<>)

instance Monoid a => Monoid (LazyIO a) where
    mempty = pure mempty
    mappend = liftA2 mappend
unsafeInterleaveIO will enable the behaviour you want (and is used in getContents and other lazy IO Prelude functions), but it must be used with care. The order of IO operations is completely off at that point. Only when we inspect the values do we trigger the original IO:
ghci> :module +Data.Monoid Control.Monad
ghci> let example = fmap (First . Just) . LazyIO . putStrLn $ "example"
ghci> runLazyIO $ fmap mconcat $ replicateM 100 example
First {getFirst = example
Just ()}
Note that we only got our example once in the output, but at a completely random place, because the putStrLn "example" and the print of the result got interleaved:
print (First x) = putStrLn (show (First x))
                = putStrLn ("First {getFirst = " ++ show x ++ "}")
and show x will finally put the IO necessary to get x in action. The action will get called only once if we use the result multiple times:
ghci> :module +Data.Monoid Control.Monad
ghci> let example = fmap (First . Just) . LazyIO . putStrLn $ "example"
ghci> result <- runLazyIO $ fmap mconcat $ replicateM 100 example
ghci> result
First {getFirst = example
Just ()}
ghci> result
First {getFirst = Just ()}
You could write a finalizeLazyIO function that either evaluates or seq's x though:
finalizeLazyIO :: LazyIO a -> IO a
finalizeLazyIO k = do
  x <- runLazyIO k
  x `seq` return x
If you were to publish a module with these functions, I'd recommend only exporting the type constructor LazyIO, liftIO :: IO a -> LazyIO a, and finalizeLazyIO.

Does the expressiveness of monads come at the expense of code reuse?

When I compare the binary operations of the Applicative and Monad type class
(<*>) :: Applicative f => f (a -> b) -> f a -> f b
(=<<) :: Monad m => (a -> m b) -> m a -> m b
two differences become apparent:
ap expects a normal, pure function, whereas bind expects a monadic action, which must return a monad of the same type
with ap the sequence of actions is determined by the applicative, whereas with bind the monadic action can determine the control flow
So monads give me additional expressive power. However, since they no longer accept normal, pure functions, this expressiveness seems to come at the expense of code reuse.
My conclusion might be somewhat naive or even false, since I have only a little experience with Haskell and monadic computations. Any light in the dark is appreciated!
If you have a pure function g :: a -> b, you can make it fit the Applicative version with
pure g :: Applicative f => f (a -> b)
or a monad-ish version with
pure . g :: Applicative f => a -> f b
So you don't lose any code reuse in your sense.
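A tiny illustration of that reuse, with a made-up pure function double used both ways:
double :: Int -> Int
double x = 2 * x

viaAp :: Maybe Int
viaAp = pure double <*> Just 21      -- Just 42

viaBind :: Maybe Int
viaBind = Just 21 >>= pure . double  -- Just 42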
Code reuse is only an advantage if you can reuse code to do what you actually want.
GHCi> putStrLn =<< getLine
Sulfur
Sulfur
GHCi>
Here, (=<<) picks the String result produced in an IO context by getLine and feeds it to putStrLn, which then prints said result.
GHCi> :t getLine
getLine :: IO String
GHCi> :t putStrLn
putStrLn :: String -> IO ()
GHCi> :t putStrLn =<< getLine
putStrLn =<< getLine :: IO ()
Now, in the type of fmap/(<$>)...
GHCi> :t (<$>)
(<$>) :: Functor f => (a -> b) -> f a -> f b
... it is perfectly possible for b to be IO (), and therefore nothing stops us from using putStrLn with it. However...
GHCi> putStrLn <$> getLine
Sulfur
GHCi>
... nothing will be printed.
GHCi> :t putStrLn <$> getLine
putStrLn <$> getLine :: IO (IO ())
Executing an IO (IO ()) action won't execute the inner IO () action. To do that, we need the additional power of Monad, either by replacing (<$>) with (=<<) or, equivalently, by using join on the IO (IO ()) value:
GHCi> :t join
join :: Monad m => m (m a) -> m a
GHCi> join (putStrLn <$> getLine)
Sulfur
Sulfur
GHCi>
Like chi, I also have trouble understanding the premises of your question. You seem to expect that one of Functor, Applicative and Monad will turn out to be better than the others. That is not the case. We can do more things with Applicative than with Functor, and even more with Monad. If you need the additional power, use a suitably powerful class. If not, using a less powerful class will lead to simpler, clearer code and a broader range of available instances.
The reason for this question was my overly schematic understanding of the relationship between the Functor, Applicative and Monad classes. I thought it was only about reusing pure functions.
(<*>) essentially says give me an applicative of functions and a bunch of applicatives of values and I will apply them according to my rules.
(=<<) essentially says give me a function (a -> m b) and a monad and I will feed (a -> m b) with the value of the monad and leave it to (a -> m b) to produce and return a transformed value wrapped in the same monad.
Of course, the applicative code will be more concise and more reusable, because the mechanism of how a sequence of actions is executed is defined exclusively within (<*>). However, applicative sequences are also somewhat mechanical. So when you want to control the flow of the sequence or the "shape" of the resulting structure, you need the extra power of monads, which leads to more verbose code.
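For instance, the kind of result-dependent control flow that needs bind, sketched as a small (made-up) example:
greet :: IO ()
greet = do
  name <- getLine                          -- the next action depends on this result
  if null name
    then putStrLn "Who's there?"
    else putStrLn ("Hello, " ++ name)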
I think this question isn't particularly helpful and if people vote for closing, I would have no problem deleting it.

Difference between function composition operator (.) and fmap (<$>)

Currently reading through this article (which is pretty brilliant btw) and have a pretty simple question:
If I combine two functions like (+3) and (+2) with <$>, it seems to give me a new function that adds 5 to whatever is passed to it. If I do the same with the function composition operator, i.e. (+3) . (+2), would it not do the same thing? If that is true, is there a relationship here between these two operators such that they do the same thing in this simple case?
Is this even an intelligent question?
The functions fmap and <$> both have the same type:
> :t fmap
fmap :: Functor f => (a -> b) -> f a -> f b
> :t (<$>)
(<$>) :: Functor f => (a -> b) -> f a -> f b
While the function . is
> :t (.)
(.) :: (b -> c) -> (a -> b) -> a -> c
So how is it possible that we can use fmap on a function and end up with .? I'm assuming you understand what a Functor is, so now you have to understand that "functions" are Functors. How so?
> :i (->)
data (->) a b -- Defined in `GHC.Prim'
instance Monad ((->) r) -- Defined in `GHC.Base'
instance Functor ((->) r) -- Defined in `GHC.Base'
instance Applicative ((->) a) -- Defined in `Control.Applicative'
Unlike Just, [] and Left, functions do not have a data constructor we can pattern match on. Instead, the Functor instance is defined for the partially applied arrow type itself. We can see from :info in GHCi that the arrow type -> actually has a Functor instance.
What happens when we look at the type of +3?
> :t (+3)
(+3) :: Num a => a -> a
So the function (+3) lives in the Functor (->) a: it accepts an a and returns an a. When the function we fmap itself returns a Functor value, we get nested Functors:
> :t fmap Just (Just 3)
fmap Just (Just 3) :: Num a => Maybe (Maybe a)
> :t fmap (replicate 5) [1,2,3]
fmap (replicate 5) [1,2,3] :: Num a => [[a]]
Likewise, when we apply fmap to two functions we get a function inside a function. The only difference is that they are fused together:
> :t (fmap (+3) (+2))
(fmap (+3) (+2)) :: Num a => a -> a
Why doesn't this result in a nested type like (->) a ((->) a a)? We have to remember that the first argument of fmap is an ordinary function (a -> b) and not necessarily a Functor value, so when we do fmap g (Just 5) we can map with any transformation. But whenever we fmap one function over another, the "function inside a function" gets fused into a single function by composition.
Thus fmap (+3) (+2) evaluates to something like this: \x -> (\x' -> x' + 3) (x + 2). That is a really roundabout way of writing (+3) . (+2).
> :t (fmap (+3) (+2))
(fmap (+3) (+2)) :: Num a => a -> a
> :t ((.) (+3) (+2))
((.) (+3) (+2)) :: Num a => a -> a
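Applying both to the same argument gives the same result as well (a quick check):
> fmap (+3) (+2) 10
15
> ((.) (+3) (+2)) 10
15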
Normally, to get around the nesting problem (Maybe (Maybe a) or [[a]]) we need to rely on the type being a Monad, so that we can use bind (>>=). But functions ((->) r) are a special case: every time we fmap one function over another, the resulting function-inside-a-function can be collapsed by composition. This cannot be said of other Functors, so for functions fmap can always do the fusing itself.
Therefore any f <$> g == f . g
Edit: A quick side note, if you do this fmap (+) (+0) you end up with a function inside a function. In this case the monadic bind (>>=) is actually needed to concatenate the functions:
> :t fmap (+) (+0)
fmap (+) (+0) :: Num a => a -> a -> a
> :t (+0) >>= (+)
(+0) >>= (+) :: Num b => b -> b
> let bindfunc = (+0) >>= (+)
> bindfunc 5
10
Which is not entirely unlike the behaviour we get when we do [1,2] >>= replicate 5:
> [1,2] >>= replicate 5
[1,1,1,1,1,2,2,2,2,2]
To find information about the Functor instance for functions, match up the types to find the relevant instance:
fmap :: (a -> b) -> f a -> f b
Then here a ~ Int, b ~ Int and f ~ (->) Int.
You can see all of the Functor instances that come with GHC here. (->) is just an infix type operator with two type parameters. We usually see it applied as Int -> Int, but this is equivalent to (->) Int Int.
There is a Functor instance for the (partially applied) type (->) r (for any type r::*).
Looking at the ((->) r) instance for Functor, we see that fmap = (.), so there is no practical difference between (+3) . (+2) and fmap (+3) (+2) (which is the same as (+3) <$> (+2)).
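For reference, the instance in base essentially boils down to this (paraphrased from GHC.Base, not a verbatim quote):
instance Functor ((->) r) where
    fmap = (.)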

ghci special case for Applicative?

In ghci:
λ> :t (pure 1)
(pure 1) :: (Applicative f, Num a) => f a
λ> show (pure 1)
<interactive>:1:1:
No instance for (Show (f0 a0))
arising from a use of `show'
Possible fix: add an instance declaration for (Show (f0 a0))
In the expression: show (pure 1)
In an equation for `it': it = show (pure 1)
λ> pure 1
1
Does this mean that ghci executes the Applicative value and displays the result, just like IO?
Note that pure () and pure (+1) don't print anything.
You get the same behaviour if you use return instead of pure. To find out what to do, ghci must choose a type for the given expression. ghci's defaulting rules are such that absent other constraints, it chooses IO for an Applicative or Monad instance. Thus it interprets pure 1 as an expression of type IO Integer. Expressions of type IO a entered at the prompt are executed and their results are printed, if 1. a has a Show instance and 2. a is not (). Thus entering pure 1 at the prompt results in
v <- return (1 :: Integer)
print v
return v
being executed (and the magic variable it bound to the returned v). For pure (), the special case applies: since () is considered uninteresting, only return () is executed and it is bound to (). For pure (+1), a function is returned, and there is no Show instance for functions in scope, so nothing is printed. However,
Prelude Control.Applicative> :m +Text.Show.Functions
Prelude Control.Applicative Text.Show.Functions> pure (+1)
<function>
it :: Integer -> Integer
Prelude Control.Applicative Text.Show.Functions> it 3
4
it :: Integer
with a Show instance for functions in scope, it gets printed (not that it's informative), and the function can then be used (the latter is independent of a Show instance being in scope, of course).
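You can also make the defaulting visible by pinning the type yourself (a hypothetical GHCi session, but the outputs follow from the rules above):
λ> pure 1 :: Maybe Integer
Just 1
λ> pure 1 :: [Integer]
[1]
λ> pure 1    -- no annotation: defaults to IO Integer, so GHCi runs it and prints the result
1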
