I was just writing a quick bit of code, and I wanted to use the guard function in the IO Monad. However, there is no MonadPlus instance for IO, which means we cannot use guard in IO land. I have seen an example of using the MaybeT transformer to use guard in the Maybe Monad and then lifting all of the IO actions, but I would rather not do that if I do not have to.
An example of what I want might be:
handleFlags :: [Flag] -> IO ()
handleFlags flags = do
  when (Help `elem` flags) (putStrLn "Usage: program_name options...")
  guard (Help `elem` flags)
  -- ... do stuff ...
  return ()
I was wondering if there was a nice way to get a guard function (or something similar) in the IO Monad through a declaration for MonadPlus or otherwise. Or perhaps I am doing it wrong; is there a better way to write that help message in the function above? Thanks.
(P.S. I could use if-then-else statements but it seems to defeat the point somehow. Not to mention that for a lot of options it will result in a huge amount of nesting.)
Consider the definition of MonadPlus:
class Monad m => MonadPlus m where
  mzero :: m a
  mplus :: m a -> m a -> m a
How would you implement mzero for IO? A value of type IO a represents an IO computation that returns something of type a, so mzero would have to be an IO computation returning something of any possible type. Clearly, there's no way to conjure up a value for some arbitrary type, and unlike Maybe there's no "empty" constructor we can use, so mzero would necessarily represent an IO computation that never returns.
How do you write an IO computation that never returns? Either go into an infinite loop or throw a runtime error, basically. The former is of dubious utility, so the latter is what you're stuck with.
In short, to write an instance of MonadPlus for IO what you'd do is this: Have mzero throw a runtime exception, and have mplus evaluate its first argument while catching any exceptions thrown by mzero. If no exceptions are raised, return the result. If an exception is raised, evaluate mplus's second argument instead while ignoring exceptions.
That said, runtime exceptions are often considered undesirable, so I'd hesitate before going down that path. If you do want to do it that way (and don't mind increasing the chance that your program may crash at runtime) you'll find everything you need to implement the above in Control.Exception.
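For concreteness, here is a minimal sketch of that scheme. The IOPlus newtype is made up for this example, purely so we don't clash with any Alternative/MonadPlus instances that base might already provide for IO itself; everything else is plain Control.Exception:

{-# LANGUAGE GeneralizedNewtypeDeriving, ScopedTypeVariables #-}
import Control.Applicative (Alternative (..))
import Control.Exception (ErrorCall (..), SomeException, catch, throwIO)
import Control.Monad (MonadPlus (..), guard)

newtype IOPlus a = IOPlus { runIOPlus :: IO a }
  deriving (Functor, Applicative, Monad)

instance Alternative IOPlus where
  -- "mzero": an IO computation that never returns normally
  empty = IOPlus (throwIO (ErrorCall "mzero"))
  -- "mplus": try the first computation; on any exception, fall back to the second
  IOPlus a <|> IOPlus b = IOPlus (a `catch` \(_ :: SomeException) -> b)

instance MonadPlus IOPlus   -- mzero and mplus default to empty and (<|>)

-- e.g. runIOPlus (guard False >> IOPlus (putStrLn "hi")) throws the "mzero" error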
In practice, I'd probably either use the monad transformer approach if I wanted a lot of guarding on the results of monadic expressions, or, if most of the conditionals depend on pure values passed in as function arguments (which the flags in your example are), use guards as in @Anthony's answer.
I do this sort of thing with guards.
handleFlags :: [Flag] -> IO ()
handleFlags flags
  | Help `elem` flags = putStrLn "Usage: program_name options..."
  | otherwise         = return ()
There are functions made precisely for this in Control.Monad: when and its counterpart unless.
Anthony's answer can be rewritten as:
handleFlags :: [Flag] -> IO ()
handleFlags flags =
  when (Help `elem` flags) $ putStrLn "Usage: program_name options..."
Their type signatures:
when :: (Applicative m) => Bool -> m () -> m ()
unless bool = when (not bool)
Link to docs on hackage.haskell.org
If more is needed, here's a link to another package, specifically monad-oriented and with several more utilities: Control.Monad.IfElse
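To address the nesting worry from the question's P.S.: with when and unless each option check stays flat, one line per flag. A sketch, where the Flag constructors (Help, Version) and the messages are just placeholders:

import Control.Monad (unless, when)

data Flag = Help | Version deriving (Eq)

handleFlags :: [Flag] -> IO ()
handleFlags flags = do
  when (Help    `elem` flags) $ putStrLn "Usage: program_name options..."
  when (Version `elem` flags) $ putStrLn "program_name 1.0"
  unless (null flags) $ putStrLn "(processing options...)"

main :: IO ()
main = handleFlags [Help, Version]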
I've looked over https://www.fpcomplete.com/blog/2017/06/tale-of-two-brackets, though skimming some parts, and I still don't quite understand the core issue "StateT is bad, IO is OK", other than vaguely getting the sense that Haskell allows one to write bad StateT monads (or in the ultimate example in the article, MonadBaseControl instead of StateT, I think).
In the haddocks, the following law must be satisfied:
askUnliftIO >>= (\u -> liftIO (unliftIO u m)) = m
So this appears to be saying that state is not mutated in the monad m when using askUnliftIO. But to my mind, in IO, the entire world can be the state. I could be reading and writing to a text file on disk, for instance.
To quote another article by Michael,
False purity: We say WriterT and StateT are pure, and technically they are. But let's be honest: if you have an application which is entirely living within a StateT, you're not getting the benefits of restrained mutation that you want from pure code. May as well call a spade a spade, and accept that you have a mutable variable.
This makes me think this is indeed the case: with IO we are being honest; with StateT we are not being honest about mutability. But that seems like a different issue from what the law above is trying to show; after all, MonadUnliftIO assumes IO. I'm having trouble understanding conceptually how IO is more restrictive than something else.
Update 1
After sleeping on it, I am still confused, but gradually less so as the day wears on. I worked out the law proof for IO, and I noticed the role of id in the README. In particular,
instance MonadUnliftIO IO where
  askUnliftIO = return (UnliftIO id)
So askUnliftIO would appear to return an IO (IO a) on an UnliftIO m.
Prelude> fooIO = print 5
Prelude> :t fooIO
fooIO :: IO ()
Prelude> let barIO :: IO (IO ()); barIO = return fooIO
Prelude> :t barIO
barIO :: IO (IO ())
Back to the law, it really appears to be saying that state is not mutated in the monad m when doing a round trip on the transformed monad (askUnliftIO), where the round trip is unliftIO -> liftIO.
Resuming the example above, barIO :: IO (IO ()), so if we do barIO >>= (\u -> liftIO (unliftIO u m)), then u :: IO () and unliftIO u == IO (), then liftIO (IO ()) == IO (). So since everything has basically been applications of id under the hood, we can see that no state was changed, even though we are using IO. Crucially, I think, what is important is that the value in a is never run, nor is any other state modified, as a result of using askUnliftIO. If it were, then, as in the case of randomIO :: IO a, we would not be able to get the same value had we not run askUnliftIO on it. (Verification attempt 1 below.)
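Spelling that round trip out for the IO instance (a sketch, using only the instance definition above and the monad left-identity law):

  askUnliftIO >>= (\u -> liftIO (unliftIO u m))
= return (UnliftIO id) >>= (\u -> liftIO (unliftIO u m))  -- askUnliftIO for IO
= liftIO (unliftIO (UnliftIO id) m)                       -- monad left identity
= liftIO (id m)                                           -- unwrap the newtype
= m                                                       -- liftIO = id in IO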
But, it still seems like we could do the same for other Monads, even if they do maintain state. But I also see how, for some monads, we may not be able to do so. Thinking of a contrived example: each time we access the value of type a contained in the stateful monad, some internal state is changed.
Verification attempt 1
> fooIO >> askUnliftIO
5
> fooIOunlift = fooIO >> askUnliftIO
> :t fooIOunlift
fooIOunlift :: IO (UnliftIO IO)
> fooIOunlift
5
Good so far, but confused about why the following occurs:
> fooIOunlift >>= (\u -> unliftIO u)
<interactive>:50:24: error:
* Couldn't match expected type `IO b'
with actual type `IO a0 -> IO a0'
* Probable cause: `unliftIO' is applied to too few arguments
In the expression: unliftIO u
In the second argument of `(>>=)', namely `(\ u -> unliftIO u)'
In the expression: fooIOunlift >>= (\ u -> unliftIO u)
* Relevant bindings include
it :: IO b (bound at <interactive>:50:1)
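The error is only saying that unliftIO u is a function of type IO a -> IO a, so it still has to be applied to an IO action before GHCi can run it. For example (a sketch; it should print 5 twice, once from running fooIOunlift and once from the unlifted fooIO):

fooIOunlift >>= (\u -> unliftIO u fooIO)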
"StateT is bad, IO is OK"
That's not really the point of the article. The idea is that MonadBaseControl permits some confusing (and often undesirable) behaviors with stateful monad transformers in the presence of concurrency and exceptions.
finally :: StateT s IO a -> StateT s IO a -> StateT s IO a is a great example. If you use the "StateT is attaching a mutable variable of type s onto a monad m" metaphor, then you might expect that the finalizer action gets access to the most recent s value when an exception was thrown.
forkState :: StateT s IO a -> StateT s IO ThreadId is another one. You might expect that the state modifications from the input would be reflected in the original thread.
lol :: StateT Int IO [ThreadId]
lol = do
  for [1..10] $ \i -> do
    forkState $ modify (+i)
You might expect that lol could be rewritten (modulo performance) as modify (+ sum [1..10]). But that's not right. The implementation of forkState just passes the initial state to the forked thread, and then can never retrieve any state modifications. The easy/common understanding of StateT fails you here.
Instead, you have to adopt a more nuanced view of StateT s m a as "a transformer that provides a thread-local immutable variable of type s which is implicitly threaded through a computation, and it is possible to replace that local variable with a new value of the same type for future steps of the computation" (more or less a verbose English retelling of s -> m (a, s)). With this understanding, the behavior of finally becomes a bit more clear: it's a local variable, so it does not survive exceptions. Likewise, forkState becomes more clear: it's a thread-local variable, so obviously a change in a different thread won't affect any others.
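That nuanced view is visible directly in the type; this mirrors the newtype from the transformers package:

newtype StateT s m a = StateT { runStateT :: s -> m (a, s) }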
This is sometimes what you want. But it's usually not how people write code IRL and it often confuses people.
For a long time, the default choice in the ecosystem to do this "lowering" operation was MonadBaseControl, and this had a bunch of downsides: hella confusing types, difficult to implement instances, impossible to derive instances, sometimes confusing behavior. Not a great situation.
MonadUnliftIO restricts things to a simpler set of monad transformers, and is able to provide relatively simple types, derivable instances, and always-predictable behavior. The cost is that ExceptT, StateT, etc. transformers can't use it.
The underlying principle is: by restricting what is possible, we make it easier to understand what might happen. MonadBaseControl is extremely powerful and general, and quite difficult to use and confusing as a result. MonadUnliftIO is less powerful and general, but it's much easier to use.
So this appears to be saying that state is not mutated in the monad m when using askUnliftIO.
This isn't true - the law is stating that unliftIO shouldn't do anything with the monad transformer aside from lowering it into IO. Here's something that breaks that law:
newtype WithInt a = WithInt (ReaderT Int IO a)
  deriving newtype (Functor, Applicative, Monad, MonadIO, MonadReader Int)

instance MonadUnliftIO WithInt where
  askUnliftIO = pure (UnliftIO (\(WithInt readerAction) -> runReaderT readerAction 0))
Let's verify that this breaks the law given: askUnliftIO >>= (\u -> liftIO (unliftIO u m)) = m.
test :: WithInt Int
test = do
  int <- ask
  liftIO $ print int
  pure int

checkLaw :: WithInt ()
checkLaw = do
  first <- test
  second <- askUnliftIO >>= (\u -> liftIO (unliftIO u test))
  when (first /= second) $
    liftIO $ putStrLn "Law violation!!!"
The value returned by test and the askUnliftIO ... lowering/lifting are different, so the law is broken. Furthermore, the observed effects are different, which isn't great either.
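For contrast, a lawful instance would capture the current environment instead of hard-coding 0. This sketch (which would replace the broken instance above) is essentially what the real ReaderT instance does:

instance MonadUnliftIO WithInt where
  askUnliftIO = WithInt $ ReaderT $ \env ->
    pure (UnliftIO (\(WithInt readerAction) -> runReaderT readerAction env))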
Though it is okay to bind IO [[Char]] with IO (), it is not allowed to bind Maybe with IO. Can someone give an example of how relaxing this would lead to a bad design? Why is freedom allowed in the polymorphic type of the Monad, but not in the Monad itself?
There are a lot of good theoretical reasons, including "that's not what Monad is." But let's step away from that for a moment and just look at the implementation details.
First off - Monad isn't magic. It's just a standard type class. Instances of Monad only get created when someone writes one.
Writing that instance is what defines how (>>) works. Usually it's done implicitly through the default definition in terms of (>>=), but that's just evidence that (>>=) is the more general operator, and writing it requires making all the same decisions that writing (>>) would.
If you had a different operator that worked on more general types, you have to answer two questions. First, what would the types be? Second, how would you go about providing implementations? It's really not clear what the desired types would be, from your question. One of the following, I guess:
class Poly1 m n where
  (>>) :: m a -> n b -> m b

class Poly2 m n where
  (>>) :: m a -> n b -> n b

class Poly3 m n o | m n -> o where
  (>>) :: m a -> n b -> o b
All of them could be implemented. But you lose two really important factors for using them practically.
You need to write an instance for every pair of types you plan to use together. This is a massively more complex undertaking than just an instance for each type. Something about n vs n^2.
You lose predictability. What does the operation even do? Here's where theory and practice intersect. The theory behind Monad places a lot of restrictions on the operations. Those restrictions are referred to as the "monad laws". They are beyond the ability to verify in Haskell, but any Monad instance that doesn't obey them is considered to be buggy. The end result is that you quickly can build an intuition for what the Monad operations do and don't do. You can use them without looking up the details of every type involved, because you know properties that they obey. None of those possible classes I suggested give you any kind of assurances like that. You just have no idea what they do.
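To make that concrete, here is a sketch showing that such a class can be implemented, and also why it buys you so little: the Maybe/IO instance below has to invent some behavior for Nothing, and nothing in the types tells you (or the next reader) which choice was made. The operator is renamed (>>!) only to avoid clashing with the real (>>):

{-# LANGUAGE MultiParamTypeClasses #-}
class Poly2 m n where
  (>>!) :: m a -> n b -> n b

instance Poly2 Maybe IO where
  Nothing >>! nb = nb   -- or should a Nothing abort nb somehow? the types don't say
  Just _  >>! nb = nb

instance Poly2 IO IO where
  ma >>! nb = ma >> nb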
I’m not sure that I understand your question correctly, but it’s definitely possible to compose Maybe with IO or [] in the same sense that you can compose IO with [].
For example, if you check the types in GHCI using :t,
getContents >>= return . lines
gives you an IO [String]. If you add
>>= return . map Text.Read.readMaybe
you get a type of IO [Maybe a], which is a composition of IO, [] and Maybe. You could then pass it to
>>= return . Data.Maybe.catMaybes
to flatten it to an IO [a]. Then you might pass the list of parsed valid input lines to a function that flattens it again and computes an output.
Putting this together, the program
import Text.Read (readMaybe)
import Data.Maybe (catMaybes)

main :: IO ()
main = getContents >>=                    -- IO String
       return . lines >>=                 -- IO [String]
       return . map readMaybe >>=         -- IO [Maybe Int]
       return . catMaybes >>=             -- IO [Int]
       return . (sum :: [Int] -> Int) >>= -- IO Int
       print                              -- IO ()
with the input:
1
2
Ignore this!
3
prints 6.
It would also be possible to work with an IO (Maybe [String]), a Maybe [IO String], etc.
You can do this with >> as well. Contrived example: getContents >> (return . Just) False reads the input, ignores it, and gives you back an IO (Maybe Bool).
From the document:
when :: (Monad m) => Bool -> m () -> m ()
when p s = if p then s else return ()
The when function takes a boolean argument and a monadic computation with unit () type and performs the computation only when the boolean argument is True.
===
As a Haskell newbie, my problem is that to me m () is some "void" data, but here the document refers to it as a computation. Is it because of the laziness of Haskell?
Laziness has no part in it.
The m/Monad part is often called computation.
The best example might be m = IO:
Look at putStrLn "Hello" :: IO () - This is a computation that, when run, will print "Hello" to your screen.
This computation has no result - so the return type is ()
Now when you write
hello :: Bool -> IO ()
hello sayIt =
  when sayIt (putStrLn "Hello")
then hello True is a computation that, when run, will print "Hello"; while hello False is a computation that when run will do nothing at all.
Now compare it to getLine :: IO String - this is a computation that, when run, will prompt you for an input and will return the input as a String - that's why the return type is String.
Does this help?
for me "m ()" is some "void" data
And that kinda makes sense, in that a computation is a special kind of data. It has nothing to do with laziness - it's associated with context.
Let's take State as an example. A function of type, say, s -> () in Haskell can only produce one value. However, a function of type s -> ((), s) is a regular function doing some transformation on s. The problem you're having is that you're only looking at the () part, while the s -> s part stays hidden. That's the point of State - to hide the state passing.
Hence State s () is trivially convertible to s -> ((), s) and back, and it still is a Monad (a computation) that produces a value of... ().
If we look at practical use, now:
(flip runState 10) $ do
  modify (+1)
This expression produces a tuple of type ((), Int); inside the do block, the Int (state) part stays hidden.
It will modify the state, adding 1 to it. It produces the intermediate value of (), though, which fits your when:
when (5 > 3) $ modify (+1)
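Putting those pieces together as a runnable sketch (using mtl's Control.Monad.State; execState returns just the final state):

import Control.Monad (when)
import Control.Monad.State (State, execState, modify)

bump :: State Int ()                -- the computation's value is (),
bump = when (5 > 3) $ modify (+1)   -- but it still transforms the hidden Int state

-- execState bump 10 evaluates to 11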
Monads are notably abstract and mathematical, so intuitive statements about them are often made in language that is rather vague. Values of a monadic type are often informally labeled as "computations," "actions" or (less often) "commands", because the analogy sometimes helps us reason about them. But when you dig deeper, these turn out to be empty words when used this way; ultimately what they mean is "some value of a type that provides the Monad interface."
I like the word "action" better for this, so let me go with that. The intuition for the use for that word in Haskell is this: the language makes a distinction between functions and actions:
Functions can't have any side effects, and their types look like a -> b.
Actions may have side effects, and their types look like IO a.
A consequence of this: an action of type IO () produces an uninteresting result value, and therefore it's either a no-op (return ()) or an action that is only interesting because of its side effects.
Monad then is the interface that allows you to glue actions together into complex actions.
This is all very intuitive, but it's an analogy that becomes rather stretched when you try to apply it to many monads other than the IO type. For example, lists are a monad:
instance Monad [] where
  return a = [a]
  as >>= f = concatMap f as
The "actions" or "computations" of the list monad are... lists. How is a list an "action" or a "computation"? The analogy is rather weak in this case, isn't it?
So I'd say that this is the best advice:
Understand that "action" and "computation" are analogies. There's no strict definition.
Understand that these analogies are stronger for some monad instances, and weak for others.
The ultimate barometer of how things work are the Monad laws and the definitions of the various functions that work with Monad.
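For reference, the laws mentioned above are the standard three, stated as equations rather than runnable code:

return a >>= f   =  f a                      -- left identity
m >>= return     =  m                        -- right identity
(m >>= f) >>= g  =  m >>= (\x -> f x >>= g)  -- associativity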
I read in learn you haskell that
Enum members are sequentially ordered types ... Types in this class:
(), Bool, Char ...
Also it appears in some signatures:
putChar :: Char -> IO ()
It is very difficult to find info about it in Google as the answers refer to problems of the "common parentheses" (use in function calls, precedence problems and the like).
Therefore, what does the expression () mean? Is it a type? What are values of type ()? What is it used for, and when is it needed?
It is both a type and a value. () is a special type "pronounced" unit, and it has one value: (), also pronounced unit. It is essentially the same as the type void in Java or C/C++. If you're familiar with Python, think of it as the NoneType which has the singleton None.
It is useful when you want to denote an action that doesn't return anything. It is most commonly used in the context of Monads, such as the IO monad. For example, if you had the following function:
getVal :: IO Int
getVal = do
  putStrLn "Enter an integer value:"
  n <- getLine
  return $ read n
And for some reason you decided that you just wanted to annoy the user and throw away the number they just passed in:
getValAnnoy :: IO ()
getValAnnoy = do
  _ <- getVal
  return () -- Returns nothing
However, return is just a Monad function, so we could abstract this a bit further
throwAwayResult :: Monad m => m a -> m ()
throwAwayResult action = do
  _ <- action
  return ()
Then
getValAnnoy = throwAwayResult getVal
However, you don't have to write this function yourself, it already exists in Control.Monad as the function void that is even less constraining and works on Functors:
void :: Functor f => f a -> f ()
void fa = fmap (const ()) fa
Why does it work on Functor instead of Monad? Well, for each Monad instance, you can write the Functor instance as
instance Monad m => Functor m where
  fmap f m = m >>= return . f
But you can't make a Monad out of every Functor. It's like how a square is a rectangle but a rectangle isn't always a square, Monads form a subset of Functors.
As others have said, it's the unit type which has one value called unit. In Haskell syntax this is easily, if confusingly, expressed as () :: (). We can make our own quite easily as well.
data Unit = Unit
>>> :t Unit :: Unit
Unit :: Unit
>>> :t () :: ()
() :: ()
It's written as () because it behaves very much like an "empty tuple" would. There are theoretical reasons why this holds, but honestly it makes a lot of simple intuitive sense too.
It's often used as the argument to a type constructor like IO or ST when it's the context of the value that's interesting, not the value itself. This is intuitively true because if I tell you I have a value of type (), then you don't need to know anything more: there's only one of them!
putStrLn :: String -> IO ()    -- the return type is unimportant,
                               -- we just want the *effect*

map (const ()) :: [a] -> [()]  -- this destroys all information about a list,
                               -- keeping only the *length*

>>> [ (), (), () ] :: [()]     -- this might as well just be the number 3;
                               -- there is nothing else interesting about it

forward :: [()] -> Int         -- we even have a pair of isomorphisms
forward = length

backward :: Int -> [()]
backward n = replicate n ()
It is both a type and a value.
It is the unit type, the type that has only one value. In Haskell its name and its only value both look like an empty tuple: ().
As others have said, () in Haskell is both the name of the "unit" type, and the only value of said type.
One of the confusing things in moving from imperative programming to Haskell is that the way the languages deal with the concept of "nothing" is different. What's more confusing is the vocabulary, because imperative languages and Haskell use the term "void" to mean diametrically different things.
In an imperative language, a "function" (which may not be a true mathematical function) may have "void" as its return type, as in this pseudocode example:
void sayHello() {
  printLn("Hello!");
}
In this case, void means that the "function", if it returns, will not produce a result value. (The other possibility is that the function may not return at all: it may loop forever, or fail with an error or exception.)
In Haskell, however, all functions (and IO actions) must produce a result. So when we write an IO action that doesn't produce any interesting return value, we make it return ():
sayHello :: IO ()
sayHello = putStrLn "Hello!"
Later actions will just ignore the () result value.
Now, you probably don't need to worry too much about this, but there is one place where this gets confusing, which is that in Haskell there is a type called Void, but it means something completely different from the imperative programming void. Because of this, the word "void" becomes a minefield when comparing Haskell and imperative languages, because the meaning is completely different when you switch paradigms.
In Haskell, Void is a type that doesn't have any values. The largest consequence of this is that in Haskell a function with return type Void can never return; it can only fail with an error or loop forever. Why? Because the function would have to produce a value of type Void in order to return, but there is no such value.
This is however not relevant until you're working with some more advanced techniques, so you don't need to worry about it other than to beware of the word "void."
But the bigger lesson is that the imperative and the Haskell concepts of "no return value" are different. Haskell distinguishes between:
Things that may return but whose result won't have any information (the () type);
Things that cannot return, no matter what (the Void type).
The imperative void corresponds to the former, and not the latter.
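To see that distinction in code (a sketch; Data.Void is in base, and neverReturns is a deliberately silly made-up example):

import Data.Void (Void)

unitResult :: IO ()           -- can finish; its result just carries no information
unitResult = putStrLn "done"

neverReturns :: IO Void       -- finishing would require producing a Void value,
neverReturns = neverReturns   -- which is impossible, so all it can do is loop or throw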
I have some code that currently uses the ST monad for evaluation. I like not putting IO everywhere, because runST produces a pure result and indicates that the result is safe to use (as opposed to unsafePerformIO). However, as some of my code has gotten longer, I do want to put debugging print statements in.
Is there any class that provides a dual-personality monad (or typeclass machinery), one which can be either ST or IO (depending on its type or an "isDebug" flag)? I recall SPJ introduced a "Mutation" class in his "Fun with Type Functions" paper, which used associated types to relate IO to IORef and ST to STRef. Does such a thing exist as a package somewhere?
Edit / solution
Thanks very much [the nth time], C.A. McCann! Using that solution, I was able to introduce an additional class for monads which support a pdebug function. The ST monad will ignore these calls, whereas IO will run putStrLn.
class DebugMonad m where
  pdebug :: String -> m ()

instance DebugMonad (ST s) where
  pdebug _ = return ()

instance DebugMonad IO where
  pdebug = putStrLn

-- newRef, modifyRef and readRef are assumed to come from a unified
-- mutable-reference class that works in both ST and IO (such as the
-- stateref package mentioned in the answer below).
test initV = do
  v <- newRef initV
  modifyRef v (+1)
  pdebug "debug"
  readRef v

testR v = runST $ test v
This has a very fortunate consequence in ghci. Since it expects expressions to be IO types by default, running something like "test 3" will result in the IO monad being run, so you can debug it easily, and then call it with something like "testR" when you actually want to run it.
If you want a unified interface to IORef and STRef, have you looked at the stateref package? It has type classes for "references to mutable data", separated for readable, writable, etc., with instances for IORef and STRef, as well as things like TVar, MVar, ForeignPtr, etc.
Have you considered Debug.Trace.trace instead?
http://www.haskell.org/haskellwiki/Debugging
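For completeness, a tiny sketch of Debug.Trace.trace, which works even inside pure code (its output goes to stderr):

import Debug.Trace (trace)

addOne :: Int -> Int
addOne x = trace ("addOne called with " ++ show x) (x + 1)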