How to interact with pure algorithm in IO code - haskell

To illustrate the point with a trivial example, say I have implemented filter:
filter :: (a -> Bool) -> [a] -> [a]
And I have a predicate p that interacts with the real world:
p :: a -> IO Bool
How do I make it work with filter without writing a separate implementation:
filterIO :: (a -> IO Bool) -> [a] -> IO [a]
Presumably if I can turn p into p':
p' :: IO (a -> Bool)
Then I can do
main :: IO ()
main = do
p'' <- p'
print $ filter p'' [1..100]
But I haven't been able to find the conversion.
Edited:
As people have pointed out in the comments, such a conversion doesn't make sense as it would break the encapsulation of the IO monad.
Now the question is, can I structure my code so that the pure and IO versions don't completely duplicate the core logic?

How do I make it work with filter without writing a separate implementation
That isn't possible, and the fact that this sort of thing isn't possible is by design: Haskell places firm limits on its types and you have to obey them. You cannot sprinkle IO all over the place willy-nilly.
Now the question is, can I structure my code so that the pure and IO versions don't completely duplicate the core logic?
You'll be interested in filterM. Then, you can get both the functionality of filterIO by using the IO monad and the pure functionality using the Identity monad. Of course, for the pure case, you now have to pay the extra price of wrapping/unwrapping (or coercing) the Identity wrapper. (Side remark: since Identity is a newtype this is only a code readability cost, not a runtime one.)
ghci> data Color = Red | Green | Blue deriving (Read, Show, Eq)
Here is a monadic example (note that the lines containing only Red, Blue, and Blue are user-entered at the prompt):
ghci> filterM (\x -> do y<-readLn; pure (x==y)) [Red,Green,Blue]
Red
Blue
Blue
[Red,Blue] :: IO [Color]
Here is a pure example:
ghci> filterM (\x -> Identity (x /= Green)) [Red,Green,Blue]
Identity [Red,Blue] :: Identity [Color]
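To make the connection with the original filter explicit, here is a small sketch (my own, under the same assumptions as the answer) showing that both versions fall out of filterM, with only the Identity wrapping as overhead:
import Control.Monad (filterM)
import Data.Functor.Identity (Identity (..), runIdentity)

-- The pure filter, recovered from filterM via the Identity monad.
filterPure :: (a -> Bool) -> [a] -> [a]
filterPure p = runIdentity . filterM (Identity . p)

-- The filterIO from the question is just filterM specialised to IO.
filterIO :: (a -> IO Bool) -> [a] -> IO [a]
filterIO = filterM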

As already said, you can use filterM for this specific task. However, it is usually better to keep with Haskell's characteristic strict separation of IO and calculations. In your case, you can just tick off all necessary IO in one go and then do the interesting filtering in nice, reliable, easily testable pure code (i.e. here, simply with the normal filter):
type A = Int
type Annotated = (A, Bool)
p' :: Annotated -> Bool
p' = snd
main :: IO ()
main = do
  candidates <- forM [1..100] $ \n -> do
    permitted <- p n
    return (n, permitted)
  print $ fst <$> filter p' candidates
Here, we first annotate each number with a flag indicating what the environment says. This flag can then simply be read out in the actual filtering step, without requiring any further IO.
In short, this would be written:
main :: IO ()
main = do
  candidates <- forM [1..100] $ \n -> (n,) <$> p n
  print $ fst <$> filter snd candidates
While it is not feasible for this specific task, I'd also add that you can in principle achieve the IO separation with something like your p'. This requires that the type A is “small enough” that you can evaluate the predicate with all values that are possible at all. For instance,
import qualified Data.Map as Map
type A = Char
p' :: IO (A -> Bool)
p' = (Map.!) . Map.fromList <$> mapM (\c -> (c,) <$> p c) ['\0'..]
This evaluates the predicate once for all of the 1114112 chars there are and stores the results in a lookup table.
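As a usage sketch (assuming the p :: A -> IO Bool from the question, with A = Char here), the table is built once and the resulting pure predicate can then be handed to the ordinary filter:
main :: IO ()
main = do
  keep <- p'   -- runs p once for every Char and builds the lookup table
  print (filter keep "Hello, World!")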

Related

What is the logic behind allowing only same Monad types to be concatenated with `>>` operator?

Though it is okay to bind IO [[Char]] with IO (), it is not allowed to bind Maybe with IO. Can someone give an example of how this relaxation would lead to a bad design? Why is freedom allowed in the polymorphic type of the Monad, but not in the Monad itself?
There are a lot of good theoretical reasons, including "that's not what Monad is." But let's step away from that for a moment and just look at the implementation details.
First off - Monad isn't magic. It's just a standard type class. Instances of Monad only get created when someone writes one.
Writing that instance is what defines how (>>) works. Usually it's done implicitly through the default definition in terms of (>>=), but that just is evidence that (>>=) is the more general operator, and writing it requires making all the same decisions that writing (>>) would take.
If you had a different operator that worked on more general types, you have to answer two questions. First, what would the types be? Second, how would you go about providing implementations? It's really not clear what the desired types would be, from your question. One of the following, I guess:
class Poly1 m n where
  (>>) :: m a -> n b -> m b

class Poly2 m n where
  (>>) :: m a -> n b -> n b

class Poly3 m n o | m n -> o where
  (>>) :: m a -> n b -> o b
All of them could be implemented. But you lose two really important factors for using them practically.
You need to write an instance for every pair of types you plan to use together. This is a massively more complex undertaking than just an instance for each type. Something about n vs n^2.
You lose predictability. What does the operation even do? Here's where theory and practice intersect. The theory behind Monad places a lot of restrictions on the operations. Those restrictions are referred to as the "monad laws". They are beyond the ability to verify in Haskell, but any Monad instance that doesn't obey them is considered to be buggy. The end result is that you quickly can build an intuition for what the Monad operations do and don't do. You can use them without looking up the details of every type involved, because you know properties that they obey. None of those possible classes I suggested give you any kind of assurances like that. You just have no idea what they do.
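For reference, the monad laws the answer appeals to are these standard equations (stated here for convenience; they are not specific to this answer):
return a >>= k  =  k a                       -- left identity
m >>= return    =  m                         -- right identity
(m >>= k) >>= h =  m >>= (\x -> k x >>= h)   -- associativity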
I’m not sure that I understand your question correctly, but it’s definitely possible to compose Maybe with IO or [] in the same sense that you can compose IO with [].
For example, if you check the types in GHCI using :t,
getContents >>= return . lines
gives you an IO [String]. If you add
>>= return . map Text.Read.readMaybe
you get a type of IO [Maybe a], which is a composition of IO, [] and Maybe. You could then pass it to
>>= return . Data.Maybe.catMaybes
to flatten it to an IO [a]. Then you might pass the list of parsed valid input lines to a function that flattens it again and computes an output.
Putting this together, the program
import Text.Read (readMaybe)
import Data.Maybe (catMaybes)
main :: IO ()
main = getContents >>=                      -- IO String
       return . lines >>=                   -- IO [String]
       return . map readMaybe >>=           -- IO [Maybe Int]
       return . catMaybes >>=               -- IO [Int]
       return . (sum :: [Int] -> Int) >>=   -- IO Int
       print                                -- IO ()
with the input:
1
2
Ignore this!
3
prints 6.
It would also be possible to work with an IO (Maybe [String]), a Maybe [IO String], etc.
You can do this with >> as well. Contrived example: getContents >> (return . Just) False reads the input, ignores it, and gives you back an IO (Maybe Bool).

What is the intuitive meaning of "join"?

What is the intuitive meaning of join for a Monad?
The monads-as-containers analogies make sense to me, and inside these analogies join makes sense. A value is double-wrapped and we unwrap one layer. But as we all know, a monad is not a container.
How might one write sensible, understandable code using join in normal circumstances, say when in IO?
An action :: IO (IO a) is a way of producing a way of producing an a. join action, then, is a way of producing an a by running the outermost producer of action, taking the producer it produced and then running that as well, to finally get to that juicy a.
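A minimal sketch of that in code (my own example): fmap putStrLn getLine has type IO (IO ()), and join runs the outer action (reading the line) and then the inner action it produced (printing that line).
import Control.Monad (join)

-- Read a line, then print it back: join collapses IO (IO ()) into IO ().
echo :: IO ()
echo = join (fmap putStrLn getLine)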
join collapses consecutive layers of the type constructor.
A valid join must satisfy the property that, for any number of consecutive applications of the type constructor, the order in which we collapse the layers shouldn't matter.
For example
ghci> let lolol = [[['a'],['b','c']],[['d'],['e']]]
ghci> lolol :: [[[Char]]]
ghci> lolol :: [] ([] ([] Char)) -- the type can also be expressed like this
ghci> join (fmap join lolol) -- collapse inner layers first
"abcde"
ghci> join (join lolol) -- collapse outer layers first
"abcde"
(We used fmap to "get inside" the outer monadic layer so that we could collapse the inner layers first.)
A small non container example where join is useful: for the function monad (->) a, join is equivalent to \f x -> f x x, a function of type (a -> a -> b) -> a -> b that applies two times the same argument to another function.
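A quick GHCi check of that claim (my own example, not from the answer):
ghci> import Control.Monad (join)
ghci> join (+) 3   -- join f x = f x x for the function monad
6
ghci> join (*) 4
16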
For the List monad, join is simply concat, and concatMap f is join . fmap f.
So join implicitly appears in any list expression which uses concat or concatMap.
Suppose you were asked to find all of the numbers which are divisors of any
number in an input list. If you have a divisors function:
divisors :: Int -> [Int]
divisors n = [ d | d <- [1..n], mod n d == 0 ]
you might solve the problem like this:
foo xs = concat $ (map divisors xs)
Here we are thinking of solving the problem by first mapping the
divisors function over all of the input elements and then concatenating
all of the resulting lists. You might even think that this is a very
"functional" way of solving the problem.
Another approach would be to write a list comprehension:
bar xs = [ d | x <- xs, d <- divisors x ]
or using do-notation:
bar xs = do x <- xs
            d <- divisors x
            return d
Here it might be said we're thinking a little more
imperatively - first draw a number from the list xs; then draw
a divisor from the divisors of the number and yield it.
It turns out, though, that foo and bar are exactly the same function.
Moreover, these two approaches are exactly the same in any monad.
That is, for any monad, and appropriate monadic functions f and g:
do x <- f
   y <- g x          is the same as:    (join . fmap g) f
   return y
For instance, in the IO monad if we set f = getLine and g = readFile,
we have:
do x <- getLine
   y <- readFile x   is the same as:    (join . fmap readFile) getLine
   return y
The do-block is a more imperative way of expressing the action: first read a
line of input; then treat the returned string as a file name, read the contents
of the file and finally return the result.
The equivalent join expression seems a little unnatural in the IO monad.
However, it shouldn't, since we are using it in exactly the same way as we
used concatMap in the first example.
Given an action that produces another action, run the action and then run the action that it produces.
If you imagine some kind of Parser x monad that parses an x, then Parser (Parser x) is a parser that does some parsing, and then returns another parser. So join would flatten this into a Parser x that just runs both actions and returns the final x.
Why would you even have a Parser (Parser x) in the first place? Basically, because fmap. If you have a parser, you can fmap a function that changes the result over it. But if you fmap a function that itself returns a parser, you end up with a Parser (Parser x), where you probably want to just run both actions. join implements "just run both actions".
I like the parsing example because a parser typically has a runParser function. And it's clear that a Parser Int is not an integer. It's something that can parse an integer, after you give it some input to parse from. I think a lot of people end up thinking of an IO Int as being just a normal integer but with this annoying IO bit that you can't get rid of. It isn't. It's an unexecuted I/O operation. There's no integer "inside" it; the integer doesn't exist until you actually perform the I/O.
I find these things easier to interpret by writing out the types and refactoring them a bit to reveal what the functions do.
Reader monad
The Reader type is defined thus, and its join function has the type shown:
newtype Reader r a = Reader { runReader :: r -> a }
join :: Reader r (Reader r a) -> Reader r a
Since this is a newtype, the type Reader r a is isomorphic to r -> a; modulo the wrapper, Reader r is "the same" (with scare quotes) as the function monad (->) r.
In the (->) r monad, which is isomorphic to Reader r, join is the function:
join :: (r -> r -> a) -> r -> a
So the Reader join is the function that takes a two-place function (r -> r -> a) and applies to the same value at both its argument positions.
Writer monad
Since the Writer type has this definition:
newtype Writer w a = Writer { runWriter :: (a, w) }
...then when we remove the newtype, its join function has a type isomorphic to:
join :: Monoid w => ((a, w), w) -> (a, w)
The Monoid constraint needs to be there because the Monad instance for Writer requires it, and it lets us guess right away what the function does:
join ((a, w0), w1) = (a, w0 <> w1)
State monad
Similarly, since State has this definition:
newtype State s a = State { runState :: s -> (a, s) }
...then its join is like this:
join :: (s -> (s -> (a, s), s)) -> s -> (a, s)
...and you can also venture just writing it directly:
join f s0 = (a, s2)
  where
    (g, s1) = f s0
    (a, s2) = g s1

{- Here's the "map" to the variable names in the function:

        f     g         s2   s1      s0       s2
join :: (s -> (s -> (a, s ), s )) -> s -> (a, s )
-}
If you stare at this type a bit, you might think that it bears some resemblance to both the Reader and Writer's types for their join operations. And you'd be right! The Reader, Writer and State monads are all instances of a more general pattern called update monads.
List monad
join :: [[a]] -> [a]
As other people have pointed out, this is the type of the concat function.
Parsing monads
Here comes a really neat thing to realize. Very often, "fancy" monads turn out to be combinations or variants of "basic" ones like Reader, Writer, State or lists. So often what I do when confronted with a novel monad is ask: which of the basic monads does it resemble, and how?
Take for example parsing monads, which have been brought up in other answers here. A simplistic parser monad (with no support for important things like error reporting) looks like this:
newtype Parser a = Parser { runParser :: String -> [(a, String)] }
A Parser is a function that takes a string as input, and returns a list of candidate parses, where each candidate parse is a pair of:
A parse result of type a;
The leftovers (the suffix of the input string that was not consumed in that parse).
But notice that this type looks very much like the state monad:
newtype Parser a = Parser { runParser :: String -> [(a, String)] }
newtype State s a = State { runState :: s -> (a, s) }
And this is no accident! Parser monads are nondeterministic state monads, where the state is the unconsumed portion of the input string, and parse steps generate alternatives that may be later rejected in light of further input. List monads are often called "nondeterminism" monads, so it's no surprise that a parser resembles a mix of the state and list monads.
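To make that concrete, here is a sketch (my own, not from the answer) of join written directly for the toy Parser above: run the outer parser, and for each candidate inner parser, run it on the corresponding leftovers.
joinParser :: Parser (Parser a) -> Parser a
joinParser pp = Parser $ \s ->
  [ (a, s'') | (p, s')  <- runParser pp s   -- run the outer parser
             , (a, s'') <- runParser p s'   -- run each inner parser on its leftovers
             ]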
And this intuition can be systematized by using monad transformers. The state monad transformer is defined like this:
newtype StateT s m a = StateT { runStateT :: s -> m (a, s) }
Which means that the Parser type from above can be written like this as well:
type Parser a = StateT String [] a
...and its Monad instance follows mechanically from those of StateT and [].
The IO monad
Imagine we could enumerate all of the possible primitive IO actions, somewhat like this:
{-# LANGUAGE GADTs #-}

data Command a where
  -- An action that writes a char to stdout
  PutChar :: Char -> Command ()
  -- An action that reads a char from stdin
  GetChar :: Command Char
  -- ...
Then we could think of the IO type as this (which I've adapted from the highly-recommended Operational monad tutorial):
data IO a where
  -- An `IO` action that just returns a constant value.
  Return :: a -> IO a
  -- An action that binds the result of a `Command` to
  -- a function that computes the next step after it.
  Bind :: Command x -> (x -> IO a) -> IO a

instance Monad IO where ...
Then join would look like this:
join :: IO (IO a) -> IO a
-- If the action is just `Return`, then its payload already
-- is what we need to return.
join (Return ioa) = ioa
-- If the action is a `Bind`, then its "next step" function
-- `f` produces `IO (IO a)`, so we can just recursively stick
-- a `join` to its result end.
join (Bind cmd f) = Bind cmd (join . f)
So all that the join does here is "chase down" the IO action until it sees a result that fits the pattern Return (ma :: IO a), and strip out the outer Return.
So what did I do here? Just like for parser monads, I just defined (or rather copied) a toy model of the IO type that has the virtue of being transparent. Then I work out the behavior of join from the toy model.

When is unsafeInterleaveIO unsafe?

Unlike other unsafe* operations, the documentation for unsafeInterleaveIO is not very clear about its possible pitfalls. So exactly when is it unsafe? I would like to know the condition for both parallel/concurrent and the single threaded usage.
More specifically, are the two functions in the following code semantically equivalent? If not, when and how?
joinIO :: IO a -> (a -> IO b) -> IO b
joinIO a f = do !x <- a
                !x' <- f x
                return x'

joinIO' :: IO a -> (a -> IO b) -> IO b
joinIO' a f = do !x <- unsafeInterleaveIO a
                 !x' <- unsafeInterleaveIO $ f x
                 return x'
Here's how I would use this in practice:
data LIO a = LIO {runLIO :: IO a}

instance Functor LIO where
  fmap f (LIO a) = LIO (fmap f a)

instance Monad LIO where
  return x = LIO $ return x
  a >>= f  = LIO $ lazily a >>= lazily . f
    where
      lazily = unsafeInterleaveIO . runLIO

iterateLIO :: (a -> LIO a) -> a -> LIO [a]
iterateLIO f x = do
  x' <- f x
  xs <- iterateLIO f x'  -- IO monad would diverge here
  return $ x:xs

limitLIO :: (a -> LIO a) -> a -> (a -> a -> Bool) -> LIO a
limitLIO f a converged = do
  xs <- iterateLIO f a
  return . snd . head . filter (uncurry converged) $ zip xs (tail xs)

root2 = runLIO $ limitLIO newtonLIO 1 converged
  where
    newtonLIO x = do
      () <- LIO $ print x
      LIO $ print "lazy io"
      return $ x - f x / f' x
    f x  = x^2 - 2
    f' x = 2 * x
    converged x x' = abs (x - x') < 1E-15
Although I would rather avoid using this code in serious applications because of the terrifying unsafe* stuff, I could at least be lazier than would be possible with the stricter IO monad in deciding what 'convergence' means, leading to (what I think is) more idiomatic Haskell. And this brings up another question: why is it not the default semantics for Haskell's (or GHC's?) IO monad? I've heard of resource-management issues with lazy IO (which GHC provides only through a small fixed set of functions), but the examples typically given somewhat resemble a broken makefile: a resource X depends on a resource Y, but if you fail to specify the dependency, you get an undefined status for X. Is lazy IO really the culprit for this problem? (On the other hand, if there is a subtle concurrency bug in the above code, such as a deadlock, I would take it as a more fundamental problem.)
Update
Reading Ben's and Dietrich's answers and the comments below them, I have briefly browsed the GHC source code to see how the IO monad is implemented in GHC. Here I summarize my few findings.
GHC implements Haskell as an impure, non-referentially-transparent language. GHC's runtime operates by successively evaluating impure functions with side effects just like any other functional languages. This is why the evaluation order matters.
unsafeInterleaveIO is unsafe because it can introduce any kind of concurrency bug even in a single-threaded program by exposing the (usually) hidden impurity of GHC's Haskell. (iteratee seems to be a nice and elegant solution for this, and I will certainly learn how to use it.)
the IO monad must be strict because a safe, lazy IO monad would require a precise (lifted) representation of the RealWorld, which seems impossible.
It's not just the IO monad and unsafe functions that are unsafe. The whole Haskell (as implemented by GHC) is potentially unsafe, and 'pure' functions in (GHC's) Haskell are only pure by convention and the people's goodwill. Types can never be a proof for purity.
To see this, I demonstrate how GHC's Haskell is not referentially transparent regardless of the IO monad, regardless of the unsafe* functions, etc.
-- An evil example of a function whose result depends on a particular
-- evaluation order without reference to unsafe* functions or even
-- the IO monad.
{-# LANGUAGE MagicHash #-}
{-# LANGUAGE UnboxedTuples #-}
{-# LANGUAGE BangPatterns #-}
import GHC.Prim

f :: Int -> Int
f x = let v = myVar 1
          -- removing the strictness in the following changes the result
          !x' = h v x
      in g v x'

g :: MutVar# RealWorld Int -> Int -> Int
g v x = let !y = addMyVar v 1
        in x * y

h :: MutVar# RealWorld Int -> Int -> Int
h v x = let !y = readMyVar v
        in x + y

myVar :: Int -> MutVar# (RealWorld) Int
myVar x =
  case newMutVar# x realWorld# of
    (# _ , v #) -> v

readMyVar :: MutVar# (RealWorld) Int -> Int
readMyVar v =
  case readMutVar# v realWorld# of
    (# _ , x #) -> x

addMyVar :: MutVar# (RealWorld) Int -> Int -> Int
addMyVar v x =
  case readMutVar# v realWorld# of
    (# s , y #) ->
      case writeMutVar# v (x+y) s of
        s' -> x + y

main = print $ f 1
Just for easy reference, I collected some of the relevant definitions
for the IO monad as implemented by GHC.
(All the paths below are relative to the top directory of the ghc's source repository.)
-- Firstly, according to "libraries/base/GHC/IO.hs",
{-
The IO Monad is just an instance of the ST monad, where the state is
the real world. We use the exception mechanism (in GHC.Exception) to
implement IO exceptions.
...
-}
-- And indeed in "libraries/ghc-prim/GHC/Types.hs", We have
newtype IO a = IO (State# RealWorld -> (# State# RealWorld, a #))
-- And in "libraries/base/GHC/Base.lhs", we have the Monad instance for IO:
data RealWorld
instance Functor IO where
  fmap f x = x >>= (return . f)

instance Monad IO where
  m >> k  = m >>= \ _ -> k
  return  = returnIO
  (>>=)   = bindIO
  fail s  = failIO s
returnIO :: a -> IO a
returnIO x = IO $ \ s -> (# s, x #)
bindIO :: IO a -> (a -> IO b) -> IO b
bindIO (IO m) k = IO $ \ s -> case m s of (# new_s, a #) -> unIO (k a) new_s
unIO :: IO a -> (State# RealWorld -> (# State# RealWorld, a #))
unIO (IO a) = a
-- Many of the unsafe* functions are defined in "libraries/base/GHC/IO.hs":
unsafePerformIO :: IO a -> a
unsafePerformIO m = unsafeDupablePerformIO (noDuplicate >> m)
unsafeDupablePerformIO :: IO a -> a
unsafeDupablePerformIO (IO m) = lazy (case m realWorld# of (# _, r #) -> r)
unsafeInterleaveIO :: IO a -> IO a
unsafeInterleaveIO m = unsafeDupableInterleaveIO (noDuplicate >> m)
unsafeDupableInterleaveIO :: IO a -> IO a
unsafeDupableInterleaveIO (IO m)
  = IO ( \ s -> let
                   r = case m s of (# _, res #) -> res
                in
                   (# s, r #))
noDuplicate :: IO ()
noDuplicate = IO $ \s -> case noDuplicate# s of s' -> (# s', () #)
-- The auto-generated file "libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs"
-- list types of all the primitive impure functions. For example,
data MutVar# s a
data State# s
newMutVar# :: a -> State# s -> (# State# s,MutVar# s a #)
-- The actual implementations are found in "rts/PrimOps.cmm".
So, for example, ignoring the constructor and assuming referential transparency,
we have
unsafeDupableInterleaveIO m >>= f
==> (let u = unsafeDupableInterleaveIO)
u m >>= f
==> (definition of (>>=) and ignore the constructor)
\s -> case u m s of
        (# s', a' #) -> f a' s'
==> (definition of u and let snd# x = case x of (# _, r #) -> r)
\s -> case (let r = snd# (m s)
            in (# s, r #)
           ) of
        (# s', a' #) -> f a' s'
==>
\s -> let r = snd# (m s)
      in case (# s, r #) of
           (# s', a' #) -> f a' s'
==>
\s -> f (snd# (m s)) s
This is not what we would normally get from binding usual lazy state monads.
Assuming the state variable s carries some real meaning (which it does not), it looks more like a concurrent IO (or interleaved IO as the function rightly says) than a lazy IO as we would normally mean by 'lazy state monad' wherein despite the laziness the states are properly threaded by an associative operation.
I tried to implement a truly lazy IO monad, but soon realized that in order to define a lazy monadic composition for the IO datatype, we need to be able to lift/unlift the RealWorld. However, this seems impossible because there is no constructor for either State# s or RealWorld. And even if that were possible, I would then need a precise, functional representation of our RealWorld, which is impossible, too.
But I'm still not sure whether the standard Haskell 2010 breaks referential transparency or the lazy IO is bad in itself. At least it seems entirely possible to build a small model of the RealWorld on which the lazy IO is perfectly safe and predictable. And there might be a good enough approximation that serves many practical purposes without breaking the referential transparency.
At the top, the two functions you have are always identical.
v1 = do !a <- x
        y

v2 = do !a <- unsafeInterleaveIO x
        y
Remember that unsafeInterleaveIO defers the IO operation until its result is forced -- yet you are forcing it immediately by using a strict pattern match !a, so the operation is not deferred at all. So v1 and v2 are exactly the same.
In general
In general, it is up to you to prove that your use of unsafeInterleaveIO is safe. If you call unsafeInterleaveIO x, then you have to prove that x can be called at any time and still produce the same output.
Modern sentiment about Lazy IO
...is that Lazy IO is dangerous and a bad idea 99% of the time.
The chief problem that it is trying to solve is that IO has to be done in the IO monad, but you want to be able to do incremental IO and you don't want to rewrite all of your pure functions to call IO callbacks to get more data. Incremental IO is important because it uses less memory, allowing you to operate on data sets that don't fit in memory without changing your algorithms too much.
Lazy IO's solution is to do IO outside of the IO monad. This is not generally safe.
Today, people are solving the problem of incremental IO in different ways by using libraries like Conduit or Pipes. Conduit and Pipes are much more deterministic and well-behaved than Lazy IO, solve the same problems, and do not require unsafe constructs.
Remember that unsafeInterleaveIO is really just unsafePerformIO with a different type.
Example
Here is an example of a program that is broken due to lazy IO:
rot13 :: Char -> Char
rot13 x
  | (x >= 'a' && x <= 'm') || (x >= 'A' && x <= 'M') = toEnum (fromEnum x + 13)
  | (x >= 'n' && x <= 'z') || (x >= 'N' && x <= 'Z') = toEnum (fromEnum x - 13)
  | otherwise = x

rot13file :: FilePath -> IO ()
rot13file path = do
  x <- readFile path
  let y = map rot13 x
  writeFile path y

main = rot13file "test.txt"
This program will not work: readFile reads the file lazily, so the file is still open (and largely unread) when writeFile tries to open the same file for writing. Replacing the lazy IO with strict IO makes it work.
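One possible fix, as a sketch: force the entire contents to be read before writing, so the lazy read no longer holds the file open when writeFile opens it.
rot13file' :: FilePath -> IO ()
rot13file' path = do
  x <- readFile path
  length x `seq` writeFile path (map rot13 x)  -- length forces the full read first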
Links
From Lazy IO breaks purity by Oleg Kiselyov on the Haskell mailing list:
We demonstrate how lazy IO breaks referential transparency. A pure
function of the type Int->Int->Int gives different integers depending
on the order of evaluation of its arguments. Our Haskell98 code uses
nothing but the standard input. We conclude that extolling the purity
of Haskell and advertising lazy IO is inconsistent.
...
Lazy IO should not be considered good style. One of the common
definitions of purity is that pure expressions should evaluate to the
same results regardless of evaluation order, or that equals can be
substituted for equals. If an expression of the type Int evaluates to
1, we should be able to replace every occurrence of the expression with
1 without changing the results and other observables.
From Lazy vs correct IO by Oleg Kiselyov on the Haskell mailing list:
After all, what could be more against
the spirit of Haskell than a `pure' function with observable side
effects. With Lazy IO, one indeed has to choose between correctness
and performance. The appearance of such code is especially strange
after the evidence of deadlocks with Lazy IO, presented on this list
less than a month ago. Let alone unpredictable resource usage and
reliance on finalizers to close files (forgetting that GHC does not
guarantee that finalizers will be run at all).
Kiselyov wrote the Iteratee library, which was the first real alternative to lazy IO.
Laziness means that when (and whether) exactly a computation is actually carried out depends on when (and whether) the runtime implementation decides it needs the value. As a Haskell programmer you completely relinquish control over the evaluation order (except by the data dependencies inherent in your code, and when you start playing with strictness to force the runtime to make certain choices).
That's great for pure computations, because the result of a pure computation will be exactly the same whenever you do it (except that if you carry out computations that you don't actually need, you might encounter errors or fail to terminate, when another evaluation order might allow the program to terminate successfully; but all non-bottom values computed by any evaluation order will be the same).
But when you're writing IO-dependent code, evaluation order matters. The whole point of IO is to provide a mechanism for building computations whose steps depend on and affect the world outside the program, and an important part of doing that is that those steps are explicitly sequenced. Using unsafeInterleaveIO throws away that explicit sequencing, and relinquishes control of when (and whether) the IO operation is actually carried out to the runtime system.
This is unsafe in general for IO operations, because there may be dependencies between their side-effects which cannot be inferred from the data dependencies inside the program. For example, one IO action might create a file with some data in it, and another IO action might read the same file. If they're both executed "lazily", then they'll only get run when the resulting Haskell value is needed. Creating the file is probably IO () though, and it's quite possible that the () is never needed. That could mean that the read operation is carried out first, either failing or reading data that was already in the file, but not the data that should have been put there by the other operation. There's no guarantee that the runtime system will execute them in the right order. To program correctly with a system that always did this for IO you'd have to be able to accurately predict the order in which the Haskell runtime will choose to perform the various IO actions.
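Here is a sketch of that hazard (the file name is made up): nothing ever demands the () produced by the deferred write, so the deferred read may run first, fail, or see stale data, and the write may never happen at all.
import System.IO.Unsafe (unsafeInterleaveIO)

main :: IO ()
main = do
  _written <- unsafeInterleaveIO (writeFile "shared.txt" "hello")  -- nothing forces this
  contents <- unsafeInterleaveIO (readFile "shared.txt")
  putStrLn contents  -- forces the read, but not the write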
Treat unsafeInterleaveIO as a promise to the compiler (which it cannot verify, it's just going to trust you) that it doesn't matter when the IO action is carried out, or whether it's elided entirely. This is really what all the unsafe* functions are; they provide facilities that are not safe in general, and for which safety cannot be automatically checked, but which can be safe in particular instances. The onus is on you to ensure that your use of them is in fact safe. But if you make a promise to the compiler, and your promise is false, then unpleasant bugs can be the result. The "unsafe" in the name is to scare you into thinking about your particular case and deciding whether you really can make the promise to the compiler.
Basically everything under "Update" in the question is so confused it's not even wrong, so please try to forget it when you're trying to understand my answer.
Look at this function:
badLazyReadlines :: Handle -> IO [String]
badLazyReadlines h = do
  l <- unsafeInterleaveIO $ hGetLine h
  r <- unsafeInterleaveIO $ badLazyReadlines h
  return (l:r)
In addition to what I'm trying to illustrate: the above function also doesn't handle reaching the end of the file. But ignore that for now.
main = do
  h <- openFile "example.txt" ReadMode
  lns <- badLazyReadlines h
  putStrLn $ lns !! 4
This will print the first line of "example.txt", because the 5th element in the list is actually the first line that's read from the file.
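For contrast, here is a strict version (my own sketch, which also handles end-of-file) that returns the lines in the order they occur in the file:
import System.IO (Handle, hGetLine, hIsEOF)

strictReadlines :: Handle -> IO [String]
strictReadlines h = do
  eof <- hIsEOF h
  if eof
    then return []
    else do
      l <- hGetLine h
      r <- strictReadlines h
      return (l : r)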
Your joinIO and joinIO' are not semantically equivalent. They will usually be the same, but there's a subtlety involved: a bang pattern makes a value strict, but that's all it does. Bang patterns are implemented using seq, and seq does not enforce a particular evaluation order; in particular, the following two are semantically equivalent:
a `seq` b `seq` c
b `seq` a `seq` c
GHC can evaluate either b or a first before returning c. Indeed, it can evaluate c first, then a and b, then return c. Or, if it can statically prove a or b are non-bottom, or that c is bottom, it doesn't have to evaluate a or b at all. Some optimisations do genuinely make use of this fact, but it doesn't come up very often in practice.
unsafeInterleaveIO, by contrast, is sensitive to all or any of those changes – it does not depend on the semantic property of how strict some function is, but the operational property of when something is evaluated. So all of the above transformations are visible to it, which is why it's only reasonable to view unsafeInterleaveIO as performing its IO non-deterministically, more or less whenever it feels appropriate.
This is, in essence, why unsafeInterleaveIO is unsafe - it is the only mechanism in normal use that can detect transformations that ought to be meaning-preserving. It's the only way you can detect evaluation, which by rights ought to be impossible.
As an aside, it's probably fair to mentally prepend unsafe to every function from GHC.Prim, and probably several other GHC.* modules as well. They're certainly not ordinary Haskell.

Haskell data serialization of some data implementing a common type class

Let's start with the following
data A = A String deriving Show
data B = B String deriving Show

class X a where
  spooge :: a -> Q

[ Some implementations of X for A and B ]
Now let's say we have custom implementations of show and read, named show' and read' respectively which utilize Show as a serialization mechanism. I want show' and read' to have types
show' :: X a => a -> String
read' :: X a => String -> a
So I can do things like
f :: [String] -> [Q]
f d = map (\x -> spooge $ read' x) d
Where data could have been
[show' (A "foo"), show' (B "bar")]
In summary, I wanna serialize stuff of various types which share a common typeclass so I can call their separate implementations on the deserialized stuff automatically.
Now, I realize you could write some template haskell which would generate a wrapper type, like
data XWrap = AWrap A | BWrap B deriving (Show)
and serialize the wrapped type which would guarantee that the type info would be stored with it, and that we'd be able to get ourselves back at least an XWrap... but is there a better way using haskell ninja-ery?
EDIT
Okay, I need to be more application specific. This is an API. Users will define their As, Bs, and fs as they see fit. I don't ever want them hacking through the rest of the code updating their XWraps, or switches, or anything. The most I'm willing to compromise on is one list somewhere of all the A, B, etc. in some format. Why?
Here's the application. A is "Download a file from an FTP server." B is "convert from flac to mp3". A contains username, password, port, etc. information. B contains file path information. There could be MANY As and Bs. Hundreds. As many as people are willing to compile into the program. Two was just an example. A and B are Xs, and Xs shall be called "Tickets." Q is IO (). Spooge is runTicket. I want to read the tickets off into their relevant data types and then write generic code that will runTicket on the stuff read' from the stuff on disk. At some point I have to jam type information into the serialized data.
I'd first like to stress for all our happy listeners out there that XWrap is a very good way, and a lot of the time you can write one yourself faster than writing it using Template Haskell.
You say you can get back "at least an XWrap", as if that meant you couldn't recover the types A and B from XWrap or you couldn't use your typeclass on them. Not true! You can even define
separateAB :: [XWrap] -> ([A],[B])
If you didn't want them mixed together, you should serialise them separately!
This is nicer than haskell ninja-ery; maybe you don't need to handle arbitrary instances, maybe just the ones you made.
Do you really need your original types back? If you feel like using existential types because you just want to spooge your deserialised data, why not either serialise the Q itself, or have some intermediate data type PoisedToSpooge that you serialise, which can deserialise to give you all the data you need for a really good spooging. Why not make it an instance of X too?
You could add a method to your X class that converts to PoisedToSpooge.
You could call it something fun like toPoisedToSpooge, which trips nicely off the tongue, don't you think? :)
Anyway this would remove your typesystem complexity at the same time as resolving the annoying ambiguous type in
f d = map (\x -> spooge $ read' x) d -- oops, the type of read' x depends on the String
You can replace read' with
stringToPoisedToSpoogeToDeserialise :: String -> PoisedToSpooge -- use to deserialise
and define
f d = map (\x -> spooge $ stringToPoisedToSpoogeToDeserialise x) d -- no ambiguous type
which we could of course write more succincly as
f = map (spooge.stringToPoisedToSpoogeToDeserialise)
although I recognise the irony here in suggesting making your code more succinct. :)
If what you really want is a heterogeneous list then use existential types. If you want serialization then use Cereal + ByteString. If you want dynamic typing, which is what I think your actual goal is, then use Data.Dynamic. If none of this is what you want, or you want me to expand please press the pound key.
Based on your edit, I don't see any reason a list of thunks won't work. In what way does IO () fail to represent both the operations of "Download a file from an FTP server" and "convert from flac to MP3"?
I'll assume you want to do more things with deserialised Tickets
than run them, because if not you may as well ask the user to supply a bunch of String -> IO()
or similar, nothing clever needed at all.
If so, hooray! It's not often I feel it's appropriate to recommend advanced language features like this.
class Ticketable a where
  show'     :: a -> String
  read'     :: String -> Maybe a
  runTicket :: a -> IO ()
  -- other useful things to do with tickets
This all hinges on the type of read'. read' :: Ticketable a => String -> a isn't very useful,
because the only thing it can do with invalid data is crash.
If we change the type to read' :: Ticketable a => String -> Maybe a, this allows us to read from disk and
try all the possibilities, or fail altogether.
(Alternatively you could use a parser: parse :: Ticketable a => String -> Maybe (a,String).)
Let's use a GADT to give us ExistentialQuantification without the syntax and with nicer error messages:
{-# LANGUAGE GADTs #-}

data Ticket where
  MkTicket :: Ticketable a => a -> Ticket

showT :: Ticket -> String
showT (MkTicket a) = show' a

runT :: Ticket -> IO ()
runT (MkTicket a) = runTicket a
Notice how the MkTicket constructor supplies the context Ticketable a for free! GADTs are great.
It would be nice to make Ticket an instance of Ticketable, but that won't work, because there would be
an ambiguous type a hidden in it. Let's take functions that read Ticketable types and make them read
Tickets.
ticketize :: Ticketable a => (String -> Maybe a) -> (String -> Maybe Ticket)
ticketize = ((.).fmap) MkTicket -- a little pointfree fun
You could use some unusual sentinel string such as
"\n-+-+-+-+-+-Ticket-+-+-+-Border-+-+-+-+-+-+-+-\n" to separate your serialised data or better, use separate files
altogether. For this example, I'll just use "\n" as the separator.
readTickets :: [String -> Maybe Ticket] -> String -> [Maybe Ticket]
readTickets readers xs = map (foldr orelse (const Nothing) readers) (lines xs)

orelse :: (a -> Maybe b) -> (a -> Maybe b) -> (a -> Maybe b)
(f `orelse` g) x = case f x of
  Nothing -> g x
  just_y  -> just_y
Now let's get rid of the Justs and ignore the Nothings:
runAll :: [String -> Maybe Ticket] -> String -> IO ()
runAll ps xs = mapM_ runT . catMaybes $ readTickets ps xs
Let's make a trivial ticket that just prints the contents of some directory
newtype Dir = Dir {unDir :: FilePath} deriving Show

readDir xs = let (front,back) = splitAt 4 xs in
  if front == "dir:" then Just $ Dir back else Nothing

instance Ticketable Dir where
  show' (Dir p) = "dir:" ++ show p
  read' = readDir
  runTicket (Dir p) = doesDirectoryExist p >>= flip when
    (getDirectoryContents >=> mapM_ putStrLn $ p)
and an even more trivial ticket
data HelloWorld = HelloWorld deriving Show

readHW "HelloWorld" = Just HelloWorld
readHW _            = Nothing

instance Ticketable HelloWorld where
  show' HelloWorld = "HelloWorld"
  read' = readHW
  runTicket HelloWorld = putStrLn "Hello World!"
and then put it all together:
myreaders = [ticketize readDir,ticketize readHW]
main = runAll myreaders $ unlines ["HelloWorld",".","HelloWorld","..",",HelloWorld"]
Just use Either. Your users don't even have to wrap it themselves. You have your deserializer wrap it in the Either for you. I don't know exactly what your serialization protocol is, but I assume that you have some way to detect which kind of request, and the following example assumes the first byte distinguishes the two requests:
deserializeRequest :: IO (Either A B)
deserializeRequest = do
  byte <- get1stByte
  case byte of
    0 -> do
      ...
      return $ Left $ A <A's fields>
    1 -> do
      ...
      return $ Right $ B <B's fields>
Then you don't even need to type-class spooge. Just make it a function of Either A B:
spooge :: Either A B -> Q

Haskell: how to deal with IO monad inside another IO monad?

just started learning haskell - love it after a week. At the moment going through the monads pain, not there yet but hopefully it will click.
I am trying to put together a function similar to pythons walk() but simpler.
Given a path I want to generate a list of tuples, one for each subdirectory (let's just assume that there will be only directories). The tuple would contain the path of the directory as its first element and the list of files the directory contains as the second element.
I don't know if I explained it correctly but here is the code:
walkDir :: String -> IO [IO (FilePath, [FilePath])]
walkDir path = do
    dir <- getDirectoryContents path
    let nd = [x | x <- dir, notElem x [".",".."]]
    return (map getcont nd)
  where
    getcont path = do
      cont <- getDirectoryContents path
      return (path,cont)
My concern is IO inside IO and how to deal with it?? Is it possible to cancel them out?
Is it possible to unwrap at least the internal IO? Is this kind of return normal?
I cannot even print this kind of return value. Do I have to create an instance of Show for this to be printed correctly?
There most likely exists a similar function in some Haskell library, but this is for educational purposes. I want to learn. So any suggestions are welcome.
Thank you very much.
Check out Control.Monad for mapM.
Then this
return (map getcont nd)
becomes
mapM getcont nd
Let's look at the types.
map :: (a -> b) -> [a] -> [b]
getCont :: FilePath -> IO (FilePath, [FilePath])
nd :: [FilePath]
map getCont nd :: [IO (FilePath, [FilePath])]
Now, at this point, the structure looks inside-out. We have [IO a] but we want IO [a]. Stop! Hoogle time. Generalizing for any ol' monad, we hoogle [m a] -> m [a]. Lo and behold, sequence has that precise type signature. So instead of return :: a -> m a you should use sequence :: [m a] -> m [a], like this:
sequence (map getCont nd)
Then you'll be all set. Notice that this is essentially identical to Kurt S's solution, because
mapM f xs = sequence (map f xs)
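Putting it together, here is a sketch of the question's walkDir using mapM (it keeps the question's behaviour, including not prefixing path onto the subdirectory names before listing them):
import System.Directory (getDirectoryContents)

walkDir :: FilePath -> IO [(FilePath, [FilePath])]
walkDir path = do
    dir <- getDirectoryContents path
    let nd = [x | x <- dir, x `notElem` [".", ".."]]
    mapM getcont nd
  where
    getcont p = do
      cont <- getDirectoryContents p
      return (p, cont)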
