I have a function that returns an IO action,
f :: Int -> IO Int
I would like to compute this function in parallel for multiple values of the argument. My naive implementation was as follows:
import Control.Parallel.Strategies
vals = [1..10]
main = do
  results <- mapM f vals
  let results' = results `using` parList rseq
  mapM_ print results'
My reasoning for this was that the first mapM binds something of type IO [Int] to results, results' applies a parallel strategy to the contained list, and the mapM_ finally requests the actual values by printing them - but what is to be printed is already sparked in parallel, so the program should parallelize.
After being happy that it does indeed use all my CPUs, I noticed that the program is less efficient (in wall clock time) when run with +RTS -N8 than without any RTS flags. The only explanation I can think of is that the first mapM has to sequence - i.e. perform - all the IO actions already, but that would not cause a slowdown; it would merely make the -N8 run as fast as the unparallelized one, because all the work is done by the master thread. Running the program with +RTS -N8 -s yields SPARKS: 36 (11 converted, 0 overflowed, 0 dud, 21 GC'd, 4 fizzled), which surely isn't optimal, but unfortunately I can't make any sense of it.
I suppose I've hit one of the beginner's stumbling blocks in Haskell parallelization or the internals of the IO monad. What am I doing wrong?
Background info: f n is a function that returns the solution for Project Euler problem n. Since many of them have data to read, I put the result into the IO monad. An example of what it may look like is
-- Problem 13: Work out the first ten digits of the sum of one-hundred 50-digit numbers.
euler 13 = fmap (first10 . sum) numbers
  where
    numbers = fmap (map read . explode '\n') $ readFile "problem_13"
    first10 n
      | n < 10^10 = n -- 10^10 is the first number with 11 digits
      | otherwise = first10 $ n `div` 10
The full file can be found here (It's a bit long, but the first few "euler X" functions should be representative enough), the main file where I do the parallelism is this one.
Strategies are for parallel execution of pure computations. If it really is mandatory that your f returns an IO value, then consider using the async package instead. It provides useful combinators for running IO actions concurrently.
For your use case, mapConcurrently looks useful:
import Control.Concurrent.Async
vals = [1..10]
main = do
  results <- mapConcurrently f vals
  mapM_ print results
(I haven't tested though, because I don't know what your f is exactly.)
Try the parallel-io package. It allows you to change any mapM_ into parallel_.
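For example, a minimal sketch using parallel-io's global pool might look like this (assuming the f :: Int -> IO Int from the question is in scope; compile with -threaded and run with +RTS -N):
import Control.Concurrent.ParallelIO.Global (parallel_, stopGlobalPool)
main :: IO ()
main = do
  -- like mapM_ (\n -> f n >>= print) [1..10], but the actions run concurrently
  parallel_ [f n >>= print | n <- [1..10]]
  stopGlobalPool  -- shut down the global thread pool before exiting
Since the actions really do run concurrently, the order of the printed results is not guaranteed.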
main = do
  input <- sequence [getLine, getLine, getLine]
  mapM_ print input
Let's see this program in action:
m@m-X555LJ:~$ runhaskell wtf.hs
asdf
jkl
powe
"asdf"
"jkl"
"powe"
Surprisingly to me, there seems to be no laziness here. Instead, all 3 getLines are evaluated eagerly, the read values are stored in memory, and only then are they all printed.
Compare to this:
main = do
  input <- fmap lines getContents
  mapM_ print input
Let's see this in action:
m@m-X555LJ:~$ runhaskell wtf.hs
asdf
"asdf"
lkj
"lkj"
power
"power"
Totally different behaviour: lines are read one by one and printed one by one. This is odd to me, because I don't really see any difference between these two programs.
From LearnYouAHaskell:
When used with I/O actions, sequenceA is the same thing as sequence!
It takes a list of I/O actions and returns an I/O action that will
perform each of those actions and have as its result a list of the
results of those I/O actions. That's because to turn an [IO a] value
into an IO [a] value, to make an I/O action that yields a list of
results when performed, all those I/O actions have to be sequenced so
that they're then performed one after the other when evaluation is
forced. You can't get the result of an I/O action without performing
it.
I'm confused. I don't need to perform ALL IO actions to get the results of just one.
A few paragraphs earlier the book shows a definition of sequenceA:
sequenceA :: (Applicative f) => [f a] -> f [a]
sequenceA [] = pure []
sequenceA (x:xs) = (:) <$> x <*> sequenceA xs
Nice recursion; nothing here hints that this recursion should not be lazy. Just like in any other recursion, to get the head of the returned list Haskell shouldn't have to go through ALL the steps of the recursion!
Compare:
rec :: Int -> [Int]
rec n = n:(rec (n+1))
main = print (head (rec 5))
In action:
m@m-X555LJ:~$ runhaskell wtf.hs
5
m@m-X555LJ:~$
Clearly, the recursion here is performed lazily, not eagerly.
Then why is the recursion in the sequence [getLine, getLine, getLine] example performed eagerly?
As to why it is important that IO actions are run in order
regardless of the results: Imagine an action createFile :: IO () and
writeToFile :: IO (). When I do a sequence [createFile,
writeToFile] I'd hope that they're both done and in order, even
though I don't care about their actual results (which are both the
very boring value ()) at all!
I'm not sure how this applies to this Q.
Maybe I'll word my Q this way...
In my mind this:
do
  input <- sequence [getLine, getLine, getLine]
  mapM_ print input
should desugar to something like this:
do
  input <- do
    input <- concat ( map (fmap (:[])) [getLine, getLine, getLine] )
    return input
  mapM_ print input
Which, in turn, should desugar to something like this (pseudocode, sorry):
do
  [ perform print on the result of getLine,
    perform print on the result of getLine,
    perform print on the result of getLine
  ] and discard the results of those prints since print was applied with mapM_ which discards the results unlike mapM
getContents is lazy, getLine isn't. Lazy IO isn't a feature of Haskell per se, it's a feature of some particular IO actions.
I'm confused. I don't need to perform ALL IO actions to get the results of just one.
Yes you do! That is one of the most important features of IO, that if you write a >> b or equivalently,
do a
   b
then you can be sure that a is definitely "run" before b (see footnote). getContents is actually the same, it "runs" before whatever comes after it... but the result it returns is a sneaky result that sneakily does more IO when you try to evaluate it. That is actually the surprising bit, and it can lead to some very interesting results in practice (like the file you're reading the contents of being deleted or changed while you're processing the results of getContents), so in practical programs you probably shouldn't be using it, it mostly exists for convenience in programs where you don't care about such things (Code Golf, throwaway scripts or teaching for instance).
As to why it is important that IO actions are run in order regardless of the results: Imagine an action createFile :: IO () and writeToFile :: IO (). When I do a sequence [createFile, writeToFile] I'd hope that they're both done and in order, even though I don't care about their actual results (which are both the very boring value ()) at all!
Addressing the edit:
should desugar to something like this:
do
  input <- do
    input <- concat ( map (fmap (:[])) [getLine, getLine, getLine] )
    return input
  mapM_ print input
No, it actually turns into something like this:
do
  input <- do
    x <- getLine
    y <- getLine
    z <- getLine
    return [x,y,z]
  mapM_ print input
The actual definition of sequence is more or less this:
sequence [] = return []
sequence (a:as) = do
  x <- a
  fmap (x:) $ sequence as
Technically, in
sequenceA (x:xs) = (:) <$> x <*> sequenceA xs
we find <*>, which first runs the action on the left, then the action on the right, and finally applies the resulting function to the resulting value. This is what makes the first effect in the list occur first, and so on.
Indeed, on monads, f <*> x is equivalent to
do theF <- f
   theX <- x
   return (theF theX)
More generally, note that all IO actions are executed in order, first to last (see below for a few rare exceptions). Doing IO in a completely lazy way would be a nightmare for the programmer. For instance, consider:
do let aX = print "x" >> return 4
       aY = print "y" >> return 10
   x <- aX
   y <- aY
   print (x+y)
Haskell guarantees that the output is x y 14, in that order. If we had completely lazy IO we could also get y x 14, depending on which argument is forced first by +. In such case, we would need to know exactly the order in which the lazy thunks are demanded by every operation, which is something the programmer definitely does not want to care about. Under such detailed semantics, x + y is no longer equivalent to y + x, breaking equational reasoning in many cases.
Now, if we wanted to force IO to be lazy we could use one of the forbidden functions, e.g.
do let aX = unsafeInterleaveIO (print "x" >> return 4)
       aY = unsafeInterleaveIO (print "y" >> return 10)
   x <- aX
   y <- aY
   print (x+y)
The above code makes aX and aY lazy IO actions, and the order of the output is now at the whim of the compiler and the library implementation of +. This is in general dangerous, hence the unsafeness of lazy IO.
Now, about the exceptions. Some IO actions which only read from the environment, like getContents were implemented with lazy IO (unsafeInterleaveIO). The designers felt that for such reads, lazy IO can be acceptable, and that the precise timing of the reads is not that important in many cases.
Nowadays, this is controversial. While it can be convenient, lazy IO can be too unpredictable in many cases. For instance, we can't know where the file will be closed, and that could matter if we're reading from a socket. We also need to be very careful not to force the reads too early: that often leads to a deadlock when reading from a pipe. Today, it is usually preferred to avoid lazy IO, and resort to some library like pipes or conduit for "streaming"-like operations, where there is no ambiguity.
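For the three-getLine program from the question, a simple way to get the line-by-line behaviour without lazy IO is to interleave the reading and the printing yourself; a minimal sketch:
import Control.Monad (replicateM_)
main :: IO ()
main = replicateM_ 3 (getLine >>= print)  -- read a line, print it, three times
Each line is printed as soon as it is read, and nothing is accumulated.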
Consider the following simple IO function:
req :: IO [Integer]
req = do
  print "x"
  return [1,2,3]
In reality this might be an HTTP request that returns a list after parsing its result.
I'm trying to concatenate the results of several calls of that function in a lazy way.
In simple terms, the following should print the 'x' only two times:
fmap (take 4) req'
--> [1, 2, 3, 4]
I thought this might be solved with sequence or mapM; however, my approach fails in terms of laziness:
import Control.Monad
req' :: IO [Integer]
req' = fmap concat $ mapM req [1..1000] -- should be infinite..
This yields the right result; however, the IO function req is called 1000 times instead of the necessary two. When the above is implemented with a map over an infinite list, the evaluation does not terminate at all.
Short version:
You shouldn't do this, look into a streaming IO library such as pipes or conduit instead.
Long version:
You can't. Or at least, you really shouldn't. Allowing lazily evaluated code to have side effects is generally a very bad idea. Not only does it very quickly become hard to reason about which effects are performed when and how many times, but even worse, effects may not be performed in the order you expect them to be! With pure code, this is not a big deal. With side-effecting code, this is a disaster.
Imagine that you want to read a value from a reference and then replace the value with an updated value. In the IO monad, where the order of computation is well defined, this is easy:
main = do
  yesterdaysDate <- readIORef ref
  writeIORef ref todaysDate
However, if the above code were instead to be evaluated lazily, there would be no guarantee that the reference was read before it was written - or even that both computations would be executed at all. The semantics of the program would depend entirely on if and when we needed the results of the computations. This is one of the reasons for coming up with monads in the first place: to give programmers a way to write code with side effects, which execute in a well-defined and easily understood order.
Now, it is actually possible to lazily concatenate the lists, if you create them using unsafeInterleaveIO:
import System.IO.Unsafe
req :: IO [Integer]
req = unsafeInterleaveIO $ do
  print "x"
  return [1,2,3]
req' :: IO [Integer]
req' = fmap concat $ mapM (const req) [1..1000]
This will cause each application of req to be deferred until the corresponding sublist is needed. However, lazily performing IO like this may lead to interesting race conditions and resource leaks, and is generally frowned upon. The recommended alternative would be to use a streaming IO library such as conduit or pipes, which are mentioned in the comments.
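As a quick (hypothetical) check of that claim, using the unsafeInterleaveIO versions of req and req' above:
main :: IO ()
main = do
  xs <- take 4 <$> req'
  print xs  -- "x" should appear only twice, not 1000 times; xs == [1,2,3,1]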
Here is how you would do something like this with the streaming and pipes libraries. Pipes programs will be somewhat similar to those written with conduit, especially in this sort of case. conduit uses different names, and pipes and conduit have somewhat fancier types and operators than streaming, but it's really a matter of indifference which you use. streaming is, I think, fundamentally simpler in this sort of case; the formulation will be structurally similar to the corresponding IO [a] program and indeed frequently simpler. The essential point is that a Stream (Of Integer) IO () is exactly like a list of Integers, except that it is built so that the elements of the list or stream can arise from successive IO actions.
I gave req an argument in the following, since that seemed to be what you had in mind.
import Streaming
import qualified Streaming.Prelude as S
import Streaming.Prelude (for, each)
req :: Integer -> Stream (Of Integer) IO ()
req x = do -- this 'stream' is just a list of Integers arising in IO
  liftIO $ putStr "Sending request #" >> print x
  each [x..x+2]
req' :: Stream (Of Integer) IO ()
req' = for (S.each [1..]) req -- An infinite succession of requests
                              -- each yielding three numbers. Here we are not
                              -- actually using IO to get each but we could.
main = S.print $ S.take 4 req'
-- >>> main
-- Sending request #1
-- 1
-- 2
-- 3
-- Sending request #2
-- 2
To get our four desired values we had to send two "requests"; we of course don't end up applying req to all Integers! S.take doesn't permit any further development of the infinite stream req' it takes as argument, so only the first element from the second request is ever calculated. Then everything shuts down. The fancy signature Stream (Of Integer) IO () could be replaced by a synonym
type List a = Stream (Of a) IO ()
and you would barely notice the difference from Haskell lists, except that you don't get the apocalypses you noticed. The extra movable parts in the actual signature are distracting here, but they make it possible to replicate the whole API of Data.List in basically every detail while permitting IO and avoiding accumulation everywhere. (Without the further movable parts it is e.g. impossible to write splitAt, partition and chunksOf, and indeed you will find Stack Overflow is awash with questions about how to do these obvious things with e.g. conduit.)
The pipes equivalent is this
import Pipes
import qualified Pipes.Prelude as P
req :: Integer -> Producer Integer IO ()
req x = do
  liftIO $ putStr "Sending request #" >> print x
  each [x..x+2]
req' = for (each [1..]) req
main = runEffect $ req' >-> P.take 4 >-> P.print
-- >>> main
-- Sending request #1
-- 1
-- 2
-- 3
-- Sending request #2
-- 2
It differs by treating take and print as pipes, rather than as ordinary functions on streams as they are with Data.List. This has charm but is not needed in the present context, where the conception of the stream as an effectful list predominates. Intuitively, taking and printing are things we do to a list, even if it is an effectful list as in this case, and the piping and conduiting aspect is a distraction (in bread-and-butter cases it also nearly doubles the time needed for the calculation, due to the cost of >-> and .|, which is akin to that of, say, map).
It might help understanding if we note that req above could have been written
req x = do
  liftIO $ putStr "Sending request #" >> print x
  yield x      -- yield a >> yield b == each [a,b]
  yield (x+1)
  yield (x+2)
This will be word for word the same in streaming, pipes and conduit. yield a >> rest is the same as a : rest. The difference is that a yield a line (in a do block) can be preceded by a bit of IO, e.g. a <- liftIO readLn; yield a.
In general, the list-based mapM, replicateM, traverse and sequence should be avoided - except for short lists - for the reasons you mention. sequence is at the bottom of them all, and it basically has to constitute the whole list before it can proceed. (Note that sequence = mapM id and mapM f = sequence . map f.) Thus we see
>>> sequence [getChar,getChar,getChar] >>= mapM_ print
abc'a' -- here and below I just type abc, ghci prints 'a' 'b' 'c'
'b'
'c'
but with a streaming library we see stuff like
>>> S.mapM_ print $ S.sequence $ S.each [getChar,getChar,getChar]
a'a'
b'b'
c'c'
Similarly
>>> replicateM 3 getChar >>= mapM_ print
abc'a'
'b'
'c'
is a mess - nothing happens till the whole list is constructed, then each of the collected Chars is printed in succession. But with a streaming library we write the simpler
>>> S.mapM_ print $ S.replicateM 3 getChar
a'a'
b'b'
c'c'
and the outputs are in sync with the inputs. In particular, no more than one character is in memory at a time. replicateM_, mapM_ and sequence_, by contrast, don't accumulate lists and aren't a problem. It's the others that should prompt one to think of a streaming library, any streaming library. A monad-general sequence can't do any better than this, as you can see by reflecting on
>>> sequence [Just 1, Just 2, Just 3]
Just [1,2,3]
>>> sequence [Just 1, Just 2, Nothing]
Nothing
If the list were a million Maybe Ints long, it would all have to be remembered and left unused while waiting to see if the last item is Nothing. Since sequence, mapM, replicateM, traverse and company are monad-general, what goes for Maybe goes for IO.
Continuing above, we can similarly collect the list as you seemed to want to do:
main = S.toList_ (S.take 4 req') >>= print
-- >>> main
-- Sending request #1
-- Sending request #2
-- [1,2,3,2]
or, in the pipes version:
main = P.toListM (req' >-> P.take 4) >>= print
-- >>> main
-- Sending request #1
-- Sending request #2
-- [1,2,3,2]
Or to pile on possibilities, we can do IO with each element, while collecting them in a list or vector or whatever
main = do
  ls <- S.toList_ $ S.print $ S.copy $ S.take 4 req'
  print ls
-- >>> main
-- Sending request #1
-- 1
-- 2
-- 3
-- Sending request #2
-- 2
-- [1,2,3,2]
Here I print the copies and save the 'originals' for a list. The games we are playing here start to come up against the limits of pipes and conduit, though this particular program can be replicated with them.
As far as I know, what you're looking for shouldn't/can't be done using mapM and should probably use some form of streaming. In case it's helpful, an example using io-streams:
import qualified System.IO.Streams as Streams
import qualified System.IO.Streams.Combinators as Streams
req :: IO (Maybe [Integer])
req = do
  print "x"
  return (Just [1,2,3])
req' :: IO [Integer]
req' = Streams.toList =<< Streams.take 4 =<< Streams.concatLists =<< Streams.makeInputStream req
The working version of your code:
module Foo where
req :: Integer -> IO [Integer]
req _x = do
  print "x"
  return [1,2,3]
req' :: IO [Integer]
req' = concat <$> mapM req [1..1000]
(Note: I replaced fmap concat with concat <$>.)
When you evaluate fmap (take 4) req', the mapM expression's value is needed, which, in turn, needs the value of the [1..1000] list. So, a 1000-element list is generated and mapM applies the req function to each element -- hence the 1000 'x'-es printed. concat then has to supply a value to the (take 4) section, which produces [1,2,3] repeated 1000 times. Then, and only then, can (take 4) take the first four elements.
All of these computations occur because a value is needed by ghci, if you're at the interpreter's REPL prompt. Otherwise, in an executing program, take 4 is simply stacked in a waiting thunk until its value is actually needed.
Best to think about this as a tree where expressions are pushed onto the root of the tree, replacing the root each time (root becomes a leaf in another expression that needs its value.) When the value at the root of the tree is needed, evaluate from the bottom up.
Now, if you really only wanted req evaluated once and only once because it is truly a constant value, here's the code:
module Foo where
req2 :: IO [Integer]
req2 = do
  print "x"
  return [1,2,3]
req2' :: IO [Integer]
req2' = concat <$> mapM (const req2) ([1..1000] :: [Integer])
req2' is evaluated only once because it evaluates to a constant (no function parameters guarantees this.) Admittedly, though, that's probably not what you really intended.
This is what the pipes and conduit ecosystems were designed for. Here's an example for pipes.
#!/usr/bin/env stack
-- stack runghc --resolver=lts-7.16 --package pipes
module Main where
import Control.Monad (forever)
import Pipes as P
import qualified Pipes.Prelude as P
req :: Producer Int IO ()
req = forever $ do
  liftIO $ putStrLn "Making a request."
  mapM_ yield [1,2,3]
main :: IO ()
main = P.toListM (req >-> P.take 4) >>= print
Note that normally you don't collapse a result into a list using pipes, but that seems to be your use case.
I have the following problem: given a [String] and a String -> IO Int, I can map over the list and get a [IO Int]. Now I have to do two things: perform those actions, from the start, until a result is positive, and I need to know whether the whole list was processed.
I am not allowed to perform any further processing after the first non-positive result.
takeWhileM does not answer the second question (comparing lengths is too impractical), and spanM performs the forbidden IO.
Of course, I could write a recursive function myself, but I want to do it the Haskell way, with all the goodness of higher-order functions.
Any suggestions? Or perhaps a completely different approach?
The task above is a slightly simplified version of a task from my project.
You can use allM from the monad-loops package:
Prelude Control.Monad.Loops> let xs = ["a", "bb", "ccc", "dddd", "eeeee"]
Prelude Control.Monad.Loops> let f x = putStrLn x >> return (length x)
Prelude Control.Monad.Loops> let p x = x < 2
Prelude Control.Monad.Loops> allM (fmap p . f) xs
a
bb
False
There's also an allM in Control.Monad.ListM, but it's not appropriately lazy—it will continue to perform computations after you hit a positive result.
(I'm with you on this, by the way—I hate writing one-off recursive functions.)
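For a compiled program closer to the original setup, a hedged sketch might look like this, where cost is a hypothetical stand-in for the asker's String -> IO Int:
import Control.Monad.Loops (allM)
-- purely illustrative; any String -> IO Int would do
cost :: String -> IO Int
cost s = do
  putStrLn ("processing " ++ s)
  return (length s - 3)
main :: IO ()
main = do
  -- stops at the first positive result; True means the whole list was processed
  allProcessed <- allM (fmap (<= 0) . cost) ["a", "bb", "ccc", "dddd", "eeeee"]
  print allProcessed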
I'm not familiar with the functions takeWhileM and spanM (and neither is hoogle) (edit: as per comment, they can be found in Control.Monad.ListM).
Given that, I think the best thing for you to do is to make a one-off function to perform this task. If it later turns out that you need to write code to do something similar, then you can factor out the common parts and re-use them. There's nothing wrong with writing one-off code in general, it's code duplication that's bad.
There are a few ways to write the function you want - one possible way is like this:
process :: [IO Int] -> IO Bool
process [] = return True
process [a] = a >> return True
process (a:as) = do
  n <- a
  if n > 0
    then return False
    else process as
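A hypothetical usage, assuming the process function above is in scope:
main :: IO ()
main = do
  ok <- process [return (-1), return 0, return 2, return 5]
  print ok  -- False: the third action gives a positive result, so the fourth is never run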
@illusionoflife: I don't see how using takeWhileM would improve on @Chris's solution.
For example:
import Control.Monad.ListM
process :: [IO Int] -> IO Bool
process as = do
  taken <- takeWhileM (>>= return . (<= 0)) as
  return (length taken >= length as - 1)
(Code not verified!)
@Chris's solution looks more readable, among other things because in it we don't need to figure out whether we should use >= or ==. Besides, since I call length, this can't be used on an infinite input list.
Recently I was trying to determine the time needed to calculate a waveform using the vector storage type.
I wanted to do so without requiring to print the length or something like that. Finally I came up with the following two definitions. It seems simple enough, and from what I can tell it prints a non-zero computation time as expected the first time I run the function, but I'm wondering if there are any laziness caveats here that I've missed.
{-# LANGUAGE BangPatterns #-}
import System.IO
import System.CPUTime
import qualified Data.Vector.Storable as V
timerIO f = do
  start <- getCPUTime
  x <- f
  let !y = x
  end <- getCPUTime
  let diff = (fromIntegral (end - start)) / (10^12)
  print $ "Computation time: " ++ show diff ++ " sec\n"
timer f = timerIO $ do return f
main :: IO ()
main = do
  let sr = 1000.0
      time = V.map (/ sr) $ V.enumFromN 0 120000 :: V.Vector Float
      wave = V.map (\x -> sin $ x * 2 * pi * 10) time :: V.Vector Float
  timer wave
  timer wave
prints,
Computation time: 0.16001 sec
Computation time: 0.0 sec
Are there any hidden bugs here? I'm really not sure that the let with strictness flag is really the best way to go here. Is there a more concise way to write this? Are there any standard functions that already do this that I should know about?
Edit: I should mention that I had read about criterion but in this case I was not looking for a robust way to calculate average timing for profiling-only purposes; rather I was looking for a simple / low-overhead way to integrate single timers into my program for tracing the timing of some computations during normal running of the application. Criterion is cool, but this was a slightly different use case.
If evaluating to weak head normal form is enough - for strict Vectors or UArrays it is - then your timing code works well¹. However, instead of the bang pattern in the let binding, you could put a bang on the monadic bind,
start <- getCPUTime
!x <- f
end <- getCPUTime
which to me looks nicer, or you could use Control.Exception.evaluate
start <- getCPUTime
evaluate f
end <- getCPUTime
which has the advantage of (supposed) portability, whereas bang patterns are a GHC extension. If WHNF is not enough, you would need to force full evaluation, for example using rnf or deepseq, like
start <- getCPUTime
!x <- rnf `fmap` f
end <- getCPUTime
However, repeatedly timing the same computation with that is hairy. If, as in your example, you give the thing a name, and call it
timer wave
timer wave
the compiler shares the computation, so it's only done once, and all timer calls but the first report zero (or very close to zero) time. If you call it with code instead of a name,
timer (V.map (\x -> sin $ x * 2 * pi * 10) time :: V.Vector Float)
timer (V.map (\x -> sin $ x * 2 * pi * 10) time :: V.Vector Float)
the compiler can still share the computation, if it does common subexpression elimination. And although GHC doesn't do much CSE, it does some and I'm rather confident it would spot and share this (when compiling with optimisations). To reliably make the compiler repeat the computations, you need to hide the fact that they are the same from it (or use some low-level internals), which is not easy to do without influencing the time needed for the computation.
¹ It works well if the computation takes a significant amount of time. If it takes only a short time, the jitter introduced by outside influences (CPU load, scheduling, ...) will make single timings far too unreliable. Then you should do multiple measurements, and for that, as has been mentioned elsewhere, the criterion library is an excellent way to relieve you of the burden of writing robust timing code.
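Putting those pieces together, a minimal sketch of such a timer (WHNF only, which is fine for strict structures like storable Vectors) might look like this:
import Control.Exception (evaluate)
import System.CPUTime (getCPUTime)
-- time an IO action, forcing its result to weak head normal form
timeWHNF :: IO a -> IO (a, Double)
timeWHNF act = do
  start <- getCPUTime
  x <- act >>= evaluate
  end <- getCPUTime
  return (x, fromIntegral (end - start) / 1e12)  -- seconds
For the pure value in the question one could call timeWHNF (return wave), with the same sharing caveat as above; swapping evaluate for evaluate . force (from Control.DeepSeq) would force full evaluation instead.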
Are you familiar with the deepseq package? It's used by the criterion package for pretty much the purpose you describe.
Speaking of which, you may want to consider whether criterion itself does what you need anyway.
I have:
main :: IO ()
main = do
  iniciofibonaccimap <- getCPUTime
  let fibonaccimap = map fib listaVintesete
  fimfibonaccimap <- getCPUTime
  let difffibonaccimap = (fromIntegral (fimfibonaccimap - iniciofibonaccimap)) / (10^12)
  printf "Computation time fibonaccimap: %0.3f sec\n" (difffibonaccimap :: Double)
listaVintesete :: [Integer]
listaVintesete = replicate 100 27
fib :: Integer -> Integer
fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)
But
*Main> main
Computation time fibonaccimap: 0.000 sec
I do not understand why this happens.
Help me, thanks.
As others have said, this is due to lazy evaluation. To force evaluation you should use the deepseq package and BangPatterns:
{-# LANGUAGE BangPatterns #-}
import Control.DeepSeq
import Text.Printf
import System.CPUTime
main :: IO ()
main = do
  iniciofibonaccimap <- getCPUTime
  let !fibonaccimap = rnf $ map fib listaVintesete
  fimfibonaccimap <- getCPUTime
  let difffibonaccimap = (fromIntegral (fimfibonaccimap - iniciofibonaccimap)) / (10^12)
  printf "Computation time fibonaccimap: %0.3f sec\n" (difffibonaccimap :: Double)
...
In the above code you should notice three things:
It compiles (modulo the ... of functions you defined above). When you post code for questions, please make sure it runs (in other words, you should include imports).
The use of rnf from deepseq. This forces the evaluation of each element in the list.
The bang pattern on !fibonaccimap, meaning "do this now, don't wait". This forces the list to be evaluated to weak-head normal form (whnf, basically just the first constructor (:)). Without this the rnf function would itself remain unevaluated.
Resulting in:
$ ghc --make ds.hs
$ ./ds
Computation time fibonaccimap: 6.603 sec
If you're intending to do benchmarking you should also use optimization (-O2) and the Criterion package instead of getCPUTime.
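For instance, a hedged Criterion sketch for the computation from the question (assuming fib and listaVintesete are in scope and the criterion package is installed) could be:
import Criterion.Main (bench, defaultMain, nf)
main :: IO ()
main = defaultMain
  [ bench "map fib listaVintesete" $ nf (map fib) listaVintesete ]
nf forces the result to normal form on every run, so laziness cannot hide the work, and Criterion handles the repeated measurements and statistics for you.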
Haskell is lazy. The computation you request in the line
let fibonaccimap = map fib listaVintesete
doesn't actually happen until you somehow use the value of fibonaccimap. Thus to measure the time used, you'll need to introduce something that will force the program to perform the actual computation.
ETA: I originally suggested printing the last element to force evaluation. As TomMD points out, this is nowhere near good enough -- I strongly recommend reading his response here for an actually working way to deal with this particular piece of code.
I suspect you are a "victim" of lazy evaluation. Nothing forces the evaluation of fibonaccimap between the timing calls, so it's not computed.
Edit
I suspect you're trying to benchmark your code, and in that case it should be pointed out that there are better ways to do this more reliably.
10^12 is an integer, which forces the value of fromIntegral to be an integer, which means difffibonaccimap is assigned a rounded value, so it's 0 if the time is less than half a second. (That's my guess, anyway. I don't have time to look into it.)
Lazy evaluation has in fact bitten you, as the other answers have said. Specifically, 'let' doesn't force the evaluation of an expression, it just scopes a variable. The computation won't actually happen until its value is demanded by something, which probably won't happen until an actual IO action needs its value. So you need to put your print statement between your getCPUTime evaluations. Of course, this will also get the CPU time used by print in there, but most of print's time is waiting on IO. (Terminals are slow.)