Frege's equivalent of Haskell's getLine and read - haskell

Is there any Frege's equivalent of Haskell's getLine and read to parse input from the console in the standard library?
Currently I am doing it like this:
import frege.IO

getLine :: IO String
getLine = do
    isin  <- stdin
    isrin <- IO.InputStreamReader.new isin
    brin  <- IO.BufferedReader.fromISR isrin
    line  <- brin.readLine
    return $ fromExceptionMaybe line

fromExceptionMaybe :: Exception (Maybe a) -> a
fromExceptionMaybe (Right (Just r)) = r
fromExceptionMaybe (Right _)        = error "Parse error on input"
fromExceptionMaybe (Left l)         = error l.getMessage

pure native parseInt java.lang.Integer.parseInt :: String -> Int

main _ = do
    line <- getLine
    println $ parseInt line
Update:
Frege has evolved, so we now have getLine in the standard library itself. As for read, there are conversion methods on String. The original problem now reduces to:
main _ = do
    line <- getLine
    println line.atoi
See Ingo's answer below for more details.

Update: I/O support in more recent versions of Frege
As of version 3.21.80, we have better I/O support in the standard libraries:
The runtime provides stdout and stderr (buffered, UTF-8 encoding java.io.PrintWriters wrapped around java.lang.System.out and java.lang.System.err) and stdin (a UTF-8 decoding java.io.BufferedReader wrapped around java.lang.System.in).
The functions print, println, putStr and putChar write to stdout.
getChar and getLine read from stdin and throw exceptions on end of file.
The Frege equivalents for Java classes like PrintWriter, BufferedWriter etc. are defined in module Java.IO, which is automatically imported. With this, more basic functionality is supported. For example, BufferedReader.readLine has a return type of IO (Maybe String) and signals end of file by returning Nothing, just like its Java counterpart, which returns null in such cases.
Here is a short example program that implements a basic grep:
--- A simple grep
module examples.Grep where

--- exception thrown when an invalid regular expression is compiled
data PatternSyntax = native java.util.regex.PatternSyntaxException
derive Exceptional PatternSyntax

main [] = stderr.println "Usage: java examples.Grep regex [files ...]"
main (pat:xs) = do
        rgx <- return (regforce pat)
        case xs of
            [] -> grepit rgx stdin
            fs -> mapM_ (run rgx) fs
    `catch` badpat where
        badpat :: PatternSyntax -> IO ()
        badpat pse = do
            stderr.println "The regex is not valid."
            stderr.println pse.getMessage
        run regex file = do
                rdr <- utf8Reader file
                grepit regex rdr
            `catch` fnf where
                fnf :: FileNotFoundException -> IO ()
                fnf _ = stderr.println ("Could not read " ++ file)

grepit :: Regex -> BufferedReader -> IO ()
grepit pat rdr = loop `catch` eof `finally` rdr.close
    where
        eof :: EOFException -> IO ()
        eof _ = return ()
        loop = do
            line <- rdr.getLine
            when (line ~ pat) (println line)
            loop
Because Frege is still quite new, the library support is admittedly still lacking, despite the progress that has already been made in the most basic areas, like lists and monads.
In addition, while the intent is to have a high degree of compatibility with Haskell, especially in the IO system and generally in low-level, system-related topics, there is a tension: should we rather go the Java way, or should we really try to emulate Haskell's way (which is in turn obviously influenced by what is available in the standard C/POSIX libraries)?
Anyway, the IO area is probably the most underdeveloped part of the Frege library, unfortunately. This is also because it is relatively easy to quickly write native function declarations for the handful of Java methods one needs in an ad hoc manner, instead of taking the time to develop a well thought out library.
Also, a Read class does not exist yet. As a substitute until this is fixed, the String type has functions to parse all number types (based on the Java parseXXX() methods).
(Side note: Because my days also have only 24h and I have a family, a dog and a job to care about, I would be very happy to have more contributors that help making the Frege system better.)
Regarding your code: yes, I feel it is right to do all character-based I/O through the Reader and Writer interfaces. Your example also shows that convenience functions for obtaining a standard input reader are needed, and the same holds for a standard output writer.
However, if you need to read more than one line, I'd definitely create the reader in the main function and pass it to the input-processing actions, as in the rough sketch below.
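For example, here is a rough sketch of that pattern, written in the style of the grep example above and using only the API described here (stdin as a reader, readLine returning IO (Maybe String)); countLines is a made-up name:

countLines :: BufferedReader -> Int -> IO Int
countLines rdr n = do
    line <- rdr.readLine
    case line of
        Just _  -> countLines rdr (n + 1)
        Nothing -> return n

main _ = do
    n <- countLines stdin 0
    println n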

Related

Haskell: Can a function be compiled?

Consider a simple Haskell Brainf*ck interpreter. Just look at the interpret function.
import Prelude hiding (Either(..))
import Control.Monad
import Data.Char (ord, chr)

-- function in question
interpret :: String -> IO ()
interpret strprog =
    let (prog, []) = parse strprog
    in execBF prog

interpretFile :: FilePath -> IO ()
interpretFile fp = readFile fp >>= interpret

type BF = [BFInstr]

data BFInstr = Left | Right | Inc | Dec | Input | Output | Loop BF

type Tape = ([Integer], [Integer])

emptyTape = (repeat 0, repeat 0)

execBFTape :: Tape -> BF -> IO Tape
execBFTape = foldM doBF

execBF :: BF -> IO ()
execBF prog = do
    execBFTape emptyTape prog
    return ()

doBF :: Tape -> BFInstr -> IO Tape
doBF ((x:lefts), rights) Left      = return (lefts, x:rights)
doBF (lefts, (x:rights)) Right     = return (x:lefts, rights)
doBF (left, (x:rights))  Inc       = return (left, (x+1):rights)
doBF (left, (x:rights))  Dec       = return (left, (x-1):rights)
doBF (left, (_:rights))  Input     = getChar >>= \c -> return (left, fromIntegral (ord c):rights)
doBF t@(_, (x: _))       Output    = putChar (chr (fromIntegral x)) >> return t
doBF t@(left, (x: _))    (Loop bf) = if x == 0
    then return t
    else do
        t' <- execBFTape t bf
        doBF t' (Loop bf)

simpleCommands = [('<', Left),
                  ('>', Right),
                  (',', Input),
                  ('.', Output),
                  ('+', Inc),
                  ('-', Dec)]

parse :: String -> (BF, String)
parse [] = ([], [])
parse (char:prog) = case lookup char simpleCommands of
    Just command ->
        let (rest, prog') = parse prog
        in (command : rest, prog')
    Nothing ->
        case char of
            ']' -> ([], prog)
            '[' ->
                let (loop, prog')  = parse prog
                    (rest, prog'') = parse prog'
                in (Loop loop:rest, prog'')
            _   -> parse prog
So I have a function applied like interpret "[->+<]". This gives me an IO () monadic action which executes the given program. It has the right type to be a main of some program.
Let's say I would like to have this action compiled to an executable, that is, I would like to generate an executable file with the result of interpret ... to be the main function. Of course, this executable would have to contain the GHC runtime system (for infinite lists, integer arithmetic etc.).
Questions:
It is my opinion that it is not possible at all to just take the monadic action and save it as a new file. Is this true?
How could one go about reaching a comparable solution? Do the GHC Api and hint help?
EDIT
Sorry, I oversimplified in the original question. Of course, I can just write a file like this:
main = interpret "..."
But this is not what we usually do when we try to compile something, so consider interpretFile :: FilePath -> IO () instead. Let the BF program be saved in a file (helloworld.bf).
How would I go about creating an executable which executes the contents of helloworld.bf without actually needing the file?
$ ./MyBfCompiler helloworld.bf -o helloworld
The answer is basically no.
There are many ways to construct IO values:
Built in functions like putStrLn
Monad operations like return or >>=
Once you have an IO value there are three ways to break it down:
Set main equal to the value
unsafePerformIO
As the return value of an exported C function
All of these break down into converting an IO a into an a. There is no other way to inspect it to see what it does.
Similarly the only thing you can do with functions is put them in variables or call them (or convert them to C function pointers).
There is no sane way to otherwise inspect a function.
One thing you could do, which isn’t compiling but linking, is to have your interpreter’s main function run on some external C string, build that into a static object, and then have your “compiler” make a new object containing the C string of the program and link it against what you already have.
There is a theory of partial evaluation that says that if you partially evaluate a partial evaluator applied to an interpreter applied to some input, what you get is a compiler; but GHC is not a sufficiently advanced partial evaluator.
I’m not sure whether you’re asking how you write a compiler that can take as its input a file such as helloworld.bf, or how you compile a Haskell program that runs helloworld.bf.
In the former case, you would want something a little more fleshed out than this:
import System.Environment (getArgs)

main :: IO ()
main = do
    (_:fileName:_) <- getArgs
    source <- readFile fileName
    interpret source

interpret :: String -> IO ()
interpret = undefined -- You can fill in this piddly little detail yourself.
If you want the latter, there are a few different options. First, you can store the contents of your *.bf file in a string constant (or better yet, a Text or strict ByteString), and pass that to your interpreter function. I’d be surprised if GHC is optimistic enough to fully inline and expand that call at compile time, but in principle a Haskell compiler could.
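A minimal sketch of that first approach might look like this (helloWorldSource and its contents are made up; interpret is the function from the question):

helloWorldSource :: String
helloWorldSource = "[->+<]"   -- contents of helloworld.bf pasted in as a string constant

main :: IO ()
main = interpret helloWorldSource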
The second is to turn Brainfuck into a domain-specific language with operators you define, so that you can actually write something like
interpret [^<,^+,^>,^.]
If you define (^<) and the other operators, the Brainfuck commands will compile to bytecode representing the Brainfuck program.
In this case, there isn’t an obvious benefit over the first approach, but with a more structured language, you can do an optimization pass, compile the source to stack-based bytecode more suitable for an interpreter to execute, or generate a more complex AST.
You might also express this idea as
interpret
(^< ^+ ^> ^.)
input
Here, if the Brainfuck commands are higher-order functions with right-to-left precedence, and interpret bf input = (bf begin) input, the Brainfuck code would simply compile to a function that the interpreter calls. This has the best chance of being turned into fast native code.
Previous Answer
In certain cases, a compiler can inline a function call (there are pragmas in GHC to tell it to do this). The compiler is also more likely to do what you want if you name the closure, such as:
main = interpret foo
In GHC, you can give the compiler a hint by adding
{-# INLINE main #-}
or even
{-# INLINE interpret #-}
You can check what code GHC generated by compiling the module with -S and looking through the source.

reading files with references to other files in haskell

I am trying to expand regular markdown with the ability to have references to other files, such that the content in the referenced files is rendered at the corresponding places in the "master" file.
But the furthest I've come is to implement
createF :: FTree -> IO String
createF Null = return ""
createF (Node f children) = ifNExists f (_id f)
    (do childStrings <- mapM createF children
        withFile (_path f) ReadMode $ \handle -> do
            fc <- lines <$> hGetContents handle
            return $ merge fc childStrings)
ifNExists is just a helper that can be ignored; the real problem happens in the reading of the handle: it just returns the empty string, which I assume is due to lazy IO.
I thought that using withFile filepath ReadMode $ \handle -> {- do stuff -} hGetContents handle would be the right solution, as I've read that fcontent <- withFile filepath ReadMode hGetContents is a bad idea.
Another thing that confuses me is that the function
createFT :: File -> IO FTree
createFT f = ifNExists f Null
    (withFile (_path f) ReadMode $ \handle -> do
        let thisParse = fparse (_id f : _parents f)
        children <- rights . map (thisParse . trim) . lines <$> hGetContents handle
        c <- mapM createFT children
        return $ Node f c)
works like a charm.
So why does createF return just an empty string?
The whole project and a directory/file to test can be found on GitHub.
Here are the datatype definitions
type ID = String

data File = File {_id :: ID, _path :: FilePath, _parents :: [ID]}
    deriving (Show)

data FTree = Null
           | Node { _file     :: File
                  , _children :: [FTree]} deriving (Show)
As you suspected, lazy IO is probably the problem. Here's the (awful) rule you have to follow to use it properly without going totally nuts:
A withFile computation must not complete until all (lazy) I/O required to fully evaluate its result has been performed.
If something forces I/O after the handle is closed, you are not guaranteed to get an error, even though that would be very nice. Instead, you get completely undefined behavior.
You break this rule with return $ merge fc childStrings, because this value is returned before it's been fully evaluated. What you can do instead is something vaguely like
let retVal = merge fc childStrings
deepseq retVal $ return retVal   -- deepseq comes from Control.DeepSeq
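Spelled out a bit more, here is a rough sketch of that forcing idea (readMerged is a made-up name, unlines stands in for your merge, and force/evaluate come from Control.DeepSeq and Control.Exception):

import Control.DeepSeq (force)
import Control.Exception (evaluate)
import System.IO

readMerged :: FilePath -> [String] -> IO String
readMerged path childStrings =
    withFile path ReadMode $ \h -> do
        fc <- lines <$> hGetContents h
        -- fully evaluate the result while the handle is still open
        evaluate (force (unlines (fc ++ childStrings)))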
An arguably cleaner alternative is to put all the rest of the code that relies on those results into the withFile argument. The only real reason not to do that is if you do a bunch of other work with the results after you're finished with that file. For example, if you're processing a bunch of different files and accumulating their results, then you want to be sure to close each of them when you're done with it. If you're just reading in one file and then acting on it, you can leave it open till you're finished.
By the way, I just submitted a feature request to the GHC team to see if they might be willing to make these kinds of programs more likely to fail early with useful error messages.
Update
The feature request was accepted, and such programs are now much more likely to produce useful error messages. See What caused this "delayed read on closed handle" error? for details.
I'd strongly suggest avoiding lazy IO, as it always creates problems like this, as described in What's so bad about Lazy I/O? This is exactly your case: you need to keep the file open until it's fully read, but that would mean closing the file somewhere in pure code, at the point where the content is actually consumed.
One possibility would be to use strict ByteStrings and read files using readFile. This would also make many operations more efficient.
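For instance, a minimal sketch of the strict approach (readLinesStrict is a made-up name):

import qualified Data.ByteString.Char8 as BS

-- the whole file is read strictly and its handle is closed before we return,
-- so no lazy I/O can escape into pure code
readLinesStrict :: FilePath -> IO [String]
readLinesStrict path = map BS.unpack . BS.lines <$> BS.readFile path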
Another option would be to use one of the libraries that address the lazy IO problem (see What are the pros and cons of Enumerators vs. Conduits vs. Pipes?). These libraries allow you to separate content production from its processing or consumption. So you could have a producer that reads input files and produces a stream of some tokens, and a pure consumer (not depending on IO) that consumes the stream and produces some result. For example, in conduit-extra there is a module that converts an atto-parsec parser into a consumer.
See also Is there a better way to walk a directory tree?

Can I create a function in Haskell that will encapsulate reading data from file and returning me a simple list of data?

Consider the code below taken from a working example I've built to help me learn Haskell. This code parses a CSV file containing stock quotes downloaded from Yahoo into a nice simple list of bars with which I can then work.
My question: how can I write a function that will take a file name as its parameter and return an OHLCBarList so that the first four lines inside main can be properly encapsulated?
In other words, how can I implement (without getting all sorts of errors about IO stuff) the function whose type would be
getBarsFromFile :: Filename -> OHLCBarList
so that the grunt work that was being done in the first four lines of main can be properly encapsulated?
I've tried to do this myself but with my limited Haskell knowledge, I'm failing miserably.
import qualified Data.ByteString as BS

type Filename = String

getContentsOfFile :: Filename -> IO BS.ByteString

barParser :: Parser Bar
barParser = do
    time   <- timeParser
    char ','
    open   <- double
    char ','
    high   <- double
    char ','
    low    <- double
    char ','
    close  <- double
    char ','
    volume <- decimal
    char ','
    return $ Bar Bar1Day time open high low close volume

type OHLCBar = (UTCTime, Double, Double, Double, Double)
type OHLCBarList = [OHLCBar]

barsToBarList :: [Either String Bar] -> OHLCBarList

main :: IO ()
main = do
    contents :: C.ByteString <- getContentsOfFile "PriceData/Daily/yhoo1.csv" --PriceData/Daily/Yhoo.csv"
    let lineList :: [C.ByteString] = C.lines contents                     -- Break the contents into a list of lines
    let bars :: [Either String Bar] = map (parseOnly barParser) lineList  -- Using the attoparsec
    let ohlcBarList :: OHLCBarList = barsToBarList bars                   -- Now I have a nice simple list of tuples with which to work
    --- Now I can do simple operations like
    print $ ohlcBarList !! 0
If you really want your function to have type Filename -> OHLCBarList, it can't be done.* Reading the contents of a file is an IO operation, and Haskell's IO monad is specifically designed so that values in the IO monad can never leave. If this restriction were broken, it would (in general) mess with a lot of things. Instead of doing this, you have two options: make the type of getBarsFromFile be Filename -> IO OHLCBarList — thus essentially copying the first four lines of main — or write a function with type C.ByteString -> OHLCBarList that the output of getContentsOfFile can be piped through to encapsulate lines 2 through 4 of main.
* Technically, it can be done, but you really, really, really shouldn't even try, especially if you're new to Haskell.
Others have explained that the correct type of your function has to be Filename -> IO OHLCBarList; I'd like to try and give you some insight as to why the compiler imposes this draconian measure on you.
Imperative programming is all about managing state: "do certain operations to certain bits of memory in sequence". When they grow large, procedural programs become brittle; we need a way of limiting the scope of state changes. OO programs encapsulate state in classes but the paradigm is not fundamentally different: you can call the same method twice and get different results. The output of the method depends on the (hidden) state of the object.
Functional programming goes all the way and bans mutable state entirely. A Haskell function, when called with certain inputs, will always produce the same output. Simple examples of
pure functions are mathematical operators like + and *, or most of the list-processing functions like map. Pure functions are all about the inputs and outputs, not managing internal state.
This allows the compiler to be very smart in optimising your program (for example, it can safely collapse duplicated code for you), and helps the programmer not to make mistakes: you can't put the system in an invalid state if there is none! We like pure functions.
The exception to the rule is IO. Code that performs IO is impure by definition: you could call getLine a hundred times and never get the same result, because it depends on what the user typed. Haskell handles this using the type system: all impure functions are marred with the IO type. IO can be thought of as a dependency on the state of the real world, sort of like World -> (NewWorld, a)
To summarise: pure functions are good because they are easy to reason about; this is why Haskell makes functions pure by default. Any impure code has to be labelled as such with an IO type signature; this tells the compiler and the reader to be careful with this function. So your function which reads from a file (a fundamentally impure action) but returns a pure value can't exist.
Addendum in response to your comment
You can still write pure functions to operate on data that was obtained impurely. Consider the following straw-man:
main :: IO ()
main = do
    putStrLn "Enter the numbers you want me to process, separated by spaces"
    line <- getLine
    let numberStrings = words line
    let numbers = map read numberStrings
    putStrLn $ "The result of the calculation is " ++ (show $ foldr1 (*) numbers + 10)
Lots of code inside IO here. Let's extract some functions:
import Prelude hiding (product)  -- avoid a name clash with the product defined below

main :: IO ()
main = do
    putStrLn "Enter the numbers you want me to process, separated by spaces"
    result <- fmap processLine getLine -- fmap :: (a -> b) -> IO a -> IO b
                                       -- runs an impure result through a pure function
                                       -- without leaving IO
    putStrLn $ "The result of the calculation is " ++ result

processLine :: String -> String -- look ma, no IO!
processLine = show . calculate . readNumbers

readNumbers :: String -> [Int]
readNumbers = map read . words

calculate :: [Int] -> Int
calculate numbers = product numbers + 10

product :: [Int] -> Int
product = foldr1 (*)
I've pulled logic out of main into pure functions which are easier to read, easier for the compiler to optimise, and more reusable (and so more testable). The program as a whole still lives inside IO because the data is obtained impurely (see the last part of this answer for a more thorough treatment of this argument). Impure data can be piped through pure functions using fmap and other combinators; you should try to put as little logic in main as possible.
Your code does seem to be most of the way there; as others have suggested you could extract lines 2-4 of your main into another function.
In other words, how can I implement (without getting all sorts of errors about IO stuff) the function whose type would be
getBarsFromFile :: Filename -> OHLCBarList
so that the grunt work that was being done in the first four lines of main can be properly encapsulated?
You cannot do this without getting all sorts of errors about IO stuff because this type for getBarsFromFile misses an IO. Probably that's what the errors about IO stuff are trying to tell you. Did you try understanding and fixing the errors?
In your situation, I would start by abstracting over the second to fourth line of your main in a function:
parseBars :: ByteString -> OHLCBarList
And then I would combine this function with getContentsOfFile to get:
getBarsFromFile :: FilePath -> IO OHLCBarList
This I would call in main.
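Roughly, as a sketch only (reusing barParser, barsToBarList, getContentsOfFile, Filename and the C alias for the ByteString module from the question):

parseBars :: C.ByteString -> OHLCBarList
parseBars = barsToBarList . map (parseOnly barParser) . C.lines

getBarsFromFile :: Filename -> IO OHLCBarList
getBarsFromFile file = parseBars <$> getContentsOfFile file

main :: IO ()
main = do
    ohlcBarList <- getBarsFromFile "PriceData/Daily/yhoo1.csv"
    print $ ohlcBarList !! 0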

Haskell: Interact use causing error

I'm trying to use the interact function, but I'm having an issue with the following code:
main :: IO ()
main = interact test

test :: String -> String
test [] = show 0
test a  = show 3
I'm using EclipseFP, and after taking one input it seems like there is an error. Trying to run main again leads to:
*** Exception: <stdin>: hGetContents: illegal operation (handle is closed)
I'm not sure why this is not working, the type of test is String -> String and show is Show a => a -> String, so it seems like it should be a valid input for interact.
EDIT/UPDATE
I've tried the following and it works fine. How does the use of unlines and lines cause interact to work as expected?
main :: IO ()
main = interact respondPalindromes

respondPalindromes :: String -> String
respondPalindromes =
    unlines .
    map (\xs -> if isPal xs then "palindrome" else "not a palindrome") .
    lines

isPal :: String -> Bool
isPal xs = xs == reverse xs
GHCi and Unsafe I/O
You can reduce this problem (the exception) to:
main = getContents >> return ()
(interact calls getContents)
The problem is that stdin (getContents is really hGetContents stdin) remains evaluated in GHCi in-between calls to main. If you look up stdin, it's implemented as:
stdin :: Handle
stdin = unsafePerformIO $ ...
To see why this is a problem, you could load this into GHCi:
import System.IO.Unsafe
f :: ()
f = unsafePerformIO $ putStrLn "Hi!"
Then, in GHCi:
*Main> f
Hi!
()
*Main> f
()
Since we've used unsafePerformIO and told the compiler that f is a pure function, it thinks it doesn't need to evaluate it a second time. In the case of stdin, all of the initialization on the handle isn't run a second time, and it's still in the semi-closed state that hGetContents put it in, which causes the exception. So I think GHCi is "correct" in this case, and the problem lies in the definition of stdin, which is a practical convenience for compiled programs that will evaluate stdin just once.
Interact and Lazy I/O
As for why interact quits after a single line of input while the unlines . lines version continues, let's try reducing that as well:
main :: IO ()
main = interact (const "response\n")
If you test the above version, interact won't even wait for input before printing response. Why? Here's the source for interact (in GHC):
interact f = do s <- getContents
                putStr (f s)
getContents is lazy I/O, and since f in this case doesn't need s, nothing is read from stdin.
If you change your test program to:
main :: IO ()
main = interact test
test :: String -> String
test [] = show 0
test a = show a
you should notice different behavior. And that suggests that in your original version (test a = show 3), the compiler is smart enough to realize that it only needs enough input to determine if the string read is empty or not (because if it's not empty, it doesn't need to know what a is, it just needs to print "3"). Since the input is presumably line-buffered on a terminal, it reads up until you press the return key.

Stdin as IO Handle

This may be a stupid question, but I couldn't find an answer anywhere. I'm a Haskell newbie and I'm having trouble with I/O.
I have this structure:
data SrcFile = SrcFile (IO Handle) String
srcFileHandle :: SrcFile -> IO Handle
srcFileHandle (SrcFile handle _) = handle
srcFileLine :: SrcFile -> String
srcFileLine (SrcFile _ string) = string
Now the problem is that I have no idea how to assign stdin/stderr/stdout into it, because stdin etc. are Handles, not IO Handles. And if I make the structure have Handle attributes instead of IO Handle, then I won't be able to add any other file handles into it.
Judging from your definition of SrcFile, it seems as though you may be trying to write a C program in Haskell. Language shapes the way we think, and the good news is Haskell is a much more powerful language!
The excellent book Real World Haskell has a section on lazy I/O. Consider an excerpt:
One novel way to approach I/O is the hGetContents function. hGetContents has the type Handle -> IO String. The String it returns represents all of the data in the file given by the Handle.
In a strictly-evaluated language, using such a function is often a bad idea. It may be fine to read the entire contents of a 2KB file, but if you try to read the entire contents of a 500GB file, you are likely to crash due to lack of RAM to store all that data. In these languages, you would traditionally use mechanisms such as loops to process the file's entire data.
Here's the radical part.
But hGetContents is different. The String it returns is evaluated lazily. At the moment you call hGetContents, nothing is actually read. Data is only read from the Handle as the elements (characters) of the list are processed. As elements of the String are no longer used, Haskell's garbage collector automatically frees that memory. All of this happens completely transparently to you. And since you have what looks like—and, really, is—a pure String, you can pass it to pure (non-IO) code.
Further down is a section on readFile and writeFile that shows you how to forget about handles entirely.
For example, say you want to grab all the import lines from a source file:
module Main where

import Control.Monad (liftM, mapM_)
import Data.List (isPrefixOf)
import System.Environment (getArgs, getProgName)
import System.IO (hPutStrLn, stderr)

main :: IO ()
main = getArgs >>= go
  where go [path] = collectImports `liftM` readFile path >>= mapM_ putStrLn
        go _      = getProgName >>=
                    hPutStrLn stderr . ("Usage: " ++) . (++ " source-file")

collectImports :: String -> [String]
collectImports = filter ("import" `isPrefixOf`)
               . takeWhile (\l -> null l
                                  || "module" `isPrefixOf` l
                                  || "import" `isPrefixOf` l)
               . lines
Even though the definition of main uses readFile, the program reads only as much of the named source-file as necessary, not the whole thing! There's nothing magic going on: note that collectImports uses takeWhile to examine only those lines it needs to, rather than, say, filter, which would have to read all lines.
When fed its own source, the program outputs
import Control.Monad (liftM, mapM_)
import Data.List (isPrefixOf)
import System.Environment (getArgs, getProgName)
import System.IO (hPutStrLn, stderr)
So embrace laziness. Laziness is your friend! Enjoy the rest of the wonderful journey with Haskell.
I'm not sure what you're really attempting to do, but you can convert a Handle to an IO Handle by using the return function. So,
stdin :: Handle
return stdin :: IO Handle
In fact, return is a polymorphic function. Its type is a -> m a, where m can be IO, Maybe, [] and others. Don't confuse it with return in C - it's a normal function, not a keyword that is used to exit prematurely.
In your code, you can use record syntax. The following is equivalent and automatically declares srcFileHandle and srcFileLine as functions:
data SrcFile = SrcFile { srcFileHandle :: IO Handle,
                         srcFileLine   :: String }
I don't quite get what you're trying to achieve.
An IO a means: An interaction with the outside world that, when run, will yield an a.
It therefore doesn't make sense to store an IO Handle in a data structure. You just store the handle and you can do IO with the handle, but for storing/loading it, you have no IO interaction involved.
Hence your structure is:
data SrcFile = SrcFile Handle String
If you want to change/add/manipulate the contents, you can use an IORef which you can use like a pointer from IO code.
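For example, here is a minimal sketch of keeping a SrcFile behind an IORef (the names are only for illustration):

import Data.IORef
import System.IO (Handle, stdin)

data SrcFile = SrcFile Handle String

main :: IO ()
main = do
    ref <- newIORef (SrcFile stdin "")                            -- create the mutable cell
    modifyIORef ref (\(SrcFile h _) -> SrcFile h "current line")  -- update the line it holds
    SrcFile _ line <- readIORef ref
    putStrLn line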
