Cabal package difference between readPackageDescription and parsePackageDescription - haskell

The Haskell package Cabal-1.24.2 has the module Distribution.PackageDescription.Parse.
The module has two functions: readPackageDescription and parsePackageDescription.
When I run this in ghci:
let d = readPackageDescription normal "C:\\somefile.cabal"
I get a parsed GenericPackageDescription.
But when I run this in ghci:
content <- readFile "C:\\somefile.cabal"
let d = parsePackageDescription content
I get a parse error:
ParseFailed (FromString "Plain fields are not allowed in between stanzas: F 2 \"version\" \"0.1.0.0\"" (Just 2))
The example file was generated with cabal init.

parsePackageDescription expects the file contents themselves to be passed to it, not the file path they are stored at. You'll want to readFile it first... though beware of file encoding issues: http://www.snoyman.com/blog/2016/12/beware-of-readfile
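For reference, here is a minimal sketch of driving parsePackageDescription by hand (this assumes ParseResult(..) is re-exported by Distribution.PackageDescription.Parse in Cabal-1.24; the filter on '\r' is only a precaution against Windows CRLF line endings, which can confuse the parser):

import Distribution.PackageDescription (GenericPackageDescription)
import Distribution.PackageDescription.Parse (parsePackageDescription, ParseResult(..))

parseCabalFile :: FilePath -> IO (Either String GenericPackageDescription)
parseCabalFile path = do
    contents <- readFile path
    -- drop carriage returns left over from Windows line endings
    let cleaned = filter (/= '\r') contents
    return $ case parsePackageDescription cleaned of
        ParseOk _warnings gpd -> Right gpd
        ParseFailed err       -> Left (show err)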

Related

Haskell: simplehttp appending "%0D"?

I am using simpleHttp to query a web page, e.g. with webLink = "www.example.com/" and number = 257 (number is read from a file).
res <- simpleHttp $ webLink ++ number
It works fine on Windows, but on Mac it throws a 404 error and shows the path as "www.example.com/257%0D".
I have no idea where this "%0D" is coming from, because printing number gives me 257. I have tried filtering "%0D" out as well, like below, but the Mac still returns a 404 because of the %0D in the path... Please suggest.
res <- simpleHttp $ filter (not . (`elem` "%0D")) (webLink ++ number)
The 0x0D (carriage return) character is part of the newline sequence on Windows but not on Mac. You are probably reading a line from your Windows-encoded file that ends with a Windows newline, which your Mac doesn't understand without a little help from you.
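A minimal sketch of the fix under that assumption (the file name number.txt and the example URL are placeholders): strip the carriage return, and any other line-ending characters, from the value read from the file before building the URL.

import Network.HTTP.Conduit (simpleHttp)
import qualified Data.ByteString.Lazy.Char8 as L

main :: IO ()
main = do
    number <- readFile "number.txt"                  -- on a Windows-encoded file this may be "257\r\n"
    let webLink = "http://www.example.com/"
        cleaned = filter (`notElem` "\r\n") number   -- drop CR and LF explicitly
    res <- simpleHttp (webLink ++ cleaned)
    L.putStrLn res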

Problems with Character Encoding Using Haskell's Text.Pandoc

I want to parse a LaTeX file using Pandoc and output the text, like this:
import qualified Text.Pandoc as P
import Text.Pandoc.Error (handleError)
tex2Str = do
    file <- readFile "test.tex"
    let p = handleError $ P.readLaTeX P.def file
    writeFile "A.txt" $ P.writePlain P.def p
    writeFile "B.txt" file
While the encoding in file B.txt seems to be right (i.e. UTF-8), the encoding in file A.txt is not correct.
Here the respective extracts of the files:
A.txt:
...
Der _Crawler_ läuft hierbei über die Dokumentenbasis
...
B.txt:
...
\usepackage[utf8]{inputenc}
...
Der \emph{Crawler} läuft hierbei über die Dokumentenbasis
...
Does anyone know how to fix this? Why does Pandoc use the wrong encoding (I thought it used UTF-8 by default)?
Update:
I got a (partial) solution: using the readFile and writeFile functions from Text.Pandoc.UTF8 seems to fix some of the problems, i.e.
import qualified Text.Pandoc as P
import Text.Pandoc.Error (handleError)
import qualified Text.Pandoc.UTF8 as UTF (readFile, writeFile)
tex2Str = do
    file <- UTF.readFile "test.tex"
    let p = handleError $ P.readLaTeX P.def file
    UTF.writeFile "A.txt" $ P.writePlain P.def p
    UTF.writeFile "B.txt" file
However, I still haven't figured out what the actual problem was, since both Prelude.readFile and Prelude.writeFile seem to be UTF-8 aware...
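One plausible explanation (an assumption, not something confirmed in the question): Prelude.readFile and Prelude.writeFile encode and decode using the system locale rather than UTF-8, so on a non-UTF-8 locale they can silently mangle the umlauts, whereas Text.Pandoc.UTF8 always uses UTF-8. A minimal sketch that instead forces UTF-8 for ordinary Handle I/O:

import GHC.IO.Encoding (setLocaleEncoding, utf8)
import qualified Text.Pandoc as P
import Text.Pandoc.Error (handleError)

main :: IO ()
main = do
    setLocaleEncoding utf8                     -- Prelude.readFile/writeFile now use UTF-8
    file <- readFile "test.tex"
    let p = handleError $ P.readLaTeX P.def file
    writeFile "A.txt" $ P.writePlain P.def p
    writeFile "B.txt" file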

NodeJS Usage require('../');

I am in the process of learning Node.js and stumbled across this line of code:
var irsdk = require('../');
I cannot figure out what is being loaded. I can see where the variable is used and its functions are called.
I understand how to use the require statement when loading a particular file.
If anyone could shed some light it would be appreciated.
Thanks.
From Node's documentation on Modules
require(X) from module at path Y
1. If X is a core module,
a. return the core module
b. STOP
2. If X begins with './' or '/' or '../'
a. LOAD_AS_FILE(Y + X)
b. LOAD_AS_DIRECTORY(Y + X)
3. LOAD_NODE_MODULES(X, dirname(Y))
4. THROW "not found"
LOAD_AS_FILE(X)
1. If X is a file, load X as JavaScript text. STOP
2. If X.js is a file, load X.js as JavaScript text. STOP
3. If X.json is a file, parse X.json to a JavaScript Object. STOP
4. If X.node is a file, load X.node as binary addon. STOP
LOAD_AS_DIRECTORY(X)
1. If X/package.json is a file,
a. Parse X/package.json, and look for "main" field.
b. let M = X + (json main field)
c. LOAD_AS_FILE(M)
2. If X/index.js is a file, load X/index.js as JavaScript text. STOP
3. If X/index.json is a file, parse X/index.json to a JavaScript object. STOP
4. If X/index.node is a file, load X/index.node as binary addon. STOP
LOAD_NODE_MODULES(X, START)
1. let DIRS=NODE_MODULES_PATHS(START)
2. for each DIR in DIRS:
a. LOAD_AS_FILE(DIR/X)
b. LOAD_AS_DIRECTORY(DIR/X)
NODE_MODULES_PATHS(START)
1. let PARTS = path split(START)
2. let I = count of PARTS - 1
3. let DIRS = []
4. while I >= 0,
a. if PARTS[I] = "node_modules" CONTINUE
b. DIR = path join(PARTS[0 .. I] + "node_modules")
c. DIRS = DIRS + DIR
d. let I = I - 1
5. return DIRS
require('../') runs the LOAD_AS_DIRECTORY(X) steps for the parent directory.
To keep it simple: if you specify a directory rather than a file, Node loads the "main" entry from that directory's package.json if there is one, and falls back to its index.js otherwise.
In this case, requiring '../' loads the index (or "main" entry) of the parent directory.
It could be the InRule SDK for JavaScript (http://www.inrule.com/products/inrule-for-javascript/), which can be used to separate your business logic from your application logic.
Or it could be the npm 'node-irsdk' package, which appears to be a telemetry package of some sort that builds on the existing "utils" module (https://www.npmjs.com/package/node-irsdk).
Either way, you can get more information about it by simply logging the variable to the console:
console.log(irsdk);
// or
console.dir(irsdk);
// both must be called AFTER the line var irsdk = req..... has run

"Could not find module ‘Test.HUnit’" Error when executing Haskell's unittest (HUnit) in CodeRunner

I have simple unit test code for Haskell's HUnit. I use Mac OS X 10.10, and I installed HUnit with cabal install hunit.
module TestSafePrelude where

import SafePrelude (safeHead)
import Test.HUnit

testSafeHeadForEmptyList :: Test
testSafeHeadForEmptyList =
    TestCase $ assertEqual "Should return Nothing for empty list"
                           Nothing (safeHead ([] :: [Int]))

testSafeHeadForNonEmptyList :: Test
testSafeHeadForNonEmptyList =
    TestCase $ assertEqual "Should return (Just head) for non empty list"
                           (Just 1) (safeHead ([1] :: [Int]))

main :: IO Counts
main = runTestTT $ TestList [testSafeHeadForEmptyList, testSafeHeadForNonEmptyList]
I can execute it with runhaskell TestSafePrelude.hs to get the results:
Cases: 2 Tried: 2 Errors: 0 Failures: 0
Counts {cases = 2, tried = 2, errors = 0, failures = 0}
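(SafePrelude itself is not shown in the question; given the tests above, a minimal sketch of what it presumably contains:)

module SafePrelude (safeHead) where

-- a head that returns Nothing instead of crashing on an empty list
safeHead :: [a] -> Maybe a
safeHead []    = Nothing
safeHead (x:_) = Just x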
However, when I run it in CodeRunner, I get an error message saying the HUnit module cannot be found.
CodeRunner launches the test in a different shell environment, and this seems to be the issue. If so, which environment variables need to be added? If not, what might be causing the problem?
I also found that ghc-pkg list run from CodeRunner does not include the package databases under ~/.ghc, which contain HUnit:
/usr/local/Cellar/ghc/7.8.3/lib/ghc-7.8.3/package.conf.d:
Cabal-1.18.1.4
array-0.5.0.0
...
xhtml-3000.2.1
This is the result when it is executed in the shell:
/usr/local/Cellar/ghc/7.8.3/lib/ghc-7.8.3/package.conf.d
Cabal-1.18.1.4
array-0.5.0.0
...
/Users/smcho/.ghc/x86_64-darwin-7.8.3/package.conf.d
...
HUnit-1.2.5.2
...
zlib-0.5.4.2
I added both ~/.cabal and ~/.ghc to the path, but it doesn't help.
The problem was a changed $HOME. CodeRunner was configured with a different $HOME, but Haskell looks in $HOME/.cabal and $HOME/.ghc for installed packages.
After resetting $HOME to the correct location, everything works fine.

Snap: Download files stored on a database

I need to download files stored in a database.
I believe Snap has file utilities that help with file upload and download, but they only deal with files resident on a filesystem.
I was given advice on the Snap IRC channel to use the writeBS function to push the data to the browser.
I was also told to modify the HTTP headers so that the browser treats the data as a file and shows the save/open dialog. I got to play with it today and have more questions.
I have this so far:
getFirst :: AppHandler ()
getFirst = do
    modifyResponse $ setContentType "application/octet-stream"  -- modify the HTTP header
    result <- eitherWithDB $ fetch (select [] "files")          -- get the file from the db
    let doc             = either (const []) id result           -- get the result out of the Either
        fileName        = at "name" doc                         -- get the name of the file
        Binary fileData = at "blob" doc                         -- get the file data
    writeBS fileData
Can you please tell me if this is the correct way of doing it?
It works, but a few things are missing:
How do I pass the file name and file type to the browser?
How do I set the Content-Disposition?
So I need to be able to set something like this:
Content-Disposition: attachment; filename=document.pdf
Content-Type: application/pdf
How can I do this?
You can set an arbitrary response header using modifyResponse in combination with setHeader (both from Snap.Core), like this:
modifyResponse $ setHeader "Content-disposition" "attachment; filename=document.pdf"
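Putting it together with the handler above, a minimal sketch (serveFile is a made-up name, and the file name is assumed to come from the document fetched from the database):

{-# LANGUAGE OverloadedStrings #-}
import Data.Monoid ((<>))
import qualified Data.ByteString.Char8 as BS
import Snap.Core (MonadSnap, modifyResponse, setContentType, setHeader, writeBS)

-- send the raw bytes with headers that make the browser offer a download
serveFile :: MonadSnap m => BS.ByteString -> BS.ByteString -> m ()
serveFile fileName fileData = do
    modifyResponse $ setContentType "application/pdf"
    modifyResponse $ setHeader "Content-Disposition"
                               ("attachment; filename=" <> fileName)
    writeBS fileData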
