Parse command line arguments - Haskell

I am trying to parse command line arguments in Haskell.
Below is some sample code:
import System.Environment
work :: [Integer] -> Int
work (s:r:t:es) = length es
main :: IO ()
main = getArgs >>= putStrLn . show . work . (map read)
I execute it with:
./test 2 10 10 [7, 3, 5, 4, 4]
The output is 5, as expected.
But if I replace length with sum and Int with Integer, the execution raises the error
test: Prelude.read: no parse
Can someone explain how to do this?

The list returned by getArgs will look like this: ["2", "10", "10", "[7,", "3,", "5,", "4,", "4]"]. The first three of those strings are valid string representations of integers, but the others are not. So when you use read on those, you'll get an error.
The reason you don't see an error when you calculate the length is that length does not have to look at the values in the list, so the reads are never evaluated.
In order to sum the values, however, they definitely do need to be evaluated, which is why you get an exception then.
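You can see the laziness directly in GHCi (an illustrative session using two of those strings):

λ> length (map read ["[7,", "3,"] :: [Integer])
2
λ> sum (map read ["[7,", "3,"] :: [Integer])
*** Exception: Prelude.read: no parse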
In order to fix your problem, you could either just change the format of the arguments to not include brackets and commas, or manually go through the arguments and remove the brackets and commas before you pass them to read.
Another alternative would be to concatenate the later arguments together, separated by spaces (so you end up with "[7, 3, 5, 4, 4]"), and then pass that as a single string to read at type [Integer].
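A minimal sketch of that last approach, assuming the bracketed list always follows three leading numbers as in your invocation:

import System.Environment

main :: IO ()
main = do
  -- the first three arguments are single numbers; the rest is the split-up list
  (_:_:_:rest) <- getArgs
  let es = read (unwords rest) :: [Integer]   -- "[7, 3, 5, 4, 4]" parses fine
  print (sum es)

Called as ./test 2 10 10 [7, 3, 5, 4, 4] this should print 23.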

Related

How can I write a parser using Parsec that only accepts unique elements?

I have recently started learning Haskell and have been trying my hand at Parsec. However, for the past couple of days I have been stuck with a problem that I have been unable to find the solution to. So what I am trying to do is write a parser that can parse a string like this:
<"apple", "pear", "pineapple", "orange">
The code that I wrote to do that is:
collection :: Parser [String]
collection = char '<' *> (string `sepBy` char ',') <* char '>'
string :: Parser String
string = char '"' *> (many (noneOf ['\"', '\r', '\n', '"'])) <* char '"'
This works fine for me as it is able to parse the string that I have defined above. Nevertheless, I would now like to enforce the rule that every element in this collection must be unique, and that is where I am having trouble. One of the first results I found when searching on the internet was this one, which suggests using the nub function. Although the problem stated in that question is not the same, it would in theory solve my problem. But what I don't understand is how I can apply this function within a Parser. I have tried adding the nub function to several parts of the code above without any success. Later I also tried doing it the following way:
collection :: Parser [String]
collection = do
  char '<'
  value <- string `sepBy` char ','
  char '>'
  return nub value
But this does not work as the type does not match what nub is expecting, which I believe is one of the problems I am struggling with. I am also not entirely sure whether nub is the right way to go. My fear is that I am going in the wrong direction and that I won't be able to solve my problem like this. Is there perhaps something I am missing? Any advice or help anyone could provide would be greatly appreciated.
The Parsec Parser type is an instance of MonadPlus, which means that we can make a parse fail (i.e. cause a parse error) whenever we want. A handy function for this is guard:
guard :: MonadPlus m => Bool -> m ()
This function takes a boolean. If it's true, it returns () and the whole computation (a parse in this case) does not fail. If it's false, the whole thing fails.
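You can see both behaviours in a simpler MonadPlus instance, Maybe (illustrative GHCi session):

λ> import Control.Monad (guard)
λ> guard True :: Maybe ()
Just ()
λ> guard False :: Maybe ()
Nothing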
So, as long as you don't care about efficiency, here's a reasonable approach: parse the whole list, check whether all the elements are unique and fail if they aren't.
To do this, the first thing we have to do is write a predicate that checks if every element of a list is unique. nub does not quite do the right thing: it returns a list with all the duplicates taken out. But if we don't care much about performance, we can use it for the check:
allUnique ls = length (nub ls) == length ls
With this predicate in hand, we can write a function unique that wraps any parser that produces a list and ensures that list is unique:
unique parser = do res <- parser
                   guard (allUnique res)
                   return res
Again, if guard is given True, it doesn't affect the rest of the parse. But if it's given False, it will cause an error.
Here's how we could use it:
λ> parse (unique collection) "<interactive>" "<\"apple\",\"pear\",\"pineapple\",\"orange\">"
Right ["apple","pear","pineapple","orange"]
λ> parse (unique collection) "<interactive>" "<\"apple\",\"pear\",\"pineapple\",\"orange\",\"apple\">"
Left "<interactive>" (line 1, column 46):unknown parse error
This does what you want. However, there's a problem: there is no error message supplied. That's not very user friendly! Happily, we can fix this using <?>. This is an operator provided by Parsec that lets us set the error message of a parser.
unique parser = do res <- parser
                   guard (allUnique res) <?> "unique elements"
                   return res
Ahhh, much better:
λ> parse (unique collection) "<interactive>" "<\"apple\",\"pear\",\"pineapple\",\"orange\",\"apple\">"
Left "<interactive>" (line 1, column 46):
expecting unique elements
All this works but, again, it isn't efficient: it parses the whole list before realizing the elements aren't unique, and nub takes quadratic time. Still, it's probably more than good enough for parsing small to medium-sized files, i.e. most things written by hand rather than autogenerated.
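For reference, here is everything assembled into one self-contained sketch. The module and import choices are mine, and I renamed the element parser to str so it doesn't clash with Parsec's own string:

import Control.Monad (guard)
import Data.List (nub)
import Text.Parsec
import Text.Parsec.String (Parser)

-- one quoted element, e.g. "apple"
str :: Parser String
str = char '"' *> many (noneOf "\"\r\n") <* char '"'

-- the whole <...> collection
collection :: Parser [String]
collection = char '<' *> (str `sepBy` char ',') <* char '>'

allUnique :: Eq a => [a] -> Bool
allUnique ls = length (nub ls) == length ls

-- wrap any list-producing parser and fail unless its result is unique
unique :: Eq a => Parser [a] -> Parser [a]
unique parser = do
  res <- parser
  guard (allUnique res) <?> "unique elements"
  return res

main :: IO ()
main = print (parse (unique collection) "<interactive>"
                    "<\"apple\",\"pear\",\"pineapple\",\"orange\">")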

Haskell, make single string from integer set?

I'd greatly appreciate it if you could tell me how to make a single string from a range between two ints. For [5..10] I would need to get "5678910". And then I'd have to calculate how many of each digit (zeroes, ones ... nines) there are in the string.
For example: if I have the range [1..10] I'd need to print out
1 2 1 1 1 1 1 1 1 1
For now I only have a function that counts occurrences of an element in a string.
`countOfElem elem list = length $ filter (\x -> x == elem) list`
But the part about how to construct such a string is bugging me; or maybe there is an easier way? Thank you.
I tried something like this, but it wouldn't work.
let intList = map (read::Int->String) [15..22]
Well... the purpose of read is to parse strings into read-able values. Hence it has the type signature String -> a, which obviously doesn't unify with Int -> String. What you want here is the inverse[1] of read; it's called show.
Indeed map show [15..22] gives almost the result you asked for (the numbers as decimal-encoded strings), but still with each number as a separate list element, i.e. of type [String], while you want just a single String. Well, how about asking Hoogle? It gives the function you need as the fifth hit: concat.
If you want to get fancy, you can then combine the map and concat stages: both the concatMap function and the >>= operator do that. The most compact way to achieve the result is [15..22] >>= show.
[1] show is only the right inverse of read, to be precise.
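Putting the pieces together with the countOfElem from the question, a small sketch (the function names here are my own):

digitString :: Int -> Int -> String
digitString lo hi = concatMap show [lo .. hi]

countOfElem :: Eq a => a -> [a] -> Int
countOfElem e = length . filter (== e)

main :: IO ()
main = do
  let s = digitString 1 10                 -- "12345678910"
  putStrLn (unwords [show (countOfElem d s) | d <- ['0' .. '9']])
  -- prints: 1 2 1 1 1 1 1 1 1 1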

How to include an expression in the name of a file in Maxima

I have a Maxima program that does some algebra and then writes some results to an external file. How do I include calculated values, or even small expressions, in the name of the file?
An MWE would be the following:
N:3;
f: erf(x);
tay: taylor(f,x,0,N);
with_stdout("taylor.txt", fortran(tay));
But this example names the file taylor.txt. I wanted something that named the file taylor_N3_f_erf.txt or something like that. I have tried several syntaxes but nothing worked.
Also, I know Maxima is programmed in Lisp and I learned the syntax for concatenating strings in Lisp, but I haven't figured out how to use that in Maxima.
Thank you very much.
Here's what I came up with. It took some playing around with argument quoting and evaluation in functions, but I think it works now.
(%i2) bar (name_base, name_extension, ['vars]) := sconcat (name_base, foo(vars), ".", name_extension) $
(%i3) foo(l) := apply (sconcat, join (makelist ("_", 2 * length (l)), join (l, map (string, map (ev, l))))) $
(%i4) [a, b, c] : [123, '(x + 1), '(y/2)];
(%o4) [123, x + 1, y/2]
(%i5) bar ("foobar", "txt", a, b, c);
(%o5) foobar_a_123_b_x+1_c_y/2.txt
(%i6) myname : bar ("baz", "quux", a, b);
(%o6) baz_a_123_b_x+1.quux
(%i7) with_stdout (myname, print ("HELLO WORLD"));
(%o7) HELLO WORLD
(%i8) printfile ("baz_a_123_b_x+1.quux");
HELLO WORLD
(%o8) baz_a_123_b_x+1.quux
Note that sconcat concatenates strings and string produces a string representation of an expression.
Division expressions could cause trouble, since / indicates a directory in a file name ... maybe you'll have to substitute for those or any other disallowed characters. See ssubst.
Note that with_stdout evaluates its first argument, so if you have a variable e.g. myname then the value of myname is the name of the output file.
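Applied to the mwe from the question (with N, f and tay as defined there), that would look roughly like this; treat it as an untested sketch, since the exact name depends on how string renders the expressions:

fname : bar ("taylor", "txt", N, f) $   /* something like "taylor_N_3_f_erf(x).txt" */
with_stdout (fname, fortran (tay)) $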

Parsing an input file in Haskell

Is there any fast way in Haskell to turn an input file like the one below into corresponding types? For example, a function that takes a string and produces a list of Ints? Or do I need to read it manually using getLine and parse each string?
10.
10.
[4, 3, 2, 1].
[(5,8,'~'), (6,4,'*'), (7,10,'~'), (8,2,'o')].
[4,0,9,4,7,5,7,4,6,4].
[4,10,0,6,6,5,6,5,6,2].
Yes, the read function.
Once you read in the file with readFile for example, you can read each line to convert it to the type you want. You'll have to get rid of the periods first, though. So for example:
main = do
  text <- readFile "test.txt"
  let cases = lines text
      -- get rid of the periods at the end of each line
      strs = map init cases
      -- read the last line as a list of Ints
      lastLine = read (last strs) :: [Int]
  print (map (+5) lastLine)
This will take your example file, read in a list of Ints from the last line, then add 5 to every element and print the result.
If every line were the same type, you could just map read over all the lines to get all of them. If there are different types, like in your example, you'd have to put in some logic to figure out what type is on each line, and then call an appropriate function to deal with that type.
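For instance, if every line held a list of Ints, the uniform case could look like this (a sketch; nums.txt is a hypothetical file whose lines all end in a period, as in your example):

main :: IO ()
main = do
  text <- readFile "nums.txt"
  let rows = map (read . init) (lines text) :: [[Int]]   -- init drops the trailing period
  print rows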
To build on Jeff Burka's answer, here's the specific code you would use for your particular file:
main = do
  [l1, l2, l3, l4, l5, l6] <- fmap (map init . lines) $ readFile "myFile.txt"
  let myVal :: (Int, Int, [Int], [(Int, Int, Char)], [Int], [Int])
      myVal = (read l1, read l2, read l3, read l4, read l5, read l6)
  print myVal
This will print out the parsed tuple.
The init part is to get rid of the trailing period you have at the end of each line.

Haskell: Delimit a string by chosen sub-strings and whitespace

I'm still new to Haskell, so apologies if there is an obvious answer to this...
I would like to make a function that splits up all of the following lists of strings, i.e. [String]:
["int x = 1", "y := x + 123"]
["int x= 1", "y:= x+123"]
["int x=1", "y:=x+123"]
All into the same list of token lists, i.e. [[String]]:
[["int", "x", "=", "1"], ["y", ":=", "x", "+", "123"]]
You can use map words.lines for the first [String].
But I do not know any really neat way to also handle the others, where you would need the various sub-strings "=", ":=", "+" etc. to break up the main string.
Thank you for taking the time to enlighten me on Haskell :-)
The Prelude comes with a little-known handy function called lex, which is a lexer for Haskell tokens, and those happen to match the form you need.
lex :: String -> [(String,String)]
What a weird type though! The list is there for interfacing with a standard type of parser, but I'm pretty sure lex always returns either 1 or 0 elements (0 indicating a parse failure). The tuple is (token-lexed, rest-of-input), so lex only pulls off one token. So a simple way to lex a whole string would be:
lexStr :: String -> [String]
lexStr "" = []
lexStr s =
  case lex s of
    [(tok, rest)] -> tok : lexStr rest
    []            -> error "Failed lex"
To appease the pedants, this code is in terrible form. An explicit call to error instead of returning a reasonable error using Maybe, assuming lex only returns 1 or 0 elements, etc. The code that does this reliably is about the same length, but is significantly more abstract, so I spared your beginner eyes.
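Applied to the strings from the question (assuming the lexStr above), a quick usage sketch:

tokenize :: [String] -> [[String]]
tokenize = map lexStr

-- tokenize ["int x = 1", "y := x + 123"]
--   == [["int","x","=","1"],["y",":=","x","+","123"]]
-- tokenize ["int x=1", "y:=x+123"] gives the same result,
-- because lex skips whitespace and lexes operators with maximal munch.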
I would take a look at parsec and build a simple grammar for parsing your strings.
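A rough sketch of that idea with Parsec (the operator character set below is just guessed from your examples):

import Text.Parsec
import Text.Parsec.String (Parser)

-- a token is either a run of alphanumeric characters or a run of operator characters
tok :: Parser String
tok = many1 alphaNum <|> many1 (oneOf ":=+-*/")

toks :: Parser [String]
toks = spaces *> many (tok <* spaces)

-- parse toks "" "y:=x+123"  ==  Right ["y",":=","x","+","123"]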
How about using words? :)
words :: String -> [String]
and words won't care about extra whitespace:
words "Hello World"
= words "Hello World"
= ["Hello", "World"]
