badarg exception from io:format - io

I want to write a function that takes a series of numbers separated by \n
and prints them in a list. However, I cannot make any headway with the badarg error. How can I proceed with this code? The idea is to pipe the numbers to this program, but when I pass more than one number I get this error:
exception error: bad argument
in function io:format/3
called as io:format(<0.62.0>,"~w~n",[3,2,1])
in call from erl_eval:local_func/6 (erl_eval.erl, line 564)
in call from escript:interpret/4 (escript.erl, line 788)
in call from escript:start/1 (escript.erl, line 277)
in call from init:start_em/1
in call from init:do_boot/3
Here is my code:
-module(prog).
-export([read_stdin/1]).
-export([main/0]).

read_stdin(Acc) ->
    case io:fread(standard_io, '', "~d") of
        eof -> Acc;
        {ok, Line} -> read_stdin(Line ++ Acc)
    end.

main() ->
    Data = read_stdin([]),
    io:format("~w~n", Data).

The second argument to io:format is a list of values. Even if the format string uses only one control sequence that consumes a value (~w in this case), you still need to wrap that value in a list:
io:format("~w~n", [Data]).

Related

Save io.read() file reading result to a string

So I have some code:
io.input(file)
print(io.read())
result = io.read()
print(result)
io.close(file)
and when I run this, I get
dasdasd
nil
where "dasdasd" is the content of the file. This signifies to me that the result of io.read() was not properly not saved to the string. Why is this the case? What am I missing?
You're assuming read() goes back to the beginning each time. This would require a seek() operation to be performed. https://pgl.yoyo.org/luai/i/file%3Aseek
f = io.input('filename.txt')
print(f:read())
f:seek('set') -- 'set' returns to the beginning
result = f:read()
print(result)
f:close()
Lua is not a referentially transparent programming language, and io.read() is not a pure function. If you want to use the output from a call to it multiple times, you can't just call it multiple times. Save it to a variable and use that instead (like you did anyway immediately after your first call to it).

Why doesn't UnboundLocalError occur with lists?

I have a question regarding a tutorial problem.
Write a function make_monitored that takes as input a function, f, that itself takes one input. The result returned by make_monitored is a third function, say mf, that keeps track of the number of times it has been called by maintaining an internal counter. If the input to mf is the special string "how-many-calls?", then mf returns the value of the counter. If the input is the special string "reset-count", then mf resets the counter to zero. For any other input, mf returns the result of calling f on that input and increments the counter.
I have the following solution which works, surprisingly.
def make_monitored(f):
    calls = [0]
    def helper(n):
        if n == "how-many-calls?":
            return calls[0]
        elif n == "reset-count":
            calls[0] = 0
        else:
            calls[0] += 1
            return f(n)
    return helper
I recalled reading about UnboundLocalError here: UnboundLocalError in Python
My question is: why doesn't calls[0] += 1 trigger that error? I made an assignment to a variable outside the local scope of the inner function helper, and a similar solution that uses calls.append(1) instead (with the rest of the code correspondingly becoming len(calls) and calls.clear()) also bypasses that error.

Two function calls but only one trace displayed

With GHC version 8.0.2 the following program:
import Debug.Trace
f = trace "f was called" $ (+1)
main = do
  print $ f 1
  print $ f 2
outputs:
f was called
2
3
Is this the expected behaviour? If yes, why? I expected the string f was called to be printed twice, once before 2 and once before 3.
Same result on TIO: Try it online!
EDIT
But this program:
import Debug.Trace
f n = trace ("f was called:" ++ show n) $ n + 1
main = do
  print $ f 1
  print $ f 2
outputs:
f was called:1
2
f was called:2
3
Try it online!
I suspect those behaviours have something to do with laziness, but my questions remain: is this the expected behaviour and, if yes, why?
The documentation on Hackage asserts this:
The trace function outputs the trace message given as its first
argument, before returning the second argument as its result.
I don't see it in the first example.
EDIT 2: A third example, based on @amalloy's comments:
import Debug.Trace
f n = trace "f was called" $ n + 1
main = do
  print $ f 1
  print $ f 2
outputs:
f was called
2
f was called
3
Your trace prints when defining f, not when calling it. If you want the trace to happen as part of the call, you should make sure it is not evaluated until a parameter is received:
f x = trace "f was called" $ x + 1
Also, when I run your TIO I don't see the trace appearing at all. trace is not really a reliable way to print things, because it cheats the IO model that the language is built on. The most subtle changes in evaluation order can disturb it. Of course for debugging you can use it, but as even this simple example demonstrates it is not guaranteed to help much.
In your edit, you quote the documentation of trace:
The trace function outputs the trace message given as its first
argument, before returning the second argument as its result.
And indeed this is exactly what happens in your program! When defining f,
trace "f was called" $ (+ 1)
needs to be evaluated. First, "f was called" is printed. Then, trace evaluates to, and returns, (+ 1). This is the final value of the trace expression, and therefore (+ 1) is what f is defined as. The trace has vanished, see?
It is indeed a result of laziness.
Laziness means that merely defining a value doesn't mean it will be evaluated; that will only happen if it's needed for something. If it's not needed, the code that would actually produce it doesn't "do anything". If a particular value is needed the code is run, but only the first time it would be needed; if there are other references to the same value and it is used again, those uses will just directly use the value that was produced the first time.
You have to remember that functions are values in every sense of the term; everything that applies to ordinary values also applies to functions. So your definition of f is simply writing an expression for a value, the expression's evaluation will be deferred until the value of f is actually needed, and as it's needed twice the value (function) the expression computes will be saved and reused the second time.
Let's look at it in more detail:
f = trace "f was called" $ (+1)
You're defining a value f with a simple equation (not using any syntactic sugar for writing arguments on the left-hand side of the equation, or providing cases via multiple equations). So we can simply take the right-hand side as a single expression that defines the value f. Just defining it does nothing; it sits there until you call:
print $ f 1
Now print needs its argument evaluated, so this is forcing the expression f 1. But we can't apply f to 1 without first forcing f. So we need to figure out what function the expression trace "f was called" $ (+1) evaluates to. So trace is actually called, does its unsafe IO printing and f was called appears at the terminal, and then trace returns its second argument: (+1).
So now we know what function f is: (+1). f will now be a direct reference to that function, with no need to evaluate the original code trace "f was called" $ (+1) if f is called again. Which is why the second print produces no trace output.
This case is quite different, even though it might look similar:
f n = trace ("f was called:" ++ show n) $ n + 1
Here we are using the syntactic sugar for defining functions by writing arguments on the left hand side. Let's desugar that to lambda notation to see more clearly what the actual value being bound to f is:
f = \n -> trace ("f was called:" ++ show n) $ n + 1
Here we've written a function value directly, rather than an expression that can be evaluated to result in a function. So when f needs to be evaluated before it can be called on 1, the value of f is that whole function; the trace call is inside the function instead of being the thing that is called to produce a function. So trace isn't called as part of evaluating f, it's called as part of evaluating the application f 1. If you saved the result of that (say by doing let x = f 1) and then printed it multiple times, you'd only see the one trace. But when we come to evaluate f 2, the trace call is still there inside the function that is the value of f, so when f is called again so is trace.
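To make the sharing concrete, here is a small standalone example (mine, not taken from the question's code) showing that a reused result triggers only one trace:
import Debug.Trace

f :: Int -> Int
f n = trace "f was called" $ n + 1

main :: IO ()
main = do
  let x = f 1   -- nothing printed yet: x is an unevaluated thunk
  print x       -- forces x: "f was called" appears, then 2
  print x       -- x is already evaluated and shared, so only 2 appears
  print (f 2)   -- a fresh application: "f was called" appears again, then 3
The two prints of x share a single evaluation of f 1, while f 2 is a new application whose body contains a new trace call.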

Megaparsec: macro expansion during parsing

In a small DSL, I'm parsing macro definitions, similar to C pre-processor #define directives (here is a simplistic example):
_def mymacro(a,b) = a + b / a
When the following call is encountered by the parser
c = mymacro(pow(10,2),3)
it is expanded to
c = pow(10,2) + 3 / pow(10,2)
My current approach is:
wrap the parser in a State monad
when parsing macro definitions, store them in the state, with their body unparsed (parse it as a string)
when parsing a macro call, find the definition in the state, replace the arguments in the body text, replace the call with this body and resume the parsing.
Some code from the last step:
macrocallStmt = do
  -- capture starting position and content of old input before macro call
  oldInput <- getInput
  oldPos <- getPosition
  -- parse the call
  ret <- identifier
  symbolCS "="
  i <- identifier
  args <- parens $ commaSep anyExprStr
  -- expand the macro call
  us <- get
  let inlinedCall = replaceMacroArgs i args ret us
  -- set up new input with macro call expanded
  remainder <- getInput
  let newInput = T.append inlinedCall (T.cons '\n' remainder)
  setPosition oldPos
  setInput newInput
  -- update the expanded input script
  modify (updateExpandedInput oldInput newInput)

anyExprStr = fmap praShow expression <|> fmap praShow algexpr
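(For reference, the state used in this approach is shaped roughly like the sketch below; MacroDef, UserState and the field names are illustrative simplifications rather than the actual types from my code, and the Void error component assumes a recent megaparsec.)
import Data.Void (Void)
import qualified Data.Map.Strict as M
import qualified Data.Text as T
import Control.Monad.State (State)
import Text.Megaparsec (ParsecT)

-- a stored macro definition: formal parameters plus the body kept as raw text
data MacroDef = MacroDef
  { macroParams :: [T.Text]  -- e.g. ["a", "b"]
  , macroBody   :: T.Text    -- e.g. "a + b / a", left unparsed
  }

data UserState = UserState
  { macros        :: M.Map T.Text MacroDef  -- definitions collected so far
  , expandedInput :: T.Text                 -- input with calls expanded, kept for error reporting
  }

-- the parser wrapped around State, as described above
type Parser = ParsecT Void T.Text (State UserState)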
This approach does the job decently. However, it has a number of drawbacks.
Parsing multiple times
Any valid DSL expression can be an argument of the macro call. Therefore, even though I only need their textual representation (to be substituted into the macro body), I need to parse the arguments and then convert them back to strings; simply looking for the next comma wouldn't work. Then the complete, customised macro body is parsed. So in practice, macro arguments get parsed twice (and also show-ed, which has its cost). Moreover, each call requires a new parsing of the (almost identical) body. The reason to keep the body unparsed in memory is to allow maximum flexibility: in the body, even DSL keywords could be constructed out of the macro arguments.
Error handling
Because the expanded body is inserted in front of the unconsumed input (replacing the call), the initial and final input can be quite different. In the event of a parse error, the position where the error occurred in the expanded input is available. However, when processing the error, I only have the original, not expanded, input. So the error position won't match.
That is why, in the code snippet above, I use the state to save the expanded input, so that it is available when the parser exits with an error.
This works well, but I noticed that it becomes quite costly, with new Text arrays (the input stream is Text) being allocated for the whole stream at every expansion. Perhaps keeping the expanded input in the state as String, rather than Text, would be cheaper in this case, i.e. when a middle part needs to be replaced?
The reasons for this question are:
I would appreciate suggestions / comments on the two issues described above
Can anyone suggest a better approach altogether?

Type Problems chaining CaseOf Statements with Parsec

I'm learning haskell, and my current project is writing a parser to read a text file representation of a database.
At the moment, I'm setting up the code for reading individual fields of tables. In the text file, fields look either like this:
name type flags format
or this:
name type format
This means I have to account for both the case where a flag is present and the case where it is not. I solved this in my main function like this:
main = case parse fieldsWithFlags "(test)" testLine of
  Left err -> noFlags
  Right res -> print res
  where
    noFlags = case parse fieldsWithoutFlags "(test)" testLine of
      Left err -> print err
      Right res -> print res
If I understand correctly, this says "If it's a line that doesn't have flags, try to parse it as such; otherwise, return an error." It prints the correct results for any "testLine" I throw at it, and returns errors if both options fail. However, when I try to pull this out into its own function, like this:
field :: Either ParseError Field
field = case parse fieldsWithFlags "(test)" testLine of
  Left err -> noFlags
  Right res -> return Right res
  where
    noFlags = case parse fieldsWithoutFlags "(test)" testLine of
      Left err -> return Left err
      Right res -> return Right res

main = case field of
  Left err -> print err
  Right res -> print res
GHC gives me:
haskellParsing.hs:58:26:
Couldn't match expected type `Either ParseError Field'
with actual type `b0 -> Either b0 b0'
In the expression: noFlags
In a case alternative: Left err -> noFlags
In the expression:
case parse fieldsWithFlags "(test)" testLine of {
Left err -> noFlags
Right res -> return Right res }
I've played around with this a lot, but just can't get it working. I'm sure there's a much more clear-headed way of doing this, so any suggestions would be welcome - but I also want to understand why this isn't working.
Full code is at: http://pastebin.com/ajG6BkPU
Thanks!
You don't need the returns in your case branches. Once you wrap something in Left or Right it is already an Either; since you only need an Either ParseError Field, the Left and Right do not need an extra return.
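Concretely, dropping the returns gives something like this (a sketch assuming the fieldsWithFlags, fieldsWithoutFlags, testLine and Field definitions from your pastebin):
field :: Either ParseError Field
field = case parse fieldsWithFlags "(test)" testLine of
  Left _    -> noFlags
  Right res -> Right res
  where
    noFlags = case parse fieldsWithoutFlags "(test)" testLine of
      Left err  -> Left err
      Right res -> Right res
Note that the inner case just passes the Either through unchanged, so noFlags could also be written as parse fieldsWithoutFlags "(test)" testLine.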
Also, you should be able to simplify your parseFields significantly. You can write a new parser that looks like this:
fields = try fieldsWithFlags <|> fieldsWithoutFlags
what this does is run the first one and, if it fails, backtrack and run the second one. The try is important because this is what enables the backtracking behavior. You have to backtrack because fieldsWithFlags consumes some of the input that you care about.
Now you should be able to just use fields in your main function.
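For example, assuming the same testLine, main could look roughly like this:
main :: IO ()
main = case parse fields "(test)" testLine of
  Left err  -> print err
  Right res -> print res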
Since the form without flags is almost identical to that with flags (just that the flags are missing), the alternative can be pushed down to where the flags might appear. In this way, you avoid backtracking over name and type in with-flags, just to parse them again in without-flags. We could combine the with and without fields parsers like this:
fields = do
  iName <- getFieldName
  spaces
  iType <- getDataType
  spaces
  iFlag <- option "" $ try getFlag
  spaces
  iFormat <- getFormat
  newline -- this was only present in without flags, was that intended?
  return $ Field iName iType iFlag iFormat
