I was parsing a C-like language with arrays and structs. Following C operator precedence, . and [] are given equal precedence.
opTable :: [[Operator Parser Expr]]
opTable = [[ InfixL $ Access <$ symbol "." , opSubscript]]
opSubscript = Postfix $ foldr1 (.) <$> some singleIndex
singleIndex = do
  index <- brackets expr
  return $ \l -> ArrayIndex l index
When parsing
Struct S {
int[3] a;
}
Struct S s;
s.a[1]
it yielded
Access (Var "s") (ArrayIndex (Var "a") 1)
instead of
ArrayIndex (Access (Var "s") (Var "a")) 1
Why? Is it because [] is not parsed as InfixL?
Update:
After changing it to
opTable :: [[Operator Parser Expr]]
opTable = [[ Postfix $ (\ident expr -> Access expr ident) <$ symbol "." <*> identifier, opSubscript]]
I got another error
s.a[1]
| ^
unexpected '['
expecting ')', '_', alphanumeric character, or operator
The documentation for makeExprParser from parser-combinators is terrible with respect to prefix and postfix operators.
First, it fails to explain that with a mixture of prefix/postfix/infix operators at the supposed "same" precedence level, the prefix/postfix operators are always treated as higher precedence than the infix operators.
Second, when it makes the claim that "prefix and postfix operators of the same precedence can only occur once" and then gives --2 as an example for prefix operator -, it actually means that even two separate prefix operators (or two separate postfix operators) aren't allowed, so +-2 with separate prefix operators + and - isn't allowed either. What is allowed is a single prefix operator and a single postfix operator, at the same level, in which case the association is to the left, so -2! is okay (assuming - and ! are prefix and postfix operators at the same precedence level) and is parsed as (-2)!.
Oh, and third, the documentation never makes it clear that the example code for manyUnaryOp only works correctly for multiple prefix operators, and a non-obvious change is needed to get multiple postfix operators in the right order.
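To illustrate that last point, here is a sketch of the contrast (Neg and Fact are made-up constructors, and symbol is the question's lexeme helper):
-- Multiple prefix operators: the first one parsed must end up outermost, so
-- plain composition works; "--2" comes out as Neg (Neg (Lit 2)).
prefixChain :: Operator Parser Expr
prefixChain = Prefix (foldr1 (.) <$> some (Neg <$ symbol "-"))

-- Multiple postfix operators: the first one parsed must end up innermost, so
-- the composition has to be flipped; "2!!" comes out as Fact (Fact (Lit 2)).
postfixChain :: Operator Parser Expr
postfixChain = Postfix (foldr1 (flip (.)) <$> some (Fact <$ symbol "!"))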
So, your first attempt doesn't work because the postfix operator secretly has higher precedence than the infix operator. Your second attempt doesn't work because two different postfix operators at the same precedence level can't be parsed.
Your best bet is to parse a single "postfix operator" consisting of a chain of access and index operations. Note the need for flip to get the ordering right for postfix operators.
opTable :: [[Operator Parser Expr]]
opTable = [[ indexAccessChain ]]
indexAccessChain = Postfix $ foldr1 (flip (.)) <$> some (singleIndex <|> singleAccess)
singleIndex = flip ArrayIndex <$> brackets expr
singleAccess = flip Access <$> (char '.' *> identifier)
A self-contained example:
{-# OPTIONS_GHC -Wall #-}
module Operators where
import Text.Megaparsec
import Text.Megaparsec.Char
import Control.Monad.Combinators.Expr
import Data.Void
type Parser = Parsec Void String
data Expr
= Access Expr String
| ArrayIndex Expr Expr
| Var String
| Lit Int
deriving (Show)
expr :: Parser Expr
expr = makeExprParser term opTable
identifier :: Parser String
identifier = some letterChar
term :: Parser Expr
term = Var <$> identifier
   <|> Lit . read <$> some digitChar
opTable :: [[Operator Parser Expr]]
opTable = [[ indexAccessChain ]]
indexAccessChain :: Operator Parser Expr
indexAccessChain = Postfix $ foldr1 (flip (.)) <$> some (singleIndex <|> singleAccess)
singleIndex, singleAccess :: Parser (Expr -> Expr)
singleIndex = flip ArrayIndex <$> brackets expr
singleAccess = flip Access <$> (char '.' *> identifier)
brackets :: Parser a -> Parser a
brackets = between (char '[') (char ']')
main :: IO ()
main = parseTest expr "s.a[1][2][3].b.c[4][5][6]"
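For reference, running main on that input should print the fully left-nested tree (modulo line wrapping):
ArrayIndex (ArrayIndex (ArrayIndex (Access (Access (ArrayIndex (ArrayIndex (ArrayIndex (Access (Var "s") "a") (Lit 1)) (Lit 2)) (Lit 3)) "b") "c") (Lit 4)) (Lit 5)) (Lit 6)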
Related
I'm trying to add a parser for infix operators to a simple expressions parser. I have already looked at the documentation and at this question, but it seems like I am missing something.
import qualified Text.Parsec.Expr as Expr
import qualified Text.Parsec.Token as Tokens
import Text.Parsec.Language (haskell)
import Text.ParserCombinators.Parsec
import Text.Parsec
data Expr = Number Integer
          | Op Expr Expr
          | Boolean Bool

instance Show Expr where
  show (Op l r) = "(+ " ++ (show l) ++ " " ++ (show r) ++ ")"
  show (Number r) = show r
  show (Boolean b) = show b
parens = Tokens.parens haskell
reserved = Tokens.reservedOp haskell
infix_ operator func =
  Expr.Infix (spaces >> reserved operator >> spaces >> return func) Expr.AssocLeft
infixOp =
  Expr.buildExpressionParser table parser
  where
    table = [[infix_ "+" Op]]
number :: Parser Expr
number =
  do num <- many1 digit
     return $ Number $ read num
bool :: Parser Expr
bool = (string "true" >> return (Boolean True)) <|> (string "false" >> return (Boolean False))
parser = parens infixOp <|> number <|> bool
run = Text.Parsec.runParser parser () ""
This parser is able to parse expressions like 1, false, (1 + 2), (1 + false), but not 1 + 2 (it is parsed as just 1). If I try to change the parser to parens infixOp <|> infixOp <|> number <|> bool, it gets stuck.
What should I change in order to parse expressions like 1 + 2 without parentheses?
You have to run the infixOp parser at the top level like this:
run = Text.Parsec.runParser infixOp () ""
Otherwise your infix expressions can only be parsed when they occur inside parentheses.
The attempt to use parens infixOp <|> infixOp <|> number <|> bool most likely gets stuck because it loops: parser tries to parse using infixOp, which in turn tries to parse using parser, and so on...
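With that change, run "1 + 2" should yield something like Right (+ 1 2) (going by the Show instance above), and parenthesized input keeps working because infixOp's term parser still includes parens infixOp.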
These tutorials might help you get started with Parsec (they did for me):
https://wiki.haskell.org/Parsing_a_simple_imperative_language
http://dev.stephendiehl.com/fun/002_parsers.html
I am stuck at the point of converting an expression entered by the user into my own data type.
I did it using buildExpressionParser, but using a simple parser and recursion I did it as follows:
openBrace = char '('
closeBrace :: GenParser Char st Char
closeBrace = char ')'
bracketExpr = do
  spaces >> openBrace
  expr <- expressionParser
  spaces >> closeBrace
  return expr
bracketExpr will return the entered expression in my own data type.
To convert it into my data type, I did the following for negation and for the cases where the expression is a number or a variable:
expressionParser = negate1
                 <|> number
                 <|> variable
                 --<|> addition
                 <?> "simple expression"
negate1 :: Parser Expr
negate1 = do{ char '-'
            ; ds <- number
            ; return (ExprNeg ds)
            }
          <?> "negate"
variable :: Parser Expr
variable = do{ ds <- many1 (letter <|> digit)
             ; return (ExprVar ds) }
           <?> "variable"
number :: Parser Expr
number = do{ ds <- many1 digit
           ; return (ExprNum (read ds)) }
         <?> "number"
To do the same for addition I tried separating the expression using sepBy, but I am encountering several issues.
If the entered expression is 1+2
then I should get ExprAdd (ExprNum 1) (ExprNum 2)
I am unable to proceed further from here. Help would be great.
Thank you.
If you want to be writing a parser with parser combinators you need to think in terms of high-level rules first. Here's a skeleton parser in Parsec; it does not 100% meet your needs because all of the operators are same-precedence and right-associative, whereas you probably want different precedences and left-associativity. Still, this is the basic way to write a parser:
import Text.Parsec
import Text.Parsec.Char
import Data.Char (isDigit)
-- basic data type
data Expr = Op Char Expr Expr | N Integer deriving (Show)
type Parser x = Parsec String () x
-- reverse-sequenced >>, used to implement `parenthesized` and `whitespaced`
(<<) :: Monad m => m x -> m y -> m x
mx << my = mx >>= \x -> my >> return x
infixl 1 <<
parenthesized :: Parser e -> Parser e
parenthesized e = char '(' >> e << char ')'
whitespaced :: Parser e -> Parser e
whitespaced e = spaces >> e << spaces
number :: Parser Expr
number = do
  c <- oneOf "123456789" -- leading 0's can be reserved for octal/hexadecimal
  cs <- many digit
  return (N (read (c:cs)))
operator :: Parser Expr
operator = do
  e1 <- expr_no_op
  o <- whitespaced (oneOf "+*/-")
  e2 <- expression
  return (Op o e1 e2)
expr_no_op :: Parser Expr
expr_no_op = whitespaced (try number <|> parenthesized expression)
expression :: Parser Expr
expression = whitespaced (try operator <|> try number <|> parenthesized expression)
Notice that you define tokens (above, just 'number') and then combine them with a "try this <|> try that <|> otherwise..." syntax. Notice also that you have to stop operator from taking an expression as its first argument, otherwise you'll get an operator -> expression -> operator loop in the parsing. This is called "left factoring."
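If you specifically want left-associative addition that yields ExprAdd (ExprNum 1) (ExprNum 2), as in the question, a chainl1-based sketch could look like the following. It reuses the question's negate1, number, variable and bracketExpr; ExprAdd is a constructor you would have to add to your Expr type.
additionExpr :: Parser Expr
additionExpr = term `chainl1` plusOp
  where
    -- a "term" of an addition; brackets still go through the question's expressionParser
    term   = negate1 <|> number <|> variable <|> bracketExpr
    -- the operator parser returns the constructor used to combine the two sides
    plusOp = do{ spaces; char '+'; spaces; return ExprAdd }
chainl1 takes care of the left associativity, so 1+2+3 would come out as ExprAdd (ExprAdd (ExprNum 1) (ExprNum 2)) (ExprNum 3).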
I'm programming a standard math notation -> DC POSIX-compliant format converter. It takes the input string, parses it into an intermediary data type and then turns it into the output string by showing it.
This is the data type used. I have no problems with the data type -> output string conversion; it works flawlessly:
data Expression = Expression :+ Expression
                | Expression :- Expression
                | Expression :* Expression
                | Expression :/ Expression
                | Expression :^ Expression
                | Cons String

infixr 0 :+
infixr 0 :-
infixr 1 :*
infixr 1 :/
infixr 2 :^

instance Show Expression where
  show (x :+ y) = unwords [show x, show y, "+"]
  show (x :- y) = unwords [show x, show y, "-"]
  show (x :* y) = unwords [show x, show y, "*"]
  show (x :/ y) = unwords [show x, show y, "/"]
  show (x :^ y) = unwords [show x, show y, "^"]
  show (Cons y) = y
The Parsec parser part, however, refuses to comply with the defined operator precedence rules, clearly because of the way chainl1 is used in the subexpression parser definition:
expression :: Parser Expression
expression = do
  spaces
  x <- subexpression
  spaces >> eof >> return x
subexpression :: Parser Expression
subexpression = (
    (bracketed subexpression) <|>
    constant
  ) `chainl1` (
    try addition <|>
    try substraction <|>
    try multiplication <|>
    try division <|>
    try exponentiation
  )
addition = operator '+' (:+)
substraction = operator '-' (:-)
multiplication = operator '*' (:*)
division = operator '/' (:/)
exponentiation = operator '^' (:^)
operator :: Char -> (a -> a -> a) -> Parser (a -> a -> a)
operator c op = do
  spaces >> char c >> spaces
  return op
bracketed :: Parser a -> Parser a
bracketed parser = do
  char '('
  x <- parser
  char ')'
  return x
constant :: Parser Expression
constant = do
  parity <- optionMaybe $ oneOf "-+"
  constant <- many1 (digit <|> char '.')
  return (if parity == Just '-'
          then (Cons $ '_':constant)
          else Cons constant)
Is there a way of making the parser take into account the operator precedence rules without having to rewrite the entirety of my code?
Well, you don't need to rewrite your entire code, but since your subexpression parser doesn't take precedence into account at all, you have to rewrite that - substantially.
One possibility is to build it from parsers for subexpressions whose top-level operators share the same precedence:
atom :: Parser Expression
atom = bracketed subexpression <|> constant
-- highest precedence operator is exponentiation, usually that's
-- right-associative, hence I use chainr1 here
powers :: Parser Expression
powers = atom `chainr1` try exponentiation
-- a multiplicative expression is a product or quotient of powers,
-- left-associative
multis :: Parser Expression
multis = powers `chainl1` (try multiplication <|> try division)
-- a subexpression is a sum (or difference) of multiplicative expressions
subexpression :: Parser Expression
subexpression = multis `chainl1` (try addition <|> try substraction)
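With this layering, an input like 1+2*3^2 should come out as Cons "1" :+ (Cons "2" :* (Cons "3" :^ Cons "2")), which your Show instance prints as 1 2 3 2 ^ * +.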
Another option is to let the precedence and associativities be taken care of by the library and use Text.Parsec.Expr, namely buildExpressionParser:
table = [ [binary "^" (:^) AssocRight]
        , [binary "*" (:*) AssocLeft, binary "/" (:/) AssocLeft]
        , [binary "+" (:+) AssocLeft, binary "-" (:-) AssocLeft]
        ]
binary name fun assoc = Infix (do{ string name; spaces; return fun }) assoc
subexpression = buildExpressionParser table atom
(which requires that the bracketed parser and constant consume the spaces that follow the tokens they parse).
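One minimal way to arrange that (just a sketch; lexeme is a helper name of my own, and atom is the term parser from above, now wrapped so it swallows trailing whitespace):
lexeme :: Parser a -> Parser a
lexeme p = do{ x <- p; spaces; return x }

atom :: Parser Expression
atom = lexeme (bracketed subexpression <|> constant)
This handles whitespace after constants and after closing brackets; whitespace directly after an opening bracket would need the same treatment inside bracketed.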
After having read Anthony's response on a style-related parser question, I was trying to convince myself that writing monadic parsers can still be rather compact.
So instead of
reference :: Parser Transc
reference = try $ do string "#{"
                     a <- number
                     char ','
                     b <- number
                     char ','
                     c <- number
                     char '}'
                     return $ Outside (a,b,c)
We can simply have:
reference3 :: Parser Transc
reference3 = liftM3 (((Outside .).) . (,,))
(string "#{" >> number <<! char ',')
number
(char ',' >> number <<! char '}') where
(<<!) = liftM2 const
Which is very similar to applicative version provided by Anthony:
reference2 :: Parser Transc
reference2 = ((Outside .) .) . (,,)
<$> (string "#{" *> number2 <* char ',')
<*> number2
<*> (char ',' *> number2 <* char '}')
...except for the <<! operator which is conceptually similar to <* which is defined as liftA2 const meaning "sequence but discard value and use value provided to the left".
Of course, << would have been a bad name for liftM2 const: it would have suggested that << is equivalent to flip >>, if we follow the same logic as >>= and =<<.
I can't find "liftM2 const" defined under a single name. Is this because it is not that useful?
I don't quite see the problem. Every monad is also an applicative functor, so you can simply use (<*) (and (*>)) in monadic expressions as well.
(At the time of this answer (year 2011), Applicative was not a superclass of Monad, so it may have been necessary to add a corresponding class instance.)
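For instance, a sketch mixing the two styles (reusing the question's Parser, Transc, Outside and number):
reference4 :: Parser Transc
reference4 = do a <- string "#{" *> number <* char ','
                b <- number
                c <- char ',' *> number <* char '}'
                return (Outside (a, b, c))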
Around 6 years ago, I benchmarked my own parser combinators in OCaml and found that they were ~5× slower than the parser generators on offer at the time. I recently revisited this subject and benchmarked Haskell's Parsec vs a simple hand-rolled precedence climbing parser written in F# and was surprised to find the F# to be 25× faster than the Haskell.
Here's the Haskell code I used to read a large mathematical expression from file, parse and evaluate it:
import Control.Applicative
import Text.Parsec hiding ((<|>))
expr = chainl1 term ((+) <$ char '+' <|> (-) <$ char '-')
term = chainl1 fact ((*) <$ char '*' <|> div <$ char '/')
fact = read <$> many1 digit <|> char '(' *> expr <* char ')'
eval :: String -> Int
eval = either (error . show) id . parse expr "" . filter (/= ' ')
main :: IO ()
main = do
  file <- readFile "expr"
  putStr $ show $ eval file
  putStr "\n"
and here's my self-contained precedence climbing parser in F#:
let rec (|Expr|) = function
  | P(f, xs) -> Expr(loop (' ', f, xs))
  | xs -> invalidArg "Expr" (sprintf "%A" xs)
and loop = function
  | ' ' as oop, f, ('+' | '-' as op)::P(g, xs)
  | (' ' | '+' | '-' as oop), f, ('*' | '/' as op)::P(g, xs) ->
      let h, xs = loop (op, g, xs)
      match op with
      | '+' -> (+) | '-' -> (-) | '*' -> (*) | '/' | _ -> (/)
      |> fun op -> loop (oop, op f h, xs)
  | _, f, xs -> f, xs
and (|P|_|) = function
  | '('::Expr(f, ')'::xs) -> Some(P(f, xs))
  | c::_ as xs when '0' <= c && c <= '9' ->
      let rec loop n = function
        | c2::xs when '0' <= c2 && c2 <= '9' -> loop (10*n + int(string c2)) xs
        | xs -> Some(P(n, xs))
      loop 0 xs
  | _ -> None
My impression is that even state-of-the-art parser combinators waste a lot of time backtracking. Is that correct? If so, is it possible to write parser combinators that generate state machines to obtain competitive performance, or is it necessary to use code generation?
EDIT:
Here's the OCaml script I used to generate a ~2Mb expression for benchmarking:
open Printf
let rec f ff n =
  if n=0 then fprintf ff "1" else
    fprintf ff "%a+%a*(%a-%a)" f (n-1) f (n-1) f (n-1) f (n-1)
let () =
  let n = try int_of_string Sys.argv.(1) with _ -> 3 in
  fprintf stdout "%a\n" f n
I've come up with a Haskell solution that is 30× faster than the Haskell solution you posted (with my concocted test expression).
Major changes:
Change Parsec/String to Attoparsec/ByteString
In the fact function, change read & many1 digit to decimal
Made the chainl1 recursion strict (remove $! for the lazier version).
I tried to keep everything else you had as similar as possible.
import Control.Applicative
import Data.Attoparsec
import Data.Attoparsec.Char8
import qualified Data.ByteString.Char8 as B
expr :: Parser Int
expr = chainl1 term ((+) <$ char '+' <|> (-) <$ char '-')
term :: Parser Int
term = chainl1 fact ((*) <$ char '*' <|> div <$ char '/')
fact :: Parser Int
fact = decimal <|> char '(' *> expr <* char ')'
eval :: B.ByteString -> Int
eval = either (error . show) id . eitherResult . parse expr . B.filter (/= ' ')
chainl1 :: (Monad f, Alternative f) => f a -> f (a -> a -> a) -> f a
chainl1 p op = p >>= rest where
  rest x = do f <- op
              y <- p
              rest $! (f x y)
           <|> pure x
main :: IO ()
main = B.readFile "expr" >>= (print . eval)
I guess what I concluded from this is that the majority of the slowdown for the parser combinator was that it was sitting on an inefficient base, not that it was a parser combinator, per se.
I imagine with more time and profiling this could go faster, as I stopped when I went past the 25× mark.
I don't know if this would be faster than the precedence climbing parser ported to Haskell. Maybe that would be an interesting test?
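For comparison, a minimal precedence-climbing parser over a plain String might look something like the following. This is my own sketch, not benchmarked, and it assumes the same grammar as above (integers, + - * /, parentheses, whitespace already stripped):
import Data.Char (isDigit)

-- e.g. precClimb "1+2*(3-4)" == -1
precClimb :: String -> Int
precClimb s = case parseExpr 0 s of
  (v, "")   -> v
  (_, rest) -> error ("unconsumed input: " ++ take 10 rest)

-- Parse an expression whose binary operators all have precedence >= minPrec.
parseExpr :: Int -> String -> (Int, String)
parseExpr minPrec s0 = go (parseAtom s0)
  where
    go (lhs, c:rest)
      | Just (prec, f) <- opInfo c
      , prec >= minPrec =
          -- climbing with prec + 1 makes the operators left-associative
          let (rhs, rest') = parseExpr (prec + 1) rest
          in go (f lhs rhs, rest')
    go r = r

parseAtom :: String -> (Int, String)
parseAtom ('(':s) = case parseExpr 0 s of
  (v, ')':rest) -> (v, rest)
  _             -> error "expected ')'"
parseAtom s = let (ds, rest) = span isDigit s in (read ds, rest)

opInfo :: Char -> Maybe (Int, Int -> Int -> Int)
opInfo '+' = Just (1, (+))
opInfo '-' = Just (1, (-))
opInfo '*' = Just (2, (*))
opInfo '/' = Just (2, div)
opInfo _   = Nothing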
I'm currently working on the next version of FParsec (v. 0.9), which will in many situations improve performance by up to a factor of 2 relative to the current version.
[Update: FParsec 0.9 has been released, see http://www.quanttec.com/fparsec ]
I've tested Jon's F# parser implementation against two FParsec implementations. The first FParsec parser is a direct translation of djahandarie's parser. The second one uses FParsec's embeddable operator precedence component. As the input I used a string generated with Jon's OCaml script with parameter 10, which gives me an input size of about 2.66MB. All parsers were compiled in release mode and were run on the 32-bit .NET 4 CLR. I only measured the pure parsing time and didn't include startup time or the time needed for constructing the input string (for the FParsec parsers) or the char list (Jon's parser).
I measured the following numbers (updated numbers for v. 0.9 in parens):
Jon's hand-rolled parser: ~230ms
FParsec parser #1: ~270ms (~235ms)
FParsec parser #2: ~110ms (~102ms)
In light of these numbers, I'd say that parser combinators can definitely offer competitive performance, at least for this particular problem, especially if you take into account that FParsec
automatically generates highly readable error messages,
supports very large files as input (with arbitrary backtracking), and
comes with a declarative, runtime-configurable operator-precedence parser module.
Here's the code for the two FParsec implementations:
Parser #1 (Translation of djahandarie's parser):
open FParsec
let str s = pstring s
let expr, exprRef = createParserForwardedToRef()
let fact = pint32 <|> between (str "(") (str ")") expr
let term = chainl1 fact ((str "*" >>% (*)) <|> (str "/" >>% (/)))
do exprRef:= chainl1 term ((str "+" >>% (+)) <|> (str "-" >>% (-)))
let parse str = run expr str
Parser #2 (Idiomatic FParsec implementation):
open FParsec
let opp = new OperatorPrecedenceParser<_,_,_>()
type Assoc = Associativity
let str s = pstring s
let noWS = preturn () // dummy whitespace parser
opp.AddOperator(InfixOperator("-", noWS, 1, Assoc.Left, (-)))
opp.AddOperator(InfixOperator("+", noWS, 1, Assoc.Left, (+)))
opp.AddOperator(InfixOperator("*", noWS, 2, Assoc.Left, (*)))
opp.AddOperator(InfixOperator("/", noWS, 2, Assoc.Left, (/)))
let expr = opp.ExpressionParser
let term = pint32 <|> between (str "(") (str ")") expr
opp.TermParser <- term
let parse str = run expr str
In a nutshell, parser combinators are slow for lexing.
There was a Haskell combinator library for building lexers (see "Lazy Lexing is Fast" Manuel M. T. Chakravarty) - as the tables were generated at runtime, there wasn't the hassle of code generation. The library got used a bit - it was initially used in one of the FFI preprocessors, but I don't think it ever got uploaded to Hackage, so maybe it was a little too inconvenient for regular use.
In the OCaml code above, the parser is directly matching on char lists, so it can be as fast as list destructuring is in the host language (it would be much faster than Parsec if it were re-implemented in Haskell). Christian Lindig had an OCaml library with a set of parser combinators and a set of lexer combinators - the lexer combinators were certainly much simpler than Manuel Chakravarty's, and it might be worthwhile tracking down this library and benchmarking it before writing a lexer generator.
Have you tried one of the known fast parser libraries? Parsec's aims have never really been speed, but ease of use and clarity. Comparing against something like attoparsec would be fairer, especially because the string types are likely to be more equal (ByteString instead of String).
I also wonder which compile flags were used. This being another trolling post by the infamous Jon Harrop, it would not surprise me if no optimisations were used at all for the Haskell code.