If my input is "ab" and the parser is looking for "a", it recognises "a" as expected, but I need the trailing "b" to produce an error. How do I test for this?
The lexer generates an EOF token at the end of the source input. To force processing of all input, require the EOF as part of your main parser rule:
r : a+ EOF ;
a : A ;
b : B ;
A : 'a' ;
B : 'b' ;
The parser, starting from rule r with the input 'abaab', will report an unrecognized-input error - actually two. The default error strategy will attempt to skip a limited number of consecutive unknown tokens - one, IIRC - and try to resynchronize with the input token stream. In this case it succeeds in resynchronizing, first on an A token and then on the EOF token.
Optionally, use
Parser.addErrorListener(...) to add your own error reporter (extend BaseErrorListener)
Parser.setErrorHandler(...) to add your own error recovery strategy (extend DefaultErrorStrategy)
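For the first of those, here is a minimal sketch (the class name FailOnErrorListener is made up) of a listener that aborts the parse on the first syntax error instead of recovering:

import org.antlr.v4.runtime.BaseErrorListener;
import org.antlr.v4.runtime.RecognitionException;
import org.antlr.v4.runtime.Recognizer;

// Sketch: turn the first reported syntax error into an exception.
class FailOnErrorListener extends BaseErrorListener {
    @Override
    public void syntaxError(Recognizer<?, ?> recognizer, Object offendingSymbol,
                            int line, int charPositionInLine,
                            String msg, RecognitionException e) {
        throw new IllegalStateException(
            "line " + line + ":" + charPositionInLine + " " + msg, e);
    }
}

Attach it with parser.removeErrorListeners(); followed by parser.addErrorListener(new FailOnErrorListener()); removing the default listeners stops ANTLR from also printing the error to the console.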
If I remember correctly, you can use actions inside your ANTLR grammar, which will look something like:
grammar Expr;
prog: a b;
a: 'a';
b: 'b' {throw new RuntimeException();};
This will throw an exception after the parser has seen a valid b. Instead of throwing an exception you can also print out some debugging information.
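For example, a rough sketch of the same rule with a print action instead of the exception (the message text is just illustrative):

b: 'b' {System.out.println("matched b at line " + $start.getLine());};

The action runs at the point where it appears in the rule, so here it fires right after 'b' has been matched.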
The rule I am trying to match is: hello followed by a sequence of characters. If that sequence contains an alphabetic character in it, it should match the str rule; otherwise it should match the num rule.
For example:
hello123 - 123 should be matched by num rule
hello1a3 - 1a3 should be matched by the str rule
The grammar I wrote is below:
grammar Hello;
r: 'hello'seq;
// seq: str | integ;
seq: num | str;
num : DIGITS;
str : CHARS;
DIGITS: [0-9]+;
CHARS : [0-9a-zA-Z]+;
WS : [ \t\n\r]+ -> skip;
While trying to visualize the parse tree (using grun, against the first input example above), I got a parse tree containing an error. However, if the input had a space in between, there was no problem. Please explain why the error occurs.
Lexing in ANTLR (as well as most lexer generators) works according to the maximum munch rule, which says that it always applies the lexer rule that could match the longest prefix of the current input. For the input hello123, the rule 'hello' would match hello, whereas the rule CHARS would match the entire input hello123. Therefore CHARS produces the longer match and is chosen over 'hello'.
If your CHARS and DIGITS tokens can only appear after a 'hello' token, you can use lexer modes to make it so that these rules are only available after a 'hello' has been matched.
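A rough sketch of that approach (lexer modes require a separate lexer grammar; the grammar and mode names here are just illustrative):

lexer grammar HelloLexer;
HELLO : 'hello' -> pushMode(SEQ);
WS    : [ \t\n\r]+ -> skip;

mode SEQ;
DIGITS : [0-9]+ -> popMode;
CHARS  : [0-9a-zA-Z]+ -> popMode;

Inside the SEQ mode maximum munch still applies, but since 'hello' has already been consumed as its own token, hello123 now yields HELLO followed by DIGITS, while hello1a3 yields HELLO followed by CHARS.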
Otherwise, to get the behaviour you want, your best bet would probably be to create a single lexer rule that matches 'hello' [0-9a-zA-Z]* and then take apart the tokens generated by that in a separate step. Though it all depends on why you need this.
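For instance (the rule name HELLO_SEQ is just illustrative):

HELLO_SEQ : 'hello' [0-9a-zA-Z]* ;

A listener or visitor could then strip the leading hello from the token text and check whether the remainder consists only of digits.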
I am trying to parse a text file and I want to create a grammar to catch specific text blocks, let's say:
a) the word 'specificWordA' or 'specWordB' followed by zero or more digits, or
b) the word 'testC' followed by one or more digits.
My grammar looks like this:
grammar Hello;
catchExpr : expr+ EOF;
expr : matchAB | matchC;
matchAB : TEXTAB DIGIT*;
matchC : TEXTC DIGIT+;
TEXTAB : ('specificWordA' | 'specWordB') ;
TEXTC : ('testC') ;
DIGIT : NUMBER+ ;
CHARS : ('a'..'z' | 'A'..'Z')+ ;
SPACES : [ \r\t\n] ->skip;
fragment NUMBER: '0'..'9'+ ;
I am using ANTLR4 and I have compiled the code both on JAVA (to use the TestRig gui command for the AST) and Python2 (to provide a custom listener to traverse the tree). My file contains the following text:
specificWordA 11
specWordB specWordB specWordB testC 22 not not testD
testD 11
testC teeeeeeeeeest
testD 2
end here
Could someone please help me with the following questions:
1) Does ANTLR4 create nodes by default for every token I have defined in my grammar? How can I remove them so as to get a simplified version of the AST (see the image below; there are nodes for every sequence of characters that matches the CHARS token)?
2) Why does "testC teeeeeeeeeest testD 2 end here" match an expression? My rule is the text block 'testC' followed by at least one digit!
3) When I run my code I get the following messages:
line 3:39 extraneous input 'not' expecting {<EOF>, TEXTAB, 'testC'}
line 7:6 mismatched input 'teeeeeeeeeest' expecting {<EOF>, TEXTAB, 'testC'}
What does extraneous input mean? Do I have to change my grammar, or is it just a warning?
Based on these questions,
ANTLR4 extraneous input
ANTLR4: Extraneous Input error
I cannot figure out what is wrong with my grammar!
My full grammar results in an incarnation of the dreaded "no viable alternative", but anyway, maybe a solution to the problem I'm seeing with this trimmed-down version can help me understand what's going on.
grammar NOVIA;
WS : [ \t\r\n]+ -> skip ; // whitespace rule -> toss it out
T_INITIALIZE : 'INITIALIZE' ;
T_REPLACING : 'REPLACING' ;
T_ALPHABETIC : 'ALPHABETIC' ;
T_ALPHANUMERIC : 'ALPHANUMERIC' ;
T_BY : 'BY' ;
IdWord : IdLetter IdSeparatorAndLetter* ;
IdLetter : [a-zA-Z0-9];
IdSeparatorAndLetter : ([\-]* [_]* [A-Za-z0-9]+);
FigurativeConstant :
'ZEROES' | 'ZERO' | 'SPACES' | 'SPACE'
;
statement : initStatement ;
initStatement : T_INITIALIZE identifier+ T_REPLACING (T_ALPHABETIC | T_ALPHANUMERIC) T_BY (literal | identifier) ;
literal : FigurativeConstant ;
identifier : IdWord ;
and the following input
INITIALIZE ABC REPLACING ALPHANUMERIC BY SPACES
results in
(statement (initStatement INITIALIZE (identifier ABC) REPLACING ALPHANUMERIC BY (identifier SPACES)))
I would have expected to see SPACES being recognized as "literal", not "identifier".
Any and all pointers greatly appreciated,
TIA - Alex
Every string that might match the FigurativeConstant rule will also match the IdWord rule. Because the IdWord rule is listed first and the match length is the same with either rule, the Lexer issues an IdWord token, not a FigurativeConstant token.
List the FigurativeConstant rule first and you will get the result you were expecting.
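For example, with the rules from the question reordered:

FigurativeConstant :
'ZEROES' | 'ZERO' | 'SPACES' | 'SPACE'
;
IdWord : IdLetter IdSeparatorAndLetter* ;

The input SPACES still matches both rules at the same length, but FigurativeConstant is now listed first, so the lexer issues a FigurativeConstant token and the literal alternative of initStatement is taken.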
As a matter of style, the way you have ordered your rules obscures the significance of that order, particularly from the necessary point of view of the lexer and the parser. Take a look at the grammars in the antlr/grammars-v4 repository as examples -- typically, for a combined grammar, the parser rules are on top and the ordering is top-down. I would even hazard a guess that others might have answered sooner had your grammar been easier to read.
I know this question has been asked before, but I haven't found any solution to my specific problem. I am using Antlr4 with the C# target and I have the following lexer rules:
INT : [0-9]+
;
LETTER : [a-zA-Z_]+
;
WS : [ \t\r\n\u000C]+ -> skip
;
LineComment
: '#' ~[\r\n]* -> skip
;
Those are all the lexer rules, but there are many parser rules which I will not post here since I don't think they are relevant.
The problem I have is that whitespace does not get skipped. When I inspect the token stream after the lexer has run over my input, the whitespace is still in there and therefore causes an error. The input I use is relatively basic:
"fd 100"
It parses completely until it reaches this parser rule:
noSignFactor
: ':' ident #NoSignFactorArg
| integer #NoSignFactorInt
| float #NoSignFactorFloat
| BOOLEAN #NoSignFactorBool
| '(' expr ')' #NoSignFactorExpr
| 'not' factor #NoSignFactorNot
;
integer : INT #IntegerInt
;
Start by separating your grammar into a separate lexer grammar and parser grammar. For example, if you have a grammar Foo;, create the following:
Create a file FooLexer.g4, and move all of the lexer rules from Foo.g4 into FooLexer.g4.
Create a file FooParser.g4, and move all of the parser rules from Foo.g4 into FooParser.g4.
Include the following option in FooParser.g4:
options {
tokenVocab=FooLexer;
}
This separation will ensure that your parser isn't silently creating lexer rules for you. In a combined grammar, using a literal such as 'not' in a parser rule will create a lexer rule for you if one does not already exist. When this happens, it's easy to lose track of what kinds of tokens your lexer is able to produce. When you use a separate lexer grammar, you will need to explicitly declare a rule like the following in order to use 'not' in a parser rule.
NOT : 'not';
This should solve the whitespace problem if you have, for example, included the literal ' ' somewhere in a parser rule.
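As a rough sketch of that layout (only a few rules are shown; the lexer rules and the integer rule are taken from the question, and NOT from the step above):

// FooLexer.g4
lexer grammar FooLexer;
NOT : 'not' ;
INT : [0-9]+ ;
LETTER : [a-zA-Z_]+ ;
WS : [ \t\r\n\u000C]+ -> skip ;
LineComment : '#' ~[\r\n]* -> skip ;

// FooParser.g4
parser grammar FooParser;
options { tokenVocab = FooLexer; }
integer : INT #IntegerInt ;

With this split, a literal used in FooParser.g4 that has no matching lexer rule is reported by the ANTLR tool instead of silently becoming a new implicit token.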
Using ANTLR 4.2, I'm trying a very simple parse of this test data:
RRV0#ABC
Using a minimal grammar:
grammar Tiny;
thing : RRV N HASH ID ;
RRV : 'RRV' ;
N : [0-9]+ ;
HASH : '#' ;
ID : [a-zA-Z0-9]+ ;
WS : [\t\r\n]+ -> skip ; // match 1-or-more whitespace but discard
I expect the lexer rule RRV to match before ID, based on the excerpt below from Terence Parr's Definitive ANTLR 4 Reference:
BEGIN : 'begin' ; // match b-e-g-i-n sequence; ambiguity resolves to BEGIN
ID : [a-z]+ ; // match one or more of any lowercase letter
Running the ANTLR4 test rig with the test data above, the output is
[#0,0:3='RRV0',<4>,1:0]
[#1,4:4='#',<3>,1:4]
[#2,5:7='ABC',<4>,1:5]
[#3,10:9='<EOF>',<-1>,2:0]
line 1:0 mismatched input 'RRV0' expecting 'RRV'
I can see the first token is <4> for ID, with the value 'RRV0'.
I have tried rearranging the order of the lexer rules. I have also tried using implicit lexer tokens by matching the literals directly in the parser rule (rather than through explicit lexer rules). I tried making the matches non-greedy too. None of those were successful for me.
If I change the ID rule so that it does not match upper case, then the RRV rule does match and the parse gets further.
I started in ANTLR 4.1 with the same issue.
I checked in ANTLRWorks and from the command line, with the same result both ways.
How can I change the grammar to match the lexer rule RRV in preference to ID?
The grammar order resolution policy only applies when two different lexer rules match the same length of token. When the length differs, the longest one always wins. In your case, the ID rule matches a token with length 4, which is longer than the RRV token that only matches 3 characters.
This strategy is especially important in languages like Java. Consider the following input:
String className = "";
Along with the following two grammar rules (slightly simplified):
CLASS : 'class';
ID : [a-zA-Z_] [a-zA-Z0-9_]*;
If we only considered rule order, then the input className would produce a CLASS keyword token followed by an identifier Name. Rearranging the rules wouldn't solve the problem, because then there would be no way to ever create a CLASS token, even for the input class.
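Applied to the question, one way around it (the rule name RRVN is just illustrative) is to let a single lexer rule consume the prefix and the digits together, so that the length tie is broken by rule order:

grammar Tiny;
thing : RRVN HASH ID ;
RRVN : 'RRV' [0-9]+ ;
HASH : '#' ;
ID : [a-zA-Z0-9]+ ;
WS : [\t\r\n]+ -> skip ;

For the input RRV0#ABC, both RRVN and ID can match the four characters RRV0; because the lengths are equal, the rule listed first wins and an RRVN token is produced, after which '#' and 'ABC' lex as HASH and ID.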