I need to handle these sequences: <1>, <1-2>, <3-5 /0.5/>.
In ANTLR v3 I used these rules:
LPOINTY : ('<' REPEAT (PROBABILITY)? '>') => '<' // will consume only '<'
repeatOperator : LPOINTY_OR_ABNF_URI (XML_NM_TOKEN (weightOrProbability'>')?
In ANTLR v4 the "=>" operator (syntactic predicate) is not allowed, so I wrote it like this:
LPOINTY_OR_ABNF_URI // will return only the digits, e.g. 1, 1-2, 3-5
: '<' REPEAT '>' { setText(getText().substring(1, getText().length() - 1)); }
| '<' REPEAT WS+ { setText(getText().substring(1, getText().length())); }
;
repeatOperator
: LPOINTY_OR_ABNF_URI (WEIGHT_OR_PROBABILITY)? SHARP_BRACKET_RIGHT?
;
where the tokens are:
XML_NM_TOKEN - matches the content of '<...>'
weightOrProbability and WEIGHT_OR_PROBABILITY - match /0.5/
PROBABILITY - matches /0.5/
WS - matches whitespace
SHARP_BRACKET_RIGHT - matches '>'
Is there a better way to do this? I would like to use lookahead and consume only the first character, as in the old version. Is there a way to do that?
My solution:
REPEAT_OP1
: '<' REPEAT '>' { setText(getText().substring(1, getText().length()-1)); }
;
REPEAT_OP2
: '<' REPEAT { setText(getText().substring(1, getText().length())); }
;
repeatOperator
: REPEAT_OP1
| REPEAT_OP2 WEIGHT_OR_PROBABILITY? SHARP_BRACKET_RIGHT
| REPEAT_OP2 WEIGHT_OR_PROBABILITY? {notifyErrorListeners("Missing closing '>'!");}
;
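For completeness, the supporting rules that the solution assumes could look roughly like this (these definitions are not from the original post, just one plausible shape for REPEAT, WEIGHT_OR_PROBABILITY, SHARP_BRACKET_RIGHT and WS):
fragment DIGIT        : [0-9];
REPEAT                : DIGIT+ ('-' DIGIT+)?;         // e.g. 1, 1-2, 3-5
WEIGHT_OR_PROBABILITY : '/' DIGIT+ ('.' DIGIT+)? '/'; // e.g. /0.5/
SHARP_BRACKET_RIGHT   : '>';
WS                    : [ \t\r\n]+ -> channel(HIDDEN);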
Related
What's wrong with the following ANTLR lexer?
I got this warning:
warning(146): MySQL.g4:5685:0: non-fragment lexer rule VERSION_COMMENT_TAIL can match the empty string
Attached source code:
VERSION_COMMENT_TAIL:
{ VERSION_MATCHED == False }? // One level of block comment nesting is allowed for version comments.
((ML_COMMENT_HEAD MULTILINE_COMMENT) | . )*? ML_COMMENT_END { self.setType(MULTILINE_COMMENT); }
| { self.setType(VERSION_COMMENT); IN_VERSION_COMMENT = True; }
;
You are trying to convert my ANTLR3 grammar for MySQL to ANTLR4? Remove all the comment rules in the lexer and insert this instead:
// There are 3 types of block comments:
// /* ... */ - The standard multi line comment.
// /*! ... */ - A comment used to mask code for other clients. In MySQL the content is handled as normal code.
// /*!12345 ... */ - Same as the previous one, except the code is only used when the given number is lower
//                    than the current server version (i.e. it specifies the minimum server version the code can run with).
VERSION_COMMENT_START: ('/*!' DIGITS) (
{checkVersion(getText())}? // Will set inVersionComment if the number matches.
| .*? '*/'
) -> channel(HIDDEN)
;
// inVersionComment is a variable in the base lexer.
MYSQL_COMMENT_START: '/*!' { inVersionComment = true; setChannel(HIDDEN); };
VERSION_COMMENT_END: '*/' {inVersionComment}? { inVersionComment = false; setChannel(HIDDEN); };
BLOCK_COMMENT: '/*' ~[!] .*? '*/' -> channel(HIDDEN);
POUND_COMMENT: '#' ~([\n\r])* -> channel(HIDDEN);
DASHDASH_COMMENT: DOUBLE_DASH ([ \t] (~[\n\r])* | LINEBREAK | EOF) -> channel(HIDDEN);
You need an inVersionComment member and a checkVersion() function in your lexer (I keep them in a base lexer from which the generated lexer derives); checkVersion() returns true or false depending on whether the current server version is equal to or higher than the given version.
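A rough sketch of what such a base lexer could look like (the class, field and method names here are illustrative, not the actual sources; the generated lexer would reference it via options { superClass = MySQLBaseLexer; }):
import org.antlr.v4.runtime.CharStream;
import org.antlr.v4.runtime.Lexer;

public abstract class MySQLBaseLexer extends Lexer {
    // True while we are inside a /*!NNNNN ... */ comment whose version matched.
    protected boolean inVersionComment = false;

    // The server version to emulate, e.g. 50620 for 5.6.20.
    protected long serverVersion = 50620;

    protected MySQLBaseLexer(CharStream input) {
        super(input);
    }

    // Called from VERSION_COMMENT_START; at that point getText() is "/*!" followed by the digits.
    protected boolean checkVersion(String text) {
        long version = Long.parseLong(text.substring(3)); // skip the "/*!" prefix
        if (version <= serverVersion) {
            inVersionComment = true;
            return true;
        }
        return false;
    }
}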
And for your question: you cannot have actions in alternatives. Actions can only appear at the end of an entire rule. This differs from ANTLR3.
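As a generic illustration of that restriction (doA() and doB() are hypothetical lexer members, not code from the grammar above): rather than placing an action in each alternative, put a single action at the end of the rule and decide there what to do.
TOK : ('a' | 'b') { if (getText().equals("a")) doA(); else doB(); } ;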
I'm trying to use a semantic predicate in the lexer to look ahead one token but somehow I can't get it right. Here's what I have:
lexer grammar
lexer grammar TLLexer;
DirStart
: { getCharPositionInLine() == 0 }? '#dir'
;
DirEnd
: { getCharPositionInLine() == 0 }? '#end'
;
Cont
: 'contents' [ \t]* -> mode(CNT)
;
WS
: [ \t]+ -> channel(HIDDEN)
;
NL
: '\r'? '\n'
;
mode CNT;
CNT_DirEnd
: '#end' [ \t]* '\n'?
{ System.out.println("--matched end--"); }
;
CNT_LastLine
: ~ '\n'* '\n'
{ _input.LA(1) == CNT_DirEnd }? -> mode(DEFAULT_MODE)
;
CNT_Line
: ~ '\n'* '\n'
;
parser grammar
parser grammar TLParser;
options { tokenVocab = TLLexer; }
dirs
: ( dir
| NL
)*
;
dir
: DirStart Cont
contents
DirEnd
;
contents
: CNT_Line* CNT_LastLine
;
Essentially, each line in the CNT mode is free-form, but it never begins with #end followed by optional whitespace. Basically, I want to keep matching the #end tag in the default lexer mode.
My test input is as follows:
#dir contents
..line..
#end
If I run this in grun I get the following:
$ grun TL dirs test.txt
--matched end--
line 3:0 extraneous input '#end\n' expecting {CNT_LastLine, CNT_Line}
So clearly CNT_DirEnd gets matched, but somehow the predicate doesn't detect it.
I know that this particular task doesn't require a semantic predicate, but that's just the part that doesn't work. The actual parser, while it could be written without the predicate, would be a lot less clean if I simply moved the matching of the #end tag into the CNT mode.
Thanks,
Kesha.
I think I figured it out. The member _input represents the characters of the original input, so _input.LA returns characters, not lexer token types. Either way, the numbers returned by the lexer to the parser have nothing to do with the values returned by _input.LA, hence the predicate fails unless, by some weird luck, the character value returned by _input.LA(1) happens to equal the token type of CNT_DirEnd.
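To make the distinction concrete, compare these two predicates (just an illustration, not code from the post):
{ _input.LA(1) == '#' }?          // compares the next character code: meaningful
{ _input.LA(1) == CNT_DirEnd }?   // compares a character code with a token type: almost never true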
I modified the lexer as shown below and now it works, even though it is not as elegant as I hoped it would be (maybe someone knows a better way?)
lexer grammar TLLexer;
@lexer::members {
private static final String END_DIR = "#end";
private boolean isAtEndDir() {
StringBuilder sb = new StringBuilder();
int n = 1;
int ic;
// read characters until EOF
while ((ic = _input.LA(n++)) != -1) {
char c = (char) ic;
// we're interested in the next line only
if (c == '\n') break;
if (c == '\r') continue;
sb.append(c);
}
// Does the line begin with #end ?
if (sb.indexOf(END_DIR) != 0) return false;
// Is the #end followed by whitespace only?
for (int i = END_DIR.length(); i < sb.length(); i++) {
switch (sb.charAt(i)) {
case ' ':
case '\t':
continue;
default: return false;
}
}
return true;
}
}
[skipped .. nothing changed in the default mode]
mode CNT;
/* removed CNT_DirEnd */
CNT_LastLine
: ~ '\n'* '\n'
{ isAtEndDir() }? -> mode(DEFAULT_MODE)
;
CNT_Line
: ~ '\n'* '\n'
;
I've been using ANTLR for 3 days. I can parse expressions, write listeners, interpret parse trees... it's a dream come true.
But then I tried to match a literal string 'foo%' and I'm failing. I can find plenty of examples that claim to do this. I have tried them all.
So I created a tiny project to match a literal string. I must be doing something silly.
grammar Test;
clause
: stringLiteral EOF
;
fragment ESCAPED_QUOTE : '\\\'';
stringLiteral : '\'' ( ESCAPED_QUOTE | ~('\n'|'\r') ) + '\'';
Simple test:
public class Test {
@org.junit.Test
public void test() {
String input = "'foo%'";
TestLexer lexer = new TestLexer(new ANTLRInputStream(input));
CommonTokenStream tokens = new CommonTokenStream(lexer);
TestParser parser = new TestParser(tokens);
ParseTree clause = parser.clause();
System.out.println(clause.toStringTree(parser));
ParseTreeWalker walker = new ParseTreeWalker();
}
}
The result:
Running com.example.Test
line 1:1 token recognition error at: 'f'
line 1:2 token recognition error at: 'o'
line 1:3 token recognition error at: 'o'
line 1:4 token recognition error at: '%'
line 1:6 no viable alternative at input '<EOF>'
(clause (stringLiteral ' ') <EOF>)
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.128 sec - in com.example.Test
Results :
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
The full maven-ized build tree is available for a quick review here
31 lines of code... most of it borrowed from small examples.
$ mvn clean test
Using antlr-4.5.2-1.
Fragment rules can only be used by other lexer rules, so you need to make stringLiteral a lexer rule instead of a parser rule: just let it start with an upper-case letter.
Also, it's better to expand your negated class ~('\n'|'\r') to also include the backslash and the quote, and you might want to let the backslash itself be escaped:
clause
: StringLiteral EOF
;
StringLiteral : '\'' ( Escape | ~('\'' | '\\' | '\n' | '\r') ) + '\'';
fragment Escape : '\\' ( '\'' | '\\' );
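With those changes the original JUnit test should pass without recognition errors; a minimal sketch of the check (class names assume the grammar is still called Test):
import org.antlr.v4.runtime.ANTLRInputStream;
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.tree.ParseTree;

public class StringLiteralDemo {
    public static void main(String[] args) {
        TestLexer lexer = new TestLexer(new ANTLRInputStream("'foo%'"));
        TestParser parser = new TestParser(new CommonTokenStream(lexer));
        ParseTree clause = parser.clause();
        // Should print a tree along the lines of: (clause 'foo%' <EOF>)
        System.out.println(clause.toStringTree(parser));
    }
}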
I have the following grammar (minimized for SO):
grammar Hello;
odataIdentifier : identifierLeadingCharacter identifierCharacter*;
identifierLeadingCharacter : Alpha| UNDERSCORE;
identifierCharacter : identifierLeadingCharacter | Digit;
identifierUnreserved : identifierCharacter | (MINUS | DOT | TILDE);
Digit : ZERO_TO_FIVE |[6-9];
ONEHUNDRED_TO_ONEHUNDREDNINETYNINE : '1' Digit Digit; // 100-199
TWOHUNDRED_TO_TWOHUNDREDFOURTYNINE : '2' ZERO_TO_FOUR Digit; // 200-249
TWOHUNDREDFIFTY_TO_TWOHUNDREDFIFTYFIVE : '25' ZERO_TO_FIVE; // 250-255
TEN_TO_NINETYNINE : ONE_TO_NINE Digit; // 10-99
ZERO_TO_ONE : [0-1];
ZERO_TO_TWO : ZERO_TO_ONE | [2];
ZERO_TO_THREE : ZERO_TO_TWO | [3];
ZERO_TO_FOUR : ZERO_TO_THREE | [4];
ZERO_TO_FIVE : ZERO_TO_FOUR | [5];
ONE_TO_TWO : [1-2];
ONE_TO_THREE : ONE_TO_TWO | [3];
ONE_TO_FOUR : ONE_TO_THREE | [4];
ONE_TO_NINE : ONE_TO_FOUR | [5-9];
Alpha : [a-zA-Z];
MINUS : [-];
DOT : '.';
UNDERSCORE : '_';
TILDE : '~';
WS : (' '|'\r'|'\t'|'\u000C'|'\n') -> skip
;
For input c9 it works fine, but when I have two digits, for example c10, it says:
extraneous input '92' expecting {<EOF>, Digit, Alpha, '_'}
So I guess it parses the 9 and then the 2, and doesn't know whether this should be TEN_TO_NINETYNINE or two Digit tokens.
I am new to this, so I'm wondering whether my analysis is right and how I could fix this...
Your input is resulting in an Alpha token followed by a TEN_TO_NINETYNINE token. While the parser rule identifierLeadingCharacter does allow the Alpha token, the identifierCharacter rule cannot match a TEN_TO_NINETYNINE token.
The input 10 will always produce a TEN_TO_NINETYNINE token rather than two Digit tokens, because the former matches more of the input and lexer rules are greedy.
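One way around this (a sketch only, not part of the original answer) is to keep only single-character tokens in the lexer and move the range logic out of it, so the parser sees plain Digit and Alpha tokens:
// Keep the character-level tokens ...
Digit : [0-9];
Alpha : [a-zA-Z];
// ... and turn the multi-character range rules into fragments (or drop them),
// so they can no longer out-compete Digit. Numeric ranges such as 10-99 or
// 0-255 are then enforced in parser rules, semantic predicates, or a listener.
fragment ZERO_TO_FIVE : [0-5];
fragment ONE_TO_NINE  : [1-9];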
I am trying to parse a boolean expression of the following type:
B1=p & A4=p | A6=p &(~A5=c)
I want a tree that I can use to evaluate the above expression, so I tried this in ANTLR3 with the example from "Antlr parser for and/or logic - how to get expressions between logic operators?".
It worked in ANTLR3. Now I want to do the same thing in ANTLR4. I came up with the grammar below and it compiles, but I am having trouble writing the Java code.
Start of the ANTLR4 grammar:
grammar TestAntlr4;
options {
output = AST;
}
tokens { AND, OR, NOT }
AND : '&';
OR : '|';
NOT : '~';
// parser/production rules start with a lower case letter
parse
: expression EOF! // omit the EOF token
;
expression
: or
;
or
: and (OR^ and)* // make `||` the root
;
and
: not (AND^ not)* // make `&&` the root
;
not
: NOT^ atom // make `~` the root
| atom
;
atom
: ID
| '('! expression ')'! // omit both `(` and `)`
;
// lexer/terminal rules start with an upper case letter
ID
:
(
'a'..'z'
| 'A'..'Z'
| '0'..'9' | ' '
| ('+'|'-'|'*'|'/'|'_')
| '='
)+
;
I have written the Java code (snippet below) to get a tree for the expression "B1=p & A4=p | A6=p &(~A5=c)". I am expecting & with children B1=p and |; the child | operator will have children A4=p and A6=p &(~A5=c), and so on.
Here is that Java code, but I am stuck trying to figure out how to get the tree. I was able to do this in ANTLR 3.
Java Code
String src = "B1=p & A4=p | A6=p &(~A5=c)";
CharStream stream = new ANTLRInputStream(src);
TestAntlr4Lexer lexer = new TestAntlr4Lexer(stream);
CommonTokenStream tokens = new CommonTokenStream(lexer);
TestAntlr4Parser parser = new TestAntlr4Parser(tokens);
parser.setBuildParseTree(true);
ParserRuleContext tree = parser.parse();
tree.inspect(parser);
if ( tree.children.size() > 0) {
System.out.println(" **************");
test.getChildren(tree, parser);
}
The getChildren method is below, but it does not seem to extract any tokens.
public void getChildren(ParseTree tree, TestAntlr4Parser parser ) {
for (int i=0; i<tree.getChildCount(); i++){
System.out.println(" Child i= " + i);
System.out.println(" expression = <" + tree.toStringTree(parser) + ">");
if ( tree.getChild(i).getChildCount() != 0 ) {
this.getChildren(tree.getChild(i), parser);
}
}
}
Could someone help me figure out how to write the parser in Java?
The output=AST option was removed in ANTLR 4, as well as the ^ and ! operators you used in the grammar. ANTLR 4 produces parse trees instead of ASTs, so the root of the tree produced by a rule is the rule itself. For example, given the following rule:
and : not (AND not)*;
You will end up with an AndContext tree containing NotContext and TerminalNode children for the not and AND references, respectively. To make it easier to work with the trees, AndContext will contain a generated method not() which returns a list of the context objects produced by the invocations of the not rule (return type List<? extends NotContext>). It also contains a generated method AND() which returns a list of the TerminalNode instances created for each AND token that was matched.
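A minimal sketch of how those generated methods can be consumed with a visitor (this is only an illustration: it assumes the grammar has been stripped of the ANTLR3-only output option and the ^/! operators, and that the classes were generated with the -visitor option):
import org.antlr.v4.runtime.ANTLRInputStream;
import org.antlr.v4.runtime.CommonTokenStream;

public class LogicDemo {
    public static void main(String[] args) {
        String src = "B1=p & A4=p | A6=p &(~A5=c)";
        TestAntlr4Lexer lexer = new TestAntlr4Lexer(new ANTLRInputStream(src));
        TestAntlr4Parser parser = new TestAntlr4Parser(new CommonTokenStream(lexer));
        // Prints a prefix rendering along the lines of (| (& B1=p A4=p) (& A6=p (~ A5=c)))
        System.out.println(new Printer().visit(parser.parse().expression()));
    }

    static class Printer extends TestAntlr4BaseVisitor<String> {
        @Override
        public String visitOr(TestAntlr4Parser.OrContext ctx) {
            if (ctx.and().size() == 1) return visit(ctx.and(0));
            StringBuilder sb = new StringBuilder("(|");
            for (TestAntlr4Parser.AndContext operand : ctx.and()) sb.append(' ').append(visit(operand));
            return sb.append(')').toString();
        }

        @Override
        public String visitAnd(TestAntlr4Parser.AndContext ctx) {
            // ctx.not() is the generated method described above.
            if (ctx.not().size() == 1) return visit(ctx.not(0));
            StringBuilder sb = new StringBuilder("(&");
            for (TestAntlr4Parser.NotContext operand : ctx.not()) sb.append(' ').append(visit(operand));
            return sb.append(')').toString();
        }

        @Override
        public String visitNot(TestAntlr4Parser.NotContext ctx) {
            return ctx.NOT() == null ? visit(ctx.atom()) : "(~ " + visit(ctx.atom()) + ")";
        }

        @Override
        public String visitAtom(TestAntlr4Parser.AtomContext ctx) {
            // The ID rule also matches spaces and '=', so trim the text for display.
            return ctx.ID() != null ? ctx.ID().getText().trim() : visit(ctx.expression());
        }
    }
}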