ANTLR4 - named function arguments

My goal is to generate a parser that can handle the following code, with named function parameters and nested function calls:
fnCallY(namedArgStr = "xxx", namedArgZ=fnCallZ(namedArg="www"))
The .g4 grammar file:
val : type_string
| function_call
;
function_call : function_name=ID arguments='('argument? (',' argument)* ')';
argument : name=ID '=' value=val ;
ID : [a-zA-Z_][a-zA-Z0-9_]*;
type_string : LITERAL;
fragment ESCAPED_QUOTE : '\\"';
LITERAL : '"' ( ESCAPED_QUOTE | ~('\n'|'\r') )*? '"'
| '\'' ( ESCAPED_QUOTE | ~('\n'|'\r') )*? '\'';
@Override
public void exitFunction_call(Test.Function_callContext ctx) {
List<Test.ArgumentContext> argument = ctx.argument();
for (Test.ArgumentContext arg : argument) {
Token name = arg.name;
Test.ValContext value = arg.value;
if (value.type_string() == null || value.function_call() == null) {
throw new RuntimeException("Could not parse argument value");
}
}
}
arg.name holds the correct data, but I cannot get the parser to parse the part after the =.

The parser is recognizing the argument values.
(It's really valuable to learn the grun command-line utility, as it can test the grammar and tree structure without involving any of your own code.)
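For example, assuming the grammar file is named Test.g4 and the usual antlr4 and grun aliases are set up, the parse tree for the sample input can be inspected with:
antlr4 Test.g4
javac Test*.java
grun Test val -tree
fnCallY(namedArgStr = "xxx", namedArgZ=fnCallZ(namedArg="www"))
(end the input with Ctrl+D on Linux/macOS, or Ctrl+Z followed by Enter on Windows)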
This condition would appear to be your problem:
if (value.type_string() == null || value.function_call() == null)
One or the other will always be null, so this will fail.
if (value.type_string() == null && value.function_call() == null)
is probably what you want.
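A minimal sketch of the corrected listener, assuming the generated context and accessor names from the question (type_string() and function_call() on ValContext), which dispatches on whichever alternative actually matched:
@Override
public void exitFunction_call(Test.Function_callContext ctx) {
    for (Test.ArgumentContext arg : ctx.argument()) {
        Token name = arg.name;
        Test.ValContext value = arg.value;
        if (value.type_string() != null) {
            // string-literal argument, e.g. namedArgStr = "xxx"
        } else if (value.function_call() != null) {
            // nested-call argument, e.g. namedArgZ = fnCallZ(...)
        } else {
            throw new RuntimeException("Could not parse argument value");
        }
    }
}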

Related

Where is the InputMismatchException thrown?

When I execute my program with a certain token in the wrong spot, it throws the InputMismatchException, saying something along the lines of
line 21:0 mismatched input '#' expecting {'in', '||', '&&', '==', '!=', '>=', '<=', '^', '>', '<', '+', '-', '*', '/', '%', '[', ';', '?'}
This is a terrible error message for the language I'm developing, so I'm looking to change it, but I can't find the source of it. I know why the error is being thrown, but I can't find the actual line of Java code that throws the InputMismatchException. I don't think it's anywhere in my project, so I assume it's somewhere in the ANTLR4 runtime. Is there a way to disable these error messages, or at least change them?
Edit:
My grammar (the relevant parts) is as follows:
grammar Q;
parse
: header? ( allImport ';' )*? block EOF
;
block
: ( statement | functionDecl )* ( Return expression ';' )?
;
statement
: functionCall ';'
| ifStatement
| forStatement | forInStatement
| whileStatement
| tryCatchStatement
| mainFunctionStatement
| addWebServerTextStatement ';'
| reAssignment ';'
| classStatement
| constructorStatement ';'
| windowAddCompStatement ';'
| windowRenderStatement ';'
| fileWriteStatement ';'
| verifyFileStatement ';'
| objFunctionCall (';')?
| objCreateStatement ';'
| osExecStatement ';'
| anonymousFunction
| hereStatement ';'
;
And an example of the importStatement visit method is:
@Override
public QValue visitImportStatement(ImportStatementContext ctx) {
StringBuilder path = new StringBuilder();
StringBuilder text = new StringBuilder();
for (TerminalNode o : ctx.Identifier()) {
path.append("/").append(o.getText());
}
for (TerminalNode o : ctx.Identifier()) {
text.append(".").append(o.getText());
}
if (lang.allLibs.contains(text.toString().replace(".q.", "").toLowerCase(Locale.ROOT))) {
lang.parse(text.toString());
return QValue.VOID;
}
for (File f : lang.parsed) {
Path currentRelativePath = Paths.get("");
String currentPath = currentRelativePath.toAbsolutePath().toString();
File file = new File(currentPath + "/" + path + ".l");
if (f.getPath().equals(file.getPath())) {
return null;
}
}
QLexer lexer = null;
Path currentRelativePath = Paths.get("");
String currentPath = currentRelativePath.toAbsolutePath().toString();
File file = new File(currentPath + "/" + path + ".l");
lang.parsed.add(file);
try {
lexer = new QLexer(CharStreams.fromFileName(currentPath + "/" + path + ".l"));
} catch (IOException e) {
throw new Problem("Library or File not found: " + path, ctx);
}
QParser parser = new QParser(new CommonTokenStream(lexer));
parser.setBuildParseTree(true);
ParseTree tree = parser.parse();
Scope s = new Scope(lang.scope, false);
Visitor v = new Visitor(s, new HashMap<>());
v.visit(tree);
return QValue.VOID;
}
Because of the parse rule in my .g4 file, the import statement MUST come before anything else (aside from a header statement), so doing this would throw an error:
class Main
#import src.main.QFiles.aLib;
fn main()
try
std::ln("orih");
onflaw
end
new Object as o();
o::set("val");
std::ln(o::get());
std::ln("itj");
end
end
And, as expected, it throws an InputMismatchException, but that's not in any of my code
You can remove the default error listeners and add your own:
...
QParser parser = new QParser(new CommonTokenStream(lexer));
parser.removeErrorListeners();
parser.addErrorListener(new BaseErrorListener() {
@Override
public void syntaxError(Recognizer<?, ?> recognizer, Object offendingSymbol, int line, int charPositionInLine, String msg, RecognitionException e) {
throw new RuntimeException("Your own message here", e);
}
});
ParseTree tree = parser.parse();
...
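If you would rather stop parsing at the first syntax error instead of letting the parser recover, you can also swap out the error strategy itself; a minimal sketch using the runtime's BailErrorStrategy:
...
QParser parser = new QParser(new CommonTokenStream(lexer));
// BailErrorStrategy aborts at the first syntax error by throwing a
// ParseCancellationException instead of attempting recovery
parser.setErrorHandler(new BailErrorStrategy());
ParseTree tree = parser.parse();
...
The two approaches combine well: the listener controls the message, the strategy controls whether the parser keeps going.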

Xtext referring to an element from a different file does not work

Hello, I have two files in my Xtext editor: the first one contains all definitions and the second one contains the executed recipe. The grammar looks like this:
ServiceAutomationProgram:
('package' name=QualifiedName ';')?
imports+=ServiceAutomationImport*
definitions+=Definition*;
ServiceAutomationImport:
'import' importedNamespace=QualifiedNameWithWildcard ';';
Definition:
'define' ( TypeDefinition | ServiceDefinition |
SubRecipeDefinition | RecipeDefinition) ';';
TypeDefinition:
'quantity' name=ID ;
SubRecipeDefinition:
'subrecipe' name=ID '('( subRecipeParameters+=ServiceParameterDefinition (','
subRecipeParameters+=ServiceParameterDefinition)*)? ')' '{'
recipeSteps+=RecipeStep*
'}';
RecipeDefinition:
'recipe' name=ID '{' recipeSteps+=RecipeStep* '}';
RecipeStep:
(ServiceInvocation | SubRecipeInvocation) ';';
SubRecipeInvocation:
name=ID 'subrecipe' calledSubrecipe=[SubRecipeDefinition] '('( parameters+=ServiceInvocationParameter (',' parameters+=ServiceInvocationParameter)* )?')'
;
ServiceInvocation:
name=ID 'service' service=[ServiceDefinition]
'(' (parameters+=ServiceInvocationParameter (',' parameters+=ServiceInvocationParameter)*)? ')'
;
ServiceInvocationParameter:
ServiceEngineeringQuantityParameter | SubRecipeParameter
;
ServiceEngineeringQuantityParameter:
parameterName=[ServiceParameterDefinition] value=Amount;
ServiceDefinition:
'service' name=ID ('inputs' serviceInputs+=ServiceParameterDefinition (','
serviceInputs+=ServiceParameterDefinition)*)?;
ServiceParameterDefinition:
name=ID ':' (parameterType=[TypeDefinition])
;
SubRecipeParameter:
parameterName=[ServiceParameterDefinition]
;
QualifiedNameWithWildcard:
QualifiedName '.*'?;
QualifiedName:
ID ('.' ID)*;
Amount:
INT ;
....
Definition file file.mydsl:
define quantity Temperature;
define service Heater inputs SetTemperature:Temperature;
define subrecipe sub_recursive() {
Heating1 service Heater(SetTemperature 10);
};
....
Recipe file secondsfile.mydsl:
define recipe Main {
sub1 subrecipe sub_recursive();
};
.....
In my generator file which looks like this:
override void doGenerate(Resource resource, IFileSystemAccess2 fsa, IGeneratorContext context) {
for (e : resource.allContents.toIterable.filter(RecipeDefinition)) {
e.class; // just for demonstration: add a breakpoint here and traverse down the tree
}
}
As an example, I need the information RecipeDefinition.recipesteps.subrecipeinvocation.calledsubrecipe.recipesteps.serviceinvocation.service.name, which is not accessible (null). So some of the very deeply buried information gets lost (maybe due to lazy linking?).
To make the project executable, also add this to the scope provider:
public IScope getScope(EObject context, EReference reference) {
if (context instanceof ServiceInvocationParameter
&& reference == MyDslPackage.Literals.SERVICE_INVOCATION_PARAMETER__PARAMETER_NAME) {
ServiceInvocationParameter invocationParameter = (ServiceInvocationParameter) context;
List<ServiceParameterDefinition> candidates = new ArrayList<>();
if(invocationParameter.eContainer() instanceof ServiceInvocation) {
ServiceInvocation serviceCall = (ServiceInvocation) invocationParameter.eContainer();
ServiceDefinition calledService = serviceCall.getService();
candidates.addAll(calledService.getServiceInputs());
if(serviceCall.eContainer() instanceof SubRecipeDefinition) {
SubRecipeDefinition subRecipeCall=(SubRecipeDefinition) serviceCall.eContainer();
candidates.addAll(subRecipeCall.getSubRecipeParameters());
}
return Scopes.scopeFor(candidates);
}
else if(invocationParameter.eContainer() instanceof SubRecipeInvocation) {
SubRecipeInvocation serviceCall = (SubRecipeInvocation) invocationParameter.eContainer();
SubRecipeDefinition calledSub = serviceCall.getCalledSubrecipe();
candidates.addAll(calledSub.getSubRecipeParameters());
return Scopes.scopeFor(candidates);
}
}return super.getScope(context, reference);
}
When I put everything in the same file it works, as it does the first time it is executed after launching the runtime, but afterwards (when doGenerate is triggered by saving in the editor) some information is missing. Any idea how to get to the missing information? Thanks a lot!
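If lazy linking is indeed the suspect, one thing worth trying (an assumption, not something verified against this project) is to force resolution of all cross-reference proxies before traversing the model, e.g. at the top of doGenerate:
override void doGenerate(Resource resource, IFileSystemAccess2 fsa, IGeneratorContext context) {
    // force EMF/Xtext lazy-linking proxies to be resolved before walking the tree
    EcoreUtil.resolveAll(resource)
    for (e : resource.allContents.toIterable.filter(RecipeDefinition)) {
        e.class
    }
}
EcoreUtil.resolveAll is the standard EMF utility for this; whether it can actually reach elements in the second file still depends on the index and the scope provider being set up correctly.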

Newbie: 2.4# is accepted as a float. Is '#' a special character?

Wondering why the expression "setvalue(2#)" is happily accepted by the lexer (and parser) given my grammar/visitor. I am sure I am doing something wrong.
Below is a small sample that should illustrate the problem.
Any help is much appreciated.
grammar ExpressionEvaluator;
parse
: block EOF
;
block
: stat*
;
stat
: assignment
;
assignment
: SETVALUE OPAR expr CPAR
;
expr
: atom #atomExpr
;
atom
: OPAR expr CPAR #parExpr
| (INT | FLOAT) #numberAtom
| ID #idAtom
;
OPAR : '(';
CPAR : ')';
SETVALUE : 'setvalue';
ID
: [a-zA-Z_] [a-zA-Z_0-9]*
;
INT
: [0-9]+
;
FLOAT
: [0-9]+ '.' [0-9]*
| '.' [0-9]+
;
STRING
: '"' (~["\r\n] | '""')* '"'
;
SPACE
: [ \t\r\n] -> skip
;
Code snippet:
public override object VisitParse(ExpressionEvaluatorParser.ParseContext context)
{
return this.Visit(context.block());
}
public override object VisitAssignment(ExpressionEvaluatorParser.AssignmentContext context)
{
// TODO - Set ID Value
return Convert.ToDouble(this.Visit(context.expr()));
}
public override object VisitIdAtom(ExpressionEvaluatorParser.IdAtomContext context)
{
string id = context.GetText();
// TODO - Lookup ID value
return id;
}
public override object VisitNumberAtom(ExpressionEvaluatorParser.NumberAtomContext context)
{
return Convert.ToDouble(context.GetText());
}
public override object VisitParExpr(ExpressionEvaluatorParser.ParExprContext context)
{
return this.Visit(context.expr());
}
The # character actually isn't matching anything at all. When the lexer reaches that character, the following happens, in order:
The lexer determines that no lexer rule can match the # character.
The lexer reports an error regarding the failure.
The lexer calls _input.consume() to skip past the bad character.
To ensure errors are reported as easily as possible, always add the following rule as the last rule in your lexer.
ErrChar
: .
;
The parser will report an error when it reaches an ErrChar, so you won't need to add an error listener to the lexer.
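A minimal sketch of the effect (Java target shown for brevity; the question's snippets are C#): once ErrChar is the last lexer rule, the '#' is no longer dropped silently but becomes a token the parser will complain about.
// org.antlr.v4.runtime.* imports assumed
ExpressionEvaluatorLexer lexer =
        new ExpressionEvaluatorLexer(CharStreams.fromString("setvalue(2#)"));
CommonTokenStream tokens = new CommonTokenStream(lexer);
tokens.fill();
for (Token t : tokens.getTokens()) {
    System.out.println(t); // '2' comes out as an INT, '#' now shows up as an ErrChar token
}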

ANTLR4 Specific search

Basically I want to find, using ANTLR, every expression in a file defined as:
WORD.WORD
for example: "end.beginning" matches
For the time being, the file can have hundreds and hundreds of lines and a complex structure.
Is there a way to skip everything (every character?) that does not match the pattern described above, without making a grammar that fully represents the file?
So far this is my grammar, but I don't know what to do next.
grammar Dep;
program
:
dependencies
;
dependencies
:
(
dependency
)*
;
dependency
:
identifier
DOT
identifier
;
identifier
:
INDENTIFIER
;
DOT : '.' ;
INDENTIFIER
:
[a-zA-Z_] [a-zA-Z0-9_]*
;
OTHER
:
. -> skip
;
The way you're doing it now, the dependency rule would also match the tokens 'end', '.', 'beginning' from the input:
end
#####
.
#####
beginning
because the line breaks and '#'s are being skipped from the token stream.
If that is not what you want, i.e. you'd like to match "end.beginning" without any char in between, you should make a single lexer rule of it, and match that rule in your parser:
grammar Dep;
program
: DEPENDENCY* EOF
;
DEPENDENCY
: [a-zA-Z_] [a-zA-Z0-9_]* '.' [a-zA-Z_] [a-zA-Z0-9_]*
;
OTHER
: . -> skip
;
Then you could use a tree listener to do something useful with your DEPENDENCY tokens:
public class Main {
public static void main(String[] args) throws Exception {
String input = "### end.beginning ### end ### foo.bar mu foo.x";
DepLexer lexer = new DepLexer(new ANTLRInputStream(input));
DepParser parser = new DepParser(new CommonTokenStream(lexer));
ParseTreeWalker.DEFAULT.walk(new DepBaseListener(){
@Override
public void enterProgram(@NotNull DepParser.ProgramContext ctx) {
for (TerminalNode node : ctx.DEPENDENCY()) {
System.out.println("node=" + node.getText());
}
}
}, parser.program());
}
}
which would print:
node=end.beginning
node=foo.bar
node=foo.x
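(Side note: ANTLRInputStream, used above, is deprecated in more recent ANTLR 4 releases; the rough equivalent there would be:)
DepLexer lexer = new DepLexer(CharStreams.fromString(input));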

ANTLR4 Tokenizing the First Non-Whitespace Non-comment Char of a Line

I am looking for a way to tokenize a '#' that is the first non-whitespace, non-comment character of a line (this is exactly the same as the requirement for standard C++ preprocessing directives). Notice that the first-non-whitespace requirement implies the # can be preceded by whitespace and multiline comments, such as (using C++ preprocessing directives as examples):
/* */ /* abc */ #define getit(x,y) #x x##y
and
/*
can be preceded by multiline comment spreading across >1 lines
123 */ /* abc */# /* */define xyz(a) #a
The '#' can be preceded and followed by multiline comments spanning more than one line, and by whitespace. Other '#' characters can appear in the line as operators, so being the first effective character in the line is the key requirement.
How do we tokenize the first effective '#' character?
I tried this
FIRSTHASH: {getCharPositionInLine() == 0}? ('/*' .*? '*/' | [ \t\f])* '#';
But this is buggy since an input like this
/* */other line
/* S*/ /*SS*/#
is wrongly considered as 2 tokens (1 big comment + a single '#'), i.e. the .*? consumed the two */ incorrectly, causing the 2 lines to be combined into 1 comment. (Is it possible to replace the .*? inside the multiline comment with something that explicitly excludes */?)
I'd lex it without the constraint and check during the parsing phase, or even after parsing.
It may not conform to the grammar to put the '#' elsewhere, but it doesn't invalidate the parsing => move the check to a later phase where it can be detected more easily!
If you really want to do it "early" (i.e. not after parsing), do it during the parsing phase.
The lexing doesn't depend on it (i.e. unlike strings or comments), so there's no point in doing it during the lexing phase.
Here's a sample in C#.
It checks all 3 defines (first two are ok, the third is not ok).
public class hashTest
{
public static void test()
{
var sample = File.ReadAllText(@"ANTLR\unrelated\hashTest.txt");
var sampleStream = new Antlr4.Runtime.AntlrInputStream(sample);
var lexer = new hashLex(input: sampleStream);
var tokenStream = new CommonTokenStream(tokenSource: lexer);
var parser = new hashParse(input: tokenStream);
var result = parser.compileUnit();
var visitor = new HashVisitor(tokenStream: tokenStream);
var visitResult = visitor.Visit(result);
}
}
public class HashVisitor : hashParseBaseVisitor<object>
{
private readonly CommonTokenStream tokenStream;
public HashVisitor(CommonTokenStream tokenStream)
{
this.tokenStream = tokenStream;
}
public override object VisitPreproc(hashParse.PreprocContext context)
{
;
var startSymbol = context.PreProcStart().Symbol;
var tokenIndex = startSymbol.TokenIndex;
var startLine = startSymbol.Line;
var previousTokens_reversed = tokenStream.GetTokens(0, tokenIndex - 1).Reverse();
var ok = true;
var allowedTypes = new[] { hashLex.RangeComment, hashLex.WS, };
foreach (var token in previousTokens_reversed)
{
if (token.Line < startLine)
break;
if (allowedTypes.Contains(token.Type) == false)
{
ok = false;
break;
}
;
}
if (!ok)
{
; // handle error
}
return base.VisitPreproc(context);
}
}
The lexer:
lexer grammar hashLex;
PreProcStart : Hash -> pushMode(PRE_PROC_MODE)
;
Identifier:
Identifier_
;
LParen : '(';
RParen : ')';
WS
: WS_-> channel(HIDDEN)
;
LineComment
: '//'
~('\r'|'\n')*
(LineBreak|EOF)
-> channel(HIDDEN)
;
RangeComment
: '/*'
.*?
'*/'
-> channel(HIDDEN)
;
mode PRE_PROC_MODE;
PreProcIdentifier : Identifier_;
PreProcHash : Hash;
PreProcEnd :
(EOF|LineBreak) -> popMode
;
PreProcWS : [ \t]+ -> channel(HIDDEN)
;
PreProcLParen : '(';
PreProcRParen : ')';
PreProcRangeComment
: '/*'
(~('\r' | '\n'))*?
'*/'
-> channel(HIDDEN)
;
fragment LineBreak
: '\r' '\n'?
| '\n'
;
fragment Identifier_:
[a-zA-Z]+
;
fragment Hash : '#'
;
fragment WS_
: [ \t\r\n]+
;
The parser:
parser grammar hashParse;
options { tokenVocab=hashLex; }
compileUnit
: (allKindOfStuff | preproc)*
EOF
;
preproc : PreProcStart .*? PreProcEnd
;
allKindOfStuff
: Identifier
| LParen
| RParen
;
The sample:
/*
can be preceded by multiline comment spreading across >1 lines
123 */ /* abc */# /* */define xyz(a) #a
/* def */# /* */define xyz(a) #a
some other code // #
illegal #define a b

Resources