Implementing my own DefaultTerminalConverters in Xtext to convert a terminal rule to Integer throws ClassCastException - dsl

I want to implement my own DefaultTerminalConverters class so that the terminal rule VALUE_TERMINAL is converted to an Integer instead of a String.
VALUE_TERMINAL from my grammar is:
terminal VALUE_TERMINAL:
( '0' .. '9' )+ ;
The code of my own converter class is:
import com.google.inject.Inject;
import org.eclipse.xtext.common.services.DefaultTerminalConverters;
import org.eclipse.xtext.conversion.IValueConverter;
import org.eclipse.xtext.conversion.ValueConverter;
import org.eclipse.xtext.conversion.impl.AbstractLexerBasedConverter;
import org.eclipse.xtext.nodemodel.INode;
public class MyLangValueConverter extends DefaultTerminalConverters {

    @Inject
    MyINTValueConverter myINTValueConverter;

    @ValueConverter(rule = "VALUE_TERMINAL")
    public IValueConverter<Integer> VALUE_TERMINAL() {
        return myINTValueConverter;
    }

    private static class MyINTValueConverter extends AbstractLexerBasedConverter<Integer> {
        @Override
        public Integer toValue(String string, INode node) {
            return new Integer(string);
        }

        @Override
        public String toString(Integer value) {
            return String.valueOf(value);
        }
    }
}
When I write something in my own DSL, I always get the error java.lang.Integer cannot be cast to java.lang.String when using VALUE_TERMINAL. What could be the problem?

The problem is the grammar:
terminal VALUE_TERMINAL:
( '0' .. '9' )+ ;
is short for
import "http://www.eclipse.org/emf/2002/Ecore" as ecore
...
terminal VALUE_TERMINAL returns ecore::EString:
( '0' .. '9' )+ ;
So you need to specify the returned datatype for the terminal rule explicitly, something like
terminal VALUE_TERMINAL returns ecore::EInt:
or
terminal VALUE_TERMINAL returns ecore::EIntegerObject:
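Besides fixing the return type, the converter class also has to be registered with the language. A minimal sketch of the usual Xtext wiring, assuming the generated runtime module for your language is called MyLangRuntimeModule (these class names are illustrative, not taken from the question):
import org.eclipse.xtext.conversion.IValueConverterService;

// Hypothetical runtime module of the language: point Xtext at the custom converter class.
public class MyLangRuntimeModule extends AbstractMyLangRuntimeModule {
    @Override
    public Class<? extends IValueConverterService> bindIValueConverterService() {
        return MyLangValueConverter.class;
    }
}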

Related

How to apply `@POJO` to classes via config script?

I have a few classes which I'd like to keep as POJOs. Manually annotating each of these would be troublesome, both for updating all current ones and for adding future such classes.
I have a SourceAwareCustomizer able to identify all these classes. However, I do not know how to apply @POJO via the config script.
I tried ast(POJO), and I would get an error:
Provided class doesn't look like an AST @interface
I dug into the code a bit and found that @POJO is not an AST transformation (it's not annotated with @GroovyASTTransformationClass).
Is there a way to apply @POJO, or any arbitrary annotation, to a class via the config script?
POJO is not an AST transformation.
Compare the POJO source to ToString (for example). In POJO, the @GroovyASTTransformationClass annotation is missing.
I can't make @POJO work without @CompileStatic.
So, here is my attempt with Groovy 4.0.1:
config.groovy
import org.codehaus.groovy.ast.ClassNode
import org.codehaus.groovy.ast.AnnotationNode
import groovy.transform.stc.POJO
import groovy.transform.CompileStatic
withConfig(configuration) {
    inline(phase: 'SEMANTIC_ANALYSIS') { Object... args ->
        if (args.size() > 2) {
            ClassNode cn = args[2]
            if (cn.getSuperClass().name == "java.lang.Object") {
                if (!cn.annotations.find { it.classNode.name == POJO.class.name }) {
                    cn.addAnnotation(new AnnotationNode(new ClassNode(POJO.class)))
                    // can't see how POJO could work without compile static in groovy 4.0.1
                    if (!cn.annotations.find { it.classNode.name == CompileStatic.class.name })
                        cn.addAnnotation(new AnnotationNode(new ClassNode(CompileStatic.class)))
                }
            }
            println "class = $cn"
            println "annotations = ${cn.getAnnotations()}"
        }
    }
}
A.groovy
class A {
    String id
}
compile command line:
groovyc --configscript config.groovy A.groovy
generated class
public class A {
    private String id;

    @Generated
    public A() {}

    @Generated
    public String getId() {
        return this.id;
    }

    @Generated
    public void setId(final String id) {
        this.id = id;
    }
}

Cucumber converting TypeRegistryConfiguration to ParameterType annotation JAVA

Since io.cucumber.core.api.TypeRegistry is @Deprecated, I have trouble declaring my parameter types; I have no idea how to transform them to @ParameterType.
I tried this
@ParameterType(value = ".*", name = "foo")
public String foo(String foo) {
    return "foobar";
}

@ParameterType("foo")
@When("I type {foo}")
public void iType(String string) {
    System.out.println(string);
}
The step recognises this parameter and it compiles, but I get the following error:
io.cucumber.java.InvalidMethodSignatureException: A @ParameterType annotated method must have one of these signatures:
* public Author parameterName(String all)
* public Author parameterName(String captureGroup1, String captureGroup2, ...ect )
* public Author parameterName(String... captureGroups)
at com.dsm.steps.RequestAccessSteps.withTheInformationIconContaining(java.lang.String)
Note: Author is an example of the class you want to convert captureGroups to
But honestly I don't understand what they are trying to say.
The fault was that I put @ParameterType("foo") above the step; that's not necessary and is what throws the error. It works perfectly fine otherwise.
So this works:
@ParameterType(value = ".*", name = "foo")
public String foo(String foo) {
    return "foobar";
}

@When("I type {foo}")
public void iType(String string) {
    System.out.println(string);
}
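As a side note on the error message quoted above: the Author in those signatures just means that a @ParameterType method may return any type you want the capture groups converted to, not only String. A small sketch with a hypothetical Author class (the regex, names and step text are made up for illustration; the annotations are io.cucumber.java.ParameterType and io.cucumber.java.en.When):
@ParameterType("([A-Za-z]+) ([A-Za-z]+)") // two capture groups -> two String parameters
public Author author(String firstName, String lastName) {
    return new Author(firstName, lastName); // Author is a hypothetical domain class
}

@When("the book is written by {author}")
public void theBookIsWrittenBy(Author author) {
    System.out.println(author);
}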

Unexpected behaviour for Groovy 'with' method - variable assignment silently failed

I have the following code:
import groovy.transform.ToString
@ToString(includeNames = true)
class Simple {
    String creditPoints
}

Simple simple = new Simple()
simple.with {
    creditPoints : "288"
}
println simple
Clearly, I made a mistake here with creditPoints : "288". It should have been creditPoints = "288".
I expected Groovy to fail at runtime, saying that I made a mistake and should have used creditPoints = "288", but clearly it did not.
Since it did not fail then what did Groovy do with the closure I created?
From the Groovy compiler's perspective, there is no mistake in your closure code. The compiler sees creditPoints : "288" as a labeled statement, which is a legal construct in the Groovy programming language. As the documentation says, a labeled statement does not add anything to the resulting bytecode, but it can be used, for instance, by AST transformations (the Spock Framework uses labels heavily).
It becomes clearer and easier to understand if you format the code to match the labeled-statement use case, e.g.
class Simple {
    String creditPoints

    static void main(String[] args) {
        Simple simple = new Simple()
        simple.with {
            creditPoints:
            "288"
        }
        println simple
    }
}
(NOTE: I put your script inside the main method body to show you its bytecode representation in the next section.)
Now that we know how the compiler sees this construct, let's take a look at what the final bytecode looks like. To do this we will decompile the .class file (I use IntelliJ IDEA for that: you simply open the .class file in IDEA and it decompiles it for you):
//
// Source code recreated from a .class file by IntelliJ IDEA
// (powered by Fernflower decompiler)
//
import groovy.lang.Closure;
import groovy.lang.GroovyObject;
import groovy.lang.MetaClass;
import groovy.transform.ToString;
import org.codehaus.groovy.runtime.DefaultGroovyMethods;
import org.codehaus.groovy.runtime.GeneratedClosure;
import org.codehaus.groovy.runtime.InvokerHelper;
@ToString
public class Simple implements GroovyObject {
private String creditPoints;
public Simple() {
MetaClass var1 = this.$getStaticMetaClass();
this.metaClass = var1;
}
public static void main(String... args) {
Simple simple = new Simple();
class _main_closure1 extends Closure implements GeneratedClosure {
public _main_closure1(Object _outerInstance, Object _thisObject) {
super(_outerInstance, _thisObject);
}
public Object doCall(Object it) {
return "288";
}
public Object call(Object args) {
return this.doCall(args);
}
public Object call() {
return this.doCall((Object)null);
}
public Object doCall() {
return this.doCall((Object)null);
}
}
DefaultGroovyMethods.with(simple, new _main_closure1(Simple.class, Simple.class));
DefaultGroovyMethods.println(Simple.class, simple);
Object var10000 = null;
}
public String toString() {
StringBuilder _result = new StringBuilder();
Boolean $toStringFirst = Boolean.TRUE;
_result.append("Simple(");
if ($toStringFirst == null ? false : $toStringFirst) {
Boolean var3 = Boolean.FALSE;
} else {
_result.append(", ");
}
if (this.getCreditPoints() == this) {
_result.append("(this)");
} else {
_result.append(InvokerHelper.toString(this.getCreditPoints()));
}
_result.append(")");
return _result.toString();
}
public String getCreditPoints() {
return this.creditPoints;
}
public void setCreditPoints(String var1) {
this.creditPoints = var1;
}
}
As you can see, the closure you used with the with method is represented as an inner _main_closure1 class. This class extends the Closure class and implements the GeneratedClosure interface. The body of the closure is encapsulated in the public Object doCall(Object it) method. This method only returns the "288" string, which is expected: the last statement of a closure becomes its return value by default. There is no labeled statement in the generated bytecode, which is also expected, as labels get stripped at the CANONICALIZATION phase of the Groovy compiler.

Antlr4 doesn't correctly recognize Unicode characters

I have a very simple grammar which tries to match 'é' to the token E_CODE.
I've tested it using the TestRig tool (with the -tokens option), but the parser can't correctly match it.
My input file was encoded in UTF-8 without BOM, and I used ANTLR version 4.4.
Could somebody else also check this? I got this output on my console:
line 1:0 token recognition error at: 'Ă'
grammar Unicode;
stat:EOF;
E_CODE: '\u00E9' | 'é';
I tested the grammar:
grammar Unicode;
stat: E_CODE* EOF;
E_CODE: '\u00E9' | 'é';
as follows:
UnicodeLexer lexer = new UnicodeLexer(new ANTLRInputStream("\u00E9é"));
UnicodeParser parser = new UnicodeParser(new CommonTokenStream(lexer));
System.out.println(parser.stat().getText());
and the following got printed to my console:
éé<EOF>
Tested with 4.2 and 4.3 (4.4 isn't in Maven Central yet).
EDIT
Looking at the source I see TestRig takes an optional -encoding param. Have you tried setting it?
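If you drive the lexer from code rather than through TestRig, you can also make the input encoding explicit when reading the file. A minimal sketch (requires ANTLR 4.7+ for CharStreams; the file name is just an example):
import java.nio.charset.StandardCharsets;
import org.antlr.v4.runtime.CharStream;
import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.CommonTokenStream;

// Read the input file as UTF-8 explicitly before handing it to the generated lexer
CharStream input = CharStreams.fromFileName("input.txt", StandardCharsets.UTF_8);
UnicodeLexer lexer = new UnicodeLexer(input);
UnicodeParser parser = new UnicodeParser(new CommonTokenStream(lexer));
System.out.println(parser.stat().getText());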
This is not an answer but a large comment.
I just hit a snag with Unicode, so I thought I would test this. It turned out I had wrongly encoded the input file, but here is the test code; everything is default and works extremely well in ANTLR 4.10.1. Maybe it is of some use:
grammar LetterNumbers;
text: WORD*;
WS: [ \t\r\n]+ -> skip ; // toss out whitespace
// The letters that return Character.LETTER_NUMBER to Character.getType(ch)
// The list: https://www.compart.com/en/unicode/category/Nl
// Roman Numerals are the best known here
WORD: LETTER_NUMBER+;
LETTER_NUMBER:
[\u16ee-\u16f0]|[\u2160-\u2182]|[\u2185-\u2188]
|'\u3007'
|[\u3021-\u3029]|[\u3038-\u303a]|[\ua6e6-\ua6ef];
And the JUnit5 test that goes with that:
package antlerization.minitest;
import antlrgen.minitest.LetterNumbersBaseListener;
import antlrgen.minitest.LetterNumbersLexer;
import antlrgen.minitest.LetterNumbersParser;
import org.antlr.v4.runtime.Lexer;
import org.antlr.v4.runtime.tree.TerminalNode;
import org.junit.jupiter.api.Test;
import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.tree.ParseTree;
import org.antlr.v4.runtime.tree.ParseTreeWalker;
import java.util.LinkedList;
import java.util.List;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.*;
public class MiniTest {
static class WordCollector extends LetterNumbersBaseListener {
public final List<String> collected = new LinkedList<>();
@Override
public void exitText(LetterNumbersParser.TextContext ctx) {
for (TerminalNode tn : ctx.getTokens(LetterNumbersLexer.WORD)) {
collected.add(tn.getText());
}
}
}
private static ParseTree stringToParseTree(String inString) {
Lexer lexer = new LetterNumbersLexer(CharStreams.fromString(inString));
CommonTokenStream tokens = new CommonTokenStream(lexer);
// "text" is the root of the grammar tree
// this returns a subclass of ParseTree: LetterNumbersParser.TextContext
return (new LetterNumbersParser(tokens)).text();
}
private static List<String> collectWords(ParseTree parseTree) {
WordCollector wc = new WordCollector();
(new ParseTreeWalker()).walk(wc, parseTree);
return wc.collected;
}
private static String joinForTest(List<String> list) {
return String.join(",",list);
}
private static String stringInToStringOut(String parseThis) {
return joinForTest(collectWords(stringToParseTree(parseThis)));
}
@Test
void unicodeCharsOneWord() {
String res = stringInToStringOut("ⅣⅢⅤⅢ");
assertThat(res,equalTo("ⅣⅢⅤⅢ"));
}
@Test
void escapesOneWord() {
String res = stringInToStringOut("\u2163\u2162\u2164\u2162");
assertThat(res,equalTo("ⅣⅢⅤⅢ"));
}
@Test
void unicodeCharsMultipleWords() {
String res = stringInToStringOut("ⅠⅡⅢ ⅣⅤⅥ ⅦⅧⅨ ⅩⅪⅫ ⅬⅭⅮⅯ");
assertThat(res,equalTo("ⅠⅡⅢ,ⅣⅤⅥ,ⅦⅧⅨ,ⅩⅪⅫ,ⅬⅭⅮⅯ"));
}
@Test
void unicodeCharsLetters() {
String res = stringInToStringOut("Ⅰ Ⅱ Ⅲ \n Ⅳ Ⅴ Ⅵ \n Ⅶ Ⅷ Ⅸ \n Ⅹ Ⅺ Ⅻ \n Ⅼ Ⅽ Ⅾ Ⅿ");
assertThat(res,equalTo("Ⅰ,Ⅱ,Ⅲ,Ⅳ,Ⅴ,Ⅵ,Ⅶ,Ⅷ,Ⅸ,Ⅹ,Ⅺ,Ⅻ,Ⅼ,Ⅽ,Ⅾ,Ⅿ"));
}
}
Your grammar file is not saved in UTF-8 format.
UTF-8 is the default format that ANTLR accepts for an input grammar file, according to Terence Parr's book.

Inheriting from own class (XMLParserRuleContext) instead of ParserRuleContext

I am using the 'visitor' pattern to generate XML from my parsed code. A typical context class looks like:
public static class On_dtmContext extends ParserRuleContext {
    public List<FieldContext> field() {
        return getRuleContexts(FieldContext.class);
    }
    public TerminalNode ON() { return getToken(SRC_REP_SCREENParser.ON, 0); }
    public On_dtm_headerContext on_dtm_header() {
        return getRuleContext(On_dtm_headerContext.class, 0);
    }
    .....
}
and I access the element in my visitor's callback function using RuleContext's 'getText' member function.
I would like to write a class inheriting from 'ParserRuleContext' and override 'getText' in order to replace characters like '<' or '>' with their XML escape sequences. Is there a way I can have my code generated so that the context classes inherit from my class, like this:
public static class On_dtmContext extends XMLParserRuleContext {
    public List<FieldContext> field() {
        return getRuleContexts(FieldContext.class);
    }
    public TerminalNode ON() { return getToken(SRC_REP_SCREENParser.ON, 0); }
    public On_dtm_headerContext on_dtm_header() {
        return getRuleContext(On_dtm_headerContext.class, 0);
    }
    .....
}
Thank you for your help!
Kind regards, wolf
Is there a reason why you are trying to extend the class, rather than creating a parser rule in your grammar to capture < and > so you can translate them as they occur?
The parser rules would look something like:
lessThan
: '<'
;
greaterThan
: '>'
;
At that point, you would have specific visitors for each of those terms and could translate them as you will.
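A minimal sketch of what such visitors could look like, assuming the rules above are added to the SRC_REP_SCREEN grammar and visitor generation is enabled (the class and method names follow ANTLR's generation conventions and are illustrative):
// Translate the markup-sensitive characters to XML entities in dedicated visitor methods
public class XmlEscapingVisitor extends SRC_REP_SCREENBaseVisitor<String> {
    @Override
    public String visitLessThan(SRC_REP_SCREENParser.LessThanContext ctx) {
        return "&lt;";
    }

    @Override
    public String visitGreaterThan(SRC_REP_SCREENParser.GreaterThanContext ctx) {
        return "&gt;";
    }
}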
