ANTLR4 fails to generate grammar

The call tool.process(grammar, false); produces a NullPointerException.
See my code below:
package antlr;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.antlr.v4.Tool;
import org.antlr.v4.tool.Grammar;
import org.antlr.v4.tool.ast.GrammarRootAST;
import org.junit.Test;
// License : Apache License Version 2.0 https://www.apache.org/licenses/LICENSE-2.0
/**
*
* @author Peter <peter@quantr.hk>
*/
public class TestDynamicParser {
@Test
public void testDynamicParser() throws Exception {
Tool tool = new Tool();
String content = new String(Files.readAllBytes(Paths.get(getClass().getResource("Hello.g4").toURI())));
GrammarRootAST ast = tool.parseGrammarFromString(content);
Grammar grammar = tool.createGrammar(ast);
tool.process(grammar, false);
}
}
Grammar:
grammar Hello;
r : 'hello' ID;
ID : [a-z]+ ;
WS : [ \t\r\n]+ -> skip ;
Error:
java.lang.NullPointerException
at antlr.TestDynamicParser.testDynamicParser(TestDynamicParser.java:23)
How can I prevent this error from occurring?
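One way to narrow down the NullPointerException is to guard each step before tool.process is reached. This is only a diagnostic sketch using the same Tool API as the test above; whether parseGrammarFromString or createGrammar can actually return null for this grammar is an assumption, not a confirmed cause:
Tool tool = new Tool();
String content = new String(Files.readAllBytes(Paths.get(getClass().getResource("Hello.g4").toURI())));
GrammarRootAST ast = tool.parseGrammarFromString(content);
if (ast == null) {
    // the grammar text did not parse; check the tool's error output
    throw new IllegalStateException("parseGrammarFromString returned null");
}
Grammar grammar = tool.createGrammar(ast);
if (grammar == null) {
    throw new IllegalStateException("createGrammar returned null");
}
tool.process(grammar, false);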

Related

Calling a custom keyword from another custom keyword in Katalon Studio

I have two custom keywords, located in the same package in a Katalon Studio project. I try to call one custom keyword from the other one. This code isn't working in this case:
CustomKeywords.'mypack.myclass.mymethod'()
The keyword which should be called:
package uploadFile
import static com.kms.katalon.core.checkpoint.CheckpointFactory.findCheckpoint
import static com.kms.katalon.core.testcase.TestCaseFactory.findTestCase
import static com.kms.katalon.core.testdata.TestDataFactory.findTestData
import static com.kms.katalon.core.testobject.ObjectRepository.findTestObject
import com.kms.katalon.core.annotation.Keyword
import com.kms.katalon.core.checkpoint.Checkpoint
import com.kms.katalon.core.cucumber.keyword.CucumberBuiltinKeywords as CucumberKW
import com.kms.katalon.core.mobile.keyword.MobileBuiltInKeywords as Mobile
import com.kms.katalon.core.model.FailureHandling
import com.kms.katalon.core.testcase.TestCase
import com.kms.katalon.core.testdata.TestData
import com.kms.katalon.core.testobject.TestObject
import com.kms.katalon.core.webservice.keyword.WSBuiltInKeywords as WS
import com.kms.katalon.core.webui.keyword.WebUiBuiltInKeywords as WebUI
import java.awt.Robot
import java.awt.Toolkit
import java.awt.datatransfer.StringSelection
import java.awt.event.KeyEvent
import com.kms.katalon.core.annotation.Keyword
import com.kms.katalon.core.testobject.TestObject
import com.kms.katalon.core.webui.keyword.WebUiBuiltInKeywords as WebUI
import internal.GlobalVariable
class upload2Files {
@Keyword
def upload(TestObject to, String filePath , String file , String file2) {
WebUI.click(to)
StringSelection ss = new StringSelection("\""+filePath+"\" " +"\""+ file +"\" "+ file2 );
Toolkit.getDefaultToolkit().getSystemClipboard().setContents(ss, null);
Robot robot = new Robot();
robot.keyPress(KeyEvent.VK_ENTER);
robot.keyRelease(KeyEvent.VK_ENTER);
robot.keyPress(KeyEvent.VK_CONTROL);
robot.keyPress(KeyEvent.VK_V);
robot.keyRelease(KeyEvent.VK_V);
robot.keyRelease(KeyEvent.VK_CONTROL)
robot.keyPress(KeyEvent.VK_ENTER);
robot.keyRelease(KeyEvent.VK_ENTER);
}
}
The other keyword, where I try to call it:
(new uploadFile.upload2Files()).upload(findTestObject('Object Repository/validateFile/input_originalFile'), (d_directory.toString() + '\\') + detachedTXT1, (d_directory.toString() + '\\') + detachedTXT2)
Error message:
org.codehaus.groovy.runtime.InvokerInvocationException: groovy.lang.MissingMethodException: No signature of method: uploadFile.upload2Files.upload() is applicable for argument types: (com.kms.katalon.core.testobject.TestObject, java.lang.String, java.lang.String) values
I will explain using the example "(new packagename.classname()).methodname()".
I have keyword1:
package closeAplication
import...
public class closeApp {
@Keyword
public void cmdAdbCloseApp(String ApplicationID){
String CMDclose = ('adb shell am force-stop ' + ApplicationID)
println ('This CMD Windows command will be executed: ' + CMDclose)
Runtime.getRuntime().exec(CMDclose)
}
}
I will use keyword1 in the test case:
def ApplicationID = (GlobalVariable.ApplicationIDds)
CustomKeywords.'closeAplication.closeApp.cmdAdbCloseApp'(ApplicationID)
I want to write another keyword2 and call keyword1 inside it:
package runAppInMobile
import ...
public class runAppClass {
@Keyword
public void runApp (String ApplicationID){
new closeAplication.closeApp().cmdAdbCloseApp(ApplicationID) // here is the call to keyword1 above
Mobile.startExistingApplication(ApplicationID)
}
}
(new packagename.classname()).methodname()

Implementing DefaultTerminalConverters in order to instantiate an Integer for a terminal rule in Xtext throws ClassCastException

I want to implement my own DefaultTerminalConverters class in order to instantiate an Integer for the terminal rule VALUE_TERMINAL.
VALUE_TERMINAL from my grammar is:
terminal VALUE_TERMINAL:
( '0' .. '9' )+ ;
The code of my own DefaultTerminalConverters subclass is:
import com.google.inject.Inject;
import org.eclipse.xtext.common.services.DefaultTerminalConverters;
import org.eclipse.xtext.conversion.IValueConverter;
import org.eclipse.xtext.conversion.ValueConverter;
import org.eclipse.xtext.conversion.impl.AbstractLexerBasedConverter;
import org.eclipse.xtext.nodemodel.INode;
public class MyLangValueConverter extends DefaultTerminalConverters {
@Inject MyINTValueConverter myINTValueConverter;
@ValueConverter(rule="VALUE_TERMINAL")
public IValueConverter<Integer> VALUE_TERMINAL() {
return myINTValueConverter;
}
private static class MyINTValueConverter extends AbstractLexerBasedConverter<Integer> {
@Override
public Integer toValue(String string, INode node) {
return new Integer(string);
}
@Override
public String toString(Integer value){
return String.valueOf(value);
}
}
}
When I write something in my own DSL, I always get the error java.lang.Integer cannot be cast to java.lang.String when VALUE_TERMINAL is used. What could be the problem?
The problem is the grammar:
terminal VALUE_TERMINAL:
( '0' .. '9' )+ ;
is short for
import "http://www.eclipse.org/emf/2002/Ecore" as ecore
...
terminal VALUE_TERMINAL returns ecore::EString:
( '0' .. '9' )+ ;
So you need to specify the returned datatype for the terminal rule explicitly, something like
terminal VALUE_TERMINAL returns ecore::EInt:
or
terminal VALUE_TERMINAL returns ecore::EIntegerObject:

ConfigurationException: Trigger class 'org.apache.cassandra.triggers.AuditTrigger' doesn't exist

I am creating a trigger using the example here, but it is not working at all and I am getting ConfigurationException: Trigger class 'org.apache.cassandra.triggers.AuditTrigger' doesn't exist.
Steps I followed to create the trigger:
1: I compiled my Java file using
javac -cp /CassandraTriggerExample/lib/cassandra-all-3.6.jar AuditTrigger.java
2: Jar creation:
jar -cvf trigger-example.jar AuditTrigger.class
3: I checked the content of my jar file:
unzip -l trigger-example.jar
4: Copied this jar file into:
cassandra_home/conf/triggers
5: Copied AuditTrigger.properties into:
cassandra_home/conf
6: Restarted the Cassandra server
7: ./nodetool -h localhost reloadtriggers
8: In system.log I can see the entry:
INFO [RMI TCP Connection(2)-127.0.0.1] 2018-07-22 22:15:25,827
CustomClassLoader.java:89 - Loading new jar
/Users/uname/cassandra/conf/triggers/trigger-example.jar
9: Now when I create my trigger using:
CREATE TRIGGER test1 ON test.test
USING 'org.apache.cassandra.triggers.AuditTrigger';
I am getting "ConfigurationException: Trigger class 'org.apache.cassandra.triggers.AuditTrigger' doesn't exist".
I think that the problem is that your jar isn't correctly packaged: if your class is named org.apache.cassandra.triggers.AuditTrigger, then it should be located at org/apache/cassandra/triggers/AuditTrigger.class inside the jar file.
See this documentation for a more detailed explanation of how classes are found.
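For illustration only, a packaging sequence that produces that layout could look like the following, assuming AuditTrigger.java declares package org.apache.cassandra.triggers (the classpath and jar name are taken from the question). Compiling with -d creates the package directories, and jarring from the directory that contains org/ preserves them inside the archive:
javac -cp /CassandraTriggerExample/lib/cassandra-all-3.6.jar -d . AuditTrigger.java
jar -cvf trigger-example.jar org/
unzip -l trigger-example.jar
The unzip listing should now show org/apache/cassandra/triggers/AuditTrigger.class rather than a top-level AuditTrigger.class.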
I had a similar issue. It could be because you copied the jar but did not reload or create the trigger. I got it resolved by following the checks below and executing the commands to reload and create the trigger.
Check
Ensure that the class is named org.apache.cassandra.triggers.AuditTrigger and that it is located under org/apache/cassandra/triggers/AuditTrigger.class inside the jar file.
CMD Command
Go to the bin folder of your Cassandra installation and run the nodetool reloadtriggers command as below.
C:\Cassandra\apache-cassandra-3.11.6\bin>nodetool reloadtriggers
Execute the statement below at the cqlsh prompt:
CREATE TRIGGER test1 ON test.test USING 'org.apache.cassandra.triggers.AuditTrigger';
Your trigger should now be available!
If the problem still persists, you can try restarting the server once to see if the trigger becomes available.
As an example, find below the code I used to publish a message to Kafka upon every insert into the Cassandra DB. You can modify the same approach for updates. I used JDK 1.8.0_251, apache-cassandra-3.11.7, kafka_2.13-2.6.0 and ZooKeeper 3.6.1.
/**
*
*/
package com.cass.kafka.insert.trigger;
import java.util.Collection;
import java.util.Collections;
import java.util.Iterator;
import java.util.Properties;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import org.apache.cassandra.config.ColumnDefinition;
import org.apache.cassandra.db.Mutation;
import org.apache.cassandra.db.partitions.Partition;
import org.apache.cassandra.db.rows.Cell;
import org.apache.cassandra.db.rows.Row;
import org.apache.cassandra.db.rows.Unfiltered;
import org.apache.cassandra.db.rows.UnfilteredRowIterator;
import org.apache.cassandra.triggers.ITrigger;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
/**
* @author Dinesh.Lomte
*
*/
public class InsertCassTriggerForKafkaPublish implements ITrigger {
private String topic;
private Producer<String, String> producer;
private ThreadPoolExecutor threadPoolExecutor;
/**
*
*/
public InsertCassTriggerForKafkaPublish() {
Thread.currentThread().setContextClassLoader(null);
topic = "test";
producer = new KafkaProducer<String, String>(getProps());
threadPoolExecutor = new ThreadPoolExecutor(4, 20, 30,
TimeUnit.SECONDS, new LinkedBlockingDeque<Runnable>());
}
/**
*
*/
@Override
public Collection<Mutation> augment(Partition partition) {
threadPoolExecutor.execute(() -> handleUpdate(partition));
return Collections.emptyList();
}
/**
*
* @param partition
*/
private void handleUpdate(Partition partition) {
if (!partition.partitionLevelDeletion().isLive()) {
return;
}
UnfilteredRowIterator it = partition.unfilteredIterator();
while (it.hasNext()) {
Unfiltered un = it.next();
Row row = (Row) un;
if (row.primaryKeyLivenessInfo().timestamp() != Long.MIN_VALUE) {
Iterator<Cell> cells = row.cells().iterator();
Iterator<ColumnDefinition> columns = row.columns().iterator();
while (cells.hasNext() && columns.hasNext()) {
ColumnDefinition columnDef = columns.next();
Cell cell = cells.next();
if ("payload_json".equals(columnDef.name.toString())) {
producer.send(new ProducerRecord<>(
topic, columnDef.type.getString(cell.value())));
break;
}
}
}
}
}
/**
*
* @return
*/
private Properties getProps() {
Properties properties = new Properties();
properties.put("bootstrap.servers", "localhost:9092");
properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
return properties;
}
}

How to print RuntimeVisibleAnnotations in Java ASM

I am new to ASM. I have a class file that contains runtime-visible annotations on its methods. I want to parse this class file and select annotations according to specific criteria. I looked into the ASM documentation and tried visibleAnnotations, but I can't seem to print the list of annotations on a method, which I can see in my class file.
My code is as follows:
import java.io.FileInputStream;
import java.io.InputStream;
import java.util.Iterator;
import org.objectweb.asm.tree.AnnotationNode;
import org.objectweb.asm.tree.ClassNode;
import org.objectweb.asm.tree.MethodNode;
import org.objectweb.asm.ClassReader;
public class ByteCodeParser {
public static void main(String[] args) throws Exception{
InputStream in=new FileInputStream("sample.class");
ClassReader cr=new ClassReader(in);
ClassNode classNode=new ClassNode();
//ClassNode is a ClassVisitor
cr.accept(classNode, 0);
//
Iterator<MethodNode> i = classNode.methods.iterator();
while(i.hasNext()){
MethodNode mn = i.next();
System.out.println(mn.name+ "" + mn.desc);
System.out.println(mn.visibleAnnotations);
}
}
}
The output is:
<clinit>()V
null
<init>()V
null
MyRandomFunction1()V
[org.objectweb.asm.tree.AnnotationNode#5674cd4d]
MyRandomFunction2()V
[org.objectweb.asm.tree.AnnotationNode#63961c42]
MyRandomFunction1 and MyRandomFunction2 have annotations, but I can't make sense of [org.objectweb.asm.tree.AnnotationNode#5674cd4d].
I solved this issue myself: I had to iterate over the annotations, which I didn't realize initially.
if (mn.visibleAnnotations != null) {
Iterator<AnnotationNode> j = mn.visibleAnnotations.iterator();
while (j.hasNext()) {
AnnotationNode an=j.next();
System.out.println(an.values);
}
}
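To get readable output instead of the default AnnotationNode toString, you can also print the annotation descriptor and walk the name/value pairs. A small sketch building on the loop above, using ASM's tree API (desc holds the annotation type descriptor, values holds alternating name/value entries and may be null):
if (mn.visibleAnnotations != null) {
    for (AnnotationNode an : mn.visibleAnnotations) {
        // e.g. "Lcom/example/MyAnnotation;"
        System.out.println("annotation type: " + an.desc);
        if (an.values != null) {
            // values is a flat list: name1, value1, name2, value2, ...
            for (int k = 0; k < an.values.size(); k += 2) {
                System.out.println("  " + an.values.get(k) + " = " + an.values.get(k + 1));
            }
        }
    }
}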

ANTLR4 doesn't correctly recognize Unicode characters

I have a very simple grammar which tries to match 'é' to the token E_CODE.
I've tested it using the TestRig tool (with the -tokens option), but the parser can't correctly match it.
My input file was encoded in UTF-8 without a BOM, and I used ANTLR version 4.4.
Could somebody else also check this? I got this output on my console:
line 1:0 token recognition error at: 'Ă'
grammar Unicode;
stat:EOF;
E_CODE: '\u00E9' | 'é';
I tested the grammar:
grammar Unicode;
stat: E_CODE* EOF;
E_CODE: '\u00E9' | 'é';
as follows:
UnicodeLexer lexer = new UnicodeLexer(new ANTLRInputStream("\u00E9é"));
UnicodeParser parser = new UnicodeParser(new CommonTokenStream(lexer));
System.out.println(parser.stat().getText());
and the following got printed to my console:
éé<EOF>
Tested with 4.2 and 4.3 (4.4 isn't in Maven Central yet).
EDIT
Looking at the source I see TestRig takes an optional -encoding param. Have you tried setting it?
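For example, assuming the usual grun alias for TestRig (the fully qualified class name differs between ANTLR versions) and a hypothetical input file input.txt, the invocation with the encoding set explicitly would look like:
grun Unicode stat -tokens -encoding UTF-8 input.txt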
This is not an answer but a large comment.
I just hit a snag with Unicode, so I thought I would test this. It turned out I had wrongly encoded the input file, but here is the test code; everything is default and works extremely well in ANTLR 4.10.1. Maybe it is of some use:
grammar LetterNumbers;
text: WORD*;
WS: [ \t\r\n]+ -> skip ; // toss out whitespace
// The letters that return Character.LETTER_NUMBER to Character.getType(ch)
// The list: https://www.compart.com/en/unicode/category/Nl
// Roman Numerals are the best known here
WORD: LETTER_NUMBER+;
LETTER_NUMBER:
[\u16ee-\u16f0]|[\u2160-\u2182]|[\u2185-\u2188]
|'\u3007'
|[\u3021-\u3029]|[\u3038-\u303a]|[\ua6e6-\ua6ef];
And the JUnit5 test that goes with that:
package antlerization.minitest;
import antlrgen.minitest.LetterNumbersBaseListener;
import antlrgen.minitest.LetterNumbersLexer;
import antlrgen.minitest.LetterNumbersParser;
import org.antlr.v4.runtime.Lexer;
import org.antlr.v4.runtime.tree.TerminalNode;
import org.junit.jupiter.api.Test;
import org.antlr.v4.runtime.CharStreams;
import org.antlr.v4.runtime.CommonTokenStream;
import org.antlr.v4.runtime.tree.ParseTree;
import org.antlr.v4.runtime.tree.ParseTreeWalker;
import java.util.LinkedList;
import java.util.List;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.*;
public class MiniTest {
static class WordCollector extends LetterNumbersBaseListener {
public final List<String> collected = new LinkedList<>();
@Override
public void exitText(LetterNumbersParser.TextContext ctx) {
for (TerminalNode tn : ctx.getTokens(LetterNumbersLexer.WORD)) {
collected.add(tn.getText());
}
}
}
private static ParseTree stringToParseTree(String inString) {
Lexer lexer = new LetterNumbersLexer(CharStreams.fromString(inString));
CommonTokenStream tokens = new CommonTokenStream(lexer);
// "text" is the root of the grammar tree
// this returns a subclass of ParseTree: LetterNumbersParser.TextContext
return (new LetterNumbersParser(tokens)).text();
}
private static List<String> collectWords(ParseTree parseTree) {
WordCollector wc = new WordCollector();
(new ParseTreeWalker()).walk(wc, parseTree);
return wc.collected;
}
private static String joinForTest(List<String> list) {
return String.join(",",list);
}
private static String stringInToStringOut(String parseThis) {
return joinForTest(collectWords(stringToParseTree(parseThis)));
}
@Test
void unicodeCharsOneWord() {
String res = stringInToStringOut("ⅣⅢⅤⅢ");
assertThat(res,equalTo("ⅣⅢⅤⅢ"));
}
@Test
void escapesOneWord() {
String res = stringInToStringOut("\u2163\u2162\u2164\u2162");
assertThat(res,equalTo("ⅣⅢⅤⅢ"));
}
@Test
void unicodeCharsMultipleWords() {
String res = stringInToStringOut("ⅠⅡⅢ ⅣⅤⅥ ⅦⅧⅨ ⅩⅪⅫ ⅬⅭⅮⅯ");
assertThat(res,equalTo("ⅠⅡⅢ,ⅣⅤⅥ,ⅦⅧⅨ,ⅩⅪⅫ,ⅬⅭⅮⅯ"));
}
@Test
void unicodeCharsLetters() {
String res = stringInToStringOut("Ⅰ Ⅱ Ⅲ \n Ⅳ Ⅴ Ⅵ \n Ⅶ Ⅷ Ⅸ \n Ⅹ Ⅺ Ⅻ \n Ⅼ Ⅽ Ⅾ Ⅿ");
assertThat(res,equalTo("Ⅰ,Ⅱ,Ⅲ,Ⅳ,Ⅴ,Ⅵ,Ⅶ,Ⅷ,Ⅸ,Ⅹ,Ⅺ,Ⅻ,Ⅼ,Ⅽ,Ⅾ,Ⅿ"));
}
}
Your grammar file is not saved in UTF-8 format.
UTF-8 is the default format that ANTLR accepts for input grammar files, according to Terence Parr's book.
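If re-saving the grammar file as UTF-8 is not convenient, the ANTLR tool also accepts an -encoding option for the grammar file itself. A hedged example, with the jar name only an assumption about your installation:
java -jar antlr-4.4-complete.jar -encoding UTF-8 Unicode.g4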
