java.io.InvalidClassException: com.pingcap.tikv.region.TiRegion - Local class incompatible - tidb

I want to connect my Spark cluster to TiDB via TiSpark, but I ran into a problem when running my Spark application; this error occurs:
java.io.InvalidClassException: com.pingcap.tikv.region.TiRegion; local class incompatible: stream classdesc serialVersionUID = -3091715739322916126, local class serialVersionUID = -3556238418089320368
I set up a TiDB cluster following the guide at https://pingcap.com/docs/v3.0/how-to/get-started/deploy-tidb-from-binary/
After that, I followed the guide at https://pingcap.com/docs/v3.0/reference/tispark/ to download tispark-core-2.2.0-SNAPSHOT-jar-with-dependencies.jar and copied it to the jars folder of my Spark installation.
I also configured:
spark.tispark.pd.addresses 127.0.0.1:2379
spark.sql.extensions org.apache.spark.sql.TiExtensions
Here is my pom file:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.vng</groupId>
<artifactId>testlan102</artifactId>
<version>1.0-SNAPSHOT</version>
<inceptionYear>2019</inceptionYear>
<properties>
<scala.version>2.11.12</scala.version>
<spark.version>2.4.3</spark.version>
</properties>
<repositories>
<repository>
<id>scala-tools.org</id>
<name>Scala-Tools Maven2 Repository</name>
<url>http://scala-tools.org/repo-releases</url>
</repository>
</repositories>
<pluginRepositories>
<pluginRepository>
<id>scala-tools.org</id>
<name>Scala-Tools Maven2 Repository</name>
<url>http://scala-tools.org/repo-releases</url>
</pluginRepository>
</pluginRepositories>
<dependencies>
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>${scala.version}</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.11</artifactId>
<version>2.4.3</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.11</artifactId>
<version>2.4.3</version>
</dependency>
<dependency>
<groupId>org.scala-lang.modules</groupId>
<artifactId>scala-xml_2.11</artifactId>
<version>1.2.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming_2.11</artifactId>
<version>2.4.3</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.kafka</groupId>
<artifactId>kafka-streams</artifactId>
<version>2.3.0</version>
</dependency>
<dependency>
<groupId>com.pingcap.tispark</groupId>
<artifactId>tispark-core</artifactId>
<version>2.1.1-spark_2.4</version>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>5.1.37</version>
</dependency>
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>2.11.12</version>
</dependency>
</dependencies>
<build>
<sourceDirectory>src/main/scala</sourceDirectory>
<plugins>
<plugin>
<groupId>org.scala-tools</groupId>
<artifactId>maven-scala-plugin</artifactId>
<executions>
<execution>
<goals>
<goal>compile</goal>
</goals>
</execution>
</executions>
<configuration>
<scalaVersion>${scala.version}</scalaVersion>
<args>
<arg>-target:jvm-1.5</arg>
</args>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-eclipse-plugin</artifactId>
<configuration>
<downloadSources>true</downloadSources>
<buildcommands>
<buildcommand>ch.epfl.lamp.sdt.core.scalabuilder</buildcommand>
</buildcommands>
<additionalProjectnatures>
<projectnature>ch.epfl.lamp.sdt.core.scalanature</projectnature>
</additionalProjectnatures>
<classpathContainers>
<classpathContainer>org.eclipse.jdt.launching.JRE_CONTAINER</classpathContainer>
<classpathContainer>ch.epfl.lamp.sdt.launching.SCALA_CONTAINER</classpathContainer>
</classpathContainers>
</configuration>
</plugin>
</plugins>
</build>
<reporting>
<plugins>
<plugin>
<groupId>org.scala-tools</groupId>
<artifactId>maven-scala-plugin</artifactId>
<configuration>
<scalaVersion>${scala.version}</scalaVersion>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-jar-plugin</artifactId>
<configuration>
<archive>
<manifest>
<addClasspath>true</addClasspath>
<mainClass>fully.qualified.MainClass</mainClass>
</manifest>
</archive>
</configuration>
</plugin>
<plugin>
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<descriptorRefs>
<descriptorRef>jar-with-dependencies</descriptorRef>
</descriptorRefs>
</configuration>
</plugin>
</plugins>
</reporting>
</project>
My Spark Session is:
val _spark = SparkSession.builder()
.master("spark://127.0.0.1:7077")
.config("spark.tispark.pd.addresses", "127.0.0.1:2379")
.config("spark.sql.extensions","org.apache.spark.sql.TiExtensions")
.appName("SparkApp")
.getOrCreate()
When I run a simple query against the database:
_spark.sql("use locdb")
val df = _spark.sql("select * from bang")
df.show()
I got an error:
java.io.InvalidClassException: com.pingcap.tikv.region.TiRegion; local class incompatible: stream classdesc serialVersionUID = -3091715739322916126, local class serialVersionUID = -3556238418089320368
My full log is here:
https://gist.github.com/lploc94/bb6bf9db14c030ee123630f6362f6160
I think the reason is that I am using TiSpark 2.1.1-spark_2.4 in the Maven pom file, but the TiSpark jar I downloaded and copied to the jars folder is 2.2.0. However, I can't find any other version of the TiSpark jar, such as tispark-core-2.1.1-SNAPSHOT-jar-with-dependencies.jar.

I'm a developer of TiSpark.
Yes, your educated guess is correct :). There are two different versions of the TiSpark jar involved in your run, which caused the problem. Version 2.2 (in your cluster environment) has not been officially released to the Maven repository yet.
Since you already have the TiSpark jar deployed in your cluster, you can simply remove the TiSpark dependency from your pom. In most cases you don't need any special API from TiSpark unless you are using an older version (< 2.0), and you can still query TiDB directly.
Alternatively, you can remove all TiSpark jars from your cluster environment and rely on the TiSpark dependency in your pom (if so, please pack it with dependencies).
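If you go the second route, a minimal sketch of what "pack it with dependencies" could look like, assuming the maven-assembly-plugin is moved from <reporting> into <build> (in the POM above it sits under <reporting>, which only configures site reports and never runs during package); the main class is still a placeholder:
<build>
<plugins>
<plugin>
<artifactId>maven-assembly-plugin</artifactId>
<configuration>
<descriptorRefs>
<descriptorRef>jar-with-dependencies</descriptorRef>
</descriptorRefs>
<archive>
<manifest>
<mainClass>fully.qualified.MainClass</mainClass>
</manifest>
</archive>
</configuration>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>single</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>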

Related

Spark Session Error: java.lang.IllegalArgumentException: requirement failed: Can only call getServletHandlers on a running MetricsSystem

I am unable to build a SparkSession in Scala without an error. I am using Maven as my build tool. I have Spark 3.3.0 installed locally, and I also have an Azure Databricks 3.3.0 cluster. Whenever I pass the Databricks master URL, it bombs out with this error:
Error: java.lang.IllegalArgumentException: requirement failed: Can only call getServletHandlers on a running MetricsSystem
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>sample</groupId>
<artifactId>sample</artifactId>
<version>1.0-SNAPSHOT</version>
<properties>
<spark.version>3.3.0</spark.version>
<scala.version.major>2.12</scala.version.major>
<scala.version.minor>12</scala.version.minor>
<scala.version>${scala.version.major}.${scala.version.minor}</scala.version>
</properties>
<dependencies>
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>${scala.version}</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_${scala.version.major}</artifactId>
<version>${spark.version}</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_${scala.version.major}</artifactId>
<version>${spark.version}</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming_${scala.version.major}</artifactId>
<version>${spark.version}</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-mllib_${scala.version.major}</artifactId>
<version>${spark.version}</version>
</dependency>
<dependency>
<groupId>org.jmockit</groupId>
<artifactId>jmockit</artifactId>
<version>1.34</version>
<scope>test</scope>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>net.alchim31.maven</groupId>
<artifactId>scala-maven-plugin</artifactId>
<version>3.2.2</version>
<executions>
<execution>
<goals>
<goal>compile</goal>
<goal>testCompile</goal>
</goals>
</execution>
</executions>
<configuration>
<scalaVersion>${scala.version}</scalaVersion>
</configuration>
</plugin>
<plugin>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.3</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
</plugins>
</build>
</project>
package sample
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
object SparkMain extends App {
val sparkConf = new SparkConf()
.setMaster("local[1]")
.setAppName("ScalaSparkDF-Test")
.set("spark.driver.allowMultipleContexts", "false")
.set("spark.ui.enabled", "false")
val spark = SparkSession.builder()
.config(sparkConf)
.getOrCreate()
val df = spark.read.option("header","true")
.csv("/Users/")
}

spark-submit error while running kafka spark streaming jar

I am trying to run my Kafka Spark Streaming application using spark-submit. It's a Java Maven project, and I have created a fat jar using the assembly plugin. I am trying to execute that jar using spark-submit, but it fails with the error below. My pom.xml dependencies are as follows.
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>consumer</groupId>
<artifactId>consumer</artifactId>
<version>0.0.1-SNAPSHOT</version>
<properties>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
</properties>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.22.1</version>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-assembly-plugin</artifactId>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>single</goal>
</goals>
<configuration>
<archive>
<manifest>
<mainClass>
consumer.consumer
</mainClass>
</manifest>
</archive>
<descriptorRefs>
<descriptorRef>jar-with-dependencies</descriptorRef>
</descriptorRefs>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
<dependencies>
<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-streaming-kafka-0-10 -->
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
<version>2.4.6</version>
</dependency>
<!-- https://mvnrepository.com/artifact/com.influxdb/influxdb-client-java -->
<dependency>
<groupId>com.influxdb</groupId>
<artifactId>influxdb-client-java</artifactId>
<version>1.11.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/com.influxdb/influxdb-client-core -->
<dependency>
<groupId>com.influxdb</groupId>
<artifactId>influxdb-client-core</artifactId>
<version>1.11.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-streaming -->
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming_2.11</artifactId>
<version>2.4.6</version>
<scope>provided</scope>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-core -->
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.11</artifactId>
<version>2.4.6</version>
<scope>provided</scope>
</dependency>
<!-- https://mvnrepository.com/artifact/org.scala-lang/scala-library -->
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>2.11.12</version>
</dependency>
</dependencies>
</project>
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/kafka/clients/consumer/Consumer
at org.apache.spark.streaming.kafka010.ConsumerStrategies$.Subscribe(ConsumerStrategy.scala:299)
at org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe(ConsumerStrategy.scala)
at consumer.consumer.consume(consumer.java:156)
at consumer.consumer.main(consumer.java:80)
Tried giving all dependencies on the classpath using --jars in spark-submit, but no luck.
Tried lowering versions for Spark, Kafka, and Scala, but still getting the same error.
Tried using the same Scala version for Kafka and Scala, but no luck.
Tried using the same versions as the Kafka and Scala installation jars, but couldn't fix it.
I am using spark-2.4.6-bin-hadoop2.7 and kafka_2.13-2.6.0 on a standalone machine.
Is there anything I am missing here? I tried implementing answers from similar questions as well, but it's still erroring out. I would appreciate any help with this. Thanks!
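For reference, the spark-submit invocation with --jars mentioned above would look something like this (paths and versions are illustrative; the missing org.apache.kafka.clients.consumer.Consumer class is provided by the kafka-clients jar):
spark-submit --class consumer.consumer --master local[*] --jars /path/to/kafka-clients-2.6.0.jar,/path/to/spark-streaming-kafka-0-10_2.11-2.4.6.jar target/consumer-0.0.1-SNAPSHOT-jar-with-dependencies.jar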

Apache Beam Spark/Flink runner not getting executed in EMR (access files from GCS)

I have an Apache Beam pipeline to index some data into Elasticsearch. I was trying to use the Spark or Flink runner to run the job in AWS EMR. When I ran the job on a stand-alone Spark in a local setup, the pipeline worked with source files on the local disk; however, when I read the files from GCS it does not work. The same happens when I run it in the EMR cluster.
The configs that I set in the Hadoop core-site.xml as EMR config:
{
"Classification": "core-site",
"Properties": {
"fs.gs.impl": "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem",
"fs.AbstractFileSystem.gs.impl": "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS",
"fs.gs.project.id": "data-warehouse",
"google.cloud.auth.service.account.enable": "true",
"fs.gs.auth.service.account.json.keyfile": "/home/hadoop/utils/key.json"
}
}
Also, the GCS connector jar is on the Spark jar path and the Hadoop jar path.
The Maven pom file for the pipeline:
<project xmlns="http://maven.apache.org/POM/4.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.company.beam</groupId>
<artifactId>IndexToEs</artifactId>
<version>1.0-SNAPSHOT</version>
<properties>
<beam.version>2.22.0</beam.version>
<maven-compiler-plugin.version>3.7.0</maven-compiler-plugin.version>
<maven-exec-plugin.version>1.6.0</maven-exec-plugin.version>
<slf4j.version>1.7.25</slf4j.version>
</properties>
<repositories>
<repository>
<id>apache.snapshots</id>
<name>Apache Development Snapshot Repository</name>
<url>https://repository.apache.org/content/repositories/snapshots/</url>
<releases>
<enabled>false</enabled>
</releases>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
</repositories>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>${maven-compiler-plugin.version}</version>
<configuration>
<source>8</source>
<target>8</target>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>3.2.3</version>
<executions>
<execution>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<createDependencyReducedPom>false</createDependencyReducedPom>
<transformers>
<transformer
implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
<transformer
implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
<mainClass>com.company.beam.IndexToEs</mainClass>
</transformer>
</transformers>
<filters>
<filter>
<artifact>*:*</artifact>
<excludes>
<exclude>META-INF/*.SF</exclude>
<exclude>META-INF/*.DSA</exclude>
<exclude>META-INF/*.RSA</exclude>
</excludes>
</filter>
</filters>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
<pluginManagement>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<version>${maven-exec-plugin.version}</version>
<configuration>
<cleanupDaemonThreads>false</cleanupDaemonThreads>
</configuration>
</plugin>
</plugins>
</pluginManagement>
</build>
<dependencies>
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-sdks-java-core</artifactId>
<version>${beam.version}</version>
</dependency>
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-sdks-java-io-elasticsearch</artifactId>
<version>${beam.version}</version>
</dependency>
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-sdks-java-io-google-cloud-platform</artifactId>
<version>${beam.version}</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.beam/beam-sdks-java-extensions-google-cloud-platform-core -->
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-sdks-java-extensions-google-cloud-platform-core</artifactId>
<version>${beam.version}</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.beam/beam-runners-google-cloud-dataflow-java -->
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-runners-google-cloud-dataflow-java</artifactId>
<version>${beam.version}</version>
</dependency>
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-runners-direct-java</artifactId>
<version>${beam.version}</version>
<!-- <scope>runtime</scope>-->
</dependency>
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-runners-spark</artifactId>
<version>${beam.version}</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.beam/beam-runners-flink -->
<dependency>
<groupId>org.apache.beam</groupId>
<artifactId>beam-runners-flink_2.11</artifactId>
<version>2.16.0</version>
</dependency>
<dependency>
<groupId>com.google.cloud.bigdataoss</groupId>
<artifactId>gcs-connector</artifactId>
<version>hadoop2-1.9.17</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-core -->
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.11</artifactId>
<version>2.1.3</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming_2.11</artifactId>
<version>2.1.3</version>
</dependency>
<!-- slf4j API frontend binding with JUL backend -->
<!-- <dependency>-->
<!-- <groupId>org.slf4j</groupId>-->
<!-- <artifactId>slf4j-api</artifactId>-->
<!-- <version>${slf4j.version}</version>-->
<!-- </dependency>-->
<!-- <dependency>-->
<!-- <groupId>org.slf4j</groupId>-->
<!-- <artifactId>slf4j-jdk14</artifactId>-->
<!-- <version>${slf4j.version}</version>-->
<!-- </dependency>-->
</dependencies>
</project>
There is no error, but EMR shows the task as completed while the pipeline has not run.
I could not figure out whether it is an Apache Beam problem or a cluster configuration problem.
I figured out the issue. The Apache Beam SDK uses gsutil to access the GCS files. As per the Flink documentation, Hadoop connectors are responsible for any other file system access, but in the case of Apache Beam with the Flink runner the data is read using gsutil and fed downstream. So I installed the Google Cloud SDK and activated the service account.
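For reference, activating the service account with the Google Cloud SDK looks roughly like this (using the key file path from the core-site config above; the account is read from the key file):
gcloud auth activate-service-account --key-file=/home/hadoop/utils/key.json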

No sources to compile with maven-compiler-plugin:3.5.1:compile on a Linux system

I am trying to run my Selenium tests from a Linux system where Jenkins is configured. But on running the job, I am getting a "No sources to compile" message in the maven-compiler-plugin:3.5.1:compile section.
Note: it is a Maven project.
pom.xml content:
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>Aspire_TA_Aspire_Generic_Framework_Java</groupId>
<artifactId>GenericFramework_Java</artifactId>
<version>1.0.0</version>
<packaging>jar</packaging>
<name>Aspire TA Aspire Generic Framework - Java</name>
<url>http://repo.maven.apache.org/maven2</url>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<maven.compiler.source>1.8</maven.compiler.source>
<maven.compiler.target>1.8</maven.compiler.target>
</properties>
<dependencies>
<dependency>
<groupId>org.seleniumhq.selenium</groupId>
<artifactId>selenium-java</artifactId>
<version>2.53.1</version>
</dependency>
<dependency>
<groupId>org.testng</groupId>
<artifactId>testng</artifactId>
<version>6.9.4</version>
</dependency>
<dependency>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-clean-plugin</artifactId>
<version>3.0.0</version>
<type>maven-plugin</type>
</dependency>
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.5.1</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
</configuration>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-surefire-plugin</artifactId>
<version>2.19.1</version>
<inherited>true</inherited>
<configuration>
<suiteXmlFiles>
<suiteXmlFile>testng.xml</suiteXmlFile>
</suiteXmlFiles>
<!-- <systemPropertyVariables>
<url>${project.url}</url>
<site>${site}</site>
</systemPropertyVariables> -->
</configuration>
</plugin>
</plugins>
</build>
<repositories>
<!-- To directly pointing the saucelabs repository -->
<repository>
<id>saucelabs-repository</id>
<url>https://repository-saucelabs.forge.cloudbees.com/release</url>
<releases>
<enabled>true</enabled>
</releases>
<snapshots>
<enabled>true</enabled>
</snapshots>
</repository>
<!-- To directly pointing the soapui maven2 repository -->
<repository>
<id>smartbear-sweden-plugin-repository</id>
<url>http://www.soapui.org/repository/maven2/</url>
</repository>
</repositories>
Below is my Directory structure
Linux is a case-sensitive OS by default, so you should rename the 'src/test/Java' folder to 'src/test/java'. Also, I don't see a 'src/main/java' folder, so Maven has nothing to compile. Alternatively, you can use the following configuration in your pom.xml file:
<build>
...
<sourceDirectory>src/main/java</sourceDirectory>
<testSourceDirectory>src/test/Java</testSourceDirectory>
...
</build>

java.lang.ClassCastException: Netty failure on jar execution on Flink

I have a Flink jar that sinks a datastream of a serializable datatype to Elasticsearch and Cassandra, with behaviour that differs from the stand-alone context.
I have read about the Netty conflict with the Flink process and I excluded it from the pom file, but it is still included in the package.
Any suggestions?
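For reference, one way to see which dependency is still pulling Netty in is Maven's dependency tree, filtered on the io.netty group used in the exclusions below:
mvn dependency:tree -Dincludes=io.netty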
This is the exception:
java.lang.ClassCastException: io.netty.channel.epoll.EpollEventLoopGroup cannot be cast to io.netty.channel.EventLoopGroup
at com.datastax.driver.core.NettyUtil.newEventLoopGroupInstance(NettyUtil.java:134)
at com.datastax.driver.core.NettyOptions.eventLoopGroup(NettyOptions.java:99)
at com.datastax.driver.core.Connection$Factory.<init>(Connection.java:774)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1446)
at com.datastax.driver.core.Cluster.init(Cluster.java:159)
at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:330)
at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:305)
at com.datastax.driver.core.Cluster.connect(Cluster.java:247)
at it.almaviva.wtf.mms.integratemobilitystatusevent.repository.cassandra.ScheduledEventRepositoryImpl.findProgrammed(ScheduledEventRepositoryImpl.java:51)
at it.almaviva.wtf.mms.integratemobilitystatusevent.transformation.ObservedEventDelayProcessFunction.loadScheduledEvents(ObservedEventDelayProcessFunction.java:368)
at it.almaviva.wtf.mms.integratemobilitystatusevent.transformation.ObservedEventDelayProcessFunction.processElement(ObservedEventDelayProcessFunction.java:64)
at it.almaviva.wtf.mms.integratemobilitystatusevent.transformation.ObservedEventDelayProcessFunction.processElement(ObservedEventDelayProcessFunction.java:1)
at org.apache.flink.streaming.api.operators.KeyedProcessOperator.processElement(KeyedProcessOperator.java:94)
at org.apache.flink.streaming.runtime.io.StreamInputProcessor.processInput(StreamInputProcessor.java:207)
at org.apache.flink.streaming.runtime.tasks.OneInputStreamTask.run(OneInputStreamTask.java:69)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:264)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:718)
at java.lang.Thread.run(Thread.java:748)
Suppressed: java.lang.NullPointerException
at com.datastax.driver.core.Cluster$Manager.close(Cluster.java:1676)
at com.datastax.driver.core.Cluster$Manager.access$200(Cluster.java:1354)
at com.datastax.driver.core.Cluster.closeAsync(Cluster.java:566)
at com.datastax.driver.core.Cluster.close(Cluster.java:578)
at it.almaviva.wtf.mms.integratemobilitystatusevent.repository.cassandra.ScheduledEventRepositoryImpl.findProgrammed(ScheduledEventRepositoryImpl.java:68)
This is the pom file:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>it.almaviva.wtf.mms</groupId>
<artifactId>mms-integretedmobilitystatusevent-flink-process</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>Passing Event Process</name>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<flink.version>1.4.0</flink.version>
<slf4j.version>1.7.7</slf4j.version>
<log4j.version>1.2.17</log4j.version>
<scala.binary.version>2.11</scala.binary.version>
<junit.version>4.12</junit.version>
<cassandra.version>3.2.0</cassandra.version>
<flink.connector.elastic.version>1.3.1</flink.connector.elastic.version>
<thyco.compiler.version>0.21.0</thyco.compiler.version>
<jackson.version>2.9.4</jackson.version>
<cassandra.unit.version>3.3.0.2</cassandra.unit.version>
</properties>
<repositories>
<repository>
<id>apache.snapshots</id>
<name>Apache Development Snapshot Repository</name>
<url>https://repository.apache.org/content/repositories/snapshots/</url>
<releases>
<enabled>false</enabled>
</releases>
<snapshots>
</snapshots>
</repository>
</repositories>
<!-- Execute "mvn clean package -Pbuild-jar" to build a jar file out of
this project! How to use the Flink Quickstart pom: a) Adding new dependencies:
You can add dependencies to the list below. Please check if the maven-shade-plugin
below is filtering out your dependency and remove the exclude from there.
b) Build a jar for running on the cluster: There are two options for creating
a jar from this project b.1) "mvn clean package" -> this will create a fat
jar which contains all dependencies necessary for running the jar created
by this pom in a cluster. The "maven-shade-plugin" excludes everything that
is provided on a running Flink cluster. b.2) "mvn clean package -Pbuild-jar"
-> This will also create a fat-jar, but with much nicer dependency exclusion
handling. This approach is preferred and leads to much cleaner jar files. -->
<dependencies>
<!-- Apache Flink dependencies -->
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-core</artifactId>
<version>${flink.version}</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-java</artifactId>
<version>${flink.version}</version>
</dependency>
<!-- This dependency is required to actually execute jobs. It is currently pulled in by
flink-streaming-java, but we explicitly depend on it to safeguard against future changes. -->
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-clients_${scala.binary.version}</artifactId>
<version>${flink.version}</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-streaming-java_${scala.binary.version}</artifactId>
<version>${flink.version}</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-kafka-0.10_${scala.binary.version}</artifactId>
<version>${flink.version}</version>
</dependency>
<!-- explicitly add a standard loggin framework, as Flink does not have
a hard dependency on one specific framework by default -->
<!-- https://mvnrepository.com/artifact/ch.qos.logback/logback-classic -->
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-classic</artifactId>
<version>1.2.3</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-core</artifactId>
<version>1.2.3</version>
</dependency>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>${log4j.version}</version>
</dependency>
<dependency>
<groupId>com.carrotsearch</groupId>
<artifactId>hppc</artifactId>
<version>0.3.4</version>
</dependency>
<!-- TEST PURPOSE -->
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>${junit.version}</version>
</dependency>
<dependency>
<groupId>org.cassandraunit</groupId>
<artifactId>cassandra-unit-shaded</artifactId>
<version>3.3.0.2</version>
</dependency>
<!-- DATASTAX -->
<dependency>
<groupId>com.datastax.cassandra</groupId>
<artifactId>cassandra-driver-extras</artifactId>
<version>${cassandra.version}</version>
<exclusions>
<exclusion>
<groupId>io.netty</groupId>
<artifactId>*</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>com.datastax.cassandra</groupId>
<artifactId>cassandra-driver-mapping</artifactId>
<version>${cassandra.version}</version>
<exclusions>
<exclusion>
<groupId>io.netty</groupId>
<artifactId>*</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-elasticsearch5_2.10</artifactId>
<version>${flink.connector.elastic.version}</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-scala_${scala.binary.version}</artifactId>
<version>${flink.version}</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-streaming-scala_${scala.binary.version}</artifactId>
<version>${flink.version}</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-cep_${scala.binary.version}</artifactId>
<version>${flink.version}</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-statebackend-rocksdb_${scala.binary.version}</artifactId>
<version>${flink.version}</version>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-connector-cassandra_${scala.binary.version}</artifactId>
<version>${flink.version}</version>
</dependency>
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>${jackson.version}</version>
</dependency>
</dependencies>
<profiles>
<profile>
<!-- Profile for packaging correct JAR files -->
<id>build-jar</id>
<activation>
<activeByDefault>false</activeByDefault>
</activation>
<dependencies>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-core</artifactId>
<version>${flink.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-java</artifactId>
<version>${flink.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-clients_${scala.binary.version}</artifactId>
<version>${flink.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.flink</groupId>
<artifactId>flink-streaming-java_${scala.binary.version}</artifactId>
<version>${flink.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>${log4j.version}</version>
<scope>provided</scope>
</dependency>
<!-- DTO -->
<dependency>
<groupId>com.fasterxml.jackson.core</groupId>
<artifactId>jackson-databind</artifactId>
<version>${jackson.version}</version>
</dependency>
<!-- APACHE CASSANDRA -->
<dependency>
<groupId>com.datastax.cassandra</groupId>
<artifactId>cassandra-driver-extras</artifactId>
<version>${cassandra.version}</version>
</dependency>
<dependency>
<groupId>com.datastax.cassandra</groupId>
<artifactId>cassandra-driver-mapping</artifactId>
<version>${cassandra.version}</version>
</dependency>
</dependencies>
<build>
<plugins>
<!-- disable the exclusion rules -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>2.4.1</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<artifactSet>
<excludes combine.self="override"/>
</artifactSet>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
</profile>
</profiles>
<build>
<plugins>
<!-- We use the maven-shade plugin to create a fat jar that contains all
dependencies except flink and it's transitive dependencies. The resulting
fat-jar can be executed on a cluster. Change the value of Program-Class if
your program entry point changes. -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>3.1.0</version>
<executions>
<!-- Run shade goal on package phase -->
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<artifactSet>
<excludes>
<!-- This list contains all dependencies of flink-dist Everything
else will be packaged into the fat-jar -->
<exclude>org.apache.flink:flink-annotations</exclude>
<exclude>org.apache.flink:flink-shaded-hadoop1_2.10</exclude>
<exclude>org.apache.flink:flink-shaded-hadoop2</exclude>
<exclude>org.apache.flink:flink-shaded-curator-recipes</exclude>
<exclude>org.apache.flink:flink-core</exclude>
<exclude>org.apache.flink:flink-java</exclude>
<!-- <exclude>org.apache.flink:flink-scala_2.10</exclude> -->
<exclude>org.apache.flink:flink-runtime_2.10</exclude>
<exclude>org.apache.flink:flink-optimizer_2.10</exclude>
<exclude>org.apache.flink:flink-clients_2.10</exclude>
<exclude>org.apache.flink:flink-avro_2.10</exclude>
<exclude>org.apache.flink:flink-examples-batch_2.10</exclude>
<exclude>org.apache.flink:flink-examples-streaming_2.10</exclude>
<exclude>org.apache.flink:flink-streaming-java_2.10</exclude>
<!-- Also exclude very big transitive dependencies of Flink WARNING:
You have to remove these excludes if your code relies on other versions of
these dependencies. -->
<exclude>org.scala-lang:scala-library</exclude>
<exclude>org.scala-lang:scala-compiler</exclude>
<exclude>org.scala-lang:scala-reflect</exclude>
<exclude>com.typesafe.akka:akka-actor_*</exclude>
<exclude>com.typesafe.akka:akka-remote_*</exclude>
<exclude>io.netty:netty-all</exclude>
<exclude>io.netty:netty</exclude>
<exclude>commons-fileupload:commons-fileupload</exclude>
<exclude>org.apache.avro:avro</exclude>
<exclude>commons-collections:commons-collections</exclude>
<exclude>org.codehaus.jackson:jackson-core-asl</exclude>
<exclude>org.codehaus.jackson:jackson-mapper-asl</exclude>
<exclude>com.thoughtworks.paranamer:paranamer</exclude>
<exclude>org.xerial.snappy:snappy-java</exclude>
<exclude>org.apache.commons:commons-compress</exclude>
<exclude>org.tukaani:xz</exclude>
<exclude>com.esotericsoftware.kryo:kryo</exclude>
<exclude>com.esotericsoftware.minlog:minlog</exclude>
<exclude>org.objenesis:objenesis</exclude>
<exclude>com.twitter:chill_*</exclude>
<exclude>com.twitter:chill-java</exclude>
<exclude>commons-lang:commons-lang</exclude>
<exclude>junit:junit</exclude>
<exclude>org.apache.commons:commons-lang3</exclude>
<exclude>log4j:log4j</exclude>
<exclude>org.apache.commons:commons-math</exclude>
<exclude>org.apache.sling:org.apache.sling.commons.json</exclude>
<exclude>commons-logging:commons-logging</exclude>
<exclude>commons-codec:commons-codec</exclude>
<exclude>stax:stax-api</exclude>
<exclude>com.typesafe:config</exclude>
<exclude>org.uncommons.maths:uncommons-maths</exclude>
<exclude>com.github.scopt:scopt_*</exclude>
<exclude>commons-io:commons-io</exclude>
<exclude>commons-cli:commons-cli</exclude>
</excludes>
</artifactSet>
<filters>
<filter>
<artifact>org.apache.flink:*</artifact>
<excludes>
<!-- exclude shaded google but include shaded curator -->
<exclude>org/apache/flink/shaded/com/**</exclude>
<exclude>web-docs/**</exclude>
</excludes>
</filter>
<filter>
<!-- Do not copy the signatures in the META-INF folder. Otherwise,
this might cause SecurityExceptions when using the JAR. -->
<artifact>*:*</artifact>
<excludes>
<exclude>META-INF/*.SF</exclude>
<exclude>META-INF/*.DSA</exclude>
<exclude>META-INF/*.RSA</exclude>
</excludes>
</filter>
</filters>
<!-- If you want to use ./bin/flink run <quickstart jar> uncomment
the following lines. This will add a Main-Class entry to the manifest file -->
<transformers>
<transformer
implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
<mainClass>it.almaviva.wtf.mms.integratemobilitystatusevent.PassingEventProcess
</mainClass>
</transformer>
</transformers>
<createDependencyReducedPom>false</createDependencyReducedPom>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<configuration>
<source>1.8</source>
<target>1.8</target>
<compilerId>jdt</compilerId>
</configuration>
<dependencies>
<dependency>
<groupId>org.eclipse.tycho</groupId>
<artifactId>tycho-compiler-jdt</artifactId>
<version>0.21.0</version>
</dependency>
</dependencies>
</plugin>
</plugins>
<pluginManagement>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.7.0</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
<compilerId>jdt</compilerId>
</configuration>
<dependencies>
<dependency>
<groupId>org.eclipse.tycho</groupId>
<artifactId>tycho-compiler-jdt</artifactId>
<version>0.21.0</version>
</dependency>
</dependencies>
</plugin>
<plugin>
<groupId>org.eclipse.m2e</groupId>
<artifactId>lifecycle-mapping</artifactId>
<version>1.0.0</version>
<configuration>
<lifecycleMappingMetadata>
<pluginExecutions>
<pluginExecution>
<pluginExecutionFilter>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-assembly-plugin</artifactId>
<versionRange>[2.4,)</versionRange>
<goals>
<goal>single</goal>
</goals>
</pluginExecutionFilter>
<action>
<ignore/>
</action>
</pluginExecution>
<pluginExecution>
<pluginExecutionFilter>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<versionRange>[3.1,)</versionRange>
<goals>
<goal>testCompile</goal>
<goal>compile</goal>
</goals>
</pluginExecutionFilter>
<action>
<ignore/>
</action>
</pluginExecution>
</pluginExecutions>
</lifecycleMappingMetadata>
</configuration>
</plugin>
</plugins>
</pluginManagement>
</build>
</project>
You can disable Netty's native epoll transport and force the default NIO-based transport by adding the JVM argument -Dcom.datastax.driver.FORCE_NIO=true.
(https://docs.datastax.com/en/developer/java-driver-dse/1.4/faq/#what-is-netty-s-native-epoll-transport-and-how-do-i-enable-or-disable-it)
In Flink, you have to set env.java.opts in conf/flink-conf.yaml with that argument.
(https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/config.html#common-options)
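For example, the corresponding line in conf/flink-conf.yaml would look something like this (assuming no other JVM options are already set there):
env.java.opts: "-Dcom.datastax.driver.FORCE_NIO=true"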
I added the argument to conf/flink-conf.yaml and it works like a charm!
I lost hours checking the pom file with my colleagues. :)
Thanks!
REF:
https://docs.datastax.com/en/developer/java-driver-dse/1.4/faq/
https://ci.apache.org/projects/flink/flink-docs-release-1.2/setup/config.html#runtime-algorithms
