NoSuchMethodError with Guava, GCP Cloud Storage & DataStax Cassandra driver

java.lang.NoSuchMethodError: com.google.common.io.ByteStreams.exhaust(Ljava/io/InputStream;)J
I get the above error when using Guava 18.0 with google-cloud-storage 2.2.2:
<dependency>
    <groupId>com.google.cloud</groupId>
    <artifactId>google-cloud-storage</artifactId>
    <version>2.2.2</version>
</dependency>
<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>18.0</version>
</dependency>
<dependency>
    <groupId>com.datastax.cassandra</groupId>
    <artifactId>cassandra-driver-core</artifactId>
    <version>3.2.0</version>
</dependency>
If I use Guava 23.0 with the DataStax driver 3.2.0, I get this error instead:
java.lang.NoClassDefFoundError: com/google/common/util/concurrent/FutureFallback
So Cloud Storage needs a Guava version above 20, but the DataStax driver needs one below 20; only one of them works at a time, and I need both.
My code:
import java.io.FileInputStream;
import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.ReadChannel;
import com.google.cloud.storage.*;

// Build a Storage client from a service-account key, then open the object for reading
StorageOptions options = StorageOptions.newBuilder()
        .setProjectId(PROJECT_ID)
        .setCredentials(GoogleCredentials.fromStream(new FileInputStream(PATH_TO_JSON_KEY)))
        .build();
Storage storage = options.getService();
Blob blob = storage.get(BUCKET_NAME, OBJECT_NAME);
ReadChannel r = blob.reader();

3.2.0 seems like a very old version of the Cassandra driver (from 2017) -- try upgrading to the latest in this major version, 3.11.0.
It also looks like the driver switched packages to com.datastax.oss:java-driver-core and moved to the 4.x major version.
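For reference, the 4.x coordinates look roughly like this (the version shown is only an example from around the time of writing; check Maven Central for the current release):
<dependency>
    <groupId>com.datastax.oss</groupId>
    <artifactId>java-driver-core</artifactId>
    <version>4.13.0</version>
</dependency>
If I remember correctly, the 4.x driver also shades Guava internally, so it would not conflict with the Guava version Cloud Storage needs.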

I tried to reproduce your error using the code and dependencies you provided, and was able to resolve the issue with these versions:
<dependency>
    <groupId>com.google.cloud</groupId>
    <artifactId>google-cloud-storage</artifactId>
    <version>2.2.2</version>
</dependency>
<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>20.0</version>
</dependency>
<dependency>
    <groupId>com.datastax.cassandra</groupId>
    <artifactId>cassandra-driver-core</artifactId>
    <version>3.11.0</version>
</dependency>
It is also advisable to use the latest DataStax driver version when you're using Guava 20.0 or above.

Related

Test Runner file - Hover over error does not prompt user to import api

I created a new Maven project in Eclipse and then created a new "Feature file", "Step Definition" & "Testrunner". However, I see quite a few error messages when I hover over the different annotations on the TestRunner class (see items 1 to 5 below).
I did a lot of research online and added a few dependencies to the pom.xml file, but nothing seems to work.
I currently have the following dependencies in my pom.xml file:
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>3.8.1</version>
<scope>test</scope>
</dependency>
<!-- https://mvnrepository.com/artifact/io.cucumber/cucumber-java -->
<groupId>io.cucumber</groupId>
<artifactId>cucumber-java</artifactId>
<version>6.8.2</version>
<!-- https://mvnrepository.com/artifact/io.cucumber/cucumber-junit -->
<groupId>io.cucumber</groupId>
<artifactId>cucumber-junit</artifactId>
<version>6.8.2</version>
<scope>test</scope>
When I hover my mouse over Cucumber I get the following message
Cucumber cannot be resolved to a variable
When I hover my mouse over @RunWith I get the following
RunWith cannot be resolved to a type
When I hover my mouse over @CucumberOptions I get the following
CucumberOptions cannot be resolved to a type
When I hover my mouse over import cucumber.api.junit.Cucumber; I get the following
The import cucumber.api.junit cannot be resolved
When I hover my mouse over import cucumber.api.CucumberOptions; I get the following
The import cucumber.api.CucumberOptions cannot be resolved
It seems like it's got something to do with the JRE. Here are the solutions I have tried, based on responses I could find on Stack Overflow:
A] Some solutions over the internet suggested that I add the following dependency to the pom.xml file, but it did not resolve the issue
<groupId>info.cukes</groupId>
<artifactId>cucumber-junit</artifactId>
<version>1.2.6</version>
<type>pom</type>
<scope>test</scope>
NOTE: Even after adding these to the pom.xml file, I do not see them under the Maven Dependencies list.
B] Added import cucumber.junit.Cucumber; but even this did not help.
C] Commented out @RunWith(Cucumber.class) and added the following dependency to the pom.xml file:
<groupId>info.cukes</groupId>
<artifactId>cucumber-junit</artifactId>
<version>1.2.5</version>
<type>pom</type>
<scope>test</scope>
NOTE: The error message against "import cucumber.api.CucumberOptions;" disappeared, but the error message against "import cucumber.api.junit.Cucumber;" still shows.
Regards,
Rohit
I was able to resolve this issue.
Solution: verified that the Eclipse environment was using JUnit 4.
Steps:
Clicked on Project --> Properties --> Java Build Path --> Libraries
Then clicked on Add Library
On the "Select the library type to add" window, selected JUnit, clicked Next and then Finish
JUnit showed up on the Java Build Path window
Clicked on Apply and Close.
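For reference, a minimal sketch of pom.xml dependencies that typically work together for Cucumber 6.x with the JUnit 4 runner (the JUnit version below is an assumption; any recent 4.x release should do):
<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.13.2</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>io.cucumber</groupId>
    <artifactId>cucumber-java</artifactId>
    <version>6.8.2</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>io.cucumber</groupId>
    <artifactId>cucumber-junit</artifactId>
    <version>6.8.2</version>
    <scope>test</scope>
</dependency>
Note also that with the io.cucumber 6.x artifacts the runner classes live in io.cucumber.junit (io.cucumber.junit.Cucumber and io.cucumber.junit.CucumberOptions), not in cucumber.api.*.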

OpenAPI java generated client Class not found error

I have an issue with the OpenAPI Generator Java client.
I am using the Node.js OpenAPI Generator to generate a Java API client with:
npx openapi-generator generate -i .\swagger.yaml -g java -o ./output -c config.yaml
then I use mvn package to package the output and integrate it into my Spring Boot application.
My problem appears every time I try to create an instance of my API. I get this error:
Caused by: java.lang.NoClassDefFoundError: io/gsonfire/GsonFireBuilder
I checked the generated pom and I can see that the dependencies are there:
<dependency>
    <groupId>com.google.code.gson</groupId>
    <artifactId>gson</artifactId>
    <version>${gson-version}</version>
</dependency>
<dependency>
    <groupId>io.gsonfire</groupId>
    <artifactId>gson-fire</artifactId>
    <version>${gson-fire-version}</version>
</dependency>
Thanks

How to test code dependent on environment variables using JUnit mockito?

I am trying to add the import org.junit.contrib.java.lang.system.EnvironmentVariables; but I'm getting the error "The import (org.junit.cont....) cannot be resolved". I added the dependency below:
<dependency>
    <groupId>com.github.stefanbirkner</groupId>
    <artifactId>system-rules</artifactId>
    <version>1.2.0</version>
    <scope>test</scope>
</dependency>
Is there any other way to set an environment variable using JUnit/Mockito?
Can I set the env variable using PowerMock?
Use a newer version, like 1.19.0.
I am not sure when the class was introduced, but it's later than 1.2.0.
Edit: GitHub lists a change date of May 18, 2018, so it should be at least 1.18.0.
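Once you're on a version that contains the class, here is a minimal sketch of how the rule is typically used (test and variable names are just illustrative):
import static org.junit.Assert.assertEquals;

import org.junit.Rule;
import org.junit.Test;
import org.junit.contrib.java.lang.system.EnvironmentVariables;

public class EnvironmentVariablesExampleTest {

    // system-rules restores the original environment after each test
    @Rule
    public final EnvironmentVariables environmentVariables = new EnvironmentVariables();

    @Test
    public void readsVariableSetByTheRule() {
        environmentVariables.set("MY_VAR", "some value");
        assertEquals("some value", System.getenv("MY_VAR"));
    }
}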

Kafka Embedded with Spark. Dependencies problems

I'm trying to use Spark Streaming 2.0.0 with Kafka 0.10. For my integration tests I'm using https://github.com/manub/scalatest-embedded-kafka but I have some problems starting the server. When I tried with Spark 2.2.0 it worked.
<dependency>
    <groupId>net.manub</groupId>
    <artifactId>scalatest-embedded-kafka_2.11</artifactId>
    <version>${embedded-kafka.version}</version> <!-- I tried many versions -->
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
    <version>2.0.2</version>
</dependency>
An exception or error caused a run to abort: kafka.server.KafkaServer$.$lessinit$greater$default$2()Lorg/apache/kafka/common/utils/Time;
java.lang.NoSuchMethodError: kafka.server.KafkaServer$.$lessinit$greater$default$2()Lorg/apache/kafka/common/utils/Time;
at net.manub.embeddedkafka.EmbeddedKafkaSupport$class.startKafka(EmbeddedKafka.scala:467)
at net.manub.embeddedkafka.EmbeddedKafka$.startKafka(EmbeddedKafka.scala:38)
at net.manub.embeddedkafka.EmbeddedKafka$.start(EmbeddedKafka.scala:55)
at iris.orange.ScalaTest$$anonfun$1.apply$mcV$sp(ScalaTest.scala:10)
It seems to be a dependency problem, but I didn't get it to work. I chose an embedded Kafka that uses the same Kafka version.
You need to use the proper version of spark-streaming-kafka:
https://mvnrepository.com/artifact/org.apache.spark/spark-streaming-kafka-0-10_2.10/2.0.0
<!-- https://mvnrepository.com/artifact/org.apache.spark/spark-streaming-kafka-0-10 -->
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.10</artifactId>
    <version>2.0.0</version>
</dependency>

Run spark program locally with intellij

I tried to run some simple test code in IntelliJ IDEA. Here is my code:
import org.apache.spark.sql.functions._
import org.apache.spark.SparkConf
import org.apache.spark.sql.{DataFrame, SparkSession}

object hbasetest {
  val spconf = new SparkConf()
  val spark = SparkSession.builder().master("local").config(spconf).getOrCreate()
  import spark.implicits._

  def main(args: Array[String]) {
    val df = spark.read.parquet("file:///Users/cy/Documents/temp")
    df.show()
    spark.close()
  }
}
My dependencies list:
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>2.1.0</version>
    <!--<scope>provided</scope>-->
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.11</artifactId>
    <version>2.1.0</version>
    <!--<scope>provided</scope>-->
</dependency>
When I click the Run button, it throws an exception:
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.mapreduce.TaskID.<init>(Lorg/apache/hadoop/mapreduce/JobID;Lorg/apache/hadoop/mapreduce/TaskType;I)V
I checked this post, but the situation didn't change after making the modification. Can I get some help with running a local Spark application in IDEA? Thanks.
Update: I can run this code with spark-submit. I want to run it directly with the Run button in IDEA.
Are you using the Cloudera sandbox to run this application? In the pom.xml I could see the CDH dependency '2.6.0-mr1-cdh5.5.0'.
If you are using Cloudera, please use the dependencies below for your Spark Scala project, because the 'spark-core_2.10' artifact version changes.
<dependencies>
    <dependency>
        <groupId>org.scala-lang</groupId>
        <artifactId>scala-library</artifactId>
        <version>2.10.2</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.10</artifactId>
        <version>1.0.0-cdh5.1.0</version>
    </dependency>
</dependencies>
I used the below reference to run my spark application.
Reference: http://blog.cloudera.com/blog/2014/04/how-to-run-a-simple-apache-spark-app-in-cdh-5/
Here are the settings I use for Run/Debug configuration in IntelliJ:
*Main class:*
org.apache.spark.deploy.SparkSubmit
*VM Options:*
-cp <spark_dir>/conf/:<spark_dir>/jars/* -Xmx6g
*Program arguments:*
--master
local[*]
--conf
spark.driver.memory=6G
--class
com.company.MyAppMainClass
--num-executors
8
--executor-memory
6G
<project_dir>/target/scala-2.11/my-spark-app.jar
<my_spark_app_args_if_any>
The spark-core and spark-sql jars are declared in my build.sbt as "provided" dependencies, and their versions must match the Spark version installed in spark_dir. I use Spark 2.0.2 at the moment, with hadoop-aws jar version 2.7.2.
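For illustration, a minimal build.sbt sketch of that setup (the versions are just the ones mentioned above; match them to the Spark installed in your spark_dir):
// Spark jars are marked "provided": at run time they come from <spark_dir>/jars
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"  % "2.0.2" % "provided",
  "org.apache.spark" %% "spark-sql"   % "2.0.2" % "provided",
  "org.apache.hadoop" %  "hadoop-aws" % "2.7.2"
)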
It may be late for a reply, but I just had the same issue. You can run with spark-submit, so you probably already have the related dependencies. My solution is:
Change the related dependencies in the IntelliJ Module Settings for your project from provided to compile. You may only need to change some of them, but you have to try; the brute-force solution is to change all of them.
If you get further exceptions after this step, such as some dependencies being "too old", change the order of the related dependencies in the module settings.
I ran into this issue as well, and I also had an old Cloudera Hadoop reference in my code. (You have to click the 'edited' link in the original poster's link to see his original pom settings.)
I could leave that reference in as long as I put this at the top of my dependencies (order matters!). You should match it against your own Hadoop cluster settings.
<dependency>
    <!-- THIS IS REQUIRED FOR LOCAL RUNNING IN INTELLIJ -->
    <!-- IT MUST REMAIN AT TOP OF DEPENDENCY LIST TO 'WIN' AGAINST OLD HADOOP CODE BROUGHT IN -->
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-mapreduce-client-core</artifactId>
    <version>2.6.0-cdh5.12.0</version>
    <scope>provided</scope>
</dependency>
Note that in the 2018.1 version of IntelliJ, you can check "Include dependencies with 'Provided' scope", which is a simple way to keep your pom scopes clean.
