When running a Spark job in a Hadoop cluster I get java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/HBaseConfiguration - apache-spark

My Scala code that connects to an HBase database works perfectly when I run it from my local IDE. But when I run the same code on the Hadoop cluster I get "Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/HBaseConfiguration".
Please help me with this.

Add all the HBase library jars to HADOOP_CLASSPATH -
export HBASE_HOME="YOUR_HBASE_HOME_PATH"
export HADOOP_CLASSPATH="$HADOOP_CLASSPATH:$HBASE_HOME/lib/*"
You can append any external jars you need to HADOOP_CLASSPATH, so you don't have to pass them explicitly on the spark-submit command line. All dependent jars will then be loaded and made available to your Spark application.
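If you would rather not change HADOOP_CLASSPATH, a rough equivalent is to hand the HBase jars to spark-submit directly. This is only a sketch: the main class and application jar below are placeholders, not taken from the question.
# com.example.MyHBaseApp and my-hbase-app.jar are placeholders
spark-submit \
  --class com.example.MyHBaseApp \
  --jars "$(echo "$HBASE_HOME"/lib/*.jar | tr ' ' ',')" \
  my-hbase-app.jar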

Related

Pyspark - spark-submit to an AWS EMR

I have created an EMR cluster (emr-5.36.0) in AWS with the default Spark components (Spark 2.4.8, Hive 2.3.9).
I have installed PySpark (3.3.0) on an EC2 instance, in a Python virtual environment.
From there, I would like to run "spark-submit" commands against the EMR cluster.
To test the command, I am using the Python code at the bottom of this page.
To configure the YARN_CONF_DIR environment variable on the EC2 instance, I copied the yarn-site.xml file from /etc/hadoop/conf.empty/ on the EMR master node to a folder on the EC2 instance.
But now, on the EC2 instance, when I try to run spark-submit, I get:
$ export YARN_CONF_DIR=/home/me/spark/
$ spark-submit --master yarn --deploy-mode cluster spark_test.py
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/shaded/javax/ws/rs/core/NoContentException
at org.apache.hadoop.yarn.util.timeline.TimelineUtils.<clinit>(TimelineUtils.java:60)
at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.serviceInit(YarnClientImpl.java:200)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:191)
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1327)
at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1764)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:958)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1046)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1055)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.shaded.javax.ws.rs.core.NoContentException
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
... 13 more
22/07/18 18:36:25 INFO ShutdownHookManager: Shutdown hook called
And from here I am basically lost. I tried to google the error but I am still not clear what the error is about. Did I miss a step? An environment variable maybe?
Ultimately, I want to use the SparkSubmitOperator in Airflow, but I figured I should get the "native" command to work first before using the operator (which is just a wrapper).
If you set YARN_CONF_DIR=/etc/hadoop_files/ locally, the contents of the hadoop_files folder need to be the contents of the EMR's /etc/hadoop/ folder, not of /etc/hadoop/conf.empty/.
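For example, something along these lines (the EMR master hostname and SSH key are placeholders, and this assumes you have SSH access to the master node):
# <emr-master-dns> and my-key.pem are placeholders
mkdir -p /etc/hadoop_files/
scp -i my-key.pem -r hadoop@<emr-master-dns>:/etc/hadoop/* /etc/hadoop_files/
export YARN_CONF_DIR=/etc/hadoop_files/
spark-submit --master yarn --deploy-mode cluster spark_test.py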

Azure Blob Storage Spark

I'm trying to connect Spark to Azure Blob Storage (wasbs).
I added the following jars to the Hadoop classpath:
com.microsoft.azure_azure-storage-7.0.0.jar
org.apache.hadoop_hadoop-annotations-3.1.2.jar
org.apache.hadoop_hadoop-auth-3.1.2.jar
org.apache.hadoop_hadoop-azure-3.1.2.jar
org.apache.hadoop_hadoop-common-3.1.2.jar
org.eclipse.jetty_jetty-http-9.3.24.v20180605.jar
org.eclipse.jetty_jetty-io-9.3.24.v20180605.jar
org.eclipse.jetty_jetty-security-9.3.24.v20180605.jar
org.eclipse.jetty_jetty-server-9.3.24.v20180605.jar
org.eclipse.jetty_jetty-servlet-9.3.24.v20180605.jar
org.eclipse.jetty_jetty-webapp-9.3.24.v20180605.jar
org.eclipse.jetty_jetty-xml-9.3.24.v20180605.jar
and I try to submit the application using:
spark-submit --class mainClass --jars jars/org.apache.hadoop_hadoop-azure-3.1.2.jar,jars/com.microsoft.azure_azure-storage-7.0.0.jar,jars/org.apache.hadoop_hadoop-common-3.1.2.jar myjar.jar
and I get the following exception:
Exception in thread "main" java.lang.NoClassDefFoundError: org/eclipse/jetty/util/ajax/JSON$Convertor
If I remove hadoop-common from the spark-submit --jars, I get:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/fs/StreamCapabilities
And if I add --jars jars/* to include all the jar files, along with jetty-util, I get:
java.lang.ClassNotFoundException: my.package.MainClass
I saw similar posts that point to multiple Jetty versions on the classpath, but I can't find other versions anywhere.
For the first exception, you're missing jetty-util:
https://mvnrepository.com/artifact/org.eclipse.jetty/jetty-util/9.3.24.v20180605
You should also verify that hadoop classpath returns what you expect.
For the remaining exceptions, verify that you can run hadoop fs -ls wasb://path on each potential Spark executor.
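In terms of the question's command, that means adding the jetty-util jar to the --jars list, for example (paths kept in the same style as the question):
spark-submit --class mainClass \
  --jars jars/org.eclipse.jetty_jetty-util-9.3.24.v20180605.jar,jars/org.apache.hadoop_hadoop-azure-3.1.2.jar,jars/com.microsoft.azure_azure-storage-7.0.0.jar,jars/org.apache.hadoop_hadoop-common-3.1.2.jar \
  myjar.jar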

Not able to load twitter jar file in Spark cluster

I am working on a basic Spark Twitter application, but I am not able to load the Twitter jar files in the Spark cluster.
spark-shell --jars /usr/local/Twitter/spark-streaming-twitter_2.11-2.0.1.jar,\
/usr/local/Twitter/twitter4j-core-4.0.6.jar,\
/usr/local/Twitter/twitter4j-stream-4.0.6.jar
I am using the above command to add the jar files to the Spark environment, but I am getting a file-not-found exception:
18/03/28 09:11:39 ERROR SparkContext: Failed to add file:/usr/local/Twitter/spark-streaming-twitter_2.11-2.0.1.jar to Spark environment
java.io.FileNotFoundException: Jar /usr/local/Twitter/spark-streaming-twitter_2.11-2.0.1.jar not found

How to run an interactive spark application from spark-shell/spark-submit

I have a Spark app that reads a large amount of data, loads it into memory, and sets everything up so the user can query the in-memory dataframe multiple times. Once a query is done, the user is prompted on the console to either continue with a new set of input or quit the application.
I can do this very well in the IDE. However, can I run this interactive Spark app from spark-shell?
I've used Spark Job Server before to achieve interactive querying on a memory-loaded dataframe, but not from a shell. Any pointers?
Thanks!
UPDATE 1:
Here is what the project jar looks like; it is packaged with all the other dependencies.
jar tf target/myhome-0.0.1-SNAPSHOT.jar
META-INF/MANIFEST.MF
META-INF/
my_home/
my_home/myhome/
my_home/myhome/App$$anonfun$foo$1.class
my_home/myhome/App$.class
my_home/myhome/App.class
my_home/myhome/Constants$.class
my_home/myhome/Constants.class
my_home/myhome/RecommendMatch$$anonfun$1.class
my_home/myhome/RecommendMatch$$anonfun$2.class
my_home/myhome/RecommendMatch$$anonfun$3.class
my_home/myhome/RecommendMatch$.class
my_home/myhome/RecommendMatch.class
and ran spark-shell with the following options
spark-shell -i my_home/myhome/RecommendMatch.class --master local --jars /Users/anon/Documents/Works/sparkworkspace/myhome/target/myhome-0.0.1-SNAPSHOT.jar
but the shell throws the following message on startup. The jars are loaded, as per the environment shown at localhost:4040.
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
17/05/16 10:10:01 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/05/16 10:10:06 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Spark context Web UI available at http://192.168.0.101:4040
Spark context available as 'sc' (master = local, app id = local-1494909601904).
Spark session available as 'spark'.
That file does not exist
Welcome to
...
UPDATE 2 (using spark-submit)
I tried with the full path to the jar. Next, I tried copying the project jar to the bin location.
pwd
/usr/local/Cellar/apache-spark/2.1.0/bin
spark-submit --master local —-class my_home.myhome.RecommendMatch.class --jars myhome-0.0.1-SNAPSHOT.jar
Error: Cannot load main class from JAR file:/usr/local/Cellar/apache-spark/2.1.0/bin/—-class
Try the -i <path_to_file> option to run the Scala code in your file, or the :load <path_to_file> function inside the Scala shell.
Relevant Q&A: Spark : how to run spark file from spark shell
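A sketch of both variants; note that -i and :load expect a Scala source file, not a compiled .class file, so the file name RecommendMatch.scala below is an assumption:
spark-shell --master local \
  --jars /Users/anon/Documents/Works/sparkworkspace/myhome/target/myhome-0.0.1-SNAPSHOT.jar \
  -i RecommendMatch.scala
# or, from inside a running spark-shell:
# scala> :load RecommendMatch.scala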
The following command works to run an interactive spark application.
spark-submit /usr/local/Cellar/apache-spark/2.1.0/bin/myhome-0.0.1-SNAPSHOT.jar
Note that this is an uber jar built with the main class as the entry point and all dependent libraries bundled in. Check out http://maven.apache.org/plugins/maven-shade-plugin/
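If the jar's manifest does not declare a main class, the same thing can be done by naming it explicitly with --class (the class name is taken from the question; note that it is the fully qualified class name, without a .class suffix):
spark-submit --master local --class my_home.myhome.RecommendMatch \
  /usr/local/Cellar/apache-spark/2.1.0/bin/myhome-0.0.1-SNAPSHOT.jar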

spark on yarn java.io.IOException: No FileSystem for scheme: s3n

My English is poor, sorry, but I really need help.
I use spark-2.0.0-bin-hadoop2.7 and Hadoop 2.7.3. I read logs from S3 and write the results to local HDFS. I can run the Spark driver in standalone mode successfully, but when I run the same driver in yarn mode it throws:
17/02/10 16:20:16 ERROR ApplicationMaster: User class threw exception: java.io.IOException: No FileSystem for scheme: s3n
In hadoop-env.sh I added:
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HADOOP_HOME/share/hadoop/tools/lib/*
Running hadoop fs -ls s3n://xxx/xxx/xxx can list the files.
I think the problem is that it can't find aws-java-sdk-1.7.4.jar and hadoop-aws-2.7.3.jar.
What should I do?
I'm not using the same versions as you, but here is an extract of my [spark_path]/conf/spark-defaults.conf file that was necessary to get s3a working:
# hadoop s3 config
spark.driver.extraClassPath [path]/guava-16.0.1.jar:[path]/aws-java-sdk-1.7.4.jar:[path]/hadoop-aws-2.7.2.jar
spark.executor.extraClassPath [path]/guava-16.0.1.jar:[path]/aws-java-sdk-1.7.4.jar:[path]/hadoop-aws-2.7.2.jar
spark.hadoop.fs.s3a.impl org.apache.hadoop.fs.s3a.S3AFileSystem
spark.hadoop.fs.s3a.access.key [key]
spark.hadoop.fs.s3a.secret.key [key]
spark.hadoop.fs.s3a.fast.upload true
Alternatively, you can specify paths to the jars in comma-separated format with the --jars option on job submit (a full example follows the notes below):
--jars [path]aws-java-sdk-[version].jar,[path]hadoop-aws-[version].jar
Notes:
Ensure the jars are in the same location on all nodes in your cluster
Replace [path] with your path
Replace s3a with your preferred protocol (last time I checked s3a was best)
I don't think guava is required to get s3a working but I can't remember
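Putting the --jars variant together, a sketch of a full submit command (the jar locations and application jar are placeholders, and the versions should match your Hadoop build):
spark-submit --master yarn \
  --jars /opt/jars/aws-java-sdk-1.7.4.jar,/opt/jars/hadoop-aws-2.7.2.jar \
  --conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem \
  --conf spark.hadoop.fs.s3a.access.key=[key] \
  --conf spark.hadoop.fs.s3a.secret.key=[key] \
  my-app.jar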
Stick the JARs into SPARK_HOME/lib, with the rest of the spark bits.
spark.hadoop.fs.s3a.impl org.apache.hadoop.fs.s3a.S3AFileSystem isn't needed; the JAR will be autoscanned and picked up.
Don't play with fast.output.enabled on 2.7.x unless you know what you are doing and are prepared to tune some of the thread-pool options. Start without that option.
Add these jars to $SPARK_HOME/jars:
aws-java-sdk-1.7.4.jar, hadoop-aws-2.7.3.jar, jackson-annotations-2.7.0.jar, jackson-core-2.7.0.jar, jackson-databind-2.7.0.jar, joda-time-2.9.6.jar
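For example, assuming the jars have already been downloaded into the current working directory:
cp aws-java-sdk-1.7.4.jar hadoop-aws-2.7.3.jar \
   jackson-annotations-2.7.0.jar jackson-core-2.7.0.jar jackson-databind-2.7.0.jar \
   joda-time-2.9.6.jar "$SPARK_HOME/jars/"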
