How to get the working directory in executor - apache-spark

I am using the following command to submit a Spark job. I want to ship a jar and a config file to each executor and load them there:
spark-submit --verbose \
--files=/tmp/metrics.properties \
--jars /tmp/datainsights-metrics-source-assembly-1.0.jar \
--total-executor-cores 4 \
--conf "spark.metrics.conf=metrics.properties" \
--conf "spark.executor.extraClassPath=datainsights-metrics-source-assembly-1.0.jar" \
--class org.microsoft.ofe.datainsights.StartServiceSignalPipeline \
./target/datainsights-1.0-jar-with-dependencies.jar
--files and --jars are used to ship files to the executors, and I can see that the files do arrive in the executor's working directory, e.g. 'worker/app-xxxxx-xxxx/0/'.
But when the job runs, the executors keep throwing exceptions saying they cannot find the file 'metrics.properties' or the class contained in 'datainsights-metrics-source-assembly-1.0.jar'. It looks as if the job is searching for the files in some directory other than the working directory.
Does anyone know how to load a file that has been shipped to the executors?
Here is the trace (The class 'org.apache.spark.metrics.PerfCounterSource' is contained in the jar 'datainsights-metrics-source-assembly-1.0.jar'):
ERROR 2016-01-14 16:10:32 Logging.scala:96 - org.apache.spark.metrics.MetricsSystem: Source class org.apache.spark.metrics.PerfCounterSource cannot be instantiated
java.lang.ClassNotFoundException: org.apache.spark.metrics.PerfCounterSource
at java.net.URLClassLoader$1.run(URLClassLoader.java:366) ~[na:1.7.0_80]
at java.net.URLClassLoader$1.run(URLClassLoader.java:355) ~[na:1.7.0_80]
at java.security.AccessController.doPrivileged(Native Method) [na:1.7.0_80]
at java.net.URLClassLoader.findClass(URLClassLoader.java:354) ~[na:1.7.0_80]
at java.lang.ClassLoader.loadClass(ClassLoader.java:425) ~[na:1.7.0_80]
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308) ~[na:1.7.0_80]
at java.lang.ClassLoader.loadClass(ClassLoader.java:358) ~[na:1.7.0_80]
at java.lang.Class.forName0(Native Method) ~[na:1.7.0_80]
at java.lang.Class.forName(Class.java:195) ~[na:1.7.0_80]

It looks like you have a typo in your --jars argument, so it could be that Spark is not actually picking up the jar and is continuing silently.
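For what it's worth, here is a sketch of the submission with the small syntax issues cleaned up (untested; whether the relative ./ classpath entry is needed depends on your cluster manager, so treat it as an assumption). Files shipped with --files and --jars end up under their bare file names in each executor's working directory, so the executor-side settings reference them without any path prefix:
# sketch: same job, with --files/--jars syntax normalized and the executor
# classpath pointing at the jar in the executor's working directory
spark-submit --verbose \
--files /tmp/metrics.properties \
--jars /tmp/datainsights-metrics-source-assembly-1.0.jar \
--total-executor-cores 4 \
--conf "spark.metrics.conf=metrics.properties" \
--conf "spark.executor.extraClassPath=./datainsights-metrics-source-assembly-1.0.jar" \
--class org.microsoft.ofe.datainsights.StartServiceSignalPipeline \
./target/datainsights-1.0-jar-with-dependencies.jar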

Related

spark.driver.extraLibraryPath override original library path

I have a Spark job running on an AWS EMR cluster that needs access to a native library (*.so). Per Spark's documentation (https://spark.apache.org/docs/2.3.0/configuration.html) I added the "spark.driver.extraLibraryPath" and "spark.executor.extraLibraryPath" options to the spark-submit command line:
spark-submit \
--class test.Clustering \
--conf spark.executor.extraLibraryPath="/opt/test/lib/native" \
--conf spark.driver.extraLibraryPath="/opt/test/lib/native" \
--master yarn \
--deploy-mode client \
s3-etl-prepare-1.0-SNAPSHOT-jar-with-dependencies.jar "$@"
It works as expected and the native lib is loaded. The problem is that during the Spark job I need to run a distributed LZO indexer MR job, which needs the LZO native library, and the LZO code cannot load the native GPL library:
21/06/16 09:49:09 ERROR GPLNativeCodeLoader: Could not load native gpl library
java.lang.UnsatisfiedLinkError: no gplcompression in java.library.path
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1860)
at java.lang.Runtime.loadLibrary0(Runtime.java:870)
at java.lang.System.loadLibrary(System.java:1124)
at com.hadoop.compression.lzo.GPLNativeCodeLoader.<clinit>(GPLNativeCodeLoader.java:32)
at com.hadoop.compression.lzo.LzoCodec.<clinit>(LzoCodec.java:71)
at com.hadoop.compression.lzo.DistributedLzoIndexer.<init>(DistributedLzoIndexer.java:28)
at test.misc.FileHelper.distributIndexLzoFile(FileHelper.scala:260)
at test.scalaapp.Clustering$.main(Clustering.scala:66)
at test.scalaapp.Clustering.main(Clustering.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:853)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:928)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:937)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
it seems "spark.driver.extraLibraryPath" option override or change the whole library path rather than append a new one, how can I keep both gpl lzo native path and my own library path?

Can't setup spark application with spark-atlas-connector

I can't set up my Spark application with Apache Atlas via the spark-atlas-connector.
I cloned the https://github.com/hortonworks-spark/spark-atlas-connector project and executed mvn package. Then I put all the jars in my project and set up the config like this:
def main(args: Array[String]): Unit = {
  val sparkConf = new SparkConf()
    .setAppName("atlas-test")
    .setMaster("local[2]")
    .set("spark.extraListeners", "com.hortonworks.spark.atlas.SparkAtlasEventTracker")
    .set("spark.sql.queryExecutionListeners", "com.hortonworks.spark.atlas.SparkAtlasEventTracker")
    .set("spark.sql.streaming.streamingQueryListeners", "com.hortonworks.spark.atlas.SparkAtlasStreamingQueryEventTracker")

  val spark = SparkSession.builder()
    .config(sparkConf)
    .enableHiveSupport()
    .getOrCreate()

  import spark.implicits._

  val df = spark.read.format("kafka")
    .option("kafka.bootstrap.servers", BROKER_SERVERS)
    .option("subscribe", "foobar")
    .option("startingOffset", "earliest")
    .load()

  df.show()

  df.write
    .format("kafka")
    .option("kafka.bootstrap.servers", BROKER_SERVERS)
    .option("topic", "foobar-out")
    .save()
}
Atlas is started via a docker container that I pulled.
Kafka and Zookeeper are started via docker containers that I pulled too.
The job works without the spark-atlas-connector, but when I add the connector it throws exceptions.
19/08/09 16:40:16 ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Exception when registering SparkListener
at org.apache.spark.SparkContext.setupAndStartListenerBus(SparkContext.scala:2398)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:555)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:935)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:926)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:926)
at Boot$.main(Boot.scala:21)
at Boot.main(Boot.scala)
Caused by: org.apache.atlas.AtlasException: Failed to load application properties
at org.apache.atlas.ApplicationProperties.get(ApplicationProperties.java:134)
at org.apache.atlas.ApplicationProperties.get(ApplicationProperties.java:86)
at com.hortonworks.spark.atlas.AtlasClientConf.configuration$lzycompute(AtlasClientConf.scala:25)
at com.hortonworks.spark.atlas.AtlasClientConf.configuration(AtlasClientConf.scala:25)
at com.hortonworks.spark.atlas.AtlasClientConf.get(AtlasClientConf.scala:50)
at com.hortonworks.spark.atlas.AtlasClient$.atlasClient(AtlasClient.scala:120)
at com.hortonworks.spark.atlas.SparkAtlasEventTracker.<init>(SparkAtlasEventTracker.scala:33)
at com.hortonworks.spark.atlas.SparkAtlasEventTracker.<init>(SparkAtlasEventTracker.scala:37)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.spark.util.Utils$$anonfun$loadExtensions$1.apply(Utils.scala:2691)
at org.apache.spark.util.Utils$$anonfun$loadExtensions$1.apply(Utils.scala:2680)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
at org.apache.spark.util.Utils$.loadExtensions(Utils.scala:2680)
at org.apache.spark.SparkContext$$anonfun$setupAndStartListenerBus$1.apply(SparkContext.scala:2387)
at org.apache.spark.SparkContext$$anonfun$setupAndStartListenerBus$1.apply(SparkContext.scala:2386)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.SparkContext.setupAndStartListenerBus(SparkContext.scala:2386)
... 8 more
Caused by: com.hortonworks.spark.atlas.shade.org.apache.commons.configuration.ConfigurationException: Cannot locate configuration source null
at com.hortonworks.spark.atlas.shade.org.apache.commons.configuration.AbstractFileConfiguration.load(AbstractFileConfiguration.java:259)
at com.hortonworks.spark.atlas.shade.org.apache.commons.configuration.AbstractFileConfiguration.load(AbstractFileConfiguration.java:238)
at com.hortonworks.spark.atlas.shade.org.apache.commons.configuration.AbstractFileConfiguration.<init>(AbstractFileConfiguration.java:197)
at com.hortonworks.spark.atlas.shade.org.apache.commons.configuration.PropertiesConfiguration.<init>(PropertiesConfiguration.java:284)
at org.apache.atlas.ApplicationProperties.<init>(ApplicationProperties.java:69)
at org.apache.atlas.ApplicationProperties.get(ApplicationProperties.java:125)
... 32 more
19/08/09 16:40:16 INFO SparkContext: SparkContext already stopped.
Exception in thread "main" org.apache.spark.SparkException: Exception when registering SparkListener
at org.apache.spark.SparkContext.setupAndStartListenerBus(SparkContext.scala:2398)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:555)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2520)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:935)
at org.apache.spark.sql.SparkSession$Builder$$anonfun$7.apply(SparkSession.scala:926)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:926)
at Boot$.main(Boot.scala:21)
at Boot.main(Boot.scala)
Caused by: org.apache.atlas.AtlasException: Failed to load application properties
at org.apache.atlas.ApplicationProperties.get(ApplicationProperties.java:134)
at org.apache.atlas.ApplicationProperties.get(ApplicationProperties.java:86)
at com.hortonworks.spark.atlas.AtlasClientConf.configuration$lzycompute(AtlasClientConf.scala:25)
at com.hortonworks.spark.atlas.AtlasClientConf.configuration(AtlasClientConf.scala:25)
at com.hortonworks.spark.atlas.AtlasClientConf.get(AtlasClientConf.scala:50)
at com.hortonworks.spark.atlas.AtlasClient$.atlasClient(AtlasClient.scala:120)
at com.hortonworks.spark.atlas.SparkAtlasEventTracker.<init>(SparkAtlasEventTracker.scala:33)
at com.hortonworks.spark.atlas.SparkAtlasEventTracker.<init>(SparkAtlasEventTracker.scala:37)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.spark.util.Utils$$anonfun$loadExtensions$1.apply(Utils.scala:2691)
at org.apache.spark.util.Utils$$anonfun$loadExtensions$1.apply(Utils.scala:2680)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
at org.apache.spark.util.Utils$.loadExtensions(Utils.scala:2680)
at org.apache.spark.SparkContext$$anonfun$setupAndStartListenerBus$1.apply(SparkContext.scala:2387)
at org.apache.spark.SparkContext$$anonfun$setupAndStartListenerBus$1.apply(SparkContext.scala:2386)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.SparkContext.setupAndStartListenerBus(SparkContext.scala:2386)
... 8 more
Caused by: com.hortonworks.spark.atlas.shade.org.apache.commons.configuration.ConfigurationException: Cannot locate configuration source null
at com.hortonworks.spark.atlas.shade.org.apache.commons.configuration.AbstractFileConfiguration.load(AbstractFileConfiguration.java:259)
at com.hortonworks.spark.atlas.shade.org.apache.commons.configuration.AbstractFileConfiguration.load(AbstractFileConfiguration.java:238)
at com.hortonworks.spark.atlas.shade.org.apache.commons.configuration.AbstractFileConfiguration.<init>(AbstractFileConfiguration.java:197)
at com.hortonworks.spark.atlas.shade.org.apache.commons.configuration.PropertiesConfiguration.<init>(PropertiesConfiguration.java:284)
at org.apache.atlas.ApplicationProperties.<init>(ApplicationProperties.java:69)
at org.apache.atlas.ApplicationProperties.get(ApplicationProperties.java:125)
... 32 more
19/08/09 16:40:17 INFO ShutdownHookManager: Shutdown hook called
System.setProperty("atlas.conf", "") is the correct solution, as noted by the OP.
SAC uses ApplicationProperties.java; specifically, it uses ApplicationProperties.get.
Source code is here:
https://github.com/apache/atlas/blob/master/intg/src/main/java/org/apache/atlas/ApplicationProperties.java#L118
You can see the variable ATLAS_CONFIGURATION_DIRECTORY_PROPERTY is set to "atlas.conf":
https://github.com/apache/atlas/blob/master/intg/src/main/java/org/apache/atlas/ApplicationProperties.java#L43
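If you would rather not hard-code the property in the application, the same idea can be expressed on the spark-submit side by passing the system property to the driver JVM (a sketch; the /etc/atlas/conf directory and my-app.jar are placeholders, while the atlas.conf property name comes from the Atlas source linked above):
# sketch: point the Atlas client at the directory containing atlas-application.properties
spark-submit \
--conf "spark.driver.extraJavaOptions=-Datlas.conf=/etc/atlas/conf" \
--class Boot \
my-app.jar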
I believe you have forgotten one more step from the setup documentation. The error you are seeing stems from:
Caused by: com.hortonworks.spark.atlas.shade.org.apache.commons.configuration.ConfigurationException: Cannot locate configuration source null
And to quote their README file in the github repo you've posted:
Also make sure atlas configuration file atlas-application.properties is in the Driver's classpath. For example, putting this file into <SPARK_HOME>/conf.
Please refer to this from the official spark-atlas-connector GitHub page. The atlas-application.properties file needs to be reachable:
Also make sure the Atlas configuration file atlas-application.properties is in the Driver's classpath. For example, putting this file into <SPARK_HOME>/conf.
If you're using cluster mode, please also ship this conf file to the remote Driver using --files atlas-application.properties.
The following should do the trick. Note the --files and --driver-class-path options, which are necessary to place this configuration file on the classpath and hence make it available to the Atlas client classes.
Moreover, spark-shell here uses paths relative to the Spark Atlas Connector project directory, so adjust them accordingly.
$SPARK_HOME/bin/spark-shell \
--jars spark-atlas-connector-assembly/target/spark-atlas-connector-assembly-0.1.0-SNAPSHOT.jar \
--conf spark.extraListeners=com.hortonworks.spark.atlas.SparkAtlasEventTracker \
--conf spark.sql.queryExecutionListeners=com.hortonworks.spark.atlas.SparkAtlasEventTracker \
--conf spark.sql.streaming.streamingQueryListeners=com.hortonworks.spark.atlas.SparkAtlasStreamingQueryEventTracker \
--files spark-atlas-connector/src/test/resources/atlas-application.properties \
--driver-class-path spark-atlas-connector/src/test/resources

Spark job not running when jar is in HDFS

I am trying to run a Spark job in standalone mode, but the command is not picking up the jar from HDFS. The jar is present at the HDFS location, and it works fine when I run it in local mode.
Below is the command I am using:
spark-submit --deploy-mode client --master yarn --class com.main.WordCount /spark/wc.jar
Below is my program:
val conf = new SparkConf().setAppName("WordCount").setMaster("yarn")
val spark = new SparkContext(conf)
val file = spark.textFile(args(0))
val count = file.flatMap(f=>f.split(" ")).map(word=>(word,1)).reduceByKey(_+_).collect
count.foreach(println)
And I am getting the below error:
Warning: Local jar /spark/wc.jar does not exist, skipping.
java.lang.ClassNotFoundException: com.main.WordCount
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.util.Utils$.classForName(Utils.scala:228)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:693)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
But if I use deploy mode cluster, I get the below error:
Exception in thread "main" java.io.FileNotFoundException: File file:/spark/wc.jar does not exist
at org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:611)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:824)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:601)
at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:337)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:289)
at org.apache.spark.deploy.yarn.Client.copyFileToRemote(Client.scala:340)
at org.apache.spark.deploy.yarn.Client.org$apache$spark$deploy$yarn$Client$$distribute$1(Client.scala:433)
at org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$10.apply(Client.scala:530)
at org.apache.spark.deploy.yarn.Client$$anonfun$prepareLocalResources$10.apply(Client.scala:529)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.deploy.yarn.Client.prepareLocalResources(Client.scala:529)
at org.apache.spark.deploy.yarn.Client.createContainerLaunchContext(Client.scala:834)
at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:167)
at org.apache.spark.deploy.yarn.Client.run(Client.scala:1119)
at org.apache.spark.deploy.yarn.Client$.main(Client.scala:1178)
at org.apache.spark.deploy.yarn.Client.main(Client.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:736)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:185)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:210)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:124)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Could you please clarify what you mean by local mode? There are only two deploy modes, client and cluster; the only difference is that in client mode the driver program runs on the machine you submit from, while in cluster mode the driver program runs on a node in the cluster.
For the spark-submit command:
When you execute spark-submit, Spark pulls all the local resources/files defined with the --files and --py-files arguments, as well as the Spark main jar, into a temporary HDFS location/directory created for that particular Spark application under the application name. When you give an HDFS location instead, it fails to locate the jar on the local machine. It is mandatory to keep the jar local.
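Following that answer, a minimal sketch of a working workflow (the /tmp target path is an arbitrary choice) is to pull the jar out of HDFS first and point spark-submit at the local copy:
# sketch: copy the jar from HDFS to the local filesystem, then submit the local copy
hdfs dfs -get /spark/wc.jar /tmp/wc.jar
spark-submit --deploy-mode client --master yarn --class com.main.WordCount /tmp/wc.jar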

Directory expansion does not work in standalone deployment mode : Apache Spark

I'm trying to deploy a Spark Streaming job that consumes a Kafka topic to a standalone Spark cluster, using the following command:
./bin/spark-submit --class MaxwellCdc.MaxwellSreaming
~/cdc/cdc_2.11-0.1.jar --jars ~/cdc/kafka_2.11-0.10.0.1.jar,
~/cdc/kafka-clients-0.10.0.1.jar,~/cdc/mysql-connector-java-5.1.12.jar,
~/cdc/spark-streaming-kafka-0-10_2.11-2.2.1.jar
and getting this exception:
Exception in thread "main" java.lang.NoClassDefFoundError:
org/apache/kafka/common/serialization/StringDeserializer
at MaxwellCdc.MaxwellSreaming$.main(MaxwellSreaming.scala:30)
at MaxwellCdc.MaxwellSreaming.main(MaxwellSreaming.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:775)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:119)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException:
org.apache.kafka.common.serialization.StringDeserializer
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
Any help would be appreciated.
Quoting from the documentation:
When using spark-submit, the application jar along with any jars
included with the --jars option will be automatically transferred to
the cluster. URLs supplied after --jars must be separated by commas.
That list is included in the driver and executor classpaths.
Directory expansion does not work with --jars.
What is directory expansion?
Expanding a file name means converting a relative file name to an absolute one. Since this is done relative to a default directory, you must specify the default directory name as well as the file name to be expanded. It also involves expanding abbreviations like ~/.
Therefore, try providing an absolute path for every jar passed with the --jars option. I hope this helps.
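As a sketch of what the corrected submission could look like (untested; the $HOME-based paths are just absolute stand-ins for ~, and note that --jars has to appear before the application jar, otherwise spark-submit treats it as an argument to your application):
# sketch: absolute jar paths, with --jars placed before the application jar
./bin/spark-submit --class MaxwellCdc.MaxwellSreaming \
--jars $HOME/cdc/kafka_2.11-0.10.0.1.jar,$HOME/cdc/kafka-clients-0.10.0.1.jar,$HOME/cdc/mysql-connector-java-5.1.12.jar,$HOME/cdc/spark-streaming-kafka-0-10_2.11-2.2.1.jar \
$HOME/cdc/cdc_2.11-0.1.jar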

Spark streaming - class not found - HDFS file streaming - java.lang.ClassNotFoundException: com.pepperdata.spark.metrics.PepperdataSparkListener

I have submitted a Spark Streaming job in yarn-cluster mode, but I am getting the following error.
SparkSubmit Command:
export SPARK_CLASSPATH=/usr/hdp/current/hbase-client/lib/hbase-common.jar:/usr/hdp/current/hbase-client/lib/hbase-client.jar:/usr/hdp/current/hbase-client/lib/hbase-server.jar:/usr/hdp/current/hbase-client/lib/hbase-protocol.jar:/usr/hdp/current/hbase-client/lib/guava-12.0.1.jar:/usr/hdp/current/hbase-client/lib/htrace-core-3.1.0-incubating.jar
spark-submit --master yarn-cluster --keytab /etc/security/keytabs/srvc_egsc_hdpuser.service.keytab --principal srvc_egsc_hdpuser@EAPKDC.HOUSTON.HP.COM --queue sc_streaming --class com.reni.scmplatform.data.producer.DPMain --executor-memory 5g --driver-memory 8g --conf spark.sql.shuffle.partitions=10 --conf spark.default.parallelism=50 --jars /usr/hdp/current/hbase-client/lib/hbase-common.jar,/usr/hdp/current/hbase-client/lib/hbase-client.jar,/usr/hdp/current/hbase-client/lib/hbase-server.jar,/usr/hdp/current/hbase-client/lib/hbase-protocol.jar,/usr/hdp/current/hbase-client/lib/guava-12.0.1.jar,/usr/hdp/current/hbase-client/lib/htrace-core-3.1.0-incubating.jar --files /etc/spark/conf/hbase-site.xml,/etc/spark/conf/hive-site.xml hdfs://EAPROD/EA/supplychain/streaming/logistics/entaly/jars/DataProducer-assembly-1.0.15-SNAPSHOT.jar --platform.framework.hdfs.logging.dir=/EA/supplychain/process/logs/logistics/entaly/dataProducer --platform.framework.logging.level=info --platform.framework.logging.publish=true
Error:
18/03/12 05:14:30 ERROR ApplicationMaster: User class threw exception: org.apache.spark.SparkException: Exception when registering SparkListener
org.apache.spark.SparkException: Exception when registering SparkListener
at org.apache.spark.SparkContext.setupAndStartListenerBus(SparkContext.scala:2154)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:578)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2280)
at org.apache.spark.streaming.StreamingContext.<init>(StreamingContext.scala:140)
at org.apache.spark.streaming.StreamingContext$$anonfun$getOrCreate$1.apply(StreamingContext.scala:877)
at org.apache.spark.streaming.StreamingContext$$anonfun$getOrCreate$1.apply(StreamingContext.scala:877)
at scala.Option.map(Option.scala:145)
at org.apache.spark.streaming.StreamingContext$.getOrCreate(StreamingContext.scala:877)
at com.reni.scmplatform.data.producer.helper.DPStreamEventHandler.start(DPStreamEventHandler.scala:63)
at com.reni.scmplatform.data.producer.DPMain$.main(DPMain.scala:27)
at com.reni.scmplatform.data.producer.DPMain.main(DPMain.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:561)
Caused by: java.lang.ClassNotFoundException: com.pepperdata.spark.metrics.PepperdataSparkListener
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.util.Utils$.classForName(Utils.scala:175)
at org.apache.spark.SparkContext$$anonfun$setupAndStartListenerBus$1.apply(SparkContext.scala:2122)
at org.apache.spark.SparkContext$$anonfun$setupAndStartListenerBus$1.apply(SparkContext.scala:2119)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:34)
at org.apache.spark.SparkContext.setupAndStartListenerBus(SparkContext.scala:2119)
... 15 more
18/03/12 05:14:30 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
18/03/12 05:14:30 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
18/03/12 05:14:30 INFO ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: org.apache.spark.SparkException: Exception when registering SparkListener)
You should add the JAR containing the missing class to the job classpath by using the --jars option (see this answer: spark submit add multiple jars in classpath).
Moreover, you can use the sbt-assembly plugin to take care of these things for you:
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.3")
Then build with sbt assembly, and all the jars needed by your application will be included in the single job jar sent to YARN.
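For the first suggestion, a sketch of what adding the missing listener jar could look like (the /opt/pepperdata/pepperdata-spark-listener.jar path is purely a placeholder for wherever the Pepperdata agent jar lives on your cluster; the other options from your original command are omitted for brevity):
# sketch: ship the jar that contains com.pepperdata.spark.metrics.PepperdataSparkListener
spark-submit --master yarn-cluster \
--class com.reni.scmplatform.data.producer.DPMain \
--jars /opt/pepperdata/pepperdata-spark-listener.jar \
hdfs://EAPROD/EA/supplychain/streaming/logistics/entaly/jars/DataProducer-assembly-1.0.15-SNAPSHOT.jar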
