How to share files from the master node to executors in Spark? How to use the --files argument? - apache-spark

Can someone please explain how I can ship files from the master to all executors using the --files argument in spark-submit:
/bin/spark-submit --master yarn --queue development --conf spark.memory.offHeap.enabled=true --conf spark.memory.offHeap.size=128G --files /keras/mnist.npz
But this gives me an error. I am new to Spark.
Exception in thread "main" java.lang.IllegalArgumentException: Missing application resource.

You didn't specify the application resource (your application JAR or script) in this command. Find more details in Running Spark on YARN.
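For illustration only, assuming a PySpark application (the script path below is a placeholder, not a value from the question), a corrected command could look roughly like this:
/bin/spark-submit --master yarn --queue development \
--conf spark.memory.offHeap.enabled=true \
--conf spark.memory.offHeap.size=128G \
--files /keras/mnist.npz \
/path/to/train_mnist.py
A file shipped with --files is copied into each executor's working directory and can be located from the application via SparkFiles.get("mnist.npz").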

Related

Spark --jars option: added jars are not working

I am trying to add the Redshift jars using the spark-submit --jars option.
I am running this command on Spark 2.1.0:
spark-submit --class Test --master spark://xyz.local:7077 --executor-cores 4 --total-executor-cores 32 --executor-memory 6G --driver-memory 4G --driver-cores 2 --deploy-mode cluster --jars s3a://d11-batch-jobs-on-spark/jars/redshift-jdbc42-1.2.10.1009.jar,s3a://mybucket/jars/spark-redshift_2.11-3.0.0-preview1.jar s3a://mybucket/jars/app.jar
In the code I am reading from a Redshift table, but I get
ClassNotFoundException: com.databricks.spark.redshift.DefaultSource
What am I doing wrong?
I'm having issues using the --jars as well...
My advice, for packages available in the Maven repository, is to use --packages instead of --jars, as it resolves the other dependencies within those packages.
USAGE
spark-submit --packages <groupId>:<artifactId>:<version>
In your case, leaving aside all the other options and arguments, it would look like this:
spark-submit --packages com.amazon.redshift:redshift-jdbc42:1.2.10.1009
You can find the group ID, artifact ID, and version in the XML snippet that the Maven repository page provides once you follow the link to your desired version.
The accepted answer to this question provides more info on --jars and --packages.
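For example, keeping the rest of the original command, the submit line could look something like the sketch below. The com.databricks:spark-redshift coordinate is a best-guess placeholder (inferred from the missing class com.databricks.spark.redshift.DefaultSource), so verify the exact groupId:artifactId:version on the Maven repository before using it:
# NOTE: the second --packages coordinate is a placeholder; check it on mvnrepository.com
spark-submit --class Test --master spark://xyz.local:7077 \
--executor-cores 4 --total-executor-cores 32 --executor-memory 6G \
--driver-memory 4G --driver-cores 2 --deploy-mode cluster \
--packages com.amazon.redshift:redshift-jdbc42:1.2.10.1009,com.databricks:spark-redshift_2.11:3.0.0-preview1 \
s3a://mybucket/jars/app.jar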

Spark metrics sink doesn't expose executor metrics

I'm using Spark on YARN with
Ambari 2.7.4
HDP Standalone 3.1.4
Spark 2.3.2
Hadoop 3.1.1
Graphite on Docker latest
I was trying to get Spark metrics with the Graphite sink by following this tutorial.
Advanced spark2-metrics-properties in Ambari are:
driver.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
executor.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
worker.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
master.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
*.sink.graphite.host=ap-test-m.c.gcp-ps.internal
*.sink.graphite.port=2003
*.sink.graphite.protocol=tcp
*.sink.graphite.period=10
*.sink.graphite.unit=seconds
*.sink.graphite.prefix=app-test
*.source.jvm.class=org.apache.spark.metrics.source.JvmSource
Spark submit:
export HADOOP_CONF_DIR=/usr/hdp/3.1.4.0-315/hadoop/conf/; spark-submit --class com.Main --master yarn --deploy-mode client --driver-memory 1g --executor-memory 10g --num-executors 2 --executor-cores 2 spark-app.jar /data
As a result I'm only getting driver metrics.
Also, I tried adding metrics.properties to the spark-submit command together with the global Spark metrics properties, but that didn't help.
Finally, I tried setting the configuration both in spark-submit and in the Java SparkConf:
--conf "spark.metrics.conf.driver.sink.graphite.class"="org.apache.spark.metrics.sink.GraphiteSink"
--conf "spark.metrics.conf.executor.sink.graphite.class"="org.apache.spark.metrics.sink.GraphiteSink"
--conf "worker.sink.graphite.class"="org.apache.spark.metrics.sink.GraphiteSink"
--conf "master.sink.graphite.class"="org.apache.spark.metrics.sink.GraphiteSink"
--conf "spark.metrics.conf.*.sink.graphite.host"="host"
--conf "spark.metrics.conf.*.sink.graphite.port"=2003
--conf "spark.metrics.conf.*.sink.graphite.period"=10
--conf "spark.metrics.conf.*.sink.graphite.unit"=seconds
--conf "spark.metrics.conf.*.sink.graphite.prefix"="app-test"
--conf "spark.metrics.conf.*.source.jvm.class"="org.apache.spark.metrics.source.JvmSource"
But that didn't help either.
CSVSink also gives only driver metrics.
UPDATE
When I submit the job in cluster mode, I get the same metrics as in the Spark History Server, but the JVM metrics are still absent.
Posting to a dated question, but maybe it will help.
It seems the executors do not have a metrics.properties file on their filesystems.
One way to confirm this would be to look at the executor logs:
2020-01-16 10:00:10 ERROR MetricsConfig:91 - Error loading configuration file metrics.properties
java.io.FileNotFoundException: metrics.properties (No such file or directory)
at org.apache.spark.metrics.MetricsConfig.loadPropertiesFromFile(MetricsConfig.scala:132)
at org.apache.spark.metrics.MetricsConfig.initialize(MetricsConfig.scala:55)
at org.apache.spark.metrics.MetricsSystem.<init>(MetricsSystem.scala:95)
at org.apache.spark.metrics.MetricsSystem$.createMetricsSystem(MetricsSystem.scala:233)
To fix this on YARN, pass two parameters to spark-submit:
$ spark-submit \
--files metrics.properties \
--conf spark.metrics.conf=metrics.properties
The --files option ensures that the files listed there are shipped to the executors.
The spark.metrics.conf option points Spark at that custom metrics configuration file.
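Applied to the submit command from the question, the full invocation might look roughly like this (assuming metrics.properties sits in the directory you submit from):
export HADOOP_CONF_DIR=/usr/hdp/3.1.4.0-315/hadoop/conf/; spark-submit --class com.Main --master yarn --deploy-mode client \
--files metrics.properties \
--conf spark.metrics.conf=metrics.properties \
--driver-memory 1g --executor-memory 10g --num-executors 2 --executor-cores 2 \
spark-app.jar /data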
Another way to fix the issue would be to place the metrics.properties file into $SPARK_HOME/conf/metrics.properties on both the driver and executor before starting the job.
More on metrics here: https://spark.apache.org/docs/latest/monitoring.html

spark-submit does not work with my jar located in HDFS

Here is my situation:
Apache Spark version 2.4.4
Hadoop version 2.7.4
My application jar is located in HDFS.
My spark-submit looks like this:
/software/spark-2.4.4-bin-hadoop2.7/bin/spark-submit \
--class com.me.MyClass --master spark://host2.local:7077 \
--deploy-mode cluster \
hdfs://host2.local:9000/apps/myapps.jar
I get this error:
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.tracing.SpanReceiverHost.get(Lorg/apache/hadoop/conf/Configuration;Ljava/lang/String;)Lorg/apache/hadoop/tracing/SpanReceiverHost;
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:634)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:619)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:149)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2598)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2632)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2614)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
at org.apache.spark.deploy.DependencyUtils$$anonfun$resolveGlobPaths$2.apply(DependencyUtils.scala:144)
at org.apache.spark.deploy.DependencyUtils$$anonfun$resolveGlobPaths$2.apply(DependencyUtils.scala:139)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.TraversableLike$$anonfun$flatMap$1.apply(TraversableLike.scala:241)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
at scala.collection.TraversableLike$class.flatMap(TraversableLike.scala:241)
at scala.collection.AbstractTraversable.flatMap(Traversable.scala:104)
at org.apache.spark.deploy.DependencyUtils$.resolveGlobPaths(DependencyUtils.scala:139)
at org.apache.spark.deploy.DependencyUtils$$anonfun$resolveAndDownloadJars$1.apply(DependencyUtils.scala:61)
at org.apache.spark.deploy.DependencyUtils$$anonfun$resolveAndDownloadJars$1.apply(DependencyUtils.scala:64)
at scala.Option.map(Option.scala:146)
at org.apache.spark.deploy.DependencyUtils$.resolveAndDownloadJars(DependencyUtils.scala:60)
at org.apache.spark.deploy.worker.DriverWrapper$.setupDependencies(DriverWrapper.scala:96)
at org.apache.spark.deploy.worker.DriverWrapper$.main(DriverWrapper.scala:60)
at org.apache.spark.deploy.worker.DriverWrapper.main(DriverWrapper.scala)
Any pointers on how to solve this, please?
Thank you.
There is no need to transfer the jar into the cluster; you can run your jar from your local user account itself, with executable permission.
Once your application is built, transfer the .jar to your Unix user account and give it executable permissions. Have a look at the spark-submit below:
spark-submit --master yarn --deploy-mode cluster --queue default \
--files "full path of your properties file" --driver-memory 4G \
--num-executors 8 --executor-cores 1 --executor-memory 4G \
--class "main class name" \
"full path of the jar which you have transferred to your local unix id"
You can use other spark-submit configuration parameters if you want. Please note that with some distributions you have to use spark2-submit instead of spark-submit when multiple Spark versions are installed.
--deploy-mode cluster will help in this case; shipping the jars to the cluster is taken care of by YARN.
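For illustration, with hypothetical local paths and the class name taken from the question, the filled-in command could look like this:
spark-submit --master yarn --deploy-mode cluster --queue default \
--files /home/myuser/conf/app.properties \
--driver-memory 4G --num-executors 8 --executor-cores 1 --executor-memory 4G \
--class com.me.MyClass \
/home/myuser/jars/myapps.jar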

Spark YARN on EMR - JavaSparkContext - IllegalStateException: Library directory does not exist

I have a Java Spark job that works on a manually deployed Spark 1.6.0 in standalone mode on EC2.
I am spark-submitting this job to an EMR 5.3.0 cluster from the master node using YARN, but it fails.
The spark-submit line is:
spark-submit --class <startclass> --master yarn --queue default --deploy-mode cluster --conf spark.eventLog.enabled=true --conf spark.eventLog.dir=hdfs://`hostname -f`:8020/tmp/ourSparkLogs --driver-memory 4G --executor-memory 4G --executor-cores 2 hdfs://`hostname -f`:8020/data/x.jar yarn-client
The "yarn-client" is the first argument to the x.jar application and is fed to the SparkContext as setMaster,
conf.setMaster(args[0]);
When I submit it, it starts out running fine, until I initialize the JavaSparkContext from a SparkConf,
JavaSparkContext sc = new JavaSparkContext(conf);
... and then Spark crashes.
In the YARN log, I can see the following,
yarn logs -applicationId application_1487325147456_0051
...
17/02/17 16:27:13 WARN Client: Neither spark.yarn.jars nor spark.yarn.archive is set, falling back to uploading libraries under SPARK_HOME.
17/02/17 16:27:13 INFO Client: Deleted staging directory hdfs://ip-172-31-8-237.eu-west-1.compute.internal:8020/user/ec2-user/.sparkStaging/application_1487325147456_0052
17/02/17 16:27:13 ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalStateException: Library directory '/mnt/yarn/usercache/ec2-user/appcache/application_1487325147456_0051/container_1487325147456_0051_01_000001/assembly/target/scala-2.11/jars' does not exist; make sure Spark is built.
...
Noting the WARN about the missing spark.yarn.jars setting, I found a Spark YARN JAR file in
/usr/lib/spark/jars/
... and uploaded it to HDFS per Cloudera's guide on how to run Spark applications on YARN, and tried to add that conf, so this became my spark-submit line:
spark-submit --class <startclass> --master yarn --queue default --deploy-mode cluster --conf spark.eventLog.enabled=true --conf spark.eventLog.dir=hdfs://`hostname -f`:8020/tmp/ourSparkLogs --conf spark.yarn.jars=hdfs://`hostname -f`:8020/sparkyarnlibs/spark-yarn_2.11-2.1.0.jar --driver-memory 4G --executor-memory 4G --executor-cores 2 hdfs://`hostname -f`:8020/data/x.jar yarn-client
But that did not work and gave this:
Could not find or load main class org.apache.spark.deploy.yarn.ApplicationMaster
I am really puzzled as to what is causing that library error and how to proceed from here.
You have specified "--deploy-mode cluster" and yet are calling conf.setMaster("yarn-client") from the code. Using a master URL of "yarn-client" means "use YARN as the master, and use client mode (not cluster mode)", so I wouldn't be surprised if this is somehow confusing Spark because on one hand you're telling it to use cluster mode and on the other you're telling it to use client mode.
By the way, using a master URL like "yarn-client" or "yarn-cluster" is actually deprecated because the "-client" or "-cluster" part is not really part of the Master but rather is the deploy mode. That is, "--master yarn-client" is really more of a shortcut/alias for "--master yarn --deploy-mode client", and similarly "--master yarn-cluster" just means "--master yarn --deploy-mode cluster".
My recommendation would be to not call conf.setMaster() from your code, since the master is already set to "yarn" automatically in /etc/spark/conf/spark-defaults.conf. For this reason, you also don't need to pass "--master yarn" to spark-submit.
Lastly, it sounds like you need to decide whether you really want to use client deploy mode or cluster deploy mode. With client deploy mode, the driver runs on the master instance, and with cluster deploy mode, the driver runs in a YARN container on one of the core/task instances. See https://spark.apache.org/docs/latest/running-on-yarn.html for more information.
If you want to use client deploy mode, you don't need to pass anything extra because it's already the default. If you want to use cluster deploy mode, pass "--deploy-mode cluster".
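As a rough sketch of that recommendation (the class and application names below are placeholders), the driver code would build the context without calling setMaster():
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class StartClass {
    public static void main(String[] args) {
        // Let spark-submit and /etc/spark/conf/spark-defaults.conf supply the master and deploy mode
        SparkConf conf = new SparkConf().setAppName("my-emr-job");
        JavaSparkContext sc = new JavaSparkContext(conf);
        // ... job logic ...
        sc.stop();
    }
}
and the job would then be submitted without --master and without the trailing "yarn-client" argument, for example:
spark-submit --class <startclass> --queue default --deploy-mode cluster \
--conf spark.eventLog.enabled=true --conf spark.eventLog.dir=hdfs://`hostname -f`:8020/tmp/ourSparkLogs \
--driver-memory 4G --executor-memory 4G --executor-cores 2 \
hdfs://`hostname -f`:8020/data/x.jar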

Dependency is not distributed to Spark cluster

I'm trying to execute a Spark job on a Mesos cluster that depends on the spark-cassandra-connector library, but it keeps failing with
Exception in thread "main" java.lang.NoClassDefFoundError: com/datastax/spark/connector/package$
As I understand from the Spark documentation:
JARs and files are copied to the working directory for each SparkContext on the executor nodes.
...
Users may also include any other dependencies by supplying a comma-delimited list of maven coordinates with --packages.
But it seems that only the task jar, pucker-assembly-1.0.jar, is distributed.
I'm running Spark 1.6.1 with Scala 2.10.6.
And here's the spark-submit command I'm executing:
spark-submit --deploy-mode cluster \
--master mesos://localhost:57811 \
--conf spark.ssl.noCertVerification=true \
--packages datastax:spark-cassandra-connector:1.5.1-s_2.10 \
--conf spark.cassandra.connection.host=10.0.1.83,10.0.1.86,10.0.1.85 \
--driver-cores 3 \
--driver-memory 4000M \
--class SimpleApp \
https://dripit-spark.s3.amazonaws.com/pucker-assembly-1.0.jar \
s3n://logs/E1SR85P3DEM3LU.2016-05-05-11.ceaeb015.gz
So why isn't spark-cassandra-connector distributed to all my Spark executors?
You should use the correct Maven coordinate syntax:
--packages com.datastax.spark:spark-cassandra-connector_2.10:1.6.0
See
https://mvnrepository.com/artifact/com.datastax.spark/spark-cassandra-connector_2.10
http://spark.apache.org/docs/latest/submitting-applications.html
http://spark.apache.org/docs/latest/programming-guide.html#using-the-shell
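Putting it together with the original command (everything except the --packages coordinate is copied from the question), the corrected submit would look roughly like this:
spark-submit --deploy-mode cluster \
--master mesos://localhost:57811 \
--conf spark.ssl.noCertVerification=true \
--packages com.datastax.spark:spark-cassandra-connector_2.10:1.6.0 \
--conf spark.cassandra.connection.host=10.0.1.83,10.0.1.86,10.0.1.85 \
--driver-cores 3 \
--driver-memory 4000M \
--class SimpleApp \
https://dripit-spark.s3.amazonaws.com/pucker-assembly-1.0.jar \
s3n://logs/E1SR85P3DEM3LU.2016-05-05-11.ceaeb015.gz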
