Dependency is not distributed to Spark cluster - apache-spark

I'm trying to execute a Spark job on a Mesos cluster that depends on the spark-cassandra-connector library, but it keeps failing with:
Exception in thread "main" java.lang.NoClassDefFoundError: com/datastax/spark/connector/package$
As I understand from the Spark documentation:
JARs and files are copied to the working directory for each SparkContext on the executor nodes.
...
Users may also include any other dependencies by supplying a comma-delimited list of maven coordinates with --packages.
But it seems that only the task jar itself, pucker-assembly-1.0.jar, is distributed.
I'm running Spark 1.6.1 with Scala 2.10.6.
Here's the spark-submit command I'm executing:
spark-submit --deploy-mode cluster
--master mesos://localhost:57811
--conf spark.ssl.noCertVerification=true
--packages datastax:spark-cassandra-connector:1.5.1-s_2.10
--conf spark.cassandra.connection.host=10.0.1.83,10.0.1.86,10.0.1.85
--driver-cores 3
--driver-memory 4000M
--class SimpleApp
https://dripit-spark.s3.amazonaws.com/pucker-assembly-1.0.jar
s3n://logs/E1SR85P3DEM3LU.2016-05-05-11.ceaeb015.gz
So why isn't spark-cassandra-connector distributed to all my Spark executors?

You should use the correct Maven coordinate syntax:
--packages com.datastax.spark:spark-cassandra-connector_2.10:1.6.0
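For example, the full submit command from your question with just that coordinate swapped in might look like this (a sketch that keeps all of your other options unchanged):
spark-submit --deploy-mode cluster \
  --master mesos://localhost:57811 \
  --conf spark.ssl.noCertVerification=true \
  --packages com.datastax.spark:spark-cassandra-connector_2.10:1.6.0 \
  --conf spark.cassandra.connection.host=10.0.1.83,10.0.1.86,10.0.1.85 \
  --driver-cores 3 \
  --driver-memory 4000M \
  --class SimpleApp \
  https://dripit-spark.s3.amazonaws.com/pucker-assembly-1.0.jar \
  s3n://logs/E1SR85P3DEM3LU.2016-05-05-11.ceaeb015.gz
With a proper Maven coordinate, spark-submit resolves the connector and its transitive dependencies and ships them to the executors along with your application jar.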
See
https://mvnrepository.com/artifact/com.datastax.spark/spark-cassandra-connector_2.10
http://spark.apache.org/docs/latest/submitting-applications.html
http://spark.apache.org/docs/latest/programming-guide.html#using-the-shell

Related

Spark --jars option added jar are not working

I am trying to add the Redshift JARs using a spark-submit option.
I'm running this command on Spark 2.1.0:
spark-submit --class Test --master spark://xyz.local:7077 --executor-cores 4 --total-executor-cores 32 --executor-memory 6G --driver-memory 4G --driver-cores 2 --deploy-mode cluster -jars s3a://d11-batch-jobs-on-spark/jars/redshift-jdbc42-1.2.10.1009.jar,s3a://mybucket/jars/spark-redshift_2.11-3.0.0-preview1.jar s3a://mybucket/jars/app.jar
In the code I am reading from a Redshift table, but I get:
ClassNotFoundException: com.databricks.spark.redshift.DefaultSource
What am I doing wrong?
I'm having issues using the --jars as well...
My advice, for packages that live in the Maven repository, is to use --packages instead of --jars, since it also resolves the other dependencies within those packages.
USAGE
spark-submit --packages <groupId>:<artifactId>:<version>
In your case, leaving out all the other options and args, it would look like this:
spark-submit --packages com.amazon.redshift:redshift-jdbc42:1.2.10.1009
You can find the group ID, artifact ID, and version in the XML snippet Maven shows after you follow the link to your desired version.
The accepted answer to this question provides more info on --jars and --packages.
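Applied to the command above, a sketch could look like the following. I'm assuming the Redshift JDBC coordinate from the usage above resolves for you (older driver versions may only be available from Amazon's own Maven repository, which you can add with --repositories), and that the spark-redshift package you want is the com.databricks artifact:
spark-submit --class Test --master spark://xyz.local:7077 \
  --executor-cores 4 --total-executor-cores 32 \
  --executor-memory 6G --driver-memory 4G --driver-cores 2 \
  --deploy-mode cluster \
  --packages com.amazon.redshift:redshift-jdbc42:1.2.10.1009,com.databricks:spark-redshift_2.11:3.0.0-preview1 \
  s3a://mybucket/jars/app.jar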

Spark metrics sink doesn't expose executor's metrics

I'm using Spark on YARN with
Ambari 2.7.4
HDP Standalone 3.1.4
Spark 2.3.2
Hadoop 3.1.1
Graphite on Docker latest
I was trying to get Spark metrics with the Graphite sink, following this tutorial.
Advanced spark2-metrics-properties in Ambari are:
driver.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
executor.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
worker.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
master.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
*.sink.graphite.host=ap-test-m.c.gcp-ps.internal
*.sink.graphite.port=2003
*.sink.graphite.protocol=tcp
*.sink.graphite.period=10
*.sink.graphite.unit=seconds
*.sink.graphite.prefix=app-test
*.source.jvm.class=org.apache.spark.metrics.source.JvmSource
Spark submit:
export HADOOP_CONF_DIR=/usr/hdp/3.1.4.0-315/hadoop/conf/; spark-submit --class com.Main --master yarn --deploy-mode client --driver-memory 1g --executor-memory 10g --num-executors 2 --executor-cores 2 spark-app.jar /data
As a result, I'm only getting driver metrics.
Also, I tried adding metrics.properties to the spark-submit command together with the global Spark metrics props, but that didn't help.
Finally, I tried setting the conf both in spark-submit and in the Java SparkConf:
--conf "spark.metrics.conf.driver.sink.graphite.class"="org.apache.spark.metrics.sink.GraphiteSink"
--conf "spark.metrics.conf.executor.sink.graphite.class"="org.apache.spark.metrics.sink.GraphiteSink"
--conf "worker.sink.graphite.class"="org.apache.spark.metrics.sink.GraphiteSink"
--conf "master.sink.graphite.class"="org.apache.spark.metrics.sink.GraphiteSink"
--conf "spark.metrics.conf.*.sink.graphite.host"="host"
--conf "spark.metrics.conf.*.sink.graphite.port"=2003
--conf "spark.metrics.conf.*.sink.graphite.period"=10
--conf "spark.metrics.conf.*.sink.graphite.unit"=seconds
--conf "spark.metrics.conf.*.sink.graphite.prefix"="app-test"
--conf "spark.metrics.conf.*.source.jvm.class"="org.apache.spark.metrics.source.JvmSource"
But that didn't help either.
CSVSink also gives only driver metrics.
UPD
When I submit the job in cluster mode, I get the same metrics as in the Spark History Server, but the JVM metrics are still absent.
Posting to a dated question, but maybe it will help.
It looks like the executors do not have the metrics.properties file on their filesystems.
One way to confirm this would be to look at the executor logs:
2020-01-16 10:00:10 ERROR MetricsConfig:91 - Error loading configuration file metrics.properties
java.io.FileNotFoundException: metrics.properties (No such file or directory)
at org.apache.spark.metrics.MetricsConfig.loadPropertiesFromFile(MetricsConfig.scala:132)
at org.apache.spark.metrics.MetricsConfig.initialize(MetricsConfig.scala:55)
at org.apache.spark.metrics.MetricsSystem.<init>(MetricsSystem.scala:95)
at org.apache.spark.metrics.MetricsSystem$.createMetricsSystem(MetricsSystem.scala:233)
To fix this on YARN, pass two parameters to spark-submit:
$ spark-submit \
--files metrics.properties \
--conf spark.metrics.conf=metrics.properties
The --files option ensures that the files listed there are shipped to the executors.
The spark.metrics.conf option points Spark at the custom location of the metrics configuration file.
Another way to fix the issue is to place the metrics.properties file at $SPARK_HOME/conf/metrics.properties on both the driver and executor nodes before starting the job.
More on metrics here: https://spark.apache.org/docs/latest/monitoring.html
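Applied to the submit command from the question, the fix is just the two extra arguments (assuming metrics.properties sits in the directory you submit from):
export HADOOP_CONF_DIR=/usr/hdp/3.1.4.0-315/hadoop/conf/
spark-submit --class com.Main --master yarn --deploy-mode client \
  --driver-memory 1g --executor-memory 10g \
  --num-executors 2 --executor-cores 2 \
  --files metrics.properties \
  --conf spark.metrics.conf=metrics.properties \
  spark-app.jar /data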

Dependency is not added to Spark + Zeppelin

I can't add a custom dependency to the Spark classpath from Zeppelin.
Environment:
AWS EMR: Zeppelin 0.8.0, Spark 2.4.0
Extra configs for the Spark interpreter:
spark.jars.ivySettings /tmp/ivy-settings.xml
spark.jars.packages my-group-name:artifact_2.11:version
The files from my-group-name appeared in
spark.yarn.dist.jars
spark.yarn.secondary.jars
but they are not accessible from the Zeppelin notebook (checked with import my.lab._).
However, when I run with the same configs in spark-shell, it works both on my local machine and over SSH on the EMR cluster, and the imports are available from spark-shell.
sun.java.command for Zeppelin:
org.apache.spark.deploy.SparkSubmit --master yarn-client ... --conf spark.jars.packages=my-group-name:artifact_2.11:version ... --conf spark.jars.ivySettings=/tmp/ivy-settings.xml ... --class org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer /usr/lib/zeppelin/interpreter/spark/spark-interpreter-0.8.0.jar <IP ADDRESS> 34717 :
The spark-shell command on EMR:
spark-shell --master yarn-client --conf spark.jars.ivySettings="/tmp/ivy-settings.xml" --conf spark.jars.packages="my-group-name:artifact_2.11:version"
Any advice on where to look for the error?
You can try adding your jar directly to Zeppelin, in the Interpreter settings:
http://zeppelin.apache.org/docs/0.8.0/usage/interpreter/dependency_management.html
Or add the jar to the Spark libs (in my case that's the /usr/hdp/current/spark2/jars/ directory).
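For the second option, a rough sketch (the jar name is a placeholder, and the jars directory depends on your distribution; on EMR it is typically /usr/lib/spark/jars/ rather than the HDP path above). The copy has to happen on every node, and the Spark interpreter needs a restart afterwards:
# placeholder jar name; repeat on every node of the cluster
sudo cp artifact_2.11-x.y.z.jar /usr/lib/spark/jars/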

How to share files on master node to executors in Spark, How to use --files argument?

Can someone please explain how I can ship files from the master to all executors using the --files argument in spark-submit?
/bin/spark-submit --master yarn --queue development --conf spark.memory.offHeap.enabled=true --conf spark.memory.offHeap.size=128G --files /keras/mnist.npz
But this gives me an error. I am new to Spark.
Exception in thread "main" java.lang.IllegalArgumentException: Missing application resource.
Obviously, you didn't specify the application class in this command. You can find more details in Running Spark on YARN.
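A sketch of a complete command (the class name and application jar here are placeholders for your own):
/bin/spark-submit --master yarn --queue development \
  --conf spark.memory.offHeap.enabled=true \
  --conf spark.memory.offHeap.size=128G \
  --files /keras/mnist.npz \
  --class com.example.MyApp \
  my-app.jar
Files shipped with --files end up in each executor's working directory, and their local path can be looked up with SparkFiles.get("mnist.npz").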

SnappyData Smart Connector - how to run jobs

I'm reading the documentation and I would like to ask for help understanding the SnappyData Smart Connector.
There are a few different examples in the documentation of how spark-submit should be used, e.g.:
example 1
./bin/spark-submit --deploy-mode cluster --class somePackage.someClass
--master spark://localhost:7077 --conf spark.snappydata.connection=localhost:1527
--packages "SnappyDataInc:snappydata:1.0.0-s_2.11"
example 2
// Start the Spark standalone cluster from SnappyData base directory
$ sbin/start-all.sh
// Submit AirlineDataSparkApp to the Spark cluster with SnappyData's locator host:port.
$ bin/spark-submit --class io.snappydata.examples.AirlineDataSparkApp --master spark://masterhost:7077 --conf spark.snappydata.connection=locatorhost:clientPort --conf spark.ui.port=4041 $SNAPPY_HOME/examples/jars/quickstart.jar
example 3
$ <Spark_Product_Home>/bin/spark-submit --master local[*] \
  --conf spark.snappydata.connection=localhost:1527 \
  --class org.apache.spark.examples.snappydata.SmartConnectorExample \
  --packages SnappyDataInc:snappydata:1.0.0-s_2.11 \
  <SnappyData_Product_Home>/examples/jars/quickstart.jar
Let's say I have a Spark cluster on 3 hosts: 1 master and 3 workers.
I would like to use a SnappyData cluster as a data source for my current Spark environment.
Should I use the command from example 1, 2, or 3?
Could you also explain what the --deploy-mode argument in spark-submit is? - http://snappydatainc.github.io/snappydata/affinity_modes/connector_mode/
What is the difference between cluster mode and client mode for spark-submit?
Thank you in advance for any help.
Regards,
Deploy-mode is explained here. It is no different when using SnappyData. When you run your own Spark cluster (any Spark distribution compatible with Spark 2.1), working with SnappyData only requires you to configure the SnappyData locator (e.g. localhost:1527).
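For your setup, example 2's style fits: you keep your own Spark master and only point spark.snappydata.connection at the SnappyData locator. A sketch (host names, port, and the application jar are placeholders):
bin/spark-submit --master spark://masterhost:7077 \
  --conf spark.snappydata.connection=locatorhost:1527 \
  --packages SnappyDataInc:snappydata:1.0.0-s_2.11 \
  --class somePackage.someClass \
  your-app.jar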
