Dependency is not added to Spark + Zeppelin - apache-spark

I can't add a custom dependency to the Spark classpath from Zeppelin.
Environment:
AWS EMR: Zeppelin 0.8.0, Spark 2.4.0
Extra configs for the Spark interpreter:
spark.jars.ivySettings /tmp/ivy-settings.xml
spark.jars.packages my-group-name:artifact_2.11:version
The jars from my-group-name appeared in
spark.yarn.dist.jars
spark.yarn.secondary.jars
But they are not accessible from the Zeppelin notebook (checked with import my.lab._).
However, when I run spark-shell with the same configs, it works both on my local machine and over SSH on the EMR cluster, and the imports are available from spark-shell.
sun.java.command for Zeppelin:
org.apache.spark.deploy.SparkSubmit --master yarn-client ... --conf spark.jars.packages=my-group-name:artifact_2.11:version ... --conf spark.jars.ivySettings=/tmp/ivy-settings.xml ... --class org.apache.zeppelin.interpreter.remote.RemoteInterpreterServer /usr/lib/zeppelin/interpreter/spark/spark-interpreter-0.8.0.jar <IP ADDRESS> 34717 :
spark-shell invocation on EMR:
spark-shell --master yarn-client --conf spark.jars.ivySettings="/tmp/ivy-settings.xml" --conf spark.jars.packages="my-group-name:artifact_2.11:version"
Any advice on where to look for the error?

You can try adding your jar directly to Zeppelin, in the interpreter settings:
http://zeppelin.apache.org/docs/0.8.0/usage/interpreter/dependency_management.html
Or add the jar to the Spark libs directory (in my case, /usr/hdp/current/spark2/jars/).
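For example, you can load the artifact through Zeppelin's dynamic dependency loader in a paragraph that runs before the first %spark paragraph (the coordinate below is just the artifact from your question):

%spark.dep
z.reset()
z.load("my-group-name:artifact_2.11:version")

If the Spark interpreter has already started, you may need to restart it first, otherwise the dep interpreter will refuse to load new artifacts.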

Related

FileNotFound error when running spark-submit

I am trying to run the spark-submit command on my Hadoop cluster.
Here is a summary of my Hadoop cluster:
The cluster is built using 5 VirtualBox VMs connected on an internal network.
There is 1 namenode and 4 datanodes.
All the VMs were built from the Bitnami Hadoop Stack VirtualBox image.
When I run the following command:
spark-submit --class org.apache.spark.examples.SparkPi $SPARK_HOME/examples/jars/spark-examples_2.12-3.0.3.jar 10
I receive the following error:
java.io.FileNotFoundException: File file:/home/bitnami/sparkStaging/bitnami/.sparkStaging/application_1658417340986_0002/__spark_conf__.zip does not exist
I also get a similar error when trying to create a SparkSession using PySpark:
spark = SparkSession.builder.appName('appName').getOrCreate()
I have tried/verified the following:
Environment variables HADOOP_HOME, SPARK_HOME, and HADOOP_CONF_DIR have been set in my .bashrc file.
SPARK_DIST_CLASSPATH and HADOOP_CONF_DIR have been defined in spark-env.sh.
spark.master yarn, spark.yarn.stagingDir file:///home/bitnami/sparkStaging, and spark.yarn.jars file:///opt/bitnami/hadoop/spark/jars/ have been added to spark-defaults.conf.
I believe spark.yarn.stagingDir needs to be an HDFS path.
More specifically, the YARN staging directory needs to be accessible to all Spark executors, not just as a local file path on the machine from which you run spark-submit.
The path that isn't found is being reported from the YARN cluster, where /home/bitnami might not exist, or where the Unix user running the Spark executor containers does not have access to that path.
Similarly, spark.yarn.jars (or spark.yarn.archive) should be HDFS paths, because these will get downloaded, in parallel, across all executors.
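As a minimal sketch (the HDFS locations below are assumptions; use whatever paths fit your cluster), upload the jars once and point the config at HDFS instead of file://:

hdfs dfs -mkdir -p /spark/jars /user/bitnami/sparkStaging
hdfs dfs -put $SPARK_HOME/jars/*.jar /spark/jars/

Then in spark-defaults.conf:

spark.master yarn
spark.yarn.stagingDir hdfs:///user/bitnami/sparkStaging
spark.yarn.jars hdfs:///spark/jars/*.jar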
Since the Spark job is supposed to be submitted to the Hadoop cluster managed by YARN, master and deploy mode have to be set. From the Spark 3.3.0 docs:
# Run on a YARN cluster in cluster deploy mode
export HADOOP_CONF_DIR=XXX
./bin/spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn \
--deploy-mode cluster \
--executor-memory 20G \
--num-executors 50 \
/path/to/examples.jar \
1000
Or programmatically (note that in PySpark, builder is a property, not a method):
spark = SparkSession.builder.appName('appName').master("yarn").config("spark.submit.deployMode", "cluster").getOrCreate()

Load properties file in Spark classpath during spark-submit execution

I'm installing the Spark Atlas Connector via a spark-submit script (https://github.com/hortonworks-spark/spark-atlas-connector).
Due to security restrictions, I can't put atlas-application.properties in the spark/conf directory.
I used these two options with spark-submit:
--driver-class-path "spark.driver.extraClassPath=hdfs:///directory_to_properties_files" \
--conf "spark.executor.extraClassPath=hdfs:///directory_to_properties_files" \
When I launch the spark-submit, I encounter this issue:
20/07/20 11:32:50 INFO ApplicationProperties: Looking for atlas-application.properties in classpath
20/07/20 11:32:50 INFO ApplicationProperties: Looking for /atlas-application.properties in classpath
20/07/20 11:32:50 INFO ApplicationProperties: Loading atlas-application.properties from null
Please see the CDP Atlas configuration article:
https://community.cloudera.com/t5/Community-Articles/How-to-pass-atlas-application-properties-configuration-file/ta-p/322158
Client Mode:
spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode client --driver-java-options="-Datlas.conf=/tmp/" /opt/cloudera/parcels/CDH/jars/spark-examples*.jar 10
Cluster Mode:
sudo -u spark spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster --files /tmp/atlas-application.properties --conf spark.driver.extraJavaOptions="-Datlas.conf=./" /opt/cloudera/parcels/CDH/jars/spark-examples*.jar 10
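Adapted to the question's setup, a hedged sketch (the application class and jar are placeholders; the key point is shipping the properties file with --files and pointing atlas.conf at the container's working directory):

spark-submit --master yarn --deploy-mode cluster \
  --files /tmp/atlas-application.properties \
  --conf spark.driver.extraJavaOptions="-Datlas.conf=./" \
  --class your.package.YourApp your-app.jar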

spark-submit on kubernetes cluster does not recognise k8s --master property

I have successfully installed a Kubernetes cluster and can verify this by:
C:\windows\system32>kubectl cluster-info
Kubernetes master is running at https://<ip>:<port>
KubeDNS is running at https://<ip>:<port>/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Then I am trying to run the SparkPi example with the Spark distribution I downloaded from https://spark.apache.org/downloads.html.
spark-submit --master k8s://https://192.168.99.100:8443 --deploy-mode cluster --name spark-pi --class org.apache.spark.examples.SparkPi --conf spark.executor.instances=2 --conf spark.kubernetes.container.image=gettyimages/spark c:\users\<username>\Desktop\spark-2.4.0-bin-hadoop2.7\examples\jars\spark-examples_2.11-2.4.0.jar
I am getting this error:
Error: Master must either be yarn or start with spark, mesos, local
Run with --help for usage help or --verbose for debug output
I tried versions 2.4.0 and 2.3.3. I also tried
spark-submit --help
to see what it reports for the --master property. This is what I get:
--master MASTER_URL spark://host:port, mesos://host:port, yarn, or local.
According to the documentation on running Spark workloads in Kubernetes (https://spark.apache.org/docs/latest/running-on-kubernetes.html), spark-submit does not even seem to recognise the k8s value for master, although it is listed among the possible Spark masters (https://spark.apache.org/docs/latest/submitting-applications.html#master-urls).
Any ideas? What would I be missing here?
Thanks
The issue was that my CMD was picking up a previously installed spark-submit (version 2.2), even though I was running the command from the bin directory of the newer Spark installation. Since Kubernetes support was only added in Spark 2.3, the older binary rejects the k8s:// master.
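Two quick checks that show which binary the shell is actually resolving and which version it reports (run from any directory, not the Spark bin folder):

C:\> where spark-submit
C:\> spark-submit --version

If where lists the old 2.2 installation first, either remove it from PATH or invoke the newer bin\spark-submit by its full path.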

SnappyData Smart Connector - how to run jobs

I'm reading the documentation and would like some help understanding the SnappyData Smart Connector.
There are a few different examples in the documentation of how to use spark-submit, e.g.:
example 1
./bin/spark-submit --deploy-mode cluster --class somePackage.someClass \
  --master spark://localhost:7077 --conf spark.snappydata.connection=localhost:1527 \
  --packages "SnappyDataInc:snappydata:1.0.0-s_2.11"
example 2
// Start the Spark standalone cluster from SnappyData base directory
$ sbin/start-all.sh
// Submit AirlineDataSparkApp to the Spark cluster with snappydata's locator host port.
$ bin/spark-submit --class io.snappydata.examples.AirlineDataSparkApp --master spark://masterhost:7077 --conf spark.snappydata.connection=locatorhost:clientPort --conf spark.ui.port=4041 $SNAPPY_HOME/examples/jars/quickstart.jar
example 3
$ <Spark_Product_Home>/bin/spark-submit --master local[*] \
  --conf spark.snappydata.connection=localhost:1527 \
  --class org.apache.spark.examples.snappydata.SmartConnectorExample \
  --packages SnappyDataInc:snappydata:1.0.0-s_2.11 \
  <SnappyData_Product_Home>/examples/jars/quickstart.jar
Let's say I have a Spark cluster on 3 hosts: 1 master and 3 workers.
I would like to use a SnappyData cluster as a data source for my current Spark environment.
Should I use the command from example 1, 2, or 3?
Could you also explain what the --deploy-mode argument in spark-submit does (http://snappydatainc.github.io/snappydata/affinity_modes/connector_mode/)?
What is the difference between cluster mode and client mode for spark-submit?
Thank you in advance for any help.
Regards,
Deploy-mode is explained here. It is no different when using SnappyData. When running your own Spark cluster (any Spark distribution compatible with Spark 2.1), working with SnappyData only requires you to configure the Snappy locator (e.g. localhost:1527).
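For illustration, the submit command is the same in both modes apart from the --deploy-mode flag; in client mode the driver runs on the machine where you invoke spark-submit, while in cluster mode it runs on one of the cluster's workers (hosts, ports, class, and jar below are placeholders taken from the examples above):

$ bin/spark-submit --master spark://masterhost:7077 --deploy-mode client --conf spark.snappydata.connection=locatorhost:1527 --class somePackage.someClass app.jar
$ bin/spark-submit --master spark://masterhost:7077 --deploy-mode cluster --conf spark.snappydata.connection=locatorhost:1527 --class somePackage.someClass app.jar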

Dependency is not distributed to Spark cluster

I'm trying to execute a Spark job on a Mesos cluster that depends on the spark-cassandra-connector library, but it keeps failing with:
Exception in thread "main" java.lang.NoClassDefFoundError: com/datastax/spark/connector/package$
As I understand from the Spark documentation:
JARs and files are copied to the working directory for each SparkContext on the executor nodes.
...
Users may also include any other dependencies by supplying a comma-delimited list of maven coordinates with --packages.
But it seems that only the pucker-assembly-1.0.jar task jar is distributed.
I'm running Spark 1.6.1 with Scala 2.10.6.
And here's the spark-submit command I'm executing:
spark-submit --deploy-mode cluster \
  --master mesos://localhost:57811 \
  --conf spark.ssl.noCertVerification=true \
  --packages datastax:spark-cassandra-connector:1.5.1-s_2.10 \
  --conf spark.cassandra.connection.host=10.0.1.83,10.0.1.86,10.0.1.85 \
  --driver-cores 3 \
  --driver-memory 4000M \
  --class SimpleApp \
  https://dripit-spark.s3.amazonaws.com/pucker-assembly-1.0.jar \
  s3n://logs/E1SR85P3DEM3LU.2016-05-05-11.ceaeb015.gz
So why isn't spark-cassandra-connector distributed to all my Spark executors?
You should use the correct Maven coordinate syntax:
--packages com.datastax.spark:spark-cassandra-connector_2.10:1.6.0
See
https://mvnrepository.com/artifact/com.datastax.spark/spark-cassandra-connector_2.10
http://spark.apache.org/docs/latest/submitting-applications.html
http://spark.apache.org/docs/latest/programming-guide.html#using-the-shell
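As a sketch, the full corrected command keeps everything else from the question and only swaps the --packages coordinate (the 1.6.0 version is an assumption; pick the connector release matching your Spark 1.6 / Scala 2.10 setup):

spark-submit --deploy-mode cluster \
  --master mesos://localhost:57811 \
  --conf spark.ssl.noCertVerification=true \
  --packages com.datastax.spark:spark-cassandra-connector_2.10:1.6.0 \
  --conf spark.cassandra.connection.host=10.0.1.83,10.0.1.86,10.0.1.85 \
  --driver-cores 3 \
  --driver-memory 4000M \
  --class SimpleApp \
  https://dripit-spark.s3.amazonaws.com/pucker-assembly-1.0.jar \
  s3n://logs/E1SR85P3DEM3LU.2016-05-05-11.ceaeb015.gz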
