How to specify --jars in spark-submit? - apache-spark

I want to add a couple of GCS connector jars to spark-submit. These will allow my Spark application to read files from Google Cloud Storage.
I can do the following in Python:
from pyspark.sql import SparkSession
from pyspark import pandas as pd
spark = SparkSession.builder.config("spark.jars", "/usr/local/share/google/dataproc/lib/gcs-connector.jar,/usr/local/share/google/dataproc/lib/gcs-connector-hadoop3-2.2.3.jar").getOrCreate()
pd.read_csv("gs://monsoon-credittech.appspot.com/mar19/training_data.csv", nrows=2)
But I cannot achieve the same with the spark-submit utility:
spark-submit --master yarn --jars /usr/local/share/google/dataproc/lib/gcs-connector-hadoop3-2.2.3.jar, /usr/local/share/google/dataproc/lib/gcs-connector.jar testing_dep.py --num 2 --path gs://monsoon-credittech.appspot.com/mar19/training_data.csv
output:
2022-01-08 16:34:43,605 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" org.apache.spark.SparkException: No main class set in JAR; please specify one with --class.
at org.apache.spark.deploy.SparkSubmit.error(SparkSubmit.scala:972)
at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:492)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:898)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1043)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1052)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Why is spark-submit asking me to provide --class? My entry point is testing_dep.py, so there is no class to specify. How can I add those jars?
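As the answers to the related questions below point out, the value passed to --jars must be a single comma-separated list with no spaces in it. A sketch of the corrected invocation, assuming the stray space after the comma is the only problem:
spark-submit --master yarn --jars /usr/local/share/google/dataproc/lib/gcs-connector-hadoop3-2.2.3.jar,/usr/local/share/google/dataproc/lib/gcs-connector.jar testing_dep.py --num 2 --path gs://monsoon-credittech.appspot.com/mar19/training_data.csv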

Related

Pyspark - Failed to get main class in JAR with error 'File file:/home/xpto/spark/, does not exist'

I'm using pyspark to write into Kafka.
When I run the command:
bin/spark-submit --packages org.apache.spark:spark-streaming-kafka-0-10-assembly_2.12:3.0.1,org.apache.spark:spark-sql-kafka-0-10_2.11:2.0.2 --jars /home/xpto/spark/jars/spark-streaming-kafka-0-10-assembly_2.12-3.0.1.jar , /home/xpto/spark/jars/spark-sql-kafka-0-10_2.11-2.0.2.jar , /home/xpto/spark/jars/kafka-clients-2.6.0.jar --verbose --master local[2] /home/xavy/Documents/PersonalProjects/Covid19Analysis/pyspark_job_to_write_data_to_kafkatopic.py
I'm receiving an error:
:: retrieving :: org.apache.spark#spark-submit-parent-ad9bf9ab-6d6d-4edd-bd1f-4b3145c2457f
confs: [default]
0 artifacts copied, 7 already retrieved (0kB/3ms)
20/11/22 18:35:02 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" org.apache.spark.SparkException: Failed to get main class in JAR with error 'File file:/home/xpto/spark/, does not exist'. Please specify one with --class.
at org.apache.spark.deploy.SparkSubmit.error(SparkSubmit.scala:936)
at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:457)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:871)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:180)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:203)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:90)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1007)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1016)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
I don't know which class Spark is asking for...
I'm running this locally on my PC; I'm not sure if this is the right way to do it.
Can someone help and point me in the right direction?
So, spacing matters - make sure you don't put spaces in your file paths
For example, you've put this path in your --jars list:
, /home/xpto/spark/jars/spark-sql-kafka-0-10_2.11-2.0.2.jar
It's not clear why you give local file paths at all, when fetching the packages from Maven should work fine. However, you need to use consistent Spark versions: you've mixed 3.x and 2.x, as well as Scala 2.12 and 2.11.
You also don't need both spark-streaming-kafka and spark-sql-kafka.
Regarding the error, the syntax it thinks you've tried to use is the Java/Scala form:
spark-submit [options] --class MainClass application.jar
For python applications, you might want to use --py-files
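For instance, a sketch with consistent versions, assuming Spark 3.0.1 built for Scala 2.12 and letting --packages resolve the Kafka dependency from Maven instead of local jars:
bin/spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.0.1 --master local[2] /home/xavy/Documents/PersonalProjects/Covid19Analysis/pyspark_job_to_write_data_to_kafkatopic.py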

spark2-submit throwing error with multiple packages (--packages)

I'm trying to submit the following Spark2 job on a CDH 5.16 cluster, and it's only taking the first parameter of the --packages option and throwing an error for the second parameter:
spark2-submit --packages com.databricks:spark-xml_2.11:0.4.1, com.databricks:spark-csv_2.11:1.5.0 /path/to/python-script
Exception in thread "main" org.apache.spark.SparkException: Cannot load main class from JAR com.databricks:spark-csv_2.11:1.5.0 with URI com.databricks. Please specify a class through --class.
at org.apache.spark.deploy.SparkSubmitArguments.error(SparkSubmitArguments.scala:657)
at org.apache.spark.deploy.SparkSubmitArguments.loadEnvironmentArguments(SparkSubmitArguments.scala:224)
at org.apache.spark.deploy.SparkSubmitArguments.<init>(SparkSubmitArguments.scala:116)
at org.apache.spark.deploy.SparkSubmit$$anon$2$$anon$1.<init>(SparkSubmit.scala:911)
at org.apache.spark.deploy.SparkSubmit$$anon$2.parseArguments(SparkSubmit.scala:911)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:81)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
I'm running this job on a CDH 5.16 cluster, with Spark installed via the Spark2 CSD.
Thanks in advance.
Don't put a space between the packages:
spark2-submit --packages com.databricks:spark-xml_2.11:0.4.1,com.databricks:spark-csv_2.11:1.5.0 /path/to/python-script
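The same rule applies if you pass local jars with --jars: give them as one comma-separated list with no spaces. A purely illustrative sketch (the jar paths below are hypothetical placeholders):
spark2-submit --jars /path/to/spark-xml_2.11-0.4.1.jar,/path/to/spark-csv_2.11-1.5.0.jar /path/to/python-script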

Exception in thread "main" java.lang.IllegalStateException: Cannot retrieve files with 'spark' scheme without an active SparkEnv

I'm very new to Spark and Cassandra. I got a sample from GitHub and tried to run the application from the link below:
spark-on-cassandra-quickstart
After the jar file was generated, I tried executing it with the syntax below:
C:\Users\user\Desktop\softwares\spark-2.4.3-bin-hadoop2.7\spark-2.4.3-bin-hadoop2.7\bin>spark-submit --class com.github.boneill42.JavaDemo --master spark://localhost:7077 C:\Users\user\git\spark-on-cassandra-quickstart\target/spark-on-cassandra-0.0.1-SNAPSHOT-jar-with-dependencies.jar spark://localhost:7077 localhost
Below is the issue I'm facing
19/06/08 22:59:49 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" java.lang.IllegalStateException: Cannot retrieve files with 'spark' scheme without an active SparkEnv.
at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:690)
at org.apache.spark.deploy.DependencyUtils$.downloadFile(DependencyUtils.scala:137)
at org.apache.spark.deploy.SparkSubmit$$anonfun$prepareSubmitEnvironment$7.apply(SparkSubmit.scala:367)
at org.apache.spark.deploy.SparkSubmit$$anonfun$prepareSubmitEnvironment$7.apply(SparkSubmit.scala:367)
at scala.Option.map(Option.scala:146)
at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:366)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:143)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Please help me resolve the issue.
In your case, it seems you want to start in standalone mode:
spark://HOST:PORT Connect to the given Spark standalone cluster master.
The port must be whichever one your master is configured to use, which is 7077 by default.
Did you start the Spark master and worker first?
Launch the master:
./sbin/start-master.sh
Launch a worker:
./bin/spark-class org.apache.spark.deploy.worker.Worker spark://localhost:7077 -c 1 -m 512M
After the master and worker have started, you can submit your job again.
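With the master and worker running, the original command from the question should be able to reach the standalone master, assuming spark://localhost:7077 matches the URL that start-master.sh prints:
spark-submit --class com.github.boneill42.JavaDemo --master spark://localhost:7077 C:\Users\user\git\spark-on-cassandra-quickstart\target/spark-on-cassandra-0.0.1-SNAPSHOT-jar-with-dependencies.jar spark://localhost:7077 localhost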

PredictionIO train failed in HDInsight Yarn Cluster

I have tried to run the pio train command on an HDInsight Spark cluster using the following command:
pio train -- --deploy-mode cluster --master yarn
But the following error was produced:
2018-11-05 11:40:05 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" java.io.IOException: No FileSystem for scheme: wasb
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2660)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:172)
at org.apache.spark.deploy.yarn.Client$$anonfun$5.apply(Client.scala:121)
at org.apache.spark.deploy.yarn.Client$$anonfun$5.apply(Client.scala:121)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.deploy.yarn.Client.<init>(Client.scala:121)
at org.apache.spark.deploy.yarn.YarnClusterApplication.start(Client.scala:1520)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:894)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:198)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:228)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:137)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2018-11-05 11:40:07 INFO ShutdownHookManager:54 - Shutdown hook called
I used the following command to test the connection, and there is no issue; it connects successfully and returns the available items from Azure Storage:
hadoop fs -ls wasb://my_container_name@my_blob_account_name.blob.core.windows.net
Does anyone have ideas for solving this issue?
I had the same issue, where Hadoop would support the wasb:// protocol but pio would not. According to https://github.com/hning86/articles/blob/master/hadoopAndWasb.md,
you have to have hadoop-azure-2.7.1.jar and azure-storage-2.0.0.jar on your CLASSPATH.
To solve this issue, you need to add the two jars to the CLASSPATH of pio itself.
With PredictionIO 0.13.1, according to /usr/local/pio/bin/compute-classpath.sh, this can be achieved by adding the jars to the plugins subdirectory:
ls /usr/local/pio/plugins/azure-storage-2.0.0.jar
ls /usr/local/pio/plugins/hadoop-azure-2.7.1.jar
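A minimal sketch of putting them there, assuming you have already downloaded the two jars into the current directory (where you obtain them from will vary; they can be copied from an existing Hadoop installation or fetched from Maven Central):
cp hadoop-azure-2.7.1.jar /usr/local/pio/plugins/
cp azure-storage-2.0.0.jar /usr/local/pio/plugins/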

running spark on yarn as client

I'm trying to run a Spark job on YARN using:
./bin/spark-submit --class "KafkaToMaprfs" --master yarn --deploy-mode client /home/mapr/kafkaToMaprfs/target/scala-2.10/KafkaToMaprfs.jar
But I'm facing this error:
/opt/mapr/hadoop/hadoop-2.7.0
17/01/03 11:19:26 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/01/03 11:19:38 ERROR SparkContext: Error initializing SparkContext.
org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:124)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:64)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
at org.apache.spark.SparkContext.(SparkContext.scala:530)
at KafkaToMaprfs$.main(KafkaToMaprfs.scala:61)
at KafkaToMaprfs.main(KafkaToMaprfs.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:752)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
17/01/03 11:19:39 WARN MetricsSystem: Stopping a MetricsSystem that is not running
Exception in thread "main" org.apache.spark.SparkException: Yarn application has already ended! It might have been killed or unable to launch application master.
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.waitForApplication(YarnClientSchedulerBackend.scala:124)
at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:64)
at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:144)
at org.apache.spark.SparkContext.(SparkContext.scala:530)
at KafkaToMaprfs$.main(KafkaToMaprfs.scala:61)
at KafkaToMaprfs.main(KafkaToMaprfs.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:752)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
I have a multi-node cluster, and I'm deploying the application from a remote node.
I'm using Spark 1.6.1 and Hadoop 2.7.x.
I didn't set up the cluster myself, so I couldn't find where the mistake lies.
Can anyone please help me fix this?
In my case I'm using the MapR distribution, and I hadn't configured the environment.
So I dug down into all the conf folders and made some changes in the files below.
1. In spark-env.sh, make sure these values are set correctly:
export SPARK_LOG_DIR=
export SPARK_PID_DIR=
export HADOOP_HOME=
export HADOOP_CONF_DIR=
export JAVA_HOME=
export SPARK_SUBMIT_OPTIONS=
2. In yarn-env.sh, make sure YARN_CONF_DIR and JAVA_HOME are set to correct values.
3. In spark-defaults.conf:
1. set spark.driver.extraClassPath
2. set the value for HADOOP_CONF_DIR
4. Set HADOOP_CONF_DIR and JAVA_HOME in $SPARK_HOME/conf/spark-env.sh:
1. export HADOOP_CONF_DIR=/opt/mapr/hadoop/hadoop-2.7.0/etc/hadoop
2. export JAVA_HOME=
5. Spark assembly jar
1. Copy the following JAR file from the local file system to a world-readable location on MapR-FS, substituting your Spark version and specific JAR file name in the path:
/opt/mapr/spark/spark-/lib/spark-assembly--hadoop-mapr-.jar
Now I'm able to run my Spark application in yarn-client mode smoothly using spark-submit.
These are the basic essentials to make Spark connect with YARN.
Correct me if I missed anything.
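Putting steps 1 and 4 together, a minimal spark-env.sh sketch; only HADOOP_HOME and HADOOP_CONF_DIR below come from the answer above, and the remaining values are hypothetical examples that will differ per install:
# MapR Hadoop install (path as seen in the error output above)
export HADOOP_HOME=/opt/mapr/hadoop/hadoop-2.7.0
export HADOOP_CONF_DIR=/opt/mapr/hadoop/hadoop-2.7.0/etc/hadoop
# Illustrative values only - point these at your own JDK and writable directories
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk
export SPARK_LOG_DIR=/opt/mapr/spark/logs
export SPARK_PID_DIR=/opt/mapr/spark/pid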
