How to deploy a WAR file with the spark-submit command (apache-spark)

I am using
spark-submit --class main.Main --master local[2] /user/sampledata/parser-0.0.1-SNAPSHOT.jar
to run a Java Spark job. Is it possible to run this code from a WAR file instead of a JAR, since I am looking to deploy it on Tomcat?
I tried a WAR file, but it gives a ClassNotFoundException.
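For reference, spark-submit expects a jar (or a Python/R file) as the application artifact, not a war. A minimal jar-based sketch, assuming a Maven project (the -SNAPSHOT artifact name suggests one) and Maven's default target/ output directory:
# build the application jar (Maven assumed)
mvn -DskipTests package
# submit it exactly as in the command above
spark-submit --class main.Main --master local[2] target/parser-0.0.1-SNAPSHOT.jar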

Related

Could not load class | Spark-submit Intellij

I know this was asked here before, but my case is a bit different, I think.
I have a simple, working project in IntelliJ (project screenshot omitted).
When I run it through IntelliJ the program works fine and I can see results, but whenever I export it as a jar and run it locally through spark-submit, it fails with the error "failed to load class".
I'm running it using: spark-submit --class com.CarbonEmission --master local[*] MyPath\TestSparkJar.jar
Below is my build.sbt (screenshot omitted).
I've been stuck on this for some days now; I hope someone can help.
The class you pass to --class is not included in the exported jar. Extract the jar and check whether the class file is there.
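To verify, list the jar's contents and look for the class; the jar name is the one from the question, and the class name is assumed from the --class value:
# list the jar entries and search for the class passed to --class
jar tf TestSparkJar.jar | grep CarbonEmission
# or, without the JDK's jar tool
unzip -l TestSparkJar.jar | grep CarbonEmission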

How to add a third-party library to Spark running on a local machine

I am listening to an Event Hubs stream and have seen an article about attaching the library to a cluster (Databricks), and there my code runs fine.
For debugging I am running the code on a local machine/cluster, but it fails because of the missing library. How can I add the library when running on a local machine?
I tried sparkContext.addFile(fullPathToJar), but I still get the same error.
You can use spark-submit --packages
Example: spark-submit --packages org.postgresql:postgresql:42.1.1
You would need to find the package you are using and check its compatibility with Spark.
With a single jar file you'd use spark-submit --jars instead.
I used spark-submit --packages {package} and it works.
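For reference, the two forms look like this; the main class, application jar, and local library path are placeholders, and the coordinates are the example from above:
# resolve the library (and its transitive dependencies) from Maven Central
spark-submit --packages org.postgresql:postgresql:42.1.1 --class main.Main --master local[*] app.jar
# or ship a single local jar to the driver and executors
spark-submit --jars /path/to/library.jar --class main.Main --master local[*] app.jar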

Spark-submit error - Cannot load main class from JAR file

I am trying to run on Hadoop with Spark, but I get a "Cannot load main class from JAR file" error.
How can I fix this?
Try copying main.py and the additional Python files to a local file:// path instead of having them in HDFS.
You need to pass the additional Python files with the --py-files argument from a local directory as well.
Assuming you copy the Python files to the working directory from which you are launching spark-submit, try the following command:
spark-submit \
--name "Final Project" \
--py-files police_reports.py,three_one_one.py,vehicle_volumn_count.py \
main.py
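If the files currently sit in HDFS, the copy step might look like this (the HDFS paths are placeholders, not taken from the question):
# pull the Python files from HDFS into the local working directory
hdfs dfs -get /user/<you>/main.py /user/<you>/police_reports.py /user/<you>/three_one_one.py /user/<you>/vehicle_volumn_count.py .
# then run the spark-submit command above from this directory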

PySpark IPython notebook add dependencies

I am running PySpark with an IPython notebook and I am trying to add a jar file to the Spark environment.
I tried sc.addJar to add the jar, but that didn't work. Any other suggestions?
I know you can do spark-submit --jars, but I am not launching through spark-submit.
I also tried the SPARK_CLASSPATH environment variable, placed the jar on that path, and restarted IPython/Spark; it picked the jar up but didn't do anything.
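One common workaround (not from this thread; the jar path and notebook setup below are assumptions) is to pass the jar on the pyspark command line when launching the notebook:
# use IPython/Jupyter as the driver frontend and hand the jar to pyspark
export PYSPARK_DRIVER_PYTHON=ipython
export PYSPARK_DRIVER_PYTHON_OPTS="notebook"
pyspark --jars /path/to/your.jar
# if the notebook is started without the pyspark script, setting
# PYSPARK_SUBMIT_ARGS="--jars /path/to/your.jar pyspark-shell" before creating the SparkContext serves the same purpose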

Error: Unrecognized option: --packages

I'm porting an existing script from BigInsights to Spark on Bluemix. I'm trying to run the following against Spark on Bluemix:
./spark-submit.sh --vcap ./vcap.json --deploy-mode cluster \
--master https://x.x.x.x:8443 --jars ./truststore.jar \
--packages org.elasticsearch:elasticsearch-spark_2.10:2.3.0 \
./export_to_elasticsearch.py ...
However, I get the following error:
Error: Unrecognized option: --packages
How can I pass the --packages parameter?
Bluemix uses a customized Spark build with a customized spark-submit.sh script that supports only a subset of the original script's parameters. You can see all the configuration properties and parameters you can use in its documentation.
Additionally, you can download the Bluemix version of the script from this link, where you can see that there is no --packages argument.
Therefore, the problem with your approach is that the Bluemix version of spark-submit does not accept the --packages parameter, probably for security reasons. Alternatively, you can download the jar for the package you want (and possibly a fat jar for its dependencies) and upload them using the --jars parameter.
Note: to avoid uploading the jar files each time you call spark-submit, you can pre-upload them using curl. The details of this procedure can be found at this link.
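A sketch of the reworked command from the question using the --jars route; the local jar file name is inferred from the Maven coordinates and may differ:
./spark-submit.sh --vcap ./vcap.json --deploy-mode cluster \
--master https://x.x.x.x:8443 \
--jars ./truststore.jar,./elasticsearch-spark_2.10-2.3.0.jar \
./export_to_elasticsearch.py ...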
Adding to Daniel's post: when using the pre-upload method, you might want to upload your package to "${cluster_master_url}/tenant/data/libs", since the Spark service sets the four properties spark.driver.extraClassPath, spark.driver.extraLibraryPath, spark.executor.extraClassPath, and spark.executor.extraLibraryPath to ./data/libs/*.
Reference: https://console.ng.bluemix.net/docs/services/AnalyticsforApacheSpark/index-gentopic3.html#spark-submit_properties
