What is the proper way of running a Spark application on YARN using Oozie (with Hue)? - apache-spark

I have written an application in Scala that uses Spark.
The application consists of two modules - the App module which contains classes with different logic, and the Env module which contains environment and system initialization code, as well as utility functions.
The entry point is located in Env, and after initialization, it creates a class in App (according to args, using Class.forName) and the logic is executed.
The modules are exported into 2 different JARs (namely, env.jar and app.jar).
When I run the application locally, it executes well. The next step is to deploy the application to my servers. I use Cloudera's CDH 5.4.
I used Hue to create a new Oozie workflow with a Spark task with the following parameters:
Spark Master: yarn
Mode: cluster
App name: myApp
Jars/py files: lib/env.jar,lib/app.jar
Main class: env.Main (in Env module)
Arguments: app.AggBlock1Task
I then placed the 2 JARs inside the lib folder in the workflow's folder (/user/hue/oozie/workspaces/hue-oozie-1439807802.48).
When I run the workflow, it throws a FileNotFoundException and the application does not execute:
java.io.FileNotFoundException: File file:/cloudera/yarn/nm/usercache/danny/appcache/application_1439823995861_0029/container_1439823995861_0029_01_000001/lib/app.jar,lib/env.jar does not exist
However, when I leave the Spark master and mode parameters empty, it all works properly, but when I check spark.master programmatically it is set to local[*] and not yarn. Also, when observing the logs, I encountered this under Oozie Spark action configuration:
--master
null
--name
myApp
--class
env.Main
--verbose
lib/env.jar,lib/app.jar
app.AggBlock1Task
I assume I'm not doing it right - leaving the Spark master and mode parameters empty and running the application with spark.master set to local[*]. As far as I understand, creating a SparkConf object within the application should set the spark.master property to whatever I specify in Oozie (in this case yarn), but it just doesn't work when I do that.
Is there something I'm doing wrong or missing?
Any help will be much appreciated!

I managed to solve the problem by putting the two JARs in the user directory /user/danny/app/ and specifying the Jar/py files parameter as ${nameNode}/user/danny/app/env.jar. Running it caused a ClassNotFoundException to be thrown, even though the JAR was located at the same folder in HDFS. To work around that, I had to go to the settings and add the following to the options list: --jars ${nameNode}/user/danny/app/app.jar. This way the App module is referenced as well and the application runs successfully.
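For reference, the resulting Spark action in the workflow XML looks roughly like the following (a sketch only - the action name and kill-node names are placeholders; the jar paths are the ones from the solution above):

```xml
<action name="spark-task">
    <spark xmlns="uri:oozie:spark-action:0.1">
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <master>yarn</master>
        <mode>cluster</mode>
        <name>myApp</name>
        <class>env.Main</class>
        <jar>${nameNode}/user/danny/app/env.jar</jar>
        <spark-opts>--jars ${nameNode}/user/danny/app/app.jar</spark-opts>
        <arg>app.AggBlock1Task</arg>
    </spark>
    <ok to="end"/>
    <error to="kill"/>
</action>
```

The key point is that the main jar goes in the <jar> element while additional modules are passed through --jars in <spark-opts>, both as fully qualified HDFS paths.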

Related

To get application id in particular file after spark-submit in cluster deploy mode

I want to write the application id to a local text file when I deploy the application in cluster mode.
To do this, I edited the log4j.properties file and configured it for the client, but it is not working.
I also followed this blog: https://largecats.github.io/blog/2020/09/21/spark-cluster-mode-collect-log/ but did not get a satisfactory result.
I also followed spark-submit in cluster deploy mode get application id to console, but that only shows the application id on the console.
Please, can anyone help? I have been stuck on this for a week without finding a proper solution.
You should set a tag on your Spark app when submitting it, and later query YARN for the application based on the tag value:
--conf spark.yarn.tags=tag-name
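Once the app is tagged, you can look it up through the ResourceManager REST API, which (on Hadoop versions that support the applicationTags filter) accepts the tag as a query parameter. A minimal sketch, assuming the RM runs on the default web port 8088 and rm_host is a placeholder for your ResourceManager host:

```python
import json
from urllib.request import urlopen

def app_ids_for_tag(rm_host, tag):
    """Ask the YARN ResourceManager REST API for applications carrying a tag."""
    url = f"http://{rm_host}:8088/ws/v1/cluster/apps?applicationTags={tag}"
    with urlopen(url) as resp:
        payload = json.load(resp)
    apps = (payload.get("apps") or {}).get("app") or []
    return [app["id"] for app in apps]

# The JSON the RM returns has roughly this shape (abridged sample);
# extracting the ids from it:
sample = {"apps": {"app": [{"id": "application_1439823995861_0029",
                            "applicationTags": "tag-name"}]}}
ids = [app["id"] for app in (sample.get("apps") or {}).get("app") or []]
print(ids)  # -> ['application_1439823995861_0029']
```

The returned id can then be written to a local file from the submitting machine, which sidesteps the cluster-mode problem of the driver logs living on a remote node.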

Issues while using setConf in SparkLauncher from Windows

I am trying to trigger Pyspark code using SparkLauncher from Windows.
When I use
.setConf(SparkLauncher.DRIVER_MEMORY, "1G")
or any other configuration, the following error message is thrown,
--conf "spark.driver.memory' is not recognized as an internal or external command
Also, I need to add multiple dependency jars. For example, when I use
addJar("D:\\jars\\elasticsearch-spark-20_2.11-6.0.0-rc2.jar")
it is working. But when it is used multiple times
.addJar("D:\\jars\\elasticsearch-spark-20_2.11-6.0.0-rc2.jar")
.addJar("D:\\jars\\mongo-spark-connector_2.11-2.2.0.jar")
the following error is thrown
The filename, directory name, or volume label syntax is incorrect.
The same code works in Linux environment.
Could someone please help me on this?
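One workaround worth trying (a sketch, not a confirmed fix for this cmd.exe quoting problem): move the fixed settings out of the launcher calls and into conf/spark-defaults.conf, so that nothing needs to be quoted on the Windows command line. spark.jars takes a comma-separated list, which also replaces the multiple addJar calls:

```
spark.driver.memory    1g
spark.jars             D:\jars\elasticsearch-spark-20_2.11-6.0.0-rc2.jar,D:\jars\mongo-spark-connector_2.11-2.2.0.jar
```

With these in spark-defaults.conf, the SparkLauncher invocation only needs the app resource and main class.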

How to implement spark.ui.filter

I have a spark cluster set up on 2 CentOS machines. I want to secure the web UI of my cluster (master node). I have made a BasicAuthenticationFilter servlet. I am unable to understand:
how should I use spark.ui.filter to secure my web UI.
Where should I place the servlet/jar file.
Kindly help.
I also needed to handle this security problem to prevent unauthorized access to the Spark standalone UI. I finally fixed it after some research; the procedure is:
Write and compile a Java filter that implements the standard HTTP Basic authentication protocol; I referred to this blog: http://lambda.fortytools.com/post/26977061125/servlet-filter-for-http-basic-auth
Package the filter class as a jar file and put it in $SPARK_HOME/jars/
Add config lines to $SPARK_HOME/conf/spark-defaults.conf:
spark.ui.filters xxx.BasicAuthFilter # the full class name
spark.test.BasicAuthFilter.params user=foo,password=cool,realm=some
The username and password must be provided to access the Spark UI; the "realm" value is insignificant, whatever you type.
Restart all master and worker processes and verify that it works.
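For context, a Basic auth filter like the one above checks the standard Authorization header the client sends, which is just "Basic " plus the base64 of "user:password". A quick sketch of that encoding, using the example credentials from the config above:

```python
import base64

def basic_auth_header(user, password):
    """Build the Authorization header value a client sends for HTTP Basic auth."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

print(basic_auth_header("foo", "cool"))  # -> Basic Zm9vOmNvb2w=
```

This is also handy for testing the secured UI from a script: pass the same value as an Authorization request header.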
Place the jar file on all the nodes in the folder /opt/spark/conf/. Then, in a terminal:
Navigate to the directory /usr/local/share/jupyter/kernels/pyspark/
Edit the file kernel.json
Add the following arguments to PYSPARK_SUBMIT_ARGS: --jars /opt/spark/conf/filterauth.jar --conf spark.ui.filters=authenticate.MyFilter
Here, filterauth.jar is the jar file created and authenticate.MyFilter represents <package name>.<class name>
Hope this answers your query. :)

Example Oozie job works from Hue, but not from command line: SparkMain not found

I've successfully run the example Spark workflow ("Copy a file by launching a Spark Java program") provided in the Hue Oozie workflow editor (in the Cloudera 5.5.1 QuickStart VM).
I'm now trying to run it manually using the oozie commandline tool:
oozie job -oozie http://localhost:11000/oozie -config job.properties -run
The workflow XML is basically unchanged - I have copied it to HDFS and have the following job.properties:
nameNode=hdfs://localhost:8020
jobTracker=localhost:8032
oozie.wf.application.path=/user/cloudera/workflows/spark-scala/spark-scala.xml
input=/user/hue/oozie/workspaces/data/sonnets.txt
output=here
The job is accepted and appears in the Hue web dashboard, but is killed after a few seconds, and the logs report:
Launcher exception: java.lang.ClassNotFoundException: Class org.apache.oozie.action.hadoop.SparkMain not found
What is the problem here?
Oozie doesn't include the libraries for the Spark action by default - you need to add the following to job.properties:
oozie.use.system.libpath=true
(Clicking on the previously successful Hue workflow in the Hue Dashboard, you can select the Configuration tab to see the properties that Hue has provided)
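Putting the fix together, the complete job.properties from the question would look roughly like this (values copied from above; only the libpath line is new):

```
nameNode=hdfs://localhost:8020
jobTracker=localhost:8032
oozie.use.system.libpath=true
oozie.wf.application.path=/user/cloudera/workflows/spark-scala/spark-scala.xml
input=/user/hue/oozie/workspaces/data/sonnets.txt
output=here
```

With oozie.use.system.libpath=true, the Oozie sharelib (which contains SparkMain and the rest of the Spark action classes) is added to the launcher's classpath.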

Change Python path in CDH for pyspark

I need to change the Python that is being used with my CDH 5.5.1 cluster. My research pointed me to setting PYSPARK_PYTHON in spark-env.sh. I tried that manually without success. I then used Cloudera Manager to set the variable in both the 'Spark Service Environment Advanced Configuration Snippet' and the 'Spark Service Advanced Configuration Snippet', and about everywhere else that referenced spark-env.sh. This hasn't worked, and I'm at a loss as to where to go next.
You need to add the PYSPARK_PYTHON variable to the YARN configuration:
YARN (MR2 Included) Service Environment Advanced Configuration Snippet (Safety Valve)
Do that, restart the cluster and you are good to go.
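After the restart, you can confirm which interpreter is actually in use. A minimal check from a PySpark shell (this prints the driver-side interpreter; the path shown below is just a placeholder for whatever you configured):

```python
import sys

# Should print the interpreter PYSPARK_PYTHON points at,
# e.g. /opt/anaconda/bin/python (placeholder path).
print(sys.executable)
```

To verify the executors as well, run the same check inside a task, e.g. sc.parallelize([0]).map(lambda _: sys.executable).collect().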
