Example Oozie job works from Hue, but not from command line: SparkMain not found

I've successfully run the example Spark workflow ("Copy a file by launching a Spark Java program") provided in the Hue Oozie workflow editor (in the Cloudera 5.5.1 QuickStart VM).
I'm now trying to run it manually using the oozie command-line tool:
oozie job -oozie http://localhost:11000/oozie -config job.properties -run
The workflow XML is basically unchanged - I have copied it to HDFS and have the following job.properties:
nameNode=hdfs://localhost:8020
jobTracker=localhost:8032
oozie.wf.application.path=/user/cloudera/workflows/spark-scala/spark-scala.xml
input=/user/hue/oozie/workspaces/data/sonnets.txt
output=here
The job is accepted and appears in the Hue web dashboard, but is killed after a few seconds, and the logs report:
Launcher exception: java.lang.ClassNotFoundException: Class org.apache.oozie.action.hadoop.SparkMain not found
What is the problem here?

Oozie doesn't include the libraries for the Spark action by default; you need to add the following to job.properties:
oozie.use.system.libpath=true
(Clicking on the previously successful Hue workflow in the Hue Dashboard, you can select the Configuration tab to see the properties that Hue has provided)
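With that property added, the complete job.properties for this example (using the same values as above) becomes:

```properties
nameNode=hdfs://localhost:8020
jobTracker=localhost:8032
oozie.use.system.libpath=true
oozie.wf.application.path=/user/cloudera/workflows/spark-scala/spark-scala.xml
input=/user/hue/oozie/workspaces/data/sonnets.txt
output=here
```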

Related

Azure Synapse Spark LIVY_JOB_STATE_ERROR

I'm encountering the following error when executing any cell in my notebook:
LIVY_JOB_STATE_ERROR: Livy session has failed. Session state: Killed. Error code: LIVY_JOB_STATE_ERROR. [(my.synapse.spark.pool.name) WorkspaceType: CCID:<(hexcode)>] [Monitoring] Livy Endpoint=[https://hubservice1.westeurope.azuresynapse.net:8001/api/v1.0/publish/8dda5837-2f37-4a5d-97b9-0994b59e17f0]. Livy Id=[3] Job failed during run time with state=[error]. Source: Dependency.
My notebook was working fine until yesterday; the one thing I changed was moving the Spark pool from Spark 2.4 to Spark 3.2 (preview). That change was made by a Terraform template deployment. Could this be the source of the issue? If so, how do I prevent it?
The issue was fixed by deleting and recreating my Spark pool via the Azure portal. I'm still not sure what configuration inside my Terraform template caused the issue, but at least this fixes the problem for now.

Spark 2.1.0 Web UI not showing "Application Detail UI"

I recently upgraded my Spark 1.4.1 setup to the latest 2.1.0. I'm running Spark in standalone mode. Everything seems to work fine except the web UI.
The post here shows that this can happen for large application logs. But I'm pretty sure my application log is not large (~8.5MB).
I copied the same setup that I was using in 1.4.1, with the following parameters:
spark.eventLog.enabled true
spark.eventLog.dir /LOCALPATH/spark/event-log
spark.history.fs.logDirectory /LOCALPATH/spark/event-log
The web UI shows current and previous applications. However, the "Application Detail UI" link is only available for running/active applications and does not show for completed applications. I've checked the eventLog directory and it does have non-empty log file for completed applications.
Attached are images for reference as well. Am I missing some new property introduced in version 2.1.0? I've gone through the documentation multiple times and couldn't find anything.
EDIT: Got it working (for any future reference)
I got it working through the Spark history server, after following the steps explained in the Spark History Server documentation. In particular, I added the log4j property
log4j.logger.org.apache.spark.deploy.history=INFO
in
SPARK_HOME/conf/log4j.properties
and then started the history server and accessed its interface at
<history-server-host>:18080
where history-server-host is usually the same host as your master node.
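Putting the steps together, the relevant configuration might look like the following sketch (using the event-log paths from the question and Spark's default history-server port, 18080):

```properties
# $SPARK_HOME/conf/spark-defaults.conf
spark.eventLog.enabled           true
spark.eventLog.dir               /LOCALPATH/spark/event-log
spark.history.fs.logDirectory    /LOCALPATH/spark/event-log

# $SPARK_HOME/conf/log4j.properties
log4j.logger.org.apache.spark.deploy.history=INFO

# Then start the history server:
#   $SPARK_HOME/sbin/start-history-server.sh
# and browse to http://<history-server-host>:18080
```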

How to implement spark.ui.filter

I have a spark cluster set up on 2 CentOS machines. I want to secure the web UI of my cluster (master node). I have made a BasicAuthenticationFilter servlet. I am unable to understand:
how should I use spark.ui.filter to secure my web UI.
Where should I place the servlet/jar file.
Kindly help.
I also needed to handle this security problem to prevent unauthorized access to the Spark standalone UI. I eventually fixed it after some searching on the web. The procedure is:
1. Code and compile a Java filter using the standard HTTP Basic authentication protocol; I referred to this blog: http://lambda.fortytools.com/post/26977061125/servlet-filter-for-http-basic-auth
2. Package the above filter class as a jar file and put it in $SPARK_HOME/jars/.
3. Add these config lines to $SPARK_HOME/conf/spark-defaults.conf:
spark.ui.filters xxx.BasicAuthFilter # the full class name
spark.xxx.BasicAuthFilter.params user=foo,password=cool,realm=some
The username and password must be supplied to access the Spark UI; the "realm" value is insignificant, whatever you type.
4. Restart all master and worker processes and verify that it works.
Place the jar file on all the nodes in the folder /opt/spark/conf/. Then, in a terminal:
1. Navigate to the directory /usr/local/share/jupyter/kernels/pyspark/.
2. Edit the file kernel.json.
3. Add the following arguments to PYSPARK_SUBMIT_ARGS: --jars /opt/spark/conf/filterauth.jar --conf spark.ui.filters=authenticate.MyFilter
Here, filterauth.jar is the jar file created, and authenticate.MyFilter represents <package name>.<class name>.
Hope this answers your query. :)
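For reference, a sketch of the relevant part of the edited kernel.json (filterauth.jar and authenticate.MyFilter are the example names from the answer; the other kernelspec fields are omitted, and PYSPARK_SUBMIT_ARGS conventionally ends with pyspark-shell):

```json
{
  "env": {
    "PYSPARK_SUBMIT_ARGS": "--jars /opt/spark/conf/filterauth.jar --conf spark.ui.filters=authenticate.MyFilter pyspark-shell"
  }
}
```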

When trying to register a UDF using Python, I get an error about Spark BUILD with HIVE

Exception: ("You must build Spark with Hive. Export 'SPARK_HIVE=true' and run build/sbt assembly", Py4JJavaError(u'An error occurred while calling None.org.apache.spark.sql.hive.HiveContext.\n', JavaObject id=o54))
This happens whenever I create a UDF on a second notebook in Jupyter on IBM Bluemix Spark as a Service.
If you are using IBM Bluemix Spark as a Service, execute the following command in a cell of the Python notebook:
!rm -rf /gpfs/global_fs01/sym_shared/YPProdSpark/user/spark_tenant_id/notebook/notebooks/metastore_db/*.lck
Replace spark_tenant_id with the actual one. You can find the tenant id using the following command in a cell of the notebook:
!whoami
I've run into these errors as well. Only the first notebook you launch will have access to the Hive context:
By default Hive(Context) is using embedded Derby as a metastore. It is intended mostly for testing and supports only one active user.

What is the proper way of running a Spark application on YARN using Oozie (with Hue)?

I have written an application in Scala that uses Spark.
The application consists of two modules - the App module which contains classes with different logic, and the Env module which contains environment and system initialization code, as well as utility functions.
The entry point is located in Env, and after initialization, it creates a class in App (according to args, using Class.forName) and the logic is executed.
The modules are exported into 2 different JARs (namely, env.jar and app.jar).
When I run the application locally, it executes well. The next step is to deploy the application to my servers. I use Cloudera's CDH 5.4.
I used Hue to create a new Oozie workflow with a Spark task with the following parameters:
Spark Master: yarn
Mode: cluster
App name: myApp
Jars/py files: lib/env.jar,lib/app.jar
Main class: env.Main (in Env module)
Arguments: app.AggBlock1Task
I then placed the 2 JARs inside the lib folder in the workflow's folder (/user/hue/oozie/workspaces/hue-oozie-1439807802.48).
When I run the workflow, it throws a FileNotFoundException and the application does not execute:
java.io.FileNotFoundException: File file:/cloudera/yarn/nm/usercache/danny/appcache/application_1439823995861_0029/container_1439823995861_0029_01_000001/lib/app.jar,lib/env.jar does not exist
However, when I leave the Spark master and mode parameters empty, it all works properly, but when I check spark.master programmatically it is set to local[*] and not yarn. Also, when observing the logs, I encountered this under Oozie Spark action configuration:
--master
null
--name
myApp
--class
env.Main
--verbose
lib/env.jar,lib/app.jar
app.AggBlock1Task
I assume I'm not doing it right - not setting the Spark master and mode parameters and running the application with spark.master set to local[*]. As far as I understand, creating a SparkConf object within the application should set the spark.master property to whatever I specify in Oozie (in this case yarn), but it just doesn't work when I do that.
Is there something I'm doing wrong or missing?
Any help will be much appreciated!
I managed to solve the problem by putting the two JARs in the user directory /user/danny/app/ and specifying the Jars/py files parameter as ${nameNode}/user/danny/app/env.jar. Running it then caused a ClassNotFoundException to be thrown, even though the JAR was located in the same HDFS folder. To work around that, I went to the settings and added the following to the options list: --jars ${nameNode}/user/danny/app/app.jar. This way the App module is referenced as well, and the application runs successfully.
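For reference, the resulting Spark action could be sketched like this in the workflow XML (element names per Oozie's Spark action schema; the exact schema version may differ, and the paths are the ones from this answer):

```xml
<spark xmlns="uri:oozie:spark-action:0.1">
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <master>yarn</master>
    <mode>cluster</mode>
    <name>myApp</name>
    <class>env.Main</class>
    <jar>${nameNode}/user/danny/app/env.jar</jar>
    <spark-opts>--jars ${nameNode}/user/danny/app/app.jar</spark-opts>
    <arg>app.AggBlock1Task</arg>
</spark>
```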
