Is there any way to submit spark job using API - apache-spark

I am able to submit a Spark job on a Linux server using the console. But is there an API or framework that would let me submit a Spark job to the Linux server programmatically?

You can connect to the master on port 7077 to submit Spark jobs to your Spark cluster instead of using spark-submit. For example:
import org.apache.spark.sql.SparkSession
val spark = SparkSession
  .builder()
  .appName("my-app")                        // any application name works here
  .master("spark://master-machine:7077")    // the master URL must be a quoted string
  .getOrCreate()

You can look into the Livy server. It is GA in the Hortonworks and Cloudera distributions of Apache Hadoop, and we have had good success with it. Its documentation is good enough to get started with. Spark jobs start almost instantaneously when submitted via Livy, since it keeps multiple SparkContexts running inside it.
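As a rough sketch, a batch job can be submitted through Livy's REST API from Scala using only JDK classes; the Livy host/port, jar path, and main class below are placeholders you would replace with your own:
import java.net.{HttpURLConnection, URL}
import java.nio.charset.StandardCharsets
import scala.io.Source

// Assumption: Livy listens on livy-host:8998 and the application jar is
// already reachable by the cluster (e.g. it lives on HDFS).
val payload =
  """{"file": "hdfs:///jobs/spark-job.jar", "className": "com.example.Main"}"""
val conn = new URL("http://livy-host:8998/batches")
  .openConnection().asInstanceOf[HttpURLConnection]
conn.setRequestMethod("POST")
conn.setRequestProperty("Content-Type", "application/json")
conn.setDoOutput(true)
conn.getOutputStream.write(payload.getBytes(StandardCharsets.UTF_8))
// Livy answers with a JSON description of the new batch (its id, state, ...)
println(Source.fromInputStream(conn.getInputStream).mkString)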

Related

Spark context is created every time a batch job is executed in yarn

I am wondering: is there any way to create the SparkContext once in the YARN cluster so that incoming jobs re-use that context? Context creation takes 20 seconds, sometimes more, in my cluster. I am using PySpark for scripting and Livy to submit jobs.
No, you can't just have a standing SparkContext running in YARN. An alternative is to run in client mode, where the client has its own SparkContext (this is the method used by tools like Apache Zeppelin and the spark-shell).
An option would be to use Apache Livy. Livy is an additional server in your YARN cluster that provides an interface for clients that want to run Spark jobs on the cluster. One of Livy's features is that you can:
Have long-running SparkContexts that can be used for multiple Spark jobs, by multiple clients
If the client is written in Scala or Java, it is possible to use a programmatic API:
// The Livy URI below is illustrative; SparkJob stands for your own implementation of Livy's Job interface.
LivyClient client = new LivyClientBuilder().setURI(new URI("http://livy-host:8998")).build();
Object result = client.submit(new SparkJob(sparkParameters)).get();
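For reference, such a SparkJob class is something you write yourself against Livy's Job interface. A minimal Scala sketch (class name, return type, and data are illustrative; the package is org.apache.livy in Livy 0.4+, com.cloudera.livy in older releases):
import java.util.{Arrays => JArrays}
import org.apache.livy.{Job, JobContext}

// Counts a tiny dataset on the cluster-side SparkContext that Livy keeps alive.
// The jar containing this class must be registered with the client first,
// e.g. via client.uploadJar(...) or client.addJar(...).
class CountJob extends Job[java.lang.Long] {
  override def call(jc: JobContext): java.lang.Long =
    java.lang.Long.valueOf(jc.sc().parallelize(JArrays.asList(1, 2, 3, 4, 5)).count())
}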
All other clients can use a REST API.

Spark job submission using Airflow by submitting batch POST method on Livy and tracking job

I want to use Airflow to orchestrate jobs that include running some Pig scripts, shell scripts and Spark jobs.
For the Spark jobs in particular, I want to use Apache Livy, but I am not sure whether that is a better idea than running spark-submit.
What is the best way to track a Spark job with Airflow once I have submitted it?
My assumption is that you have an application JAR containing Java / Scala code that you want to submit to a remote Spark cluster. Livy is arguably the best option for remote spark-submit when evaluated against the other possibilities:
Specifying remote master IP: Requires modifying global configurations / environment variables
Using SSHOperator: SSH connection might break
Using EmrAddStepsOperator: Dependent on EMR
Regarding tracking
Livy only reports state and not progress (% completion of stages)
If you're OK with that, you can just poll the Livy server via the REST API and keep printing logs to the console; those will appear in the task logs in the Airflow web UI (View Logs)
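A rough sketch of such polling from Scala, assuming the batch was created via POST /batches and the Livy server runs on livy-host:8998 (both are placeholders):
import java.net.{HttpURLConnection, URL}
import scala.io.Source

// Poll GET /batches/{id}/state until the batch reaches a terminal state.
// GET /batches/{id}/log can be queried the same way to mirror driver logs.
def waitForBatch(batchId: Int): Unit = {
  var done = false
  while (!done) {
    val conn = new URL(s"http://livy-host:8998/batches/$batchId/state")
      .openConnection().asInstanceOf[HttpURLConnection]
    val body = Source.fromInputStream(conn.getInputStream).mkString
    println(body)                                     // e.g. {"id":1,"state":"running"}
    done = Seq("success", "dead", "killed").exists(s => body.contains(s))
    if (!done) Thread.sleep(10000)                    // poll every 10 seconds
  }
}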
Other considerations
Livy doesn't support reusing a SparkSession across POST /batches requests
If that's imperative, you'll have to write your application code in PySpark and use POST /sessions requests instead
References
How to submit Spark jobs to EMR cluster from Airflow?
livy/examples/pi_app
rssanders3/livy_spark_operator_python_example
Useful links
How to submit Spark jobs to EMR cluster from Airflow?
Remote spark-submit to YARN running on EMR

Apache Spark and Livy cluster

Scenario:
I have a Spark cluster and I also want to use Livy.
I am new to Livy.
Problem:
I built my Spark cluster using Docker Swarm and I will also create a service for Livy.
Can Livy communicate with an external Spark master and send a job to that external Spark master? If so, what configuration needs to be done? Or should Livy be installed on the Spark master node?
I think this is a little late, but I hope it helps you. You can run Livy alongside your Docker setup and send jobs to the cluster through the Livy REST API. The Livy server can be outside of the Spark cluster; you only need to give Livy a configuration file that points to your Spark cluster.
It looks like you are running Spark standalone. The easiest way to configure Livy is to have Livy live on the Spark master node. If you already have YARN on your cluster machines, you can install Livy on any node and run Spark applications in yarn-cluster or yarn-client mode.
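For the standalone setup described in the question, that configuration usually just means pointing Livy at the external master in conf/livy.conf (the hostname below is illustrative) and making sure the Livy container can reach the master and the workers over the Swarm network:
livy.spark.master = spark://spark-master:7077
livy.spark.deploy-mode = client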

Kafka Spark Streaming

I was trying to build a Kafka and Spark Streaming use case in which Spark Streaming consumes a stream from Kafka, we enrich the stream, and store the enriched stream in some target system.
My question here is: does it make sense to run the Spark Streaming job in yarn-cluster or yarn-client mode? (Hadoop is not involved here.)
I think the Spark Streaming job should run only in local mode, but another question is how to improve the performance of the Spark Streaming job.
Thanks,
local[*]
This is specific to running the job in local mode.
Usually we use this for POCs and on very small data.
You can debug the job to understand how each line of code is working.
But you need to be aware that, since the job is running on your local machine, you cannot get the most out of Spark's distributed architecture.
yarn-client
Your driver program runs on the YARN client, the machine where you type the command to submit the Spark application, but the tasks are still executed on the executors in the cluster.
yarn-cluster
In cluster mode, the Spark driver runs inside an application master process which is managed by YARN on the cluster, and the client can go away after initiating the application. This is the best way to run a Spark job if you want to benefit from the advantages provided by a cluster manager.
I hope this gives you clarity on how you may want to deploy your Spark job.
In fact, Spark provides very clean documentation explaining the various deployment strategies with examples:
https://spark.apache.org/docs/latest/running-on-yarn.html
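At submission time the two modes differ only in the deploy-mode flag, e.g. spark-submit --master yarn --deploy-mode cluster my-app.jar versus --deploy-mode client my-app.jar (my-app.jar is a placeholder; the main class and application arguments are omitted here).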
The difference is that with yarn-client you force the Spark job to use the host where you run spark-submit as the driver, whereas with yarn-cluster the driver won't land on the same host every time you run it.
So the best choice is to always use yarn-cluster, to avoid overloading the same host if you are going to submit multiple jobs from the same host with yarn-client.

Use SparkLauncher to programmatically submit a spark job to the dse spark cluster

I am relatively new to Spark and DSE, and I am trying to submit a Spark job to the DSE Spark cluster programmatically.
I am using the org.apache.spark.launcher.SparkLauncher API and tried following the documentation for SparkLauncher.
Process launcher = new SparkLauncher().setAppName("appName")
.setAppResource("spark-job.jar")
.setSparkHome("spark-home")
.setMainClass("main-class")
.setVerbose(true).launch();
launcher.waitFor();
But it doesn't seem to launch the job on the DSE cluster. I can trigger the job manually using the dse spark-submit command.
I will appreciate any help here. Thanks!
I believe this has something to do with not setting your Spark home. Identify your Spark home in DSE and then add
.setSparkHome("sparkHomeDir")
Also, rather than using a blocking wait, use a SparkAppHandle:
SparkAppHandle handle = launcher.startApplication();
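Putting the two suggestions together, a minimal Scala sketch of polling the handle until the application finishes (the paths and main class are placeholders, not DSE-specific values):
import org.apache.spark.launcher.SparkLauncher

val handle = new SparkLauncher()
  .setAppResource("/path/to/spark-job.jar")   // placeholder: your application jar
  .setMainClass("com.example.Main")           // placeholder: your main class
  .setSparkHome("/path/to/dse/spark")         // the Spark home that DSE actually uses
  .setVerbose(true)
  .startApplication()                         // returns a SparkAppHandle instead of a Process

// Poll until the application reaches a final state (FINISHED, FAILED, KILLED, ...)
while (!handle.getState.isFinal) {
  println(s"state=${handle.getState}, appId=${handle.getAppId}")
  Thread.sleep(5000)
}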
