Connecting to a live Spark application from a remote system - apache-spark

I have a Spark application deployed on the cluster. I want to run the application with some variables passed from another application running on a remote machine. For example, I will pass a query string from the remote application, and I want my Spark application to listen for it, process the query, and return the response to the caller.
Is this possible with any library or feature provided by Spark?

A Spark application is like any other application: it can accept remote commands in any number of ways. Perhaps the most common is to make the application an HTTP server, so it can be remote-controlled through a web interface or a REST API.
If you're using Spark through Scala, the Play Framework is a popular option.
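If you are not tied to Scala, the same idea works with any HTTP library. Below is a minimal sketch in Python using Flask as a stand-in for Play; the dataset path, view name, and endpoint are made up for illustration, not part of Spark itself.

```python
# Minimal sketch: a long-running PySpark driver wrapped in a Flask HTTP server.
# Flask and the paths/endpoint here are illustrative choices, not Spark features.
from flask import Flask, request, jsonify
from pyspark.sql import SparkSession

app = Flask(__name__)

# One SparkSession for the lifetime of the web server (never call spark.stop()).
spark = SparkSession.builder.appName("query-service").getOrCreate()

# Hypothetical dataset; register it once as a temp view so incoming queries can hit it.
spark.read.parquet("/data/events.parquet").createOrReplaceTempView("events")

@app.route("/query", methods=["POST"])
def run_query():
    sql = request.get_json()["sql"]           # e.g. {"sql": "SELECT ... FROM events"}
    rows = spark.sql(sql).limit(1000).collect()
    return jsonify([row.asDict() for row in rows])

if __name__ == "__main__":
    # The driver must stay up for as long as you want to serve queries.
    app.run(host="0.0.0.0", port=8080)
```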

Related

Spark job as a web service?

A peer of mine has created code that opens a RESTful API web service within an interactive Spark job. The intent of our company is to use his code as a means of extracting data from various data sources. He can get it to work on his machine with a local instance of Spark. He insists that this is a good idea, and it is my job as DevOps to implement it with Azure Databricks.
As I understand it, interactive jobs are for one-time analytics inquiries and for the development of non-interactive jobs to be run solely as ETL/ELT work between data sources. There is, of course, the added problem of determining the endpoint for the service binding within the Spark cluster.
But I'm new to Spark, and I have scarcely delved into the mountain of documentation that exists for all the implementations of Spark. Is what he's trying to do a good idea? Is it even possible?
The web service would need to act as a Spark driver. Just like you'd run spark-shell, run some commands, and then use collect() to bring all the data to be shown into the local environment, all of that runs in a single JVM. It would request executors from a remote Spark cluster, then bring the data back over the network. Apache Livy is one existing implementation of a REST Spark submission server.
It can be done, but depending on the process it would be very asynchronous, and it is not recommended for large result sets, even though large datasets are what Spark is meant for. Depending on the data that you need (e.g. if you are heavily using Spark SQL), it may be better to query a database directly.
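To make the Livy route concrete, here is a hedged sketch that drives a Spark session through Livy's REST API with Python's requests library; the Livy host/port and the small polling loop are illustrative, while the /sessions and /statements endpoints are Livy's documented interface.

```python
# Hedged sketch of driving Spark through Apache Livy's REST API.
import time
import requests

LIVY = "http://livy-host:8998"   # assumption: Livy reachable at this address

# 1. Create an interactive PySpark session.
session = requests.post(f"{LIVY}/sessions", json={"kind": "pyspark"}).json()
sid = session["id"]

# 2. Wait until the session is idle, then submit a statement (the "driver" code).
while requests.get(f"{LIVY}/sessions/{sid}").json()["state"] != "idle":
    time.sleep(2)

stmt = requests.post(
    f"{LIVY}/sessions/{sid}/statements",
    json={"code": "spark.range(100).count()"},
).json()

# 3. Poll for the result and print it.
while True:
    result = requests.get(f"{LIVY}/sessions/{sid}/statements/{stmt['id']}").json()
    if result["state"] == "available":
        print(result["output"])
        break
    time.sleep(2)
```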

Spark Application as a REST Service

I have a question regarding a specific Spark application usage.
I want our Spark application to run as a REST API server, like a Spring Boot application. It will not be a batch process; instead, we will load the application and keep it alive (no call to spark.close()), and use it as a real-time query engine via some API which we will define. I am targeting deployment to Databricks. Any suggestions will be good.
I have checked Apache Livy, but I am not sure whether it will be a good option or not.
Any suggestions will be helpful.
Spark isn't designed to run like this; it has no REST API server frameworks built in other than the History Server and worker UIs.
If you wanted a long-running Spark action, then you could use Spark Streaming and issue actions to it via raw sockets, Kafka, etc. rather than HTTP methods.
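As a rough illustration of that streaming approach, here is a minimal Structured Streaming job fed by a raw socket (Spark's socket source is intended for testing, not production); the host, port, and word-count logic are placeholders.

```python
# Sketch of a long-running Spark job that receives its input over a raw socket.
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split

spark = SparkSession.builder.appName("socket-driven").getOrCreate()

# Each line written to the socket (e.g. via `nc -lk 9999`) becomes a row.
lines = (spark.readStream
         .format("socket")
         .option("host", "localhost")
         .option("port", 9999)
         .load())

counts = (lines
          .select(explode(split(lines.value, " ")).alias("word"))
          .groupBy("word")
          .count())

# The query runs until stopped; results go to the console sink here.
query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```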
Good question; let's discuss it step by step.
You can create it, and it works fine. The following is an example:
https://github.com/vaquarkhan/springboot-microservice-apache-spark
I am sure you are thinking of creating a Dataset or DataFrame, keeping it in memory, and using it as a cache (Redis, Gemfire, etc.), but here is the catch:
i) If you only have a few hundred thousand rows of data, then you really do not need Apache Spark's power; a plain Java app is good enough to return the response really fast.
ii) If you have petabytes of data, then loading it into memory as a Dataset or DataFrame will not help, as Apache Spark does not support indexing; Spark is not a data management system but a fast batch data processing engine. With Gemfire you have the flexibility to add indexes for fast retrieval of data.
Workarounds (see the sketch below):
Use Apache Ignite's (https://ignite.apache.org/) in-memory indexes (refer to "Fast Apache Spark SQL Queries").
Use data formats that support indexing, like ORC, Parquet, etc.
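As a small illustration of the second workaround, here is a sketch that leans on Parquet partition pruning and predicate pushdown instead of an index; the paths and column names are invented for the example.

```python
# Illustrative sketch: using Parquet layout instead of an index for fast lookups.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("parquet-pruning").getOrCreate()

df = spark.read.json("/raw/orders.json")            # hypothetical source

# Partition by a column you filter on frequently; each value becomes a directory.
df.write.mode("overwrite").partitionBy("country").parquet("/curated/orders")

# Queries that filter on the partition column only read the matching directories,
# and min/max statistics in Parquet footers let Spark skip row groups.
fast = (spark.read.parquet("/curated/orders")
        .where("country = 'DE' AND amount > 100"))
fast.show()
```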
So why not use a Spring application with Apache Spark without calling spark.close()?
A Spring application as a microservice needs other services, either in containers or on PCF/Bluemix/AWS/Azure/GCP etc., whereas Apache Spark lives in its own world and needs compute power that is not available on PCF.
Spark is not a database, so it cannot "store data". It processes data and stores it temporarily in memory, but that is not persistent storage.
Once a Spark job is submitted you have to wait for the results; in between you cannot fetch data.
How to use Spark with a Spring application as a REST API call:
Apache Livy is a service that enables easy interaction with a Spark cluster over a REST interface. It enables easy submission of Spark jobs or snippets of Spark code, synchronous or asynchronous result retrieval, as well as Spark Context management, all via a simple REST interface or an RPC client library.
https://livy.apache.org/
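Complementing the interactive-session sketch earlier, here is a hedged example of Livy's batch endpoint for submitting a packaged Spark application; the Livy address, jar path, and class name are placeholders.

```python
# Hedged sketch of submitting a packaged Spark app through Livy's /batches API.
import requests

LIVY = "http://livy-host:8998"

batch = requests.post(
    f"{LIVY}/batches",
    json={
        "file": "hdfs:///apps/my-spark-app.jar",       # hypothetical artifact
        "className": "com.example.QueryJob",           # hypothetical main class
        "args": ["--query", "SELECT count(*) FROM events"],
    },
).json()

# Poll the batch until Livy reports a terminal state.
state = requests.get(f"{LIVY}/batches/{batch['id']}/state").json()
print(state)   # e.g. {"id": 0, "state": "running"}
```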

Can I get a web UI for Spark in my local browser when I am running instances on a server?

So I work at a place where I have a laptop, and every day I connect to a remote server using a shell and do everything (run Jupyter notebooks, use PySpark for Spark jobs) on the server.
I want to keep a log of all the server resources that I am using when I run my Spark job (memory, CPU usage, etc.).
I thought one way I could do this is by looking at the web UI, but I can't connect to the web UI.
I got all the properties of my driver IP, port, and everything else using sc._conf.getAll().
I am running Spark on YARN in client mode:
(u'spark.master', u'yarn-client'),
and tried those in a web browser but could not connect.
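One way to at least confirm which address the driver UI believes it is serving on is to ask the running context directly, along the lines of the sc._conf.getAll() call above. A small sketch follows; the exact keys vary by Spark version and deploy mode, and in yarn-client mode the UI is normally reached through the YARN ResourceManager's application proxy rather than the driver's raw port.

```python
# Sketch: inspecting the running PySpark context for its UI address.
from pyspark import SparkConf, SparkContext

sc = SparkContext.getOrCreate(SparkConf().setAppName("ui-lookup"))

print(sc.uiWebUrl)                        # driver UI, e.g. http://<driver-host>:4040

for key, value in sc._conf.getAll():      # same call as in the question
    if "ui" in key.lower() or key == "spark.driver.host":
        print(key, value)
```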

Apache Spark implementation in a Node.js application

I want to use Apache Spark in my Node.js application.
I have tried implementing EclairJS but am having some issues with it.
EclairJS appears to be dead.
If you want to access Spark from Node, I would recommend using Livy.
Livy is a service that runs a Spark session and exposes a REST API to that session.
There seems to be a Node client already: https://www.npmjs.com/package/node-livy-client
(I never used the Node client, so I can't say if it's any good.)

A long-running Spark program as a web server

I have written multiple Spark driver programs that load some data from HDFS into data frames, run Spark SQL queries on it, and persist the results back to HDFS. Now I need to provide a long-running Java program that receives requests and some parameters (such as the number of top rows that should be returned) from a web application (e.g. a dashboard) via POST and GET, and sends the results back to the web application. My web application is somewhere outside the Spark cluster. Briefly, my goal is to send requests and their accompanying data from the web application, via something such as POST, to the long-running Java program; it then receives the request, runs the corresponding Spark driver (Spark app), and returns the results, for example in JSON format.
Is there any solution for developing this use case?
Is Livy a good choice? If your answer is positive, what should I do?
Take a look at the Spark JobServer. I think it has the ability to share RDDs between jobs, which can be a huge performance boost.
https://github.com/spark-jobserver/spark-jobserver
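For orientation, the JobServer README describes a REST flow roughly like the following (upload a jar, then run a job against it). The host, jar, and class names below are placeholders; check the docs for your JobServer version for the exact API.

```python
# Rough sketch of the Spark JobServer REST flow (jar upload, then synchronous job run).
import requests

JOBSERVER = "http://jobserver-host:8090"   # assumption: JobServer on its default port

# 1. Upload the application jar under an app name.
with open("my-spark-app.jar", "rb") as jar:
    requests.post(f"{JOBSERVER}/jars/myapp", data=jar)

# 2. Run a job synchronously and read the JSON result.
resp = requests.post(
    f"{JOBSERVER}/jobs",
    params={"appName": "myapp", "classPath": "com.example.TopRowsJob", "sync": "true"},
    data="input.n = 10",          # job config in Typesafe Config / HOCON syntax
)
print(resp.json())
```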

Resources