Spark Application as a REST Service - apache-spark

I have a question regarding a specific Spark application usage.
I want our Spark application to run as a REST API server, like a Spring Boot application. It would not be a batch process; instead, we would load the application once, keep it alive (no call to spark.close()), and use it as a real-time query engine via an API that we define. I am targeting deployment to Databricks.
I have checked Apache Livy, but I am not sure whether it is a good option.
Any suggestions would be helpful.

Spark isn't designed to run like this; the only HTTP servers it ships with are the History Server and the worker/application UIs, not a general-purpose REST API framework.
If you want a long-running Spark application, you could use Spark Streaming and feed it work via raw sockets, Kafka, etc. rather than HTTP methods.
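For instance (a minimal PySpark sketch of that idea, not from the original answer; the host/port and console sink are placeholders), a streaming query keeps the application alive and lets you push work to it over a socket:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("long-running-query-engine").getOrCreate()

# Read a stream of incoming records/commands from a raw socket
# (assumes something is listening on localhost:9999, e.g. `nc -lk 9999`).
incoming = (spark.readStream
            .format("socket")
            .option("host", "localhost")
            .option("port", 9999)
            .load())

# Process each micro-batch and emit results; the query (and therefore the
# application) keeps running until query.stop() is called.
query = incoming.writeStream.format("console").start()
query.awaitTermination()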

Good question; let's discuss it step by step.
You can build this and it works fine; here is an example:
https://github.com/vaquarkhan/springboot-microservice-apache-spark
You are probably thinking of creating a Dataset or DataFrame, keeping it in memory, and using it as a cache (Redis, GemFire, etc.), but here is the catch:
i) If you only have a few hundred thousand records, you don't really need Apache Spark's power; a plain Java app can return responses very fast.
ii) If you have petabytes of data, loading them into memory as a Dataset or DataFrame will not help, because Apache Spark doesn't support indexing; Spark is not a data management system but a fast batch data processing engine. With GemFire, by contrast, you have the flexibility to add indexes for fast retrieval of data.
Workarounds:
Use Apache Ignite's (https://ignite.apache.org/) in-memory indexes (refer to "Fast Apache Spark SQL Queries").
Use data formats that support indexing, such as ORC or Parquet (see the sketch below).
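As a small illustration of the second workaround (my sketch; the columns and paths are made up), partitioning the Parquet output by a column your queries filter on lets Spark prune files instead of scanning everything:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Toy DataFrame standing in for the lookup data (columns are illustrative).
df = spark.createDataFrame(
    [("EU", 1, "a"), ("US", 2, "b")], ["region", "id", "value"])

# Write as Parquet, partitioned by a column that requests filter on.
df.write.mode("overwrite").partitionBy("region").parquet("/tmp/lookup_parquet")

# Reads that filter on the partition column only touch the matching directories.
hits = spark.read.parquet("/tmp/lookup_parquet").where("region = 'EU'")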
So why not use a Spring application with Apache Spark and simply never call spark.close()?
A Spring application runs as a microservice alongside other services on containers or PCF/Bluemix/AWS/Azure/GCP, whereas Apache Spark lives in its own world and needs compute power that a platform like PCF does not provide.
Spark is not a database, so it cannot "store data". It processes data and holds it temporarily in memory, but that is not persistent storage.
Once a Spark job is submitted, you have to wait for the results; you cannot fetch data in between.
How to use Spark from a Spring application via REST calls:
Apache Livy is a service that enables easy interaction with a Spark cluster over a REST interface. It enables easy submission of Spark jobs or snippets of Spark code, synchronous or asynchronous result retrieval, and Spark context management, all via a simple REST interface or an RPC client library.
https://livy.apache.org/
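To make that concrete, here is a sketch of driving Livy from Python with the requests library, using Livy's documented /sessions and /statements endpoints (the host, port, and submitted snippet are placeholders):

import time
import requests

LIVY = "http://livy-host:8998"  # placeholder Livy endpoint

# 1. Create an interactive PySpark session.
session = requests.post(LIVY + "/sessions", json={"kind": "pyspark"}).json()
session_url = "{}/sessions/{}".format(LIVY, session["id"])

# 2. Wait until the session is idle, then submit a snippet of Spark code.
while requests.get(session_url).json()["state"] != "idle":
    time.sleep(2)
stmt = requests.post(session_url + "/statements",
                     json={"code": "spark.range(100).count()"}).json()

# 3. Poll the statement until it finishes and read its output.
stmt_url = "{}/statements/{}".format(session_url, stmt["id"])
while requests.get(stmt_url).json()["state"] != "available":
    time.sleep(2)
print(requests.get(stmt_url).json()["output"])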

Related

Spark job as a web service?

A peer of mine has created code that opens a RESTful API web service within an interactive Spark job. The intent of our company is to use his code as a means of extracting data from various data sources. He can get it to work on his machine with a local instance of Spark. He insists that this is a good idea, and it is my job as DevOps to implement it with Azure Databricks.
As I understand it, interactive jobs are for one-time analytics inquiries and for the development of non-interactive jobs to be run solely as ETL/ELT work between data sources. There is, of course, the added problem of determining the endpoint for the service binding within the Spark cluster.
But I'm new to Spark, and I have scarcely delved into the mountain of documentation that exists for all the implementations of Spark. Is what he's trying to do a good idea? Is it even possible?
The web service would need to act as a Spark driver. Just as when you run spark-shell, execute some commands, and then use collect() to bring all the data back into the local environment, everything runs in a single driver process. The driver would request executors from the remote Spark cluster and then bring the data back over the network. Apache Livy is one existing implementation of a REST Spark submission server.
It can be done, but depending on the process it would be very asynchronous, and it is not suggested for the large datasets Spark is meant for. Depending on the data that you need (e.g. if you are mostly using Spark SQL), it may be better to query a database directly.
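A minimal sketch of what "the web service acts as the Spark driver" means (my illustration with Flask; the master URL and Parquet path are made up, and this is not the peer's actual code): the HTTP process owns the SparkSession, and each request triggers a job whose small result is collect()-ed back into that single process.

from flask import Flask, jsonify
from pyspark.sql import SparkSession

app = Flask(__name__)

# The web-service process is the Spark driver: it owns the SparkSession and
# connects to the remote cluster (the master URL is a placeholder).
spark = (SparkSession.builder
         .master("spark://spark-master:7077")
         .appName("rest-driver")
         .getOrCreate())

@app.route("/top/<int:n>")
def top_rows(n):
    # Executors on the cluster do the work; collect() pulls the (small) result
    # back over the network into this single driver process.
    rows = spark.read.parquet("/data/events").limit(n).collect()
    return jsonify([row.asDict() for row in rows])

if __name__ == "__main__":
    app.run(port=8080)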

Retrieve graphical information using Spark Structured Streaming

Spark Streaming provided a "Streaming" tab in the Web UI (http://localhost:4040 for running applications or http://localhost:18080 for completed applications, both by default) for each application, where graphs of application performance could be viewed. This tab is no longer available with Spark Structured Streaming. In my case, I am developing a streaming application with Spark Structured Streaming that reads from a Kafka broker, and I would like to obtain a graph of records processed per second, such as the one I could get when using Spark Streaming, among other graphical information.
What is the best alternative to achieve this? I am using Spark 3.0.1 (via pyspark library), and deploying my application on a YARN cluster.
I've checked Monitoring Structured Streaming Applications Using Web UI by Jacek Laskowski, but it is still not very clear how to obtain this type of information in a graphic way.
Thank you in advance!
I managed to get what I wanted. For some reason I still don't know, the Spark History Server UI for completed apps (on http://localhost:18080 by default) did not show the new "Structured Streaming" tab that is available for Structured Streaming applications executed on Spark 3.0.1. However, the web UI at http://localhost:4040 does show the information I wanted to retrieve. You just need to click the 'runId' link of the streaming query whose statistics you want.
If you can't see this tab, based on my personal experience, I recommend the following:
Upgrade to the latest Spark version (currently 3.0.1).
Consult this information in the UI at port 4040 while the application is running, instead of at port 18080 once the application has finished.
I found the official Web UI documentation for the latest Apache Spark very useful for this.
Most of the metrics information you see in the Spark UI is exported by Spark itself.
If the Spark UI doesn't fit your requirements, you can retrieve these metrics and process them yourself.
You can use a sink to export the data, for example to CSV or Prometheus, or read it via the REST API.
You should take a look at Spark monitoring: https://spark.apache.org/docs/latest/monitoring.html
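For example (a sketch built on the StreamingQueryProgress objects that PySpark exposes; the Kafka options and the console sink are placeholders), the records-per-second figures that the old Streaming tab used to plot can be read from query.lastProgress and forwarded to whatever dashboard you like:

import time
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-rate-monitor").getOrCreate()

# Placeholder Kafka source; replace the bootstrap servers and topic.
stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load())

query = stream.writeStream.format("console").start()

# Each progress object mirrors what the UI plots, including rows per second.
while query.isActive:
    progress = query.lastProgress
    if progress:
        print(progress["inputRowsPerSecond"], progress["processedRowsPerSecond"])
    time.sleep(10)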

Apache Spark & Machine Learning - Using in production

I'm having some difficulty figuring out how to use Spark's machine learning capabilities in a real-life production environment.
What I want to do is the following:
Develop a new ML model using notebooks
Serve the learned model via a REST API (something like POST /api/v1/mymodel/predict)
Let's say the ML training process is handled by a notebook, and once the model requirements are fulfilled the model is saved to HDFS, to be loaded later by a Spark application.
I know I could write a long-running Spark application that exposes the API and run it on my Spark cluster, but I don't think this is really a scalable approach, because even if the data transformations and the ML functions ran on the worker nodes, the HTTP/API-related code would still run on one node, the one on which spark-submit is invoked (correct me if I'm wrong).
Another approach is to use the same long-running application, but in a local standalone cluster. I could deploy the same application as many times as I want and put a load balancer in front of it. With this approach the HTTP/API part is handled fine, but the Spark part is not using the cluster's capabilities at all (this may not be a problem, given that it should only perform a single prediction per request).
A third approach uses SparkLauncher, which wraps the Spark job in a separate jar, but I don't really like flying jars, and it is difficult to retrieve the result of the prediction (a queue maybe, or HDFS).
So basically the question is: what is the best approach to consume Spark's ML models through a REST API?
Thank You
You have three options:
Trigger a batch ML job via a Spark API server such as spark-jobserver, upon client request.
Trigger a batch ML job via a scheduler such as Airflow, write the output to a DB, and expose the DB via REST to the client.
Keep a Structured Streaming job (or a recursive function) scanning the input data source, continuously updating/appending a DB, and expose the DB via REST to the client.
If you have a single prediction per request and your input data is constantly updated, I would suggest option 3, which transforms data in near real time at all times, so the client has constant access to the output. You can notify the client when new data is ready by sending a notification via REST or SNS. You could keep a pretty small Spark cluster handling data ingest, and scale the REST service and DB based on request/data volume (load balancer); a rough sketch of this is below.
If you anticipate rare requests where the data source is updated periodically, let's say once a day, option 1 or 2 will be suitable, as you can launch a bigger cluster and shut it down when the job completes.
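Sketch of option 3 (my illustration; the input path, schema, and JDBC details are all placeholders): a small always-on Structured Streaming job keeps the serving database fresh, and the REST layer only ever queries that database.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("scoring-pipeline").getOrCreate()

# Continuously ingest new input data (path and schema are made up).
incoming = spark.readStream.schema("id LONG, feature DOUBLE").json("/data/incoming")

def write_to_db(batch_df, batch_id):
    # Score/transform the micro-batch here, then append it to the serving DB
    # that the REST API reads from (JDBC settings are placeholders).
    (batch_df.write.format("jdbc")
     .option("url", "jdbc:postgresql://db:5432/serving")
     .option("dbtable", "predictions")
     .option("user", "spark")
     .option("password", "secret")
     .mode("append")
     .save())

query = incoming.writeStream.foreachBatch(write_to_db).start()
query.awaitTermination()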
Hope it helps.
The problem is that you don't want to keep your Spark cluster running and deploy your REST API inside it for prediction, because that is slow.
So, to achieve real-time prediction with low latency, here are a couple of solutions.
What we do is train the model, export the model, and use the model outside Spark to do the prediction.
You can export the model as a PMML file if the ML algorithm you used is supported by the PMML standard. Spark ML models can be exported to PMML using the JPMML library. You can then create your REST API and use the JPMML Evaluator to run predictions with your Spark ML models.
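For example, here is a sketch assuming the pyspark2pmml package (a thin wrapper around JPMML-SparkML, which must be available on the driver's classpath); the toy data, columns, and algorithm are made up:

from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark2pmml import PMMLBuilder  # assumes pyspark2pmml is installed

spark = SparkSession.builder.getOrCreate()

# Toy training data and pipeline, purely illustrative.
df = spark.createDataFrame([(1.0, 2.0, 0.0), (3.0, 4.0, 1.0)],
                           ["x1", "x2", "label"])
pipeline = Pipeline(stages=[
    VectorAssembler(inputCols=["x1", "x2"], outputCol="features"),
    LogisticRegression(labelCol="label", featuresCol="features"),
])
model = pipeline.fit(df)

# Export the fitted pipeline as PMML; a JPMML Evaluator behind your REST API
# can then load and score this file with no Spark dependency at serving time.
PMMLBuilder(spark.sparkContext, df, model).buildFile("model.pmml")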
MLeap: MLeap is a common serialization format and execution engine for machine learning pipelines. It supports Spark, scikit-learn, and TensorFlow for training pipelines and exporting them to an MLeap Bundle. Serialized pipelines (bundles) can be deserialized back into Spark for batch-mode scoring, or into the MLeap runtime to power real-time API services. It supports multiple platforms, though I have only used it for Spark ML models, and it works really well.

A Spark long running program as web server

I have written multiple Spark driver programs that load some data from HDFS into DataFrames, run Spark SQL queries on them, and persist the results back to HDFS. Now I need to provide a long-running Java program that receives requests and their parameters (such as the number of top rows to return) from a web application (e.g. a dashboard) via POST/GET and sends the results back to the web application. My web application is somewhere outside the Spark cluster. Briefly, my goal is to send requests and their accompanying data from the web application, via something such as POST, to the long-running Java program; it then receives the request, runs the corresponding Spark driver (Spark app), and returns the results, for example in JSON format.
Is there any solution for developing this use case?
Is Livy a good choice? If your answer is positive, what should I do?
Take a look at the Spark JobServer. I think it has the ability to share RDDs between jobs, which can be a huge performance boost.
https://github.com/spark-jobserver/spark-jobserver

web URL information to apache spark in web app

I am currently trying to retrieve information from the EPA into our web app, which needs to use IBM Bluemix and Apache Spark. The information that we are gathering from the EPA is this:
https://aqs.epa.gov/api and ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis.dailyavgs/surface/
We are not only gathering historical data; we also want to update the web app by inserting new data every hour. Concerning this I have a few questions:
1) Do we need to set up HDFS to store all the data, or could we just retrieve the data by its URL and store it in a DataFrame? IBM Bluemix said it would provide 5 GB of storage, so how would one use that to store the historical data as well as the hourly updates?
2) If we are going to update the data every hour by inserting new data into the data storage / DataFrame, should we still use Spark Streaming? If yes, how would we use Spark Streaming for URL data? A lot of the resources I see online are only useful if one has HDFS or a formal database.
What we currently do is fetch the URL contents in Python:
url = "https://aqs.epa.gov/api/rawData?user=sogun3#gmail.com&pw=baycrane57&format=JSON&param=44201&bdate=20110501&edate=20110501&state=37&county=063"
import urllib2
content = urllib2.urlopen(url).read()
print content
However, if we use this method, it means that Spark needs to be running 24/7 to ensure that the most up-to-date data is used. How does one configure Spark to run 24/7? Or is there a better method to process all the data and put it nicely into a DataFrame so that it can be accessed easily later?
Also, in a web app, can one still use IPython for data processing? Or is IPython just for interacting with the data and exploring it experimentally?
Thanks a lot!
You have options ;-) If you need to read the source EPA data and process it before you use it in your web app, then you can use the Spark service to ETL (extract, transform, load) the source data from the EPA web site, manipulate or wrangle the data into the shape and size you want, and then save it into a storage service like Bluemix Object Storage. Your web app would then read the data in the format you want directly from object storage. However, if the source EPA data is largely in a format you already want to use in the web app, then you most certainly can create RDDs directly from the web site and pull in the data as and when you need it. These datasets look small from my quick peek, so I don't think you need to worry about Spark pulling them into memory for you to work on; i.e. there is no need to try to store them locally with Spark in the Bluemix service cluster. Besides, there is currently no HDFS provided by the Spark service, so, as mentioned above, you would use an external storage service. Re: "IBM Bluemix said it would provide 5 GB of storage": that is intended for storing your personal and third-party Spark libraries and such.
Re: "Spark needs to be running 24/7": the Spark service runs 24x7; your Spark code running on the service will run for as long as you program it to run ;-)
IPython (or Jupyter notebooks) is intended as a REPL for the web. So, yes, it is interactive. In your case, you can certainly write your Spark code in an IPython notebook and have it run for as long as necessary, pulling and processing the EPA data for the web app and storing it in, say, object storage. The web app can then pull the data it needs from object storage. It is said that in the future APIs will be provided for the Spark service, at which point your web app could talk directly to it; in the meantime, you can certainly make something work with notebooks.
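To make the ETL idea concrete, here is a sketch (it assumes the API returns a JSON array of flat records, and the object-storage output path is a placeholder for a Bluemix Object Storage container):

import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("epa-etl").getOrCreate()

# Pull the raw JSON from the EPA API (the same URL as in the question).
url = "https://aqs.epa.gov/api/rawData?user=sogun3#gmail.com&pw=baycrane57&format=JSON&param=44201&bdate=20110501&edate=20110501&state=37&county=063"
records = requests.get(url).json()

# Wrangle the records into a DataFrame and append the hourly batch to object
# storage (the swift:// container/path below is hypothetical).
df = spark.createDataFrame(records)
df.write.mode("append").parquet("swift://notebooks.spark/epa_ozone")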
