I'm having some difficulty figuring out how to use Spark's machine learning capabilities in a real-life production environment.
What I want to do is the following:
Develop a new ML model using notebooks
Serve the trained model through a REST API (something like POST /api/v1/mymodel/predict)
Let's say the ML training process is handled by a notebook, and once the model requirements are fulfilled the model is saved to an HDFS file, to be loaded later by a Spark application.
I know I could write a long-running Spark application that exposes the API and run it on my Spark cluster, but I don't think this is really a scalable approach, because even if the data transformations and the ML functions were run on the worker nodes, the HTTP/API-related code would still run on a single node, the one on which spark-submit is invoked (correct me if I'm wrong).
Another approach is to use the same long-running application, but in a local standalone cluster. I could deploy the same application as many times as I want and put a load balancer in front of it. With this approach the HTTP/API part is handled fine, but the Spark part is not using the cluster's capabilities at all (this might not be a problem, given that it should only perform a single prediction per request).
There is a third approach, which uses SparkLauncher and wraps the Spark job in a separate jar, but I don't really like flying jars, and it is difficult to retrieve the result of the prediction (via a queue, maybe, or HDFS).
So basically the question is: what is the best approach to consume Spark's ML models through a REST API?
Thank You
You have three options:
1. Trigger a batch ML job via a Spark REST API such as spark-jobserver, upon client request
2. Trigger a batch ML job via a scheduler such as Airflow, write the output to a DB, and expose the DB via REST to the client
3. Keep a Structured Streaming / recursive function running to scan the input data source, update/append the DB continuously, and expose the DB via REST to the client
If you have a single prediction per request and your data input is constantly updated, I would suggest option 3, which would transform data in near real time at all times, and the client would have constant access to the output. You can notify the client when new data is ready by sending a notification via REST or SNS. You could keep a pretty small Spark cluster to handle the data ingest, and scale the REST service and DB according to request rate / data volume (load balancer). A minimal sketch of option 3 follows below.
If you anticipate rare requests where the data source is updated periodically, let's say once a day, option 1 or 2 will be suitable, as you can launch a bigger cluster and shut it down when the job completes.
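For illustration, here is a minimal Scala sketch of option 3, assuming a pipeline trained elsewhere and saved to HDFS, a directory where new input files arrive, and a PostgreSQL table read by the REST layer; all paths, schema columns and connection settings are placeholders.

```scala
import org.apache.spark.ml.PipelineModel
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.types._

object ContinuousScoring {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("continuous-scoring").getOrCreate()

    // Pipeline previously trained in a notebook and saved to HDFS
    val model = PipelineModel.load("hdfs:///models/mymodel")

    // Schema of the incoming records (placeholder columns)
    val schema = new StructType()
      .add("id", StringType)
      .add("feature1", DoubleType)
      .add("feature2", DoubleType)

    // Watch a directory for newly arriving input (a Kafka source works the same way)
    val input = spark.readStream.schema(schema).json("hdfs:///incoming/mymodel")

    // Score each micro-batch and append the results to the table the REST
    // service queries; foreachBatch lets us reuse the batch JDBC writer
    model.transform(input)
      .select("id", "prediction")
      .writeStream
      .option("checkpointLocation", "hdfs:///checkpoints/continuous-scoring")
      .foreachBatch { (batch: DataFrame, _: Long) =>
        batch.write
          .format("jdbc")
          .option("url", "jdbc:postgresql://db:5432/serving")
          .option("dbtable", "predictions")
          .option("user", "spark")
          .option("password", "change-me")
          .mode("append")
          .save()
      }
      .start()
      .awaitTermination()
  }
}
```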
Hope it helps.
The problem is that you don't want to keep your Spark cluster running and deploy your REST API inside it for prediction, because that is slow.
So, to achieve real-time prediction with low latency, here are a couple of solutions.
What we are doing is training the model, exporting it, and using the model outside Spark to do the prediction.
You can export the model as a PMML file if the ML algorithm you used is supported by the PMML standard. Spark ML models can be exported to PMML using the JPMML library. You can then create your REST API and use the JPMML Evaluator to make predictions with your Spark ML models.
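As a rough illustration (not the only way to wire it up), that flow can look like the Scala sketch below. The PMMLBuilder call comes from the jpmml-sparkml project and the evaluator calls follow the JPMML-Evaluator README; exact class and method names vary between JPMML versions, and the model path and feature names are placeholders.

```scala
import java.io.File
import scala.jdk.CollectionConverters._ // on Scala 2.12 use scala.collection.JavaConverters._
import org.jpmml.evaluator.{EvaluatorUtil, LoadingModelEvaluatorBuilder}

// Training side (inside Spark), using jpmml-sparkml to export the fitted pipeline:
//   new org.jpmml.sparkml.PMMLBuilder(trainingDf.schema, pipelineModel)
//     .buildFile(new File("mymodel.pmml"))

// Serving side: a plain JVM process behind POST /api/v1/mymodel/predict, no Spark at all
val evaluator = new LoadingModelEvaluatorBuilder()
  .load(new File("mymodel.pmml"))
  .build()
evaluator.verify()

// `request` is the parsed JSON body, keyed by the training-time feature names
def predict(request: Map[String, AnyRef]): java.util.Map[String, _] = {
  val arguments = evaluator.getInputFields.asScala.map { field =>
    // prepare() validates and converts the raw request value for this field
    field.getName -> field.prepare(request(field.getName.getValue))
  }.toMap
  // decodeAll unwraps the computed values into plain result fields
  EvaluatorUtil.decodeAll(evaluator.evaluate(arguments.asJava))
}
```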
MLeap: MLeap is a common serialization format and execution engine for machine learning pipelines. It supports Spark, scikit-learn and TensorFlow for training pipelines and exporting them to an MLeap Bundle. Serialized pipelines (bundles) can be deserialized back into Spark for batch-mode scoring, or into the MLeap runtime to power real-time API services. It supports multiple platforms, though I have only used it for Spark ML models and it works really well.
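A hedged sketch of the MLeap path, following the patterns in the MLeap documentation; the bundle path, column names and exact import paths are assumptions and may differ between MLeap versions.

```scala
import ml.combust.bundle.BundleFile
import ml.combust.mleap.core.types._
import ml.combust.mleap.runtime.MleapSupport._
import ml.combust.mleap.runtime.frame.{DefaultLeapFrame, Row}

// Export (Spark side), sketched as comments since it needs a live SparkSession:
//   import ml.combust.mleap.spark.SparkSupport._
//   import org.apache.spark.ml.bundle.SparkBundleContext
//   implicit val sbc = SparkBundleContext().withDataset(pipelineModel.transform(trainingDf))
//   val out = BundleFile("jar:file:/tmp/mymodel.zip")
//   pipelineModel.writeBundle.save(out); out.close()

// Scoring side: the REST service only needs the MLeap runtime, no SparkContext
val bundleFile = BundleFile("jar:file:/tmp/mymodel.zip")
val pipeline = bundleFile.loadMleapBundle().get.root
bundleFile.close()

// Input schema must mirror the columns the pipeline was trained on
val schema = StructType(
  StructField("feature1", ScalarType.Double),
  StructField("feature2", ScalarType.Double)).get

def predict(feature1: Double, feature2: Double): DefaultLeapFrame = {
  val frame = DefaultLeapFrame(schema, Seq(Row(feature1, feature2)))
  pipeline.transform(frame).get // adds the pipeline's output columns, e.g. "prediction"
}
```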
Related
A peer of mine has created code that opens a RESTful API web service within an interactive Spark job. The intent of our company is to use his code as a means of extracting data from various data sources. He can get it to work on his machine with a local instance of Spark. He insists that this is a good idea, and it is my job as DevOps to implement it with Azure Databricks.
As I understand it, interactive jobs are for one-time analytics inquiries and for developing non-interactive jobs to be run solely as ETL/ELT work between data sources. There is of course the added problem of determining the endpoint for the service binding within the Spark cluster.
But I'm new to Spark and have scarcely delved into the mountain of documentation that exists for all the implementations of Spark. Is what he's trying to do a good idea? Is it even possible?
The web service would need to act as a Spark driver. Just as you would run spark-shell, run some commands, and then use collect() to bring all the data back into the local environment, it all runs in a single JVM. The service would request executors from a remote Spark cluster and then bring the data back over the network. Apache Livy is one existing implementation of a REST Spark submission server.
It can be done, but depending on the process it would be very asynchronous, and it is not suggested for the large datasets that Spark is meant for. Depending on the data that you need (e.g. if you are heavily using Spark SQL), it may be better to query a database directly.
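To make the "web service as driver" idea concrete, here is a minimal Scala sketch; the master URL, data path and query are placeholders, and on Databricks you would more realistically go through a gateway such as Livy or Databricks' own job/SQL endpoints rather than attaching a driver from an arbitrary host.

```scala
import org.apache.spark.sql.SparkSession

// The web-service JVM is the driver: it plans the work, the cluster executes it
val spark = SparkSession.builder()
  .appName("rest-driver")
  .master("spark://spark-master:7077") // executors are requested from this cluster
  .getOrCreate()

// Runs on the executors; collect() then pulls the result rows back over the
// network into this single JVM, so keep the result set small
val rows = spark.read.parquet("hdfs:///data/events")
  .filter("event_type = 'purchase'")
  .limit(1000)
  .collect()
```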
I have just started learning Spark and have also set up a small cluster on top of YARN and HDFS. I have submitted small jobs for testing as well. However, I want to know if we can use Spark as a real-time constraint engine. Let me give an example. I want to build a web app where a user provides a JSON object via an API. The user also provides some constraints (e.g. if some_value < 100 remove the row). Can I perform these operations concurrently using Spark and provide a quick response to the client (the response being returned as the API response)? One important requirement is that I will have to respond in a very short time, less than a second.
I see that Spark is all about submitting jobs and processing them concurrently. Can the above requirements be fulfilled by Spark?
I have a question regarding a specific Spark application usage.
I want our Spark application to run as a REST API server, like a Spring Boot application, so it will not be a batch process. Instead we will load the application and keep it alive (no call to spark.close()), using it as a real-time query engine via some API that we will define. I am targeting deployment to Databricks. Any suggestions would be good.
I have checked Apache Livy, but I'm not sure whether it would be a good option or not.
Any suggestions would be helpful.
Spark isn't designed to run like this; it has no REST API server frameworks built in, other than the HistoryServer and the worker UI.
If you wanted a long-running Spark action, then you could use Spark Streaming and issue actions to it via raw sockets, Kafka, etc. rather than HTTP methods.
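As a rough sketch of the Kafka variant (topic names, broker address, model path and the request schema are all placeholders, and the spark-sql-kafka package is assumed to be on the classpath): a long-running Structured Streaming query reads prediction requests from one topic, applies a saved pipeline, and writes the answers to a reply topic that the client consumes.

```scala
import org.apache.spark.ml.PipelineModel
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

val spark = SparkSession.builder.appName("kafka-scoring").getOrCreate()
import spark.implicits._

val model = PipelineModel.load("hdfs:///models/mymodel")

// Shape of the JSON prediction requests (placeholder fields)
val requestSchema = new StructType()
  .add("id", StringType)
  .add("feature1", DoubleType)
  .add("feature2", DoubleType)

val requests = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "kafka:9092")
  .option("subscribe", "prediction-requests")
  .load()
  .select(from_json($"value".cast("string"), requestSchema).as("req"))
  .select("req.*")

// Score and publish the answers; the caller correlates them by id
model.transform(requests)
  .select($"id".as("key"), to_json(struct($"id", $"prediction")).as("value"))
  .writeStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "kafka:9092")
  .option("topic", "prediction-responses")
  .option("checkpointLocation", "hdfs:///checkpoints/kafka-scoring")
  .start()
  .awaitTermination()
```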
Good question; let's discuss it step by step.
You can create it and it works fine; the following is an example:
https://github.com/vaquarkhan/springboot-microservice-apache-spark
I am sure you are thinking of creating a Dataset or DataFrame, keeping it in memory, and using it like a cache (Redis, GemFire, etc.), but here is the catch:
i) If you only have data in the few-hundred-thousand-record range, then you really don't need Apache Spark's power; a plain Java app is good enough to return a response really fast.
ii) If you have data in the petabyte range, then loading it into memory as a Dataset or DataFrame will not help, because Apache Spark doesn't support indexing; Spark is not a data management system but a fast batch data processing engine. With GemFire you have the flexibility to add indexes for fast retrieval of data.
Workarounds:
Use Apache Ignite's (https://ignite.apache.org/) in-memory indexes (see "Fast Apache Spark SQL Queries").
Use data formats that support indexing, like ORC, Parquet, etc.; a small illustration follows this list.
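A small illustration of the columnar-format workaround, assuming placeholder paths and columns: lay the data out partitioned by a lookup key so that a filtered read only scans a fraction of the files.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("parquet-layout").getOrCreate()

// One-off (or scheduled) job: rewrite the raw data partitioned by a query key
spark.read.parquet("hdfs:///raw/events")
  .write
  .partitionBy("event_date")            // one directory per date
  .parquet("hdfs:///serving/events")

// Query side: partition pruning skips whole directories, and Parquet's
// min/max column statistics let Spark skip row groups within the files
val result = spark.read.parquet("hdfs:///serving/events")
  .filter("event_date = '2019-06-01' AND amount > 100")
  .collect()
```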
So why not use a Spring application with Apache Spark without calling spark.close()?
A Spring application as a microservice needs other services, either in containers or on PCF/Bluemix/AWS/Azure/GCP etc., whereas Apache Spark lives in its own world and needs compute power that is not available on PCF.
Spark is not a database, so it cannot "store data". It processes data and stores it temporarily in memory, but that's not persistent storage.
Once a Spark job is submitted you have to wait for the results; in between you cannot fetch data.
How to use Spark with a Spring application via a REST API call:
Apache Livy is a service that enables easy interaction with a Spark cluster over a REST interface. It enables easy submission of Spark jobs or snippets of Spark code, synchronous or asynchronous result retrieval, as well as Spark Context management, all via a simple REST interface or an RPC client library.
https://livy.apache.org/
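For illustration, a hedged Scala sketch using Livy's RPC client library (the plain REST endpoints work as well); the Livy URL, jar path and job body are placeholders, and the client API shown follows the Livy programmatic API docs and may differ between versions.

```scala
import java.io.File
import java.net.URI
import org.apache.livy.{Job, JobContext, LivyClientBuilder}

val client = new LivyClientBuilder()
  .setURI(new URI("http://livy-server:8998"))
  .build()

// Ship the jar containing the scoring/query code to the Livy-managed Spark app
client.uploadJar(new File("/opt/app/scoring-job.jar")).get()

// Submit a job; it runs inside the remote SparkContext that Livy keeps alive,
// and the result comes back to this (e.g. Spring) process as a Future
val rowCount: java.lang.Long = client.submit(new Job[java.lang.Long] {
  override def call(jc: JobContext): java.lang.Long = {
    // jc.sc() is the remote JavaSparkContext; real code would run the query
    // or score the request here
    jc.sc().textFile("hdfs:///data/events").count()
  }
}).get()

client.stop(true)
```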
I have a Spark Streaming application which trains a model and periodically stores it to HDFS. In an HTTP-based web service, I would like to POST some values and retrieve a prediction for them. The service should also reload the model on demand (e.g. via a GET request).
I implemented a web server with Spark and Spray, and it works as a proof of concept. But I'm not sure it is a good design solution. What about exposing the web server to external services if it runs on a cluster? How can I define on which node the service will be available? I'm not even sure it is the right idea to use prediction models in this way. Maybe the best practice is to integrate Spark into a standalone application and access the model on a shared filesystem (e.g. HDFS), but that will lack cluster support, won't it?
Summary: what is the best-practice design for building a prediction web service with Apache Spark?
I am surveying Google Dataflow and Apache Spark to decide which one is the more suitable solution for our big data analysis business needs.
I found that the Spark platform has Spark SQL and MLlib for structured data queries and machine learning.
I wonder whether there is any corresponding solution in the Google Dataflow platform?
It would help if you could expand a bit on your specific use case(s). What are you trying to accomplish in relation to "big data analysis"? The short answer... it depends :-)
Here are some key architectural points to consider in relation to Google Cloud Dataflow vs. Spark and Hadoop MR.
Resource Mgmt: Cloud Dataflow is a completely on-demand execution environment. Specifically, when you execute a job in Dataflow, resources are allocated on demand for that job only; there is no sharing/contention of resources across jobs. In comparison, with a Spark or MapReduce cluster you would typically deploy a cluster of X nodes, then submit jobs and tune the node resources across jobs. Of course you can build up and tear down these clusters, but the Dataflow model is geared towards hands-free DevOps in relation to resource management. If you want to match resource usage to job demands, Dataflow is a solid model for controlling cost and nearly forgetting about resource tuning. If you prefer a multi-tenant-style cluster, I'd suggest you look at Google Cloud Dataproc, as it provides the on-demand cluster management aspects of Dataflow, but focused on classic Hadoop workloads like MR, Spark, Pig, ...
Interactivity: Currently Cloud Dataflow does not provide an interactive mode, meaning that once you submit a job, the worker resources are bound to the graph that was submitted and the majority of the data is loaded into resources as needed. Spark can be a better model if you want to load data into the cluster via in-memory RDDs and then dynamically execute queries against them. The challenge is that as your data sizes and query complexity increase, you will have to handle the DevOps. Now, if most of your queries can be expressed in SQL syntax, you may want to look at BigQuery. BigQuery provides the "on demand" aspects of Dataflow and enables you to interactively execute queries over massive amounts of data, e.g. petabytes. The biggest advantage of BigQuery, in my opinion, is that you do not have to think/worry about hardware allocation to deal with your data sizes; as your data grows you don't have to think about hardware (memory and disk size) configuration.
Programming Model: Dataflow's programming model is functionally biased compared with a classic MapReduce model. There are many similarities between Spark and Dataflow in terms of API primitives. Things to consider: 1) Dataflow's primary programming language is Java; a Python SDK is in the works. The Dataflow Java SDK is open sourced and has been ported to Scala. Today, Spark has more SDK surface choice with GraphX, Streaming, Spark SQL, and ML. 2) Dataflow is a unified programming model for batch- and streaming-based DAG development. The goal was to remove the complexity and cost of switching when moving between batch and streaming models; the same graph can seamlessly run in either mode. 3) Today, Cloud Dataflow does not support converging/iterative graph execution. If you need the power of something like MLlib, then Spark is the way to go. Keep in mind this is the state of things today.
Streaming & Windowing: Dataflow (building on the unified programming model) was architected to be a highly reliable, durable, and scalable execution environment for streaming. One of the key differences between Dataflow and Spark is that Dataflow enables you to easily process data in terms of its true event time rather than solely processing it at its arrival time into the graph. You can window data into fixed, sliding, session or custom windows based on event time or arrival time. Dataflow also provides triggers (applied to windows) that enable you to control how you want to handle late-arriving data. Net-net, you dial in the level of correctness control to meet the needs of your analysis. For example, let's say you have a mobile game that interacts with 100 edge nodes, and these nodes create tens of thousands of events per second related to game play. Let's say a group of nodes can't communicate with your back-end streaming analysis system. In the case of Dataflow, once that data does arrive, you can control how you'd like to handle it in relation to your query-correctness needs. Dataflow also provides the ability to upgrade your streaming jobs while they are in flight. For example, let's say you discover a logical bug in a transform; you can upgrade your in-flight job without losing your existing windowed state. Net-net, you can keep your business running.
Net-net:
- if you are primarily doing ETL-style work (filtering, shaping, joining, ...) or batch-style MapReduce, Dataflow is a great path if you want minimal DevOps.
- if you need to implement ML-style graphs, go the Spark path and give Dataproc a try.
- if you are doing ML and first need to do ETL to clean up your training data, implement a hybrid with Dataflow and Dataproc.
- if you need interactivity, Spark is a solid choice, but so is BigQuery if you can express your queries in SQL.
- if you need to process your ETL and/or MR jobs over streams, Dataflow is a solid choice.
So... what are you scenarios?
I've tried both:
Dataflow is still very young; there is no "out-of-the-box" solution for doing ML with it (even though you could implement algorithms in transforms). You could output the processed data to cloud storage and read it later with another tool.
Spark would be recommended, but you would have to manage your cluster yourself.
However, there is a good alternative: Google Dataproc.
You can develop analysis tools with Spark and deploy them with one command on your cluster; Dataproc will manage the cluster itself without you having to tweak the configuration.
I have built code using both Spark and Dataflow. Let me share my thoughts.
Spark/Dataproc: I have used Spark (PySpark) a lot for ETL. You can use SQL and any programming language of your choice. Lots of functions are available (including window functions). Build your DataFrame, write your transformations, and it can be super fast. Once the data is cached, any operation on the DataFrame will be quick.
You can simply build a Hive external table on GCS, then use Spark for ETL and load the data into BigQuery. This is for batch processing; a minimal sketch of that path follows below.
For streaming you can use Spark Streaming and load the data into BigQuery.
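A hedged sketch of that batch path with the spark-bigquery connector (available on Dataproc); bucket, dataset and table names are placeholders, and the option names follow the connector docs.

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.appName("gcs-to-bq").getOrCreate()

// Read raw files from GCS (the same data a Hive external table would point at)
val events = spark.read
  .option("header", "true")
  .csv("gs://my-bucket/raw/events/*.csv")

// Transform, then load into BigQuery via the connector's indirect write path
events.filter("amount IS NOT NULL")
  .write
  .format("bigquery")
  .option("temporaryGcsBucket", "my-tmp-bucket") // staging bucket used by the connector
  .mode("append")
  .save("my_dataset.events")
```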
Now if you already have a cluster, you have to think about whether to move to Google Cloud or not. I found the Dataproc (Google Cloud Hadoop/Spark) offering better, as you don't have to worry about much of the cluster management.
Dataflow: It is also known as Apache Beam. Here you can write your code in Java/Python or another supported language, and you can execute it on multiple runners (Spark/MR/Flink). It is a unified model, and you can do both batch processing and stream data processing with it.
Google now offers both programming models, MapReduce and Spark: Cloud Dataflow and Cloud Dataproc, respectively.