How can one train (fit) a model on a distributed big-data platform (e.g. Apache Spark), yet use that model on a standalone machine (e.g. a plain JVM) with as few dependencies as possible?
I have heard of PMML, yet I am not sure whether it is enough. Also, Spark 2.0 supports persistent model saving, yet I am not sure what is necessary to load and run those models.
Apache Spark persistence is about saving and loading Spark ML pipelines in a JSON data format (think of it as Python's pickle mechanism, or R's RDS mechanism). These JSON data structures map to Spark ML classes; they don't make sense on other platforms.
As for PMML, you can convert Spark ML pipelines to PMML documents using the JPMML-SparkML library, and you can execute PMML documents (it doesn't matter whether they came from Apache Spark, Python, or R) using the JPMML-Evaluator library. If you're using Apache Maven to manage and build your project, then JPMML-Evaluator can be included by adding just one dependency declaration to your project's POM.
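For the Maven setup mentioned above, the dependency declaration would look roughly like this (the coordinates are JPMML-Evaluator's; the version shown is only an example, so check the project's releases for a current one):

```xml
<dependency>
  <groupId>org.jpmml</groupId>
  <artifactId>pmml-evaluator</artifactId>
  <!-- Example version only; use the latest JPMML-Evaluator release. -->
  <version>1.4.14</version>
</dependency>
```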
Related
I want to use some of the classifiers provided by MLlib (random forests, etc.), but I want to use them without connecting to a Spark cluster.
If I need to somehow run some Spark stuff in-process so that I have a Spark context to use, that's fine. But I haven't been able to find any information or examples for such a use case.
So my two questions are:
Is there a way to use the MLlib classifiers without a Spark context at all?
Otherwise, can I use them by starting a Spark context in-process, without needing any kind of actual Spark installation?
org.apache.spark.mllib models:
Cannot be trained without a Spark cluster.
Can usually be used for predictions without a cluster, with the exception of distributed models like ALS.
org.apache.spark.ml models:
Require a Spark cluster for training.
Require a Spark cluster for predictions, although that might change in the future (https://issues.apache.org/jira/browse/SPARK-10413).
There are a number of third-party tools designed to export Spark ML models to a form that can be used in a Spark-agnostic environment (JPMML-SparkML and modeldb, to name a few, without special preference).
Spark MLlib models have limited PMML support as well.
Commercial vendors usually provide their own tools for productionizing Spark models.
You can of course use a local "cluster", but it is probably still a bit too heavy for most possible applications. Starting a full context takes at least a few seconds and has a significant memory footprint.
Also:
Best Practice to launch Spark Applications via Web Application?
How to serve a Spark MLlib model?
I'm having some difficulty figuring out how to use Spark's machine-learning capabilities in a real-life production environment.
What I want to do is the following:
Develop a new ML model using notebooks
Serve the learned model using a REST API (something like POST /api/v1/mymodel/predict)
Let's say the ML training process is handled by a notebook, and once the model requirements are fulfilled it's saved to an HDFS file, to be loaded later by a Spark application.
I know I could write a long-running Spark application that exposes the API and runs on my Spark cluster, but I don't think this is really a scalable approach, because even if the data transformations and the ML functions were to run on the worker nodes, the HTTP/API-related code would still run on one node, the one on which spark-submit is invoked (correct me if I'm wrong).
Another approach is to use the same long-running application, but in a local standalone cluster. I could deploy the same application as many times as I want and put a load balancer in front of it. With this approach the HTTP/API part is handled fine, but the Spark part is not using the cluster's capabilities at all (this may not be a problem, given that it should only perform a single prediction per request).
There is a third approach which uses SparkLauncher and wraps the Spark job in a separate JAR, but I don't really like flying JARs, and it is difficult to retrieve the result of the prediction (a queue, maybe, or HDFS).
So basically the question is: what is the best approach to consume Spark's ML models through a REST API?
Thank you
You have three options:
Trigger a batch ML job via the Spark API (spark-jobserver) upon client request.
Trigger a batch ML job via a scheduler (Airflow), write the output to a DB, and expose the DB via REST to the client.
Keep a Structured Streaming / recursive function running to scan the input data source and update/append the DB continuously, exposing the DB via REST to the client.
If you have a single prediction per request and your data input is constantly updated, I would suggest option 3, which transforms data in near real time at all times and gives the client constant access to the output. You can notify the client when new data is ready by sending a notification via REST or SNS. You could keep a pretty small Spark cluster to handle data ingest, and scale the REST service and DB with request/data volume (load balancer).
If you anticipate rare requests, where the data source is updated periodically (say, once a day), options 1 or 2 will be suitable, as you can launch a bigger cluster and shut it down when the job completes.
Hope it helps.
The problem is that you don't want to keep your Spark cluster running and deploy your REST API inside it for prediction, as that is slow.
So, to achieve real-time prediction with low latency, here are a couple of solutions.
What we are doing is training the model, exporting the model, and using the model outside Spark to do the prediction.
You can export the model as a PMML file if the ML algorithm you used is supported by the PMML standard. Spark ML models can be exported to PMML using the JPMML-SparkML library. You can then create your REST API and use JPMML-Evaluator to predict with your Spark ML models.
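PMML is plain XML, which is why it travels between platforms. A real evaluator such as JPMML-Evaluator implements the full specification (missing-value handling, all model types, all predicate operators); purely to illustrate the idea, here is a toy Python scorer for a hand-written two-leaf PMML decision tree (the document and the field name are made up):

```python
# Toy illustration only: real PMML evaluation is what libraries like
# JPMML-Evaluator implement. This walks a minimal TreeModel by hand.
import xml.etree.ElementTree as ET

# A hand-written, minimal PMML tree: predict "yes" if petal_length < 2.5.
PMML = """<PMML xmlns="http://www.dmg.org/PMML-4_3" version="4.3">
  <TreeModel functionName="classification">
    <Node score="unknown">
      <True/>
      <Node score="yes">
        <SimplePredicate field="petal_length" operator="lessThan" value="2.5"/>
      </Node>
      <Node score="no">
        <SimplePredicate field="petal_length" operator="greaterOrEqual" value="2.5"/>
      </Node>
    </Node>
  </TreeModel>
</PMML>"""

NS = {"p": "http://www.dmg.org/PMML-4_3"}

def predicate_holds(node, row):
    """Evaluate a node's predicate (only <True/> and two operators here)."""
    pred = node.find("p:SimplePredicate", NS)
    if pred is None:                      # the <True/> predicate
        return node.find("p:True", NS) is not None
    value = row[pred.get("field")]
    threshold = float(pred.get("value"))
    op = pred.get("operator")
    return (value < threshold) if op == "lessThan" else (value >= threshold)

def score(pmml_text, row):
    """Descend from the root node to a leaf and return its score."""
    root = ET.fromstring(pmml_text)
    node = root.find(".//p:TreeModel/p:Node", NS)
    while True:
        children = node.findall("p:Node", NS)
        nxt = next((c for c in children if predicate_holds(c, row)), None)
        if nxt is None:
            return node.get("score")
        node = nxt

print(score(PMML, {"petal_length": 1.4}))  # yes
print(score(PMML, {"petal_length": 5.1}))  # no
```

The point is only that a PMML document is self-describing scoring logic, so any runtime that can parse it, on any platform, can reproduce the prediction without Spark.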
MLeap is a common serialization format and execution engine for machine learning pipelines. It supports Spark, scikit-learn, and TensorFlow for training pipelines and exporting them to an MLeap Bundle. Serialized pipelines (bundles) can be deserialized back into Spark for batch-mode scoring, or into the MLeap runtime to power real-time API services. It supports multiple platforms, though I have only used it for Spark ML models, and it works really well.
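To make the REST side concrete, here is a minimal stdlib-only Python sketch of the endpoint shape from the question (POST /api/v1/mymodel/predict). The predict() function is a placeholder; in a real service it would delegate to a model deserialized via the MLeap runtime or a PMML evaluator:

```python
# Minimal sketch of a model-serving REST endpoint (Python stdlib only).
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    # Placeholder threshold rule standing in for a real exported model.
    return {"label": "yes" if features.get("petal_length", 0) < 2.5 else "no"}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/api/v1/mymodel/predict":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        features = json.loads(self.rfile.read(length))
        body = json.dumps(predict(features)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

# To serve: HTTPServer(("", 8080), PredictHandler).serve_forever()
```

Because the model lives in the process, this service has no Spark dependency at all, and you can run as many replicas behind a load balancer as the request volume needs.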
Basically, what I need to do is integrate the CTBNCToolkit with Apache Spark, so that this toolkit can take advantage of the concurrency and clustering features of Apache Spark.
In general, I want to know whether there is any way, exposed by the Apache Spark developers, to integrate any Java/Scala library so that the machine-learning library can run on top of Spark's concurrency management.
So the goal is to make standalone machine-learning libraries faster and concurrent.
No, that's not possible.
So what you want is for an arbitrary algorithm to run on Spark. But to parallelize the work, Spark uses RDDs or Datasets, so in order for your tasks to run in parallel, the algorithm would have to be expressed in terms of these classes.
The only thing you could try is to write your own Spark program that makes use of the other library. But I'm not sure whether that's possible in your case. Besides, isn't Spark ML enough for you?
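The point about RDDs and Datasets can be illustrated without Spark: a library only benefits from a data-parallel engine if its work can be expressed as independent operations over records or partitions. A plain-Python sketch of that shape (concurrent.futures stands in for Spark's RDD API here, and analyze is a hypothetical stand-in for a call into a third-party library):

```python
# The engine can only parallelize work that is expressed as independent
# per-record (or per-partition) operations it controls.
from concurrent.futures import ThreadPoolExecutor

def analyze(record):
    # Hypothetical stand-in for a call into a third-party library.
    return record * record

records = range(10)
with ThreadPoolExecutor(max_workers=4) as pool:
    # Spark equivalent: sc.parallelize(records).map(analyze).collect()
    results = list(pool.map(analyze, records))
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

A library whose internals are sequential and stateful, like a toolkit that was never written against such a map-style API, gains nothing from being launched inside a Spark job.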
As said here:
There is some overlap between UIMA and Spark in their distribution infrastructures. I was planning to use UIMA with Spark (now I am moving to uimaFIT). Can anyone tell me what problems we really face when we develop UIMA with Spark,
and what the possible pitfalls are?
(Sorry, I haven't done any research on this.)
The main problem is accessing objects, because UIMA tries to re-instantiate objects when running its analysis engines. If the objects hold local references, there will be a problem accessing them from a remote Spark cluster, and some RDD functions might not work within a UIMA context. However, if you don't use a separate remote cluster, there won't be a problem. (I am talking about uimaFIT 2.2.)
Apache Apex is an open-source, enterprise-grade unified stream and batch processing platform. It is used in the GE Predix platform for IoT.
What are the key differences between these two platforms?
Questions
From a data-science perspective, how is it different from Spark?
Does Apache Apex provide functionality like Spark MLlib? If we have to build scalable ML models on Apache Apex, how do we do it, and which language should we use?
Will data scientists have to learn Java to build scalable ML models? Does it have a Python API like PySpark?
Can Apache Apex be integrated with Spark, and can we use Spark MLlib on top of Apex to build ML models?
Apache Apex is an engine for processing streaming data. Others that try to achieve the same are Apache Storm and Apache Flink. The differentiating factor for Apache Apex is that it comes with built-in support for fault tolerance, scalability, and a focus on operability, which are key considerations in production use cases.
Comparing it with Spark: Apache Spark is actually batch processing. If you consider Spark Streaming (which uses Spark underneath), then it is micro-batch processing. In contrast, Apache Apex does true stream processing, in the sense that an incoming record does NOT have to wait for the next record before processing; a record is processed and sent to the next level of processing as soon as it arrives.
Currently, work is in progress to add support for integrating Apache Apex with machine-learning libraries like Apache SAMOA and H2O.
Refer to https://issues.apache.org/jira/browse/SAMOA-49
Currently, it has support for Java and Scala.
https://www.datatorrent.com/blog/blog-writing-apache-apex-application-in-scala/
For Python, you may try using Jython. But I haven't tried it myself, so I'm not very sure about it.
Integration with Spark may not be a good idea, considering they are two different processing engines. But Apache Apex integration with machine-learning libraries is in progress.
If you have any other questions or feature requests, you can post them on the mailing list for Apache Apex users: https://mail-archives.apache.org/mod_mbox/incubator-apex-users/