Using AWS SageMaker to evaluate model performance without creating an endpoint - PyTorch

I've been using Amazon SageMaker Notebooks to build a PyTorch model for an NLP task.
I know you can use SageMaker for training, deployment, hyperparameter tuning, and model monitoring.
However, it looks like you have to create an inference endpoint in order to monitor the model's inference performance.
I already have an EC2 instance set up to perform inference on our model, which is currently on a development box, and I'd rather not use an endpoint.
Is it possible to use SageMaker to train, run hyperparameter tuning and model evaluation without creating an endpoint?

If you don't want to keep an inference endpoint up, one option is to use SageMaker Processing: run a job that takes your trained model and test dataset as input, performs inference and computes evaluation metrics, and saves them to S3 as a JSON file.
This Jupyter notebook example steps through (1) preprocessing the training and test data, (2) training a model, and (3) evaluating the model.
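For a rough sense of what that looks like with the SageMaker Python SDK, here is a minimal sketch of such an evaluation job; the image URI, S3 paths and the evaluate.py script are placeholders you'd replace with your own.

```python
import sagemaker
from sagemaker.processing import ScriptProcessor, ProcessingInput, ProcessingOutput

role = sagemaker.get_execution_role()

# Hypothetical evaluation job: image URI, S3 paths and script name are placeholders.
processor = ScriptProcessor(
    image_uri="<your-pytorch-inference-image>",
    command=["python3"],
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
)

processor.run(
    code="evaluate.py",  # loads the model, runs inference on the test set, writes a metrics JSON
    inputs=[
        ProcessingInput(source="s3://my-bucket/model/model.tar.gz",
                        destination="/opt/ml/processing/model"),
        ProcessingInput(source="s3://my-bucket/data/test/",
                        destination="/opt/ml/processing/test"),
    ],
    outputs=[
        ProcessingOutput(source="/opt/ml/processing/evaluation",
                         destination="s3://my-bucket/evaluation/"),
    ],
)
```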

You can deploy your model on AWS SageMaker using two approaches: setting up a real-time endpoint or creating a batch transform job. The latter is probably what you want to try here.
The good thing about using a batch transform job is that you can specify S3 paths for both the input and the output data. When the job completes, it uploads the output directly to that S3 path.
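As a hedged sketch (the S3 paths, entry point and framework versions below are assumptions), a batch transform job with the SageMaker Python SDK looks roughly like this:

```python
import sagemaker
from sagemaker.pytorch import PyTorchModel

role = sagemaker.get_execution_role()

# Hypothetical model artifact and inference script; replace with your own.
model = PyTorchModel(
    model_data="s3://my-bucket/model/model.tar.gz",
    role=role,
    entry_point="inference.py",
    framework_version="1.12",
    py_version="py38",
)

transformer = model.transformer(
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/batch-output/",
)

# Score every file under the input prefix and write the results to S3.
transformer.transform(
    data="s3://my-bucket/batch-input/",
    content_type="application/jsonlines",  # adjust to whatever your inference.py handler accepts
    split_type="Line",
)
transformer.wait()
```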

Related

Vertex AI custom model training for a PySpark ML model

Is it possible to train a Spark/PySpark MLlib model using Vertex AI custom container model building? I couldn't find any reference to Spark model training in the Vertex AI documentation. For distributed model training, the only options available seem to be PyTorch or TensorFlow.
It is possible with custom containers if you leverage the Spark Kubernetes operator, but this is not a well-documented workflow and will require a complex setup. GCP's preferred way to run Spark jobs is on Dataproc (https://cloud.google.com/dataproc), which supports PySpark, SparkR and Scala. You can still trigger a Dataproc Spark job from Vertex Pipelines and save the model for predictions in Vertex via MLeap.
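As a rough sketch of that last pattern, a Vertex AI Pipeline step that launches a PySpark job on Dataproc Serverless via the Google Cloud Pipeline Components could look like the following; the project, region and GCS path are placeholders.

```python
# Hypothetical pipeline step that runs a PySpark training script on Dataproc
# Serverless from a Vertex AI Pipeline; project, region and GCS path are placeholders.
from kfp import dsl
from google_cloud_pipeline_components.v1.dataproc import DataprocPySparkBatchOp


@dsl.pipeline(name="pyspark-training-pipeline")
def pyspark_training_pipeline(project: str, location: str = "us-central1"):
    DataprocPySparkBatchOp(
        project=project,
        location=location,
        main_python_file_uri="gs://my-bucket/train_spark_model.py",
    )
```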

Can I train a model using a notebook instance of AWS SageMaker?

I want to train a PyTorch-based model using AWS SageMaker, but I don't know how to use the ML training services. So I was wondering: is it possible to simply train the model on the notebook instance and upload it to an S3 bucket? Also, how would I ensure that I do not end up paying more than I need to?
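For what it's worth, the workflow described here - training on the notebook instance itself and pushing the artifact to S3 - is roughly the following sketch; the model, bucket and key are placeholders.

```python
# Illustrative only: train inside the notebook instance and upload the artifact
# to S3 with boto3. The model, bucket and key are placeholders.
import boto3
import torch

model = torch.nn.Linear(10, 2)  # stand-in for your real PyTorch model
# ... run your usual training loop here, on the notebook instance ...

torch.save(model.state_dict(), "model.pt")

s3 = boto3.client("s3")
s3.upload_file("model.pt", "my-bucket", "models/model.pt")
```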

How can I retrieve the result of a trained model with Kubeflow Fairing?

I am using Kubeflow Fairing to train a TensorFlow model on Kubernetes. The training succeeds but now I want to serve a prediction endpoint.
How can I retrieve the saved TensorFlow session (weights, biases etc.) from the training step so that I can do this? At the moment the result of the training step is saved inside the Docker container running on the Kubernetes cluster.
I had misunderstood the scope of Kubeflow Fairing - at the time of writing it doesn't support copying the trained model from the Fairing job back to where the code was run from, nor is this necessarily desirable.
I instead used the MinIO instance provisioned by Kubeflow to store and retrieve tarballs of trained models.
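For reference, pushing and pulling a model tarball with the minio Python client looks roughly like this; the endpoint, credentials and bucket names are placeholders for whatever your Kubeflow deployment provisions.

```python
from minio import Minio

# Placeholder endpoint and credentials; use the values from your Kubeflow install.
client = Minio(
    "minio-service.kubeflow.svc.cluster.local:9000",
    access_key="minio",
    secret_key="minio123",
    secure=False,
)

bucket = "trained-models"
if not client.bucket_exists(bucket):
    client.make_bucket(bucket)

# Upload the tarball produced by the training job ...
client.fput_object(bucket, "model-v1.tar.gz", "/tmp/model-v1.tar.gz")

# ... and later fetch it wherever the serving code runs.
client.fget_object(bucket, "model-v1.tar.gz", "/tmp/model-v1.tar.gz")
```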

TensorFlow Model Deployment in GCP without TensorFlow Serving

Machine Learning Model: TensorFlow-based (version 1.9), Python version 3.6
Data Input: From BigQuery
Data Output: To BigQuery
Production prediction frequency: Monthly
I have developed a TensorFlow-based machine learning model. I have trained it locally and want to deploy it on Google Cloud Platform for predictions.
The model reads input data from Google BigQuery and the output predictions have to be written back to Google BigQuery. There are some data preparation scripts which have to be run before the model prediction runs. Currently I cannot use BigQuery ML in production as it is in the beta stage. Additionally, as it is a batch prediction, I don't think TensorFlow Serving is a good choice.
Strategies which I have tried for deployment:
Use Google ML Engine for prediction: this approach creates output part files on GCS. These have to be combined and written to Google BigQuery. So in this approach I have to spin up a VM just to execute the data preparation script and the script that moves the ML Engine output to Google BigQuery. This adds up to a 24x7 VM cost just for running two scripts a month.
Use Dataflow for the data preparation script execution along with Google ML Engine: Dataflow uses Python 2.7 while the model is developed in TensorFlow 1.9 with Python 3.6, so this approach cannot be used.
Google App Engine: with this approach a complete web application has to be developed in order to serve predictions. As the predictions are run in batch, this approach is not suitable. Additionally, Flask/Django has to be integrated with the code in order to use it.
Google Compute Engine: with this approach the VM would be running 24x7 just for running the monthly predictions and the two scripts. This would cause a lot of cost overhead.
I would like to know what is the best deployment approach for TensorFlow models that have some pre- and post-processing scripts.
Regarding the Dataflow option, Dataflow can read from BigQuery and store the prepared data back in BigQuery at the end of the job.
Then you can have TensorFlow use BigQueryReader to read data from BigQuery.
Another option you can use is Datalab; this is a notebook environment in which you can prepare your data and then use it for your predictions.
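For illustration, a simple way to pull the prepared table into Python for scoring and write the predictions back is the google-cloud-bigquery client (rather than the TF BigQueryReader mentioned above); the project, table names and the loaded model are placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project

# Pull the prepared features into a dataframe; table names are placeholders.
features = client.query(
    "SELECT * FROM `my_dataset.prepared_features`"
).to_dataframe()

# Score with your TensorFlow model loaded elsewhere (placeholder call),
# then write the predictions back to BigQuery.
features["prediction"] = model.predict(features.values)

client.load_table_from_dataframe(features, "my_dataset.predictions").result()
```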
I've also not found this process flow easy or intuitive. There are two newer updates which might help in your project:
BigQuery ML now allows you to import TensorFlow models (link) - there are some limitations, but this may eliminate some of the back-and-forth data movement between BQ and Cloud Storage or other environments.
Cloud Dataflow supports Python 3 in alpha (check the Apache Beam roadmap - link).
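As a rough illustration of the first point, importing a SavedModel into BigQuery ML and predicting entirely inside BigQuery can be driven from Python with the BigQuery client; the project, dataset, tables and GCS path below are placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project

# Import a TensorFlow SavedModel from GCS into BigQuery ML.
client.query("""
    CREATE OR REPLACE MODEL `my_dataset.imported_tf_model`
    OPTIONS (MODEL_TYPE='TENSORFLOW',
             MODEL_PATH='gs://my-bucket/saved_model/*')
""").result()

# Run the batch prediction and materialise it straight back into a table.
client.query("""
    CREATE OR REPLACE TABLE `my_dataset.predictions` AS
    SELECT *
    FROM ML.PREDICT(MODEL `my_dataset.imported_tf_model`,
                    (SELECT * FROM `my_dataset.input_table`))
""").result()
```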

Apache Spark & Machine Learning - Using it in production

I'm having some difficulties figuring out how to use Spark's machine learning capabilities in a real-life production environment.
What I want to do is the following:
Develop a new ML model using notebooks
Serve the learned model via a REST API (something like POST - /api/v1/mymodel/predict)
Let's say the ML training process is handled by a notebook, and once the model requirements are fulfilled it's saved to an HDFS file, to be later loaded by a Spark application.
I know I could write a long-running Spark application that exposes the API and run it on my Spark cluster, but I don't think this is really a scalable approach, because even if the data transformations and the ML functions would run on the worker nodes, the HTTP/API-related code would still run on one node, the one on which spark-submit is invoked (correct me if I'm wrong).
One other approach is to use the same long-running application, but in a local standalone cluster. I could deploy the same application as many times as I want and put a load balancer in front of it. With this approach the HTTP/API part is handled fine, but the Spark part is not using the cluster capabilities at all (this might not be a problem, given that it should only perform a single prediction per request).
There is a third approach which uses SparkLauncher, which wraps the Spark job in a separate jar, but I don't really like flying jars, and it is difficult to retrieve the result of the prediction (a queue maybe, or HDFS).
So basically the question is: what is the best approach to consume Spark's ML models through a REST API?
Thank You
You have three options:
Trigger a batch ML job via the Spark API (spark-jobserver) upon client request
Trigger a batch ML job via a scheduler (Airflow), write the output to a DB, and expose the DB via REST to the client
Keep a Structured Streaming / recursive function running to scan the input data source, update/append the DB continuously, and expose the DB via REST to the client
If you have a single prediction per request and your data input is constantly updated, I would suggest option 3, which would transform data in near real time at all times, and the client would have constant access to the output. You can notify the client when new data is ready by sending a notification via REST or SNS. You could keep a pretty small Spark cluster that handles data ingest, and scale the REST service and DB based on request/data volume (load balancer).
If you anticipate rare requests where the data source is updated periodically, let's say once a day, option 1 or 2 will be suitable, as you can launch a bigger cluster and shut it down when completed.
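For option 2, a minimal Airflow sketch might look like the following; the application path and Spark connection id are placeholders to adapt to your environment.

```python
# Hypothetical daily batch scoring DAG (option 2); the application path and
# Spark connection id are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="batch_ml_scoring",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    SparkSubmitOperator(
        task_id="run_spark_scoring",
        application="/jobs/score_and_write_to_db.py",  # loads the model, scores new data, writes to the DB
        conn_id="spark_default",
    )
```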
Hope it helps.
The problem is that you don't want to keep your Spark cluster running and deploy your REST API inside it for prediction, as that is slow.
So to achieve real-time prediction with low latency, here are a couple of solutions.
What we are doing is training the model, exporting the model, and using the model outside Spark to do the prediction.
You can export the model as a PMML file if the ML algorithm you used is supported by the PMML standard. Spark ML models can be exported to PMML using the JPMML library (jpmml-sparkml). You can then create your REST API and use the JPMML Evaluator to predict with your Spark ML models.
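A hedged sketch of that export/score round trip in Python, using pyspark2pmml for the export and pypmml for scoring (rather than the Java JPMML Evaluator); the Spark session, dataframe, fitted pipeline, path and feature names are placeholders.

```python
# Export side (runs with Spark; pyspark2pmml needs the JPMML-SparkML jar on
# the Spark classpath). spark, training_df and fitted_pipeline_model are
# placeholders for your session, training dataframe and fitted PipelineModel.
from pyspark2pmml import PMMLBuilder

PMMLBuilder(spark.sparkContext, training_df, fitted_pipeline_model) \
    .buildFile("/tmp/model.pmml")

# Scoring side (no Spark needed), e.g. inside a small REST service.
from pypmml import Model

pmml_model = Model.load("/tmp/model.pmml")
prediction = pmml_model.predict({"feature_a": 1.0, "feature_b": 0.3})  # placeholder feature names
```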
MLeap: MLeap is a common serialization format and execution engine for machine learning pipelines. It supports Spark, scikit-learn and TensorFlow for training pipelines and exporting them to an MLeap Bundle. Serialized pipelines (bundles) can be deserialized back into Spark for batch-mode scoring, or into the MLeap runtime to power real-time API services. It supports multiple platforms, though I have just used it for Spark ML models and it works really well.
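And for the MLeap route, exporting a fitted Spark pipeline to an MLeap Bundle from PySpark looks roughly like this (the pipeline model, dataframe and path are placeholders; serving would then typically happen on the MLeap runtime outside Spark).

```python
# Requires the mleap Python package and the MLeap Spark jars on the classpath.
# fitted_pipeline_model is a fitted pyspark.ml.PipelineModel and transformed_df
# the dataframe it was applied to; both are placeholders.
import mleap.pyspark  # noqa: F401  (registers serializeToBundle on PipelineModel)
from mleap.pyspark.spark_support import SimpleSparkSerializer  # noqa: F401

fitted_pipeline_model.serializeToBundle(
    "jar:file:/tmp/spark-model.zip",
    transformed_df,
)
```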
