Deployment of Machine Learning models in Azure machine learning through mlflow - azure

I am new to MLOps. It would be helpful if someone could share the procedure for using MLflow to deploy machine learning models in Azure Machine Learning.
I don't want to use Databricks.
A step-by-step sample run with an example would be helpful. Thanks in advance.
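For orientation, the MLflow-to-Azure-ML path (no Databricks involved) can be sketched with the MLflow deployments plugin. This is only a sketch under assumptions: it assumes the azureml-mlflow package is installed, and the tracking URI, endpoint name, and model name below are all placeholders, not real values.

```python
# Sketch: deploy a registered MLflow model to Azure ML without Databricks.
# Assumes `pip install mlflow azureml-mlflow`; all names are placeholders.
import mlflow
from mlflow.deployments import get_deploy_client

# The workspace's MLflow tracking URI, copied from the Azure ML studio
# (workspace overview page).
mlflow.set_tracking_uri(
    "azureml://<region>.api.azureml.ms/mlflow/v1.0/subscriptions/<sub-id>"
    "/resourceGroups/<rg>/providers/Microsoft.MachineLearningServices"
    "/workspaces/<workspace-name>"
)

# The deployment client for the "azureml" target comes from the
# azureml-mlflow plugin.
client = get_deploy_client(mlflow.get_tracking_uri())

# Deploy version 1 of a model from the workspace's MLflow model registry
# to an online endpoint.
client.create_deployment(
    name="my-endpoint",
    model_uri="models:/my-model/1",
)
```

After the call completes, the endpoint shows up under Endpoints in the Azure ML studio like any other deployment.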

Related

Download and use Azure AutoML model on local machine?

I've used Azure AutoML to build and train a classification model. However, instead of deploying the model to a web service or real-time endpoint, I'd like to be able to download the model and run it on my local machine.
I've attempted to follow https://learn.microsoft.com/en-us/azure/machine-learning/v1/how-to-deploy-local, but it's quite vague, and as a beginner with ML models I got stuck quite quickly.
In Azure ML SDK v1 you can download the model and deploy it locally. Here are the document and the sample notebook for deploying locally.
You can download the model:
From the portal, by selecting the Models tab, selecting the desired model, and on the Details page, selecting Download.
From the command line, by using az ml model download.
By using the Python SDK Model.download() method.
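The SDK route in the last item can be sketched as follows (a minimal sketch, assuming the v1 azureml-core package is installed, a config.json for the workspace is present in the working directory, and the model name is a placeholder):

```python
# Sketch: download a registered AutoML model with the v1 SDK.
# Assumes `pip install azureml-core`; the model name is a placeholder.
from azureml.core import Workspace
from azureml.core.model import Model

ws = Workspace.from_config()               # reads config.json for the workspace
model = Model(ws, name="my-automl-model")  # latest registered version by default

# Downloads the model files (e.g. model.pkl plus its metadata) locally.
path = model.download(target_dir="./model", exist_ok=True)
print("model files downloaded to", path)
```

Once downloaded, the model file can be loaded and scored locally with the same libraries AutoML trained it with.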

How to export a MLFlow Model from Azure Databricks as an Azure DevOps Artifacts for CD phase?

I am trying to create an MLOps Pipeline using Azure DevOps and Azure Databricks. From Azure DevOps, I am submitting a Databricks job to a cluster, which trains a Machine Learning Model and saves it into MLFlow Model Registry with a custom flavour (using PyFunc Custom Model).
Now, after the job finishes, I want to export this MLflow object with all its dependencies: the Conda environment, the two model files (one .pkl and one .h5), and the Python class with load_context() and predict() defined, so that after exporting I can import it and call predict() as we do with MLflow models.
How do I export this entire MLFlow Model and save it as an AzureDevOps Artifact to be used in the CD phase (where I will deploy it to an AKS cluster with a custom base image)?
There is no official way to export a Databricks MLflow run from one workspace to another. However, there is an "unofficial" tool that does most of the work, with the main limitation that notebook revisions linked to a run cannot be exported, due to the lack of a REST API endpoint for this.
https://github.com/amesar/mlflow-export-import
You probably don't need to use DevOps artifacts at all: there is an Azure DevOps extension (Machine Learning) that can access artifacts in the Azure ML workspace and trigger the release pipeline. You can refer to the link below for the steps:
https://github.com/Azure-Samples/MLOpsDatabricks/blob/master/docs/release-pipeline.md
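If you do want the model as a DevOps artifact, one workable pattern (sketched below with placeholder run IDs and paths) is to pull the model's files down with the MLflow CLI in the CI job, then publish the resulting directory with a PublishBuildArtifacts task; in the CD phase, mlflow.pyfunc.load_model() on the downloaded directory restores the custom model:

```shell
# Sketch: download everything logged under the run's "model" artifact path
# (conda.yaml, the .pkl and .h5 files, the pyfunc wrapper class, MLmodel, ...).
# The run ID is a placeholder; $(Build.ArtifactStagingDirectory) is the
# standard Azure DevOps staging variable.
export MLFLOW_TRACKING_URI="databricks"

mlflow artifacts download \
  --run-id "<training-run-id>" \
  --artifact-path model \
  --dst-path "$(Build.ArtifactStagingDirectory)/model"
```

The staged directory can then be published as a pipeline artifact and consumed unchanged in the release stage that builds the AKS image.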

Deployment deep learning system with some models with MLaaS

I've read some articles with deployment examples, but they were about deploying a single model, not a whole deep learning system.
If I want to deploy my project, including launching multiple deep models built with different frameworks (PyTorch, TensorFlow), what's a good option for that:
build a Docker image with the whole project and deploy it with an ML service (Azure, AWS Lambda, etc.); or
deploy every single model with a chosen MLaaS, and deploy the logic that makes requests to the above models elsewhere?
I would appreciate any reference/link on the subject.
Thanks.
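The second option can be sketched as a thin routing layer sitting in front of independently deployed models. The sketch below is stdlib-only and stubs the endpoints with lambdas; in a real system, each registered entry would make an HTTP call to the corresponding AKS/ACI/Lambda endpoint (all names here are hypothetical):

```python
# Sketch of option 2: each model runs behind its own endpoint, and a thin
# routing layer dispatches requests. Endpoints are stubbed here; in practice
# each entry would call a deployed service over HTTP.
from typing import Any, Callable, Dict

class ModelRouter:
    """Routes a request to the deployed model that should handle it."""

    def __init__(self) -> None:
        self._endpoints: Dict[str, Callable[[Any], Any]] = {}

    def register(self, name: str, endpoint: Callable[[Any], Any]) -> None:
        self._endpoints[name] = endpoint

    def predict(self, name: str, payload: Any) -> Any:
        if name not in self._endpoints:
            raise KeyError(f"no deployed model named {name!r}")
        return self._endpoints[name](payload)

# Stand-ins for a PyTorch service and a TensorFlow service:
router = ModelRouter()
router.register("pytorch-detector", lambda x: {"boxes": [], "input": x})
router.register("tf-classifier", lambda x: {"label": "cat", "input": x})

print(router.predict("tf-classifier", [0.1, 0.2]))
```

The advantage of this layout is that each model keeps its own framework, base image, and scaling policy, while the routing layer stays framework-agnostic.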
We have a public open-source release of the Many Models solution accelerator. The accelerator is now available on GitHub and open to everyone: https://aka.ms/many-models.
• Check out the blog on Many Models from MTC AI Architect Sam.
Check this document on using the designer: https://learn.microsoft.com/en-us/azure/machine-learning/how-to-deploy-model-designer? Basically, you can register a trained model in Designer and bring it out with the SDK/CLI to deploy it.
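Outside the Designer UI, the register-then-deploy step can be sketched with the v1 SDK (a sketch only; the model path, environment name, service name, and scoring script below are placeholders):

```python
# Sketch: register a trained model, then deploy it as an ACI web service
# (v1 SDK). Assumes `pip install azureml-core`; all names are placeholders.
from azureml.core import Environment, Workspace
from azureml.core.model import InferenceConfig, Model
from azureml.core.webservice import AciWebservice

ws = Workspace.from_config()

# Register the trained model file in the workspace model registry.
model = Model.register(workspace=ws,
                       model_path="./outputs/model.pkl",
                       model_name="designer-trained-model")

# Scoring environment and entry script for the container.
env = Environment.get(ws, name="<curated-or-custom-env-name>")
inference_config = InferenceConfig(entry_script="score.py", environment=env)
deploy_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

service = Model.deploy(ws, "my-service", [model], inference_config, deploy_config)
service.wait_for_deployment(show_output=True)
```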
One approach with the current integration between Azure ML and Azure DevOps is to set up a release pipeline in Azure DevOps which is triggered by model registration in your Dev workspace's model registry and then deploys to your Prod workspace.
There is guidance and examples in this repo
https://github.com/Microsoft/MLOpsPython
And more general guidance for MLOps at http://aka.ms/mlops
This also allows you to put integration tests into your release process, or other steps like approval gates if needed, using DevOps functionality.

Update Azure ML realtime endpoint

I'm generating a machine learning model using Azure AutoML. Is there a way to update my published real-time endpoint without deleting it first?
Thanks in advance.
One approach with the current integration between Azure ML and Azure DevOps is to set up a release pipeline in Azure DevOps which is triggered by model registration in your Dev workspace's model registry.
There is guidance and examples in this repo
https://github.com/Microsoft/MLOpsPython
And more general guidance for MLOps at http://aka.ms/mlops
Please follow the document below for retraining:
https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/ai/mlops-python#retraining-pipeline
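For the direct question of updating in place: with the v1 SDK, an existing web service can be updated without deleting it, so the endpoint URL and keys are preserved (a sketch; the service and model names are placeholders):

```python
# Sketch: swap a newly registered model version into an existing v1 web
# service without deleting the endpoint. Names are placeholders.
from azureml.core import Workspace
from azureml.core.model import Model
from azureml.core.webservice import Webservice

ws = Workspace.from_config()
service = Webservice(ws, name="my-automl-endpoint")  # existing deployment
new_model = Model(ws, name="my-automl-model")        # latest registered version

service.update(models=[new_model])                   # endpoint URL is unchanged
service.wait_for_deployment(show_output=True)
```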

Migrate Centos AWS image to Azure VM

We have a CentOS system on AWS with our software installed on it. Now we want to move this EC2 instance to Azure. What is the process, and what is the best approach we can follow?
Any document or article would help.
Azure has a guide for this specific scenario:
https://learn.microsoft.com/en-us/azure/site-recovery/migrate-tutorial-aws-azure
