Can I build ML models with Microsoft Azure and then export them for use off the cloud?

I have recently been doing some work with time-series analysis and Microsoft Azure has some good resources for building models. I've never worked on anything like this, or for that matter, with Microsoft Azure before (I'm a student - sorry for the lack of experience!)
Is it possible to build a model on Azure - specifically, I'm interested in building a multivariate time-series analysis model - and then export it to be run on my own hardware? I'm not really interested in renting cloud space to run it.
Any advice or insight would be great - thanks!

Yes, you can do that. Once you build a model with an experiment, there is a model tab on the portal that allows you to download the model.
The examples below provide some guidance on deploying to local machines:
How to deploy models trained with Azure Machine Learning on your local machines, and Introducing Multivariate Anomaly Detection.
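As a rough illustration of the export step, here is a minimal sketch using the azureml-core SDK, assuming the model was registered in an Azure ML workspace and saved with joblib; the model name and file path are hypothetical placeholders.

```python
# A minimal sketch, assuming a registered model in an Azure ML workspace
# that was saved with joblib; names and paths are placeholders.
from azureml.core import Workspace, Model
import joblib

# Authenticate against the workspace (uses a downloaded config.json).
ws = Workspace.from_config()

# Download the registered model files to the local machine.
model = Model(ws, name="multivariate-ts-model")      # hypothetical model name
model.download(target_dir="./local_model", exist_ok=True)

# Load and run it entirely off the cloud.
local_model = joblib.load("./local_model/model.pkl") # path depends on how it was saved
X_new = [[0.1, 0.2, 0.3]]                            # replace with your own feature rows
predictions = local_model.predict(X_new)
print(predictions)
```

Once the files are on your own hardware, nothing in the prediction path depends on Azure any more.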

Related

VM or Azure ML for training Deep Learning Algorithms

I'm trying to train a deep-learning model on 512x512 images with TensorFlow. Normally, I would do it with Google Colab or another cloud GPU provider. However, due to security reasons, I am going to train the model in Azure, which has its GPU instances restricted. My current options are the following:
- Request a Standard_NC4as_T4_v3 as a compute instance for Azure Machine Learning Studio and train everything in Azure Notebooks. I currently have the dataset there.
- Request an NC4as_T4_v3 for a VM and get the NVIDIA image to train the model in a VM. Getting the data from Azure Machine Learning Studio is not a problem.
Both options have the T4 GPU (16GB vRAM) because I did similar experiments in the past and it was good for the job. Before requesting access to an instance, I would like to know which option is better and more likely to be accepted.
I've tried to train a model on the currently available compute instances (Tesla K80 and M60), but they don't have enough power and are out of date with the latest libraries. I also tried the only GPU instance available at the moment (NV8as_v4), but it has an AMD GPU and is not intended for deep-learning training.
A VM and Azure ML Studio will not differ much in raw performance, but Azure ML Studio is more convenient for validating the images and then running the deep-learning models. Compute power in Azure is scalable in the form of clusters and compute instances, and the node count can be increased as needed.
In ML Studio, you can also attach additional compute targets to increase the capacity available for computation.
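To make the scaling point concrete, here is a small sketch of provisioning an autoscaling T4 cluster with the azureml-core SDK; the cluster name and node counts are placeholders, not a recommendation.

```python
# A rough sketch of scaling compute in Azure ML via a cluster, assuming the
# azureml-core SDK; the cluster name and node counts are placeholders.
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget
from azureml.core.compute_target import ComputeTargetException

ws = Workspace.from_config()
cluster_name = "t4-cluster"  # hypothetical name

try:
    # Reuse the cluster if it already exists.
    gpu_cluster = ComputeTarget(workspace=ws, name=cluster_name)
except ComputeTargetException:
    # Otherwise provision a T4 cluster that scales between 0 and 2 nodes.
    config = AmlCompute.provisioning_configuration(
        vm_size="Standard_NC4as_T4_v3",
        min_nodes=0,
        max_nodes=2,
    )
    gpu_cluster = ComputeTarget.create(ws, cluster_name, config)
    gpu_cluster.wait_for_completion(show_output=True)
```

With min_nodes=0 the cluster scales back down to zero when idle, so you only pay for GPU time while a training job is running.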

Matchbox recommender in Azure Machine Learning workspace

The Matchbox recommender available in http://studio.azureml.net doesn't seem to have a counterpart in http://ml.azure.com (which appears to be the newer portal for Azure ML). Only the plain SVD recommender is available here, which doesn't take user or item features. This is a feature regression compared to Matchbox.
Is there an ETA for when Matchbox will be made available in the Azure Machine Learning service, either via the SDK or the designer?
Thanks.
We don't have a plan to bring back Matchbox yet. If you are looking for recommender algorithms, this repo could be a good reference for best practices: https://github.com/Microsoft/Recommenders. Please let me know if this unblocks you, or if you are looking for something specific from Matchbox.

Design for a Cloud Native Application in Azure for ML Insights and Actions

I have an idea whereby I intend to build a cloud native application for algorithmic trading, ideally by consuming all PaaS and SaaS (no IaaS), and I'd like to get some feedback on how I intend to build it. The concept is pretty straight-forward in that I intend to consume financial trading data from an external SaaS solution via an API query, feed that data into various Azure PaaS solutions (most notably ML for modeling), and then take some action. Here is a high-level diagram I've come up with so far:
[Diagram: Solution Overview]
As a note, while I'm familiar with Azure, I'm not an Azure cloud engineer and have limited experience actually building solutions myself. Consequently, I intend to use this project as a foundation to further educate myself.
When starting on the build, I immediately questioned whether or not I should use Event Hubs. Conceptually it makes sense, in that I'm decoupling the production of a data stream from its consumption. Presumably, this means fewer complications when / if I need to update the data feed(s) in the future. I also thought about where the data should be stored: should it be a SQL database or, more simply, an Azure Table? The idea here is that the trading data will need to be stored for regression testing as I iterate through my models. All that said, I'm looking for insights from anybody who may have experience in this space.
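For reference, here is a minimal sketch of the decoupling idea: pushing a trade tick into Event Hubs with the azure-eventhub Python SDK. The connection string, hub name, and payload fields below are hypothetical placeholders.

```python
# A minimal sketch of publishing market data into Event Hubs, assuming the
# azure-eventhub SDK; connection string, hub name, and payload are placeholders.
import json
from azure.eventhub import EventHubProducerClient, EventData

producer = EventHubProducerClient.from_connection_string(
    conn_str="<EVENT_HUBS_CONNECTION_STRING>",
    eventhub_name="trading-data",  # hypothetical hub name
)

# One tick pulled from the external SaaS API, reshaped into an event.
tick = {"symbol": "MSFT", "price": 412.35, "ts": "2024-01-02T15:04:05Z"}

with producer:
    batch = producer.create_batch()
    batch.add(EventData(json.dumps(tick)))
    producer.send_batch(batch)
```

The downstream consumers (the ML pipeline, the storage writer) would then read from the hub independently, which is exactly the decoupling benefit described above.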
Thanks!
There's no real question in here. Take a look at the architecture references provided by Microsoft: https://learn.microsoft.com/en-us/azure/architecture/reference-architectures/

Difference between Databricks Runtime ML & ML Flow

I am a bit of a newbie to Azure Databricks, though I have good experience with Databricks, but only on the data engineering side. I am a bit confused about Databricks Runtime ML and MLflow. What's the difference between them, and when should I use which one? Thanks
Databricks Runtime for Machine Learning (Databricks Runtime ML) provides a ready-to-go environment for machine learning and data science. It contains multiple popular libraries, including TensorFlow, PyTorch, Keras, and XGBoost, along with the ability to do distributed deep learning. In other words, a bunch of things are preinstalled and configured for you on the Databricks runtime.
https://docs.azuredatabricks.net/user-guide/clusters/mlruntime.html
MLflow is an open-source, end-to-end machine learning lifecycle platform. MLflow is a way to track experiment runs, deploy models, etc.
https://www.mlflow.org/
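To show what MLflow adds on top of the runtime, here is a small tracking sketch; it assumes scikit-learn is available (it is preinstalled in Databricks Runtime ML) and uses a dummy dataset and parameters.

```python
# A small sketch of MLflow experiment tracking: log params, a metric, and a
# model within one run. Dataset and hyperparameters are dummies.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

X, y = make_regression(n_samples=200, n_features=5, random_state=0)

with mlflow.start_run():
    params = {"n_estimators": 50, "max_depth": 5}
    mlflow.log_params(params)

    model = RandomForestRegressor(**params, random_state=0).fit(X, y)
    mlflow.log_metric("r2_train", r2_score(y, model.predict(X)))

    # The logged model can later be reloaded or deployed from the tracking server.
    mlflow.sklearn.log_model(model, artifact_path="model")
```

So in short: you run this kind of code on a Databricks Runtime ML cluster (the environment), while MLflow is the layer that records and manages what the code produced.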

Azure ML App - Complete Experience - Train automatically and Consume

I played a bit around with Azure ML studio. So as I understand the process goes like this:
a) Create training experiment. Train it with data.
b) Create Scoring experiment. This will include the trained model from the training experiment. Expose this as a service to be consumed over REST.
Maybe a stupid question, but what is the recommended way to get the complete experience, like the one I get when I use an app like https://datamarket.azure.com/dataset/amla/mba (Frequently Bought Together API built with Azure Machine Learning)?
I mean the following:
a) Expose 2 or more services - one to train the model and the other to consume (test) the trained model.
b) User periodically sends training data to train the model
c) The trained model/models are saved and made available for consumption
d) User is now able to send a dataframe to get the predicted results.
Is there an additional wrapper that needs to be built?
If there is a link documenting this, please point me to it.
The Azure ML retraining API is designed to handle the workflow you describe:
http://azure.microsoft.com/en-us/documentation/articles/machine-learning-retrain-models-programmatically/
Hope this helps,
Roope - Microsoft Azure ML Team
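To make the wrapper idea concrete, here is a rough client-side sketch: one REST call pushes new training data to a retraining endpoint and another scores against the published service. The URLs, keys, and payload shapes below are purely hypothetical placeholders, not the actual Azure ML retraining contract.

```python
# A rough sketch of the client-side wrapper: retrain, then score.
# Endpoints, API keys, and payload shapes are hypothetical placeholders.
import requests

HEADERS = {
    "Authorization": "Bearer <API_KEY>",
    "Content-Type": "application/json",
}

# (b) Periodically push new training data to the retraining endpoint.
train_resp = requests.post(
    "https://<region>.services.azureml.net/<workspace>/retrain",  # placeholder URL
    headers=HEADERS,
    json={"training_data_uri": "https://<storage-account>/training.csv"},
)
train_resp.raise_for_status()

# (d) Send a row of features to the scoring endpoint and read the prediction.
score_resp = requests.post(
    "https://<region>.services.azureml.net/<workspace>/score",  # placeholder URL
    headers=HEADERS,
    json={"Inputs": {"input1": [{"feature_a": 1.0, "feature_b": 2.0}]}},
)
print(score_resp.json())
```

The scheduling of the first call (step b) is what the retraining article above, or the Data Factory approach below, takes care of.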
You could also take a look at Azure Data Factory. I have written a custom activity to do the same, and used the logic in the custom activity to retrain the model.
