Azure OpenAI - training data

Do we have the ability to prevent or limit the use of other shared web locations in Azure OpenAI ( https://learn.microsoft.com/en-us/azure/cognitive-services/openai/how-to/fine-tuning?pivots=programming-language-studio#to-import-training-data-from-an-azure-blob-store ) when training?
For example, if I only want to use training data from my company's internal blobs and make sure no external ones can be used.
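Regardless of what the service itself enforces, one client-side option is to validate every training-data URL against an allowlist of your own storage accounts before submitting the import. A minimal sketch; the account name is hypothetical:

    from urllib.parse import urlparse

    # Hypothetical allowlist of company-internal storage accounts.
    ALLOWED_HOSTS = {"mycompanydata.blob.core.windows.net"}

    def assert_internal_blob(url: str) -> str:
        """Raise if a training-data URL points outside the allowlist."""
        host = (urlparse(url).hostname or "").lower()
        if host not in ALLOWED_HOSTS:
            raise ValueError(f"Refusing external training data source: {host!r}")
        return url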

Related

Azure architecture design for image platform

I want to build an image sharing platform for customers. The platform will take an image provided by a user, create copies of it at multiple resolutions, and store them ready to be shared or downloaded. How can I achieve this using Azure in a cost-effective way?
I'm thinking of using Azure Functions (for the API calls), Storage blobs, Event Grid, and Cosmos DB for this.
To keep costs low, keep it simple:
Store data in Blob Storage. Pricing varies based on redundancy, access speed, and location.
Azure Functions for processing images; the Consumption plan includes 1M free executions per month. A sketch of the resize step follows below.
Azure App Service to host the website for uploading images; there is a free tier.
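As a concrete illustration of the resize step, here is a minimal sketch assuming a Python Azure Function (v2 programming model) with a Blob trigger and Pillow installed; the container names ("uploads", "resized") are placeholders:

    import io
    import os
    import azure.functions as func
    from azure.storage.blob import BlobServiceClient
    from PIL import Image

    app = func.FunctionApp()

    RESOLUTIONS = [1280, 640, 128]  # target widths for the generated copies

    @app.blob_trigger(arg_name="blob", path="uploads/{name}",
                      connection="AzureWebJobsStorage")
    def resize_image(blob: func.InputStream):
        """Create fixed-width copies of an uploaded image in a second container."""
        original = Image.open(io.BytesIO(blob.read()))
        service = BlobServiceClient.from_connection_string(
            os.environ["AzureWebJobsStorage"])
        name = os.path.basename(blob.name)
        for width in RESOLUTIONS:
            copy = original.copy()
            copy.thumbnail((width, width))  # preserves aspect ratio, never upscales
            buf = io.BytesIO()
            copy.save(buf, format="PNG")
            service.get_blob_client(
                container="resized", blob=f"{width}/{name}"
            ).upload_blob(buf.getvalue(), overwrite=True)

Event Grid and Cosmos DB can be layered on later for notifications and metadata, but a trigger like this is enough to generate the copies.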

Azure Machine Learning (AML) Webservice REST API with Multiple endpoints

I've been working on developing an API to serve a machine learning model using Azure Machine Learning (AML) webservice deployments on a Kubernetes target as outlined here: https://learn.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where#prepare-to-deploy
My particular use case requires more than simple scoring of data through the model. The API needs to have multiple endpoints to perform different actions related to the model. For example, an endpoint to upload data, an endpoint to delete data, an endpoint to list existing data, an endpoint to score previously uploaded data, an endpoint to change preprocessing parameters, etc...
I can build all of this logic, but I am struggling with the fact that AML web services only provide one endpoint (the service URI ending in "/score"). Is there a way to add more endpoints to an AML service? For example, I would like users to be able to POST, GET, DELETE, and PUT "/data", GET "/predictions", POST, GET, DELETE, and PUT "/parameters", and so on.
Is there a way to do this in AML or is this not the right tool for what I am trying to accomplish? Is there a better solution within Azure that is more suited for my needs?
Thank you!
Azure ML allows controlled rollout/traffic splitting, but doesn't directly support your API design.
I might need to know more about your use case to make a recommendation. Are you looking at implementing incremental learning? What is the motivation for separate endpoints?
-Andon
Your proposal sounds like a stateful web server, which is more than a REST API service. For example, you need logic to maintain the "ids" of the data: if there are two POST /data calls with different payloads, a later DELETE /data needs to operate on the proper one. This is much more than a single performance-optimized machine learning service.
I would recommend creating a separate server with all of these logic pieces that only reaches out to the Azure Machine Learning service when it needs to. You could also build a cache into your service so it only calls the Azure ML service when new data comes in or the local cache has expired. That will also save you money on Azure :-) A sketch of this layout follows below.
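A minimal sketch of such a front-end service, assuming Flask and requests; the scoring URI, key, and in-memory stores are placeholders for your real endpoint and database:

    import os
    import requests
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    SCORE_URI = os.environ["AML_SCORE_URI"]  # your AML web service's /score URI
    API_KEY = os.environ["AML_API_KEY"]

    data_store = {}         # in-memory stand-in for a real database
    prediction_cache = {}   # avoids re-calling AML for unchanged data

    @app.route("/data", methods=["POST"])
    def upload_data():
        item_id = str(len(data_store) + 1)
        data_store[item_id] = request.get_json()
        prediction_cache.pop(item_id, None)  # invalidate any stale prediction
        return jsonify({"id": item_id}), 201

    @app.route("/data/<item_id>", methods=["DELETE"])
    def delete_data(item_id):
        data_store.pop(item_id, None)
        prediction_cache.pop(item_id, None)
        return "", 204

    @app.route("/predictions/<item_id>", methods=["GET"])
    def get_prediction(item_id):
        if item_id not in data_store:
            return jsonify({"error": "unknown id"}), 404
        if item_id not in prediction_cache:  # only call AML on a cache miss
            resp = requests.post(
                SCORE_URI,
                json=data_store[item_id],
                headers={"Authorization": f"Bearer {API_KEY}"})
            resp.raise_for_status()
            prediction_cache[item_id] = resp.json()
        return jsonify(prediction_cache[item_id])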

Retrain the classification model automatically based on updated data set

We have created an experiment in Azure ML Studio to predict some scheduling activities based on system data and user data. System data consists of CPU time, heap usage, and other system parameters, while user data contains the user's active sessions and some user-specific data.
Our experiment is working fine and returning results quite similar to what we expect, but we are struggling with the following:
1) Our experiment does not take the updated data into account when training its models.
2) Every time, we have to upload the data and retrain the models manually.
I wonder if it is really possible to feed live data into the Azure experiments using web services or Azure DB. We are trying to update the data in a CSV file that we created in Azure Storage; that would probably solve our first issue.
This updated data should then be used to retrain the model automatically on a periodic schedule.
It would be great if someone could help us out with this.
Note: We are consuming our model through the web services created with Azure ML Studio.
Step 1: Create two web services with Azure ML Studio (one for the training model and one for the predictive model).
Step 2: Create an endpoint for each web service via the Manage Endpoints link in Azure ML Studio.
Step 3: Create two new connections in Azure Data Factory (find Azure ML on the Compute tab) and copy the Endpoint Key and API Key that you will find under the Consume tab of the endpoint configuration you created in step 2 (Endpoint Key = Batch Requests key, API Key = Primary key).
Set Disable Update Resource for the training model endpoint.
Set Enable Update Resource for the predictive model endpoint (Update Resource Endpoint = Patch key).
Step 4: Create a pipeline with two activities (ML Batch Execution and ML Update Resource).
Set the AML linked service for ML Batch Execution to the connection with Update Resource disabled.
Set the AML linked service for ML Update Resource to the connection with Update Resource enabled.
Step 5: Set the web service inputs and outputs.
In short: you need Azure Data Factory to retrain the ML model. Create a pipeline with the ML Batch Execution and ML Update Resource activities, and configure the endpoints on the web services so the pipeline can call your model. A sketch of the pipeline definition follows the links below.
Here are some links to help you:
https://learn.microsoft.com/en-us/azure/data-factory/transform-data-using-machine-learning
https://learn.microsoft.com/en-us/azure/data-factory/update-machine-learning-models
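As an illustrative sketch only (consult the linked docs for the exact schema; all linked-service names and paths here are placeholders), the two-activity pipeline from steps 4 and 5 could look like the following deployment payload, written as a Python dict:

    retrain_pipeline = {
        "name": "RetrainMLModelPipeline",
        "properties": {
            "activities": [
                {
                    "name": "TrainModel",
                    "type": "AzureMLBatchExecution",
                    # Connection with Update Resource disabled (training endpoint).
                    "linkedServiceName": {
                        "referenceName": "AzureMLTrainingEndpoint",
                        "type": "LinkedServiceReference",
                    },
                    "typeProperties": {
                        # Step 5: map web service inputs/outputs to blob paths.
                        "webServiceInputs": {
                            "input1": {
                                "filePath": "training/data.csv",
                                "linkedServiceName": {
                                    "referenceName": "BlobStorageLinkedService",
                                    "type": "LinkedServiceReference",
                                },
                            }
                        },
                        "webServiceOutputs": {
                            "output1": {
                                "filePath": "training/model.ilearner",
                                "linkedServiceName": {
                                    "referenceName": "BlobStorageLinkedService",
                                    "type": "LinkedServiceReference",
                                },
                            }
                        },
                    },
                },
                {
                    "name": "UpdateScoringModel",
                    "type": "AzureMLUpdateResource",
                    # Connection with Update Resource enabled (scoring endpoint).
                    "linkedServiceName": {
                        "referenceName": "AzureMLScoringEndpoint",
                        "type": "LinkedServiceReference",
                    },
                    "typeProperties": {
                        "trainedModelName": "Trained Model",
                        "trainedModelLinkedServiceName": {
                            "referenceName": "BlobStorageLinkedService",
                            "type": "LinkedServiceReference",
                        },
                        "trainedModelFilePath": "training/model.ilearner",
                    },
                    "dependsOn": [
                        {"activity": "TrainModel",
                         "dependencyConditions": ["Succeeded"]}
                    ],
                },
            ]
        },
    }

Deploy the definition through the Data Factory UI, REST API, or SDK, and put it on a schedule trigger so retraining runs periodically.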

Sync mechanism to Azure Search - how reliable is Azure Search insertion?

How reliable is the insertion mechanism to Azure Search?
Say, an average call to upload a document to Azure Search: are there any SLAs on this? What is the average insertion time for one document, and the average failure rate for one document?
I'm trying to send data from my database to Azure Search, and I was wondering whether it is more reliable to send data directly to Azure Search, or to do a dual write, for example to a highly available queue like Kafka, and index from there.
From the SLA for Azure Search:
We guarantee at least 99.9% availability for index query requests when an Azure Search Service Instance is configured with two or more replicas, and index update requests when an Azure Search Service Instance is configured with three or more replicas. No SLA is provided for the Free tier.
Your client code needs to follow the best practices: batch indexing requests, retry on transient failures with an exponential back-off policy, and scale the service appropriately based on the size of the documents and the indexing load. A sketch of such a client follows below.
Whether or not to use an intermediate buffer depends not so much on the SLA as on how spiky your indexing load will be, and how decoupled you want your search indexing component to be.
You may also find Capacity planning for Azure Search useful.
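A minimal sketch of batched indexing with retries and exponential back-off, assuming the azure-search-documents Python SDK; the endpoint, key, index name, and document shape are placeholders:

    import time
    from azure.core.credentials import AzureKeyCredential
    from azure.core.exceptions import HttpResponseError
    from azure.search.documents import SearchClient

    client = SearchClient(
        endpoint="https://<service>.search.windows.net",
        index_name="my-index",
        credential=AzureKeyCredential("<admin-key>"))

    def index_batch(documents, max_retries=5):
        """Upload one batch, retrying failed documents with exponential back-off."""
        pending = documents
        for attempt in range(max_retries):
            try:
                results = client.upload_documents(documents=pending)
            except HttpResponseError:
                results = None  # whole request failed transiently; retry everything
            if results is not None:
                # Keep only the documents the service reported as failed.
                failed_keys = {r.key for r in results if not r.succeeded}
                pending = [d for d in pending if d["id"] in failed_keys]
            if not pending:
                return
            time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...
        raise RuntimeError(f"{len(pending)} documents failed after retries")

    # Index in batches; the service accepts up to 1000 documents per request.
    docs = [{"id": str(i), "content": f"doc {i}"} for i in range(5000)]
    for start in range(0, len(docs), 1000):
        index_batch(docs[start:start + 1000])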

Azure ML Web Service for R models shows unpredictable performance

When publishing an Azure ML Web Service and preloading data in our R model, we see inconsistent performance. The first calls are slow but subsequent calls are fast; after waiting a bit (a couple of minutes), the next call again shows longer response times.
The way Azure ML Web Services work in the background means that the instances hosting the models are provisioned and moved around in a very dynamic multi-tenant environment. Caching data (warming up) can be helpful, but it doesn't mean all subsequent calls will land on the same instance with the same data available in the cache.
For models that need a lot of in-memory data, there is a limit to what the Azure ML Web Services hosting layer can offer at this point. Microsoft R Server could be an alternative for hosting these big ML workloads, with Service Fabric an option for scaling.
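If you want to reduce (not eliminate) the cold-call effect in the meantime, one possible workaround is a keep-warm pinger; a sketch, with the scoring URI and key as placeholders, and with the caveat from the answer above that pings may not land on the same instance your users hit:

    import time
    import requests

    SCORE_URI = "https://<region>.services.azureml.net/.../score"  # placeholder
    API_KEY = "<api-key>"  # placeholder

    while True:
        try:
            requests.post(SCORE_URI,
                          json={"Inputs": {}},  # lightweight dummy payload
                          headers={"Authorization": f"Bearer {API_KEY}"},
                          timeout=30)
        except requests.RequestException:
            pass  # best effort; keep pinging
        time.sleep(60)  # once a minute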
