I've been working on developing an API to serve a machine learning model using Azure Machine Learning (AML) webservice deployments on a Kubernetes target as outlined here: https://learn.microsoft.com/en-us/azure/machine-learning/service/how-to-deploy-and-where#prepare-to-deploy
My particular use case requires more than simply scoring data through the model. The API needs multiple endpoints to perform different actions related to the model: for example, an endpoint to upload data, an endpoint to delete data, an endpoint to list existing data, an endpoint to score previously uploaded data, an endpoint to change preprocessing parameters, and so on.
I can build all of this logic, but I am struggling with the fact that AML web services only expose one endpoint (the service URI ending in "/score"). Is there a way to add more endpoints to an AML service? For example, I would like users to be able to call POST, GET, DELETE, and PUT on "/data", GET on "/predictions", POST, GET, DELETE, and PUT on "/parameters", etc.
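For context, the entry script behind that single /score endpoint is roughly of this shape (a sketch, not my actual script; the model name and payload layout are placeholders). Every request to the service is routed into the one run() function, which is why only one route is exposed:

```python
# score.py -- minimal sketch of an AML entry script (SDK v1).
# "my-model" and the payload shape are placeholders.
import json
import joblib
from azureml.core.model import Model

def init():
    # Called once when the container starts; load the registered model.
    global model
    model_path = Model.get_model_path("my-model")
    model = joblib.load(model_path)

def run(raw_data):
    # Every request to the /score URI ends up here, whatever "action" it implies.
    data = json.loads(raw_data)["data"]
    return {"result": model.predict(data).tolist()}
```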
Is there a way to do this in AML or is this not the right tool for what I am trying to accomplish? Is there a better solution within Azure that is more suited for my needs?
Thank you!
Azure ML allows controlled rollout/traffic splitting, but doesn't directly support your API design.
I might need to know more about your use case to make a recommendation. Are you looking at implementing incremental learning? What is the motivation for separate endpoints?
-Andon
What you're proposing sounds like a stateful web server, which is more than a single REST scoring service. For example, you need logic to maintain IDs for the data: if there are two POST /data calls with different payloads, a later DELETE /data has to operate on the right one. That is much more than a single, performance-optimized machine learning service.
I would recommend creating a separate server that holds all of this logic and only calls the Azure Machine Learning service when it needs to. You could also build a cache into that service so that you only call the Azure ML service when new data comes in or the local cache has expired. That will also save you some money on Azure :-)
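A rough sketch of what that separate front-end service could look like (Flask shown purely for illustration; the scoring URI, key, and in-memory store are placeholders, and you would use a real database or cache in practice):

```python
# Sketch: a small front-end API that owns the /data and /predictions routes
# and only calls the single AML /score endpoint when a prediction is needed.
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
SCORING_URI = "https://<your-service>.azureml.net/score"  # placeholder
API_KEY = "<service-key>"                                  # placeholder
DATA = {}  # in-memory store for the sketch; use a real DB/cache in practice

@app.route("/data", methods=["POST"])
def upload_data():
    item_id = str(len(DATA) + 1)
    DATA[item_id] = request.get_json()
    return jsonify({"id": item_id}), 201

@app.route("/data/<item_id>", methods=["DELETE"])
def delete_data(item_id):
    DATA.pop(item_id, None)
    return "", 204

@app.route("/predictions/<item_id>", methods=["GET"])
def predict(item_id):
    if item_id not in DATA:
        return jsonify({"error": "unknown id"}), 404
    headers = {"Authorization": f"Bearer {API_KEY}"}
    resp = requests.post(SCORING_URI, json=DATA[item_id], headers=headers)
    return jsonify(resp.json())

if __name__ == "__main__":
    app.run()
```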
I have managed to get the C# and DB setup working using ListMappings. However, when I try to deploy the split/merge tool to an Azure classic cloud service, it states 'The requested VM tier is currently not available in East US for this subscription. Please try another tier or deploy to a different location.' We tried a few other regions with the same result. Do you know if there is a workaround or an updated version? Is the split/merge service even still relevant? Has anyone got this service to run on Azure lately?
https://learn.microsoft.com/en-us/azure/azure-sql/database/elastic-scale-overview-split-and-merge
The answer to the question of whether it is still relevant is, in my opinion, no. Split/merge is no longer relevant with the maturation of elastic pools. Elastic pools with one database per tenant seem to be the sustainable way to implement multi-tenancy with legacy code. Our initial plan was to add keys to each of our tables to host multiple tenants per database. Elastic pools give us the same flexibility without having to make breaking changes to our existing code.
Late post here, but we are implementing ElasticScale for a client to split ~50 clients into a database-per-tenant model. I don't think the split/merge tool will be used over the long term, just for the initial data migration from one DB into many shards, but it has been handy for that purpose. We are using the ElasticScale SDK to allow a single API to route queries to the appropriate shard(s) based on the sharding key. Happy to compare notes with you if you are still working on this.
With Azure Data Factory I have built a pipeline to orchestrate the processing of my Azure Analysis Services model through a dedicated Logic App, as explained in this article, and it works properly.
Now, still using Azure Data Factory (through the Logic App), I would also like to update the list of users in a specific role.
In the article mentioned above, to process the Azure Analysis Services models, the Logic App calls a specific API that has the following format:
https://<rollout>.asazure.windows.net/servers/<serverName>/models/<resource>/refreshes
but this API doesn't seem to work for updating the model's roles.
Does anyone know the correct method for updating model roles from a Logic App?
Thanks for any suggestions
If you don't necessarily need to use the Logic App for this, I think it might be possible using Azure Automation and the PowerShell cmdlets for managing Azure Analysis Services:
https://learn.microsoft.com/en-us/azure/analysis-services/analysis-services-refresh-azure-automation
https://learn.microsoft.com/en-us/azure/analysis-services/analysis-services-powershell
https://learn.microsoft.com/en-us/powershell/module/sqlserver/Add-RoleMember?view=sqlserver-ps
One alternative approach might be to have fixed AD groups as members of the tabular model roles and add/remove members from those AD groups. The tabular model roles would then not need to be refreshed; it would simply be a matter of adding or removing members from the AD groups as part of your governance process.
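If you go down the AD group route, a membership change is a single Microsoft Graph call rather than a model change. A rough sketch (the group ID, user ID, and token acquisition are placeholders):

```python
# Sketch: add a user to the AAD group that is a member of the tabular model role.
# group_id, user_id and the access token are placeholders.
import requests

def add_member_to_group(access_token, group_id, user_id):
    url = f"https://graph.microsoft.com/v1.0/groups/{group_id}/members/$ref"
    body = {"@odata.id": f"https://graph.microsoft.com/v1.0/directoryObjects/{user_id}"}
    resp = requests.post(url, json=body,
                         headers={"Authorization": f"Bearer {access_token}"})
    resp.raise_for_status()  # 204 No Content on success
```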
A second approach would be to use dynamic row-level security. Adding records to an Azure SQL DB table is perfectly possible with Logic Apps and could be used to drive security, depending on your requirements. You can then refresh your security dimension with the Logic App. See here for more details:
https://learn.microsoft.com/en-us/power-bi/desktop-tutorial-row-level-security-onprem-ssas-tabular
To answer your question, however: the Azure Analysis Services REST API is useful but not fully featured, i.e. it does not cover all possible operations on tabular models or the service. Another missing example I found was backups: although it is possible to trigger a pause or resume of the service, it is not possible to trigger a backup of a tabular model via the REST API. I do not believe it is possible to alter role members, or at least the operation is not listed in the REST API, although I am happy to be corrected if I am wrong. To be more specific, roles are not mentioned in the list of available objects which can be passed into the Objects array of the POST /refreshes call (e.g. here); tables and partitions are the only ones I'm aware of.
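For reference, a refresh call against that endpoint looks roughly like the sketch below (server, region, model, and token are placeholders). Note that only tables and partitions are valid in the Objects array, which is exactly why roles can't be managed this way:

```python
# Sketch: trigger a refresh via the Azure Analysis Services REST API.
# Server name, region, model name and the bearer token are placeholders.
import requests

region = "westus"            # the <rollout> part of the URI
server = "myserver"
model = "AdventureWorks"
token = "<bearer-token-from-AAD>"

url = f"https://{region}.asazure.windows.net/servers/{server}/models/{model}/refreshes"
body = {
    "Type": "Full",
    "CommitMode": "transactional",
    "MaxParallelism": 2,
    "Objects": [
        # Only tables/partitions are accepted here -- there is no "role" object.
        {"table": "DimCustomer", "partition": "DimCustomer"}
    ],
}
resp = requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()
```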
There are also no examples on the MS github site:
https://github.com/microsoft/Analysis-Services
Finally, consider calling TMSL via PowerShell in an Azure Function, which you can call from Azure Data Factory.
HTH
I understand that Microservices is about independent loosely coupled services. I have read https://en.wikipedia.org/wiki/Microservices.
When it comes to Azure, I understand there are many options such as Azure Service Fabric, AKS, and deploying containers on Azure VMs using Docker or other containerization tools. However, since microservices are about developing atomic, individually scalable services, can this also be achieved by deploying each service as an Azure Web API app within an App Service plan and configuring auto-scale based on performance metrics (even though each API app may not be individually scalable, they can still be individually managed in terms of deployment, configuration, etc.)?
Can someone please suggest if this thought process is correct?
Microservices aren't a platform or a technology, so if you can make small, independently deployable services then they are microservices. Sure, some tech helps, but it depends on your situation.
If you only need a few services you probably don't need anything complex. Make sure the services are well modelled, own their own data, and ideally have good monitoring and a deployment pipeline set up. Design for service failure where possible.
Do you need to scale each part independently? Ideally you should be able to, but do the services really have very different requirements? You could have many small App Service plans, but that comes at the cost of unused resources, so split only when you need to.
This question, and of course the answers, are going to be opinion based, but generally when thinking in terms of microservices, don't think in terms of loads of APIs and VMs. Instead think in terms of: when I upload an image, it needs to be resized and the table updated with a URL for the thumbnail; or when record XXX is updated in the database, run XXX to create a report or update Azure Search. Each service just knows how to do one thing only, e.g. resize an image.
Now one could say: I have a system, a repository library, and some function libraries. When an image is posted, I upload it, then call this, then call that, and so on.
With microservices, you would instead just add the image to a queue and create an Azure Function with a queue trigger that resizes the image and saves both the large version and the thumbnail to storage. That function would then either update the database itself or, in true microservice fashion, put a message on another queue with the new info, and a second function watching that queue would insert it into the database.
You can use the DB queue from anything. You can use the blob queue from anything. Your main API does not care how images are handled. You could change your functions one day to save to Dropbox instead of Azure Blob storage, all really easily, with no rebuild of the API, because the API does not care.
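A rough sketch of that queue-triggered resize function (Python Functions v2 model shown for illustration; the queue names, blob URLs, and storage connection setting are placeholders):

```python
# Sketch: queue-triggered Azure Function that resizes an image and drops a
# message on a second queue for the DB-writer function to pick up.
import json
from io import BytesIO

import azure.functions as func
from azure.storage.blob import BlobClient
from PIL import Image

app = func.FunctionApp()

@app.queue_trigger(arg_name="msg", queue_name="resize-image",
                   connection="AzureWebJobsStorage")
@app.queue_output(arg_name="dbmsg", queue_name="image-saved",
                  connection="AzureWebJobsStorage")
def resize_image(msg: func.QueueMessage, dbmsg: func.Out[str]) -> None:
    # Expected message shape (placeholder): {"id": "...", "blob_url": "...", "thumb_blob_url": "..."}
    job = json.loads(msg.get_body().decode())

    source = BlobClient.from_blob_url(job["blob_url"])
    img = Image.open(BytesIO(source.download_blob().readall()))

    thumb = img.copy()
    thumb.thumbnail((200, 200))
    buf = BytesIO()
    thumb.save(buf, format="PNG")

    thumb_blob = BlobClient.from_blob_url(job["thumb_blob_url"])
    thumb_blob.upload_blob(buf.getvalue(), overwrite=True)

    # Tell the next microservice (the DB writer) what happened.
    dbmsg.set(json.dumps({"id": job["id"], "thumb_url": thumb_blob.url}))
```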
A good example I use this for is email and SMS. My systems don't know how to send an email or an SMS; they only know how to add a message to a queue. My microservices, SendEmail and SendSMS, do know how to do it, and I can change how, and with whom, I send that content really easily. Tomorrow I could switch from Twilio to SendGrid without ever telling the API that I've done it.
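The producer side is tiny: the API only knows how to drop a message on a queue. A sketch (the connection string, queue name, and message shape are placeholders):

```python
# Sketch: all the API knows about email is how to enqueue a request for it.
import json
from azure.storage.queue import QueueClient

queue = QueueClient.from_connection_string("<storage-connection-string>", "send-email")
queue.send_message(json.dumps({
    "to": "user@example.com",
    "template": "order-confirmation",
    "orderId": 12345,
}))
```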
For a more complex example, I have an approval flow. At the moment an approval sends an email or SMS to either the user or an admin, and that can change over time. So I have an SMS service, an email service, and an approval service. When an approval happens, the API just adds a message with the config to the queue; the rest is done by a Logic App that knows to send an email to XXX, send an SMS to XXX, and then update the database. My API is just a POST that creates a queue message.
Basically, what I am saying is: to get started, maybe when porting an existing app, start with the workflow stuff, like sending an email, resizing an image, creating a report, creating a PDF, emailing 50 subscribers, etc., and take all that code out and put it into its own microservice that just knows how to do one thing. Then, as you grow in confidence, compose a workflow from all of these services with Logic Apps and let Azure take care of the rest; that's what they want to do.
Currently our DB runs on the customer's local network and we have a C# client app that consumes the data. Due to some business needs, we have been told to start moving everything to Azure. The DB will be moving to Azure SQL.
We had a discussion about how to access the DB. There are two positions:
One colleague said that we should add one more layer between our app (which will be running outside Azure on end-user PCs) and Azure SQL. In other words, he suggested adding an API service that translates all requests to the DB, i.e. app (on-premises) -> API service (on Azure) -> Azure SQL. This approach looks more reliable and secure, since we are hiding Azure SQL behind the facade of the API service and the app talks only to our API service. It works more like a reverse proxy, and behind this API we could obviously build a more sophisticated structure of DBs.
The other colleague suggested connecting directly to the DB, i.e. app (on-premises) -> Azure SQL. So far we don't have any plans to change the structure of the DB or even increase the number of DBs. He claims it is simpler and that we can secure the connection equally well; having an additional service that just re-translates our queries to the DB and back looks like a waste of time. In the future, if needed, we could add this API.
What would you select and recommend, and why?
A few notes:
We are going to use Azure AD to authenticate users.
Our application will be moving to Azure too, but later (in 1-2 years); we have plans to create a REST API and move to a thin client instead of the fat client we have right now.
Good performance is our goal and we don't want to add extra layers that could hurt it, but security is equally important to us.
Certainly an intermediate layer is one way to go. There isn't enough detail to be sure, but I wonder why you don't try the second option. Usually some redevelopment is normal. But if you can get away without it, and you get sufficient performance then that's even better.
I hope this helps.
Thank you.
Guy
If your application is not just a prototype (it sounds like it is not), then I advise you to build the intermediate API. The primary reasons for this are:
Flexibility
Rolling out a new version of an API is simple: You have either only one deployment or you have something like Octopus Deploy that deploys to a few instances at the same time for you. Deploying client applications is usually much more involved: Creating installers, distributing them, making sure users install them, etc.
If you build the API, you will be able to make changes to the DB and hide these changes from the client applications by just modifying the API implementation, but keeping the API interfaces the same. Moving forward, this will simplify the tasks for your team considerably.
Security
As soon as you have different roles/permissions in your system, you will need to implement them with DB security features if you connect to the DB directly. This may work for simple cases, but even there it is a pain to manage.
With an API, you can implement authorization in the API using C#. That way, you can build whatever you need and you're not restricted by the security features the DB offers.
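A minimal sketch of the idea (shown in Python/Flask with pyodbc purely for illustration; in your case this would be a C# Web API, and the role name, table, and connection string are placeholders):

```python
# Sketch: the authorization decision lives in the API layer, not in DB security.
from flask import Flask, abort, jsonify, request
import pyodbc

app = Flask(__name__)
CONN_STR = "<azure-sql-connection-string>"   # placeholder

def current_user_roles():
    # Placeholder: in a real app, derive roles from the validated Azure AD token.
    return request.headers.get("X-Debug-Roles", "").split(",")

@app.route("/invoices/<int:invoice_id>", methods=["DELETE"])
def delete_invoice(invoice_id):
    if "billing-admin" not in current_user_roles():
        abort(403)  # the rule is enforced in code, not via DB permissions
    with pyodbc.connect(CONN_STR) as conn:
        conn.execute("DELETE FROM dbo.Invoices WHERE Id = ?", invoice_id)
    return "", 204
```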
Also, if you don't take extra care about this, you may end up exposing the DB credentials to the client app, which will be a major security flaw.
Conclusion
Build the intermediate API, unless you have strong reasons not to. As always with architecture considerations, I'm sure there are cases where the above points don't apply. Just make sure you understand all the implications if you decide to go the direct route.
I currently have a couple of WebApi projects that use a few class libraries such as address lookup, bank validation, image storage etc.
Currently they are all in a shared solution, but I'm planning to split them up. I thought about moving the libraries into NuGet packages so that they are separate from the API projects and are properly shared.
However, if I make a change to one of these components I will need to rebuild and redeploy the API service, even though it is a separate component that has changed.
I thought about putting these components into a separate service, but that seems like a lot of overhead for what it is.
I've been looking at Azure WebJobs and think I may be able to move these components into this instead. I have two questions related to this:
Are WebJobs suitable for being called on demand (not via a queue)? The request will be initiated by a user on a web site, which calls my API service, which then calls the WebJob, so it needs to be quick.
Can a WebJob return data? I've seen examples where it does some processing and updates a database, but I need a response (ideally JSON) back to my API service.
Thanks
Based on your requirements, I would suggest trying Azure Functions: create a function with an HTTP trigger, which can be invoked by calling the function URL with parameters and can return the response you expect. You can follow this tutorial for getting started with Azure Functions.
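A minimal sketch of such an HTTP-triggered function (Python programming model shown for illustration; the route, parameter, and validation rule are placeholders, and the same pattern works in C#):

```python
# Sketch: HTTP-triggered Azure Function that validates a value passed by the
# caller and returns JSON back to the calling API service.
import json
import azure.functions as func

app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)

@app.route(route="validate-account", methods=["GET", "POST"])
def validate_account(req: func.HttpRequest) -> func.HttpResponse:
    account = req.params.get("account")
    if account is None and req.get_body():
        account = req.get_json().get("account")
    # Placeholder rule: pretend valid account numbers are exactly 8 characters.
    result = {"account": account, "valid": bool(account) and len(account) == 8}
    return func.HttpResponse(json.dumps(result), mimetype="application/json")
```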