Is there any way I can enable caching per operation via an API policy in Azure?
I want to enable caching per operation in Azure APIM. How do I do that?
Related
I am struggling to get my Azure Batch nodes to start within a pool that is configured to use a virtual network. The virtual network has been configured with a service endpoint policy that has a "Microsoft.Storage" policy definition pointing at a single storage account. Without the service endpoints defined on the virtual network the Azure Batch pool works as expected, but with them in place the following error occurs and the node never starts.
I have tried creating the Batch account in both pool allocation modes. This did not seem to make a difference: the pool resizes successfully, but the nodes are stuck in the "Starting" state. In "User Subscription" mode I was able to find the start-up error because I can see the VM instance in my account:
VM has reported a failure when processing extension 'batchNodeExtension'. Error message: "Enable failed: processing file downloads failed: failed to download file[0]: failed to download file: unexpected status code: actual=403 expected=200" More information on troubleshooting is available at https://aka.ms/VMExtensionCSELinuxTroubleshoot
From what I can determine, this is an Azure VM extension that runs to configure the VM for Azure Batch. My base image is Canonical, ubuntuserver, 18.04-lts (batch.node.ubuntu 18.04). I can see that the extension is attempting to download from:
https://a52a7f3c745c443e8c2cac69.blob.core.windows.net/nodeagentpackage-version9-22-0-2/Ubuntu-18.04/batch_init-ubuntu-18.04-1.8.7.tar.gz (note I removed the SAS token from this URL for posting here)
There are eight further files that are downloaded; it looks like this is configuring the Batch agent on the node.
The 403 error indicates that the node cannot connect to this storage account, which makes sense given the service endpoint policy: the policy does not include this storage account, and the account is external to my Azure subscription. I thought I might be able to add it to the service endpoint policy, but I have no way of determining which Azure subscription it belongs to. If I knew that, I thought I could add it like this:
Endpoint policies allow you to add specific Azure Storage accounts to an allow list, using the resourceID format. You can restrict access to all storage accounts in a subscription, e.g. /subscriptions/subscriptionId (from https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-service-endpoint-policies-overview).
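If I could determine the subscription (or the account's full resource ID), I imagine adding it to the policy via ARM would look roughly like this sketch (all names, the region and the api-version are placeholders, not my real values):

```python
# Sketch only: PUT a service endpoint policy with a single storage account on the allow list.
# Needs the requests and azure-identity packages; all identifiers below are placeholders.
import requests
from azure.identity import DefaultAzureCredential

SUB = "<subscription-id>"
RG = "<resource-group>"
POLICY = "batch-storage-sep"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
       f"/providers/Microsoft.Network/serviceEndpointPolicies/{POLICY}?api-version=2022-07-01")

body = {
    "location": "westeurope",
    "properties": {
        "serviceEndpointPolicyDefinitions": [{
            "name": "allow-my-storage",
            "properties": {
                "service": "Microsoft.Storage",
                # Either a single storage account resource ID, or a broader
                # scope such as /subscriptions/<subscription-id>.
                "serviceResources": [
                    f"/subscriptions/{SUB}/resourceGroups/{RG}"
                    "/providers/Microsoft.Storage/storageAccounts/<account-name>"
                ],
            },
        }]
    },
}

resp = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
print(resp.status_code, resp.json())
```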
I tried adding network security group rules using the service tags for Azure Storage, but this did not help: the node still cannot connect, which makes sense given the description of service endpoint policies.
The reason for my interest in this is the following post:
https://github.com/Azure/Batch/issues/66
I am trying to minimise the bandwidth charges from my storage account by using service endpoints.
I have also tried to create my own VM, but I am not sure whether the "batchNodeExtension" script is run automatically for VMs that you're using with Batch.
I would really appreciate any pointers because I am running out of ideas to try!
Batch requires a generic rule for all of Azure Storage (it can be the regional variant), as specified at https://learn.microsoft.com/en-us/azure/batch/batch-virtual-network#network-security-groups-specifying-subnet-level-rules. Currently this is mainly used to download our agent and to maintain state/get information needed to run tasks.
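As a rough sketch, such an outbound rule created over the ARM REST API might look like the following (the NSG name, priority and api-version are placeholders; a regional tag such as "Storage.WestEurope" can be used instead of "Storage"):

```python
# Sketch only: create an outbound NSG rule allowing traffic to the Azure Storage service tag.
import requests
from azure.identity import DefaultAzureCredential

SUB, RG, NSG = "<subscription-id>", "<resource-group>", "<nsg-name>"

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
       f"/providers/Microsoft.Network/networkSecurityGroups/{NSG}"
       "/securityRules/AllowStorageOutbound?api-version=2022-07-01")

rule = {
    "properties": {
        "priority": 200,
        "direction": "Outbound",
        "access": "Allow",
        "protocol": "Tcp",
        "sourceAddressPrefix": "*",
        "sourcePortRange": "*",
        # Service tag for all of Azure Storage; the regional variant
        # (e.g. "Storage.WestEurope") also works.
        "destinationAddressPrefix": "Storage",
        "destinationPortRange": "443",
    }
}

resp = requests.put(url, json=rule, headers={"Authorization": f"Bearer {token}"})
print(resp.status_code)
```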
I am facing the same problem with Azure Machine Learning. We are trying to fight data exfiltration by using service endpoint policies to prevent data from being sent to any storage accounts outside our subscription.
Since Azure ML compute depends on the Batch service, we were unable to run any ML compute when the service endpoint policy was associated with the compute subnet.
Microsoft states the following:
Filtering traffic on Azure services deployed into Virtual Networks: At this time, Azure Service Endpoint Policies are not supported for any managed Azure services that are deployed into your virtual network.
https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-service-endpoint-policies-overview#scenarios
I take from this restriction that any service that uses Azure Batch (which is almost every service in Azure?) cannot be used with a service endpoint policy, which makes it a rather useless feature...
In the end we removed the service endpoint policy completely from our network architecture and now consider it only for scenarios where you want to restrict clients to specific storage accounts.
I can currently configure Azure CDN against a single storage account. What I'm wondering about is the event of a disaster where that particular region becomes unavailable (outages, etc.): if I need to refresh the cache at that point, I don't have any regional fallback. What is the correct way of supporting multiple storage accounts with the CDN?
One option I can see is Traffic Manager: it receives the request and sends it to one of X CDNs configured for X storage accounts based on performance, so if one of the regions becomes unavailable, Traffic Manager should fall back to another one. This is an expensive solution, though, so ideally I'm looking for a setup with one CDN and X storage accounts, where the CDN handles the worldwide performance along with a fallback region.
Here are the steps to configure AFD:
Create an Azure Front Door instance from the portal.
Click on Front Door Designer. You will see three sections: the frontend, which will already be configured, then backend pools and routing rules.
Click on Backend Pools and add a new backend pool. Select Storage as the host type, pick your primary storage account's blob endpoint, and set its priority to 1.
Once that is done, configure the health probes. Then add your second storage account's blob endpoint and set its priority to 2.
Configure the routing rules and make sure you have /* as the match pattern. You can also enable caching in the rule and cache based on the query string; if you serve dynamic content, you can enable dynamic compression as well.
Once that is done, try accessing the AFD URL and check how it works.
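One simple way to check the failover behaviour is to watch the endpoint while you disable the primary backend in the pool. A minimal sketch (the AFD host name and path are placeholders, and the x-cache header is only present when caching is enabled on the route):

```python
# Sketch only: poll the Front Door endpoint and print status, cache info and latency
# while you test a failover of the primary backend.
import time
import requests

AFD_URL = "https://<your-afd-name>.azurefd.net/<container>/<blob-path>"

for _ in range(10):
    resp = requests.get(AFD_URL, timeout=10)
    print(resp.status_code,
          resp.headers.get("x-cache"),          # cache hit/miss when caching is enabled
          f"{resp.elapsed.total_seconds():.2f}s")
    time.sleep(5)
```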
Here is the Public Documentation for your reference: https://learn.microsoft.com/en-us/azure/frontdoor/front-door-routing-methods
You could try using Azure Front Door. It is a combination of a CDN and an L7 load balancer, so you should be able to implement what you're asking for with it.
Let me know if you face any difficulties.
Using the Resource Management API I can remove an Azure resource (https://learn.microsoft.com/en-us/rest/api/resources/resources#Resources_DeleteById). This API returns 202, meaning the removal is accepted, but the resource is not removed right away. The response header in my case contains an "x-ms-request-id" value. How can I use it to get the status of this operation? Did the operation succeed? In my case I am removing the Log Analytics solution resource.
Any help is greatly appreciated.
Based on your description, I have checked this issue. I assume Azure takes some time to handle your request; you could use the Resources - Get By Id API to check whether your resource still exists.
For a simple approach, you could use resources.azure.com, choose your resource, and check the details there. I removed my Log Analytics solution and could verify the result that way.
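A minimal sketch of the same Get By Id check over REST (the resource ID and api-version are placeholders for a Log Analytics solution resource):

```python
# Sketch only: check whether the resource still exists after the DELETE.
import requests
from azure.identity import DefaultAzureCredential

RESOURCE_ID = ("/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
               "/providers/Microsoft.OperationsManagement/solutions/<solution-name>")

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
resp = requests.get(
    f"https://management.azure.com{RESOURCE_ID}?api-version=2015-11-01-preview",
    headers={"Authorization": f"Bearer {token}"},
)

# 200 -> still there (inspect properties.provisioningState), 404 -> removal has completed.
print(resp.status_code)
```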
UPDATE
Based on your latest comment, I have checked the REST API again and tested the operations on both ASM and ARM; you can refer to them as follows:
For classic Azure Services (ASM)
You could use Get Operation Status with authentication using a management certificate to check the operation status.
For ARM
You could follow this tutorial about tracking asynchronous Azure operations. You could use the header values returned by the asynchronous REST operations, then request the related URL with authentication using Azure Active Directory to determine the status of your operation.
Based on your Azure service, you need to use the ARM approach.
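A minimal sketch of that ARM approach, assuming a Log Analytics solution resource (the resource ID and api-version are placeholders): issue the DELETE, then poll the URL that ARM returns in the Azure-AsyncOperation (or Location) response header.

```python
# Sketch only: delete a resource and track the asynchronous operation until it finishes.
import time
import requests
from azure.identity import DefaultAzureCredential

RESOURCE_ID = ("/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
               "/providers/Microsoft.OperationsManagement/solutions/<solution-name>")

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
headers = {"Authorization": f"Bearer {token}"}

# Kick off the delete; ARM usually answers 202 Accepted for long-running deletes.
resp = requests.delete(
    f"https://management.azure.com{RESOURCE_ID}?api-version=2015-11-01-preview",
    headers=headers,
)
print("DELETE returned", resp.status_code)

# ARM points at the operation status via one of these headers.
poll_url = resp.headers.get("Azure-AsyncOperation") or resp.headers.get("Location")
while poll_url:
    time.sleep(10)
    poll = requests.get(poll_url, headers=headers)
    if poll.status_code == 202:                  # Location-style polling: still running
        continue
    body = poll.json() if poll.content else {}
    state = body.get("status", "Succeeded")      # Azure-AsyncOperation bodies carry "status"
    print("operation status:", state)            # e.g. InProgress / Succeeded / Failed
    if state != "InProgress":
        break
```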
I'm trying to access metrics such as requests/sec, capacity, RUs, etc. programmatically. I have access to API tokens and so on, but I'm not seeing a .NET management NuGet package for DocumentDB.
TIA
You can read metrics via the Azure Insights SDK. The .NET SDK is currently in preview: https://www.nuget.org/packages/Microsoft.Azure.Insights
The REST API documentation is here: https://msdn.microsoft.com/en-us/library/azure/dn931939.aspx?f=255&MSPPError=-2147217396
To list all metrics, you can call the following endpoint (you'll need to include the Bearer token in the Authorization header):
https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resource group of your documentDB}/providers/Microsoft.DocumentDb/databaseAccounts/{documentDB account name}/metricDefinitions?api-version=2015-04-08
This will list all available metric definitions. You can then use a query like this to read the individual metrics:
https://management.azure.com/subscriptions/{subecriptionId}/resourceGroups/{resource group}/providers/Microsoft.DocumentDb/databaseAccounts/{DocumentDB account}/metrics?api-version=2015-04-08&$filter=%28name.value%20eq%20%27Total%20Requests%27%29%20and%20timeGrain%20eq%20duration%27PT5M%27%20and%20startTime%20eq%202016-05-28T20%3A26%3A00.0000000Z%20and%20endTime%20eq%202016-05-29T20%3A26%3A00.0000000Z
For more info on reading metrics, see https://blogs.msdn.microsoft.com/cloud_solution_architect/2016/02/23/retrieving-resource-metrics-via-the-azure-insights-api/
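Putting the two calls above together, a rough sketch in Python (the subscription, resource group and account name are placeholders; the api-version and $filter syntax follow the URLs shown above):

```python
# Sketch only: list metric definitions for a DocumentDB account, then read one metric.
import requests
from azure.identity import DefaultAzureCredential

SUB, RG, ACCOUNT = "<subscription-id>", "<resource-group>", "<documentdb-account>"
BASE = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
        f"/providers/Microsoft.DocumentDb/databaseAccounts/{ACCOUNT}")

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
headers = {"Authorization": f"Bearer {token}"}

# 1. List the available metric definitions.
defs = requests.get(f"{BASE}/metricDefinitions",
                    params={"api-version": "2015-04-08"}, headers=headers).json()
print([d.get("name", {}).get("value") for d in defs.get("value", [])])

# 2. Read one metric ("Total Requests") over a time window at 5-minute grain.
flt = ("(name.value eq 'Total Requests') and timeGrain eq duration'PT5M' "
       "and startTime eq 2016-05-28T20:26:00.0000000Z "
       "and endTime eq 2016-05-29T20:26:00.0000000Z")
metrics = requests.get(f"{BASE}/metrics",
                       params={"api-version": "2015-04-08", "$filter": flt},
                       headers=headers).json()
print(metrics)
```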
Azure API Management promises 1000 requests per second per instance. (I don't know whether this is the correct rate, but let's assume it is.) My question is: how can we scale a web service just by scaling the API Management instance, without scaling the service's own infrastructure?
For example, if Azure API Management supports 1000 requests per second per instance, then the backend service also has to support the same request-handling threshold in its infrastructure. If that is the case, what is really meant by scaling up the web service through Azure API Management?
By using Azure API management you can turn on caching easily, which can significantly reduce the traffic to your back-end. In addition, your API Management instance can be scaled up easily to have more VMs behind it. However, if the back-end cannot handle the traffic (after caching), then you might need a more scalable back-end :)
Miao is correct. However, remember that API Management caching only works for GET requests. Also, the cache size provided by API Management is only 1 GB as of today (it may increase in the future), with no monitoring as of today. So if you need monitoring of the API Management cache, use an external cache such as Redis.
When you talk about scalability, it applies at all layers. The API Management Consumption plan can be a good option to consider for autoscaling. Then think of Azure VMSS or App Service autoscale for scaling the backend APIs. And if your backend APIs talk to a database, consider something like database autoscaling on Azure, e.g. Azure SQL Hyperscale.
So scalability is not only at the API Management level; think carefully about all layers.
A sample implementation of caching in API Management is here: https://sanganakauthority.blogspot.com/2019/09/improve-azure-api-management.html, and a minimal policy sketch is shown below.
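Per-operation caching itself is enabled by applying cache-lookup / cache-store policies at the operation scope in the policy editor. A minimal sketch (the duration and the vary-by query parameter are just example values; adjust them for your operation):

```xml
<policies>
    <inbound>
        <base />
        <!-- Look up the response in the cache; vary by a query parameter of your choice. -->
        <cache-lookup vary-by-developer="false" vary-by-developer-groups="false">
            <vary-by-query-parameter>id</vary-by-query-parameter>
        </cache-lookup>
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <!-- Store the backend response in the cache for 1 hour (3600 seconds). -->
        <cache-store duration="3600" />
        <base />
    </outbound>
    <on-error>
        <base />
    </on-error>
</policies>
```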