I have a simple bit of JSON that I want to serve from behind an Azure Traffic Manager endpoint, so ideally it would live in a blob storage account with a public access policy that allows reading the blob. When I attempt this - using an external endpoint in ATM - I get a 400 HTTP response.
The endpoint shows as online in the portal, which is interesting since requesting that URL through the browser also results in a 400 error. I have the health probe pointed at a public blob in the $root container.
My second attempt was to try an Azure Function as the endpoint, and in this case the health probe ends up in a 'stopped' state. Older articles suggested this state is returned for a Basic App Service plan (mine is a Consumption plan), but I presume that information is outdated at this point?
What's the resolution here? This shouldn't be this hard!
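For reference, here is a minimal sketch of how I am reproducing what the probe (and the browser) sees, using a hypothetical storage account name and probe path:

```python
import requests

# Hypothetical storage account and probe path; substitute your own values.
PROBE_URL = "https://mystorageacct.blob.core.windows.net/$root/health.json"

# Traffic Manager's HTTP/HTTPS probe just issues a GET and expects a 200
# within the timeout; anything else marks the endpoint degraded.
resp = requests.get(PROBE_URL, timeout=10)
print(resp.status_code)
print(resp.headers.get("x-ms-error-code"))  # Azure Storage adds this header on errors
print(resp.text[:500])                      # the XML error body usually names the exact problem
```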
Based on your description, I checked this on my side and could reproduce the same issue. I then found existing issues covering Traffic Manager with Blob Storage and the integration of Azure Functions with Traffic Manager.
As far as I understand, Traffic Manager does not support integration with Blob Storage; you could add a feature request for this here.
For integration with Azure Functions, note that only Web Apps at the Standard SKU or above are eligible for use with Traffic Manager. For apps below the Standard SKU (such as the Consumption plan), you could leverage Azure Functions Proxies instead; see the sketch after the references below. Here are some references:
Traffic Manager - Web Apps as endpoints
Azure Functions Traffic Manager
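If you do go the Functions Proxies route on a Consumption plan, a minimal proxies.json that fronts a public blob could look like the sketch below (written out from Python here; the storage account, container, and route are hypothetical placeholders):

```python
import json

# Sketch of a minimal Azure Functions Proxies configuration that forwards a
# GET on /config to a public blob, so the Function app can serve as the
# Traffic Manager endpoint. All names below are hypothetical placeholders.
proxies = {
    "$schema": "http://json.schemastore.org/proxies",
    "proxies": {
        "blob-config": {
            "matchCondition": {"methods": ["GET"], "route": "/config"},
            "backendUri": "https://mystorageacct.blob.core.windows.net/config/settings.json",
        }
    },
}

with open("proxies.json", "w") as f:
    json.dump(proxies, f, indent=2)
```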
I need to use Azure Cognitive Services (Speech to Text) behind a corporate firewall. The Speech to Text batch processing issues a callback from Azure to notify you once the process is complete.
(https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/samples/batch/csharp) - see webhookreceiver.cs
Does anyone know which Azure Cognitive Services IP addresses need to be whitelisted on the corporate firewall so that I can receive the callback requests from Cognitive Services?
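For context, the callback receiver is just an HTTPS endpoint that the service POSTs to when a job finishes; a rough Python stand-in for the sample's webhookreceiver.cs (hypothetical route, no validation) looks like this, and it is this inbound request that has to make it through the corporate firewall:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/transcription-callback", methods=["POST"])
def transcription_callback():
    # The batch transcription service POSTs a small JSON notification here once
    # a transcription completes; the payload points back at the transcription resource.
    event = request.get_json(force=True)
    print("Callback received:", event)
    return "", 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```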
The response from the call has a resultsUrls array, which contains channel_1 and channel_0. These URLs are accessible by anyone.
Also, the GET request from the step "Make repeated GET https://centralus.cris.ai/api/speechtotext/v2.0/transcriptions/ to find out when the transcription is complete." that retrieves the list of results is not otherwise secured, so anyone with the subscription key can view them.
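For reference, the polling step quoted above amounts to something like this (the key and transcription ID are hypothetical placeholders; the header name is the standard Ocp-Apim-Subscription-Key):

```python
import time
import requests

# Hypothetical values; substitute your own Speech subscription key and transcription ID.
SUBSCRIPTION_KEY = "<speech-subscription-key>"
BASE_URL = "https://centralus.cris.ai/api/speechtotext/v2.0/transcriptions/"

def wait_for_transcription(transcription_id, poll_seconds=30):
    """Repeatedly GET the transcription until it reaches a terminal status."""
    headers = {"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY}
    while True:
        resp = requests.get(BASE_URL + transcription_id, headers=headers)
        resp.raise_for_status()
        body = resp.json()
        if body.get("status") in ("Succeeded", "Failed"):
            return body  # on success this includes the resultsUrls mentioned above
        time.sleep(poll_seconds)
```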
The URIs / SAS tokens exchanged are only known to you and the service. If they are not distributed further, no one else will have access.
We will have further options such as VNet support in the near future.
I am struggling to get my Azure Batch nodes to start within a pool that is configured to use a virtual network. The virtual network has a service endpoint policy with a "Microsoft.Storage" policy definition that points at a single storage account. Without the service endpoint policy on the virtual network the Azure Batch pool works as expected, but with it the following error occurs and the nodes never start.
I have tried creating the Batch account in both pool allocation modes. This did not seem to make a difference: the pool resizes successfully and then the nodes get stuck in the "Starting" state. In "User Subscription" mode I found the start-up error, because I can see the VM instances in my own subscription:
VM has reported a failure when processing extension 'batchNodeExtension'. Error message: "Enable failed: processing file downloads failed: failed to download file[0]: failed to download file: unexpected status code: actual=403 expected=200" More information on troubleshooting is available at https://aka.ms/VMExtensionCSELinuxTroubleshoot
From what I can determine, this is an Azure VM extension that runs to configure the VM for Azure Batch. My base image is Canonical, ubuntuserver, 18.04-lts (batch.node.ubuntu 18.04). I can see that the extension is attempting to download from:
https://a52a7f3c745c443e8c2cac69.blob.core.windows.net/nodeagentpackage-version9-22-0-2/Ubuntu-18.04/batch_init-ubuntu-18.04-1.8.7.tar.gz (note I removed the SAS token from this URL for posting here)
There are 8 further files that are downloaded, and it looks like this is what configures the Batch agent on the node.
The 403 error indicates that the node cannot connect to this storage account, which makes sense given the service endpoint policy: the policy does not include this storage account, and the account is external to my Azure subscription. I thought I might be able to add it to the service endpoint policy, but I have no way of determining which Azure subscription it is part of. If I knew this, I thought I could add it like:
Endpoint policy allows you to add specific Azure Storage accounts to allow list, using the resourceID format. You can restrict access to all storage accounts in a subscription
E.g. /subscriptions/subscriptionId (from https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-service-endpoint-policies-overview)
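i.e., something along these lines (a rough sketch of the policy definition shape based on the format quoted above; the subscription IDs and account names are placeholders):

```python
# Rough sketch of the service endpoint policy definition I would need, based on
# the resourceID format quoted above. All IDs below are placeholders.
policy_definition = {
    "name": "allow-batch-storage",
    "properties": {
        "service": "Microsoft.Storage",
        "serviceResources": [
            # my own storage account
            "/subscriptions/<my-sub-id>/resourceGroups/<my-rg>/providers"
            "/Microsoft.Storage/storageAccounts/<my-account>",
            # the Batch-owned storage account -- but I don't know its subscription ID
            "/subscriptions/<unknown-batch-subscription-id>",
        ],
    },
}
```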
I tried adding network security group rules using the service tag for Azure Storage, but this did not help. The node still cannot connect, which makes sense given the description of service endpoint policies.
The reason for my interest in this is the following post:
https://github.com/Azure/Batch/issues/66
I am trying to minimise the bandwidth charges from my storage account by using service endpoints.
I have also tried creating my own VM, but I am not sure whether the "batchNodeExtension" extension is run automatically for VMs that you use with Batch.
I would really appreciate any pointers because I am running out of ideas to try!
Batch requires a generic rule allowing all of Storage (a regional variant is acceptable), as specified at https://learn.microsoft.com/en-us/azure/batch/batch-virtual-network#network-security-groups-specifying-subnet-level-rules. Currently this access is mainly used to download our agent and to maintain state / fetch information needed to run tasks.
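For illustration, the kind of outbound NSG rule this refers to looks roughly like the following (ARM-style security rule sketched as a Python dict; the name, priority, and region tag are placeholders):

```python
# Rough ARM-style sketch of an outbound NSG rule that allows the Batch node to
# reach Azure Storage over HTTPS using the Storage service tag (optionally
# regional, e.g. "Storage.UKSouth"). Name and priority are placeholders.
storage_outbound_rule = {
    "name": "AllowStorageOutbound",
    "properties": {
        "priority": 200,
        "direction": "Outbound",
        "access": "Allow",
        "protocol": "Tcp",
        "sourceAddressPrefix": "VirtualNetwork",
        "sourcePortRange": "*",
        "destinationAddressPrefix": "Storage",  # or a regional tag such as "Storage.<region>"
        "destinationPortRange": "443",
    },
}
```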
I am facing the same problem with Azure Machine Learning. We are trying to prevent data exfiltration by using service endpoint policies to stop data being sent to any storage accounts outside our subscription.
Since Azure ML compute depends on the Batch service, we were unable to run any ML compute while the SP policy was associated with the compute subnet.
Microsoft states the following:
Filtering traffic on Azure services deployed into Virtual Networks: At this time, Azure Service Endpoint Policies are not supported for any managed Azure services that are deployed into your virtual network.
https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-service-endpoint-policies-overview#scenarios
My reading of this restriction is that any service that relies on Azure Batch (which is almost every managed service in Azure?) cannot be used with a service endpoint policy, which makes it a fairly useless feature...
In the end we removed the SP policy completely from our network architecture and now consider it only for scenarios where you want to restrict access to specific storage accounts.
I can currently configure Azure CDN against a single storage account. What I'm wondering about is the event of a disaster, where that particular region becomes unavailable (outages, etc.). If I need to refresh the cache at that point, I don't have any regional fallbacks. What is the correct way of supporting multiple storage accounts with the CDN?
One option I can see is Traffic Manager: it receives the request and routes it to one of X CDNs configured for X storage accounts based on performance. That way, if one of the regions becomes unavailable, Traffic Manager should fall back to another one. This is an expensive solution though, so ideally I'm looking for one CDN and X storage accounts, where the CDN handles the worldwide performance along with a fallback region.
Here are the steps to configure AFD:
1. Create AFD from the Portal.
2. Click on Front Door Designer. You will see 3 sections: the Frontend, which will already be configured, then Backend Pools and Routing Rules.
3. Click on Backend Pools and add a new backend pool. Select Storage as the host type, pick your primary Storage blob endpoint, and give it priority 1.
4. Once that is done, configure the health probes. Then add your secondary Storage blob endpoint and give it priority 2 (a rough sketch of the resulting backend pool follows the documentation link below).
5. Configure the routing rules and make sure you have /* as the matching pattern. You can also enable caching in the rule, and caching can key on the query string. Moreover, if you have a dynamic page, you can enable dynamic compression.
6. Once that is done, try accessing the AFD URL and check how it works.
Here is the Public Documentation for your reference: https://learn.microsoft.com/en-us/azure/frontdoor/front-door-routing-methods
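For reference, the backend pool from steps 3-4 ends up looking roughly like this in ARM terms (sketched as a Python dict; the storage hostnames are hypothetical placeholders):

```python
# Rough ARM-style sketch of the Front Door backend pool described above: two
# storage blob endpoints, primary at priority 1 and secondary at priority 2,
# so traffic fails over to the secondary only when the primary is unhealthy.
# Hostnames are hypothetical placeholders.
backend_pool = {
    "name": "storage-backends",
    "properties": {
        "backends": [
            {
                "address": "primarystorage.blob.core.windows.net",
                "backendHostHeader": "primarystorage.blob.core.windows.net",
                "httpPort": 80,
                "httpsPort": 443,
                "priority": 1,
                "weight": 50,
            },
            {
                "address": "secondarystorage.blob.core.windows.net",
                "backendHostHeader": "secondarystorage.blob.core.windows.net",
                "httpPort": 80,
                "httpsPort": 443,
                "priority": 2,
                "weight": 50,
            },
        ],
    },
}
```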
You could try using Azure Front Door: it is a combination of a CDN and an L7 load balancer, so you should be able to implement what you're asking for with it.
Let me know if you face any difficulties.
I have an Azure Storage account where I have blobs stored in containers.
I would like to limit access to this storage account to specific Azure resources and block connections from the public internet.
I currently have access limited to IPs from our office locations. This allows us to support the process and use Azure Storage Explorer.
I would like to additionally allow access from an Azure Logic App that would work with the data stored there.
I tried adding the outgoing IP addresses from the Logic App, but that did not allow access. Then, in the Logic App designer, I get the following error.
Are the IPs you allowed included in the list of Logic Apps outbound IPs? If not, I think you will need to whitelist the ones from that list.
This is the list of Logic App IPs per region & connector:
Logic App IPs
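If you do try whitelisting them, the storage account firewall takes those addresses in this shape (ARM networkAcls fragment sketched as a Python dict; the addresses shown are placeholders, not the real connector IPs):

```python
# Rough sketch of the storage account's networkAcls section with Logic Apps
# outbound IPs added. The addresses below are placeholders -- use the real ones
# for your region from the list linked above.
network_acls = {
    "defaultAction": "Deny",
    "bypass": "AzureServices",
    "ipRules": [
        {"value": "13.65.82.0/28", "action": "Allow"},  # placeholder range
        {"value": "40.84.25.10", "action": "Allow"},    # placeholder address
    ],
    "virtualNetworkRules": [],
}
```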
I am having the same issue. Apparently this configuration is not supported. Quoted from an Azure ticket yesterday:
"Yea we have had couple (sic) customers reporting this issue. Unfortunately this feature is not supported as of now. The azure networking team was working on adding this support for logic apps. As of last month there was no ETA given."
Also, in my storage account logs the failed Logic App requests are coming from 10.157.x.x, which I cannot whitelist in the storage account firewall. I even tried "fooling" the firewall by creating a VNet containing that subnet and allowing it. No dice.
Have you used the Blob Storage connector in your Logic App? Once you add the credentials for the connection, you should be able to connect from the Logic App.
The full documentation can be found here