Limiting Azure storage container access to an App Service

I have a resource group on Azure with a web application hosted as an App Service that uses an Azure SQL Database and a storage container for blob storage.
Within the storage account's networking section, I want to limit public access so that it is enabled only from selected virtual networks and IP addresses. If I enable this, I then need to provide access for my App Service within the same resource group. The most appropriate route seems to be allowing access by resource instance, i.e. adding a resource type by instance name. However, in the drop-down list of resource types, there does not seem to be an option for App Service. Is this possible?
I considered allowing specific IP addresses, but the Microsoft documentation suggests that resources in the same region as the storage account use private Azure IP addresses for communication.

• I would suggest using a private endpoint instead. It works in much the same way as a ‘Resource instance’ rule, and it suits your scenario of restricting access to a managed-identity-based resource such as the App Service hosting your web application.
Thus, for this purpose, you will have to create a private endpoint, with the virtual network deployed in the same region and the same resource group as the storage account. This ensures that the private endpoint connection traverses exclusively over the Microsoft backbone network. Also, the private DNS zone created along with the private endpoint for the blob service (or whichever storage sub-resource you choose) will host the DNS records for that storage account resource.
To create a private endpoint, follow the portal's ‘Create a private endpoint’ flow (the original answer included portal screenshots here).
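For reference, here is a minimal sketch of the same operation using the Python management SDK (azure-mgmt-network). All resource names, IDs, and the region are placeholders you would replace with your own values:

```python
# Hedged sketch: creating a private endpoint for a storage account's blob
# service with azure-mgmt-network. Names and IDs below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import (
    PrivateEndpoint,
    PrivateLinkServiceConnection,
    Subnet,
)

subscription_id = "<subscription-id>"
resource_group = "<resource-group>"

network_client = NetworkManagementClient(DefaultAzureCredential(), subscription_id)

storage_account_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    "/providers/Microsoft.Storage/storageAccounts/<storage-account>"
)
subnet_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
    "/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<subnet>"
)

poller = network_client.private_endpoints.begin_create_or_update(
    resource_group,
    "pe-storage-blob",
    PrivateEndpoint(
        location="<region>",  # must match the virtual network's region
        subnet=Subnet(id=subnet_id),
        private_link_service_connections=[
            PrivateLinkServiceConnection(
                name="pe-conn-blob",
                private_link_service_id=storage_account_id,
                group_ids=["blob"],  # target sub-resource: the blob service
            )
        ],
    ),
)
print(poller.result().provisioning_state)
```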
Similarly, in the App Service's networking settings, you will have to integrate the app with the same virtual network, so that its outbound traffic can reach the private endpoint created above.
• Therefore, the ‘Resource instance’ section in the storage account does not offer ‘Microsoft App Services’ as an option by design, but the same result can be achieved with a private endpoint. Also, ensure you check the option ‘Allow Azure services on the trusted services list to access this storage account’, as the service ‘Microsoft.AppServices/service’ is on the list of trusted services according to the official documentation.
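Note that the trusted-services path requires the App Service to authenticate with its managed identity rather than an account key. As a minimal sketch of what that looks like from application code, assuming the identity has been granted a data role such as Storage Blob Data Contributor on the account:

```python
# Hedged sketch: reading a blob from an App Service using its system-assigned
# managed identity (azure-identity + azure-storage-blob). Assumes the identity
# holds an RBAC data role (e.g. Storage Blob Data Contributor) on the account.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# In App Service, DefaultAzureCredential picks up the managed identity
# automatically; locally it falls back to your developer credentials.
credential = DefaultAzureCredential()

service = BlobServiceClient(
    account_url="https://<storage-account>.blob.core.windows.net",
    credential=credential,
)

blob = service.get_blob_client(container="<container>", blob="<blob-name>")
data = blob.download_blob().readall()
print(f"downloaded {len(data)} bytes")
```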
Please find the links below for more details on configuring private endpoints and Microsoft resource instances in the storage account:
https://learn.microsoft.com/en-us/azure/storage/common/storage-network-security?tabs=azure-portal#grant-access-to-trusted-azure-services
https://learn.microsoft.com/en-us/answers/questions/41129/app-service-to-access-storage-account-with-firewal.html

Related

Azure Batch within a VNET that has a Service endpoint policy for Storage

I am struggling to get my Azure Batch nodes to start within a pool that is configured to use a virtual network. The virtual network has been configured with a service endpoint policy that has a "Microsoft.Storage" policy definition pointing at a single storage account. Without the service endpoints defined on the virtual network the Azure Batch pool works as expected, but with them the following error occurs and the node never starts.
I have tried creating the Batch account in both pool allocation modes. This did not seem to make a difference: the pool resizes successfully and then the nodes are stuck in the "Starting" state. In "User Subscription" mode I found the start-up error because I can see the VM instance in my account:
VM has reported a failure when processing extension 'batchNodeExtension'. Error message: "Enable failed: processing file downloads failed: failed to download file[0]: failed to download file: unexpected status code: actual=403 expected=200" More information on troubleshooting is available at https://aka.ms/VMExtensionCSELinuxTroubleshoot
From what I can determine, this is an Azure VM extension that runs to configure the VM for Azure Batch. My base image is Canonical, ubuntuserver, 18.04-lts (batch.node.ubuntu 18.04). I can see that the extension is attempting to download from:
https://a52a7f3c745c443e8c2cac69.blob.core.windows.net/nodeagentpackage-version9-22-0-2/Ubuntu-18.04/batch_init-ubuntu-18.04-1.8.7.tar.gz (note I removed the SAS token from this URL for posting here)
There are 8 further files that are downloaded; it looks like this configures the Batch agent on the node.
The 403 error indicates that the node cannot connect to this storage account, which makes sense given the service endpoint policy: the policy does not include this storage account, and the account is external to my Azure subscription. I thought I might be able to add it to the service endpoint policy, but I have no way of determining which Azure subscription it belongs to. If I knew that, I thought I could add it like this:
"Endpoint policy allows you to add specific Azure Storage accounts to allow list, using the resourceID format. You can restrict access to all storage accounts in a subscription, e.g. /subscriptions/subscriptionId" (from https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-service-endpoint-policies-overview)
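For illustration, here is a rough sketch of how such a subscription-scoped allow entry might be added with the Python management SDK (azure-mgmt-network). The policy and definition names are hypothetical, and this only helps if you know the subscription ID, which is exactly the missing piece here:

```python
# Hedged sketch: adding a subscription-scoped allow entry to a service
# endpoint policy with azure-mgmt-network. Names and IDs are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import ServiceEndpointPolicyDefinition

network_client = NetworkManagementClient(
    DefaultAzureCredential(), "<my-subscription-id>"
)

poller = network_client.service_endpoint_policy_definitions.begin_create_or_update(
    "<resource-group>",
    "<policy-name>",
    "allow-batch-subscription",
    ServiceEndpointPolicyDefinition(
        service="Microsoft.Storage",
        # Allow every storage account in the given subscription.
        service_resources=["/subscriptions/<batch-storage-subscription-id>"],
    ),
)
print(poller.result().provisioning_state)
```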
I tried adding network security group rules using service tags for Azure Storage, but this did not help. The node still cannot connect, and this makes sense given the description of service endpoint policies.
The reason for my interest in this is the following post:
https://github.com/Azure/Batch/issues/66
I am trying to minimise the bandwidth charges from my storage account by using service endpoints.
I have also tried creating my own VM, but I am not sure whether the "batchNodeExtension" script runs automatically for VMs that you bring to Batch.
I would really appreciate any pointers because I am running out of ideas to try!
Batch requires a generic rule for all of Storage (a regional variant is allowed), as specified at https://learn.microsoft.com/en-us/azure/batch/batch-virtual-network#network-security-groups-specifying-subnet-level-rules. Currently it is mainly used to download our agent and to maintain state/get information needed to run tasks.
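As a sketch of the kind of subnet-level NSG rule this implies, written with azure-mgmt-network (resource names are placeholders; the regional tag such as "Storage.WestEurope" can be used instead of the global "Storage" tag):

```python
# Hedged sketch: an outbound NSG rule allowing the subnet to reach Azure
# Storage via its service tag, as the Batch networking docs require.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import SecurityRule

network_client = NetworkManagementClient(
    DefaultAzureCredential(), "<subscription-id>"
)

rule = SecurityRule(
    protocol="Tcp",
    source_address_prefix="VirtualNetwork",
    source_port_range="*",
    destination_address_prefix="Storage",  # or regional, e.g. "Storage.WestEurope"
    destination_port_range="443",
    access="Allow",
    direction="Outbound",
    priority=200,
)

poller = network_client.security_rules.begin_create_or_update(
    "<resource-group>", "<nsg-name>", "allow-storage-outbound", rule
)
print(poller.result().provisioning_state)
```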
I am facing the same problem with Azure Machine Learning. We are trying to prevent data exfiltration by using SP policies to stop data being sent to any storage accounts outside our subscription.
Since Azure ML compute depends on the Batch service, we were unable to run any ML compute while the SP policy was associated with the compute subnet.
Microsoft stated the following:
Filtering traffic on Azure services deployed into Virtual Networks: At this time, Azure Service Endpoint Policies are not supported for any managed Azure services that are deployed into your virtual network.
https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-service-endpoint-policies-overview#scenarios
My understanding of this restriction is that any service that uses Azure Batch (which is almost all services in Azure?) cannot use SP policies, which makes them a fairly limited feature...
In the end we removed the SP policy completely from our network architecture, and now consider it only for scenarios where you want to restrict access to specific storage accounts.

Azure DevOps pipeline cannot copy to Azure storage

I've got a pipeline that builds web artefacts and attempts to copy them to my Azure Storage account using the Azure File Copy task provided in Azure Pipelines. I've spent the last two days trying to fix a 403 response stating there is a permissions error.
I have a service connection for this pipeline:
• The service connection's application registration has user_impersonation for Azure Storage under API Permissions.
• The service connection's application registration has the 'Storage Blob Data Contributor' and 'Storage Blob Data Owner' roles on the target Storage Account, the Resource Group, and the Subscription.
Since the storage account uses a firewall and has IP range whitelisting enabled (according to your comment), you should add the agent's IP address to the whitelist.
If you're running your own build agent, it's pretty straightforward.
If you use a Microsoft-hosted agent to run your jobs and you need to know which IP addresses are used, see the Microsoft-hosted agents' Agent IP ranges.
In some setups, you may need to know the range of IP addresses where agents are deployed. For instance, if you need to grant the hosted agents access through a firewall, you may wish to restrict that access by IP address. Because Azure DevOps uses the Azure global network, IP ranges vary over time. We publish a weekly JSON file listing IP ranges for Azure datacenters, broken out by region. This file is published every Wednesday with new planned IP ranges. The new IP ranges become effective the following Monday. We recommend that you check back frequently to ensure you keep an up-to-date list.
Since there is no API in the Azure Management Libraries for .NET to list the regions for a geography, you must list them manually.
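As a sketch, once you have downloaded the weekly "Azure IP Ranges and Service Tags" JSON file, extracting the address prefixes for every region in your geography is straightforward. The file name and the region list below are example assumptions, not values from the original answer:

```python
# Hedged sketch: pull the address prefixes for a set of Azure regions out of
# the weekly "Azure IP Ranges and Service Tags" JSON file. Assumes the file
# has already been downloaded; the region names are examples (US geography).
import json

GEOGRAPHY_REGIONS = {"eastus", "eastus2", "westus", "westus2", "centralus"}

with open("ServiceTags_Public.json") as f:
    service_tags = json.load(f)

prefixes = set()
for value in service_tags["values"]:
    # Regional tags are named like "AzureCloud.eastus".
    name = value["name"].lower()
    if name.startswith("azurecloud.") and name.split(".", 1)[1] in GEOGRAPHY_REGIONS:
        prefixes.update(value["properties"]["addressPrefixes"])

print(f"{len(prefixes)} prefixes to allow")
```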
EDIT:
There's a closed (!) but still active GitHub issue here: AzureDevops don't considerate as 'Microsoft Services'
EDIT 2:
Your hosted agents run in the same Azure geography as your organization. Each geography contains one or more regions. While your agent may run in the same region as your organization, it is not guaranteed to do so. To obtain the complete list of possible IP ranges for your agent, you must use the IP ranges from all of the regions that are contained in your geography. For example, if your organization is located in the United States geography, you must use the IP ranges for all of the regions in that geography.
To determine your geography, navigate to https://dev.azure.com/<your_organization>/_settings/organizationOverview, get your region, and find the associated geography from the Azure geography table. Once you have identified your geography, use the IP ranges from the weekly file for all regions in that geography.

How can I allow an Azure Logic app access to a secured blob storage account

I have an Azure Storage account where I have blobs stored in containers.
I would like to limit the access to this storage account to specific Azure resources and prevent internet connections.
I currently have access limited to IPs from our office locations. This allows us to support the process and use Azure Storage Explorer.
I would like to additionally allow access from an Azure Logic App that would work with data stored there.
I tried adding the outgoing IP addresses from the Logic App, but that did not allow access.
In the Logic App designer, I then get an error (the original post included a screenshot).
Are the IPs you allowed among those in the list of Logic Apps IPs? If not, I think you will need to whitelist the ones from that list.
This is the list of Logic App IPs per region and connector:
Logic App IPs
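If you would rather script the whitelisting than click through the portal, a sketch with azure-mgmt-storage might look like the following. The IP addresses are placeholders for the Logic Apps outbound addresses for your region (and note the later answer in this thread reporting that this approach may still not work, because the requests can arrive from private IPs):

```python
# Hedged sketch: appending a Logic App's outbound IPs to a storage account's
# firewall allow list with azure-mgmt-storage. Addresses are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient
from azure.mgmt.storage.models import (
    IPRule,
    NetworkRuleSet,
    StorageAccountUpdateParameters,
)

storage_client = StorageManagementClient(
    DefaultAzureCredential(), "<subscription-id>"
)

account = storage_client.storage_accounts.get_properties(
    "<resource-group>", "<account>"
)
rules = account.network_rule_set or NetworkRuleSet(default_action="Deny")
rules.default_action = "Deny"
rules.ip_rules = rules.ip_rules or []
for ip in ["13.65.82.17", "13.65.82.190"]:  # placeholder outbound IPs
    rules.ip_rules.append(IPRule(ip_address_or_range=ip, action="Allow"))

storage_client.storage_accounts.update(
    "<resource-group>",
    "<account>",
    StorageAccountUpdateParameters(network_rule_set=rules),
)
```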
I am having the same issue. Apparently this configuration is not supported. Quoted from an Azure ticket yesterday:
"Yea we have had couple (sic) customers reporting this issue. Unfortunately this feature is not supported as of now. The azure networking team was working on adding this support for logic apps. As of last month there was no ETA given."
Also, in my storage account logs the failed logic app requests are coming from 10.157.x.x, which I cannot whitelist in the storage account firewall. I even tried "fooling" the firewall by creating a vnet containing that subnet and allowing that. No dice.
Have you used the blob storage connector in your Logic App? Once you add the credential connection details, you'd be able to connect from the Logic App.
The full documentation can be found here

Azure Service Bus hostname conflicts resolution

An Azure Service Bus instance has a name parameter (for example, sample-name).
This name is used to expose the endpoint URL to the host:
sample-name.servicebus.windows.net
What happens if another client chooses the same sample-name for another instance of Azure Service Bus?
How is this resolved by Azure?
This name is universal in Azure. If you've created a Namespace with some name, then no other user can create a Namespace with that same name.
Essentially the URL for your Azure Service Bus namespace has to be unique in the Azure ecosystem, i.e. no two users can have the same <somename>.servicebus.windows.net URL.
What this means is that if you have a general Azure subscription and an Azure subscription in Germany (or China/US Gov), you could create a namespace with the same name in each (one in the general cloud and the other in Germany/China/US Gov), because the endpoint domain (e.g. servicebus.windows.net) is different in each of those clouds.
Azure checks the availability of the namespace name when you create it. If it's taken, the portal will say so (the original answer included a screenshot of the validation error).
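Because the namespace maps directly to a public DNS name, a quick way to probe whether a name is already taken, outside the portal, is simply to try resolving it. A small sketch; note this is only a heuristic, and the portal/ARM name-availability check is authoritative:

```python
# Hedged sketch: since every Service Bus namespace owns the DNS name
# <name>.servicebus.windows.net, a name that already resolves is taken.
import socket

def namespace_taken(name: str) -> bool:
    try:
        socket.gethostbyname(f"{name}.servicebus.windows.net")
        return True  # resolves, so some tenant already owns it
    except socket.gaierror:
        return False  # NXDOMAIN: the name appears to be free

print(namespace_taken("sample-name"))
```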

Azure: limit the access of ARM PaaS services to certain storage accounts

I have a security question related to Azure that I could really do with some guidance on the art of what is possible.
I would like to know if it is possible to restrict which services can be called from PaaS services such as Service Fabric or web apps (ASE), i.e. which storage account endpoints can be used to write data. For example, if I have a web app that writes to storage and someone maliciously altered the code to write to a third-party storage account on Azure, is this something I could mitigate in advance by declaring that this application (this web app or this SF cluster) can only talk to a particular set of storage accounts or a particular database? So that even if the code were changed to talk to another storage account, it wouldn't be able to. In other words, can I explicitly define, as part of an environment, which storage items an application can talk to? Is this something that is possible?
Azure Storage accounts have access keys and shared access signatures (SAS) that are used to authenticate REST calls that read / write data. Your app will be able to perform read / write operations against any Azure Storage account for which it holds an access key or connection string.
It's not possible to set any kind of firewall rule on an Azure App Service app to prevent it from communicating with certain internet or Azure endpoints. You can set NSG firewall rules with App Service Environment, but even then you can only open or close access, not restrict it to certain DNS names or IP addresses.
Perhaps you should look for a mitigation of this threat in the way applications are deployed, connection strings are managed, and code is deployed:
• Use Azure role-based access control to limit access to the resources in Azure, so unauthorized persons cannot modify deployments.
• Use a secure way of managing your source code. Remember it is not on the PaaS service, because that only holds the binaries.
• Use SAS tokens for application access to storage accounts, not the full access keys. For example, a SAS token could be given write access but not read or list access to a storage account (see the sketch below).
• If, as a developer, you don't trust the person managing the application deployment, you could even consider signing your application parameters/connection strings. That only protects against tampering, though, not against extraction of the connection string.
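As a minimal sketch of the write-only SAS idea from the list above, using azure-storage-blob (account and container names are placeholders):

```python
# Hedged sketch: issuing a short-lived, write-only container SAS with
# azure-storage-blob, so the application can upload but cannot read or
# list blobs even if the token leaks.
from datetime import datetime, timedelta, timezone

from azure.storage.blob import ContainerSasPermissions, generate_container_sas

sas_token = generate_container_sas(
    account_name="<storage-account>",
    container_name="<container>",
    account_key="<account-key>",  # kept server-side; only the SAS is handed out
    permission=ContainerSasPermissions(write=True),  # no read, no list
    expiry=datetime.now(timezone.utc) + timedelta(hours=1),
)

# The app uses the SAS instead of the account key:
upload_url = (
    "https://<storage-account>.blob.core.windows.net/<container>/report.csv"
    f"?{sas_token}"
)
print(upload_url)
```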