I have a question about creating a storage account and providing one container as an endpoint for continuous logs from multiple telematics devices.
The SAS key can be provided in the telematics portal. Now, while creating the storage account, I landed on the networking page, where I had three options:
1. Enabled from all networks
2. Enabled from selected virtual networks and IP addresses
3. Disabled (public access disabled; private endpoints only)
So, if I choose option 2 for a more secure connection, will I still be able to establish a connection and receive logs via the SAS key?
I am new to Azure and have not been able to find an answer to this.
Thanks for helping
I tried selecting access through a private IP, or designating a VM and subnet, but it is not allowing me to do anything. It says restricted access.
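For context, a device-side upload with a container SAS looks roughly like the sketch below (using Azure.Storage.Blobs; the account name, container name, blob path, and environment variable are placeholders, not values from the post). Keep in mind that the network rules are enforced independently of the SAS, so with option 2 a request coming from an IP or network that isn't whitelisted is still denied even when the SAS itself is valid.

```csharp
// Hypothetical sketch of a device uploading a log blob with a container SAS.
// The account name, container name, and environment variable are placeholders.
using System;
using System.IO;
using Azure.Storage.Blobs;

// SAS token copied from the telematics portal / storage account, e.g. "sv=...&sig=..."
var sasToken = Environment.GetEnvironmentVariable("CONTAINER_SAS")
    ?? throw new InvalidOperationException("CONTAINER_SAS is not set.");

var containerClient = new BlobContainerClient(
    new Uri($"https://mystorageaccount.blob.core.windows.net/telematics-logs?{sasToken}"));

var blobClient = containerClient.GetBlobClient(
    $"device01/{DateTime.UtcNow:yyyyMMddHHmmss}.log"); // one blob per upload

using var logStream = File.OpenRead("latest.log");     // placeholder local log file
await blobClient.UploadAsync(logStream, overwrite: true);
```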
Intro:
I have an asp.net core app service hosted in Azure.
This app service has an API controller that reads/writes to an Azure Table Storage.
The code for this uses the Azure.Data.Tables library with an access key that I set up from the Azure portal (for the table storage).
Now, under the storage account's Networking blade, I have selected "Enabled from all networks".
Question:
Does this mean the storage account is open to the entire internet? I am confused about whether this is secure, because my code accesses it via the access key (which I mentioned above).
Thank you.
Regarding the settings above, "Enabled from all networks" means the storage account endpoints (blob, table, queue, etc.) will accept traffic from the internet, but you still need the access key to view any data.
"Enabled from selected virtual networks and IP addresses" means that traffic will only be allowed from resources in the virtual networks or from the specific public IP addresses that you've configured, e.g. your local device's public IP provided by your ISP. This is a more secure method because you are essentially whitelisting what can connect.
"Disabled" means nothing can reach the public endpoints of the storage account and you will connect via a private endpoint instead.
If your access key is inside your code, then this isn't the most secure approach. You would want to keep the connection string with the access key either in the App Service's Application Settings, so it can be retrieved as an environment variable, or in Azure Key Vault. Using Key Vault lets you dictate which service or user can retrieve that value.
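As a rough illustration of both options (the setting name, vault URI, secret name, and table name below are placeholders):

```csharp
// Option A: read the connection string from an App Service application
// setting, which the platform exposes to the app as an environment variable.
using System;
using Azure.Data.Tables;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

var connectionString = Environment.GetEnvironmentVariable("StorageConnectionString")
    ?? throw new InvalidOperationException("StorageConnectionString setting is missing.");
var tableClient = new TableClient(connectionString, "MyTable");

// Option B: pull the secret from Azure Key Vault instead, using the app's
// identity (requires the Azure.Identity and Azure.Security.KeyVault.Secrets packages).
var secretClient = new SecretClient(
    new Uri("https://my-vault.vault.azure.net/"),
    new DefaultAzureCredential());
KeyVaultSecret secret = await secretClient.GetSecretAsync("StorageConnectionString");
var tableClientFromVault = new TableClient(secret.Value, "MyTable");
```

In an ASP.NET Core app you would more likely wire the Key Vault value in through configuration (for example a Key Vault configuration provider) rather than calling the SDK directly, but the idea is the same.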
I would suggest assigning a system-assigned managed identity to the App Service, then using the storage account's resource instance rules to configure the firewall: https://learn.microsoft.com/en-us/azure/storage/common/storage-network-security?tabs=azure-portal#grant-access-from-azure-resource-instances
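A minimal sketch of what the app code could look like once the system-assigned identity has been granted a table data role (for example Storage Table Data Contributor) on the account; the account and table names are placeholders:

```csharp
// Sketch: authenticate to Table Storage with the App Service's managed
// identity via DefaultAzureCredential (Azure.Identity + Azure.Data.Tables).
// No access key or connection string is stored anywhere in the app.
using System;
using Azure.Data.Tables;
using Azure.Identity;

var tableClient = new TableClient(
    new Uri("https://mystorageaccount.table.core.windows.net"), // placeholder account
    "MyTable",                                                  // placeholder table
    new DefaultAzureCredential());

await tableClient.CreateIfNotExistsAsync();
```

With this approach there is no key or connection string stored in the app at all, and combined with a resource instance rule the firewall stays closed to everything else.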
I have recently come across the Private Endpoint feature in Azure Storage and am trying to implement it for secure access from a VNet. However, I am running into access issues while using the firewall, a virtual network service endpoint, and a private endpoint all together.
I have two VNets (VNet1 and VNet2) in my subscription and an on-premises machine with a public IP that connects to Azure Storage. The following is my setup:
A subnet in VNet1 with the service endpoint feature enabled is whitelisted in the storage account firewall.
Next, I have created a private endpoint to this storage account (for the blob service) from VNet2; the private endpoint itself is hosted inside that VNet.
Finally, I have whitelisted the public IP of my on-premises machine under the Firewall section so it can connect to the storage account.
Given the above setup, when I try to access the storage account's blob containers from a VM placed in VNet2, I am getting authorization issues.
May I please check whether this setup is valid? Do the private endpoint and service endpoint features work in parallel?
Yes, private endpoints can be created in subnets that use service endpoints. Clients in a subnet can thus connect to one storage account using a private endpoint while using service endpoints to access others.
There are multiple ways to connect to a storage account:
Using a private endpoint (Private Link) to connect to the storage account: please find the referenced document here.
Using a service endpoint and a private endpoint together: please find the referenced document here.
You can find more details in this public document.
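One point worth adding: the client code itself does not change when a private endpoint is involved. Assuming the default private DNS zone integration (privatelink.blob.core.windows.net), the account's normal blob FQDN resolves to the private endpoint's IP from inside the VNet. A rough sketch you could run from the VM in VNet2 to test connectivity (the account name is a placeholder; an account key or SAS would work just as well as DefaultAzureCredential):

```csharp
// Sketch: from a VM inside the VNet, the usual public FQDN should resolve
// to the private endpoint's IP via the privatelink.blob.core.windows.net
// DNS zone. The account name below is a placeholder.
using System;
using Azure.Identity;
using Azure.Storage.Blobs;

var blobService = new BlobServiceClient(
    new Uri("https://mystorageaccount.blob.core.windows.net"),
    new DefaultAzureCredential());

await foreach (var container in blobService.GetBlobContainersAsync())
{
    Console.WriteLine(container.Name); // lists containers to confirm connectivity
}
```

Also note that requests blocked by the storage firewall surface as 403 authorization errors, so an "authorization issue" from the VNet2 VM can simply mean the traffic is still reaching the public endpoint (for example because of DNS resolution) and being filtered there, rather than a problem with the credentials themselves.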
I am unable to access the Blob service from an Azure virtual machine running in the same region. I have created a storage account and planned to allow access only from selected IP addresses, i.e. my laptop, my office PC, and my virtual machine running in Azure. After whitelisting all three IPs, I am able to access the Blob service from my laptop and my office PC, but I am unable to access it from the virtual machine running in Azure.
Please let me know if anyone has faced a similar issue, and what the resolution was. Thanks in advance.
Check the NSG the VM belongs to and see whether you are allowing the VM to communicate outbound. If so, check whether Azure Storage is allowing incoming connections from the network the VM is connected to.
Your VM uses the internal Azure network to reach the storage account, so adding its public IP won't work, and you can't whitelist internal IPs.
The easiest way would be to add the VM's virtual network subnet to the firewall rules and add Microsoft.Storage as a service endpoint on the subnet. If you add the subnet using the Azure portal, the service endpoint will be added automatically as well. Another option would be to set up a private endpoint.
I am currently trying to use Azure Pipelines to build a Docker image and push it to the Azure Container Registry. I have a service connection set up, but the build is failing with "denied". I suspect the reason for this is that my container registry is set up to only allow access from "selected networks" and is restricted to a few IPs. I validated this by temporarily allowing all networks, after which the build/push succeeded.
Is there any way to get Azure Pipelines to successfully push a Docker image to the Container Registry that is only allowing selected networks? I thought that was what the Service Connection was for?
I'm afraid you're right. The likely reason is that you set it to selected networks and did not add the IP addresses of the Azure DevOps hosted agents to allow the traffic. As far as I know, those IP addresses change over time; here is the description:
In some setups, you may need to know the range of IP addresses where agents are deployed. For instance, if you need to grant the hosted agents access through a firewall, you may wish to restrict that access by IP address. Because Azure DevOps uses the Azure global network, IP ranges vary over time.
So you would need to allow an IP range, not a single IP address, and that's not a secure approach. In my experience, the most secure way is to control access permissions for people and identities, not networks. You can create multiple service principals and grant them different roles to control permissions. For example, the AcrPull role only has permission to pull images. More details about the roles are here. You can even control permissions at the repository level; there is more information about that here.
By the way, I think the firewall with selected networks is more suitable for resources inside Azure; for those, you can use a service endpoint to achieve it.
Please make sure that your service connection has AcrPush permission.
You can check it, or add it if needed, here:
(You will find your connection under the name 'your-organization-your-project'.)
On Azure, I have a two-VM setup (both classic), where my web application resides on one VM and my database on another. Both map to the same DNS and belong to the same resource group, but both are acting as standalone cloud services at the moment. Let me explain: currently the web application communicates with the database over the public DNS. However, I need them to be on the same LAN, so I can reduce security risk and improve latency.
I know for a fact that they're not part of a virtual network because when I try to affix a static private IP to my database VM, I'm shown the following prompt in the portal:
This virtual machine can't be configured with a static private IP address because it's not deployed in a virtual network.
How should I proceed to fix this misconfiguration, and what should my next concrete step be? The website is live, and I don't want to risk a service interruption. Ideally, both VMs should be in the same virtual network and should communicate with each other via static internal IPs. Please forgive my ignorance. Any help would be greatly appreciated.
I guess I'll be the bearer of bad news. You have to delete both VMs while keeping the VHDs in the storage account, then recreate the VMs (reattaching the disks) in the virtual network.
Since these are Classic VMs you can use the old Portal when re-creating them. You'll find the VHDs under "My Disks" in the VM creation workflow.
Alternatively, just restrict the inbound access with an ACL on the database Endpoint. Allow the VIP of the first VM and deny everything else. This is good enough for almost any scenario, since if your Web Server gets compromised it's game over. It makes no difference how they exfiltrate stuff off your database (over a VNET or over VIP).
Here's the relevant documentation page for setting up Endpoint ACLs:
https://azure.microsoft.com/en-us/documentation/articles/virtual-machines-windows-classic-setup-endpoints/