We currently have an Azure DevOps pipeline that deploys and maintains our infrastructure. We provision and maintain everything using Terraform.
All of the resources are integrated into a virtual network, including the storage accounts. The storage accounts have the subnets whitelisted, and all of the resources can access the storage without issues.
We have been having trouble deploying new resources or even just running terraform plan due to 403 errors that occur while trying to refresh the state of the containers.
At first we used an az cli task that whitelisted the IP of the Microsoft-hosted DevOps agent; however, lately that no longer works and the 403 errors keep coming.
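The task did roughly the following (a sketch; the storage account and resource group names are placeholders, and api.ipify.org is just one way to look up the agent's outbound IP):

    # Look up the agent's current public IP and whitelist it on the
    # storage account firewall before running Terraform.
    AGENT_IP=$(curl -s https://api.ipify.org)

    az storage account network-rule add \
      --resource-group my-rg \
      --account-name mystorageaccount \
      --ip-address "$AGENT_IP"

    # ... terraform init / plan / apply runs here ...

    # Remove the rule again at the end of the run.
    az storage account network-rule remove \
      --resource-group my-rg \
      --account-name mystorageaccount \
      --ip-address "$AGENT_IP"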
We deployed an Ubuntu VM and configured it with the required packages: pwsh, az cli, terraform, etc. The NIC associated with the VM has been placed in the same subnet as our other VMs, and this subnet is whitelisted on the storage accounts.
However, we still get the same 403 errors as before, even though the VM is now in the same vnet and the subnet is whitelisted.
I am trying to think of other possible solutions; perhaps one of you knows how we can solve this. Thank you.
We tried to integrate a self-hosted Azure DevOps agent into our virtual network to be able to run our infrastructure-as-code pipelines to provision and maintain resources. However, we keep getting 403 (forbidden) errors.
Related
I have a self-hosted agent VM in a VNET in my Azure subscription that is supposed to do Bicep deployments to my Azure subscription. It is working well.
I am noticing that Microsoft-hosted agents also can deploy resources or do updates in my Azure subscription once they have a valid service connection. The same pipeline can run on both Self-hosted VM agents or Microsoft-hosted agents. This is a concern for our security department. The preference is that no external entity (outside a designated VNET in the subscription) should be able to access the subscription. We want to establish network isolation between subscription and external access, whether a valid service connection is available or not.
If you have a private agent you can limit access to your resources by filtering IP addresses. I don't know your infrastructure so I cannot say precisely, but for App Services or Function Apps you could use SCM restrictions to limit deployments to just your private agent.
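For example, for an App Service you could allow only the agent's outbound IP on the SCM (deployment) site (a sketch; the names and the IP are placeholders):

    # Allow only the private agent's outbound IP to reach the SCM site.
    az webapp config access-restriction add \
      --resource-group my-rg --name my-app \
      --rule-name AllowPrivateAgent --action Allow \
      --ip-address 203.0.113.10/32 --priority 100 \
      --scm-site true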
You won't be able to establish that at the subscription level, but you could try something different. If you host your agent on Azure (virtual machine or scale set), you could use a Managed Identity; then you could use this instead of a service connection (or try a service connection with Managed Identity), and then using a service connection outside of your agent becomes pointless.
Please check this tutorial for more details:
If you use the Managed Identity enabled on a (Windows) Virtual Machine in Azure you can only request an Azure AD bearer token from that Virtual Machine, unlike a Service Principal.
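As a rough sketch of what this looks like on the agent VM, assuming its system-assigned managed identity has already been granted the required RBAC roles (the placeholders need your own values):

    # Sign in with the VM's managed identity instead of a service principal.
    az login --identity

    # Terraform's azurerm provider and backend can use the same identity
    # via environment variables.
    export ARM_USE_MSI=true
    export ARM_SUBSCRIPTION_ID=<subscription-id>
    export ARM_TENANT_ID=<tenant-id>
    terraform init
    terraform plan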
We have code repositories on Azure DevOps, for example: https://dev.azure.com/myorg/myproject
We also have an Azure VM, running Windows 10. When we create a new VM on Azure, internet access is enabled by default.
The VM will be shared with development team members. To secure the code, developers should NOT be able to use personal mailboxes or other drives like Dropbox, OneDrive, etc. So what I feel we need is internet access disabled, with access only to the Azure DevOps repo. Is this possible? How can we achieve it?
You can use a Network Security Group resource in Azure. Set rules to allow traffic only to specified services (in your example, Azure DevOps) and deny the rest of the connections.
https://learn.microsoft.com/pl-pl/azure/virtual-network/security-overview
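A rough sketch of such rules with the Azure CLI, assuming the NSG already exists and is attached to the VM's subnet or NIC; verify that the AzureDevOps service tag covers the endpoints you need before relying on it:

    # Allow outbound HTTPS to Azure DevOps only.
    az network nsg rule create \
      --resource-group my-rg --nsg-name dev-vm-nsg \
      --name AllowAzureDevOpsOut --priority 100 \
      --direction Outbound --access Allow --protocol Tcp \
      --destination-address-prefixes AzureDevOps \
      --destination-port-ranges 443

    # Deny all other outbound internet traffic.
    az network nsg rule create \
      --resource-group my-rg --nsg-name dev-vm-nsg \
      --name DenyInternetOut --priority 200 \
      --direction Outbound --access Deny --protocol '*' \
      --destination-address-prefixes Internet \
      --destination-port-ranges '*'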
I am deploying to an existing storage account on a subnet with service endpoints for Microsoft.EventHub, Microsoft.KeyVault, Microsoft.Storage and Microsoft.Web.
The storage account firewall is set to allow access from selected virtual networks only.
It looks like you want to restrict access to your storage account so that your function app reaches it through a virtual network. If so, you need to enable the storage account service endpoint on a subnet and integrate your function app with that subnet. Your function app should be hosted on an App Service plan that supports virtual network integration. For more details, see Integrate your app with an Azure Virtual Network.
Moreover, you could refer to this ARM template to finish most of the work. In this case, you will deploy regional VNet integration and a storage account in the same region as the app service.
If you just enable the storage account service endpoint on this subnet but do not want to integrate your function app with that subnet, you need to allow the possible outbound IPs of your function app in the firewall of the storage account. Also, the function app and storage account should be in different regions in this scenario.
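A rough sketch of the first scenario with the Azure CLI (names are placeholders):

    # Integrate the function app with the subnet that has the
    # Microsoft.Storage service endpoint.
    az functionapp vnet-integration add \
      --resource-group my-rg --name my-function-app \
      --vnet my-vnet --subnet my-subnet

    # Allow that subnet on the storage account firewall.
    az storage account network-rule add \
      --resource-group my-rg --account-name mystorageaccount \
      --vnet-name my-vnet --subnet my-subnet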
Feel free to let me know if you have any questions.
I set 'WEBSITE_CONTENTOVERVNET' to 1 in my app settings and that worked for me when deploying a logic app.
https://learn.microsoft.com/en-us/azure/azure-functions/functions-app-settings#website_contentovervnet
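The same setting can be applied with the Azure CLI; this is a sketch with placeholder names, using the generic webapp command since a Logic App (Standard) is a Microsoft.Web site:

    # Route the app's content share traffic over the VNet.
    az webapp config appsettings set \
      --resource-group my-rg --name my-logic-app \
      --settings WEBSITE_CONTENTOVERVNET=1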
After fixing the 403 error, I got a 503 Service Unavailable when deploying the zip file to the logic app.
The reason the zip deployment failed is that the file share in the storage account was not created when the logic app was deployed.
As a temporary fix, just create the file share before deploying the logic app.
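For example (a sketch; the share name is a placeholder and should match the logic app's WEBSITE_CONTENTSHARE app setting). Using the share-rm command goes through the management plane, so the storage firewall should not get in the way:

    # Create the content file share up front.
    az storage share-rm create \
      --storage-account mystorageaccount \
      --name my-logic-app-content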
An MS support ticket has been created and hopefully they will fix it soon!
In Azure, I turned on IP restrictions for:
Web App (Networking > Access Restrictions)
SQL server (Firewalls and virtual networks > Add client IP)
SQL database (Set server settings)
The solution still builds locally and in Azure DevOps (formerly Team Foundation Server).
However, Azure App Service Deploy now fails:
##[error]Failed to deploy App Service.
##[error]Error Code: ERROR_COULD_NOT_CONNECT_TO_REMOTESVC
More Information: Could not connect to the remote computer
("MYSITENAME.scm.azurewebsites.net") using the specified process ("Web Management Service") because the server did not respond. Make sure that the process ("Web Management Service") is started on the remote computer.
Error: The remote server returned an error: (403) Forbidden.
Error count: 1.
How can I deploy through the firewall?
Do I need a Virtual Network to hide Azure resources behind my whitelisted IP?
The SCM site MYSITENAME.scm.azurewebsites.net must have Allow All, i.e. no restriction. Also, "Same restrictions as ***.azurewebsites.net" should be unchecked.
It does not need additional restrictions because URL access already requires Microsoft credentials. If restrictions are added, the deployment is blocked by the firewall, hence the many complications I encountered.
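If you prefer the CLI, this is roughly the equivalent of unchecking that box (a sketch with placeholder names):

    # Ensure the SCM site does not inherit the main site's access restrictions.
    az webapp config access-restriction set \
      --resource-group my-rg --name MYSITENAME \
      --use-same-restrictions-for-scm-site false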
I think this answer is incorrect, as you might face data exfiltration, and that is the reason Microsoft provides the feature to lock down the SCM portal (Kudu console).
There is also a security issue with the Kudu portal, as it can display the secrets of your Key Vault (if you use Key Vault), and you don't want just anyone in your organisation to access the Kudu portal, for example.
You have to follow this link
https://learn.microsoft.com/en-us/azure/devops/organizations/security/allow-list-ip-url?view=azure-devops
It will provide you with the Azure DevOps IP ranges that you need to allow in the SCM access restrictions.
Update: To make it work as expected with App Service access restrictions (the same applies to an Azure Function), you need to use the "AzureCloud" service tag and not the Azure DevOps IP ranges, as those are not enough. In the Azure Pipelines logs you can see the blocked IP, and you can verify that it falls within the "AzureCloud" service tag in the Service Tags JSON file.
It's not really clear in the MS docs, but the reason is that they struggled to define a proper IP range for Azure DevOps Pipelines, so they use IPs from the AzureCloud service tag.
https://www.microsoft.com/en-us/download/details.aspx?id=56519
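Roughly what that looks like with the Azure CLI (a sketch; the names and priority are placeholders):

    # Allow the AzureCloud service tag on the SCM site so Microsoft-hosted
    # pipeline agents can reach MYSITENAME.scm.azurewebsites.net.
    az webapp config access-restriction add \
      --resource-group my-rg --name MYSITENAME \
      --rule-name AllowAzureCloud --action Allow \
      --service-tag AzureCloud --priority 100 \
      --scm-site true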
In my case I was deploying using Azure DevOps and got the error. It turned out the App Service my API was being deployed to had the box "Same restrictions as xxxx.azurewebsites.net" checked under access restrictions / IP restrictions. You need to allow access to scm.azurewebsites.net.
Try adding the application setting WEBSITE_WEBDEPLOY_USE_SCM with a value of false to your Azure App Service. This was able to solve my issues deploying to a private endpoint.
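A sketch of the same change with the Azure CLI (names are placeholders):

    # Add the app setting suggested above.
    az webapp config appsettings set \
      --resource-group my-rg --name MYSITENAME \
      --settings WEBSITE_WEBDEPLOY_USE_SCM=false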
In my case it was because the daily quota had been exceeded.
So the solution in this case is to either wait or pay more (scale up the App Service plan).
In my case this was because the wrong agent (a Microsoft-hosted Windows agent) was being used when I should have been using a self-hosted internal agent, so I needed to change it at the following location:
I'm using Azure DevOps to build and package my Docker application.
The goal is to execute docker-compose commands on an Azure VM, which is behind a firewall and can be accessed only through a VPN connection or standard web-browsing ports.
You can use deployment groups to achieve that. The reason this works is that the communication is one-way (from the agent to Azure DevOps), so you don't really need to open inbound ports for the VM; the VM only has to be able to reach the Azure DevOps endpoints (more on this).
TL;DR: Using agents will work because it's an outgoing connection from the agent, not from Azure DevOps to the agent.
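Registering the VM as a deployment group target looks roughly like this on Linux (a sketch; the exact command, including the PAT, is generated for you under Pipelines > Deployment groups in Azure DevOps, and the names here are placeholders):

    # Register this VM as a deployment group target. The agent only makes
    # outbound HTTPS (443) calls to dev.azure.com.
    ./config.sh --deploymentgroup \
      --deploymentgroupname "my-deployment-group" \
      --url https://dev.azure.com/myorg \
      --projectname "myproject" \
      --auth pat --token "$AZP_PAT"
    ./run.sh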