I've got a pipeline that builds web artefacts and attempts to copy them to my Azure Storage account using the Azure File Copy task provided in Azure Pipelines. For the last two days I've been trying to fix a 403 response indicating a permissions error.
I have a service connection for this pipeline.
The service connection application registration has user_impersonation for Azure Storage in API Permissions
The service connection application registration has the 'Storage Blob Data Contributor' and 'Storage Blob Data Owner' roles assigned on the target Storage Account, the Resource Group, and the Subscription.
Since the storage account uses a firewall and has IP-range whitelisting enabled (per your comment), you should add the build agent's IP address to the whitelist.
If you're running your own build agent, it's pretty straightforward.
If you use a Microsoft-hosted agent to run your jobs and need to know which IP addresses are used, see Microsoft-hosted agents: Agent IP ranges.
In some setups, you may need to know the range of IP addresses where agents are deployed. For instance, if you need to grant the hosted agents access through a firewall, you may wish to restrict that access by IP address. Because Azure DevOps uses the Azure global network, IP ranges vary over time. We publish a weekly JSON file listing IP ranges for Azure datacenters, broken out by region. This file is published every Wednesday with new planned IP ranges. The new IP ranges become effective the following Monday. We recommend that you check back frequently to ensure you keep an up-to-date list.
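Once you have the weekly JSON in hand, checking whether a given agent IP falls inside the published ranges is straightforward with Python's standard `ipaddress` module. A minimal sketch, assuming a couple of hypothetical prefixes copied from the file (the real file lists many prefixes per region):

```python
import ipaddress

# Hypothetical address prefixes taken from the weekly JSON file;
# the real file lists prefixes per Azure region.
region_prefixes = ["13.86.0.0/17", "20.36.0.0/19", "2603:1030:a00::/47"]

def agent_in_ranges(agent_ip: str, prefixes: list) -> bool:
    """Return True if agent_ip falls inside any published prefix.

    A membership test against a network of the other IP version
    simply returns False, so mixed v4/v6 lists are safe.
    """
    ip = ipaddress.ip_address(agent_ip)
    return any(ip in ipaddress.ip_network(p) for p in prefixes)

print(agent_in_ranges("13.86.12.34", region_prefixes))   # True
print(agent_in_ranges("203.0.113.7", region_prefixes))   # False
```

This is only a client-side sanity check; the authoritative step is still adding the matching prefixes to the storage account firewall.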
Since there is no API in the Azure Management Libraries for .NET to list the regions for a geography, you must list them manually.
EDIT:
There's a closed (but still active!) GitHub issue here: AzureDevops don't considerate as 'Microsoft Services'
EDIT 2:
Your hosted agents run in the same Azure geography as your organization. Each geography contains one or more regions. While your agent may run in the same region as your organization, it is not guaranteed to do so. To obtain the complete list of possible IP ranges for your agent, you must use the IP ranges from all of the regions that are contained in your geography. For example, if your organization is located in the United States geography, you must use the IP ranges for all of the regions in that geography.
To determine your geography, navigate to https://dev.azure.com/<your_organization>/_settings/organizationOverview, get your region, and find the associated geography from the Azure geography table. Once you have identified your geography, use the IP ranges from the weekly file for all regions in that geography.
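Because the agent can land in any region of the geography, the allow list must be the union of every region's prefixes. A minimal sketch of that aggregation, using hypothetical region data and a hypothetical geography table (the real values come from the weekly file and the Azure geography documentation):

```python
# Hypothetical excerpt of the weekly IP-ranges data, keyed by region.
# Real prefixes come from the downloadable Azure IP ranges file.
ranges_by_region = {
    "eastus":    ["20.42.0.0/17"],
    "eastus2":   ["20.44.64.0/18"],
    "westus":    ["13.64.0.0/16"],
    "centralus": ["13.67.128.0/20"],
}

# Hypothetical geography table: your organization's region maps to
# a geography, and you must allow every region in that geography.
geography_regions = {
    "United States": ["eastus", "eastus2", "westus", "centralus"],
}

def prefixes_for_geography(geography: str) -> list:
    """Collect the IP prefixes of ALL regions in the geography,
    since the hosted agent may run in any of them."""
    prefixes = []
    for region in geography_regions[geography]:
        prefixes.extend(ranges_by_region[region])
    return prefixes

allow_list = prefixes_for_geography("United States")
print(len(allow_list))  # 4
```

The key point the code illustrates: whitelisting only your organization's own region is not enough.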
Related
I see that Azure Microsoft-hosted build agents are allocated in the same geography as the Azure DevOps organization. However, is there any way to request that Microsoft-hosted build agents be allocated in a different region?
Our issue is that our Azure DevOps organization is in region eastus2, while our offices are in the US, EU, and AU. For test setups we provision Azure resources on the fly, e.g. RabbitMQ containers. Different offices maintain their own subscriptions and different resource groups in regions closer to their offices.
Given that, we observe that if someone in AU sets up a pipeline to use a RabbitMQ container, the container is allocated in the same region as the resource group; with the Microsoft-hosted agents in the US, the tests time out.
But if we change the resource group or the resource to EU/US, the tests do not time out. Given that each office prefers to have its resources in the same region as the office, is there any suggestion to overcome this issue?
As it is written here:
Your hosted agents run in the same Azure geography as your organization. Each geography contains one or more regions. While your agent may run in the same region as your organization, it is not guaranteed to do so. To obtain the complete list of possible IP ranges for your agent, you must use the IP ranges from all of the regions that are contained in your geography. For example, if your organization is located in the United States geography, you must use the IP ranges for all of the regions in that geography.
So it is not necessarily true that your organization is in the same region as your agents; they are only guaranteed to be in the same geography.
But to answer your question: it is not possible to request an agent in another region. If you need that, you should consider self-hosted agents on your own infrastructure. You can create several agent pools and manage them to support your needs.
I need to know how to find the IP addresses of the Azure AutoResolve Integration Runtime.
Thanks
The IP addresses that the Azure Integration Runtime uses depend on the region where your Azure Integration Runtime is located. All Azure integration runtimes in the same region use the same IP address ranges. Ref: Azure Integration Runtime IP addresses
You can discover service tags by using the downloadable JSON file described below.
You can download a JSON file that contains the current list of service tags together with IP address range details. These lists are updated and published weekly. The locations for each cloud are:
• Azure Public
You can find the ResourceName and Region you need in that file.
For more information, refer to this article.
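If you'd rather extract the ranges programmatically, the downloaded file can be filtered with a few lines of Python. A minimal sketch, using a trimmed, hypothetical sample that mirrors the shape of the real Service Tags file (a top-level `values` array with `name` and `properties.addressPrefixes`):

```python
import json

# A trimmed, hypothetical sample of the downloadable Service Tags
# JSON; the real file follows the same shape with many more entries.
service_tags = json.loads("""
{
  "cloud": "Public",
  "values": [
    {"name": "DataFactory.EastUS",
     "properties": {"region": "eastus",
                    "addressPrefixes": ["20.42.2.0/23", "40.70.148.0/25"]}},
    {"name": "DataFactory.WestEurope",
     "properties": {"region": "westeurope",
                    "addressPrefixes": ["40.68.38.0/23"]}}
  ]
}
""")

def prefixes_for(tags: dict, name_prefix: str, region: str) -> list:
    """Return address prefixes for entries matching a service name and region."""
    result = []
    for entry in tags["values"]:
        if (entry["name"].startswith(name_prefix)
                and entry["properties"]["region"] == region):
            result.extend(entry["properties"]["addressPrefixes"])
    return result

print(prefixes_for(service_tags, "DataFactory", "eastus"))
# ['20.42.2.0/23', '40.70.148.0/25']
```

The prefixes shown are illustrative only; take the real ones from the current week's download.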
I have an Azure Storage account where I have blobs stored in containers.
I would like to limit access to this storage account to specific Azure resources and prevent connections from the public internet.
I currently have access limited to IPs from our office locations. This allows us to support the process and use Azure Storage Explorer.
I would like to additionally allow access from an Azure Logic App that would work with the data stored there.
I tried adding the Outgoing IP Addresses from the Logic App, but that did not allow access.
Then in the Logic App designer, I get the following error.
Is the IP you allowed in the list of Logic Apps IPs? If not, then I think you will need to whitelist the ones on the list.
This is the list of Logic App IPs per region & connector:
Logic App IPs
I am having the same issue. Apparently this configuration is not supported. Quoted from an Azure ticket yesterday:
"Yea we have had couple (sic) customers reporting this issue. Unfortunately this feature is not supported as of now. The azure networking team was working on adding this support for logic apps. As of last month there was no ETA given."
Also, in my storage account logs the failed logic app requests are coming from 10.157.x.x, which I cannot whitelist in the storage account firewall. I even tried "fooling" the firewall by creating a vnet containing that subnet and allowing that. No dice.
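The reason the whitelisting attempt can't work is visible from the address itself: 10.157.x.x is RFC 1918 private space, so it's an internal Azure backbone address rather than a public IP, and the storage firewall only matches public ranges. A quick check with Python's `ipaddress` module (the specific host below is a made-up example from that block):

```python
import ipaddress

# Example source address in the 10.157.x.x range seen in the
# storage logs (the exact host octets here are hypothetical).
source = ipaddress.ip_address("10.157.12.34")

# 10.0.0.0/8 is RFC 1918 private address space, which the storage
# account firewall does not accept as an allow-list entry.
print(source.is_private)                              # True
print(source in ipaddress.ip_network("10.0.0.0/8"))   # True
```

That matches the support ticket's answer: until the platform supports it, there's no firewall rule you can write for that traffic.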
Have you used the blob storage connector in your logic app? Once you add the credential connection details, you'd be able to connect from the logic app.
The full documentation can be found here
I just created a standard VM in Azure and created a new availability set.
I created another VM with the same specs, in the same region, but when I go to configure the availability set I don't see it in the list.
Am I missing something?
Luca
So... just posting this as an answer, to properly close the loop based on the comments under the question:
When setting up a Virtual Machine, you can choose which Cloud Service to place the Virtual Machine in. The Cloud Service is essentially a container which gets assigned a specific IP address, gets a cloudapp.net name (e.g. myservice.cloudapp.net), and gets assigned to a region (or affinity group, which is region-specific).
Availability Sets are specific to a given Cloud Service. You may place any of your Cloud Service's VMs in the same Availability Set (or even have multiple Availability Sets, with groups of VMs assigned to specific Availability Sets). However: An Availability Set does not span across Cloud Services.
So: When you went to set up your second Virtual Machine, and you didn't see your Availability Set, that is because you were attempting to deploy to a different Cloud Service.
Below screenshot shows the wizard page where we can select existing cloud service to which we can associate a new VM
I've created a Hosted Service that talks to a Storage Account in Azure. Both have their regions set to Anywhere US, but looking at the bills for the last couple of months I've found that I'm being charged for communication between the two, as one is in North-Central US and the other in South-Central US.
Am I correct in thinking there would be no charge if they were both hosted in the same sub-region?
If so, is it possible to move one of them and how do I go about doing it? I can't see anywhere in the Management Portal that allows me to do this.
Thanks in advance.
Adding to what astaykov said: my advice is to always select a specific region, even if you don't use affinity groups. You'll then be assured that your storage and services are in the same data center, and you won't incur outbound bandwidth charges.
There isn't a way to move a storage account; you'll need to either transfer your data (and incur bandwidth costs), or re-deploy your hosted service to the region currently hosting your data (no bandwidth costs). To minimize downtime if your site is live, you can push your new hosted service up (to a new .cloudapp.net name), then change your DNS information to point to the new hosted service.
EDIT 5/23/2012 - If you re-visit the portal and create a new storage account or hosted service, you'll notice that the Anywhere options are no longer available. This doesn't impact existing accounts (although they'll now be shown at their current subregion).
In order to avoid such charges the best guideline is to use Affinity Groups. You define affinity group once, and then choose it when creating new storage account or hosted service. You can still have the Affinity Group in "Anywhere US", but as long as both the storage account and the hosted service are in the same affinity group, they will be placed in one DataCenter.
As for moving an account from one region to another: I don't think it is possible. You might have to create a new account and migrate the data if required. You can use a 3rd-party tool such as Cerebrata's Cloud Storage Studio to first export your data and then import it into the new account.
Don't forget - use affinity groups! This is the way to make 100% sure there will be no traffic charges between Compute, Storage, and SQL Azure.