Why is 'dependsOn' not recommended in Bicep? - azure

Microsoft documents the purpose of implicit and explicit dependencies, where the explicit dependency uses 'dependsOn'. But it is mentioned that use cases for this approach are rare.
Could use some clarification on the following:
Example from MS:
    resource dnsZone 'Microsoft.Network/dnszones@2018-05-01' = {
      name: 'demoeZone1'
      location: 'global'
    }
    resource otherZone 'Microsoft.Network/dnszones@2018-05-01' = {
      name: 'demoZone2'
      location: 'global'
      dependsOn: [
        dnsZone
      ]
    }
Quote:
    You can't query which resources were defined in the dependsOn element after deployment.
Assuming 'otherZone' is a dependsOn element, it can still be queried using the symbolic name.
Am I wrong and if so what is meant with the dependsOn element?

As stated in the article you link to:
    The following example shows a DNS zone named otherZone that depends on a DNS zone named dnsZone
This means that otherZone is not the dependsOn element; it has a dependence on dnsZone.
and
    While you may be inclined to use dependsOn to map relationships between your resources, it's important to understand why you're doing it. For example, to document how resources are interconnected, dependsOn isn't the right approach. You can't query which resources were defined in the dependsOn element after deployment. Setting unnecessary dependencies slows deployment time because Resource Manager can't deploy those resources in parallel.
Bicep creates dependencies automatically (implicitly):
    Azure Resource Manager evaluates the dependencies between resources, and deploys them in their dependent order. When resources aren't dependent on each other, Resource Manager deploys them in parallel.
So setting them explicitly is not needed in most cases.

The way I see it, what they mean is that setting those explicit dependsOn elements without a good (deployment-related) reason has no benefit, because once the deployment has been done you can't see anywhere (in the Azure Portal) that you had this dependency set in your file.
The use case for dependsOn is if you need to make sure that the Resource Manager will create the resource with the dependsOn only after the other resource has been created (and not in parallel).
But in most cases where you would need that you will set a parent relationship anyway, which has the same effect (implicit dependency).
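As an added example (not from the question or the Microsoft documentation it quotes), here are the same two zones plus two record sets that reference them; the record names and the address are constructed for the illustration:

```bicep
// Added example: the two zones from the question plus two record sets that
// reference them. A 'parent' property or any symbolic-name reference (such as
// dnsZone.name) is an implicit dependency, so Resource Manager orders these
// correctly with no dependsOn at all. Record names and the address are constructed.
targetScope = 'resourceGroup'

resource dnsZone 'Microsoft.Network/dnszones@2018-05-01' = {
  name: 'demoeZone1'
  location: 'global'
}

// Nothing here refers to dnsZone, so both zones deploy in parallel.
resource otherZone 'Microsoft.Network/dnszones@2018-05-01' = {
  name: 'demoZone2'
  location: 'global'
}

// 'parent' is an implicit dependency on dnsZone; adding dependsOn: [dnsZone]
// here would be redundant and is the "unnecessary dependency" the docs warn about.
resource wwwRecord 'Microsoft.Network/dnszones/A@2018-05-01' = {
  parent: dnsZone
  name: 'www'
  properties: {
    TTL: 3600
    ARecords: [
      {
        ipv4Address: '203.0.113.10' // documentation-range address, constructed
      }
    ]
  }
}

// Depends on otherZone through 'parent' and on dnsZone through the
// dnsZone.name reference inside the cname value: two implicit dependencies.
resource aliasRecord 'Microsoft.Network/dnszones/CNAME@2018-05-01' = {
  parent: otherZone
  name: 'alias'
  properties: {
    TTL: 3600
    CNAMERecord: {
      cname: 'www.${dnsZone.name}'
    }
  }
}

// The ordering Bicep derived is visible in the dependsOn arrays of the ARM JSON
// that `az bicep build --file main.bicep` emits; that is one place to inspect
// the implicit dependencies before deploying.
output wwwFqdn string = wwwRecord.properties.fqdn
```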

Related

Best bicep structure with CI/CD pipeline

I'm not sure that there is a right answer for this and it will vary per scenario but I am curious because there isn't much documentation that I can find for a code first azure bicep infrastructure. Most examples you find show how to make a resource within a resource group, or using a module to define scope and deploy to another resource group, but what if you're trying to do more?
Let's do the following scenario: using 2 subscriptions (1 for prod, 1 for dev & qa) with 20 resource groups, each containing multiple different resources, and you want to manage this within a CI/CD pipeline, plus the 3 environments: prod, qa, and dev. How would you go about this? I can think of a few scenarios, but nothing sticks out as the best way to do it; maybe I'm missing something.
CI/CD portion:
Let's assume:
    az account set --subscription                                          (set our sub)
    az group create --name --location                                      (create resource group if it doesn't exist)
    az deployment group create --name --resource-group --template-file --parameters   (read from our files to deploy to a resource group)
You could pass an array of resource groups to loop through to create the resource group if it doesn't exist.
You could have the resource group list in a parameters file that you read from and do the same thing as above.
You could create a step for every resource group and the resources inside of it. (Seems excessive?)
Bicep Portion:
Bicep restrictions: to specify scope (a resource group in our scenario) we'd have to use modules when dealing with multiple resource groups, or have a step for each resource group and have a main.bicep file for the different resource groups/resources.
You could create a folder structure for each resource group and the resources inside of it with a main.bicep, but that would mean you have a lot of extra deploy steps. (Seems excessive?)
You could have 1 main.bicep file and have a folder structure that uses a lot of modules to specify your scope while reading the resource group, resource variables, etc. from an environment parameters.json file.
You could create a folder for each environment, then create each resource group and the resources inside of it, not using a parameters.json but using params in each file instead, since they would be specific to each environment.
1 final issue:
Lastly, let's say you want to add a step before the deployment of resources to use bicep what-if to check what resources will be updated or deleted (this is pretty important!). Last I checked there was an issue where what-if does not work for bicep modules, so you wouldn't get the luxury of knowing what changes would be made prior to a deployment with the what-if. That is a pretty big safety net you'd be losing, so would you want to scratch the module strategy altogether?
What would be the best way to tackle something like this while keeping it readable for average non experts to be able to hop in and work on it? I would lean towards making a folder structure using modules and reading from an environment parameters.json but I'm not convinced that's the best way, especially if what-if isn't fully working for bicep modules.
IMO this does depend a lot on the scenario, topology, permissions, etc. The way I would start thinking about this is that you want an "environment" that will vary a bit between dev/test and prod. That env has multiple resourceGroups and a dedicated subscription for each env.
In this case, I would use a single bicep "project" (e.g. main.bicep with modules) and change the deployment using parameter files (for dev/test vs. prod). The project would lay down everything needed for the environment (think greenfield). The main.bicep file is a subscription scoped deployment that will create the RGs and all the resources needed. Oversimplified example:
    targetScope = 'subscription'

    param sqlAdminUsername string
    param keyVaultResourceGroup string
    param keyVaultName string
    param keyVaultSecretName string
    param location string = deployment().location

    // Existing Key Vault in another resource group; getSecret() below passes
    // the secret as a secure module parameter instead of as a template value.
    resource kv 'Microsoft.KeyVault/vaults@2021-06-01-preview' existing = {
      scope: resourceGroup(subscription().subscriptionId, keyVaultResourceGroup)
      name: keyVaultName
    }

    resource sqlResourceGroup 'Microsoft.Resources/resourceGroups@2021-04-01' = {
      name: 'shared-sql'
      location: location
    }

    resource webResourceGroup 'Microsoft.Resources/resourceGroups@2021-04-01' = {
      name: 'shared-web'
      location: location
    }

    // Referencing sqlResourceGroup.name is the implicit dependency that makes
    // the module wait for the resource group.
    module sqlDeployment 'modules/shared-sql.bicep' = {
      scope: resourceGroup(sqlResourceGroup.name)
      name: 'sqlDeployment'
      params: {
        sqlAdminUsername: sqlAdminUsername
        sqlAdminPassword: kv.getSecret(keyVaultSecretName)
        location: location
      }
    }

    module webDeployment 'modules/shared-web.bicep' = {
      scope: resourceGroup(webResourceGroup.name)
      name: 'webDeployment'
      params: {
        location: location
      }
    }
A single template + modules that creates the RGs, creates a SQL Server (via module) and an app service plan with an admin website (also via module). You can then parameterize whatever you want to for each environment.
re: what-if: what-if will skip evaluation of a module if that module has a parameter that is an output of another module. If you don't pass outputs between modules then the module will be evaluated by what-if. The sample above does not pass outputs; often you don't need to do this because the information output was known by the parent (i.e. main.bicep), but sometimes you can't avoid it. ymmv.
Once you have the template designed in such a way, the pipeline is really straightforward - just deploy the template to the desired subscription.
That help?
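The answer references modules/shared-sql.bicep without showing it. What follows is an added example of what that module could contain (assumed contents, not the answer's own code), written to show why the sqlAdminPassword parameter must be declared @secure() to receive kv.getSecret(), and why the module's outputs stay non-secret:

```bicep
// Added example of 'modules/shared-sql.bicep' as referenced by the main.bicep
// above. The contents are assumptions; only the three parameter names come
// from the parent template.
targetScope = 'resourceGroup'

param location string
param sqlAdminUsername string

// kv.getSecret() in the parent can only be bound to a @secure() parameter;
// secure values are never written to deployment history or outputs.
@secure()
param sqlAdminPassword string

// Deterministic, globally unique server name derived from the resource group.
param sqlServerName string = 'sql-${uniqueString(resourceGroup().id)}'

resource sqlServer 'Microsoft.Sql/servers@2021-11-01' = {
  name: sqlServerName
  location: location
  properties: {
    administratorLogin: sqlAdminUsername
    administratorLoginPassword: sqlAdminPassword
    minimalTlsVersion: '1.2'
  }
}

// Child resource: 'parent' is the implicit dependency, so no dependsOn.
resource appDb 'Microsoft.Sql/servers/databases@2021-11-01' = {
  parent: sqlServer
  name: 'appdb'
  location: location
  sku: {
    name: 'Basic'
    tier: 'Basic'
  }
}

// Outputs carry only non-secret values. As the answer notes, if main.bicep
// forwarded these into another module's parameters, what-if would skip
// evaluating that module, so main.bicep above does not do that.
output serverFqdn string = sqlServer.properties.fullyQualifiedDomainName
output serverName string = sqlServer.name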

Terraform state sync with external changes

I have terraform managing my infrastructure in Azure. However, there are cases where the state can get out of sync when other services are changing the infrastructure as well.
For example, I have terraform create an Application Gateway. But I also have an AKS cluster with AGIC enabled, which dynamically updates/changes rules, listeners, etc. inside of Application Gateway. So if terraform is re-run after AGIC makes some changes, terraform doesn't know and wants to reset to the default config it knows about.
Maybe this isn't possible, but is there an automated way to sync the two? It's kind of unfeasible to have to go into the terraform config and manually add the changes AGIC makes every time it does so. At this point, is it even worth managing Application Gateway with terraform?
If you really want to create the Application Gateway with Terraform, you should probably use the Lifecycle Meta-Argument ignore_changes on the attributes that will be modified by AKS. It is not perfect, but at least they can share their responsibilities without overwriting each other.
    The ignore_changes feature is intended to be used when a resource is created with references to data that may change in the future, but should not affect said resource after its creation. In some rare cases, settings of a remote object are modified by processes outside of Terraform, which Terraform would then attempt to "fix" on the next run. In order to make Terraform share management responsibilities of a single object with a separate process, the ignore_changes meta-argument specifies resource attributes that Terraform should ignore when planning updates to the associated remote object.
For example:
    resource "azurerm_application_gateway" "example" {
      ...
      lifecycle {
        # Attributes AGIC rewrites at runtime; Terraform will not plan to revert them.
        ignore_changes = [
          http_listener,
          request_routing_rule,
          backend_http_settings
        ]
      }
      ...
    }

How to Conditionally Create an Azure Resource-Group (or Any Resource) if Someone Else Has Not Created One

I know that when I run terraform apply, a resource that was already deployed within the same Terraform state is not re-created.
But I want to do something different:
Create a resource if it is not created by someone else.
But if the resource is already there, even if it is not in the Terraform state, do not generate an error and have a reference to its name.
Is there any known pattern to do this?
By design Terraform providers will typically not automatically "adopt" existing objects as now being managed by Terraform, because to do so would potentially lead to costly mistakes if you inadvertently bind a remote object to a Terraform resource and then run terraform destroy without realizing what is going to be destroyed.
Instead, you must bind existing objects to your Terraform resources using the terraform import command, telling Terraform explicitly that you intend it to become the sole manager of that object.

Terraform resource with the ID already exists

When Terraform run task executes in azure devops release pipeline I get an error "A resource with the ID already exists".
The resource exists in Azure, but why is it complaining about the resource if this already exists? This should ignore this part. Please help with what I need to add in my code that will fix this error!
Am I just using this buggy terraform tool for deploying azure resources? Terraform help is terrible!!!
    resource "azurerm_resource_group" "test_project" {
      name     = "${var.project_name}-${var.environment}-rg"
      location = "${var.location}"
      tags = {
        application = "${var.project_name}"
      }
    }
Terraform is designed to allow you to manage only a subset of your infrastructure with a particular Terraform configuration, in case either some objects are managed by another tool or in case you've decomposed your infrastructure to be managed by many separate configurations that cooperate to produce the desired result.
As part of that design, Terraform makes a distinction between an object existing in the remote system and that object being managed by the current Terraform configuration. Where technical constraints of an underlying API allow it, Terraform providers will avoid implicitly taking ownership of something that was not created by that specific Terraform configuration. The error message you saw here is the Azure provider's implementation of that, where it pre-checks to make sure the name you give it is unique so that it won't overwrite (and thus take implicit ownership of) an object created elsewhere.
To proceed here you have two main options, depending on your intended goal:
If this object was formerly managed by some other system and you now want to manage it exclusively with this Terraform configuration, you can tell Terraform to associate the existing object with the resource block you've written and thus behave as if that object were originally created by that resource block:
    terraform import azurerm_resource_group.test_project /subscriptions/YOUR-SUBSCRIPTION-ID/resourceGroups/PROJECTNAME-ENVIRONMENTNAME-rg
After you run terraform import you must ensure that whatever was previously managing that object will no longer associate with it. This object is now owned by this Terraform configuration and must not be changed by any other system.
If this object is managed by some other system and you wish to continue managing it that way then you can instead use a data block to retrieve information about that existing object to use elsewhere in your configuration without Terraform taking ownership:
    data "azurerm_resource_group" "example" {
      name = "${var.project_name}-${var.environment}-rg"
    }
If you needed the resource group's location name elsewhere in your module, for example, you could use data.azurerm_resource_group.example.location to access it. If you wanted to make any later changes to this resource group, you would continue to do that using whichever other system is considered the owner of it in your environment.
The main difference between these two approaches is how Terraform will record the object in state snapshots. terraform import causes Terraform to create a binding between the resource configuration you wrote and the remote object whose id you gave on the command line, which is henceforth indistinguishable to Terraform from it having created that object and recorded the binding itself in the first place. For a data resource, Terraform just reads the data about the existing object and saves a cache of it in the state so it can determine if the value has changed on a future run; it will never plan to make any modifications to an object used with a data block.
Try to delete the .terraform local folder to clean the cache, then run terraform init again and retry running the pipeline.
For my future self:
Today I stumbled across this same problem, because I renamed some resources, and terraform could not track them. I found out about terraform state mv ... which gives you the ability to rename resources in your state file, so that it can track remote resources. Really useful.

Azure ARM template nested template deployment won't update resources\fails to start

I have the following ARM Template structure:
    Parent Template
    |--Nested Template 1
    |--...
    |--Nested Template 6
So I only have 2 levels of templates, Parent and nested.
Let's say I deploy parent to an empty resource group and everything works well. After that I delete one of the resources and want to deploy the same Parent Template with the same parameters to bring the deleted resources back. But the deployment would fail saying that the resource already exists (the other one, not the one I tried to recreate). I tried both incremental mode and full mode for deployments.
If I directly invoke the nested template with the missing resources it works as expected (so specifically creating a deployment with the nested template only, not with the parent that invokes the nested template).
UPD:
After some additional testing I can conclude that it's even weirder than before. So I'm starting this deployment with powershell:
    New-AzureRmResourceGroupDeployment #parameters
And it deploys just fine, however if I invoke the same command after the first deployment completed I would get an error:
    The resource 'gggg-1s-the-wordd' already exists in location 'westeurope' in resource group 'gggg'. A resource with the same name cannot be created in location 'northeurope'. Please select a new resource name.
Is this behavior expected? I can't seem to find anything relevant, thanks!
UPD2: It doesn't really matter if I use the portal or powershell, I get the same error.
So with the help from Brian we were able to identify the culprit. The issue was that the WebApp had its location set to resourcegroup().location while the App Service Plan was correctly getting location from parameters. So that lead to a problem where at deployment time WebApp would deploy to the region where its App Service Plan was, but at evaluation time it would consider that this WebApp belongs to the region where the resource group was.
TLDR - copy paste error, which coupled with a bug in evaluation of location in ARM lead to a quite weird behavior.
If you deploy the same resource (intentionally did not use the word "template" there) to the same resource group, Azure should "make it so". IOW, if it's not there, it will create it; if it is there, it should no-op. It's not that black and white, there are some nuances (like you can't change certain properties if the resource exists), but if you deploy the same resource with the same property values to the same resource group you should not get an error.
In general, nesting (or not) shouldn't affect any of this.
If you're deploying to different resource groups, then you could see an error about "already exists" depending on the resource.
All that said, it's really hard to tell in your specific case what's going on without more detail... So if this doesn't help, can you add some detail (what's the exact error message) or a repro (template that we could see the problem with)?
I experienced the same issue. The reason was that, location of App Service was defined as [resourceGroup().location] instead of App service plan (ASP) location, which was creating the problem. I changed it by passing the location of ASP as a parameter to the template.
Getting the location of the ASP is done as follows:
    using System;
    using Microsoft.Azure;
    using Microsoft.Azure.Management.Resources;
    using Microsoft.Azure.Management.Resources.Models;

    internal static string GetASPLocation(TokenCloudCredentials credentials, string resourceGroup, string ASP)
    {
        Console.WriteLine($"Getting location of App Service Plan {ASP} in Resource Group {resourceGroup}");
        var resourceClient = new ResourceManagementClient(credentials);
        var identity = new ResourceIdentity(ASP, "Microsoft.Web/serverfarms", "2015-08-01");

        // Fail early with a clear message instead of a generic 404 from Get.
        ResourceExistsResult result = resourceClient.Resources.CheckExistence(resourceGroup, identity);
        if (!result.Exists)
        {
            throw new InvalidOperationException($"App Service Plan {ASP} was not found in Resource Group {resourceGroup}");
        }

        // The plan's region is what the Web App must be deployed into.
        var appServicePlan = resourceClient.Resources.Get(resourceGroup, identity);
        return appServicePlan.Resource.Location;
    }
And in the ARM template, the location can be changed to:
    "location": "[parameters('ASPLocation')]"