I have an existing Terraform script in which one container registry is defined. We are in the process of creating a new one with a customer-managed key; once it is established, the old one will be deleted.
So in the Terraform script I copied the container registry module and gave it a new name, but that produced an error saying multiple entries were seen. Then I set count = 2 on the container registry module; that also gave an error.
Because of this error, I am unable to implement the customer-managed key either.
Is there a way to have two container registries in one Terraform script?
Initially I tried to reproduce the scenario. When I changed only the name of the registry, the new one was created and the old registry was deleted. That was fine.
Then I copied the same module next to the previous one and changed only the reference name, without changing the registry name. That threw an error saying it already exists.
Note: make sure you save your changes every time you modify the configuration, and then run the Terraform workflow.
I even got the error that containerregistry1 already exists when acr01 was changed to acr03, because the registry name itself was not changed.
But when I changed the registry name as well, I could successfully create other similar instances of the registry.
I even tried count = 3 on the container registry module, which gave a similar error:
provider "azurerm" {
features {}
}
data "azurerm_resource_group" "example" {
name = "<myResourcegroup>"
}
resource "azurerm_container_registry" "acr" {
count = 3
name = "containerkavyasarabojuRegistry01"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
sku = "Premium"
admin_enabled = false
georeplications {
location = "East US"
zone_redundancy_enabled = true
tags = {}
}
...
...
This is caused because the registry name must be unique too: Azure Container Registry names have global scope, so each name must be unique across all of Azure, not just within your subscription or resource group.
So along with count = x, the registry name must also be varied or incremented, like below:
count = 3
name  = "containerkavyasarabojuRegistry01${format("%03d", count.index + 1)}"
example:

data "azurerm_resource_group" "example" {
  name = "<myresourcegroup>"
}

resource "azurerm_container_registry" "acr" {
  count               = 3
  name                = "containerkavyasarabojuRegistry01${format("%03d", count.index + 1)}" // unique name per instance
  resource_group_name = data.azurerm_resource_group.example.name
  location            = data.azurerm_resource_group.example.location
  sku                 = "Premium"
  admin_enabled       = false
  ...
  georeplications {
    ...
  }
...
RESULT: we can see the registries created with unique names.
You can check whether an ACR name already exists through the Azure CLI (az acr check-name) before actually settling on a name:
az acr check-name -n doesthisnameexist
Reference: terraform-modules-and-multiple-instances
I have created a storage account and a container inside it to store my AKS backup using Terraform. I created a child module for the storage account and the container, and I create both by calling the child module from the root module's "main.tf". I have two modules, e.g. "module aks_backup_storage" and "module aks_backup_container". The modules are created successfully after running "terraform apply", but at the end it raises the errors below in the console.
A resource with the ID "/subscriptions/...../resourceGroups/rg-aks-backup-storage/providers/Microsoft.Storage/storageAccounts/aksbackupstorage" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_storage_account" for more information.
failed creating container: failed creating container: containers.Client#Create: Failure sending request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="ContainerAlreadyExists" Message="The specified container already exists.\nRequestId:f.........\nTime:2022-12-28T12:52:08.2075701Z"
Root module

module "aks_backup_storage" {
  source                         = "../modules/aks_pv_storage_container"
  rg_aks_backup_storage          = var.rg_aks_backup_storage
  aks_backup_storage_account    = var.aks_backup_storage_account
  aks_backup_container           = var.aks_backup_container
  rg_aks_backup_storage_location = var.rg_aks_backup_storage_location
  aks_backup_retention_days      = var.aks_backup_retention_days
}
Child module
resource "azurerm_resource_group" "rg_aksbackup" {
name = var.rg_aks_backup_storage
location = var.rg_aks_backup_storage_location
}
resource "azurerm_storage_account" "aks_backup_storage" {
name = var.aks_backup_storage_account
resource_group_name = var.rg_aks_backup_storage
location = var.rg_aks_backup_storage_location
account_kind = "StorageV2"
account_tier = "Standard"
account_replication_type = "ZRS"
access_tier = "Hot"
enable_https_traffic_only = true
min_tls_version = "TLS1_2"
#allow_blob_public_access = false
allow_nested_items_to_be_public = false
is_hns_enabled = false
blob_properties {
container_delete_retention_policy {
days = var.aks_backup_retention_days
}
delete_retention_policy {
days = var.aks_backup_retention_days
}
}
}
# Different container can be created for the different backup level such as cluster, Namespace, PV
resource "azurerm_storage_container" "aks_backup_container" {
#name = "aks-backup-container"
name = var.aks_backup_container
#storage_account_name = azurerm_storage_account.aks_backup_storage.name
storage_account_name= var.aks_backup_storage_account
}
I have also tried to import the resource using the command below:
terraform import ['azurerm_storage_account.aks_backup_storage /subscriptions/a3ae2713-0218-47a2-bb72-c6198f50c56f/resourceGroups/rg-aks-backup-storage/providers/Microsoft.Storage/storageAccounts/aksbackupstorage']
But zsh rejects the command:
zsh: no matches found: [azurerm_storage_account.aks_backup_storage /subscriptions/a3ae2713-0218-47a2-bb72-c6198f50c56f/resourceGroups/rg-aks-backup-storage/providers/Microsoft.Storage/storageAccounts/aksbackupstorage/]
I had no issue when I was creating the resources using the same code without declaring any modules.
Now I have several modules called from the root module's main.tf file.
Here is my project directory structure (screenshot):
I really appreciate any suggestions, thanks in advance.
variable.tf

variable "rg_aks_backup_storage" {
  type        = string
  description = "resource group name for the backup storage"
  default     = "rg-aks-backup-storage"
}

variable "aks_backup_storage_account" {
  type        = string
  description = "storage account name for the backup"
  default     = "aksbackupstorage"
}

variable "aks_backup_container" {
  type        = string
  description = "storage container name"
  #default = "aks-storage-container"
  default     = "aksbackupstoragecontaine"
}

variable "rg_aks_backup_storage_location" {
  type    = string
  default = "westeurope"
}

variable "aks_backup_retention_days" {
  type    = number
  default = 90
}
The storage account name that you use must be unique across all of Azure (see naming restrictions). I checked, and the default storage account name that you are using is already taken. Have you tried changing the name to something you know is unique?
A way to do this consistently would be to add a random suffix at the end of the name, e.g.:
resource "random_string" "random_suffix" {
length = 6
special = false
upper = false
}
resource "azurerm_storage_account" "aks_backup_storage" {
name = join("", tolist([var.aks_backup_storage_account, random_string.random_suffix.result]))
...
}
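Plain string interpolation is equivalent to the join(..., tolist(...)) expression and a little easier to read:

name = "${var.aks_backup_storage_account}${random_string.random_suffix.result}"

Keeping special = false and upper = false matters here, because storage account names may only contain lowercase letters and numbers (3-24 characters).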
I also received the same error when I tried to run terraform apply while creating a container registry.
It usually occurs when the local Terraform state file does not match the resources actually deployed in Azure.
A resource can already exist in the portal even though it does not appear in your local state, for example if it was deployed previously outside this configuration. If you receive this type of issue, compare your state file against what exists in the portal. If the resource exists in Azure but is missing from state, use the command below to import it.
Note: validate that the Terraform state matches afterwards, and run terraform init and terraform apply once you are done with the changes.
To resolve this error, use terraform import.
Here I imported the container registry (as an example), and it imported successfully:
terraform import azurerm_container_registry.acr "/subscriptions/<subscriptionID>/resourceGroups/<resourceGroup>/providers/Microsoft.ContainerRegistry/registries/xxxxcontainerRegistry1"
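Once the import completes, you can confirm the registry is now tracked in state with terraform state list, which should show the azurerm_container_registry.acr address used above:

terraform state list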
After that I ran terraform apply again and the resource deployed successfully in the portal without any errors.
I have some Terraform code which works, but I want to be able to ignore the DNS TXT record value, as it is updated externally using another tool (acme.sh). I have tried multiple different kinds of HCL to ignore the value; the Terraform HCL does not fail, it just sets the value back to the original.
Any help would be appreciated.
resource "azurerm_resource_group" "mydomain-co-uk-dns" {
name = "mydomain.co.uk-dns"
location = "North Europe"
}
resource "azurerm_dns_zone" "mydomaindns" {
name = "mydomain.co.uk"
resource_group_name = azurerm_resource_group.mydomain-co-uk.name
}
resource "azurerm_dns_txt_record" "_acme-challenge-api" {
name = "_acme-challenge.api"
zone_name = azurerm_dns_zone.mydomaindns.name
resource_group_name = azurerm_resource_group.mydomain-co-uk-dns.name
ttl = 300
record {
value = "randomkey-that-changes externally"
}
tags = {
Environment = "acmesh"
}
lifecycle {
ignore_changes = [
record
]
}
}
Thanks
I tried testing with the same code you provided and was able to deploy the resources successfully. I then manually changed the value of the record in the portal and applied the Terraform code again; it did not make any changes to the record, it just updated the old value in the Terraform state file to the newer value from the portal.
Note: I used Terraform v1.0.5 on windows_amd64 + provider registry.terraform.io/hashicorp/azurerm v2.83.0.
As confirmed by #Lain, the issue was resolved after upgrading azurerm from 2.70.0 to the latest version.
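If you want to make sure everyone on the project runs a provider version with this fix, you can pin a minimum version. A minimal sketch, assuming Terraform 0.13+ syntax and taking 2.83.0 (the version confirmed working in the test above) as the lower bound:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      # lower bound is an assumption based on the version tested above
      version = ">= 2.83.0"
    }
  }
}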
I would like to create a sample Azure web app using Terraform.
I use this code to create the resource.
This is my main.tf file:
resource "azurerm_resource_group" "rg" {
name = var.rgname
location = var.rglocation
}
resource "azurerm_app_service_plan" "plan" {
name = var.webapp_plan_name
location = var.rglocation
resource_group_name = var.rgname
sku {
tier = var.plan_settings["tier"]
size = var.plan_settings["size"]
capacity = var.plan_settings["capacity"]
}
}
resource "azurerm_app_service" "webapp" {
name = var.webapp_name
location = var.rglocation
resource_group_name = var.rgname
app_service_plan_id = azurerm_app_service_plan.plan.id
}
and this is the variable.tf
# variables for Resource Group
variable "rgname" {
  description = "(Required) Name of the Resource Group"
  type        = string
  default     = "example-rg"
}

variable "rglocation" {
  description = "Resource Group location like West Europe etc."
  type        = string
  default     = "eastus2"
}

# variables for web app plan
variable "webapp_plan_name" {
  description = "Name of webapp plan"
  type        = string
  default     = "XXXXXXXXXx"
}

variable "plan_settings" {
  type        = map(string)
  description = "Definition of the dedicated plan to use"
  default = {
    kind     = "Linux"
    size     = "S1"
    capacity = 1
    tier     = "Standard"
  }
}

variable "webapp_name" {
  description = "Name of webapp"
  type        = string
  default     = "XXXXXXXX"
}
Running terraform apply --auto-approve shows an error:
Error: creating/updating App Service Plan "XXXXXXXXXXX" (Resource Group "example-rg"): web.AppServicePlansClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: Code="ResourceGroupNotFound" Message="Resource group 'example-rg' could not be found."
but in the Azure portal, the resource group is created.
What's wrong in my code?
Can I make Terraform wait until a resource is created before passing on to the next resource?
You need to reference the resources to indicate the dependency between them to Terraform, so that it can guarantee that the resource group is created first, and then the other resources. The error indicates that the resource group does not exist yet: your code tries to create the App Service Plan first and then the resource group.
resource "azurerm_resource_group" "rg" {
name = var.rgname
location = var.rglocation
}
resource "azurerm_app_service_plan" "plan" {
...
resource_group_name = azurerm_resource_group.rg.name
...
}
... but I can't use variable?
To answer that question and give more detail about the current issue: what's happening in your original code is that you're not referencing Terraform resources in a way that Terraform can use to build its dependency graph.
If var.rgname equals <my-rg-name> and azurerm_resource_group.rg.name also equals <my-rg-name>, then technically that's the same, right?
Well, no, not necessarily. They do have the same value. The difference is that the first one just echoes the value. The second one contains the same value, but it is also an instruction to Terraform saying, "Hey, wait a minute. We need the name value from azurerm_resource_group.rg, so let me make sure I set that up first, and then I'll provision this resource."
The difference is subtle but important. Using values in this way lets Terraform understand what it has to provision first and lets it build its dependency graph. Using variables alone does not. Especially in large projects, always try to use the resource attribute value instead of just input variables, which will help avoid unnecessary issues.
It's quite common to use an input variable to define certain initial data, like the name of the resource group as you did above. After that, however, every time resource_group_name is referenced in the manifests you should always use the Terraform-generated value, so resource_group_name = azurerm_resource_group.rg.name.
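When there is no attribute you can naturally reference, Terraform also lets you state the ordering explicitly with depends_on. A short sketch (redundant in this case, since the attribute references already imply the dependency, but it illustrates the mechanism):

resource "azurerm_app_service_plan" "plan" {
  name                = var.webapp_plan_name
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  # explicit ordering hint; redundant here because the references above
  # already create the edge in the dependency graph
  depends_on = [azurerm_resource_group.rg]

  sku {
    tier     = var.plan_settings["tier"]
    size     = var.plan_settings["size"]
    capacity = var.plan_settings["capacity"]
  }
}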
I have the following piece of Terraform code, where Terraform fetches the SQL admin password from a key vault. When I changed the administrator login and password in the key vault and then ran Terraform again to update the SQL server, it destroyed the SQL database and SQL server.
Is this standard procedure, or can I change this behavior? Recreating the resources is not really feasible in a production environment. I know a lifecycle hook could prevent the deletion of a resource, but if I am correct, such a thing would then break the pipeline.
data "azurerm_key_vault_secret" "sql_admin_user_secret" {
name = var.sql_admin_user_secret_name
key_vault_id = data.azurerm_key_vault.key_vault.id
}
data "azurerm_key_vault_secret" "sql_admin_password_secret" {
name = var.sql_admin_password_secret_name
key_vault_id = data.azurerm_key_vault.key_vault.id
}
resource "azurerm_sql_server" "sql_server" {
name = var.sql_server_name
resource_group_name = var.resource_group_name
location = var.location
version = var.sql_server_version
administrator_login = data.azurerm_key_vault_secret.sql_admin_user_secret.value
administrator_login_password = data.azurerm_key_vault_secret.sql_admin_password_secret.value
}
resource "azurerm_sql_database" "sql_database" {
name = var.sql_database_name
resource_group_name = var.resource_group_name
location = var.location
server_name = azurerm_sql_server.sql_server.name
edition = var.sql_edition
requested_service_objective_name = var.sql_service_level
}
I could add something like the following, but it only prevents a destroy and ignores changes in those fields, respectively, which again is not really an option.
lifecycle {
  prevent_destroy = true
  ignore_changes  = ["administrator_login", "administrator_login_password"]
}
Update:
The way of working is to never update the administrator_login. administrator_login_password can be updated separately, which doesn't cause the instance to be recreated.
As per the official doc, if you change the administrator_login, the resource is expected to be recreated. However, if you only change administrator_login_password, the server should be updated in place.
administrator_login - (Required) The administrator login name for the new server. Changing this forces a new resource to be created.
There is not much that can be done here, since Terraform is communicating with the Azure API, which is not designed to update the administrator user id of Azure SQL without creating a new resource.
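If you want to protect the server against an accidental login change in the key vault while still letting password rotations apply in place, one option is to ignore only the login. This is a sketch, not part of the original answer; it means Terraform will silently ignore login drift:

lifecycle {
  prevent_destroy = true
  # ignore only the login: a changed login secret no longer forces replacement,
  # while administrator_login_password changes still update the server in place
  ignore_changes = [administrator_login]
}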
I'm getting the following error when trying to do a plan or an apply on a terraform script.
Error: Invalid count argument
on main.tf line 157, in resource "azurerm_sql_firewall_rule" "sqldatabase_onetimeaccess_firewall_rule":
157: count = length(split(",", azurerm_app_service.app_service.possible_outbound_ip_addresses))
The "count" value depends on resource attributes that cannot be determined
until apply, so Terraform cannot predict how many instances will be created.
To work around this, use the -target argument to first apply only the
resources that the count depends on.
I understand this is falling over because Terraform doesn't know how many firewall rules to create until the app_service is created. I can run an apply with -target=azurerm_app_service.app_service, then run another apply after the app_service is created.
However, this isn't great for our CI process: if we want to create a whole new environment from our Terraform scripts, we'd like to just tell Terraform to go build it without having to tell it each target to build in order.
Is there a way in Terraform to just build everything that is needed, in order, without having to add targets?
Below is an example Terraform script that gives the above error:
provider "azurerm" {
version = "=1.38.0"
}
resource "azurerm_resource_group" "resourcegroup" {
name = "rg-stackoverflow60187000"
location = "West Europe"
}
resource "azurerm_app_service_plan" "service_plan" {
name = "plan-stackoverflow60187000"
resource_group_name = azurerm_resource_group.resourcegroup.name
location = azurerm_resource_group.resourcegroup.location
kind = "Linux"
reserved = true
sku {
tier = "Standard"
size = "S1"
}
}
resource "azurerm_app_service" "app_service" {
name = "app-stackoverflow60187000"
resource_group_name = azurerm_resource_group.resourcegroup.name
location = azurerm_resource_group.resourcegroup.location
app_service_plan_id = azurerm_app_service_plan.service_plan.id
site_config {
always_on = true
app_command_line = ""
linux_fx_version = "DOCKER|nginxdemos/hello"
}
app_settings = {
"WEBSITES_ENABLE_APP_SERVICE_STORAGE" = "false"
}
}
resource "azurerm_sql_server" "sql_server" {
name = "mysqlserver-stackoverflow60187000"
resource_group_name = azurerm_resource_group.resourcegroup.name
location = azurerm_resource_group.resourcegroup.location
version = "12.0"
administrator_login = "4dm1n157r470r"
administrator_login_password = "4-v3ry-53cr37-p455w0rd"
}
resource "azurerm_sql_database" "sqldatabase" {
name = "sqldatabase-stackoverflow60187000"
resource_group_name = azurerm_sql_server.sql_server.resource_group_name
location = azurerm_sql_server.sql_server.location
server_name = azurerm_sql_server.sql_server.name
edition = "Standard"
requested_service_objective_name = "S1"
}
resource "azurerm_sql_firewall_rule" "sqldatabase_firewall_rule" {
name = "App Service Access (${count.index})"
resource_group_name = azurerm_sql_database.sqldatabase.resource_group_name
server_name = azurerm_sql_database.sqldatabase.name
start_ip_address = element(split(",", azurerm_app_service.app_service.possible_outbound_ip_addresses), count.index)
end_ip_address = element(split(",", azurerm_app_service.app_service.possible_outbound_ip_addresses), count.index)
count = length(split(",", azurerm_app_service.app_service.possible_outbound_ip_addresses))
}
To make this work without the -target workaround described in the error message requires reframing the problem in terms of values Terraform can know from the configuration alone, rather than values generated by the providers at apply time.
The trick then would be to figure out which values in your configuration the Azure API uses to decide how many IP addresses to return, and to rely on those instead. I don't know Azure well enough to give you a specific answer, but I see on Inbound/Outbound IP addresses that this seems to be an operational detail of Azure App Services rather than something you can control yourself, so unfortunately this problem may not be solvable that way.
If there really is no way to predict from configuration how many addresses will be in possible_outbound_ip_addresses, the alternative is to split your configuration into two parts, where one depends on the other. The first would configure your App Service and anything else that makes sense to manage along with it; the second might use the azurerm_app_service data source to retrieve the data about the assumed-already-existing app service and create firewall rules based on it.
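A minimal sketch of that second configuration, assuming the app service from the example above has already been created by the first configuration (names are reused from the example; adjust them to your environment):

data "azurerm_app_service" "app_service" {
  name                = "app-stackoverflow60187000"
  resource_group_name = "rg-stackoverflow60187000"
}

resource "azurerm_sql_firewall_rule" "sqldatabase_firewall_rule" {
  # the data source is read during planning, so this count is known before apply
  count               = length(split(",", data.azurerm_app_service.app_service.possible_outbound_ip_addresses))
  name                = "App Service Access (${count.index})"
  resource_group_name = "rg-stackoverflow60187000"
  server_name         = "mysqlserver-stackoverflow60187000"
  start_ip_address    = element(split(",", data.azurerm_app_service.app_service.possible_outbound_ip_addresses), count.index)
  end_ip_address      = element(split(",", data.azurerm_app_service.app_service.possible_outbound_ip_addresses), count.index)
}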
Either way you'll need to run Terraform twice to make the necessary data available. An advantage of using -target is that you only need the unusual workflow once, during initial bootstrapping, so you could do the initial create outside of CI and then use CI for ongoing changes. As long as the app service object is never replaced, subsequent Terraform plans will already know how many IP addresses are set and so should complete as normal.
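For reference, the -target bootstrapping flow from the error message looks like this when first creating the environment (using the resource address from the example above):

terraform apply -target=azurerm_app_service.app_service
terraform apply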