Conditional resource gets destroyed if count parameter is set to 0 - Azure

I have a VNET/App Service integration requirement. This requires the creation of a VPN gateway.
Once the integration is completed, a certificate (generated by the App Service) is associated with the point-to-site configuration of the VPN gateway.
If I need to run Terraform again because I need to perform some changes, it detects that the VPN gateway must be destroyed, because in Azure it now has a certificate!
I thought about using the count parameter on the VPN gateway resource, but if I set count = 0 based on a variable I get the same problem.
Any piece of advice?

Try adding an ignore_changes statement in the lifecycle of your resource. This is an example of what I use for some instances:
lifecycle {
  ignore_changes = [
    "user_data",
    "instance_type",
    "root_block_device.0.volume_size",
    "ebs_optimized",
    "tags",
  ]
}
It is set in the resource definition as follows (just to give an idea of where to place it):
resource "aws_instance" "worker_base" {
count = "..."
instance_type = "..."
user_data = "..."
lifecycle {
ignore_changes = [
"user_data",
"instance_type",
"root_block_device.0.volume_size",
"ebs_optimized",
"tags",
]
}
tags = {
Name = "..."
}
root_block_device {
delete_on_termination = "..."
volume_size = "..."
volume_type = "..."
}
}
Now, in the terraform plan output you should see which parameter changed in a way that forces a new resource. Try adding that parameter to the ignore_changes list...
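For the Azure VPN gateway case in the question, a minimal sketch of the same idea (the attribute to ignore is an assumption; use whatever attribute the plan output reports as forcing the replacement, e.g. the point-to-site client configuration that receives the certificate):
resource "azurerm_virtual_network_gateway" "vpn" {
  # ... existing gateway configuration ...

  lifecycle {
    # Ignore the point-to-site block that Azure mutates when the App Service
    # certificate is attached, so Terraform no longer plans a destroy/recreate.
    ignore_changes = [
      vpn_client_configuration,
    ]
  }
}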

Related

Terraform ARM template ignore changes

For some reason I am having lots of problems with the resource group template deployment resource. We deploy a logic app during creation; however, on a second run, terraform plan detects changes even after specifying ignore_changes = all in the lifecycle block.
Unsure if this is normal behavior; any help would be appreciated.
resource "azurerm_resource_group_template_deployment" "deploylogicapp" {
name = "template_deployment"
resource_group_name = azurerm_resource_group.bt_security.name
deployment_mode = "Incremental"
template_content = <<TEMPLATE
{
"ARM template json body"
}
TEMPLATE
lifecycle {
ignore_changes=all
}
}
EDIT
Managed to find the issue.
Added the lifecycle block to the logic app workflow resource:
resource "azurerm_logic_app_workflow" "logicapp" {
name = "azdevops-app"
location = azurerm_resource_group.bt_security.location
resource_group_name = azurerm_resource_group.bt_security.name
lifecycle {
ignore_changes = all
}
}
Then, in the ARM group template resource, set ignore_changes to template_content instead of all, as shown below.
Ran terraform plan twice, and it did not detect any changes on the second run, which is what we wanted.
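A minimal sketch of that change on the azurerm_resource_group_template_deployment resource shown earlier (only the lifecycle block differs):
lifecycle {
  # Ignore only the template body instead of every attribute.
  ignore_changes = [template_content]
}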

How can I prevent Terraform from destroying and recreating Azure VM extensions? Lifecycle block isn't working

How can I prevent Terraform from destroying and recreating Azure VM extensions? The lifecycle code block isn't working. Terraform persists in destroying the resources and fails when I have the locks enabled. Please can someone tell me where I am going wrong with this?
This is my code
resource "azurerm_virtual_machine_extension" "dsc" {
for_each = var.dsc_agent_name
name = each.key
virtual_machine_id = each.value
publisher = "Microsoft.Powershell"
type = "DSC"
type_handler_version = "2.0"
auto_upgrade_minor_version = "true"
tags = local.tags
lifecycle {
prevent_destroy = true
}
settings = <<SETTINGS
{
"ModulesUrl":"",
"SasToken":"",
"WmfVersion": "latest",
"Privacy": {
"DataCollection": ""
},
"ConfigurationFunction":""
}
SETTINGS
}
You can try to put an ignore section in:
lifecycle {
  prevent_destroy = true
  ignore_changes  = [VMDiagnosticsSettings]
}
That way it will ignore differences between what has been set on the resource in Azure and what is declared (if anything, for this section) in Terraform.
Try removing the lifecycle block and then run 'terraform plan' - it should then show you which configuration item is causing it to be destroyed / re-created
Managed to resolve this - I basically used ignore_changes on all the properties
Massive thanks to #MarkoE and AC81
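For reference, a minimal sketch of what ignoring the properties can look like on the extension above (the attribute list is an assumption; list whatever the plan reports as changing, or use ignore_changes = all as the blanket option):
resource "azurerm_virtual_machine_extension" "dsc" {
  # ... same arguments as in the question ...

  lifecycle {
    prevent_destroy = true
    # Ignore the attributes Azure rewrites after deployment so the extension
    # is no longer flagged for replacement on the next plan.
    ignore_changes = [settings, protected_settings, tags, type_handler_version]
  }
}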

Set functionTimeout using terraform

I need to add the below property to my host.json file for an Azure Function. Is it possible to add the property using Terraform, or by passing it using app_setting?
{
  "functionTimeout": "00:10:00"
}
You can use azurerm_app_configuration to create an Azure App Configuration store, and azurerm_app_configuration_key to add a key-value pair to it using Terraform.
You can use the below key-value pair for setting the timeout value in Azure:
Key: AzureFunctionsJobHost__functionTimeout
Value: 00:10:00
Example
Note:
App Configuration Keys are provisioned using a Data Plane API which requires the role App Configuration Data Owner on either the App Configuration or a parent scope (such as the Resource Group/Subscription).
resource "azurerm_resource_group" "rg" {
name = "example-resources"
location = "example-location"
}
resource "azurerm_app_configuration" "appconf" {
name = "appConf1"
resource_group_name = azurerm_resource_group.rg.name
location = azurerm_resource_group.rg.location
}
## Adding the App Configuration Data Owner role
data "azurerm_client_config" "current" {}

resource "azurerm_role_assignment" "appconf_dataowner" {
  scope                = azurerm_app_configuration.appconf.id
  role_definition_name = "App Configuration Data Owner"
  principal_id         = data.azurerm_client_config.current.object_id
}
## Adding the App Settings config values
resource "azurerm_app_configuration_key" "test" {
  configuration_store_id = azurerm_app_configuration.appconf.id
  key                    = "AzureFunctionsJobHost__functionTimeout"
  label                  = "somelabel"
  value                  = "00:10:00"

  depends_on = [
    azurerm_role_assignment.appconf_dataowner
  ]
}
Refer to the blog for adding multiple key-value pairs.
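If the goal is specifically to pass the value as an app setting on the function app itself, the same key can also go into app_settings, since host.json values can be overridden through application settings (a minimal sketch; the azurerm_linux_function_app name and the omitted required arguments are placeholders):
resource "azurerm_linux_function_app" "example" {
  name                = "example-function-app"
  resource_group_name = azurerm_resource_group.rg.name
  location            = azurerm_resource_group.rg.location
  # ... storage account, service plan and site_config omitted ...

  app_settings = {
    # Overrides "functionTimeout" from host.json
    "AzureFunctionsJobHost__functionTimeout" = "00:10:00"
  }
}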

Is there a way in terraform to create a replacement group of related resources before destroying the original group?

I have a VM template I'm using to deploy an Azure Virtual Desktop environment to Azure with Terraform (via Octopus Deploy). On top of the virtual machines, I'm installing a number of extensions, culminating in a VM extension that registers the VM with the host pool.
I'd like to rebuild the VM each time the custom script extension is applied (extension #2, after the domain join). But when rebuilding, I'd like to build out a new VM, complete with the host pool registration, before any part of the existing VM is destroyed.
Please accept the cut down version below to understand what I am trying to do.
I expect the largest number of machine recreations to come from enhancements to the configuration scripts that set up the server on creation. Not all of the commands are expected to be idempotent, and we want the AVD VMs to be ephemeral. If an issue is encountered, the support team is expected to be able to drain a server and destroy it once empty to get a replacement via terraform apply. In a case where the script gets updated, though, we want to be able to replace all VMs quickly in an emergency, or at the very least minimize the nightly maintenance window.
Script Process: parameterized script > gets filled out as a template file > gets stored as an az blob > called by custom script extension > executed on the machine.
VM build process: VM is provisioned > currently 8 extensions get applied one at a time, starting with the domain join, then the custom script extension, followed by several Azure monitoring extensions, and finally the host pool registration extension.
I've been trying to use the create_before_destroy lifecycle feature, but I can't get it to spin up the VM and apply all extensions before it begins removing the host pool registration from the existing VMs. I assume there's a way to do it using triggers, but I'm not sure how to do it in such a way that it always has at least the current number of VMs.
It would also need to be able to stop if it encounters an error on the new VM before destroying the existing VM (or, better yet, be authorized to rebuild VMs if an extension fails partway through).
resource "random_pet" "avd_vm" {
prefix = var.client_name
length = 1
keepers = {
# Generate a new pet name each time we update the setup_host script
source_content = "${data.template_file.setup_host.rendered}"
}
}
data "template_file" "setup_host" {
template = file("${path.module}\\scripts\\setup-host.tpl")
vars = {
storageAccountName = azurerm_storage_account.storage.name
storageAccountKey = azurerm_storage_account.storage.primary_access_key
domain = var.domain
aad_group_name = var.aad_group_name
}
}
resource "azurerm_storage_blob" "setup_host" {
name = "setup-host.ps1"
storage_account_name = azurerm_storage_account.scripts.name
storage_container_name = time_sleep.container_rbac.triggers["name"]
type = "Block"
source_content = data.template_file.setup_host.rendered #"${path.module}\\scripts\\setup-host.ps1"
depends_on = [
azurerm_role_assignment.account1_write,
data.template_file.setup_host,
time_sleep.container_rbac
]
}
data "template_file" "client_r_drive_mapping" {
template = file("${path.module}\\scripts\\client_r_drive_mapping.tpl")
vars = {
storageAccountName = azurerm_storage_account.storage.name
storageAccountKey = azurerm_storage_account.storage.primary_access_key
}
}
resource "azurerm_windows_virtual_machine" "example" {
count = length(random_pet.avd_vm)
name = "${random_pet.avd_vm[count.index].id}"
...
lifecycle {
ignore_changes = [
boot_diagnostics,
identity
]
}
}
resource "azurerm_virtual_machine_extension" "first-domain_join_extension" {
count = var.rdsh_count
name = "${var.client_name}-avd-${random_pet.avd_vm[count.index].id}-domainJoin"
virtual_machine_id = azurerm_windows_virtual_machine.avd_vm.*.id[count.index]
publisher = "Microsoft.Compute"
type = "JsonADDomainExtension"
type_handler_version = "1.3"
auto_upgrade_minor_version = true
settings = <<SETTINGS
{
"Name": "${var.domain_name}",
"OUPath": "${var.ou_path}",
"User": "${var.domain_user_upn}#${var.domain_name}",
"Restart": "true",
"Options": "3"
}
SETTINGS
protected_settings = <<PROTECTED_SETTINGS
{
"Password": "${var.admin_password}"
}
PROTECTED_SETTINGS
lifecycle {
ignore_changes = [settings, protected_settings]
}
depends_on = [
azurerm_virtual_network_peering.out-primary,
azurerm_virtual_network_peering.in-primary,
azurerm_virtual_network_peering.in-secondary
]
}
# Multiple scripts called by ./<scriptname referencing them in follow-up scripts
# https://web.archive.org/web/20220127015539/https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/custom-script-windows
# https://learn.microsoft.com/en-us/azure/virtual-machines/extensions/custom-script-windows#using-multiple-scripts
resource "azurerm_virtual_machine_extension" "second-custom_scripts" {
count = var.rdsh_count
name = "${random_pet.avd_vm[count.index].id}-setup-host"
virtual_machine_id = azurerm_windows_virtual_machine.avd_vm.*.id[count.index]
publisher = "Microsoft.Compute"
type = "CustomScriptExtension"
type_handler_version = "1.10"
auto_upgrade_minor_version = "true"
protected_settings = <<PROTECTED_SETTINGS
{
"storageAccountName": "${azurerm_storage_account.scripts.name}",
"storageAccountKey": "${azurerm_storage_account.scripts.primary_access_key}"
}
PROTECTED_SETTINGS
settings = <<SETTINGS
{
"fileUris": ["https://${azurerm_storage_account.scripts.name}.blob.core.windows.net/scripts/setup-host.ps1","https://${azurerm_storage_account.scripts.name}.blob.core.windows.net/scripts/client_r_drive_mapping.ps1"],
"commandToExecute": "powershell -ExecutionPolicy Unrestricted -file setup-host.ps1"
}
SETTINGS
depends_on = [
azurerm_virtual_machine_extension.first-domain_join_extension,
azurerm_storage_blob.setup_host
]
}
resource "azurerm_virtual_machine_extension" "last_host_extension_hp_registration" {
count = var.rdsh_count
name = "${var.client_name}-${random_pet.avd_vm[count.index].id}-avd_dsc"
virtual_machine_id = azurerm_windows_virtual_machine.avd_vm.*.id[count.index]
publisher = "Microsoft.Powershell"
type = "DSC"
type_handler_version = "2.73"
auto_upgrade_minor_version = true
settings = <<-SETTINGS
{
"modulesUrl": "https://wvdportalstorageblob.blob.core.windows.net/galleryartifacts/Configuration_3-10-2021.zip",
"configurationFunction": "Configuration.ps1\\AddSessionHost",
"properties": {
"HostPoolName":"${azurerm_virtual_desktop_host_pool.pooleddepthfirst.name}"
}
}
SETTINGS
protected_settings = <<PROTECTED_SETTINGS
{
"properties": {
"registrationInfoToken": "${azurerm_virtual_desktop_host_pool_registration_info.pooleddepthfirst.token}"
}
}
PROTECTED_SETTINGS
lifecycle {
ignore_changes = [settings, protected_settings]
}
depends_on = [
azurerm_virtual_machine_extension.second-custom_scripts
]
}
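For reference, the lifecycle combination being attempted would sit on the VM resource roughly like this (a minimal sketch of the attempted approach, not a solution; on its own it still does not guarantee the new VM's extensions finish before the old VM and its host pool registration are destroyed, which is the open question above):
resource "azurerm_windows_virtual_machine" "example" {
  # ... count, name, image and network settings as in the cut down version above ...

  lifecycle {
    # Build the replacement VM before removing the old one. The extensions on
    # the new VM are separate resources, so they may still be in progress when
    # the old VM is destroyed.
    create_before_destroy = true
    ignore_changes        = [boot_diagnostics, identity]
  }
}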

How do I implement a retry pattern with Terraform?

My use case: I need to create an AKS cluster with the Terraform azurerm provider, and then set up a Network Watcher flow log for its NSG.
Note that, like many other AKS resources, the corresponding NSG is not controlled by Terraform. Instead, it's created by Azure indirectly (and asynchronously), so I treat it as data, not a resource.
Also note that Azure will create and use its own NSG even if the AKS cluster is created with a custom-created VNet.
Depending on the particular region and the Azure API gateway, my team has seen up to 40 minute delay between having the AKS created and then the NSG resource visible in the node pool resource group.
If I don't want my Terraform config to fail, I see 3 options:
Run a CLI script that waits for the NSG, make it a null_resource and depend on it
Implement the same with a custom provider
Have a really ugly workaround that implements a retry pattern - below are 10 attempts at 30 seconds each:
data "azurerm_resources" "my_nsg_1" {
resource_group_name = var.clusterNodeResourceGroup
type = "Microsoft.Network/networkSecurityGroups"
}
resource "time_sleep" "my_nsg_sleep1" {
count = length(data.azurerm_resources.my_nsg_1.resources) == 0 ? 1 : 0
create_duration = "30s"
triggers = {
ts = timestamp()
}
}
data "azurerm_resources" "my_nsg_2" {
depends_on = [time_sleep.my_nsg_sleep1]
resource_group_name = var.clusterNodeResourceGroup
type = "Microsoft.Network/networkSecurityGroups"
}
resource "time_sleep" "my_nsg_sleep2" {
count = length(data.azurerm_resources.my_nsg_1.resources) == 0 ? 1 : 0
create_duration = length(data.azurerm_resources.my_nsg_2.resources) == 0 ? "30s" : "0s"
triggers = {
ts = timestamp()
}
}
...
data "azurerm_resources" "my_nsg_11" {
depends_on = [time_sleep.my_nsg_sleep10]
resource_group_name = var.clusterNodeResourceGroup
type = "Microsoft.Network/networkSecurityGroups"
}
// Now azurerm_resources.my_nsg_11 is OK as long as the NSG was created and became visible to the current API Gateway within 5 minutes.
Note that Terraform doesn't allow repeating resources via for_each or count at anything more than the individual resource level. In addition, because it resolves dependencies during the static phase, two sets of resources created with count or for_each cannot have dependencies between individual elements of each other - one list can only depend on the other as a whole, with no circular dependencies allowed, as illustrated below.
E.g. my_nsg[count.index] cannot depend on my_nsg_delay[count.index - 1] while my_nsg_delay[count.index] depends on my_nsg[count.index].
Hence this horrible non-DRY antipattern.
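To illustrate the constraint with hypothetical names based on the example above: depends_on only accepts whole resources, so a chained, per-element dependency cannot be expressed:
resource "time_sleep" "my_nsg_delay" {
  count           = 10
  create_duration = "30s"
}

data "azurerm_resources" "my_nsg" {
  count               = 10
  resource_group_name = var.clusterNodeResourceGroup
  type                = "Microsoft.Network/networkSecurityGroups"

  # depends_on must reference the whole resource; an indexed reference such as
  # time_sleep.my_nsg_delay[count.index - 1] is rejected, so the ten reads
  # cannot be chained one per element and end up written out by hand as above.
  depends_on = [time_sleep.my_nsg_delay]
}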
Is there a better declarative solution so I don't involve a custom provider or a script?
