For some reason I am having a lot of problems with the resource group template deployment resource. We deploy a logic app during creation; however, on a second run, terraform plan detects changes even after setting ignore_changes to all in the lifecycle block.
Unsure if this is normal behavior; any help would be appreciated.
resource "azurerm_resource_group_template_deployment" "deploylogicapp" {
name = "template_deployment"
resource_group_name = azurerm_resource_group.bt_security.name
deployment_mode = "Incremental"
template_content = <<TEMPLATE
{
"ARM template json body"
}
TEMPLATE
lifecycle {
ignore_changes=all
}
}
EDIT
Managed to find the issue.
Added the lifecycle block to the logic app workflow resource:
resource "azurerm_logic_app_workflow" "logicapp" {
name = "azdevops-app"
location = azurerm_resource_group.bt_security.location
resource_group_name = azurerm_resource_group.bt_security.name
lifecycle {
ignore_changes = all
}
}
Then, in the ARM group template deployment resource, I set ignore_changes to template_content instead of all.
Ran terraform plan twice, and the second run did not detect any changes, which is what we wanted.
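For reference, a minimal sketch of that second change, based on the snippets in this post (only the lifecycle block of the template deployment resource changes):

  lifecycle {
    # Ignore only the template body instead of all, so drift in the
    # deployed logic app definition no longer shows up in the plan.
    ignore_changes = [template_content]
  }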
Related
How can I prevent Terraform from destroying and recreating Azure VM extensions? The lifecycle code block isn't working: Terraform insists on destroying the resources and fails when I have the locks enabled. Can someone please tell me where I am going wrong with this?
This is my code
resource "azurerm_virtual_machine_extension" "dsc" {
for_each = var.dsc_agent_name
name = each.key
virtual_machine_id = each.value
publisher = "Microsoft.Powershell"
type = "DSC"
type_handler_version = "2.0"
auto_upgrade_minor_version = "true"
tags = local.tags
lifecycle {
prevent_destroy = true
}
settings = <<SETTINGS
{
"ModulesUrl":"",
"SasToken":"",
"WmfVersion": "latest",
"Privacy": {
"DataCollection": ""
},
"ConfigurationFunction":""
}
SETTINGS
}
You can try to put an ignore section in:
lifecycle {
  prevent_destroy = true
  ignore_changes  = [ VMDiagnosticsSettings ]
}
That way it will ignore any difference between what has been set on the resource in Azure and what is being declared (if anything for this section) in TF.
Try removing the lifecycle block and then run 'terraform plan' - it should then show you which configuration item is causing it to be destroyed / re-created
Managed to resolve this - I basically used ignore_changes on all the properties.
Massive thanks to #MarkoE and AC81
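A minimal sketch of what that can look like on the extension above. The exact property list is an assumption (the post only says all the properties were ignored); the attribute names are taken from the azurerm_virtual_machine_extension resource:

  lifecycle {
    prevent_destroy = true
    # Ignore every argument that Azure or the DSC tooling may rewrite,
    # so the plan no longer wants to destroy and recreate the extension.
    ignore_changes = [
      settings,
      protected_settings,
      type_handler_version,
      tags,
    ]
  }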
I have some Terraform code which works, but I want to be able to ignore the DNS TXT record value, as this is updated externally using another tool (acme.sh). I have tried multiple different kinds of HCL to ignore the value; the Terraform HCL does not fail, it just sets the value back to the original value.
Any help would be appreciated.
resource "azurerm_resource_group" "mydomain-co-uk-dns" {
name = "mydomain.co.uk-dns"
location = "North Europe"
}
resource "azurerm_dns_zone" "mydomaindns" {
name = "mydomain.co.uk"
resource_group_name = azurerm_resource_group.mydomain-co-uk.name
}
resource "azurerm_dns_txt_record" "_acme-challenge-api" {
name = "_acme-challenge.api"
zone_name = azurerm_dns_zone.mydomaindns.name
resource_group_name = azurerm_resource_group.mydomain-co-uk-dns.name
ttl = 300
record {
value = "randomkey-that-changes externally"
}
tags = {
Environment = "acmesh"
}
lifecycle {
ignore_changes = [
record
]
}
}
Thanks
I tried testing with the same code that you have provided and was able to deploy the resources successfully. I then manually changed the value of the record from the portal and applied the Terraform code again; it did not make any changes, it only updated the previous record value in the Terraform state file to the newer value from the portal.
Note: I used Terraform v1.0.5 on windows_amd64 + provider registry.terraform.io/hashicorp/azurerm v2.83.0.
As confirmed by #Lain, the issue was resolved after upgrading azurerm from 2.70.0 to the latest version.
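Since the fix here was a provider upgrade, a minimal sketch of pinning azurerm to a version at or above the one verified above (the exact constraint is an assumption based on the versions mentioned in this thread):

terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      # 2.83.0 is the version tested in the answer above; any later release
      # should also carry the fix.
      version = ">= 2.83.0"
    }
  }
}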
The Azure resources "ASP-IS4-INT-FLUX-VOL-ITA-archive" (App Service Plan) and "AF-IS4-INT-FLUX-VOL-ITA-archive" (Function App) has been deployed there more months ago.
The both Azure resources are configured with the OS Windows.
The terraform code of these resources :
resource "azurerm_app_service_plan" "ASP-VOL-ITA" {
name = "ASP-${var.client}-${var.environment}-${var.project_Flux}-${local.VOL-ITA_client_code}-${local.VOL-ITA_country_code}"
location = module.resource_group_VOL-ITA.out_rg_location
resource_group_name = module.resource_group_VOL-ITA.out_rg_name
kind = "functionapp"
tags = merge(var.default_tags, var.default_VOL-ITA_tags, var.ASP-VOL-ITA_tags)
sku {
tier = "Dynamic"
size = "Y1"
}
lifecycle {
ignore_changes = all
}
}
resource "azurerm_function_app" "AF-VOL-ITA-archive" {
name = "AF-${var.client}-${var.environment}-${var.project_Flux}-${local.VOL-ITA_client_code}-${local.VOL-ITA_country_code}-archive"
location = module.resource_group_VOL-ITA.out_rg_location
resource_group_name = module.resource_group_VOL-ITA.out_rg_name
app_service_plan_id = azurerm_app_service_plan.ASP-VOL-ITA.id
https_only = "true"
version = "~2"
storage_connection_string = module.storage_account_VOL.out_storage_primary_connection_string
tags = merge(var.default_tags, var.default_VOL-ITA_tags, var.AF-VOL-ITA-archive_tags)
lifecycle {
#ignore_changes = all
ignore_changes = [
app_settings,
storage_connection_string
]
}
}
Terraform version: 1.0.1
Azurerm provider: 2.68.0
But for a few weeks now, when I execute a "terraform plan" command, this Function App is marked as "update in-place", although I have changed nothing since the initial deployment.
So I adapted my Terraform code as below:
resource "azurerm_app_service_plan" "ASP-VOL-ITA" {
name = "ASP-${var.client}-${var.environment}-${var.project_Flux}-${local.VOL-ITA_client_code}-${local.VOL-ITA_country_code}"
location = module.resource_group_VOL-ITA.out_rg_location
resource_group_name = module.resource_group_VOL-ITA.out_rg_name
# IN COMMENT
#kind = "functionapp"
# ENABLE TWO BELOW LINES
kind = "Windows"
reserved = false
tags = merge(var.default_tags, var.default_VOL-ITA_tags, var.ASP-VOL-ITA_tags)
sku {
tier = "Dynamic"
size = "Y1"
}
lifecycle {
ignore_changes = all
}
}
resource "azurerm_function_app" "AF-VOL-ITA-archive" {
name = "AF-${var.client}-${var.environment}-${var.project_Flux}-${local.VOL-ITA_client_code}-${local.VOL-ITA_country_code}-archive"
location = module.resource_group_VOL-ITA.out_rg_location
resource_group_name = module.resource_group_VOL-ITA.out_rg_name
app_service_plan_id = azurerm_app_service_plan.ASP-VOL-ITA.id
https_only = "true"
version = "~2"
storage_connection_string = module.storage_account_VOL.out_storage_primary_connection_string
# ADDING OF THIS LINE
os_type = ""
tags = merge(var.default_tags, var.default_VOL-ITA_tags, var.AF-VOL-ITA-archive_tags)
lifecycle {
#ignore_changes = all
ignore_changes = [
app_settings,
storage_connection_string,
os_type
]
}
}
I ran more tests to try to fix this behavior. I tried:
- In the Function App resource declaration, adding the "os_type" argument with an empty value, and in the App Service Plan resource setting "kind" to "Windows" and "reserved" to "false" -> the result of "terraform plan" is that the Function App resource is still marked "update in-place", with the addition of the "os_type" argument with an empty value.
- In the Function App resource declaration, commenting out the line with the "os_type" argument, and in the App Service Plan resource setting "kind" to "Windows" and "reserved" to "false" -> the result of "terraform plan" is the same: the Function App resource is marked "update in-place", with the addition of the "os_type" argument with an empty value.
- Upgrading the azurerm provider from 2.50.0 to 2.68.0 -> the result is the same as in the scenarios above.
Here is a screenshot link which shows how Terraform marks the Function App resource: https://postimg.cc/MvsfmrT9
I took the Terraform files of my production environment and copied them into a laboratory environment to run tests, where I did the following:
- I deployed the "Function App" and "App Service Plan" Azure resources in a lab Azure subscription with the same Terraform configuration (same .tf files, same version of the azurerm provider).
- I compared the JSON structure of the "Function App" resource in the production environment with the one in the laboratory environment: I have the same JSON lines in both environments.
So I don't understand why I don't get the same behavior between the production environment and the laboratory environment.
Has anyone encountered the same behavior?
What would the consequence be for the availability of the Function App if I execute "terraform apply" and it applies this change (adding the "os_type" argument with an empty value)?
I'm new to Terraform and trying to wrap my head around the use of output variables. We are on AKS, and I'm deploying the following resources: resource group, Log Analytics workspace, and Azure Kubernetes Service. When Log Analytics is deployed, I capture the workspace ID into an output variable. Now, when Terraform deploys Kubernetes, it needs to know the workspace ID. How can I pass the output value to the addon_profile (last block in the code below)?
Error:
environment = "${log_analytics_workspace_id.value}"
A managed resource "log_analytics_workspace_id" "value" has not been declared in the root module.
Code:
resource "azurerm_resource_group" "test" {
name = "${var.log}"
location = "${var.location}"
}
resource "azurerm_log_analytics_workspace" "test" {
name = "${var.logname}"
location = "${azurerm_resource_group.loganalytics.location}"
resource_group_name = "${azurerm_resource_group.loganalytics.name}"
sku = "PerGB2018"
retention_in_days = 30
}
**output "log_analytics_workspace_id" {
value = "${azurerm_log_analytics_workspace.test.workspace_id}"
}**
....................................................
addon_profile {
oms_agent {
enabled = true
**log_analytics_workspace_id = "${log_analytics_workspace_id.value}"**
}
}
Terraform's output values are like the "return values" of a module. In order to declare and use the log_analytics_workspace_id output value, you would need to put all of the code for the creation of the resource group, log analytics workspace, and Azure Kubernetes infrastructure into a single Terraform module, and then reference the output value from outside of the module:
# Declare your module here, which contains the creation code for all your
# Azure infrastructure plus the output variable:
module "azure_analytics" {
  source = "git::ssh://git@github.com..."
}

# Now you can reference the output variable in your addon_profile from outside the module:
addon_profile {
  oms_agent {
    enabled                    = true
    log_analytics_workspace_id = "${module.azure_analytics.log_analytics_workspace_id}"
  }
}
On the other hand, if you just want to use the workspace_id value from your azurerm_log_analytics_workspace within the same code, just reference it like azurerm_log_analytics_workspace.test.workspace_id.
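A minimal sketch of that in-module case, assuming the AKS cluster is declared alongside the workspace (the cluster's other required arguments are omitted here; note that the oms_agent addon takes the workspace's full Azure resource ID, so .id is used in this sketch rather than the workspace GUID):

resource "azurerm_kubernetes_cluster" "test" {
  # ... name, location, resource_group_name, default_node_pool, etc. omitted ...

  addon_profile {
    oms_agent {
      enabled = true
      # Direct reference to the workspace defined in the same module;
      # no output variable is needed for this.
      log_analytics_workspace_id = azurerm_log_analytics_workspace.test.id
    }
  }
}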
I have a VNET/App Service integration requirement. This requires the creation of a VPN gateway.
Once the integration is completed, a certificate (generated by the App Service) is associated with the point-to-site configuration of the VPN gateway.
If I need to run Terraform once again because I need to perform some changes, it detects that the VPN gateway must be destroyed, because in Azure it has a certificate!
I thought about using the count parameter on the VPN gateway resource, but if I set count = 0 based on a variable I get the same problem.
Any piece of advice?
Try adding an ignore_changes statement in the lifecycle of your resource. This is an example of what I use for some instances:
lifecycle {
  ignore_changes = [
    "user_data",
    "instance_type",
    "root_block_device.0.volume_size",
    "ebs_optimized",
    "tags",
  ]
}
It is set in the resource definition as follows (just to give an idea of how to place it in the definition):
resource "aws_instance" "worker_base" {
count = "..."
instance_type = "..."
user_data = "..."
lifecycle {
ignore_changes = [
"user_data",
"instance_type",
"root_block_device.0.volume_size",
"ebs_optimized",
"tags",
]
}
tags = {
Name = "..."
}
root_block_device {
delete_on_termination = "..."
volume_size = "..."
volume_type = "..."
}
}
Now, the terraform plan output should show you which parameter changed in a way that forces a new resource. Try adding that parameter to the ignore_changes list...
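Translated to the Azure case in the question, a sketch of the same idea (this assumes the gateway is an azurerm_virtual_network_gateway resource and that the certificate added by the App Service integration shows up under its point-to-site settings; take the attribute that actually drifts from your own plan output):

resource "azurerm_virtual_network_gateway" "vpn" {
  # ... name, location, resource_group_name, type, sku, ip_configuration, etc. ...

  lifecycle {
    # Ignore the point-to-site configuration, where the externally added
    # client root certificate lives, so the gateway is not replaced.
    ignore_changes = [
      vpn_client_configuration,
    ]
  }
}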