Terraform - How to avoid destroy and create with a single state file - Azure

I have Terraform code that creates a Stream Analytics job, plus an input and an output for the job.
Below is my terraform code:
provider "azurerm" {
version = "=1.44"
}
resource "azurerm_stream_analytics_job" "test_saj" {
name = "test-stj"
resource_group_name = "myrgname"
location = "Southeast Asia"
compatibility_level = "1.1"
data_locale = "en-US"
events_late_arrival_max_delay_in_seconds = 60
events_out_of_order_max_delay_in_seconds = 50
events_out_of_order_policy = "Adjust"
output_error_policy = "Drop"
streaming_units = 3
tags = {
environment = "test"
}
transformation_query = var.query
}
resource "azurerm_stream_analytics_output_blob" "mpl_saj_op_jk_blob" {
name = var.saj_jk_blob_output_name
stream_analytics_job_name = "test-stj"
resource_group_name = "myrgname"
storage_account_name = "mystaname"
storage_account_key = "mystakey"
storage_container_name = "testupload"
path_pattern = "myfolder/{day}"
date_format = "yyyy-MM-dd"
time_format = "HH"
depends_on = [azurerm_stream_analytics_job.test_saj]
serialization {
type = "Json"
encoding = "UTF8"
format = "LineSeparated"
}
}
resource "azurerm_stream_analytics_stream_input_eventhub" "mpl_saj_ip_eh" {
name = var.saj_joker_event_hub_name
stream_analytics_job_name = "test-stj"
resource_group_name = "myrgname"
eventhub_name = "myehname"
eventhub_consumer_group_name = "myehcgname"
servicebus_namespace = "myehnamespacename"
shared_access_policy_name = "RootManageSharedAccessKey"
shared_access_policy_key = "ehnamespacekey"
serialization {
type = "Json"
encoding = "UTF8"
}
depends_on = [azurerm_stream_analytics_job.test_saj]
}
Following is my tfvars input file:
query=<<EOT
myqueryhere
EOT
saj_jk_blob_output_name="outputtoblob01"
saj_joker_event_hub_name="inputventhub01"
The creation works fine. My problem is when I want to create a new input and output for the same Stream Analytics job: I changed only the name values in the tfvars file and ran terraform apply (in the same directory where the first apply was run, so against the same state file).
Terraform is replacing the existing input and output with the new ones, which is not what I want; I want to keep both the old and the new. This use case worked when I imported the existing Stream Analytics job with terraform import in a completely different folder and used the same code. But is there a way to do this without terraform import? Can it be done with a single state file itself?

State is how Terraform knows which Azure resources to add, update, or delete. What you want cannot be done with a single state file unless you declare additional resources with different names in your configuration files.
For example, if you want to create two virtual networks, you can declare the resources directly like this, or use the count meta-argument at the resource level to loop (a count-based sketch follows the example).
resource "azurerm_virtual_network" "example" {
name = "examplevnet1"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
address_space = ["10.1.0.0/16"]
}
resource "azurerm_virtual_network" "example" {
name = "examplevnet2"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
address_space = ["10.2.0.0/16"]
}
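If the two networks differ only by an index, a count-based variant of the same idea might look like this (a sketch; the variable name and address ranges are illustrative):
variable "vnet_count" {
  type    = number
  default = 2
}
resource "azurerm_virtual_network" "example_counted" {
  count               = var.vnet_count
  # count.index is zero-based, so this yields examplevnet1, examplevnet2, ...
  name                = "examplevnet${count.index + 1}"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  address_space       = ["10.${count.index + 1}.0.0/16"]
}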
When working with Terraform in a team, you can use remote state to write the state data to a remote data store, which can then be shared between all members of a team. It's recommended to store Terraform state in Azure Storage.
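A minimal backend block for that setup could look like the following (the storage account, container and key names are placeholders):
terraform {
  backend "azurerm" {
    storage_account_name = "mytfstatestorage"
    container_name       = "tfstate"
    key                  = "streamanalytics.terraform.tfstate"
  }
}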
For more information, you can read about the Terraform workflow in this blog.

Related

How Do I Skip The Creation Of A Terraform Resource?

My terraform script is setup up for a web app in production.
As part of that I have Azure DDoS protection enabled.
However, this is really expensive compared to the rest of the infrastructure.
For this reason, I don't want to create it for my development environment.
I run terraform using Azure pipelines so I would like to configure the pipeline to optionally not create it. e.g. with a variable in the pipeline
Is there an option I can pass to terraform to skip this resource?
Assuming there is an option and I can skip the ddos resource, will the creation of the vnet fail in the snippet below if it doesn't exist?
#---------------------------------------
# DDOS Protection Plan Definition
#---------------------------------------
resource "azurerm_network_ddos_protection_plan" "ddos" {
name = var.ddos_plan_name
location = var.location
resource_group_name = azurerm_resource_group.rg.name
}
#---------------------------------------
# vNet Definition
#---------------------------------------
resource "azurerm_virtual_network" "vnet" {
name = lower("${local.vNet_id}-${var.location_id}-${var.env_id}-1")
resource_group_name = azurerm_resource_group.rg.name
location = var.location
address_space = var.address_space
ddos_protection_plan {
id = azurerm_network_ddos_protection_plan.ddos.id
enable = true
}
depends_on = [
azurerm_resource_group.rg
]
}
The way I would do it is to use the count meta-argument [1]. For example, create a variable named create_ddos_protection_plan, give it type bool, and set its default to false:
variable "create_ddos_protection_plan" {
description = "Whether to create DDoS resource or not."
type = bool
default = false
}
resource "azurerm_network_ddos_protection_plan" "ddos" {
count = var.create_ddos_protection_plan ? 1 : 0
name = var.ddos_plan_name
location = var.location
resource_group_name = azurerm_resource_group.rg.name
}
Later on if you decide you want to create it, you can set the value of the variable to true or remove the count meta-argument completely.
With the current setup, the vnet creation would fail if the DDoS resource does not exist, since the vnet still references it unconditionally.
[1] https://www.terraform.io/language/meta-arguments/count
You can use the count meta-argument to dynamically choose how many instances of a particular resource to create, including possibly choosing to create zero of them, which therefore effectively disables the resource altogether:
variable "enable_ddos_protection" {
type = bool
default = true
}
resource "azurerm_network_ddos_protection_plan" "ddos" {
count = var.enable_ddos_protection ? 1 : 0
name = var.ddos_plan_name
location = var.location
resource_group_name = azurerm_resource_group.rg.name
}
Since the number of instances of this resource is now dynamic, azurerm_network_ddos_protection_plan.ddos will appear as a list of objects instead of a single object. Therefore you'll also need to change how you refer to it in the virtual network configuration.
The most direct way to declare that would be to use a dynamic block to tell Terraform to generate one ddos_protection_plan block per instance of that resource, so there will be no blocks of that type if there are no protection plan instances:
resource "azurerm_virtual_network" "vnet" {
name = lower("${local.vNet_id}-${var.location_id}-${var.env_id}-1")
resource_group_name = azurerm_resource_group.rg.name
location = var.location
address_space = var.address_space
dynamic "ddos_protection_plan" {
for_each = azurerm_network_ddos_protection_plan.ddos
content {
id = ddos_protection_plan.value.id
enable = true
}
}
}
(I removed the depends_on declaration here because it was redundant with the reference in the resource_group_name argument, but the dynamic block is the main point of this example.)

Terraform Azurerm - Data Export Rule

My query is related to azurerm_log_analytics_data_export_rule. I have created a Log Analytics workspace and an Event Hub in the portal, following all the steps in the link below.
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/log_analytics_data_export_rule
Both terraform plan and apply are successful, but the expected tables are not created in the Event Hub. For example (as per the above link), the “Heartbeat” table is not created in the Event Hub after the export rule is created. The Microsoft documentation below mentions that the tables will be created automatically in the Event Hub or storage account once the export rule is created successfully.
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/logs-data-export?tabs=portal
It would be helpful to get some info on this rule.
The HashiCorp template you are following creates a new resource group, storage account, Log Analytics workspace and an export rule.
Because that Terraform template stands up a brand-new environment, there are no Heartbeat logs present by default, which is why no Heartbeat container was created.
When we tested this in our environment, exporting Heartbeat logs from a Log Analytics workspace to a storage account, it took nearly 30 minutes for the data to show up in the storage account.
Data completeness
Data export will continue to retry sending data for up to 30 minutes in the event that the destination is unavailable. If it's still unavailable after 30 minutes then data will be discarded until the destination becomes available.
provider "azurerm" {
features{}
}
resource "azurerm_resource_group" "data_export_resource_group" {
name = "test_data_export_rg"
location = "centralus"
}
resource "azurerm_log_analytics_workspace" "data_export_log_analytics_workspace" {
name = "testdataexportlaw"
location = azurerm_resource_group.data_export_resource_group.location
resource_group_name = azurerm_resource_group.data_export_resource_group.name
sku = "PerGB2018"
retention_in_days = 30
}
resource "azurerm_storage_account" "data_export_azurerm_storage_account" {
name = "testdataexportazurermsa"
resource_group_name = azurerm_resource_group.data_export_resource_group.name
location = azurerm_resource_group.data_export_resource_group.location
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_eventhub_namespace" "data_export_azurerm_eventhub_namespace" {
name = "testdataexportehnamespace"
location = azurerm_resource_group.data_export_resource_group.location
resource_group_name = azurerm_resource_group.data_export_resource_group.name
sku = "Standard"
capacity = 1
tags = {
environment = "Production"
}
}
resource "azurerm_eventhub" "data_export_eventhub" {
name = "testdataexporteh1"
namespace_name = azurerm_eventhub_namespace.data_export_azurerm_eventhub_namespace.name
resource_group_name = azurerm_resource_group.data_export_resource_group.name
partition_count = 2
message_retention = 1
}
resource "azurerm_log_analytics_data_export_rule" "example" {
name = "testdataExport1"
resource_group_name = azurerm_resource_group.data_export_resource_group.name
workspace_resource_id = azurerm_log_analytics_workspace.data_export_log_analytics_workspace.id
destination_resource_id = azurerm_eventhub.data_export_eventhub.id
table_names = ["Usage","StorageBlobLogs"]
enabled = true
}
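For reference, the storage-account variant we tested (where the Heartbeat data appeared after roughly 30 minutes) differs only in the destination and table list; a sketch, with the rule name being illustrative:
resource "azurerm_log_analytics_data_export_rule" "heartbeat_to_storage" {
  name                    = "testdataExport2"
  resource_group_name     = azurerm_resource_group.data_export_resource_group.name
  workspace_resource_id   = azurerm_log_analytics_workspace.data_export_log_analytics_workspace.id
  destination_resource_id = azurerm_storage_account.data_export_azurerm_storage_account.id
  table_names             = ["Heartbeat"]
  enabled                 = true
}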

Terraform reports a change to Application Insights key on every plan that is run

I have several Azure resources that are created using the for_each property and then those resources have an Application Insights resource created using for_each as well.
Here is the code that creates the azurerm_application_insights:
resource "azurerm_application_insights" "applicationInsights" {
for_each = toset(keys(merge(local.appServices, local.functionApps)))
name = lower(join("-", ["wb", var.deploymentEnvironment, var.location, each.key, "ai"]))
location = var.location
resource_group_name = azurerm_resource_group.rg.name
application_type = "web"
lifecycle {
ignore_changes = [tags]
}
}
I've noticed that every time we run a terraform plan against some environments, we are always seeing Terraform report a "change" to the APPINSIGHTS_INSTRUMENTATIONKEY value. When I compare this value in the app settings key/value list to the actual AI instrumentation key that was created for it, it does match.
Terraform will perform the following actions:
# module.primaryRegion.module.functionapp["fnapp1"].azurerm_function_app.fnapp will be updated in-place
~ resource "azurerm_function_app" "fnapp" {
~ app_settings = {
# Warning: this attribute value will be marked as sensitive and will
# not display in UI output after applying this change
~ "APPINSIGHTS_INSTRUMENTATIONKEY" = (sensitive)
# (1 unchanged element hidden)
Is this a common issue with other people? I would think that the instrumentation key would never change especially since Terraform is what created all of these Application Insights resources and assigns it to each application.
This is how I associate each Application Insights resource to their appropriate application with a for_each property
module "webapp" {
for_each = local.appServices
source = "../webapp"
name = lower(join("-", ["wb", var.deploymentEnvironment, var.location, each.key, "app"]))
location = var.location
resource_group_name = azurerm_resource_group.rg.name
app_service_plan_id = each.value.app_service_plan_id
app_settings = merge({"APPINSIGHTS_INSTRUMENTATIONKEY" = azurerm_application_insights.applicationInsights[each.key].instrumentation_key}, each.value.app_settings)
allowed_origins = each.value.allowed_origins
deploymentEnvironment = var.deploymentEnvironment
}
I'm wondering if the merge is just reordering the list of key/values in the app_settings for the app, and Terraform detects that as some kind of change and the value itself isn't changing. This is the only way I know how to assign a bunch of Application Insights resources to many web apps using for_each to reduce configuration code.
To solve the issue, set the instrumentation key via the site_config block instead of app_settings.
Example
resource "azurerm_windows_function_app" "function2" {
provider = azurerm.private
name = local.private.functionapps.function2.name
resource_group_name = local.private.rg.app.name
location = local.private.location
storage_account_name = local.private.functionapps.storageaccount.name
storage_account_access_key = azurerm_storage_account.function_apps_storage.primary_access_key
service_plan_id = azurerm_service_plan.app_service_plan.id
virtual_network_subnet_id = lookup(azurerm_subnet.subnets, "appservice").id
https_only = true
site_config {
application_insights_key = azurerm_application_insights.appinisghts.instrumentation_key
}
}
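With the key supplied through site_config inside the module, the APPINSIGHTS_INSTRUMENTATIONKEY entry no longer needs to be merged into app_settings, so the module call from the question could shrink to something like this (a sketch, assuming the module sets the key internally):
module "webapp" {
  for_each              = local.appServices
  source                = "../webapp"
  name                  = lower(join("-", ["wb", var.deploymentEnvironment, var.location, each.key, "app"]))
  location              = var.location
  resource_group_name   = azurerm_resource_group.rg.name
  app_service_plan_id   = each.value.app_service_plan_id
  # the instrumentation key is now set via site_config inside the module,
  # so it is no longer merged into app_settings here
  app_settings          = each.value.app_settings
  allowed_origins       = each.value.allowed_origins
  deploymentEnvironment = var.deploymentEnvironment
}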

How do I make terraform skip that block while creating multiple resources in loop from a CSV file?

Hi I am trying to create a Terraform script which will take inputs from the user in the form of a CSV file and create multiple Azure resources.
For example, if the user wants to create ResourceGroup > Vnet > Subnet in bulk, they will provide input in CSV format as below:
resourcegroup,RG_location,RG_tag,domainname,DNS_Zone_tag,virtualnetwork,VNET_location,addressspace
csvrg1,eastus2,Terraform RG,test.sd,Terraform RG,csvvnet1,eastus2,10.0.0.0/16,Terraform VNET,subnet1,10.0.0.0/24
csvrg2,westus,Terraform RG2,test2.sd,Terraform RG2,csvvnet2,westus,172.0.0.0/8,Terraform VNET2,subnet1,171.0.0.0/24
I have written the following working main.tf file:
# Configure the Microsoft Azure Provider
provider "azurerm" {
version = "=1.43.0"
subscription_id = var.subscription
tenant_id = var.tenant
client_id = var.client
client_secret = var.secret
}
#Decoding the csv file
locals {
vmcsv = csvdecode(file("${path.module}/computelanding.csv"))
}
# Create a resource group if it doesn’t exist
resource "azurerm_resource_group" "myterraformgroup" {
count = length(local.vmcsv)
name = local.vmcsv[count.index].resourcegroup
location = local.vmcsv[count.index].RG_location
tags = {
environment = local.vmcsv[count.index].RG_tag
}
}
# Create a DNS Zone
resource "azurerm_dns_zone" "dnsp-private" {
count = 1
name = local.vmcsv[count.index].domainname
resource_group_name = local.vmcsv[count.index].resourcegroup
depends_on = [azurerm_resource_group.myterraformgroup]
tags = {
environment = local.vmcsv[count.index].DNS_Zone_tag
}
}
To be continued....
The issue I am facing: for the second resource group the user doesn't want a particular resource type. Suppose the user wants to skip the DNS zone for the resource group csvrg2. How do I make Terraform skip that block?
Edit: What I am trying to achieve is: based on some condition in the CSV file, do not create the azurerm_dns_zone resource for the resource group csvrg2.
I have provided an example of the CSV file, how it may look like below:
resourcegroup,RG_location,RG_tag,DNS_required,domainname,DNS_Zone_tag,virtualnetwork,VNET_location,addressspace
csvrg1,eastus2,Terraform RG,1,test.sd,Terraform RG,csvvnet1,eastus2,10.0.0.0/16,Terraform VNET,subnet1,10.0.0.0/24
csvrg2,westus,Terraform RG2,0,test2.sd,Terraform RG2,csvvnet2,westus,172.0.0.0/8,Terraform VNET2,subnet1,171.0.0.0/24
You already had the right idea with depends_on. However, you're using count inside, which, as I understand it, means that once the first resource [0] is created, Terraform considers the dependency satisfied and carries on.
I found this post with a workaround which you might be able to try:
https://github.com/hashicorp/terraform/issues/15285#issuecomment-447971852
That basically tells us to create a null_resource like in that example:
variable "instance_count" {
default = 0
}
resource "null_resource" "a" {
count = var.instance_count
}
resource "null_resource" "b" {
depends_on = [null_resource.a]
}
In your example, it might look like this:
# Create a resource group if it doesn’t exist
resource "azurerm_resource_group" "myterraformgroup" {
count = length(local.vmcsv)
name = local.vmcsv[count.index].resourcegroup
location = local.vmcsv[count.index].RG_location
tags = {
environment = local.vmcsv[count.index].RG_tag
}
}
# Create a DNS Zone
resource "azurerm_dns_zone" "dnsp-private" {
count = 1
name = local.vmcsv[count.index].domainname
resource_group_name = local.vmcsv[count.index].resourcegroup
depends_on = [null_resource.example]
tags = {
environment = local.vmcsv[count.index].DNS_Zone_tag
}
}
resource "null_resource" "example" {
...
depends_on = [azurerm_resource_group.myterraformgroup[length(local.vmcsv) - 1]]
}
or, depending on your Terraform version (0.12+, which you appear to be using judging by your syntax):
# Create a resource group if it doesn’t exist
resource "azurerm_resource_group" "myterraformgroup" {
count = length(local.vmcsv)
name = local.vmcsv[count.index].resourcegroup
location = local.vmcsv[count.index].RG_location
tags = {
environment = local.vmcsv[count.index].RG_tag
}
}
# Create a DNS Zone
resource "azurerm_dns_zone" "dnsp-private" {
count = 1
name = local.vmcsv[count.index].domainname
resource_group_name = local.vmcsv[count.index].resourcegroup
depends_on = [azurerm_resource_group.myterraformgroup[length(local.vmcsv) - 1]]
tags = {
environment = local.vmcsv[count.index].DNS_Zone_tag
}
}
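If you would rather drive this from the DNS_required column in your edited CSV, another option might be to filter the decoded rows into a local and use its length as the count (a sketch; csvdecode yields strings, hence the "1" comparison):
locals {
  dns_rows = [for row in local.vmcsv : row if row.DNS_required == "1"]
}
resource "azurerm_dns_zone" "dnsp-private" {
  count               = length(local.dns_rows)
  name                = local.dns_rows[count.index].domainname
  resource_group_name = local.dns_rows[count.index].resourcegroup
  depends_on          = [azurerm_resource_group.myterraformgroup]
  tags = {
    environment = local.dns_rows[count.index].DNS_Zone_tag
  }
}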
I hope that helps.
Greetings

Could not read output attribute from remote state datasource

I am new to Terraform, so I will explain as best I can. Terraform will not read the output value from the state file and use it in another configuration.
I have searched the internet to see whether anyone else has had this problem and how they fixed it.
###vnet.tf
#Remote State pulling data from bastion resource group state
data "terraform_remote_state" "network" {
backend = "azurerm"
config = {
storage_account_name = "terraformstatetracking"
container_name = "bastionresourcegroups"
key = "terraform.terraformstate"
}
}
#creating virtual network and putting that network in resource group created by bastion.tf file
module "quannetwork" {
source = "Azure/network/azurerm"
resource_group_name = "data.terraform_remote_state.network.outputs.quan_netwk"
location = "centralus"
vnet_name = "quan"
address_space = "10.0.0.0/16"
subnet_prefixes = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
subnet_names = ["subnet1", "subnet2", "subnet3"]
tags = {
environment = "quan"
costcenter = "it"
}
}
terraform {
backend "azurerm" {
storage_account_name = "terraformstatetracking"
container_name = "quannetwork"
key = "terraform.terraformstate"
}
}
###resourcegroups.tf
# Create a resource group
#Bastion
resource "azurerm_resource_group" "cm" {
name = "${var.prefix}cm.RG"
location = "${var.location}"
tags = "${var.tags}"
}
#Bastion1
resource "azurerm_resource_group" "network" {
name = "${var.prefix}network.RG"
location = "${var.location}"
tags = "${var.tags}"
}
#bastion2
resource "azurerm_resource_group" "storage" {
name = "${var.prefix}storage.RG"
location = "${var.location}"
tags = "${var.tags}"
}
terraform {
backend "azurerm" {
storage_account_name = "terraformstatetracking"
container_name = "bastionresourcegroups"
key = "terraform.terraformstate"
}
}
###outputs.tf
output "quan_netwk" {
description = "Quan Network Resource Group"
value = "${azurerm_resource_group.network.id}"
}
When running the vnet.tf code, it should read the output from outputs.tf, which is stored in the state file in the Azure backend storage account, and use that value for resource_group_name in the quannetwork module. Instead it creates a resource group literally named data.terraform_remote_state.network.outputs.quan_netwk. Any help would be greatly appreciated.
First, resource_group_name in your quannetwork module expects the resource group name as a string, not the resource group ID (your output exposes azurerm_resource_group.network.id).
Second, to reference a value from the remote state, do not simply wrap the expression in double quotes; use interpolation syntax:
resource_group_name = "${data.terraform_remote_state.network.outputs.quan_netwk}"
