Terraform resource changes configuration items every time it's deployed - terraform

I have worked with both the AWS and Azure providers in Terraform, and both times I have experienced an issue with "toggling" configuration items.
My Terraform resources look like this:
resource "azurerm_resource_group" "sample" {
name = "sample"
location = "uksouth"
}
resource "azurerm_storage_account" "sample" {
name = "samplestackoverflow"
resource_group_name = azurerm_resource_group.sample.name
location = azurerm_resource_group.sample.location
account_tier = "Standard"
account_replication_type = "LRS"
min_tls_version = "TLS1_2"
}
resource "azurerm_service_plan" "sample" {
name = "sample"
resource_group_name = azurerm_resource_group.sample.name
location = azurerm_resource_group.sample.location
os_type = "Linux"
sku_name = "Y1"
}
resource "azurerm_linux_function_app" "sample" {
name = "samplestackoverflow"
resource_group_name = azurerm_resource_group.sample.name
location = azurerm_resource_group.sample.location
storage_account_name = azurerm_storage_account.sample.name
storage_account_access_key = azurerm_storage_account.sample.primary_access_key
service_plan_id = azurerm_service_plan.sample.id
https_only = true
client_certificate_mode = "Required"
functions_extension_version = "~4"
site_config {
application_stack {
python_version = "3.8"
}
}
}
The issue is that every time I run terraform apply and there are changes to be made (for example, changing https_only from true to false), the site_config item is removed. If I then run terraform apply again immediately after those changes are made, the site_config that disappeared is re-added. The output looks like this:
  ~ site_config {
        # (33 unchanged attributes hidden)

      + application_stack {
          + python_version              = "3.8"
          + use_dotnet_isolated_runtime = false
        }
    }
As I mentioned, this also happens with other providers and resources (I remember it happening to me with AWS API Gateway too). I can of course work around this by running terraform apply twice every time, but I was wondering if there is something that could be done here?
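One thing I have considered (but not verified) is suppressing the toggling with a lifecycle rule on the nested block, roughly like this:
resource "azurerm_linux_function_app" "sample" {
  # ... same arguments as above ...

  lifecycle {
    ignore_changes = [
      # ignore drift in the nested block that keeps disappearing and reappearing
      site_config[0].application_stack,
    ]
  }
}
However, that would also hide genuine changes to python_version, so it feels like a workaround rather than a fix.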

Related

Terraform Azurerm: Create blob if not exists

I have Terraform code that creates a storage account, container, and block blob. Is it possible to configure it so that the block blob is created only if it doesn't already exist?
When re-running Terraform I wouldn't want to replace the blob if it is already there, as the content might have been manually modified and I would like to keep it.
Any tips? The only alternative I could think of is running a PowerShell/Bash script during later deployment steps that would create the file if needed, but I am curious whether this can be done with Terraform alone.
locals {
  storage_account_name_teast = format("%s%s", local.main_pw_prefix_short, "teast")
}

resource "azurerm_storage_account" "teaststorage" {
  name                            = local.storage_account_name_teast
  resource_group_name             = azurerm_resource_group.main.name
  location                        = var.location
  account_tier                    = var.account_tier
  account_replication_type       = var.account_replication_type
  allow_nested_items_to_be_public = false
  min_tls_version                 = "TLS1_2"

  network_rules {
    default_action = "Deny"
    bypass = [
      "AzureServices"
    ]
    virtual_network_subnet_ids = []
    ip_rules                   = local.ip_rules
  }

  tags = var.tags
}

resource "azurerm_storage_container" "teastconfig" {
  name                  = "config"
  storage_account_name  = azurerm_storage_account.teaststorage.name
  container_access_type = "private"
}

resource "azurerm_storage_blob" "teastfeaturetoggle" {
  name                   = "featureToggles.json"
  storage_account_name   = azurerm_storage_account.teaststorage.name
  storage_container_name = azurerm_storage_container.teastconfig.name
  type                   = "Block"
  source                 = "vars-pr-default-toggles.json"
}
After scanning through the terraform plan output I figured out it was forcing a blob replacement because of:
content_md5 = "9a95db04fb1ff3abcd7ff81fcfb96307" -> null # forces replacement
I added a lifecycle block to the blob resource to prevent it:
resource "azurerm_storage_blob" "teastfeaturetoggle" {
name = "featureToggles.json"
storage_account_name = azurerm_storage_account.teaststorage.name
storage_container_name = azurerm_storage_container.teastconfig.name
type = "Block"
source = "vars-pr-default-toggles.json"
lifecycle {
ignore_changes = [
content_md5,
]
}
}

How to set attribute 'cross_tenant_replication_enabled' in terraform

I am creating a storage account using Terraform and want to set cross_tenant_replication_enabled to false:
data "azurerm_resource_group" "data_resource_group" {
name = var.resource_group_name
}
resource "azurerm_storage_account" "example_storage_account" {
name = var.storage_account_name
resource_group_name = data.azurerm_resource_group.data_resource_group.name #(Existing resource group)
location = var.location
account_tier = "Standard"
account_replication_type = "LRS"
allow_nested_items_to_be_public = false
cross_tenant_replication_enabled = false
identity {
type = "SystemAssigned"
}
}
I am getting the below error:
Error: Unsupported argument
on ceft_azure/main.tf line 55, in resource "azurerm_storage_account" "example_storage_account":
55: cross_tenant_replication_enabled = false
An argument named "cross_tenant_replication_enabled" is not expected here.
How can I set the attribute value to false?
I tried setting the attribute (cross_tenant_replication_enabled = false) in the storage container block, but it didn't work.
I am able to create the storage account using Terraform and want to set cross_tenant_replication_enabled to false.
Root cause: the AzureRM provider version currently in use does not support cross-tenant replication. Use azurerm >= 3.0.1.
Update the version in the provider requirements:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">=3.0.1"
    }
  }
}
Here is the code snippet.
Step 1:
Run the below command:
terraform init -upgrade
Step 2:
Copy the below code into the main .tf file:
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "example" {
name = "rg_swarna-example-resources"
location = "West Europe"
}
resource "azurerm_storage_account" "example" {
name = "swarnastorageaccountname"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
allow_nested_items_to_be_public = false
cross_tenant_replication_enabled = false
identity {
type = "SystemAssigned"
}
tags = {
environment = "staging"
}
}
Step 3:
Run the below commands:
terraform plan
terraform apply -auto-approve

Create main.tf resources only when a variable is set to true in the vars.tf file

I usually have one generic main.tf file that is the basis for all deployments to our environments (DEV/STAGING/LIVE), and one parameter .tf file for each of those environments.
There is always a requirement to have some more expensive Azure options enabled in the STAGING and LIVE environments over what DEV might have - in my example it's enabling Azure Defender for SQL and the extended auditing functions for Azure SQL Server (PaaS).
This is a portion of my main.tf file that is generic...
# Define SQL Server
resource "azurerm_mssql_server" "example" {
  name                          = var.azsqlserver1name
  resource_group_name           = azurerm_resource_group.example.name
  location                      = azurerm_resource_group.example.location
  version                       = var.azsqlserver1version
  administrator_login           = var.azsqlserver1sauser
  administrator_login_password  = random_password.sql-password.result
  public_network_access_enabled = "true" # set to false with vNet integration
}

# Define Storage Account and container for SQL Threat Detection Policy Audit Logs
resource "azurerm_storage_account" "example" {
  name                      = var.azsaname1
  resource_group_name       = azurerm_resource_group.example.name
  location                  = azurerm_resource_group.example.location
  account_tier              = var.azsatier1
  account_replication_type  = var.azsasku1
  access_tier               = var.azsaaccesstier1
  account_kind              = var.azsakind1
  enable_https_traffic_only = "true"
}

resource "azurerm_storage_container" "example" {
  name                  = "vascans"
  storage_account_name  = azurerm_storage_account.example.name
  container_access_type = "private"
}

# Defines Azure SQL Defender and Auditing - NOTE: Auditing - only SA out at the moment (11/2020) - Log Analytics and Event Hub in preview only
resource "azurerm_mssql_server_security_alert_policy" "example" {
  resource_group_name        = azurerm_resource_group.example.name
  server_name                = azurerm_mssql_server.example.name
  state                      = var.azsqltreatdetectionstate
  storage_endpoint           = azurerm_storage_account.example.primary_blob_endpoint
  storage_account_access_key = azurerm_storage_account.example.primary_access_key
  email_account_admins       = var.azsqltreatdetectionemailadmins
  retention_days             = var.azsqltreatdetectionretention
}

resource "azurerm_mssql_server_vulnerability_assessment" "example" {
  server_security_alert_policy_id = azurerm_mssql_server_security_alert_policy.example.id
  storage_container_path          = "${azurerm_storage_account.example.primary_blob_endpoint}${azurerm_storage_container.example.name}/"
  storage_account_access_key      = azurerm_storage_account.example.primary_access_key

  recurring_scans {
    enabled                   = var.azsqlvscansrecurring
    email_subscription_admins = var.azsqlvscansemailadmins
  }
}

resource "azurerm_mssql_server_extended_auditing_policy" "example" {
  server_id                                = azurerm_mssql_server.example.id
  storage_endpoint                         = azurerm_storage_account.example.primary_blob_endpoint
  storage_account_access_key               = azurerm_storage_account.example.primary_access_key
  storage_account_access_key_is_secondary  = false
  retention_in_days                        = var.azsqlauditretentiondays
}
What I need is for everything after the first "azurerm_mssql_server" resource to be created only in STAGING and LIVE (not DEV). I was planning to have a variable in the DEV/STAGING/LIVE parameter .tf files stating something like...
DEVparm.tf
variable "azsqlenableazuredefenderforsql" {
  default = "false"
}
STAGINGparm.tf and LIVEparm.tf
variable "azsqlenableazuredefenderforsql" {
  default = "true"
}
Is this possible to achieve? Thus far I've drawn a blank; I've tested a few things, but they don't quite work. It seems a simple enough vision, but Terraform has no IF statement.
If you need to flip a resource on and off, that is easy to achieve with count = 1 or 0. This is usually handled with the ternary (conditional) operator.
resource "some_resource" "example" {
count = terraform.workspace != "development" ? 1 : 0
}
The count parameter was added to modules in Terraform 0.13. If you have a bundle of resources, it can be an alternative way to exclude certain resources from being built, as in the sketch below.
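For example, a minimal sketch, assuming the Defender-related resources were moved into a hypothetical local module at ./modules/sql_defender:
module "sql_defender" {
  source = "./modules/sql_defender"
  # count on a module requires Terraform >= 0.13; the whole bundle is
  # created only when the flag is true
  count  = var.azsqlenableazuredefenderforsql ? 1 : 0

  server_id = azurerm_mssql_server.example.id
}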
One way that a lot of people solve this is by combining the count parameter on resources with a ternary. For example, look at the section entitled "If-Statements with the count parameter" in https://blog.gruntwork.io/terraform-tips-tricks-loops-if-statements-and-gotchas-f739bbae55f9#478c.
Basically you can keep your azsqlenableazuredefenderforsql variable and then in your resources do something like:
resource "azurerm_storage_container" "example" {
count = var.azsqlenableazuredefenderforsql ? 1 : 0
name = "vascans"
storage_account_name = azurerm_storage_account.example.name
container_access_type = "private"
}
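Note that once a resource has count set, other resources can no longer reference it by its plain name; references need an index. For example, if the storage account were also count-gated, the container would reference it like this (a sketch):
resource "azurerm_storage_container" "example" {
  count = var.azsqlenableazuredefenderforsql ? 1 : 0
  name  = "vascans"
  # the [0] index is required because the referenced storage account now has count set
  storage_account_name  = azurerm_storage_account.example[0].name
  container_access_type = "private"
}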

azure function via terraform: how to connect to service bus

I am stuck trying to deploy an Azure Function via Azure DevOps pipelines and Terraform.
Running terraform apply works fine, and the Service Bus looks good and works. In the Azure portal the function seems to be running, but it complains that it cannot find the ServiceBusConnection.
I defined it via the following Terraform declaration:
resource "azurerm_resource_group" "rg" {
name = "rg-sb-westeurope"
location = "westeurope"
}
resource "azurerm_servicebus_namespace" "sb" {
name = "ns-sb"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
sku = "Standard"
}
resource "azurerm_servicebus_queue" "sbq" {
name = "servicebusqueue"
resource_group_name = azurerm_resource_group.rg.name
namespace_name = azurerm_servicebus_namespace.sb.name
enable_partitioning = true
}
resource "azurerm_servicebus_namespace_authorization_rule" "sb-ar" {
name = "servicebus_auth_rule"
namespace_name = azurerm_servicebus_namespace.sb.name
resource_group_name = azurerm_resource_group.rg.name
listen = false
send = true
manage = false
}
In the function app I declare:
resource "azurerm_function_app" "fa" {
name = "function-app"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
app_service_plan_id = azurerm_app_service_plan.asp.id
storage_account_name = azurerm_storage_account.sa.name
storage_account_access_key = azurerm_storage_account.sa.primary_access_key
app_settings = {
ServiceBusConnection = azurerm_servicebus_namespace_authorization_rule.sb-ar.name
}
}
This .tf will not work out of the box, as I have not copied the full declaration here.
I think I am setting the connection environment variables wrong, but have no idea how to do it correctly.
EDIT
With the hint from @Heye I got it working. This is the correct snippet, replacing name with primary_connection_string:
resource "azurerm_function_app" "fa" {
name = "function-app"
location = azurerm_resource_group.rg.location
resource_group_name = azurerm_resource_group.rg.name
app_service_plan_id = azurerm_app_service_plan.asp.id
storage_account_name = azurerm_storage_account.sa.name
storage_account_access_key = azurerm_storage_account.sa.primary_access_key
app_settings = {
ServiceBusConnection = azurerm_servicebus_namespace_authorization_rule.sb-ar.primary_connection_string
}
}
You are setting the ServiceBusConnection value to the name of the authorization rule. However, you probably want to set it to the primary_connection_string, as that contains the key along with all the information needed to connect to the Service Bus.

Terraform - Import Azure VMs to state file using modules

I'm creating VMs using the script below, beginning with "# Script to create VM". The script is called from a higher-level directory so as to create the VMs using modules; the call looks something like the code below starting with "#Template..". The problem is that we are missing the state for a few VMs that were created during a previous run. I've tried importing a VM itself, but looking at the state file, the imported entry does not look anything like the ones already there that were created using the bottom script. Any help would be great.
# Template to call the VM script below
module "<virtual_machine_name>" {
  source                = "./vm"
  virtual_machine_name  = "<virtual_machine_name>"
  resource_group_name   = "<resource_group_name>"
  availability_set_name = "<availability_set_name>"
  virtual_machine_size  = "<virtual_machine_size>"
  subnet_name           = "<subnet_name>"
  private_ip            = "<private_ip>"

  # optional:
  production     = true                    # default is false
  data_disk_name = ["<disk1>", "<disk2>"]
  data_disk_size = ["50", "100"]           # size is in GB
}
# Script to create VM
data "azurerm_resource_group" "rgdata02" {
  name = "${var.resource_group_name}"
}

data "azurerm_subnet" "sndata02" {
  name                 = "${var.subnet_name}"
  resource_group_name  = "${var.core_resource_group_name}"
  virtual_network_name = "${var.virtual_network_name}"
}

data "azurerm_availability_set" "availsetdata02" {
  name                = "${var.availability_set_name}"
  resource_group_name = "${var.resource_group_name}"
}

data "azurerm_backup_policy_vm" "bkpoldata02" {
  name                = "${var.backup_policy_name}"
  recovery_vault_name = "${var.recovery_services_vault_name}"
  resource_group_name = "${var.core_resource_group_name}"
}

data "azurerm_log_analytics_workspace" "law02" {
  name                = "${var.log_analytics_workspace_name}"
  resource_group_name = "${var.core_resource_group_name}"
}
#===================================================================
# Create NIC
#===================================================================
resource "azurerm_network_interface" "vmnic02" {
  name                = "nic${var.virtual_machine_name}"
  location            = "${data.azurerm_resource_group.rgdata02.location}"
  resource_group_name = "${var.resource_group_name}"

  ip_configuration {
    name                          = "ipcnfg${var.virtual_machine_name}"
    subnet_id                     = "${data.azurerm_subnet.sndata02.id}"
    private_ip_address_allocation = "Static"
    private_ip_address            = "${var.private_ip}"
  }
}
#===================================================================
# Create VM with Availability Set
#===================================================================
resource "azurerm_virtual_machine" "vm02" {
  count                 = var.avail_set != "" ? 1 : 0
  depends_on            = [azurerm_network_interface.vmnic02]
  name                  = "${var.virtual_machine_name}"
  location              = "${data.azurerm_resource_group.rgdata02.location}"
  resource_group_name   = "${var.resource_group_name}"
  network_interface_ids = [azurerm_network_interface.vmnic02.id]
  vm_size               = "${var.virtual_machine_size}"
  availability_set_id   = "${data.azurerm_availability_set.availsetdata02.id}"
  tags                  = var.tags

  # This means the OS Disk will be deleted when Terraform destroys the Virtual Machine
  # NOTE: This may not be optimal in all cases.
  delete_os_disk_on_termination = true

  os_profile {
    computer_name  = "${var.virtual_machine_name}"
    admin_username = "__VMUSER__"
    admin_password = "__VMPWD__"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  storage_image_reference {
    id = "${var.image_id}"
  }

  storage_os_disk {
    name              = "${var.virtual_machine_name}osdisk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Premium_LRS"
    os_type           = "Linux"
  }

  boot_diagnostics {
    enabled     = true
    storage_uri = "${var.boot_diagnostics_uri}"
  }
}
#===================================================================
# Create VM without Availability Set
#===================================================================
resource "azurerm_virtual_machine" "vm03" {
  count                 = var.avail_set == "" ? 1 : 0
  depends_on            = [azurerm_network_interface.vmnic02]
  name                  = "${var.virtual_machine_name}"
  location              = "${data.azurerm_resource_group.rgdata02.location}"
  resource_group_name   = "${var.resource_group_name}"
  network_interface_ids = [azurerm_network_interface.vmnic02.id]
  vm_size               = "${var.virtual_machine_size}"
  # availability_set_id = "${data.azurerm_availability_set.availsetdata02.id}"
  tags                  = var.tags

  # This means the OS Disk will be deleted when Terraform destroys the Virtual Machine
  # NOTE: This may not be optimal in all cases.
  delete_os_disk_on_termination = true

  os_profile {
    computer_name  = "${var.virtual_machine_name}"
    admin_username = "__VMUSER__"
    admin_password = "__VMPWD__"
  }

  os_profile_linux_config {
    disable_password_authentication = false
  }

  storage_image_reference {
    id = "${var.image_id}"
  }

  storage_os_disk {
    name              = "${var.virtual_machine_name}osdisk"
    caching           = "ReadWrite"
    create_option     = "FromImage"
    managed_disk_type = "Premium_LRS"
    os_type           = "Linux"
  }

  boot_diagnostics {
    enabled     = true
    storage_uri = "${var.boot_diagnostics_uri}"
  }
}
#===================================================================
# Set Monitoring and Log Analytics Workspace
#===================================================================
resource "azurerm_virtual_machine_extension" "oms_mma02" {
  count                      = var.bootstrap ? 1 : 0
  name                       = "${var.virtual_machine_name}-OMSExtension"
  virtual_machine_id         = "${azurerm_virtual_machine.vm02.id}"
  publisher                  = "Microsoft.EnterpriseCloud.Monitoring"
  type                       = "OmsAgentForLinux"
  type_handler_version       = "1.8"
  auto_upgrade_minor_version = true

  settings = <<SETTINGS
    {
      "workspaceId" : "${data.azurerm_log_analytics_workspace.law02.workspace_id}"
    }
SETTINGS

  protected_settings = <<PROTECTED_SETTINGS
    {
      "workspaceKey" : "${data.azurerm_log_analytics_workspace.law02.primary_shared_key}"
    }
PROTECTED_SETTINGS
}

#===================================================================
# Associate VM to Backup Policy
#===================================================================
resource "azurerm_backup_protected_vm" "vm02" {
  count               = var.bootstrap ? 1 : 0
  resource_group_name = "${var.core_resource_group_name}"
  recovery_vault_name = "${var.recovery_services_vault_name}"
  source_vm_id        = "${azurerm_virtual_machine.vm02.id}"
  backup_policy_id    = "${data.azurerm_backup_policy_vm.bkpoldata02.id}"
}
From my understanding, you are not quite clear on how terraform import works, so I will show you what it means.
When you want to import pre-existing resources, you first need to configure the resources in your Terraform files to match how the existing resources are configured. Then the resources can be imported into the state file.
Another current caveat is that only a single resource can be imported into the state file at a time.
When you want to import the resources into a module, I assume a folder structure like this:
testingimportfolder
├── main.tf
├── terraform.tfstate
├── terraform.tfstate.backup
└── module
    └── main.tf
And the main.tf file in the folder testingimportfolder sets the module block like this:
module "importlab" {
source = "./module"
...
}
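Each resource is then imported into its module-qualified address with one terraform import command per resource, for example (a sketch; the resource ID format depends on the resource type, and the placeholders are yours to fill in):
terraform import module.importlab.azurerm_resource_group.rg /subscriptions/<subscription_id>/resourceGroups/<resource_group_name>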
After you finish importing all the resources into the state file, the output of the command terraform state list looks like this:
module.importlab.azurerm_network_security_group.nsg
module.importlab.azurerm_resource_group.rg
module.importlab.azurerm_virtual_network.vnet
All the resource names should look like module.module_name.azurerm_xxxx.resource_name. If you use a module inside a module, I assume a folder structure like this:
importmodules
├── main.tf
└── modules
    └── vm
        ├── main.tf
        └── module
            └── main.tf
And the file importmodules/modules/vm/main.tf looks like this:
module "azurevm" {
source = "./module"
...
}
Then, after you finish importing all the resources into the state file, the output of terraform state list looks like this:
module.vm.module.azurevm.azurerm_network_interface.example
Yes, it looks just like what you have got. The state file stores your existing resources under the module paths you reference, one by one. So you need to plan your code and modules carefully and clearly, or you will confuse yourself.
