I am creating a storage account using Terraform and want to set cross_tenant_replication_enabled to false.
data "azurerm_resource_group" "data_resource_group" {
name = var.resource_group_name
}
resource "azurerm_storage_account" "example_storage_account" {
name = var.storage_account_name
resource_group_name = data.azurerm_resource_group.data_resource_group.name #(Existing resource group)
location = var.location
account_tier = "Standard"
account_replication_type = "LRS"
allow_nested_items_to_be_public = false
cross_tenant_replication_enabled = false
identity {
type = "SystemAssigned"
}
}
I am getting the below error:
Error: Unsupported argument
on ceft_azure/main.tf line 55, in resource "azurerm_storage_account" "example_storage_account":
55: cross_tenant_replication_enabled = false
An argument named "cross_tenant_replication_enabled" is not expected here.
How can I set the attribute value to false?
I tried setting the attribute (cross_tenant_replication_enabled = false) in the storage container block as well, but that didn't work.
You are able to create the storage account with Terraform and want to set cross_tenant_replication_enabled to false.
Root cause: the version of the AzureRM provider you are using does not support cross-tenant replication. Use azurerm >= 3.0.1.
Update the provider version in your terraform block:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">=3.0.1"
    }
  }
}
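If you are not sure which provider version your configuration is currently using, these commands will show it (output varies with your setup):
terraform version    # prints the Terraform version and the provider versions in use
terraform providers  # lists the provider requirements declared by the configuration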
Here is the full code snippet and the steps.
Step 1:
Run the below command:
terraform init -upgrade
Step 2:
Copy the below code into your main.tf file:
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "example" {
name = "rg_swarna-example-resources"
location = "West Europe"
}
resource "azurerm_storage_account" "example" {
name = "swarnastorageaccountname"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
allow_nested_items_to_be_public = false
cross_tenant_replication_enabled = false
identity {
type = "SystemAssigned"
}
tags = {
environment = "staging"
}
}
Step 3:
Run the below commands:
terraform plan
terraform apply -auto-approve
Verification:
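One way to verify the value after the apply is to inspect the resource in state; cross_tenant_replication_enabled should be reported as false (the resource address below matches the example configuration above):
terraform state show azurerm_storage_account.example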
Related
I am using a Terraform script to create a Recovery Services Vault and need to take the VM backup into that Recovery Services Vault. Below are the Terraform script and the steps I performed:
Create Recovery Service Vault
Define backup policy
Azure VM protection backup
Terraform version >= 0.14
AzureRM version ~> 2.0
Main.tf
resource "azurerm_recovery_services_vault" "recovery_vault" {
name = var.recovery_vault_name
location = var.location
resource_group_name = var.resource_group_name
}
resource "azurerm_backup_policy_vm" "ss_vm_backup_policy" {
name = "tfex-recovery-vault-policy"
resource_group_name = var.resource_group_name
recovery_vault_name = azurerm_recovery_services_vault.recovery_vault.name
policy_type = "V1"
backup {
frequency = "Monthly"
time = "23:00"
weekdays = ['Sunday']
}
instant_restore_retention_days = 5
retention_weekly {
count= 12
weekdays = ['Sunday']
}
}
resource "azurerm_backup_protected_vm" "ss_vm_protection_backup" {
resource_group_name = var.resource_group_name
recovery_vault_name = azurerm_recovery_services_vault.recovery_vault.name
source_vm_id = azurerm_windows_virtual_machine.windows_vm[count.index].id
backup_policy_id = azurerm_backup_policy_vm.ss_vm_backup_policy.id
}
Variable.tf
variable "resource_group_name" {
  default = "ss"
}

variable "recovery_vault_name" {
  default = "yy"
}
I referenced the above Main.tf as a module in my application-specific main.tf file as below:
module "azure_windows_vm" {
source = "git::https://xx.yy.com/ABCD/_git/NBGF/TGFT?ref=master"
vm_name = local.vm_name
location = local.location
resource_group_name = local.resource_group_name
admin_password = "xyz"
num_vms = 2
vnet_name = var.vm_subnet_id
tags=local.tags
}
When I execute the above Terraform script in the DevOps pipeline, I get the below error from the line where module "azure_windows_vm" starts:
Error: Missing required argument The argument "recovery_vault_name" is
required, but no definition was found
I have tried different things to fix this error, but somehow it is not working. Can someone please guide me on what I am missing here?
Thank You!
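For context, a minimal sketch of what explicitly passing the vault name into the module call might look like. This assumes the module version referenced by ?ref=master actually declares recovery_vault_name as an input variable; the value shown here is hypothetical:
module "azure_windows_vm" {
  source              = "git::https://xx.yy.com/ABCD/_git/NBGF/TGFT?ref=master"
  vm_name             = local.vm_name
  location            = local.location
  resource_group_name = local.resource_group_name
  recovery_vault_name = "my-recovery-vault" # hypothetical value; pass whatever vault name the module expects
  admin_password      = "xyz"
  num_vms             = 2
  vnet_name           = var.vm_subnet_id
  tags                = local.tags
}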
I have Terraform code that creates a storage account, a container, and a block blob. Is it possible to configure it so that the block blob is created only if it doesn't already exist?
In case of re-running Terraform, I wouldn't like to replace the blob if it is already there, as the content might have been manually modified and I would like to keep it.
Any tips? The only alternative I could think of is running a PowerShell/Bash script during further deployment steps that would create the file if needed, but I am curious if this can be done with Terraform alone.
locals {
  storage_account_name_teast = format("%s%s", local.main_pw_prefix_short, "teast")
}

resource "azurerm_storage_account" "teaststorage" {
  name                            = local.storage_account_name_teast
  resource_group_name             = azurerm_resource_group.main.name
  location                        = var.location
  account_tier                    = var.account_tier
  account_replication_type       = var.account_replication_type
  allow_nested_items_to_be_public = false
  min_tls_version                 = "TLS1_2"

  network_rules {
    default_action = "Deny"
    bypass = [
      "AzureServices"
    ]
    virtual_network_subnet_ids = []
    ip_rules                   = local.ip_rules
  }

  tags = var.tags
}

resource "azurerm_storage_container" "teastconfig" {
  name                  = "config"
  storage_account_name  = azurerm_storage_account.teaststorage.name
  container_access_type = "private"
}

resource "azurerm_storage_blob" "teastfeaturetoggle" {
  name                   = "featureToggles.json"
  storage_account_name   = azurerm_storage_account.teaststorage.name
  storage_container_name = azurerm_storage_container.teastconfig.name
  type                   = "Block"
  source                 = "vars-pr-default-toggles.json"
}
After scanning through the terraform plan output, I figured out it was forcing a blob replacement because of:
content_md5 = "9a95db04fb1ff3abcd7ff81fcfb96307" -> null # forces replacement
I added a lifecycle block to the blob resource to prevent it:
resource "azurerm_storage_blob" "teastfeaturetoggle" {
name = "featureToggles.json"
storage_account_name = azurerm_storage_account.teaststorage.name
storage_container_name = azurerm_storage_container.teastconfig.name
type = "Block"
source = "vars-pr-default-toggles.json"
lifecycle {
ignore_changes = [
content_md5,
]
}
}
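With the ignore_changes entry in place, a subsequent terraform plan should no longer report the content_md5 drift as forcing a replacement, so an existing blob (including manual edits to its content) is left untouched.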
I have worked with both the AWS and Azure providers in Terraform, and both times I have experienced an issue with "toggling" configuration items.
My terraform resources look like this:
resource "azurerm_resource_group" "sample" {
name = "sample"
location = "uksouth"
}
resource "azurerm_storage_account" "sample" {
name = "samplestackoverflow"
resource_group_name = azurerm_resource_group.sample.name
location = azurerm_resource_group.sample.location
account_tier = "Standard"
account_replication_type = "LRS"
min_tls_version = "TLS1_2"
}
resource "azurerm_service_plan" "sample" {
name = "sample"
resource_group_name = azurerm_resource_group.sample.name
location = azurerm_resource_group.sample.location
os_type = "Linux"
sku_name = "Y1"
}
resource "azurerm_linux_function_app" "sample" {
name = "samplestackoverflow"
resource_group_name = azurerm_resource_group.sample.name
location = azurerm_resource_group.sample.location
storage_account_name = azurerm_storage_account.sample.name
storage_account_access_key = azurerm_storage_account.sample.primary_access_key
service_plan_id = azurerm_service_plan.sample.id
https_only = true
client_certificate_mode = "Required"
functions_extension_version = "~4"
site_config {
application_stack {
python_version = "3.8"
}
}
}
Now the issue itself is that every time I run terraform apply and there are changes to be made, for example changing https_only from true to false, the site_config item is removed. If I then run terraform apply immediately after those changes are made, the site_config that disappeared is re-added. The output looks like this:
~ site_config {
# (33 unchanged attributes hidden)
+ application_stack {
+ python_version = "3.8"
+ use_dotnet_isolated_runtime = false
}
}
As I mentioned, this also happens with other providers and resources (I remember it happening to me with AWS API Gateway too). I can of course work around this by running terraform apply twice every time, but I was wondering if there is something else that could be done here?
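One possible workaround (an assumption on my part, not a confirmed fix for this provider behaviour) is to tell Terraform to ignore drift on the nested block, so a single apply is enough; the trade-off is that intentional site_config changes then also stop being applied:
resource "azurerm_linux_function_app" "sample" {
  name                        = "samplestackoverflow"
  resource_group_name         = azurerm_resource_group.sample.name
  location                    = azurerm_resource_group.sample.location
  storage_account_name        = azurerm_storage_account.sample.name
  storage_account_access_key  = azurerm_storage_account.sample.primary_access_key
  service_plan_id             = azurerm_service_plan.sample.id
  https_only                  = true
  client_certificate_mode     = "Required"
  functions_extension_version = "~4"

  site_config {
    application_stack {
      python_version = "3.8"
    }
  }

  lifecycle {
    # Assumption: treat the disappearing site_config as drift and ignore it.
    # Trade-off: deliberate site_config changes are then ignored too.
    ignore_changes = [
      site_config,
    ]
  }
}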
On the way to creating AKS via Terraform, I want to create an Azure storage account and use that same account to store the Terraform state file.
However, I am getting the below error:
│ Error: Error loading state: Error retrieving keys for Storage Account "azurerm_resource_group.aks_rg.name": storage.AccountsClient#ListKeys: Invalid input: autorest/validation: validation failed: parameter=accountName constraint=MaxLength value="azurerm_resource_group.aks_rg.name" details: value length must be less than or equal to 24
│
# Create Resource Group
resource "azurerm_resource_group" "aks_rg" {
  location = "${var.location}"
  name     = "${var.global-prefix}-${var.cluster-id}-${var.environment}-azwe-aks-rg"
}

# Create Storage Account & Container
resource "azurerm_storage_account" "storage_acc" {
  name                     = "${var.cluster-id}-storage-account"
  resource_group_name      = azurerm_resource_group.aks_rg.name
  location                 = azurerm_resource_group.aks_rg.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_storage_container" "storage_container" {
  name                  = "${var.cluster-id}-storage-account-container"
  storage_account_name  = azurerm_storage_account.storage_acc.name
  container_access_type = "private"
}

# Store terraform state in remote container
terraform {
  # Configure Terraform State Storage
  backend "azurerm" {
    resource_group_name  = "azurerm_resource_group.aks_rg.name"
    storage_account_name = "azurerm_storage_container.storage_acc.name"
    container_name       = "azurerm_storage_container.storage_container.name"
    key                  = "terraform.tfstate"
  }
}
You need to first create the storage account and container, and then, while creating the AKS cluster, you need to give the below:
terraform {
  # Configure Terraform State Storage
  backend "azurerm" {
    resource_group_name  = "<existing-resource-group-name>"  # literal name, not a resource reference
    storage_account_name = "<existing-storage-account-name>" # literal name, not a resource reference
    container_name       = "powermeprodtfstate"
    key                  = "terraform.tfstate"
  }
}
Do this instead of creating the storage account and container in the same configuration that stores the Terraform tfstate.
Example:
Create storage account and container:
provider "azurerm" {
features {}
}
data "azurerm_resource_group" "example" {
name = "resourcegroupname"
}
resource "azurerm_storage_account" "example" {
name = "yourstorageaccountname"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_storage_container" "example" {
name = "terraform"
storage_account_name = azurerm_storage_account.example.name
container_access_type = "private"
}
Then create the AKS resource group and store the tfstate in the container:
provider "azurerm" {
features {}
}
terraform {
# Configure Terraform State Storage
backend "azurerm" {
resource_group_name = "resourcegroup"
storage_account_name = "storageaccountnameearliercreated"
container_name = "terraform"
key = "terraform.tfstate"
}
}
resource "azurerm_resource_group" "aks_rg" {
name = "aks-rg"
location = "west us"
}
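After adding the backend block, run terraform init again so the backend is initialised; if a local state file already exists, Terraform will offer to copy it into the Azure container:
terraform init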
Reference:
How to store the Terraform state file in Azure Storage. » Jorge Bernhardt
Unable to attach service_endpoint_policy_ids to the subnet.
The service endpoints are created successfully, but the storage policy cannot be attached to the subnet.
I ended up with the below error:
Error: Cycle: azurerm_subnet_service_endpoint_storage_policy.stg, azurerm_subnet.backend, module.storage_bsai.var.vnet_subnet_id (expand), module.storage_bsai.azurerm_storage_account.storageaccount_name, module.storage_bsai.output.id (expand)
Provider:
azurerm version = "2.65.0"
Terraform resources for the storage policy and subnet:
resource "azurerm_subnet_service_endpoint_storage_policy" "stg" {
name = "storage-policy-bsai"
resource_group_name = "${var.env}-bsai"
location = var.region
definition {
name = "storage"
#description = "definition1"
service_resources = [
module.resource_group.id,
module.storage_bsai.id
]
}
}
resource "azurerm_subnet" "backend" {
depends_on = [module.vnet]
name = "backend"
virtual_network_name = "${var.env}-${var.region}-bsai"
resource_group_name = "${var.env}-bsai"
address_prefixes = ["10.0.0.0/24"]
service_endpoints = ["Microsoft.Storage", "Microsoft.AzureCosmosDB", "Microsoft.ServiceBus", "Microsoft.Web", "Microsoft.ContainerRegistry"]
service_endpoint_policy_ids = [azurerm_subnet_service_endpoint_storage_policy.stg.id]
delegation {
name = "delegation"
service_delegation {
name = "Microsoft.Web/serverFarms"
actions = ["Microsoft.Network/virtualNetworks/subnets/action"]
}
}
}
service_endpoint_policy_ids should be a list:
service_endpoint_policy_ids = [azurerm_subnet_service_endpoint_storage_policy.stg.id]
Found the issue -
It appears you have a cyclic dependency in your config (i.e. two or more resources depend on each other, meaning Terraform cannot reconcile what needs to happen in what order):
https://github.com/terraform-providers/terraform-provider-azurerm/issues/12593#issuecomment-881192611
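One way to break such a cycle, sketched below under the assumption that the policy does not strictly need to reference the storage account (service endpoint policy definitions can also be scoped to a subscription or resource group), is to drop the storage module's output from the policy definition so the policy no longer depends on the storage account; whether that fits your intent depends on what the policy is supposed to allow:
resource "azurerm_subnet_service_endpoint_storage_policy" "stg" {
  name                = "storage-policy-bsai"
  resource_group_name = "${var.env}-bsai"
  location            = var.region

  definition {
    name = "storage"
    service_resources = [
      # Assumption: removing module.storage_bsai.id cuts one edge of the cycle
      # (policy -> storage account -> subnet -> policy).
      module.resource_group.id,
    ]
  }
}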