Terraform Azurerm: Create blob if not exists

I've got Terraform code that creates a storage account, a container and a block blob. Is it possible to configure the block blob so that it is created only if it doesn't already exist?
When re-running Terraform I wouldn't like to replace the blob if it is already there, as its content might have been modified manually and I would like to keep it.
Any tips? The only alternative I can think of is running a PowerShell/Bash script in a later deployment step that would create the file if needed, but I am curious whether this can be done with Terraform alone.
locals {
  storage_account_name_teast = format("%s%s", local.main_pw_prefix_short, "teast")
}

resource "azurerm_storage_account" "teaststorage" {
  name                            = local.storage_account_name_teast
  resource_group_name             = azurerm_resource_group.main.name
  location                        = var.location
  account_tier                    = var.account_tier
  account_replication_type        = var.account_replication_type
  allow_nested_items_to_be_public = false
  min_tls_version                 = "TLS1_2"

  network_rules {
    default_action = "Deny"
    bypass = [
      "AzureServices"
    ]
    virtual_network_subnet_ids = []
    ip_rules                   = local.ip_rules
  }

  tags = var.tags
}

resource "azurerm_storage_container" "teastconfig" {
  name                  = "config"
  storage_account_name  = azurerm_storage_account.teaststorage.name
  container_access_type = "private"
}

resource "azurerm_storage_blob" "teastfeaturetoggle" {
  name                   = "featureToggles.json"
  storage_account_name   = azurerm_storage_account.teaststorage.name
  storage_container_name = azurerm_storage_container.teastconfig.name
  type                   = "Block"
  source                 = "vars-pr-default-toggles.json"
}

After scanning through the terraform plan output I figured out it was forcing a blob replacement because of:
content_md5 = "9a95db04fb1ff3abcd7ff81fcfb96307" -> null # forces replacement
I added a lifecycle block to the blob resource to prevent it:
resource "azurerm_storage_blob" "teastfeaturetoggle" {
name = "featureToggles.json"
storage_account_name = azurerm_storage_account.teaststorage.name
storage_container_name = azurerm_storage_container.teastconfig.name
type = "Block"
source = "vars-pr-default-toggles.json"
lifecycle {
ignore_changes = [
content_md5,
]
}
}

Related

terraform nested for each loop in azure storage account

I want to create multiple storage accounts and, inside each of those storage accounts, some containers.
If I want 3 storage accounts, I always want to create container-a and container-b in each of those 3 storage accounts.
So, for example, the storage account list would be ["sa1", "sa2", "sa3"].
resource "azurerm_storage_account" "storage_account" {
count = length(var.list)
name = var.name
resource_group_name = module.storage-account-resource-group.resource_group_name[0]
location = var.location
account_tier = var.account_tier
account_kind = var.account_kind
then the container block:
resource "azurerm_storage_container" "container" {
depends_on = [azurerm_storage_account.storage_account]
count = length(var.containers)
name = var.containers[count.index].name
container_access_type = var.containers[count.index].access_type
storage_account_name = azurerm_storage_account.storage_account[0].name
the container variable:
variable "containers" {
type = list(object({
name = string
access_type = string
}))
default = []
description = "List of storage account containers."
}
and the list variable:
variable "list" {
type = list(string)
description = "the env to deploy. ['dev','qa','prod']"
This code creates only one container in the first storage account "sa1", but not in the other two, "sa2" and "sa3". I read that I need to use two for_each loops to iterate over both the list of storage accounts and the list of containers, but I am not sure how the code for that should look.
It would be better to use for_each:
resource "azurerm_storage_account" "storage_account" {
for_each = toset(var.list)
name = var.name
resource_group_name = module.storage-account-resource-group.resource_group_name[0]
location = var.location
account_tier = var.account_tier
account_kind = var.account_kind
}
then you need an equivalent of a double for loop, which you can get using setproduct:
locals {
  flat_list = setproduct(var.list, var.containers)
}
and then you use local.flat_list for containers:
resource "azurerm_storage_container" "container" {
for_each = {for idx, val in local.flat_list: idx => val}
name = each.value.name[1].name
container_access_type = each.value.name[1].access_type
storage_account_name = azurerm_storage_account.storage_account[each.value[0]].name
}
p.s. I haven't run the code, thus it may require some adjustments, but the idea remains valid.
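If you would rather avoid index-based keys, which shift whenever the lists change, here is a sketch of the same setproduct idea with stable string keys (assuming var.list holds the storage account names used above):
locals {
  # One entry per (storage account, container) pair, keyed by a stable string.
  container_pairs = {
    for pair in setproduct(var.list, var.containers) :
    "${pair[0]}-${pair[1].name}" => {
      storage_account = pair[0]
      container       = pair[1]
    }
  }
}

resource "azurerm_storage_container" "container" {
  for_each              = local.container_pairs
  name                  = each.value.container.name
  container_access_type = each.value.container.access_type
  storage_account_name  = azurerm_storage_account.storage_account[each.value.storage_account].name
}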

Accessing specific storage account id created using terraform module

I am creating an infrastructure with Terraform modules. Some of the common and repetitive infra is created using a module,
and other resources are created independently outside of the module. The structure of my code is described below.
-terraform\module\storage.tf
-terraform\main.tf
-terraform\mlws.tf
This is my code for /module/storage.tf, where I am creating a storage account like this:
resource "azurerm_storage_account" "storage" {
name = var.storage_account_name
resource_group_name = var.rg_name
location = var.location
account_tier = "Standard"
account_replication_type = "GRS"
min_tls_version = "TLS1_2"
}
module "m1" {
source = "./modules"
storage_account_name = "m1storage"
rg_name = "rg1"
location = "USCentral"
}
module "m2" {
source = "./modules"
storage_account_name = "m2storage"
rg_name = "rg2"
location = "USCentral"
}
module "m3" {
source = "./modules"
storage_account_name = "m3storage"
rg_name = "rg3"
location = "USCentral"
}
resource "azurerm_machine_learning_workspace" "mlws" {
name = "mlws"
location = ""USCentral"
resource_group_name = "mlws-rg1"
application_insights_id = azurerm_application_insights.mlops_appins.id
key_vault_id = data.azurerm_key_vault.kv.id
storage_account_id = **<Mandatory to be filled>**
container_registry_id = azurerm_container_registry.acr.id
identity {
type = "SystemAssigned"
}
depends_on = [
module.m2
]
}
The code for the storage account is under \terraform\module\storage.tf, the code for calling the module is under \terraform\main.tf, and the code for the machine learning workspace is under \terraform\mlws.tf.
My mlws.tf code is outside the module, but it needs to be associated with the storage account id created under module m2 in the code above.
I am struggling to fetch the id of the "m2storage" storage account. Can you please explain how I can access the id of a specific storage account created through the module and use it in my code outside the module?
This is how it normally works. The module needs to declare an output (inside the module directory, e.g. next to storage.tf) that exposes the storage account id:
output "storage_account_id" {
  description = "Storage account id."
  value       = azurerm_storage_account.storage.id
}
Now that you have the output, you reference it from the root configuration like this:
resource "azurerm_machine_learning_workspace" "mlws" {
name = "mlws"
location = ""USCentral"
resource_group_name = "mlws-rg1"
application_insights_id = azurerm_application_insights.mlops_appins.id
key_vault_id = data.azurerm_key_vault.kv.id
storage_account_id = module.m2.storage_account_id
container_registry_id = azurerm_container_registry.acr.id
identity {
type = "SystemAssigned"
}
depends_on = [
module.m2
]
}
Let me know if you need more help.

Terraform 403 error when creating function app and storage account with private endpoint

I am getting a 403 Forbidden when creating a function app that connects to its storage account via a private endpoint inside a vnet. The storage account has a firewall default action of 'Deny', and of course if I set it to 'Allow' it works. I want this to be 'Deny', however. Following this Microsoft link, if the function app and storage account are created in the same region with the vnet, subnets, and private endpoints, then it's supposed to work, so I must be doing something wrong. I also tried changing the region for the storage account and it still resulted in a 403.
Error:
Error: web.AppsClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: Code="BadRequest" Message="There was a conflict. The remote server returned an error: (403) Forbidden." Details=[{"Message":"There was a conflict. The remote server returned an error: (403) Forbidden."},{"Code":"BadRequest"},{"ErrorEntity":{"Code":"BadRequest","ExtendedCode":"01020","Message":"There was a conflict. The remote server returned an error: (403) Forbidden.","MessageTemplate":"There was a conflict. {0}","Parameters":["The remote server returned an error: (403) Forbidden."]}}]
Here is my Terraform code:
resource "azurerm_function_app" "func" {
name = "${var.func_basics.name}-func"
location = var.func_basics.location
resource_group_name = var.func_basics.resource_group_name
app_service_plan_id = azurerm_app_service_plan.svc_plan.id
storage_account_name = azurerm_storage_account.func_sa.name
storage_account_access_key = azurerm_storage_account.func_sa.primary_access_key
version = var.runtime_version
https_only = true
depends_on = [
azurerm_storage_account.func_sa,
azurerm_app_service_plan.svc_plan,
azurerm_application_insights.func_ai,
azurerm_virtual_network.func_vnet
]
app_settings = merge(var.app_settings, local.additional_app_settings)
}
resource "azurerm_app_service_plan" "svc_plan" {
name = "${var.func_basics.name}-func-plan"
location = var.func_basics.location
resource_group_name = var.func_basics.resource_group_name
kind = "elastic"
sku {
tier = "ElasticPremium"
size = "EP1"
}
}
resource "azurerm_application_insights" "func_ai" {
name = "${var.func_basics.name}-func-appi"
location = var.func_basics.location
resource_group_name = var.func_basics.resource_group_name
application_type = var.ai_app_type
}
resource "azurerm_storage_account" "func_sa" {
name = "st${lower(replace(var.func_basics.name, "/[-_]*/", ""))}"
resource_group_name = var.func_basics.resource_group_name
location = var.func_basics.location
account_tier = var.sa_settings.tier
account_replication_type = var.sa_settings.replication_type
account_kind = "StorageV2"
enable_https_traffic_only = true
min_tls_version = "TLS1_2"
depends_on = [
azurerm_virtual_network.func_vnet
]
network_rules {
default_action = "Deny"
virtual_network_subnet_ids = [azurerm_subnet.func_endpoint_subnet.id]
bypass = [
"Metrics",
"Logging",
"AzureServices"
]
}
}
resource "azurerm_virtual_network" "func_vnet" {
name = "${var.func_basics.name}-func-vnet"
resource_group_name = var.func_basics.resource_group_name
location = var.func_basics.location
address_space = ["10.0.0.0/16"]
}
resource "azurerm_subnet" "func_service_subnet" {
name = "${var.func_basics.name}-func-svc-snet"
resource_group_name = var.func_basics.resource_group_name
virtual_network_name = azurerm_virtual_network.func_vnet.name
address_prefixes = ["10.0.1.0/24"]
enforce_private_link_service_network_policies = true
service_endpoints = ["Microsoft.Storage"]
delegation {
name = "${var.func_basics.name}-func-del"
service_delegation {
name = "Microsoft.Web/serverFarms"
actions = ["Microsoft.Network/virtualNetworks/subnets/action"]
}
}
}
resource "azurerm_subnet" "func_endpoint_subnet" {
name = "${var.func_basics.name}-func-end-snet"
resource_group_name = var.func_basics.resource_group_name
virtual_network_name = azurerm_virtual_network.func_vnet.name
address_prefixes = ["10.0.2.0/24"]
enforce_private_link_endpoint_network_policies = true
}
resource "azurerm_private_endpoint" "func_req_sa_blob_endpoint" {
name = "${var.func_basics.name}-func-req-sa-blob-end"
resource_group_name = var.func_basics.resource_group_name
location = var.func_basics.location
subnet_id = azurerm_subnet.func_endpoint_subnet.id
private_service_connection {
name = "${var.func_basics.name}-func-req-sa-blob-pscon"
private_connection_resource_id = azurerm_storage_account.func_sa.id
is_manual_connection = false
subresource_names = ["blob"]
}
}
resource "azurerm_private_endpoint" "func_req_sa_file_endpoint" {
name = "${var.func_basics.name}-func-req-sa-file-end"
resource_group_name = var.func_basics.resource_group_name
location = var.func_basics.location
subnet_id = azurerm_subnet.func_endpoint_subnet.id
private_service_connection {
name = "${var.func_basics.name}-func-req-sa-file-pscon"
private_connection_resource_id = azurerm_storage_account.func_sa.id
is_manual_connection = false
subresource_names = ["file"]
}
}
resource "azurerm_app_service_virtual_network_swift_connection" "func_vnet_swift" {
app_service_id = azurerm_function_app.func.id
subnet_id = azurerm_subnet.func_service_subnet.id
}
locals {
additional_app_settings = {
"APPINSIGHTS_INSTRUMENTATIONKEY" = azurerm_application_insights.func_ai.instrumentation_key
"WEBSITE_CONTENTAZUREFILECONNECTIONSTRING" = azurerm_storage_account.func_sa.primary_connection_string
"AzureWebJobsStorage" = azurerm_storage_account.func_sa.primary_connection_string
"WEBSITE_VNET_ROUTE_ALL" = "1"
"WEBSITE_CONTENTOVERVNET" = "1"
"WEBSITE_DNS_SERVER" = "168.63.129.16"
}
}
It seems this is a common error message when you create an Azure Function whose storage account is added to the virtual network; read here for more details.
To resolve it, you can use the local-exec provisioner to invoke an az CLI command that denies the traffic again after all of the provisioning is finished:
az storage account update --name storage_account_name --resource-group resource_group_name --default-action Deny --bypass AzureServices Logging Metrics
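If you want Terraform to run that command for you, here is a minimal sketch using a null_resource with a local-exec provisioner (assuming the null provider is available, the az CLI is installed and authenticated on the machine running Terraform, and the resource names match the code above):
resource "null_resource" "reapply_sa_firewall" {
  # Re-run whenever the storage account is recreated.
  triggers = {
    storage_account_id = azurerm_storage_account.func_sa.id
  }

  provisioner "local-exec" {
    command = "az storage account update --name ${azurerm_storage_account.func_sa.name} --resource-group ${var.func_basics.resource_group_name} --default-action Deny --bypass AzureServices Logging Metrics"
  }

  depends_on = [azurerm_function_app.func]
}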
Alternatively, you can configure the storage account network rules as a separate resource. You may need to allow your client's IP to access the storage account:
resource "azurerm_storage_account_network_rules" "test" {
resource_group_name = var.resourceGroupName
storage_account_name = azurerm_storage_account.func_sa.name
default_action = "Deny"
bypass = [
"Metrics",
"Logging",
"AzureServices"
]
ip_rules = ["x.x.x.x"]
depends_on = [
azurerm_storage_account.func_sa,
azurerm_app_service_plan.svc_plan,
azurerm_application_insights.func_ai,
azurerm_virtual_network.func_vnet,
azurerm_function_app.func
]
}
In addition, there is a possible solution for this similar case on Github.
I've had this issue in the past and found that it can be resolved as follows. I've tested this on v3.3.0 of the provider using the azurerm_windows_function_app resource. I think this is currently an Azure problem: if you don't supply a content share, the platform tries to create one and is denied. You'd expect this to work if "Allow Azure services on the trusted services list to access this storage account" is enabled, but web apps aren't trusted.
1. Create your storage account with IP rules and a default action of Deny.
2. Create a file share within it for your function app content.
3. Within the function app, set the following configuration settings (see the sketch after this list):
WEBSITE_CONTENTAZUREFILECONNECTIONSTRING = <storage_account.primary_connection_string>
WEBSITE_CONTENTSHARE = <your share>
WEBSITE_CONTENTOVERVNET = 1
4. In the function's site configuration, set the attribute vnet_route_all_enabled = true.
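Translated into Terraform, a rough sketch of those steps might look like this (assuming a 3.x provider; the share name func-content and the azurerm_service_plan reference are placeholders to adapt to your own configuration):
resource "azurerm_storage_share" "func_content" {
  # Pre-create the content share so the platform never has to create it itself.
  name                 = "func-content"
  storage_account_name = azurerm_storage_account.func_sa.name
  quota                = 50
}

resource "azurerm_windows_function_app" "func" {
  name                       = "${var.func_basics.name}-func"
  location                   = var.func_basics.location
  resource_group_name        = var.func_basics.resource_group_name
  service_plan_id            = azurerm_service_plan.svc_plan.id # assumed plan resource
  storage_account_name       = azurerm_storage_account.func_sa.name
  storage_account_access_key = azurerm_storage_account.func_sa.primary_access_key

  app_settings = {
    WEBSITE_CONTENTAZUREFILECONNECTIONSTRING = azurerm_storage_account.func_sa.primary_connection_string
    WEBSITE_CONTENTSHARE                     = azurerm_storage_share.func_content.name
    WEBSITE_CONTENTOVERVNET                  = "1"
  }

  site_config {
    vnet_route_all_enabled = true
  }
}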

Terraform Azure Container Groups appear to have no way to mount multiple volumes?

When reviewing the documentation for Azure Container Groups, specifically this page on secrets: https://learn.microsoft.com/en-us/azure/container-instances/container-instances-volume-secret
I noticed the volumes object is an array of what appears to be one or more volumes.
"volumes": [
{
"name": "secretvolume1",
"secret": {
"mysecret1": "TXkgZmlyc3Qgc2VjcmV0IEZPTwo=",
"mysecret2": "TXkgc2Vjb25kIHNlY3JldCBCQVIK"
}
}
]
When reviewing the Terraform documentation here: https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/container_group
I noticed the volume block is singular.
Is it not possible to create multiple volumes in Terraform? Is this also not possible in, say, ARM, despite the documentation appearing to say so? Testing would indicate Terraform doesn't support multiple volumes, though I'm not proficient enough with ARM to verify.
Sure, it's possible to create multiple volumes with Terraform.
In my working sample below, two volumes are created: one for a storage file share, the other a secret volume.
resource "azurerm_resource_group" "example" {
name = "${var.prefix}-resources"
location = var.location
}
resource "azurerm_storage_account" "example" {
name = "${var.prefix}stor"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_storage_share" "example" {
name = "aci-test-share"
storage_account_name = azurerm_storage_account.example.name
quota = 50
}
resource "azurerm_container_group" "example" {
name = "${var.prefix}-continst"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
ip_address_type = "public"
dns_name_label = "${var.prefix}-continst"
os_type = "Linux"
container {
name = "hello-world"
image = "microsoft/aci-helloworld:latest"
cpu = "0.5"
memory = "1.5"
ports {
port = 443
protocol = "TCP"
}
volume {
name = "logs"
mount_path = "/aci/logs"
read_only = false
share_name = azurerm_storage_share.example.name
storage_account_name = azurerm_storage_account.example.name
storage_account_key = azurerm_storage_account.example.primary_access_key
}
volume {
name = "secretvolume1"
mount_path = "/mnt/secrets"
read_only = false
secret = {
"mysecret1"=base64encode("My first secret FOO")
"mysecret2"=base64encode("My second secret BAR")
}
}
}
}
I am using the latest provider.
PS D:\Terraform> .\terraform.exe -v
Terraform v0.14.7
+ provider registry.terraform.io/hashicorp/azurerm v2.48.0
Verify the mount path from the container instance -> Connect -> /bin/sh in the Azure portal.

Could not read output attribute from remote state datasource

I am new to Terraform, so I will attempt to explain to the best of my ability. Terraform will not read the output from the state file and use that value in another file.
I have searched the internet for everything I could find to see if anyone has had this problem and how they fixed it.
### vnet.tf
# Remote state pulling data from the bastion resource group state
data "terraform_remote_state" "network" {
  backend = "azurerm"
  config = {
    storage_account_name = "terraformstatetracking"
    container_name       = "bastionresourcegroups"
    key                  = "terraform.terraformstate"
  }
}

# Creating the virtual network and putting that network in the resource group created by the bastion .tf file
module "quannetwork" {
  source              = "Azure/network/azurerm"
  resource_group_name = "data.terraform_remote_state.network.outputs.quan_netwk"
  location            = "centralus"
  vnet_name           = "quan"
  address_space       = "10.0.0.0/16"
  subnet_prefixes     = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  subnet_names        = ["subnet1", "subnet2", "subnet3"]

  tags = {
    environment = "quan"
    costcenter  = "it"
  }
}

terraform {
  backend "azurerm" {
    storage_account_name = "terraformstatetracking"
    container_name       = "quannetwork"
    key                  = "terraform.terraformstate"
  }
}
### resourcegroups.tf
# Create a resource group
# Bastion
resource "azurerm_resource_group" "cm" {
  name     = "${var.prefix}cm.RG"
  location = "${var.location}"
  tags     = "${var.tags}"
}

# Bastion1
resource "azurerm_resource_group" "network" {
  name     = "${var.prefix}network.RG"
  location = "${var.location}"
  tags     = "${var.tags}"
}

# Bastion2
resource "azurerm_resource_group" "storage" {
  name     = "${var.prefix}storage.RG"
  location = "${var.location}"
  tags     = "${var.tags}"
}

terraform {
  backend "azurerm" {
    storage_account_name = "terraformstatetracking"
    container_name       = "bastionresourcegroups"
    key                  = "terraform.terraformstate"
  }
}
### outputs.tf
output "quan_netwk" {
  description = "Quan Network Resource Group"
  value       = "${azurerm_resource_group.network.id}"
}
When running the vnet.tf code, it should read the output from outputs.tf, which is stored in the Azure backend storage account state file, and use that value for the resource_group_name in the quannetwork module. Instead it creates a resource group named data.terraform_remote_state.network.outputs.quan_netwk. Any help would be greatly appreciated.
First, you need to pass the resource group name as a string to resource_group_name in your quannetwork module, not the resource group id.
Second, if you want to reference something from the remote state, do not just put it inside double quotes; the right format is:
resource_group_name = "${data.terraform_remote_state.network.outputs.quan_netwk}"
