Terraform cannot create a storage account in Azure

I have a Terraform script that used to create a storage account in Azure without issue, but today it started to return the error message:
azurerm_storage_account.testsa: 1 error(s) occurred:
* azurerm_storage_account.testsa: Error waiting for Azure Storage Account "terraformtesthubb" to be created: Future#WaitForCompletion: the number of retries has been exceeded: StatusCode=400 -- Original Error: Code="AadClientCredentialsGrantFailure" Message="Failure in AAD Client Credentials Grant Flow."
The trace logs don't show anything useful, and the term AadClientCredentialsGrantFailure literally returns nothing in Google. What is the cause?

Answering this one for myself because Google totally failed me.
This turned out to be an issue with Azure. Despite there being no errors listed in any of the status pages, the script would work in US West, but fail in US West 2.
After a few days this issue went away, so it was an intermittent Azure issue.
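If you hit a similar region-specific failure, one cheap sanity check is to parameterize the location so the same template can be retried in another region. A minimal sketch (the variable is my own addition, not part of the original script):
variable "location" {
  # e.g. switch "westus2" to "westus" to rule out a regional issue
  default = "westus2"
}

resource "azurerm_resource_group" "testrg" {
  name     = "terraformtesthub"
  location = var.location
}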
Edit
For reference, this was the script. Markers like #{Principal.TenantId} are replaced during template deployment.
provider "azurerm" {
client_id = "#{Principal.Client}"
client_secret = "#{Principal.Password}"
subscription_id = "#{Principal.SubscriptionNumber}"
tenant_id = "#{Principal.TenantId}"
}
resource "azurerm_resource_group" "testrg" {
name = "terraformtesthub#{Octopus.Environment.Name | ToLower}"
location = "#{Octopus.Environment.Name | ToLower}"
}
resource "azurerm_virtual_network" "test" {
name = "terraformtesthub#{Octopus.Environment.Name | ToLower}"
address_space = ["10.0.0.0/16"]
location = "${azurerm_resource_group.testrg.location}"
resource_group_name = "${azurerm_resource_group.testrg.name}"
}
resource "azurerm_subnet" "test" {
name = "terraformtesthub#{Octopus.Environment.Name | ToLower}"
resource_group_name = "${azurerm_resource_group.testrg.name}"
virtual_network_name = "${azurerm_virtual_network.test.name}"
address_prefix = "10.0.2.0/24"
service_endpoints = ["Microsoft.Sql", "Microsoft.Storage"]
}
resource "azurerm_storage_account" "testsa" {
name = "terraformtesthub#{Octopus.Environment.Name | ToLower}"
resource_group_name = "${azurerm_resource_group.testrg.name}"
location = "#{Octopus.Environment.Name | ToLower}"
account_tier = "Standard"
account_kind = "StorageV2"
account_replication_type = "RAGRS"
lifecycle {
prevent_destroy = true
}
network_rules {
ip_rules = ["100.0.0.1"]
virtual_network_subnet_ids = ["${azurerm_subnet.test.id}"]
}
}

Related

Creating a data lake causes error code 403 or 409: "checking for existence of existing File System" never finds it (Azure, Terraform)

I'm working on some Terraform code in which I call a storage account / data lake module from my main repo. The issue is that I keep getting various error codes across the runs I've made:
Error: checking for existence of existing File System "examplenameofdl" (Account "sadl123"): datalakestore.Client#GetProperties: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: error response cannot be parsed: {"" '\x00' '\x00'} error: EOF
and
checking for existence of existing File System "examplenameofdl" (Account "sadl123"): datalakestore.Client#GetProperties: Failure sending request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=
Here is the code I'm using for the storage account / data lake:
resource "azurerm_storage_account" "storage_account" {
name = var.name
location = var.location
resource_group_name = var.rg_name
tags = var.tags
account_tier = var.account_tier
account_replication_type = var.account_replication_type
account_kind = var.account_kind
access_tier = var.access_tier
is_hns_enabled = var.hnsenabled
network_rules {
default_action = var.default_network_action
ip_rules = var.ip_rules
virtual_network_subnet_ids = var.virtual_network_subnet_ids
bypass = var.bypass_network_rules
}
}
resource "azurerm_storage_data_lake_gen2_filesystem" "example" {
count = var.dlenabled ? 1 : 0 //this is to check if the data lake is needed
name = "examplenameofdl"
storage_account_id = azurerm_storage_account.storage_account.id
properties = {
hello = "aGVsbG8="
}
}
This module is then called from a separate repo that supplies the variables; the resource group is created by another module and passed in. Whenever it runs, it produces this error:
│ Error: checking for existence of existing File System "examplenameofdl" (Account "sadl123"): datalakestore.Client#GetProperties: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: error response cannot be parsed: {"" '\x00' '\x00'} error: EOF
│
│ with module.storage_account1["sa-01"].azurerm_storage_data_lake_gen2_filesystem.example[0],
│ on .terraform/modules/storage_account1/main.tf line 76, in resource "azurerm_storage_data_lake_gen2_filesystem" "example":
│ 76: resource "azurerm_storage_data_lake_gen2_filesystem" "example" {
It never creates the file system, so why is it checking for its existence? I've tried assigning Storage Blob Data Owner on the resource group:
resource "azurerm_role_assignment" "role_assignment" {
scope = azurerm_resource_group.spoke_rgs["sadl-rg-01"].id
role_definition_name = "Storage Blob Data Owner"
principal_id = data.azurerm_client_config.current.object_id
}
I've also tried assigning the same permissions on the storage account, but nothing gets rid of this error. I'm completely lost on how to proceed.
I tried this in my environment and got the results below.
Initially, I followed the same process and got the same error.
According to this document, creating a data lake file system requires a role such as Storage Blob Data Contributor, Storage Blob Data Owner, Storage Account Contributor, or Storage Blob Data Reader.
You can assign the role through the portal or via Terraform:
data "azurerm_subscription" "primary" {
}
data "azurerm_client_config" "example" {
}
resource "azurerm_role_assignment" "example" {
scope = data.azurerm_subscription.primary.id
role_definition_name = "storage blob data owner"
principal_id = data.azurerm_client_config.example.object_id
}
terraform.tf
provider "azurerm" {
features {}
}
resource "azurerm_resource_group" "example" {
name = "example-resources"
location = "West Europe"
}
resource "azurerm_virtual_network" "example" {
name = "example-vnet"
address_space = ["10.0.0.0/16"]
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
}
resource "azurerm_subnet" "example" {
name = "example-subnet"
resource_group_name = azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.example.name
address_prefixes = ["10.0.2.0/24"]
service_endpoints = ["Microsoft.Storage"]
}
resource "azurerm_storage_account" "example" {
name = "venky326"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
account_kind = "StorageV2"
is_hns_enabled = "true"
tags = {
environment = "staging"
}
}
resource "azurerm_storage_account_network_rules" "example" {
storage_account_id = azurerm_storage_account.example.id
default_action = "Allow"
ip_rules = ["127.0.0.1"]
virtual_network_subnet_ids = [azurerm_subnet.example.id]
bypass = ["Metrics"]
}
resource "azurerm_storage_data_lake_gen2_filesystem" "example" {
name = "datelake132"
storage_account_id = azurerm_storage_account.example.id
properties = {
hello = "aGVsbG8="
}
}
After assigning the role to the storage account, the code executed successfully.
checking for existence of existing File System "examplenameofdl" (Account "sadl123"): datalakestore.Client#GetProperties: Failure sending request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=
The above 409 error typically indicates that, within a short period of time, code in your application deleted a blob container and then immediately recreated one with the same name.
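If your runs do destroy and immediately recreate a file system with the same name, one hedged mitigation is a short delay via the hashicorp/time provider. A sketch (the duration and resource name are illustrative, and the delay only applies when the sleep resource itself is created):
resource "time_sleep" "wait_for_container_deletion" {
  # Crude buffer so the asynchronous container delete can finish
  # before a file system with the same name is recreated.
  create_duration = "60s"
}

resource "azurerm_storage_data_lake_gen2_filesystem" "example" {
  name               = "examplenameofdl"
  storage_account_id = azurerm_storage_account.storage_account.id
  depends_on         = [time_sleep.wait_for_container_deletion]
}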
Reference:
Troubleshoot client application errors in Azure Storage accounts | Microsoft Learn

Terraform - ADF to DB connectivity issue when tenant_id is provided in LS configuration - azurerm_data_factory_linked_service_azure_sql_database

Terraform Version
1.2.3
AzureRM Provider Version
v3.13.0
Affected Resource(s)/Data Source(s)
Azure data factory, SQL Database
Terraform Configuration Files
resource "azurerm_data_factory_linked_service_azure_sql_database" "sqldatabase_linked_service_10102022" {
count = (var.subResourcesInfo.sqlDatabaseName != "") ? 1 : 0
depends_on = [azurerm_data_factory_integration_runtime_azure.autoresolve_integration_runtime,
azurerm_data_factory_managed_private_endpoint.sqlserver_managed_endpoint]
name = "AzureSqlDatabase10102022"
data_factory_id = azurerm_data_factory.datafactory.id
integration_runtime_name = "AutoResolveIntegrationRuntime"
use_managed_identity = true
connection_string = format("Integrated Security=False;Data Source=%s.database.windows.net;Initial Catalog=%s;",
var.subResourcesInfo.sqlServerName,
var.subResourcesInfo.sqlDatabaseName)
}
Expected Behaviour
The issue is ADF-to-DB connectivity; the error is:
Operation on target DWH_DF_aaa failed: {'StatusCode':'DFExecutorUserError','Message':'Job failed due to reason: com.microsoft.dataflow.broker.InvalidOperationException: Only one valid authentication should be used for AzureSqlDatabase. ServicePrincipalAuthentication is invalid. One or two of servicePrincipalId/key/tenant is missing.','Details':''}
When we create this linked service using Terraform, the ADF linked service JSON contains tenant="", which we suspect is causing the error above.
When we create the same linked service directly in the ADF UI, there is no tenant="" field in its JSON, and if we use that linked service in a dataflow/pipeline, communication from ADF to the DB works.
The expected behavior: if we don't provide the tenant_id parameter in the Terraform code, the JSON should not contain tenant="" either, which would then allow connectivity to work.
I tried to reproduce the scenario in my environment.
With the code below, I could create a linked service (connection) between Azure SQL Database and Azure Data Factory.
Code:
resource "azurerm_data_factory" "example" {
name = "kaADFexample"
location = data.azurerm_resource_group.example.location
resource_group_name = data.azurerm_resource_group.example.name
managed_virtual_network_enabled = true
}
resource "azurerm_storage_account" "example" {
name = "kaaaexample"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
account_kind = "BlobStorage"
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_data_factory_managed_private_endpoint" "example" {
name = "example"
data_factory_id = azurerm_data_factory.example.id
target_resource_id = azurerm_storage_account.example.id
subresource_name = "blob"
}
resource "azurerm_user_assigned_identity" "main" {
depends_on = [data.azurerm_resource_group.example]
name = "kasupports01-mid"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
}
resource "azurerm_data_factory_integration_runtime_azure" "test" {
name = "AutoResolveIntegrationRuntime"
data_factory_id = azurerm_data_factory.example.id
location = "AutoResolve"
virtual_network_enabled = true
}
resource "azurerm_data_factory_linked_service_azure_sql_database" "linked_service_azure_sql_database" {
name = "kaexampleLS"
data_factory_id = azurerm_data_factory.example.id
connection_string = "data source=serverhostname;initial catalog=master;user id=testUser;Password=test;integrated security=False;encrypt=True;connection timeout=30"
use_managed_identity = true
integration_runtime_name = azurerm_data_factory_integration_runtime_azure.test.name
depends_on = [azurerm_data_factory_integration_runtime_azure.test,
azurerm_data_factory_managed_private_endpoint.example]
}
output "id" {
value = azurerm_data_factory_linked_service_azure_sql_database.linked_service_azure_sql_database.id
}
Executed: terraform plan
Output:
id = "/subscriptions/xxxxxxxxx/resourceGroups/xxxxxx/providers/Microsoft.DataFactory/factories/kaADFexample/linkedservices/kaexampleLS"
If the error persists in your case, try removing the tenant attribute in the data factory just after the Terraform deployment completes.
Please check this known issue mentioned by @chgenzel in the terraform-provider-azurerm issues on GitHub.
Reference: azurerm_data_factory_linked_service_azure_sql_database | Terraform Registry

Azure resource private endpoint creation error

I am trying to create a private endpoint for an Azure Function App using Terraform.
The code for the function app is:
resource "azurerm_resource_group" "example" {
name = "azure-functions-test-rg"
location = "West Europe"
}
resource "azurerm_storage_account" "example" {
name = "functionsapptestsa"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_app_service_plan" "example" {
name = "azure-functions-test-service-plan"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
sku {
tier = "PremiumContainer"
size = "P1"
}
}
resource "azurerm_function_app" "example" {
name = "test-azure-functions"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
app_service_plan_id = azurerm_app_service_plan.example.id
storage_account_name = azurerm_storage_account.example.name
storage_account_access_key = azurerm_storage_account.example.primary_access_key
}
This all works fine; the function app gets created. I am trying to create a private endpoint to this function app with the following code:
resource "azurerm_private_endpoint" "examplepe" {
name = "example-endpoint"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
subnet_id = azurerm_subnet.endpoint.id #dummy data
private_service_connection {
name = "example-privateserviceconnection"
is_manual_connection = false
private_connection_resource_id = azurerm_function_app.example.id
subresource_names = ["blob"]
}
}
The error I am getting is: Error creating private endpoint "resource name"... failure sending request: StatusCode=0 -- Original Error: Code="BadRequest" Message="Call to Microsoft.Web/sites failed. Error message: GroupId is invalid." Details=[]
Thanks
The issue was that an incorrect subresource name was selected.
Resource Type                   SubResource Name    Secondary SubResource Name
Data Lake File System Gen2      dfs                 dfs_secondary
Sql Database / Data Warehouse   sqlServer
Storage Account                 blob                blob_secondary
Storage Account                 file                file_secondary
Storage Account                 queue               queue_secondary
Storage Account                 table               table_secondary
Storage Account                 web                 web_secondary
Web App / Function App          sites
Web App / Function App Slots    sites-<slotName>
The valid subresource names are documented here:
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/private_endpoint#subresource_names
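Based on that table, a corrected version of the private endpoint from the question would use the sites subresource (a sketch, otherwise unchanged from the original):
resource "azurerm_private_endpoint" "examplepe" {
  name                = "example-endpoint"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  subnet_id           = azurerm_subnet.endpoint.id # dummy data, as in the question

  private_service_connection {
    name                           = "example-privateserviceconnection"
    is_manual_connection           = false
    private_connection_resource_id = azurerm_function_app.example.id
    # "sites" is the group ID for Web Apps / Function Apps;
    # "blob" is only valid for storage accounts.
    subresource_names              = ["sites"]
  }
}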

Terraform 403 error when creating function app and storage account with private endpoint

I am getting a 403 Forbidden when creating a function app that connects to its storage account via private endpoint inside a vnet. The storage account has a firewall default action of 'Deny', and of course it works if I set it to 'Allow'; I want it as 'Deny', however. According to this Microsoft link, if the function app and storage account are created in the same region with vnet, subnets, and private endpoints, then it's supposed to work, so I must be doing something wrong. I also tried changing the region for the storage account and it still resulted in a 403.
Error:
Error: web.AppsClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: Code="BadRequest" Message="There was a conflict. The remote server returned an error: (403) Forbidden." Details=[{"Message":"There was a conflict. The remote server returned an error: (403) Forbidden."},{"Code":"BadRequest"},{"ErrorEntity":{"Code":"BadRequest","ExtendedCode":"01020","Message":"There was a conflict. The remote server returned an error: (403) Forbidden.","MessageTemplate":"There was a conflict. {0}","Parameters":["The remote server returned an error: (403) Forbidden."]}}]
Here is my Terraform code:
resource "azurerm_function_app" "func" {
name = "${var.func_basics.name}-func"
location = var.func_basics.location
resource_group_name = var.func_basics.resource_group_name
app_service_plan_id = azurerm_app_service_plan.svc_plan.id
storage_account_name = azurerm_storage_account.func_sa.name
storage_account_access_key = azurerm_storage_account.func_sa.primary_access_key
version = var.runtime_version
https_only = true
depends_on = [
azurerm_storage_account.func_sa,
azurerm_app_service_plan.svc_plan,
azurerm_application_insights.func_ai,
azurerm_virtual_network.func_vnet
]
app_settings = merge(var.app_settings, local.additional_app_settings)
}
resource "azurerm_app_service_plan" "svc_plan" {
name = "${var.func_basics.name}-func-plan"
location = var.func_basics.location
resource_group_name = var.func_basics.resource_group_name
kind = "elastic"
sku {
tier = "ElasticPremium"
size = "EP1"
}
}
resource "azurerm_application_insights" "func_ai" {
name = "${var.func_basics.name}-func-appi"
location = var.func_basics.location
resource_group_name = var.func_basics.resource_group_name
application_type = var.ai_app_type
}
resource "azurerm_storage_account" "func_sa" {
name = "st${lower(replace(var.func_basics.name, "/[-_]*/", ""))}"
resource_group_name = var.func_basics.resource_group_name
location = var.func_basics.location
account_tier = var.sa_settings.tier
account_replication_type = var.sa_settings.replication_type
account_kind = "StorageV2"
enable_https_traffic_only = true
min_tls_version = "TLS1_2"
depends_on = [
azurerm_virtual_network.func_vnet
]
network_rules {
default_action = "Deny"
virtual_network_subnet_ids = [azurerm_subnet.func_endpoint_subnet.id]
bypass = [
"Metrics",
"Logging",
"AzureServices"
]
}
}
resource "azurerm_virtual_network" "func_vnet" {
name = "${var.func_basics.name}-func-vnet"
resource_group_name = var.func_basics.resource_group_name
location = var.func_basics.location
address_space = ["10.0.0.0/16"]
}
resource "azurerm_subnet" "func_service_subnet" {
name = "${var.func_basics.name}-func-svc-snet"
resource_group_name = var.func_basics.resource_group_name
virtual_network_name = azurerm_virtual_network.func_vnet.name
address_prefixes = ["10.0.1.0/24"]
enforce_private_link_service_network_policies = true
service_endpoints = ["Microsoft.Storage"]
delegation {
name = "${var.func_basics.name}-func-del"
service_delegation {
name = "Microsoft.Web/serverFarms"
actions = ["Microsoft.Network/virtualNetworks/subnets/action"]
}
}
}
resource "azurerm_subnet" "func_endpoint_subnet" {
name = "${var.func_basics.name}-func-end-snet"
resource_group_name = var.func_basics.resource_group_name
virtual_network_name = azurerm_virtual_network.func_vnet.name
address_prefixes = ["10.0.2.0/24"]
enforce_private_link_endpoint_network_policies = true
}
resource "azurerm_private_endpoint" "func_req_sa_blob_endpoint" {
name = "${var.func_basics.name}-func-req-sa-blob-end"
resource_group_name = var.func_basics.resource_group_name
location = var.func_basics.location
subnet_id = azurerm_subnet.func_endpoint_subnet.id
private_service_connection {
name = "${var.func_basics.name}-func-req-sa-blob-pscon"
private_connection_resource_id = azurerm_storage_account.func_sa.id
is_manual_connection = false
subresource_names = ["blob"]
}
}
resource "azurerm_private_endpoint" "func_req_sa_file_endpoint" {
name = "${var.func_basics.name}-func-req-sa-file-end"
resource_group_name = var.func_basics.resource_group_name
location = var.func_basics.location
subnet_id = azurerm_subnet.func_endpoint_subnet.id
private_service_connection {
name = "${var.func_basics.name}-func-req-sa-file-pscon"
private_connection_resource_id = azurerm_storage_account.func_sa.id
is_manual_connection = false
subresource_names = ["file"]
}
}
resource "azurerm_app_service_virtual_network_swift_connection" "func_vnet_swift" {
app_service_id = azurerm_function_app.func.id
subnet_id = azurerm_subnet.func_service_subnet.id
}
locals {
additional_app_settings = {
"APPINSIGHTS_INSTRUMENTATIONKEY" = azurerm_application_insights.func_ai.instrumentation_key
"WEBSITE_CONTENTAZUREFILECONNECTIONSTRING" = azurerm_storage_account.func_sa.primary_connection_string
"AzureWebJobsStorage" = azurerm_storage_account.func_sa.primary_connection_string
"WEBSITE_VNET_ROUTE_ALL" = "1"
"WEBSITE_CONTENTOVERVNET" = "1"
"WEBSITE_DNS_SERVER" = "168.63.129.16"
}
}
This seems to be a common error message when you create an Azure Function whose storage account is added to the virtual network; read here for more details.
To resolve it, you can use the local-exec provisioner to invoke the az CLI and re-apply the Deny default action after all of the provisioning has finished:
az storage account update --name storage_account_name --resource-group resource_group_name --default-action Deny --bypass AzureServices Logging Metrics
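A hedged sketch of wiring that command into Terraform via a null_resource (the resource name is my own; the command assumes an authenticated az CLI on the machine running Terraform, and reuses the question's resource names):
resource "null_resource" "lock_down_storage" {
  # Run only after the function app exists, so the initial content
  # share creation is not blocked by the Deny rule.
  depends_on = [azurerm_function_app.func]

  provisioner "local-exec" {
    command = "az storage account update --name ${azurerm_storage_account.func_sa.name} --resource-group ${var.func_basics.resource_group_name} --default-action Deny --bypass AzureServices Logging Metrics"
  }
}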
Alternatively, you can separately configure the storage account network rules. You may need to allow your client's IP to access the storage account.
resource "azurerm_storage_account_network_rules" "test" {
resource_group_name = var.resourceGroupName
storage_account_name = azurerm_storage_account.func_sa.name
default_action = "Deny"
bypass = [
"Metrics",
"Logging",
"AzureServices"
]
ip_rules = ["x.x.x.x"]
depends_on = [
azurerm_storage_account.func_sa,
azurerm_app_service_plan.svc_plan,
azurerm_application_insights.func_ai,
azurerm_virtual_network.func_vnet,
azurerm_function_app.func
]
}
In addition, there is a possible solution for this similar case on Github.
I've had this issue in the past and found that it can be resolved as follows (tested on v3.3.0 of the provider using the azurerm_windows_function_app resource). I think this is currently an Azure problem: if you don't supply a share, the platform will try to create one but will be denied. You'd expect this to work if "Allow Azure services on the trusted services list to access this storage account" is enabled, but web apps aren't trusted. The steps (see the sketch after this list):
1. Create your storage account with IP rules and a default action of Deny.
2. Create a share within it for your function app content.
3. Within the function app, set the following configuration settings:
WEBSITE_CONTENTAZUREFILECONNECTIONSTRING = <storage_account.primary_connection_string>
WEBSITE_CONTENTSHARE = <your share>
WEBSITE_CONTENTOVERVNET = 1
4. In the function's site configuration, set the attribute vnet_route_all_enabled = true.
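A minimal sketch of those four steps on azurerm v3; the share name and the service plan / storage account references are illustrative and assumed to exist elsewhere in the configuration:
# Step 2: pre-create the content share so the platform doesn't have to.
resource "azurerm_storage_share" "func_content" {
  name                 = "func-content" # illustrative name
  storage_account_name = azurerm_storage_account.func_sa.name
  quota                = 50
}

resource "azurerm_windows_function_app" "func" {
  name                       = "example-func" # illustrative name
  location                   = azurerm_resource_group.rg.location
  resource_group_name        = azurerm_resource_group.rg.name
  service_plan_id            = azurerm_service_plan.plan.id # assumed to exist
  storage_account_name       = azurerm_storage_account.func_sa.name
  storage_account_access_key = azurerm_storage_account.func_sa.primary_access_key

  # Step 3: point the app at the pre-created share, reached over the vnet.
  app_settings = {
    WEBSITE_CONTENTAZUREFILECONNECTIONSTRING = azurerm_storage_account.func_sa.primary_connection_string
    WEBSITE_CONTENTSHARE                     = azurerm_storage_share.func_content.name
    WEBSITE_CONTENTOVERVNET                  = "1"
  }

  # Step 4: route all outbound traffic through the vnet.
  site_config {
    vnet_route_all_enabled = true
  }
}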

Error on adding a storage share to the Azure storage account

I'm getting the following error on running terraform apply after adding an azurerm_storage_share.
Error: Error checking for existence of existing Storage Share "fileshare"
(Account "sttestforaddingfileshare" / Resource Group "resources"):
shares.Client#GetProperties: Failure responding to request: StatusCode=403
-- Original Error: autorest/azure: Service returned an error.
Status=403 Code="AuthorizationFailure"
Message="This request is not authorized to perform this operation.
\nRequestId:188ae38b-e01a-000b-35b3-a32ea2000000
\nTime:2020-10-16T11:55:16.7337008Z"
I think the reason is most likely that Terraform tries to list the existing file shares in the storage account by accessing the storage account's data-plane REST API directly, instead of going through Azure Resource Manager's REST API.
It fails because the firewall rules don't include the IP of the host Terraform runs on. When I add my laptop's IP to the firewall rules, it works, but that's not the desired behavior.
Does anyone know a workaround? Any help is appreciated.
My TF configuration is as follows:
provider "azurerm" {
version = "= 2.32.0"
features {}
}
resource "azurerm_resource_group" "rg" {
name = "resources"
location = var.location
}
resource "azurerm_virtual_network" "vnet" {
name = "vnet"
location = var.location
resource_group_name = azurerm_resource_group.rg.name
address_space = ["10.0.0.0/16"]
}
resource "azurerm_subnet" "snet" {
name = "snet"
resource_group_name = azurerm_resource_group.rg.name
virtual_network_name = azurerm_virtual_network.vnet.name
address_prefixes = ["10.0.1.0/24"]
service_endpoints = [ "Microsoft.Storage" ]
}
resource "azurerm_storage_account" "storage" {
name = "sttestforaddingfileshare"
resource_group_name = azurerm_resource_group.rg.name
location = var.location
account_tier = "Standard"
account_replication_type = "LRS"
network_rules {
default_action = "Deny"
virtual_network_subnet_ids = [ azurerm_subnet.snet.id ]
bypass = [ "None" ]
}
}
resource "azurerm_storage_share" "file_share" {
name = "fileshare"
storage_account_name = azurerm_storage_account.storage.name
quota = 100
}
You can use the azurerm_storage_account_network_rules resource to define the Network Rules and remove the Network Rules block defined directly on the azurerm_storage_account resource.
Alternatively, you can create your file share via the az CLI instead of the separate azurerm_storage_share resource.
I validated with the following versions:
PS D:\Terraform> .\terraform.exe -v
Terraform v0.13.4
+ provider registry.terraform.io/hashicorp/azurerm v2.32.0
With these, both terraform apply and terraform destroy worked.
resource "azurerm_storage_account" "storage" {
name = "nnnstore1"
resource_group_name = azurerm_resource_group.rg.name
location = var.location
account_tier = "Standard"
account_replication_type = "LRS"
provisioner "local-exec" {
command =<<EOT
az storage share create `
--account-name ${azurerm_storage_account.storage.name} `
--account-key ${azurerm_storage_account.storage.primary_access_key} `
--name ${var.myshare} `
--quota 100
EOT
interpreter = [ "Powershell", "-c"]
}
}
resource "azurerm_storage_account_network_rules" "test" {
resource_group_name = azurerm_resource_group.rg.name
storage_account_name = azurerm_storage_account.storage.name
default_action = "Deny"
virtual_network_subnet_ids = [azurerm_subnet.snet.id]
bypass = ["None"]
}
I recently ran into this issue when attempting to create a storage share for a container group. It was pretty much identical code to yours, but with the additional container group.
I hit the issue when deploying the stack as new, and I bypassed the error by deploying everything except the storage share component and all references to it.
Then, once that completed, I introduced the storage share and redeployed without issue.
Crappy workaround, but it's deployed again.
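If you'd rather not edit code between the two runs, a hedged sketch of the same staging with a feature flag (the variable is my own invention):
variable "enable_file_share" {
  type    = bool
  default = false # first apply creates everything else; flip to true and apply again
}

resource "azurerm_storage_share" "file_share" {
  count                = var.enable_file_share ? 1 : 0
  name                 = "fileshare"
  storage_account_name = azurerm_storage_account.storage.name
  quota                = 100
}
Run terraform apply once, then terraform apply -var="enable_file_share=true" after the account and its network rules exist.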
