Unable to create a blob in a container - azure

Scenario: I'm unable to deploy a blob into a container that is created in a storage account read in as a data source. An obscure error is produced in the GitHub Actions workflow.
Error:
Error: creating Blob "xrdpdeploy.sh" (Container "morpheus-tinkering-csecontainer" / Account "***"): opening: open # update available packages
[ERROR] provider.terraform-provider-azurerm_v3.35.0_x5: Response contains error diagnostic: #caller=github.com/hashicorp/terraform-plugin-go#v0.14.1/tfprotov5/internal/diag/diagnostics.go:55 #module=sdk.proto diagnostic_detail= diagnostic_severity=ERROR tf_proto_version=5.3 diagnostic_summary="creating Blob "xrdpdeploy.sh" (Container "morpheus-tinkering-csecontainer" / Account "***"): opening: open # update available packages
Hi folks, given the above scenario I am stuck with, do you have any suggestions to help me upload the blob object correctly? The Terraform config is shown below:
resource "azurerm_storage_container" "cse_container" {
name = "${local.naming_prefix}csecontainer"
storage_account_name = data.azurerm_storage_account.storage_account.name
container_access_type = "blob"
}
#-------------------------------------------------------------
# storage container blob script used to run as a custom script extension
resource "azurerm_storage_blob" "cse_blob" {
name = "xrdpdeploy.sh"
storage_account_name = data.azurerm_storage_account.storage_account.name
storage_container_name = azurerm_storage_container.cse_container.name
type = "Block"
access_tier = "Hot"
# absolute path to file on local system
source = file("${path.module}/cse-script/xrdpdeploy.sh")
#explicit dependency on storage container to be deployed first prior to uploading blobk blob
depends_on = [
azurerm_storage_container.cse_container
]
}
#-------------------------------------------------------------
data "azurerm_storage_account" "storage_account" {
name = "morpheuszcit10394"
resource_group_name = "zimcanit-morpheus-tinkering-rg"
}
#-------------------------------------------------------------
I'm using Azure Terraform provider version 3.35.0.
What I've done: ensured the container is being created correctly, explicitly set the access tier, and even tried dropping the AzureRM provider version to 3.30.0.
Thanks in advance!
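For what it's worth, the "opening: open # update available packages" fragment suggests the provider is being handed the script's contents (whose first line appears to be "# update available packages") where it expects a file path, since the blob resource's source argument takes a path while file() returns the file's contents. A minimal sketch under that assumption, passing the path directly (or source_content if the contents themselves are wanted):

resource "azurerm_storage_blob" "cse_blob" {
  name                   = "xrdpdeploy.sh"
  storage_account_name   = data.azurerm_storage_account.storage_account.name
  storage_container_name = azurerm_storage_container.cse_container.name
  type                   = "Block"
  access_tier            = "Hot"
  # pass the path itself; the provider opens and uploads the file
  source = "${path.module}/cse-script/xrdpdeploy.sh"
  # or, instead of source, upload the file's contents as a string:
  # source_content = file("${path.module}/cse-script/xrdpdeploy.sh")
}

The explicit depends_on would also be redundant in this sketch, since referencing azurerm_storage_container.cse_container.name already creates the dependency.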

Related

A resource with the ID "/subscriptions/.../resourceGroups/rgaks/providers/Microsoft.Storage/storageAccounts/aksbackupstorage" already exists

I have created a storage account and a container inside it to store my AKS backup using Terraform. I created a child module for the storage account and the container, and I create them by calling the module from the root module's "main.tf". I have created two modules, e.g. "module aks_backup_storage" and "module aks_backup_container". The modules are created successfully after running "terraform apply", but at the end it raises the errors shown below in the console.
A resource with the ID "/subscriptions/...../resourceGroups/rg-aks-backup-storage/providers/Microsoft.Storage/storageAccounts/aksbackupstorage" already exists - to be managed via Terraform this resource needs to be imported into the State. Please see the resource documentation for "azurerm_storage_account" for more information.
failed creating container: failed creating container: containers.Client#Create: Failure sending request: StatusCode=409 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="ContainerAlreadyExists" Message="The specified container already exists.\nRequestId:f.........\nTime:2022-12-28T12:52:08.2075701Z"
Root module
module "aks_backup_storage" {
source = "../modules/aks_pv_storage_container"
rg_aks_backup_storage = var.rg_aks_backup_storage
aks_backup_storage_account = var.aks_backup_storage_account
aks_backup_container = var.aks_backup_container
rg_aks_backup_storage_location = var.rg_aks_backup_storage_location
aks_backup_retention_days = var.aks_backup_retention_days
}
Child module
resource "azurerm_resource_group" "rg_aksbackup" {
name = var.rg_aks_backup_storage
location = var.rg_aks_backup_storage_location
}
resource "azurerm_storage_account" "aks_backup_storage" {
name = var.aks_backup_storage_account
resource_group_name = var.rg_aks_backup_storage
location = var.rg_aks_backup_storage_location
account_kind = "StorageV2"
account_tier = "Standard"
account_replication_type = "ZRS"
access_tier = "Hot"
enable_https_traffic_only = true
min_tls_version = "TLS1_2"
#allow_blob_public_access = false
allow_nested_items_to_be_public = false
is_hns_enabled = false
blob_properties {
container_delete_retention_policy {
days = var.aks_backup_retention_days
}
delete_retention_policy {
days = var.aks_backup_retention_days
}
}
}
# Different containers can be created for different backup levels, such as cluster, namespace, PV
resource "azurerm_storage_container" "aks_backup_container" {
  #name                 = "aks-backup-container"
  name                 = var.aks_backup_container
  #storage_account_name = azurerm_storage_account.aks_backup_storage.name
  storage_account_name = var.aks_backup_storage_account
}
I have also tried to import the resource using the below command:
terraform import ['azurerm_storage_account.aks_backup_storage /subscriptions/a3ae2713-0218-47a2-bb72-c6198f50c56f/resourceGroups/rg-aks-backup-storage/providers/Microsoft.Storage/storageAccounts/aksbackupstorage']
But it fails with the following zsh error:
zsh: no matches found: [azurerm_storage_account.aks_backup_storage /subscriptions/a3ae2713-0218-47a2-bb72-c6198f50c56f/resourceGroups/rg-aks-backup-storage/providers/Microsoft.Storage/storageAccounts/aksbackupstorage/]
I had no issue when I was creating the resources using the same code without declaring any modules.
Now I have several modules called from the root module's main.tf file.
I really appreciate any suggestions. Thanks in advance!
variable.tf
variable "rg_aks_backup_storage" {
type = string
description = "storage account name for the backup"
default = "rg-aks-backup-storage"
}
variable "aks_backup_storage_account" {
type = string
description = "storage account name for the backup"
default = "aksbackupstorage"
}
variable "aks_backup_container" {
type = string
description = "storage container name "
#default = "aks-storage-container"
default = "aksbackupstoragecontaine"
}
variable "rg_aks_backup_storage_location" {
type = string
default = "westeurope"
}
variable "aks_backup_retention_days" {
type = number
default = 90
}
The storage account name that you use must be unique within Azure (see naming restrictions). I checked, and the default storage account name that you are using is already taken. Have you tried changing the name to something you know is unique?
A consistent way to do this would be to add a random suffix at the end of the name, e.g.:
resource "random_string" "random_suffix" {
length = 6
special = false
upper = false
}
resource "azurerm_storage_account" "aks_backup_storage" {
name = join("", tolist([var.aks_backup_storage_account, random_string.random_suffix.result]))
...
}
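As a side note, a 6-character suffix keeps the result within Azure's storage account naming rules (3 to 24 characters, lowercase letters and numbers only), and a plain interpolation such as "${var.aks_backup_storage_account}${random_string.random_suffix.result}" works just as well as the join/tolist construction.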
I also received the same error when I tried to run terraform apply while creating a container registry.
It usually occurs when the local Terraform state file does not match the resources that actually exist in Azure.
Even if a resource with the same name is not visible in the portal or the resource group, it may still exist because it was deployed previously. If you run into this type of issue, compare your tf state file with what is in the portal; if the resource exists in Azure but is missing from the state, use the following command to import it.
Note: validate that the Terraform state files are consistent, and run terraform init and terraform apply once you are done with the changes.
To resolve this error, use terraform import.
Here I imported a container registry, as an example, and it imported successfully:
terraform import azurerm_container_registry.acr "/subscriptions/<subscriptionID>/resourceGroups/<resourceGroup>/providers/Microsoft.ContainerRegistry/registries/xxxxcontainerRegistry1"
After that I ran terraform apply and the resource deployed successfully in the portal without any errors.
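Applied to the storage account from the question, the import would look something like the line below (a sketch only). The square brackets in the earlier attempt are what zsh chokes on, since [...] is treated as a glob pattern, and because the resource lives in a child module the address likely needs the module prefix:
terraform import module.aks_backup_storage.azurerm_storage_account.aks_backup_storage "/subscriptions/a3ae2713-0218-47a2-bb72-c6198f50c56f/resourceGroups/rg-aks-backup-storage/providers/Microsoft.Storage/storageAccounts/aksbackupstorage"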

Unable to create Storage Sync Cloud Endpoint

When I try to create a cloud endpoint from a Terraform script in Azure I get the following error:
Error: waiting for creation of Storage Sync Cloud Endpoint: (Cloud Endpoint Name “azbackup001zscallerc-file-sync-grp-CE” / Sync Group Name “azbackup001zscallerc-file-sync-grp” / Storage Sync Service Name “azbackup001zscallerc-file-sync” / Resource Group “RG”): Code=“-2134364065” Message=“Unable to read specified storage account. Please check the permissions and try again after some time.”
However, when I create the same thing from the Azure portal I am able to do so without any issues. I have checked all my permissions, and even with a global admin account I am unable to do it. Please assist with a possible solution.
Please assist with checking the permission issue, as I can do the same thing from the az CLI as well as PowerShell.
Since it fails even with a global admin account, check the permissions set up for the Storage Sync service that the cloud endpoint depends on when it is created.
See Storage Sync service errors: make sure Azure File Sync has access to the storage account.
resource "azurerm_storage_sync" "example" {
name = "kaexample-ss"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
}
resource "azurerm_storage_sync_group" "example" {
name = "kaexample-ss-group"
storage_sync_id = azurerm_storage_sync.example.id
}
resource "azurerm_storage_account" "example" {
name = "kaaexample"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_storage_share" "example" {
name = "kaexample-share"
storage_account_name = azurerm_storage_account.example.name
quota = 50
acl {
id = "GhostedRecall"
access_policy {
permissions = "r"
}
}
}
resource "azurerm_storage_sync_cloud_endpoint" "example" {
name = "example-ss-ce"
storage_sync_group_id = azurerm_storage_sync_group.example.id
file_share_name = azurerm_storage_share.example.name
storage_account_id = azurerm_storage_account.example.id
}
Please check this GitHub issue: Az.StorageSync: Cloud endpoint creation access rights failure.
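If the missing piece is the storage account access mentioned above, a sketch of granting it in Terraform is shown below. This is my assumption rather than part of the answer: it presumes the azuread provider is configured, that the Azure File Sync service principal is registered in the tenant under the display name "Microsoft.StorageSync", and that the "Reader and Data Access" role is the one required on the storage account.

# look up the Azure File Sync (Storage Sync) service principal in the tenant
data "azuread_service_principal" "storage_sync" {
  display_name = "Microsoft.StorageSync"
}
# grant it access to the storage account backing the synced file share
resource "azurerm_role_assignment" "storage_sync_access" {
  scope                = azurerm_storage_account.example.id
  role_definition_name = "Reader and Data Access"
  principal_id         = data.azuread_service_principal.storage_sync.object_id
}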

Can I deploy an Elastic Beanstalk application from a local ZIP file not a S3 object with Terraform?

Is there a way to deploy an AWS Elastic Beanstalk Node.js application based on a local ZIP file with Terraform?
All examples I have seen are S3 based.
Here is my code so far:
resource "aws_iam_policy" "nodejs" {
name = "NodeJSPolicy"
policy = file("policy.json")
}
resource "aws_iam_role" "nodejs" {
name = "iam_for_lambda"
assume_role_policy = file("assumerole.json")
}
resource "aws_iam_role_policy_attachment" "nodejs" {
policy_arn = aws_iam_policy.nodejs.arn
role = aws_iam_role.nodejs.name
}
data "archive_file" "package"{
type = "zip"
source_file = "../app"
output_path = "../build/package.zip"
}
resource "aws_elastic_beanstalk_application" "nodejs" {
name = "nodejs-app"
description = "Noodle JP Application"
}
resource "aws_elastic_beanstalk_application_version" "nodejs" {
name = "v0.01"
application = aws_elastic_beanstalk_application.nodejs.name
//**??????? HOW CAN I HAVE THE SOURCE HERE?**
}
resource "aws_elastic_beanstalk_environment" "nodejs" {
application = aws_elastic_beanstalk_application.nodejs.name
name = "noodle-jp"
solution_stack_name = "Node.js 14 AL2 version 5.4.6"
}
Is there a way to deploy an AWS Elastic Beanstalk Node.js application based on a local ZIP file with Terraform?
No, this is not currently supported by Terraform.
The aws_elastic_beanstalk_application_version resource is what points to the application source bundle.
It only takes bucket and key, the S3 bucket name and S3 object name respectively, for defining the source:
bucket - (Required) S3 bucket that contains the Application Version source bundle.
key - (Required) S3 object that is the Application Version source bundle.
It does not support defining a local path.
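A common workaround, sketched here rather than taken from the answer above, is to have Terraform upload the locally built archive to S3 with an aws_s3_object and point the application version at that object. The bucket name and object key below are illustrative; the data.archive_file.package reference comes from the question's own config:

resource "aws_s3_bucket" "deploy" {
  bucket = "nodejs-app-deploy-bundles" # illustrative name; must be globally unique
}
resource "aws_s3_object" "package" {
  bucket = aws_s3_bucket.deploy.id
  key    = "package-v0.01.zip"
  source = data.archive_file.package.output_path
  etag   = data.archive_file.package.output_md5 # re-upload when the archive changes
}
resource "aws_elastic_beanstalk_application_version" "nodejs" {
  name        = "v0.01"
  application = aws_elastic_beanstalk_application.nodejs.name
  bucket      = aws_s3_object.package.bucket
  key         = aws_s3_object.package.key
}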

terraform backend state file storage using keys instead of AD account

It appears that Terraform uses account keys for backend state files when persisting to an Azure storage account. I wish to use a single storage account with dedicated folders for different service principals, but without cross-folder write access. I am trying to avoid accidental overwrites of the state files by different service principals. But since Terraform uses the keys to update the storage account, every service principal technically has rights to update every file, and the developer would have to take care not to accidentally reference the wrong state file. Any thoughts on how to protect against this?
You can use a SAS token generated for a container so that it can be used by that service principal only and by no other service principals.
I tested with something like below:
data "terraform_remote_state" "foo" {
backend = "azurerm"
config = {
storage_account_name = "cloudshellansuman"
container_name = "test"
key = "prod.terraform.tfstate"
sas_token = "sp=racwdl&st=2021-09-28T05:49:01Z&se=2023-04-01T13:49:01Z&sv=2020-08-04&sr=c&sig=O87nHO01sPxxxxxxxxxxxxxsyQGQGLSYzlp6F8%3D"
}
}
provider "azurerm" {
features {}
use_msi = true
subscription_id = "948d4068-xxxxx-xxxxxx-xxxxxxxxxx"
tenant_id = "72f988bf-xxxx-xxxxx-xxxxxx-xxxxxxx"
}
resource "azurerm_resource_group" "test" {
name="xterraformtest12345"
location ="east us"
}
But if I change the container name to another container, then I can't write: it errors out saying the authentication failed, because the SAS token is for the test container, not the test1 container.
For more information on how to generate SAS tokens for containers and how to configure the azurerm backend for Terraform, please refer to the links below:
Generate shared access signature (SAS) token for containers and blobs with Azure portal. | Microsoft Docs
Use Azure storage for Terraform remote state
OR
You can set the container's authentication method to Azure AD user account, after assigning the Storage Blob Data Contributor/Owner role to the service principal that will use that specific container.
Then you can use something like below:
data "terraform_remote_state" "foo" {
backend = "azurerm"
config = {
storage_account_name = "cloudshellansuman"
container_name = "test1"
key = "prod.terraform.tfstate"
subscription_id = "b83c1ed3-xxxx-xxxxxx-xxxxxxx"
tenant_id = "72f988bf-xxx-xxx-xxx-xxx-xxxxxx"
client_id = "f6a2f33d-xxxx-xxxx-xxx-xxxxx"
client_secret = "y5L7Q~oiMOoGCxm7fK~xxxxxxxxxxxxxxxxx"
use_azuread_auth =true
}
}
provider "azurerm"{
subscription_id = "b83c1ed3-xxxx-xxxxxx-xxxxxxx"
tenant_id = "72f988bf-xxx-xxx-xxx-xxx-xxxxxx"
client_id = "f6a2f33d-xxxx-xxxx-xxx-xxxxx"
client_secret = "y5L7Q~oiMOoGCxm7fK~xxxxxxxxxxxxxxxxx"
features {}
}
data "azurerm_resource_group" "test" {
name="resourcegroupname"
}
resource "azurerm_virtual_network" "example" {
name = "example-network"
resource_group_name = data.azurerm_resource_group.test.name
location = data.azurerm_resource_group.test.location
address_space = ["10.254.0.0/16"]
}
If the service principal doesn't have the role assigned for that container, the request fails with an authorization error.
Note: for the first scenario I used a system-assigned managed identity, but the same can be achieved with a service principal as well.
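For completeness, a sketch (my assumption, not shown in the answer) of how that container-scoped role assignment could be expressed in Terraform; the principal_id below is a placeholder for the service principal's object ID:

resource "azurerm_storage_container" "state" {
  name                 = "test1"
  storage_account_name = "cloudshellansuman"
}
# data-plane access on this one container only
resource "azurerm_role_assignment" "state_container_writer" {
  scope                = azurerm_storage_container.state.resource_manager_id
  role_definition_name = "Storage Blob Data Contributor"
  principal_id         = "00000000-0000-0000-0000-000000000000" # placeholder service principal object ID
}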

Cannot access Azure backend storage using SSL

I am using Azure Blob Storage as a state backend. Due to new security requirements, I now need to access the Azure storage accounts using SSL. This however fails with the following:
module.core_infra.data.terraform_remote_state.mccp_core_infra:
data.terraform_remote_state.mccp_core_infra: storage: service returned
error: StatusCode=403, ErrorCode=AuthenticationFailed,
ErrorMessage=Server failed to authenticate the request. Make sure the
value of Authorization header is formed correctly including the
signature.
Here’s an example configuration:
resource "azurerm_storage_account" "terraform_state_account" {
name = "${lower(replace(var.azure_tenant_name, "/\\W|_/", ""))}tfstate"
resource_group_name = "${azurerm_resource_group.main.name}"
location = "${var.azure_location}"
account_tier = "Standard"
account_replication_type = "LRS"
enable_https_traffic_only = true
network_rules {
ip_rules = ["masked/24"]
virtual_network_subnet_ids = ["${azurerm_subnet.mccp_vnet_subnet.id}"]
}
tags = {
environment = "${var.azure_tenant_name} terraform state account"
}
}
data "terraform_remote_state" "mccp_core_infra" {
backend = "azurerm"
config = {
storage_account_name = "${lower(replace(var.azure_tenant_name, "/\\W|_/", ""))}tfstate"
container_name = "mccp-core-infra-tf-state"
key = "terraform.tfstate"
access_key = "${var.azure_mccp_storage_account_key}"
}
}
I am using Terraform 0.11.11 with azurerm provider 1.33.0. This works just fine without the enable_https_traffic_only flag. What am I missing here?
The enable_https_traffic_only setting would not cause that error. It works fine with the enable_https_traffic_only flag on Terraform v0.12.9 + provider.azurerm v1.35.0 on my side.
It looks like a credential issue. I can reproduce your issue when the access_key in the data source is invalid. Verify that you can access that storage account blob with that access key, and that you are referencing the correct storage account name that hosts the .tfstate.
You could also try deleting the local .terraform folder and trying again, as mentioned in this post.
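One way to rule out a stale or mistyped key, assuming the state account and the remote state data source live in the same configuration as in the example above, is to reference the account's key attribute directly rather than passing it in through a variable (a sketch in the 0.11 interpolation style used above):

data "terraform_remote_state" "mccp_core_infra" {
  backend = "azurerm"
  config = {
    storage_account_name = "${azurerm_storage_account.terraform_state_account.name}"
    container_name       = "mccp-core-infra-tf-state"
    key                  = "terraform.tfstate"
    access_key           = "${azurerm_storage_account.terraform_state_account.primary_access_key}"
  }
}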
