Error: Error creating DB Instance: InvalidParameterValue: The input isn't valid. Input can't contain control characters.

terraform apply fails (on the RDS creation .tf):
resource "aws_db_instance" "default" {
  allocated_storage    = 5
  storage_type         = "gp2"
  engine               = "mysql"
  engine_version       = "5.7"
  instance_class       = "db.t2.micro"
  name                 = "mydb"
  username             = "foo"
  password             = file("../rds_pass.txt")
  parameter_group_name = "default.mysql5.7"
  skip_final_snapshot  = "true"
}
Error: Error creating DB Instance: InvalidParameterValue: The input isn't valid. Input can't contain control characters. status code: 400, request id: e4632bae-72fc-4912-9514-d7a8c37550e5,

I have come across the same problem.
I just removed parameter_group_name = "default.mysql5.7" from aws_db_instance.
Hope it will be helpful for others too.

Most probably you are following Zeal Vora's lectures; here is what I did to fix this issue.
resource "aws_db_instance" "default" {
  allocated_storage    = 10
  db_name              = "mydb"
  engine               = "mysql"
  engine_version       = "5.7"
  instance_class       = "db.t3.micro"
  username             = "foo"
  password             = file("../../../rds_pass.txt")
  parameter_group_name = "default.mysql5.7"
  skip_final_snapshot  = true
}
Furthermore, make sure the RDS master password contains only characters RDS accepts: printable ASCII characters except /, ", and @, and no control characters such as a trailing newline.
Wrong example: my@super/password
Correct example: mysuperpassword
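In both snippets above, the offending control character is most often just the trailing newline that file() preserves from rds_pass.txt. A minimal sketch of stripping it with trimspace(), assuming the same file layout as the question:

```hcl
resource "aws_db_instance" "default" {
  # ... other arguments as above ...

  # file() returns the file contents verbatim, including the final
  # newline most editors add; trimspace() removes leading/trailing
  # whitespace, so no control character reaches the RDS API.
  password = trimspace(file("../rds_pass.txt"))
}
```

Alternatively, write the file without a final newline in the first place, e.g. printf '%s' 'mysuperpassword' > rds_pass.txt.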

I have a problem like this too:
aws_db_instance.default: Creating...
╷
│ Error: creating RDS DB Instance (terraform-20221124001123411100000001): InvalidParameterValue: The input isn't valid. Input can't contain control characters.
│ status code: 400, request id: 437bc1c1-92bc-43d8-8bc6-721c64ae513a
│
│ with aws_db_instance.default,
│ on ksm_rds_password.tf line 8, in resource "aws_db_instance" "default":
│ 8: resource "aws_db_instance" "default"{
Here is my Terraform code (note the resource block's closing brace):
resource "aws_db_instance" "default" {
  allocated_storage           = 20
  max_allocated_storage       = 50
  storage_type                = var.storagetype["gp2"]
  engine                      = var.engine["mysql"]
  engine_version              = var.engineversion["mysql8031"]
  instance_class              = var.dbinstancetype["dbmicro"]
  db_name                     = var.custom["dbnm"]
  username                    = data.aws_kms_secrets.rds-secret.plaintext["username"]
  password                    = data.aws_kms_secrets.rds-secret.plaintext["master_password"]
  #parameter_group_name       = var.parameternoption["stagdevparameter"]
  db_subnet_group_name        = var.subnet["privdba"]
  vpc_security_group_ids      = aws_security_group.sg-rds-stag-ups[*].id
  allow_major_version_upgrade = true
  auto_minor_version_upgrade  = true
  backup_retention_period     = 35
  backup_window               = "22:00-23:00"
  maintenance_window          = "Sat:00:00-Sat:03:00"
  multi_az                    = true
  skip_final_snapshot         = true

  tags = {
    usage    = "staging"
    customer = "dev"
  }
}
Here is my Terraform version:
terraform version
Terraform v1.3.5
on darwin_amd64
+ provider registry.terraform.io/hashicorp/aws v4.40.0
terraform validate
Success! The configuration is valid.
# aws_db_instance.default will be created
+ resource "aws_db_instance" "default" {
+ address = (known after apply)
+ allocated_storage = 20
+ allow_major_version_upgrade = true
+ apply_immediately = (known after apply)
+ arn = (known after apply)
+ auto_minor_version_upgrade = true
+ availability_zone = (known after apply)
+ backup_retention_period = 35
+ backup_window = "22:00-23:00"
+ ca_cert_identifier = (known after apply)
+ character_set_name = (known after apply)
+ copy_tags_to_snapshot = false
+ db_name = "stag-dev"
+ db_subnet_group_name = "subnet-04257ecd16c4acbf2"
+ delete_automated_backups = true
+ endpoint = (known after apply)
+ engine = "mysql"
+ engine_version = "8.0.31"
+ engine_version_actual = (known after apply)
+ hosted_zone_id = (known after apply)
+ id = (known after apply)
+ identifier = (known after apply)
+ identifier_prefix = (known after apply)
+ instance_class = "db.t3.micro"
+ kms_key_id = (known after apply)
+ latest_restorable_time = (known after apply)
+ license_model = (known after apply)
+ maintenance_window = "sat:00:00-sat:03:00"
+ max_allocated_storage = 50
+ monitoring_interval = 0
+ monitoring_role_arn = (known after apply)
+ multi_az = false
+ name = (known after apply)
+ nchar_character_set_name = (known after apply)
+ network_type = (known after apply)
+ option_group_name = (known after apply)
+ password = (sensitive value)
+ performance_insights_enabled = false
+ performance_insights_kms_key_id = (known after apply)
+ performance_insights_retention_period = (known after apply)
+ port = (known after apply)
+ publicly_accessible = false
+ replica_mode = (known after apply)
+ replicas = (known after apply)
+ resource_id = (known after apply)
+ skip_final_snapshot = true
+ snapshot_identifier = (known after apply)
+ status = (known after apply)
+ storage_type = "gp2"
+ tags = {
+ "user" = "dev"
+ "usage" = "staging"
}
+ tags_all = {
+ "user" = "dev"
+ "usage" = "staging"
}
+ timezone = (known after apply)
+ username = ""
+ vpc_security_group_ids = [
+ "sg-xxx",
]
}
Plan: 1 to add, 0 to change, 0 to destroy.
I need advice here... many thanks in advance.

I found the root problem:
The configuration couldn't load values containing the - character, so I removed - from all the variables that contained it.
It also didn't work with the payload data from aws_kms_secrets, so I couldn't use the encrypted username and password from the aws_kms_secrets payload.
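For reference, here is a minimal sketch of how the aws_kms_secrets data source would feed the instance above; the file names and paths are hypothetical, and trimspace() guards against a stray trailing newline (the "control character" from the error) surviving decryption:

```hcl
data "aws_kms_secrets" "rds-secret" {
  secret {
    name = "username"
    # Base64-encoded KMS ciphertext (hypothetical file)
    payload = file("${path.module}/username.encrypted.b64")
  }

  secret {
    name    = "master_password"
    payload = file("${path.module}/master_password.encrypted.b64")
  }
}

resource "aws_db_instance" "default" {
  # ...
  username = trimspace(data.aws_kms_secrets.rds-secret.plaintext["username"])
  password = trimspace(data.aws_kms_secrets.rds-secret.plaintext["master_password"])
}
```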

Importing existing AWS Resources using Terraform Module

I am trying to import an existing S3 bucket using a Terraform module. I am able to import it successfully, but the issue I am facing now is that after the successful import, when I ran the terraform plan command it still showed that it is going to create the resources again. It would be great if someone could help me with what I am doing wrong here.
My Module:
module "log_s3" {
  source               = "../modules/s3/"
  env_name             = var.env_name
  bucket_name          = "${var.product_name}-logs-${var.env_name}"
  enable_versioning    = false
  enable_cors          = false
  logging_bucket       = module.log_s3.log_bucket_id
  enable_bucket_policy = true
  enable_static_site   = false
}
My resource:
resource "aws_s3_bucket" "my_protected_bucket" {
  bucket = var.bucket_name

  tags = {
    environment = var.env_name
  }
}

resource "aws_s3_bucket_acl" "my_protected_bucket_acl" {
  bucket = aws_s3_bucket.my_protected_bucket.id
  acl    = var.enable_static_site == true ? "public-read" : "private"
}

resource "aws_s3_bucket_public_access_block" "my_protected_bucket_access" {
  bucket = aws_s3_bucket.my_protected_bucket.id

  # Block public access
  block_public_acls       = var.enable_static_site == true ? false : true
  block_public_policy     = var.enable_static_site == true ? false : true
  ignore_public_acls      = var.enable_static_site == true ? false : true
  restrict_public_buckets = var.enable_static_site == true ? false : true
}

resource "aws_s3_bucket_versioning" "my_protected_bucket_versioning" {
  count  = var.enable_versioning ? 1 : 0
  bucket = aws_s3_bucket.my_protected_bucket.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_cors_configuration" "my_protected_bucket_cors" {
  count  = var.enable_cors ? 1 : 0
  bucket = aws_s3_bucket.my_protected_bucket.id

  cors_rule {
    allowed_headers = ["*"]
    allowed_methods = ["PUT", "POST", "DELETE", "GET", "HEAD"]
    allowed_origins = ["*"]
    expose_headers  = [""]
  }

  lifecycle {
    ignore_changes = [
      cors_rule
    ]
  }
}

resource "aws_s3_bucket_ownership_controls" "my_protected_bucket_ownership" {
  bucket = aws_s3_bucket.my_protected_bucket.id

  rule {
    object_ownership = "ObjectWriter"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "my_protected_bucket_sse_config" {
  bucket = aws_s3_bucket.my_protected_bucket.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

resource "aws_s3_bucket_policy" "my_protected_bucket_policy" {
  count  = var.enable_bucket_policy ? 1 : 0
  bucket = aws_s3_bucket.my_protected_bucket.id

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Id": "S3-Console-Auto-Gen-Policy-1659086042176",
  "Statement": [
    {
      "Sid": "S3PolicyStmt-DO-NOT-MODIFY-1659086041783",
      "Effect": "Allow",
      "Principal": {
        "Service": "logging.s3.amazonaws.com"
      },
      "Action": "s3:PutObject",
      "Resource": "${aws_s3_bucket.my_protected_bucket.arn}/*"
    }
  ]
}
EOF
}

resource "aws_s3_object" "my_protected_bucket_object" {
  bucket = var.logging_bucket
  key    = "s3_log/${aws_s3_bucket.my_protected_bucket.id}/"
}

resource "aws_s3_bucket_logging" "my_protected_bucket_logging" {
  bucket        = aws_s3_bucket.my_protected_bucket.id
  target_bucket = var.logging_bucket
  target_prefix = "s3_log/${aws_s3_bucket.my_protected_bucket.id}/"

  depends_on = [aws_s3_bucket.my_protected_bucket, aws_s3_object.my_protected_bucket_object]
}

resource "aws_s3_bucket_website_configuration" "my_protected_bucket_static" {
  count  = var.enable_static_site ? 1 : 0
  bucket = aws_s3_bucket.my_protected_bucket.id

  index_document {
    suffix = "index.html"
  }

  error_document {
    key = "error.html"
  }
}
output.tf
output "log_bucket_id" {
  value = aws_s3_bucket.my_protected_bucket.id
}
Terraform import command:
I ran the below command to import the bucket:
terraform import module.log_s3.aws_s3_bucket.my_protected_bucket abcd-logs-dev
Output:
module.log_s3.aws_s3_bucket.my_protected_bucket: Import prepared!
Prepared aws_s3_bucket for import
module.log_s3.aws_s3_bucket.my_protected_bucket: Refreshing state... [id=abcd-logs-deveu]
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
Terraform plan:
After the successful import, when I ran the terraform plan command, it showed that Terraform is going to create new resources:
module.log_s3.aws_s3_bucket.my_protected_bucket: Refreshing state... [id=abcd-logs-dev]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# module.log_s3.aws_s3_bucket_acl.my_protected_bucket_acl will be created
+ resource "aws_s3_bucket_acl" "my_protected_bucket_acl" {
+ acl = "private"
+ bucket = "abcd-logs-dev"
+ id = (known after apply)
+ access_control_policy {
+ grant {
+ permission = (known after apply)
+ grantee {
+ display_name = (known after apply)
+ email_address = (known after apply)
+ id = (known after apply)
+ type = (known after apply)
+ uri = (known after apply)
}
}
+ owner {
+ display_name = (known after apply)
+ id = (known after apply)
}
}
}
# module.log_s3.aws_s3_bucket_logging.my_protected_bucket_logging will be created
+ resource "aws_s3_bucket_logging" "my_protected_bucket_logging" {
+ bucket = "abcd-logs-dev"
+ id = (known after apply)
+ target_bucket = "abcd-logs-dev"
+ target_prefix = "s3_log/abcd-logs-dev/"
}
# module.log_s3.aws_s3_bucket_ownership_controls.my_protected_bucket_ownership will be created
+ resource "aws_s3_bucket_ownership_controls" "my_protected_bucket_ownership" {
+ bucket = "abcd-logs-dev"
+ id = (known after apply)
+ rule {
+ object_ownership = "ObjectWriter"
}
}
# module.log_s3.aws_s3_bucket_policy.my_protected_bucket_policy[0] will be created
+ resource "aws_s3_bucket_policy" "my_protected_bucket_policy" {
+ bucket = "abcd-logs-dev"
+ id = (known after apply)
+ policy = jsonencode(
{
+ Id = "S3-Console-Auto-Gen-Policy-145342356879"
+ Statement = [
+ {
+ Action = "s3:PutObject"
+ Effect = "Allow"
+ Principal = {
+ Service = "logging.s3.amazonaws.com"
}
+ Resource = "arn:aws:s3:::abcd-logs-dev/*"
+ Sid = "S3PolicyStmt-DO-NOT-MODIFY-145342356879"
},
]
+ Version = "2012-10-17"
}
)
}
# module.log_s3.aws_s3_bucket_public_access_block.my_protected_bucket_access will be created
+ resource "aws_s3_bucket_public_access_block" "my_protected_bucket_access" {
+ block_public_acls = true
+ block_public_policy = true
+ bucket = "abcd-logs-dev"
+ id = (known after apply)
+ ignore_public_acls = true
+ restrict_public_buckets = true
}
# module.log_s3.aws_s3_bucket_server_side_encryption_configuration.my_protected_bucket_sse_config will be created
+ resource "aws_s3_bucket_server_side_encryption_configuration" "my_protected_bucket_sse_config" {
+ bucket = "abcd-logs-dev"
+ id = (known after apply)
+ rule {
+ apply_server_side_encryption_by_default {
+ sse_algorithm = "AES256"
}
}
}
# module.log_s3.aws_s3_object.my_protected_bucket_object will be created
+ resource "aws_s3_object" "my_protected_bucket_object" {
+ acl = "private"
+ bucket = "abcd-logs-dev"
+ bucket_key_enabled = (known after apply)
+ content_type = (known after apply)
+ etag = (known after apply)
+ force_destroy = false
+ id = (known after apply)
+ key = "s3_log/abcd-logs-dev/"
+ kms_key_id = (known after apply)
+ server_side_encryption = (known after apply)
+ storage_class = (known after apply)
+ tags_all = (known after apply)
+ version_id = (known after apply)
}
Plan: 7 to add, 0 to change, 0 to destroy.
It would be great if someone could help me figure out what I am doing wrong. Help is much appreciated.
Thanks
The resource you imported is of type aws_s3_bucket and named my_protected_bucket. There is no aws_s3_bucket listed in the Terraform plan output: the bucket was imported correctly, and Terraform is not trying to create a new bucket.
The resource types the Terraform plan says it is going to create are:
aws_s3_bucket_acl
aws_s3_bucket_logging
aws_s3_bucket_ownership_controls
aws_s3_bucket_policy
aws_s3_bucket_public_access_block
aws_s3_object
You haven't imported any of those resources yet. You still need to import each of them.
Yes, the simple problem here is that you are importing only the S3 bucket resource into your state.
When you use a module, it's not enough to import a single resource within that module; you have to run an import command for every resource in the module.
You are currently running the import command below:
terraform import module.log_s3.aws_s3_bucket.my_protected_bucket abcd-logs-dev
This imports only the S3 bucket into your state. But if you look at your module, you have other resources as well, so you have to run similar import commands for them too, for example:
terraform import module.log_s3.aws_s3_bucket_acl.my_protected_bucket_acl abcd-logs-dev
See the S3 bucket ACL import documentation: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/s3_bucket_acl#import
Similarly, run the import command for all the resources in your module and then run terraform plan. It will work.
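For completeness, the remaining imports for the plan above would look roughly like this. The exact ID format for each type is in its provider documentation (the ACL, for example, uses a bucket-name,acl pair when a canned ACL is set), so treat these as a hedged sketch rather than copy-paste commands:

```shell
terraform import 'module.log_s3.aws_s3_bucket_acl.my_protected_bucket_acl' 'abcd-logs-dev,private'
terraform import 'module.log_s3.aws_s3_bucket_logging.my_protected_bucket_logging' abcd-logs-dev
terraform import 'module.log_s3.aws_s3_bucket_ownership_controls.my_protected_bucket_ownership' abcd-logs-dev
terraform import 'module.log_s3.aws_s3_bucket_policy.my_protected_bucket_policy[0]' abcd-logs-dev
terraform import 'module.log_s3.aws_s3_bucket_public_access_block.my_protected_bucket_access' abcd-logs-dev
# aws_s3_object import support varies by provider version; letting
# Terraform (re)create the empty log-prefix object is harmless.
```

The indexed address (policy[0]) is quoted so the shell does not interpret the brackets.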

AzureAd application and password trying to add and destroy instead of change

I am having an issue with terraform plan for azuread_application, azuread_application_password, and azuread_service_principal: it always wants to destroy and create a new one, and lots of values are shown as (known after apply). Surely, since the resources already exist, it should just show a change when there are differences. Here is an example for the password:
+ resource "azuread_application_password" "local_app_password" {
+ application_object_id = (known after apply)
+ description = (known after apply)
+ end_date = "2030-01-01T00:00:00Z"
+ id = (known after apply)
+ key_id = (known after apply)
+ start_date = (known after apply)
+ value = (sensitive value)
}
Can anyone advise what causes this issue with the Terraform Azure provider?
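One detail worth checking first: a plan line that starts with a bare + means Terraform thinks the resource does not exist in state at all (a replacement is rendered as -/+ or +/-). So a plan like the one above may indicate the resources were never imported into state, rather than a forced replacement. A quick way to verify, assuming the resource address shown:

```shell
# List what Terraform is actually tracking in state
terraform state list | grep azuread

# Inspect the password resource if it is present
terraform state show azuread_application_password.local_app_password
```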

Why do I got NoSuchBucket: The specified bucket does not exist error?

I am a newbie to the Terraform world. I am following a tutorial, but I tried to implement the AWS Provider Upgrade Guide (version 4 upgrade).
terraform apply gives me:
│ Error: error creating S3 bucket ACL for kevindenotariis-simple-web-app-logs: NoSuchBucket: The specified bucket does not exist
│ status code: 404, request id: W5K3YPKHMN8YA458, host id: fH5xGgvTn8JfprqbaCsVCS/ICirJdVcDS9GOo8R7TFshS+UquH/Xy1n0ZcSdLgrdbRqFp4wFKzQ=
│
│ with aws_s3_bucket_acl.simple-web-app-logs,
│ on s3.tf line 3, in resource "aws_s3_bucket_acl" "simple-web-app-logs":
│ 3: resource "aws_s3_bucket_acl" "simple-web-app-logs" {
My s3.tf
resource "aws_s3_bucket_acl" "simple-web-app-logs" {
  bucket = "kevindenotariis-simple-web-app-logs"
  acl    = "private"
}

# S3 Bucket storing jenkins user data
resource "aws_s3_bucket_acl" "jenkins-config" {
  bucket = "kevindenotariis-jenkins-config"
  acl    = "private"
}
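Note that aws_s3_bucket_acl only attaches an ACL to a bucket that already exists; it does not create the bucket, which is what the NoSuchBucket error is complaining about. A sketch, assuming the buckets are meant to be managed in this same configuration:

```hcl
# The bucket itself must exist (or be managed here) before an ACL can be attached.
resource "aws_s3_bucket" "simple-web-app-logs" {
  bucket = "kevindenotariis-simple-web-app-logs"
}

resource "aws_s3_bucket_acl" "simple-web-app-logs" {
  # Referencing the bucket resource also gives Terraform the
  # correct create-bucket-then-attach-ACL ordering.
  bucket = aws_s3_bucket.simple-web-app-logs.id
  acl    = "private"
}
```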
From jenkins.tf, the two relevant lines:
bucket-logs-name = aws_s3_bucket_acl.simple-web-app-logs.id
bucket-config-name = aws_s3_bucket_acl.jenkins-config.id
I tried terraform plan:
Terraform will perform the following actions:
# aws_s3_bucket_acl.jenkins-config will be created
+ resource "aws_s3_bucket_acl" "jenkins-config" {
+ acl = "private"
+ bucket = "kevindenotariis-jenkins-config"
+ id = (known after apply)
+ access_control_policy {
+ grant {
+ permission = (known after apply)
+ grantee {
+ display_name = (known after apply)
+ email_address = (known after apply)
+ id = (known after apply)
+ type = (known after apply)
+ uri = (known after apply)
}
}
+ owner {
+ display_name = (known after apply)
+ id = (known after apply)
}
}
}
# aws_s3_bucket_acl.simple-web-app-logs will be created
+ resource "aws_s3_bucket_acl" "simple-web-app-logs" {
+ acl = "private"
+ bucket = "kevindenotariis-simple-web-app-logs"
+ id = (known after apply)
+ access_control_policy {
+ grant {
+ permission = (known after apply)
+ grantee {
+ display_name = (known after apply)
+ email_address = (known after apply)
+ id = (known after apply)
+ type = (known after apply)
+ uri = (known after apply)
}
}
+ owner {
+ display_name = (known after apply)
+ id = (known after apply)
}
}
}
Plan: 2 to add, 0 to change, 0 to destroy.
How do I fix this issue?
SOLVED
I fixed it by removing the
acl = "private"
line after rereading the link above, the Terraform AWS Provider Version 4 Upgrade Guide.

Terraform destroys the instance inside RDS cluster when upgrading

I have created an RDS cluster with 2 instances using Terraform. When I upgrade the cluster from the front end (console), it modifies the cluster in place. But when I do the same using Terraform, it destroys the instance.
We tried create_before_destroy, and it gives an error.
We tried ignore_changes = [engine], but that didn't make any difference.
Is there any way to prevent it?
resource "aws_rds_cluster" "rds_mysql" {
  cluster_identifier              = var.cluster_identifier
  engine                          = var.engine
  engine_version                  = var.engine_version
  engine_mode                     = var.engine_mode
  availability_zones              = var.availability_zones
  database_name                   = var.database_name
  port                            = var.db_port
  master_username                 = var.master_username
  master_password                 = var.master_password
  backup_retention_period         = var.backup_retention_period
  preferred_backup_window         = var.engine_mode == "serverless" ? null : var.preferred_backup_window
  db_subnet_group_name            = var.create_db_subnet_group == "true" ? aws_db_subnet_group.rds_subnet_group[0].id : var.db_subnet_group_name
  vpc_security_group_ids          = var.vpc_security_group_ids
  db_cluster_parameter_group_name = var.create_cluster_parameter_group == "true" ? aws_rds_cluster_parameter_group.rds_cluster_parameter_group[0].id : var.cluster_parameter_group
  skip_final_snapshot             = var.skip_final_snapshot
  deletion_protection             = var.deletion_protection
  allow_major_version_upgrade     = var.allow_major_version_upgrade

  lifecycle {
    create_before_destroy = false
    ignore_changes        = [availability_zones]
  }
}

resource "aws_rds_cluster_instance" "cluster_instances" {
  count                      = var.engine_mode == "serverless" ? 0 : var.cluster_instance_count
  identifier                 = "${var.cluster_identifier}-${count.index}"
  cluster_identifier         = aws_rds_cluster.rds_mysql.id
  instance_class             = var.instance_class
  engine                     = var.engine
  engine_version             = aws_rds_cluster.rds_mysql.engine_version
  db_subnet_group_name       = var.create_db_subnet_group == "true" ? aws_db_subnet_group.rds_subnet_group[0].id : var.db_subnet_group_name
  db_parameter_group_name    = var.create_db_parameter_group == "true" ? aws_db_parameter_group.rds_instance_parameter_group[0].id : var.db_parameter_group
  apply_immediately          = var.apply_immediately
  auto_minor_version_upgrade = var.auto_minor_version_upgrade

  lifecycle {
    create_before_destroy = false
    ignore_changes        = [engine_version]
  }
}
Error:
Error: error creating RDS Cluster (aurora-cluster-mysql) Instance: DBInstanceAlreadyExists: DB instance already exists
    status code: 400, request id: c6a063cc-4ffd-4710-aff2-eb0667b0774f
  on resource "aws_rds_cluster_instance" "cluster_instances"
Plan output:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
~ update in-place
+/- create replacement and then destroy
Terraform will perform the following actions:
# module.rds_aurora_create[0].aws_rds_cluster.rds_mysql will be updated in-place
~ resource "aws_rds_cluster" "rds_mysql" {
~ allow_major_version_upgrade = false -> true
~ engine_version = "5.7.mysql_aurora.2.07.1" -> "5.7.mysql_aurora.2.08.1"
id = "aurora-cluster-mysql"
tags = {}
# (33 unchanged attributes hidden)
}
# module.rds_aurora_create[0].aws_rds_cluster_instance.cluster_instances[0] must be replaced
+/- resource "aws_rds_cluster_instance" "cluster_instances" {
~ arn = "arn:aws:rds:us-east-1:account:db:aurora-cluster-mysql-0" -> (known after apply)
~ availability_zone = "us-east-1a" -> (known after apply)
~ ca_cert_identifier = "rds-ca-" -> (known after apply)
~ dbi_resource_id = "db-32432432SDF" -> (known after apply)
~ endpoint = "aurora-cluster-mysql-0.jkjk.us-east-1.rds.amazonaws.com" -> (known after apply)
~ engine_version = "5.7.mysql_aurora.2.07.1" -> "5.7.mysql_aurora.2.08.1" # forces replacement
~ id = "aurora-cluster-mysql-0" -> (known after apply)
+ identifier_prefix = (known after apply)
+ kms_key_id = (known after apply)
+ monitoring_role_arn = (known after apply)
~ performance_insights_enabled = false -> (known after apply)
+ performance_insights_kms_key_id = (known after apply)
~ port = 3306 -> (known after apply)
~ preferred_backup_window = "07:00-09:00" -> (known after apply)
~ preferred_maintenance_window = "thu:06:12-thu:06:42" -> (known after apply)
~ storage_encrypted = false -> (known after apply)
- tags = {} -> null
~ tags_all = {} -> (known after apply)
~ writer = true -> (known after apply)
# (12 unchanged attributes hidden)
}
Plan: 1 to add, 1 to change, 1 to destroy.
I see the apply_immediately argument is not there in the aws_rds_cluster resource; can you add that and try?
Terraform is seeing the engine version change on the instances and is detecting this as an action that forces replacement.
Remove (or ignore changes to) the engine_version input for the aws_rds_cluster_instance resources.
AWS RDS upgrades the engine version for cluster instances itself when you upgrade the engine version of the cluster (this is why you can do an in-place upgrade via the AWS console).
By excluding the engine_version input, Terraform will see no changes made to the aws_rds_cluster_instances and will do nothing.
AWS will handle the engine upgrades for the instances internally.
If you decide to ignore changes, use the ignore_changes argument within a lifecycle block:
resource "aws_rds_cluster_instance" "cluster_instance" {
  engine_version = aws_rds_cluster.main.engine_version
  # ...

  lifecycle {
    ignore_changes = [engine_version]
  }
}
I didn't know that, but after some Googling I found this:
https://github.com/hashicorp/terraform-provider-aws/issues/10714
i.e. a bug report to AWS Terraform provider:
resource/aws_rds_cluster_instance is being destroyed and re-created when updating engine_version while apply_immediately is set to false
which seems to be the very same issue you are facing.
One comment there seems to point to a solution:
As of v3.63.0 (EDITED) of the provider, updates to the engine_version parameter of aws_rds_cluster_instance resources no longer forces replacement of the resource.
The original comment seems to have a typo - 3.36 vs. 3.63.
Can you try upgrading your AWS Terraform provider?
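If you do upgrade, a provider version constraint at or above that release can be pinned in a required_providers block (a sketch; adjust the bound to your needs):

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # Per the linked issue, engine_version changes on
      # aws_rds_cluster_instance stopped forcing replacement
      # as of provider v3.63.0.
      version = ">= 3.63.0"
    }
  }
}
```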

Attempting to create a storage container for azurerm backend fails with 404 - The specified resource does not exist

I am trying to set up an azurerm backend using the following Terraform code:
modules\remote-state\main.tf
provider "azurerm" {
}

variable "env" {
  type        = string
  description = "The SDLC environment (qa, dev, prod, etc...)"
}

locals {
  extended_name = "dfpg-${lower(var.env)}-tfstate"
}

##################################################################################
# RESOURCES
##################################################################################

resource "azurerm_resource_group" "setup" {
  name     = "app505-${local.extended_name}-eastus2"
  location = "eastus2"
}

resource "azurerm_storage_account" "sa" {
  name                     = replace(local.extended_name, "-", "")
  resource_group_name      = azurerm_resource_group.setup.name
  location                 = azurerm_resource_group.setup.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_storage_container" "ct" {
  name                 = "terraform-state"
  storage_account_name = azurerm_storage_account.sa.name
}

data "azurerm_storage_account_sas" "state" {
  connection_string = azurerm_storage_account.sa.primary_connection_string
  https_only        = true

  resource_types {
    service   = true
    container = true
    object    = true
  }

  services {
    blob  = true
    queue = false
    table = false
    file  = false
  }

  start  = timestamp()
  expiry = timeadd(timestamp(), "17520h")

  permissions {
    read    = true
    write   = true
    delete  = true
    list    = true
    add     = true
    create  = true
    update  = false
    process = false
  }
}

##################################################################################
# OUTPUT
##################################################################################

resource "null_resource" "post-config" {
  depends_on = [azurerm_storage_container.ct]

  provisioner "local-exec" {
    command     = <<EOT
Set-Content -Value 'storage_account_name = "${azurerm_storage_account.sa.name}"' -Path "backend-config.txt"
Add-Content -Value 'container_name = "terraform-state"' -Path "backend-config.txt"
Add-Content -Value 'key = "terraform.tfstate"' -Path "backend-config.txt"
Add-Content -Value 'sas_token = "${data.azurerm_storage_account_sas.state.sas}"' -Path "backend-config.txt"
EOT
    interpreter = ["PowerShell", "-NoProfile", "-Command"]
  }
}
qa\bootstrap\rs\main.tf
module "bootstrap" {
  source = "../../../modules/remote-state"
  env    = "qa"
}
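Once backend-config.txt has been generated by the null_resource above, the consuming configurations would presumably declare an empty azurerm backend and pass the file at init time; a sketch of that partial-configuration pattern (not taken from the question itself):

```hcl
terraform {
  backend "azurerm" {
    # storage_account_name, container_name, key, and sas_token
    # are supplied via -backend-config at init time.
  }
}
```

initialized with terraform init -backend-config=backend-config.txt.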
terraform init
C:\xyz\terraform\qa\bootstrap\rs [shelve/terraform ≡]> terraform init
Initializing modules...
- bootstrap in ..\..\..\modules\remote-state
Initializing the backend...
Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "azurerm" (hashicorp/azurerm) 1.41.0...
- Downloading plugin for provider "null" (hashicorp/null) 2.1.2...
The following providers do not have any version constraints in configuration,
so the latest version was installed.
To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.
* provider.azurerm: version = "~> 1.41"
* provider.null: version = "~> 2.1"
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
C:\xyz\terraform\qa\bootstrap\rs [shelve/terraform ≡]>
terraform plan -out main.tfplan
C:\xyz\terraform\qa\bootstrap\rs [shelve/terraform ≡]> terraform plan -out main.tfplan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
<= read (data resources)
Terraform will perform the following actions:
# module.bootstrap.data.azurerm_storage_account_sas.state will be read during apply
# (config refers to values not yet known)
<= data "azurerm_storage_account_sas" "state" {
+ connection_string = (sensitive value)
+ expiry = (known after apply)
+ https_only = true
+ id = (known after apply)
+ sas = (sensitive value)
+ start = (known after apply)
+ permissions {
+ add = true
+ create = true
+ delete = true
+ list = true
+ process = false
+ read = true
+ update = false
+ write = true
}
+ resource_types {
+ container = true
+ object = true
+ service = true
}
+ services {
+ blob = true
+ file = false
+ queue = false
+ table = false
}
+ timeouts {
+ read = (known after apply)
}
}
# module.bootstrap.azurerm_resource_group.setup will be created
+ resource "azurerm_resource_group" "setup" {
+ id = (known after apply)
+ location = "eastus2"
+ name = "app505-dfpg-qa-tfstate-eastus2"
+ tags = (known after apply)
}
# module.bootstrap.azurerm_storage_account.sa will be created
+ resource "azurerm_storage_account" "sa" {
+ access_tier = (known after apply)
+ account_encryption_source = "Microsoft.Storage"
+ account_kind = "Storage"
+ account_replication_type = "LRS"
+ account_tier = "Standard"
+ account_type = (known after apply)
+ enable_advanced_threat_protection = (known after apply)
+ enable_blob_encryption = true
+ enable_file_encryption = true
+ id = (known after apply)
+ is_hns_enabled = false
+ location = "eastus2"
+ name = "dfpgqatfstate"
+ primary_access_key = (sensitive value)
+ primary_blob_connection_string = (sensitive value)
+ primary_blob_endpoint = (known after apply)
+ primary_blob_host = (known after apply)
+ primary_connection_string = (sensitive value)
+ primary_dfs_endpoint = (known after apply)
+ primary_dfs_host = (known after apply)
+ primary_file_endpoint = (known after apply)
+ primary_file_host = (known after apply)
+ primary_location = (known after apply)
+ primary_queue_endpoint = (known after apply)
+ primary_queue_host = (known after apply)
+ primary_table_endpoint = (known after apply)
+ primary_table_host = (known after apply)
      + primary_web_endpoint             = (known after apply)
      + primary_web_host                 = (known after apply)
      + resource_group_name              = "app505-dfpg-qa-tfstate-eastus2"
      + secondary_access_key             = (sensitive value)
      + secondary_blob_connection_string = (sensitive value)
      + secondary_blob_endpoint          = (known after apply)
      + secondary_blob_host              = (known after apply)
      + secondary_connection_string      = (sensitive value)
      + secondary_dfs_endpoint           = (known after apply)
      + secondary_dfs_host               = (known after apply)
      + secondary_file_endpoint          = (known after apply)
      + secondary_file_host              = (known after apply)
      + secondary_location               = (known after apply)
      + secondary_queue_endpoint         = (known after apply)
      + secondary_queue_host             = (known after apply)
      + secondary_table_endpoint         = (known after apply)
      + secondary_table_host             = (known after apply)
      + secondary_web_endpoint           = (known after apply)
      + secondary_web_host               = (known after apply)
      + tags                             = (known after apply)

      + blob_properties {
          + delete_retention_policy {
              + days = (known after apply)
            }
        }

      + identity {
          + principal_id = (known after apply)
          + tenant_id    = (known after apply)
          + type         = (known after apply)
        }

      + network_rules {
          + bypass                     = (known after apply)
          + default_action             = (known after apply)
          + ip_rules                   = (known after apply)
          + virtual_network_subnet_ids = (known after apply)
        }

      + queue_properties {
          + cors_rule {
              + allowed_headers    = (known after apply)
              + allowed_methods    = (known after apply)
              + allowed_origins    = (known after apply)
              + exposed_headers    = (known after apply)
              + max_age_in_seconds = (known after apply)
            }

          + hour_metrics {
              + enabled               = (known after apply)
              + include_apis          = (known after apply)
              + retention_policy_days = (known after apply)
              + version               = (known after apply)
            }

          + logging {
              + delete                = (known after apply)
              + read                  = (known after apply)
              + retention_policy_days = (known after apply)
              + version               = (known after apply)
              + write                 = (known after apply)
            }

          + minute_metrics {
              + enabled               = (known after apply)
              + include_apis          = (known after apply)
              + retention_policy_days = (known after apply)
              + version               = (known after apply)
            }
        }
    }
  # module.bootstrap.azurerm_storage_container.ct will be created
  + resource "azurerm_storage_container" "ct" {
      + container_access_type   = "private"
      + has_immutability_policy = (known after apply)
      + has_legal_hold          = (known after apply)
      + id                      = (known after apply)
      + metadata                = (known after apply)
      + name                    = "terraform-state"
      + properties              = (known after apply)
      + resource_group_name     = (known after apply)
      + storage_account_name    = "dfpgqatfstate"
    }

  # module.bootstrap.null_resource.post-config will be created
  + resource "null_resource" "post-config" {
      + id = (known after apply)
    }
Plan: 4 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
This plan was saved to: main.tfplan
To perform exactly these actions, run the following command to apply:
terraform apply "main.tfplan"
C:\xyz\terraform\qa\bootstrap\rs [shelve/terraform ≡]>
So far so good. Now applying:
C:\xyz\terraform\qa\bootstrap\rs [shelve/terraform ≡]> terraform.exe apply .\main.tfplan
module.bootstrap.azurerm_resource_group.setup: Creating...
module.bootstrap.azurerm_resource_group.setup: Creation complete after 3s [id=/subscriptions/*SUB-GUID*/resourceGroups/app505-dfpg-qa-tfstate-eastus2]
module.bootstrap.azurerm_storage_account.sa: Creating...
module.bootstrap.azurerm_storage_account.sa: Still creating... [10s elapsed]
module.bootstrap.azurerm_storage_account.sa: Still creating... [20s elapsed]
module.bootstrap.azurerm_storage_account.sa: Still creating... [30s elapsed]
module.bootstrap.azurerm_storage_account.sa: Creation complete after 32s [id=/subscriptions/*SUB-GUID*/resourceGroups/app505-dfpg-qa-tfstate-eastus2/providers/Microsoft.Storage/storageAccounts/dfpgqatfstate]
module.bootstrap.data.azurerm_storage_account_sas.state: Refreshing state...
module.bootstrap.azurerm_storage_container.ct: Creating...
Error: Error creating Container "terraform-state" (Account "dfpgqatfstate" / Resource Group "app505-dfpg-qa-tfstate-eastus2"): containers.Client#Create: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The specified resource does not exist.\nRequestId:4dcdd560-901e-005e-130d-d3a867000000\nTime:2020-01-24T23:26:53.6811230Z"
on ..\..\..\modules\remote-state\main.tf line 29, in resource "azurerm_storage_container" "ct":
29: resource "azurerm_storage_container" "ct" {
C:\xyz\terraform\qa\bootstrap\rs [shelve/terraform ≡ +1 ~0 -0 !]>
Running it a second time works:
C:\xyz\terraform\qa\bootstrap\rs [shelve/terraform ≡ +1 ~0 -0 !]> terraform.exe apply .\main.tfplan
module.bootstrap.azurerm_resource_group.setup: Creating...
module.bootstrap.azurerm_resource_group.setup: Creation complete after 2s [id=/subscriptions/*SUB-GUID*/resourceGroups/app505-dfpg-qa-tfstate-eastus2]
module.bootstrap.azurerm_storage_account.sa: Creating...
module.bootstrap.azurerm_storage_account.sa: Creation complete after 7s [id=/subscriptions/*SUB-GUID*/resourceGroups/app505-dfpg-qa-tfstate-eastus2/providers/Microsoft.Storage/storageAccounts/dfpgqatfstate]
module.bootstrap.data.azurerm_storage_account_sas.state: Refreshing state...
module.bootstrap.azurerm_storage_container.ct: Creating...
module.bootstrap.azurerm_storage_container.ct: Creation complete after 1s [id=https://dfpgqatfstate.blob.core.windows.net/terraform-state]
module.bootstrap.null_resource.post-config: Creating...
module.bootstrap.null_resource.post-config: Provisioning with 'local-exec'...
module.bootstrap.null_resource.post-config (local-exec): Executing: ["PowerShell" "-NoProfile" "-Command" "Set-Content -Value 'storage_account_name = \"dfpgqatfstate\"' -Path \"backend-config.txt\"\r\nAdd-Content -Value 'container_name = \"terraform-state\"' -Path \"backend-config.txt\"\r\nAdd-Content -Value 'key = \"terraform.tfstate\"' -Path \"backend-config.txt\"\r\nAdd-Content -Value 'sas_token = \"?sv=2017-07-29&ss=b&srt=sco&sp=rwdlac&se=2022-01-23T23:29:47Z&st=2020-01-24T23:29:47Z&spr=https&sig=***\"' -Path \"backend-config.txt\"\r\n"]
module.bootstrap.null_resource.post-config: Creation complete after 1s [id=5713483326668430483]
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.
State path: terraform.tfstate
C:\xyz\terraform\qa\bootstrap\rs [shelve/terraform ≡ +2 ~0 -0 !]>
But why do I have to run it twice? What am I doing wrong?
Edit 1
I tried it several times on my laptop (destroying in between, of course) and it consistently failed. Then I enabled trace logging, and five minutes later it passed for the first time. I have no idea why it took five minutes; the network seems to be just fine on my end.
My laptop is on a VPN to work. I will now try from the workstation at work without VPN.
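For reference, "trace logging" here refers to Terraform's own debug output, which is controlled through environment variables. Shown below in POSIX shell syntax; in the PowerShell prompt used above the equivalent would be `$env:TF_LOG = "TRACE"`:

```shell
# Turn on Terraform's most verbose logging and write it to a file
# instead of cluttering the console.
export TF_LOG=TRACE
export TF_LOG_PATH=terraform-trace.log

echo "TF_LOG=$TF_LOG"
```

With `TF_LOG_PATH` set, the trace output (including every HTTP request the azurerm provider makes) lands in the named file, which is useful for seeing exactly which call returns the 404.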
Edit 2
Tried on the workstation in the office. The first attempt succeeded but, suspicious as I am, I destroyed everything and retried; it failed on the second attempt.
Edit 3
We have Zscaler installed.
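Not an authoritative fix, but given the symptoms (the data-plane call against the brand-new storage account returning 404, succeeding on retry, and a proxy like Zscaler in the path), one commonly suggested workaround is to force a delay between creating the storage account and creating the container, so that DNS for the new account endpoint has time to propagate. A sketch using the hashicorp/time provider's `time_sleep` resource; the `60s` duration and the resource names are illustrative, not taken from the configuration above:

```hcl
resource "time_sleep" "wait_for_sa" {
  # Give the freshly created storage account time to become
  # resolvable before the container is created. The duration
  # is a guess; tune it to your environment.
  depends_on      = [azurerm_storage_account.sa]
  create_duration = "60s"
}

resource "azurerm_storage_container" "ct" {
  name                 = "terraform-state"
  storage_account_name = azurerm_storage_account.sa.name
  depends_on           = [time_sleep.wait_for_sa]
}
```

The implicit dependency on the storage account already orders the two resources; the extra `time_sleep` only papers over the window where the account exists in ARM but its blob endpoint is not yet reachable through the proxy.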