I have created an RDS cluster with 2 instances using Terraform. When I upgrade the engine version from the AWS console, it modifies the cluster in place, but when I do the same through Terraform, it destroys and recreates the instances.
We tried create_before_destroy, and it gives an error.
We tried ignore_changes = engine, but that didn't make any difference.
Is there any way to prevent this?
resource "aws_rds_cluster" "rds_mysql" {
cluster_identifier = var.cluster_identifier
engine = var.engine
engine_version = var.engine_version
engine_mode = var.engine_mode
availability_zones = var.availability_zones
database_name = var.database_name
port = var.db_port
master_username = var.master_username
master_password = var.master_password
backup_retention_period = var.backup_retention_period
preferred_backup_window = var.engine_mode == "serverless" ? null : var.preferred_backup_window
db_subnet_group_name = var.create_db_subnet_group == "true" ? aws_db_subnet_group.rds_subnet_group[0].id : var.db_subnet_group_name
vpc_security_group_ids = var.vpc_security_group_ids
db_cluster_parameter_group_name = var.create_cluster_parameter_group == "true" ? aws_rds_cluster_parameter_group.rds_cluster_parameter_group[0].id : var.cluster_parameter_group
skip_final_snapshot = var.skip_final_snapshot
deletion_protection = var.deletion_protection
allow_major_version_upgrade = var.allow_major_version_upgrade
lifecycle {
create_before_destroy = false
ignore_changes = [availability_zones]
}
}
resource "aws_rds_cluster_instance" "cluster_instances" {
count = var.engine_mode == "serverless" ? 0 : var.cluster_instance_count
identifier = "${var.cluster_identifier}-${count.index}"
cluster_identifier = aws_rds_cluster.rds_mysql.id
instance_class = var.instance_class
engine = var.engine
engine_version = aws_rds_cluster.rds_mysql.engine_version
db_subnet_group_name = var.create_db_subnet_group == "true" ? aws_db_subnet_group.rds_subnet_group[0].id : var.db_subnet_group_name
db_parameter_group_name = var.create_db_parameter_group == "true" ? aws_db_parameter_group.rds_instance_parameter_group[0].id : var.db_parameter_group
apply_immediately = var.apply_immediately
auto_minor_version_upgrade = var.auto_minor_version_upgrade
lifecycle {
create_before_destroy = false
ignore_changes = [engine_version]
}
}
Error:
resource \"aws_rds_cluster_instance\" \"cluster_instances\" {\n\n\n\nError: error creating RDS Cluster (aurora-cluster-mysql) Instance: DBInstanceAlreadyExists: DB instance already exists\n\tstatus code: 400, request id: c6a063cc-4ffd-4710-aff2-eb0667b0774f\n\n on
Plan output:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
~ update in-place
+/- create replacement and then destroy
Terraform will perform the following actions:
# module.rds_aurora_create[0].aws_rds_cluster.rds_mysql will be updated in-place
~ resource "aws_rds_cluster" "rds_mysql" {
~ allow_major_version_upgrade = false -> true
~ engine_version = "5.7.mysql_aurora.2.07.1" -> "5.7.mysql_aurora.2.08.1"
id = "aurora-cluster-mysql"
tags = {}
# (33 unchanged attributes hidden)
}
# module.rds_aurora_create[0].aws_rds_cluster_instance.cluster_instances[0] must be replaced
+/- resource "aws_rds_cluster_instance" "cluster_instances" {
~ arn = "arn:aws:rds:us-east-1:account:db:aurora-cluster-mysql-0" -> (known after apply)
~ availability_zone = "us-east-1a" -> (known after apply)
~ ca_cert_identifier = "rds-ca-" -> (known after apply)
~ dbi_resource_id = "db-32432432SDF" -> (known after apply)
~ endpoint = "aurora-cluster-mysql-0.jkjk.us-east-1.rds.amazonaws.com" -> (known after apply)
~ engine_version = "5.7.mysql_aurora.2.07.1" -> "5.7.mysql_aurora.2.08.1" # forces replacement
~ id = "aurora-cluster-mysql-0" -> (known after apply)
+ identifier_prefix = (known after apply)
+ kms_key_id = (known after apply)
+ monitoring_role_arn = (known after apply)
~ performance_insights_enabled = false -> (known after apply)
+ performance_insights_kms_key_id = (known after apply)
~ port = 3306 -> (known after apply)
~ preferred_backup_window = "07:00-09:00" -> (known after apply)
~ preferred_maintenance_window = "thu:06:12-thu:06:42" -> (known after apply)
~ storage_encrypted = false -> (known after apply)
- tags = {} -> null
~ tags_all = {} -> (known after apply)
~ writer = true -> (known after apply)
# (12 unchanged attributes hidden)
}
Plan: 1 to add, 1 to change, 1 to destroy.
I see the apply_immediately argument is not set on the aws_rds_cluster resource; can you add that and try?
Terraform is seeing the engine version change on the instances and is detecting this as an action that forces replacement.
Remove (or ignore changes to) the engine_version input for the aws_rds_cluster_instance resources.
AWS RDS upgrades the engine version for cluster instances itself when you upgrade the engine version of the cluster (this is why you can do an in-place upgrade via the AWS console).
By excluding the engine_version input, Terraform will see no changes to the aws_rds_cluster_instance resources and will do nothing to them.
AWS will handle the engine upgrades for the instances internally.
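As a rough sketch of that first option, here is the instance resource from your configuration with engine_version simply dropped (variable names are copied from above; trim or extend as needed):
resource "aws_rds_cluster_instance" "cluster_instances" {
  count              = var.engine_mode == "serverless" ? 0 : var.cluster_instance_count
  identifier         = "${var.cluster_identifier}-${count.index}"
  cluster_identifier = aws_rds_cluster.rds_mysql.id
  instance_class     = var.instance_class
  engine             = var.engine
  # engine_version is intentionally omitted: RDS upgrades the instances itself
  # when the cluster's engine_version changes.
  apply_immediately  = var.apply_immediately
}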
If you decide to ignore changes, use the ignore_changes argument within a lifecycle block:
resource "aws_rds_cluster_instance" "cluster_instance" {
engine_version = aws_rds_cluster.main.engine_version
...
lifecycle {
ignore_changes = [engine_version]
}
}
I didn't know that, but after some Googling I found this:
https://github.com/hashicorp/terraform-provider-aws/issues/10714
i.e. a bug report to AWS Terraform provider:
resource/aws_rds_cluster_instance is being destroyed and re-created when updating engine_version while apply_immediately is set to false
which seems to be the very same issue you are facing.
One comment there seems to point to a solution:
As of v3.63.0 (EDITED) of the provider, updates to the engine_version parameter of aws_rds_cluster_instance resources no longer forces replacement of the resource.
The original comment seems to have a typo - 3.36 vs. 3.63.
Can you try upgrading your AWS Terraform provider?
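For reference, a minimal sketch of pinning the provider to a version that includes the fix (this uses the Terraform 0.13+ required_providers syntax; adjust the constraint to whatever your module supports):
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.63.0"
    }
  }
}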
Related
I have the following terraform resources in a file
resource "google_project_service" "cloud_resource_manager" {
project = var.tf_project_id
service = "cloudresourcemanager.googleapis.com"
disable_dependent_services = true
}
resource "google_project_service" "artifact_registry" {
project = var.tf_project_id
service = "artifactregistry.googleapis.com"
disable_dependent_services = true
depends_on = [google_project_service.cloud_resource_manager]
}
resource "google_artifact_registry_repository" "el" {
provider = google-beta
project = var.tf_project_id
location = var.region
repository_id = "el"
description = "Repository for extract/load docker images"
format = "DOCKER"
depends_on = [google_project_service.artifact_registry]
}
However, when I run terraform plan, I get this
Terraform will perform the following actions:
# google_artifact_registry_repository.el will be created
+ resource "google_artifact_registry_repository" "el" {
+ create_time = (known after apply)
+ description = "Repository for extract/load docker images"
+ format = "DOCKER"
+ id = (known after apply)
+ location = "us-central1"
+ name = (known after apply)
+ project = "backbone-third-party-data"
+ repository_id = "el"
+ update_time = (known after apply)
}
# google_project_iam_member.ingest_sa_roles["cloudscheduler.serviceAgent"] will be created
+ resource "google_project_iam_member" "ingest_sa_roles" {
+ etag = (known after apply)
+ id = (known after apply)
+ member = (known after apply)
+ project = "backbone-third-party-data"
+ role = "roles/cloudscheduler.serviceAgent"
}
# google_project_iam_member.ingest_sa_roles["run.invoker"] will be created
+ resource "google_project_iam_member" "ingest_sa_roles" {
+ etag = (known after apply)
+ id = (known after apply)
+ member = (known after apply)
+ project = <my project id>
+ role = "roles/run.invoker"
}
# google_project_service.artifact_registry will be created
+ resource "google_project_service" "artifact_registry" {
+ disable_dependent_services = true
+ disable_on_destroy = true
+ id = (known after apply)
+ project = <my project id>
+ service = "artifactregistry.googleapis.com"
}
See how google_project_service.artifact_registry is created after google_artifact_registry_repository.el. I was hoping that the depends_on in the google_artifact_registry_repository.el resource would ensure the service is created first. Am I misunderstanding how depends_on works? Or does the order of resources listed by terraform plan not actually reflect the order in which they are created?
Edit: when I run terraform apply it errors out with
Error 403: Cloud Resource Manager API has not been used in project 521986354168 before or it is disabled
Even though it is enabled. I think it's doing this because it runs the artifact registry resource creation before enabling the project services?
I don't think it will be possible to enable this particular API this way, as the google_project_service resource itself depends on the Resource Manager API (and maybe also on the Service Usage API?) being enabled. So you could either enable those manually or use a null_resource with a local-exec provisioner to do it automatically:
resource "null_resource" "enable_cloudresourcesmanager_api" {
provisioner "local-exec" {
command = "gcloud services enable cloudresourcesmanager.googleapis.com cloudresourcemanager.googleapis.com --project ${var.project_id}"
}
}
Another issue you might find is that enabling an API takes some time, depending on the service. So sometimes, even though your resources depend on the resource enabling an API, you will still get the same error message. In that case you can just reapply your configuration, and since the API has had time to initialize, the second apply will work. In some cases this is good enough, but if you are building a reusable module you might want to avoid those reapplies. Then you can use a time_sleep resource to wait for API initialization:
resource "time_sleep" "wait_for_cloudresourcemanager_api" {
depends_on = [null_resource.enable_cloudresourcesmanager_api]
# or: depends_on = [google_project_service.some_other_api]
create_duration = "30s"
}
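A resource that needs the API can then depend on the sleep instead of (or in addition to) the API-enabling resource. For example, reusing the repository resource from the question, only the depends_on line changes:
resource "google_artifact_registry_repository" "el" {
  provider      = google-beta
  project       = var.tf_project_id
  location      = var.region
  repository_id = "el"
  description   = "Repository for extract/load docker images"
  format        = "DOCKER"
  depends_on    = [time_sleep.wait_for_cloudresourcemanager_api]
}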
I'm trying to debug why my Terraform script is not working. For an unknown reason, Terraform keeps destroying my MySQL database and recreating it afterwards.
Below is the output of the execution plan:
# azurerm_mysql_server.test01 will be destroyed
- resource "azurerm_mysql_server" "test01" {
- administrator_login = "me" -> null
- auto_grow_enabled = true -> null
- backup_retention_days = 7 -> null
- create_mode = "Default" -> null
- fqdn = "db-test01.mysql.database.azure.com" -> null
- geo_redundant_backup_enabled = false -> null
- id = "/subscriptions/8012-4035-b8f3-860f8cb1119e/resourceGroups/production-rg/providers/Microsoft.DBforMySQL/servers/db-test01" -> null
- infrastructure_encryption_enabled = false -> null
- location = "westeurope" -> null
- name = "db-test01" -> null
- public_network_access_enabled = true -> null
- resource_group_name = "production-rg" -> null
- sku_name = "B_Gen5_1" -> null
- ssl_enforcement = "Disabled" -> null
- ssl_enforcement_enabled = false -> null
- ssl_minimal_tls_version_enforced = "TLSEnforcementDisabled" -> null
- storage_mb = 51200 -> null
- tags = {} -> null
- version = "8.0" -> null
- storage_profile {
- auto_grow = "Enabled" -> null
- backup_retention_days = 7 -> null
- geo_redundant_backup = "Disabled" -> null
- storage_mb = 51200 -> null
}
- timeouts {}
}
# module.databases.module.test.azurerm_mysql_server.test01 will be created
+ resource "azurerm_mysql_server" "test01" {
+ administrator_login = "me"
+ administrator_login_password = (sensitive value)
+ auto_grow_enabled = true
+ backup_retention_days = 7
+ create_mode = "Default"
+ fqdn = (known after apply)
+ geo_redundant_backup_enabled = false
+ id = (known after apply)
+ infrastructure_encryption_enabled = false
+ location = "westeurope"
+ name = "db-test01"
+ public_network_access_enabled = true
+ resource_group_name = "production-rg"
+ sku_name = "B_Gen5_1"
+ ssl_enforcement = (known after apply)
+ ssl_enforcement_enabled = false
+ ssl_minimal_tls_version_enforced = "TLSEnforcementDisabled"
+ storage_mb = 51200
+ version = "8.0"
+ storage_profile {
+ auto_grow = (known after apply)
+ backup_retention_days = (known after apply)
+ geo_redundant_backup = (known after apply)
+ storage_mb = (known after apply)
}
}
As far as I know, everything is exactly the same. To prevent this I also did a manual terraform import to sync the state with the remote state.
The actual resource as defined in my main.tf:
resource "azurerm_mysql_server" "test01" {
name = "db-test01"
location = "West Europe"
resource_group_name = var.rg
administrator_login = "me"
administrator_login_password = var.root_password
sku_name = "B_Gen5_1"
storage_mb = 51200
version = "8.0"
auto_grow_enabled = true
backup_retention_days = 7
geo_redundant_backup_enabled = false
infrastructure_encryption_enabled = false
public_network_access_enabled = true
ssl_enforcement_enabled = false
}
The other odd thing is that the command below reports that everything is actually in sync:
➜ terraform git:(develop) ✗ terraform plan --refresh-only
azurerm_mysql_server.test01: Refreshing state... [id=/subscriptions/8012-4035-b8f3-860f8cb1119e/resourceGroups/firstklas-production-rg/providers/Microsoft.DBforMySQL/servers/db-test01]
No changes. Your infrastructure still matches the configuration.
Even after an actual import the same thing still happens, although the import reports that everything is now in state:
➜ terraform git:(develop) ✗ terraform import azurerm_mysql_server.test01 /subscriptions/8012-4035-b8f3-860f8cb1119e/resourceGroups/production-rg/providers/Microsoft.DBforMySQL/servers/db-test01
azurerm_mysql_server.test01: Importing from ID "/subscriptions/8012-4035-b8f3-860f8cb1119e/resourceGroups/production-rg/providers/Microsoft.DBforMySQL/servers/db-test01"...
azurerm_mysql_server.test01: Import prepared!
Prepared azurerm_mysql_server for import
azurerm_mysql_server.test01: Refreshing state... [id=/subscriptions/8012-4035-b8f3-860f8cb1119e/resourceGroups/production-rg/providers/Microsoft.DBforMySQL/servers/db-test01]
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
What can I do to prevent this destroy? Or even figure out why the destroy is triggered at all? This is happening on multiple Azure instances at this point.
NOTE: the subscription ID is spoofed so don't worry
Best,
Pim
Your plan output shows that Terraform is seeing two different resource addresses:
# azurerm_mysql_server.test01 will be destroyed
# module.databases.module.test.azurerm_mysql_server.test01 will be created
Notice that the one to be created is in a nested module, not in the root module.
If your intent is to import this object to the address that is shown as needing to be created above, you'll need to specify this full address in the terraform import command:
terraform import 'module.databases.module.test.azurerm_mysql_server.test01' /subscriptions/8012-4035-b8f3-860f8cb1119e/resourceGroups/production-rg/providers/Microsoft.DBforMySQL/servers/db-test01
The terraform import command tells Terraform to bind an existing remote object to a particular Terraform address, and so when you use it you need to be careful to specify the correct Terraform address to bind to.
In your case, you told Terraform to bind the object to a hypothetical resource "azurerm_mysql_server" "test01" block in the root module, but your configuration has no such block and so when you ran terraform plan Terraform assumed that you wanted to delete that object, because deleting a resource block is how we typically tell Terraform that we intend to delete something.
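Alternatively, since the object is already tracked at the root-module address, you could move the existing state entry to the module address instead of importing again (a sketch; double-check the exact addresses with terraform state list first):
terraform state mv 'azurerm_mysql_server.test01' 'module.databases.module.test.azurerm_mysql_server.test01'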
There is a way:
Plan
Apply
terraform state rm "resource_name"   # this removes the resource from the current state
Apply again
This worked perfectly on GCP for creating two successive VMs from the same Terraform script.
The only catch is that you need to record the current resources somewhere yourself before removing them. When destroying, you can first add them back to the state, e.g. with terraform import (or terraform state mv "resource_name" if the entry is still tracked elsewhere).
Note: this has a risk. The very first VM did not get deleted, because it is considered to be out of Terraform's scope, so its cost may persist. So make sure you keep (incremental) backups of your state files.
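As a hypothetical illustration of that flow on GCP (vm1 is a placeholder address, and the import ID format depends on the resource type):
terraform state rm 'google_compute_instance.vm1'                              # Terraform forgets the VM; the VM itself keeps running
terraform import 'google_compute_instance.vm1' my-project/us-central1-a/vm1   # re-adopt it into the state later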
Hope this helps.
I'm trying to create/register an application in Azure AD with oauth2_permissions / scopes.
I'm following this documentation page to do so: https://www.terraform.io/docs/providers/azuread/r/application.html
And I have reduced it to this simple .tf file:
provider "azuread" {
version = "=0.7.0"
subscription_id = "*******************************"
tenant_id = var.tenant-id
}
resource "azuread_application" "example" {
name = "example"
// oauth2_permissions {
// admin_consent_description = "Allow the application to access example on behalf of the signed-in user."
// admin_consent_display_name = "Access example"
// is_enabled = true
// type = "User"
// user_consent_description = "Allow the application to access example on your behalf."
// user_consent_display_name = "Access example"
// value = "user_impersonation"
// }
}
Running the script like this with terraform plan says:
C:\source\ITAN\terraform (master -> origin) λ terraform plan
Refreshing Terraform state in-memory prior to plan... The refreshed
state will be used to calculate this plan, but will not be persisted
to local or remote state storage.
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# azuread_application.example will be created
+ resource "azuread_application" "example" {
+ application_id = (known after apply)
+ homepage = (known after apply)
+ id = (known after apply)
+ identifier_uris = (known after apply)
+ name = "example"
+ object_id = (known after apply)
+ owners = (known after apply)
+ public_client = (known after apply)
+ reply_urls = (known after apply)
+ type = "webapp/api"
+ oauth2_permissions {
+ admin_consent_description = (known after apply)
+ admin_consent_display_name = (known after apply)
+ id = (known after apply)
+ is_enabled = (known after apply)
+ type = (known after apply)
+ user_consent_description = (known after apply)
+ user_consent_display_name = (known after apply)
+ value = (known after apply)
}
}
Plan: 1 to add, 0 to change, 0 to destroy.
Note: You didn't specify an "-out" parameter to save this plan, so
Terraform can't guarantee that exactly these actions will be performed
if "terraform apply" is subsequently run.
But when I uncomment the oauth2_permissions
provider "azuread" {
version = "=0.7.0"
subscription_id = "******************"
tenant_id = var.tenant-id
}
resource "azuread_application" "example" {
name = "example"
oauth2_permissions {
admin_consent_description = "Allow the application to access example on behalf of the signed-in user."
admin_consent_display_name = "Access example"
is_enabled = true
type = "User"
user_consent_description = "Allow the application to access example on your behalf."
user_consent_display_name = "Access example"
value = "user_impersonation"
}
}
The problem occurs, and Terraform reports:
Error: "oauth2_permissions.0.user_consent_display_name": this field
cannot be set
on itan-azure-ad.tf line 7, in resource "azuread_application"
"example": 7: resource "azuread_application" "example" {
Any idea what I am doing wrong? I'm logged in, I have selected the proper subscription and switched to it. I own the Azure account. I have created the application via the Azure portal successfully, yet I want to have it done automatically. Running on Terraform:
terraform -v
Terraform v0.12.28
+ provider.azuread v0.7.0
It looks like setting user_consent_display_name is not supported in provider azuread v0.7.0. See oauth2_permissions in the change log here.
Please use the latest azuread provider, version 0.11.0. It will fix your issue:
provider "azuread" {
version = "~>0.11.0"
subscription_id = "*******************************"
tenant_id = var.tenant-id
}
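After bumping the constraint, re-initialize the working directory so Terraform actually downloads the newer provider (a standard step, not specific to this fix):
terraform init -upgrade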
I have terraform that is producing the following log on apply:
Terraform will perform the following actions:
# aws_lambda_permission.allow_bucket must be replaced
-/+ resource "aws_lambda_permission" "allow_bucket" {
action = "lambda:InvokeFunction"
function_name = "arn:aws:lambda:us-east-1:<secret>:function:ProggyS3ObjectCreatedHandler"
~ id = "AllowExecutionFromS3Bucket" -> (known after apply)
principal = "s3.amazonaws.com"
~ source_arn = "arn:aws:s3:::bucket-example-us-east-1" -> (known after apply) # forces replacement
statement_id = "AllowExecutionFromS3Bucket"
}
# aws_s3_bucket.bucket must be replaced
-/+ resource "aws_s3_bucket" "bucket" {
+ acceleration_status = (known after apply)
acl = "private"
~ arn = "arn:aws:s3:::bucket-example-us-east-1" -> (known after apply)
~ bucket = "bucket-example-us-east-1" -> "sftp-assembly-us-east-1" # forces replacement
~ bucket_domain_name = "bucket-example-us-east-1.s3.amazonaws.com" -> (known after apply)
~ bucket_regional_domain_name = "bucket-example-us-east-1.s3.amazonaws.com" -> (known after apply)
force_destroy = false
~ hosted_zone_id = "FOOBAR" -> (known after apply)
~ id = "bucket-example-us-east-1" -> (known after apply)
~ region = "us-east-1" -> (known after apply)
~ request_payer = "BucketOwner" -> (known after apply)
~ tags = {
~ "Name" = "bucket-example-us-east-1" -> "sftp-assembly-us-east-1"
}
+ website_domain = (known after apply)
+ website_endpoint = (known after apply)
versioning {
enabled = false
mfa_delete = false
}
}
# aws_s3_bucket_notification.bucket_notification must be replaced
-/+ resource "aws_s3_bucket_notification" "bucket_notification" {
~ bucket = "bucket-example-us-east-1" -> (known after apply) # forces replacement
~ id = "bucket-example-us-east-1" -> (known after apply)
~ lambda_function {
events = [
"s3:ObjectCreated:*",
]
~ id = "tf-s3-lambda-FOOBAR" -> (known after apply)
lambda_function_arn = "arn:aws:lambda:us-east-1:<secret>:function:ProggyS3ObjectCreatedHandler"
}
}
# module.large-file-breaker-lambda-primary.aws_lambda_function.lambda will be updated in-place
~ resource "aws_lambda_function" "lambda" {
arn = "arn:aws:lambda:us-east-1:<secret>:function:ProggyLargeFileBreaker"
function_name = "ProggyLargeFileBreaker"
handler = "break-large-files"
id = "ProggyLargeFileBreaker"
invoke_arn = "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:<secret>:function:ProggyLargeFileBreaker/invocations"
last_modified = "2020-03-13T20:17:33.376+0000"
layers = []
memory_size = 3008
publish = false
qualified_arn = "arn:aws:lambda:us-east-1:<secret>:function:ProggyLargeFileBreaker:$LATEST"
reserved_concurrent_executions = -1
role = "arn:aws:iam::<secret>:role/ProggyLargeFileBreaker-role20200310215329691700000001"
runtime = "go1.x"
s3_bucket = "repo-us-east-1"
s3_key = "Proggy-large-file-breaker/1.0.10/break-large-files-1.0.10.zip"
source_code_hash = "TbwleLcqD+xL2zOYk6ZdiBWAAznCIiTS/6nzrWqYZhE="
source_code_size = 7294687
~ tags = {
"Name" = "ProggyLargeFileBreaker"
~ "TerraformRepo" = "https://git.com/wwexdevelopment/aws-terraform-projects/commits/tag/v2.0.11" -> "https://git.com/wwexdevelopment/aws-terraform-projects/commits/tag/v2.0.16"
"Version" = "1.0.10"
"account" = "main"
"environment" = "assembly"
"source" = "terraform"
"type" = "ops"
}
timeout = 360
version = "$LATEST"
environment {
variables = {
"ARCHIVE_BUCKET" = "Proggy-archive-assembly-us-east-1"
"S3_OBJECT_SIZE_LIMIT" = "15000000"
}
}
tracing_config {
mode = "PassThrough"
}
}
# module.notifier-lambda-primary.aws_lambda_function.lambda will be updated in-place
~ resource "aws_lambda_function" "lambda" {
arn = "arn:aws:lambda:us-east-1:<secret>:function:ProggyS3ObjectCreatedHandler"
function_name = "ProggyS3ObjectCreatedHandler"
handler = "s3-Proggy-notifier"
id = "ProggyS3ObjectCreatedHandler"
invoke_arn = "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:<secret>:function:ProggyS3ObjectCreatedHandler/invocations"
last_modified = "2020-03-11T20:52:33.256+0000"
layers = []
memory_size = 128
publish = false
qualified_arn = "arn:aws:lambda:us-east-1:<secret>:function:ProggyS3ObjectCreatedHandler:$LATEST"
reserved_concurrent_executions = -1
role = "arn:aws:iam::<secret>:role/ProggyS3ObjectCreatedHandler-role20200310215329780600000001"
runtime = "go1.x"
s3_bucket = "repo-us-east-1"
s3_key = "s3-Proggy-notifier/1.0.55/s3-Proggy-notifier-1.0.55.zip"
source_code_hash = "4N+B1GpaUY/wB4S7tR1eWRnNuHnBExcEzmO+mqhQ5B4="
source_code_size = 6787828
~ tags = {
"Name" = "ProggyS3ObjectCreatedHandler"
~ "TerraformRepo" = "https://git.com/wwexdevelopment/aws-terraform-projects/commits/tag/v2.0.11" -> "https://git.com/wwexdevelopment/aws-terraform-projects/commits/tag/v2.0.16"
"Version" = "1.0.55"
"account" = "main"
"environment" = "assembly"
"source" = "terraform"
"type" = "ops"
}
timeout = 360
version = "$LATEST"
environment {
variables = {
"FILE_BREAKER_LAMBDA_FUNCTION_NAME" = "ProggyLargeFileBreaker"
"Proggy_SQS_QUEUE_NAME" = "Proggy_Proggy_edi-notification.fifo"
"Proggy_SQS_QUEUE_URL" = "https://sqs.us-east-1.amazonaws.com/<secret>/Proggy_Proggy_edi-notification.fifo"
"S3_OBJECT_SIZE_LIMIT" = "15000000"
}
}
tracing_config {
mode = "PassThrough"
}
}
Plan: 3 to add, 2 to change, 3 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
module.large-file-breaker-lambda-primary.aws_lambda_function.lambda: Modifying... [id=ProggyLargeFileBreaker]
module.notifier-lambda-primary.aws_lambda_function.lambda: Modifying... [id=ProggyS3ObjectCreatedHandler]
aws_lambda_permission.allow_bucket: Destroying... [id=AllowExecutionFromS3Bucket]
aws_s3_bucket_notification.bucket_notification: Destroying... [id=bucket-example-us-east-1]
module.large-file-breaker-lambda-primary.aws_lambda_function.lambda: Modifications complete after 0s [id=ProggyLargeFileBreaker]
aws_s3_bucket_notification.bucket_notification: Destruction complete after 0s
aws_lambda_permission.allow_bucket: Destruction complete after 0s
aws_s3_bucket.bucket: Destroying... [id=bucket-example-us-east-1]
module.notifier-lambda-primary.aws_lambda_function.lambda: Modifications complete after 0s [id=ProggyS3ObjectCreatedHandler]
Error: error deleting S3 Bucket (bucket-example-us-east-1): BucketNotEmpty: The bucket you tried to delete is not empty. You must delete all versions in the bucket.
status code: 409, request id: 0916517C5F1DF875, host id: +l5yzHjw7EMmdT4xdCmgg0Zx5W7zxpEil/dUWeJmnL8IvfPw2uKgvJ2Ee7utlRI0rkohdY+pjYI=
It wants to delete that bucket because I formerly told it to create it (I think). But I don't want terraform to delete that bucket. The bucket it's trying to delete is in use by other teams.
How can I tell terraform to do the apply, but not delete that bucket?
Put prevent_destroy in the S3 bucket resource's lifecycle block to make sure it cannot be deleted under any circumstances:
lifecycle {
prevent_destroy = true
}
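Placed in context, it looks like this (a sketch; keep your existing bucket arguments where the comment is):
resource "aws_s3_bucket" "bucket" {
  # ... your existing bucket arguments ...

  lifecycle {
    prevent_destroy = true
  }
}
Note that prevent_destroy does not silently skip the deletion: any plan that would destroy the bucket fails with an error, which is usually what you want for a bucket shared with other teams.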
Remove all resource definitions other than the S3 bucket from your .tf files. Run terraform plan to see whether Terraform wants to destroy anything other than the S3 bucket.
If the plan shows what you expect, run terraform apply.
Then recreate the resources you need besides the S3 bucket, but put those resources in a different place, so that the S3 bucket is left out.
Or, use terraform state rm to remove all resources other than the S3 bucket from the state file. Then run terraform import to import them into new .tf files in another location.
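A rough sketch of that last option, using the notification resource from the plan above as an example (the import ID for aws_s3_bucket_notification is the bucket name; other resource types use different ID formats):
terraform state rm 'aws_s3_bucket_notification.bucket_notification'                           # drop it from state; the real resource is untouched
terraform import 'aws_s3_bucket_notification.bucket_notification' bucket-example-us-east-1    # adopt it again from the new configuration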
~ bucket = "bucket-example-us-east-1" -> "sftp-assembly-us-east-1" # forces replacement
The annotation Terraform adds at the end of this line tells you that changing the bucket name forces the resource to be replaced (destroyed and re-created).
If this is meant to be a separate bucket, you need to create a new Terraform resource for it; call it bucket1 or whatever:
resource "aws_s3_bucket" "bucket1" {
...
}
If the bucket is now empty, it may be easier to just delete the bucket and have Terraform create it for you; that way you don't have to mess with the import commands. If the bucket already exists and contains files, you will need to import the resource into the state file. This will help you with how to import:
https://www.terraform.io/docs/import/usage.html
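For example, if the new bucket (sftp-assembly-us-east-1 in your plan) already exists, importing it into the new resource would look roughly like this (the import ID for aws_s3_bucket is simply the bucket name):
terraform import aws_s3_bucket.bucket1 sftp-assembly-us-east-1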
The change in the tag value is forcing a new -/+ resource "aws_s3_bucket" "bucket"
~ tags = {
~ "Name" = "bucket-example-us-east-1" -> "sftp-assembly-us-east-1"
}
You can use Lifecycle - ignore_changes meta-argument to ignore tag value changes forcing a new resource.
resource "aws_s3_bucket" "bucket" {
# ...
lifecycle {
ignore_changes = [
tags["Name"],
]
}
}
After adding tags to the ignore_changes meta-argument, if you find that changes to the values of any other parameters are also forcing a new bucket, you can include those parameters in ignore_changes as well.
You can also remove this resource from the state file using the command terraform state rm aws_s3_bucket.bucket. This will allow you to create a new bucket without deleting the old one. Since S3 bucket names are globally unique, you need to give the new bucket a new name.
Another option is, after removing the resource from the state file, to import it again using the command terraform import aws_s3_bucket.bucket <bucket name>.
I am trying to set up an azurerm backend using the following Terraform code:
modules\remote-state\main.tf
provider "azurerm" {
}
variable "env" {
type = string
description = "The SDLC environment (qa, dev, prod, etc...)"
}
locals {
extended_name = "dfpg-${lower(var.env)}-tfstate"
}
##################################################################################
# RESOURCES
##################################################################################
resource "azurerm_resource_group" "setup" {
name = "app505-${local.extended_name}-eastus2"
location = "eastus2"
}
resource "azurerm_storage_account" "sa" {
name = replace(local.extended_name, "-", "")
resource_group_name = azurerm_resource_group.setup.name
location = azurerm_resource_group.setup.location
account_tier = "Standard"
account_replication_type = "LRS"
}
resource "azurerm_storage_container" "ct" {
name = "terraform-state"
storage_account_name = azurerm_storage_account.sa.name
}
data "azurerm_storage_account_sas" "state" {
connection_string = azurerm_storage_account.sa.primary_connection_string
https_only = true
resource_types {
service = true
container = true
object = true
}
services {
blob = true
queue = false
table = false
file = false
}
start = timestamp()
expiry = timeadd(timestamp(), "17520h")
permissions {
read = true
write = true
delete = true
list = true
add = true
create = true
update = false
process = false
}
}
##################################################################################
# OUTPUT
##################################################################################
resource "null_resource" "post-config" {
depends_on = [azurerm_storage_container.ct]
provisioner "local-exec" {
command = <<EOT
Set-Content -Value 'storage_account_name = "${azurerm_storage_account.sa.name}"' -Path "backend-config.txt"
Add-Content -Value 'container_name = "terraform-state"' -Path "backend-config.txt"
Add-Content -Value 'key = "terraform.tfstate"' -Path "backend-config.txt"
Add-Content -Value 'sas_token = "${data.azurerm_storage_account_sas.state.sas}"' -Path "backend-config.txt"
EOT
interpreter = ["PowerShell", "-NoProfile", "-Command"]
}
}
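For context, the generated backend-config.txt is meant to be consumed later by a root configuration that declares an empty azurerm backend block; a rough sketch of that consumer (the backend block and init command below are assumptions about how the file is used, not part of the module above):
terraform {
  backend "azurerm" {}
}
and initialized with:
terraform init -backend-config="backend-config.txt"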
qa\bootstrap\rs\main.tf
module "bootstrap" {
source = "../../../modules/remote-state"
env = "qa"
}
terraform init
C:\xyz\terraform\qa\bootstrap\rs [shelve/terraform ≡]> terraform init
Initializing modules...
- bootstrap in ..\..\..\modules\remote-state
Initializing the backend...
Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "azurerm" (hashicorp/azurerm) 1.41.0...
- Downloading plugin for provider "null" (hashicorp/null) 2.1.2...
The following providers do not have any version constraints in configuration,
so the latest version was installed.
To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.
* provider.azurerm: version = "~> 1.41"
* provider.null: version = "~> 2.1"
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
C:\xyz\terraform\qa\bootstrap\rs [shelve/terraform ≡]>
terraform plan -out main.tfplan
C:\xyz\terraform\qa\bootstrap\rs [shelve/terraform ≡]> terraform plan -out main.tfplan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
<= read (data resources)
Terraform will perform the following actions:
# module.bootstrap.data.azurerm_storage_account_sas.state will be read during apply
# (config refers to values not yet known)
<= data "azurerm_storage_account_sas" "state" {
+ connection_string = (sensitive value)
+ expiry = (known after apply)
+ https_only = true
+ id = (known after apply)
+ sas = (sensitive value)
+ start = (known after apply)
+ permissions {
+ add = true
+ create = true
+ delete = true
+ list = true
+ process = false
+ read = true
+ update = false
+ write = true
}
+ resource_types {
+ container = true
+ object = true
+ service = true
}
+ services {
+ blob = true
+ file = false
+ queue = false
+ table = false
}
+ timeouts {
+ read = (known after apply)
}
}
# module.bootstrap.azurerm_resource_group.setup will be created
+ resource "azurerm_resource_group" "setup" {
+ id = (known after apply)
+ location = "eastus2"
+ name = "app505-dfpg-qa-tfstate-eastus2"
+ tags = (known after apply)
}
# module.bootstrap.azurerm_storage_account.sa will be created
+ resource "azurerm_storage_account" "sa" {
+ access_tier = (known after apply)
+ account_encryption_source = "Microsoft.Storage"
+ account_kind = "Storage"
+ account_replication_type = "LRS"
+ account_tier = "Standard"
+ account_type = (known after apply)
+ enable_advanced_threat_protection = (known after apply)
+ enable_blob_encryption = true
+ enable_file_encryption = true
+ id = (known after apply)
+ is_hns_enabled = false
+ location = "eastus2"
+ name = "dfpgqatfstate"
+ primary_access_key = (sensitive value)
+ primary_blob_connection_string = (sensitive value)
+ primary_blob_endpoint = (known after apply)
+ primary_blob_host = (known after apply)
+ primary_connection_string = (sensitive value)
+ primary_dfs_endpoint = (known after apply)
+ primary_dfs_host = (known after apply)
+ primary_file_endpoint = (known after apply)
+ primary_file_host = (known after apply)
+ primary_location = (known after apply)
+ primary_queue_endpoint = (known after apply)
+ primary_queue_host = (known after apply)
+ primary_table_endpoint = (known after apply)
+ primary_table_host = (known after apply)
+ primary_web_endpoint = (known after apply)
+ primary_web_host = (known after apply)
+ resource_group_name = "app505-dfpg-qa-tfstate-eastus2"
+ secondary_access_key = (sensitive value)
+ secondary_blob_connection_string = (sensitive value)
+ secondary_blob_endpoint = (known after apply)
+ secondary_blob_host = (known after apply)
+ secondary_connection_string = (sensitive value)
+ secondary_dfs_endpoint = (known after apply)
+ secondary_dfs_host = (known after apply)
+ secondary_file_endpoint = (known after apply)
+ secondary_file_host = (known after apply)
+ secondary_location = (known after apply)
+ secondary_queue_endpoint = (known after apply)
+ secondary_queue_host = (known after apply)
+ secondary_table_endpoint = (known after apply)
+ secondary_table_host = (known after apply)
+ secondary_web_endpoint = (known after apply)
+ secondary_web_host = (known after apply)
+ tags = (known after apply)
+ blob_properties {
+ delete_retention_policy {
+ days = (known after apply)
}
}
+ identity {
+ principal_id = (known after apply)
+ tenant_id = (known after apply)
+ type = (known after apply)
}
+ network_rules {
+ bypass = (known after apply)
+ default_action = (known after apply)
+ ip_rules = (known after apply)
+ virtual_network_subnet_ids = (known after apply)
}
+ queue_properties {
+ cors_rule {
+ allowed_headers = (known after apply)
+ allowed_methods = (known after apply)
+ allowed_origins = (known after apply)
+ exposed_headers = (known after apply)
+ max_age_in_seconds = (known after apply)
}
+ hour_metrics {
+ enabled = (known after apply)
+ include_apis = (known after apply)
+ retention_policy_days = (known after apply)
+ version = (known after apply)
}
+ logging {
+ delete = (known after apply)
+ read = (known after apply)
+ retention_policy_days = (known after apply)
+ version = (known after apply)
+ write = (known after apply)
}
+ minute_metrics {
+ enabled = (known after apply)
+ include_apis = (known after apply)
+ retention_policy_days = (known after apply)
+ version = (known after apply)
}
}
}
# module.bootstrap.azurerm_storage_container.ct will be created
+ resource "azurerm_storage_container" "ct" {
+ container_access_type = "private"
+ has_immutability_policy = (known after apply)
+ has_legal_hold = (known after apply)
+ id = (known after apply)
+ metadata = (known after apply)
+ name = "terraform-state"
+ properties = (known after apply)
+ resource_group_name = (known after apply)
+ storage_account_name = "dfpgqatfstate"
}
# module.bootstrap.null_resource.post-config will be created
+ resource "null_resource" "post-config" {
+ id = (known after apply)
}
Plan: 4 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
This plan was saved to: main.tfplan
To perform exactly these actions, run the following command to apply:
terraform apply "main.tfplan"
C:\xyz\terraform\qa\bootstrap\rs [shelve/terraform ≡]>
So far so good. Now applying:
C:\xyz\terraform\qa\bootstrap\rs [shelve/terraform ≡]> terraform.exe apply .\main.tfplan
module.bootstrap.azurerm_resource_group.setup: Creating...
module.bootstrap.azurerm_resource_group.setup: Creation complete after 3s [id=/subscriptions/*SUB-GUID*/resourceGroups/app505-dfpg-qa-tfstate-eastus2]
module.bootstrap.azurerm_storage_account.sa: Creating...
module.bootstrap.azurerm_storage_account.sa: Still creating... [10s elapsed]
module.bootstrap.azurerm_storage_account.sa: Still creating... [20s elapsed]
module.bootstrap.azurerm_storage_account.sa: Still creating... [30s elapsed]
module.bootstrap.azurerm_storage_account.sa: Creation complete after 32s [id=/subscriptions/*SUB-GUID*/resourceGroups/app505-dfpg-qa-tfstate-eastus2/providers/Microsoft.Storage/storageAccounts/dfpgqatfstate]
module.bootstrap.data.azurerm_storage_account_sas.state: Refreshing state...
module.bootstrap.azurerm_storage_container.ct: Creating...
Error: Error creating Container "terraform-state" (Account "dfpgqatfstate" / Resource Group "app505-dfpg-qa-tfstate-eastus2"): containers.Client#Create: Failure responding to request: StatusCode=404 -- Original Error: autorest/azure: Service returned an error. Status=404 Code="ResourceNotFound" Message="The specified resource does not exist.\nRequestId:4dcdd560-901e-005e-130d-d3a867000000\nTime:2020-01-24T23:26:53.6811230Z"
on ..\..\..\modules\remote-state\main.tf line 29, in resource "azurerm_storage_container" "ct":
29: resource "azurerm_storage_container" "ct" {
C:\xyz\terraform\qa\bootstrap\rs [shelve/terraform ≡ +1 ~0 -0 !]>
Now running the second time works:
C:\xyz\terraform\qa\bootstrap\rs [shelve/terraform ≡ +1 ~0 -0 !]> terraform.exe apply .\main.tfplan
module.bootstrap.azurerm_resource_group.setup: Creating...
module.bootstrap.azurerm_resource_group.setup: Creation complete after 2s [id=/subscriptions/*SUB-GUID*/resourceGroups/app505-dfpg-qa-tfstate-eastus2]
module.bootstrap.azurerm_storage_account.sa: Creating...
module.bootstrap.azurerm_storage_account.sa: Creation complete after 7s [id=/subscriptions/*SUB-GUID*/resourceGroups/app505-dfpg-qa-tfstate-eastus2/providers/Microsoft.Storage/storageAccounts/dfpgqatfstate]
module.bootstrap.data.azurerm_storage_account_sas.state: Refreshing state...
module.bootstrap.azurerm_storage_container.ct: Creating...
module.bootstrap.azurerm_storage_container.ct: Creation complete after 1s [id=https://dfpgqatfstate.blob.core.windows.net/terraform-state]
module.bootstrap.null_resource.post-config: Creating...
module.bootstrap.null_resource.post-config: Provisioning with 'local-exec'...
module.bootstrap.null_resource.post-config (local-exec): Executing: ["PowerShell" "-NoProfile" "-Command" "Set-Content -Value 'storage_account_name = \"dfpgqatfstate\"' -Path \"backend-config.txt\"\r\nAdd-Content -Value 'container_name = \"terraform-state\"' -Path \"backend-config.txt\"\r\nAdd-Content -Value 'key = \"terraform.tfstate\"' -Path \"backend-config.txt\"\r\nAdd-Content -Value 'sas_token = \"?sv=2017-07-29&ss=b&srt=sco&sp=rwdlac&se=2022-01-23T23:29:47Z&st=2020-01-24T23:29:47Z&spr=https&sig=***\"' -Path \"backend-config.txt\"\r\n"]
module.bootstrap.null_resource.post-config: Creation complete after 1s [id=5713483326668430483]
Apply complete! Resources: 4 added, 0 changed, 0 destroyed.
The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.
State path: terraform.tfstate
C:\xyz\terraform\qa\bootstrap\rs [shelve/terraform ≡ +2 ~0 -0 !]>
But why do I have to run it twice? What am I doing wrong?
Edit 1
I tried it several times on my laptop (destroying in the middle, of course) and it consistently failed. Then I activated trace logging, and 5 minutes later it passed for the first time. I have no idea why it took 5 minutes; the network seems to be fine on my end.
My laptop is on a VPN to work. I will now try from the workstation at work without VPN.
Edit 2
Tried it on the workstation in the office. The first time it succeeded, but suspicious as I am, I destroyed and retried, and it failed the second time.
Edit 3
We have ZScaler installed.