I'm trying to debug why my Terraform configuration is not working. For an unknown reason, Terraform keeps destroying my MySQL database and then recreating it.
Below is the output of the execution plan:
# azurerm_mysql_server.test01 will be destroyed
- resource "azurerm_mysql_server" "test01" {
- administrator_login = "me" -> null
- auto_grow_enabled = true -> null
- backup_retention_days = 7 -> null
- create_mode = "Default" -> null
- fqdn = "db-test01.mysql.database.azure.com" -> null
- geo_redundant_backup_enabled = false -> null
- id = "/subscriptions/8012-4035-b8f3-860f8cb1119e/resourceGroups/production-rg/providers/Microsoft.DBforMySQL/servers/db-test01" -> null
- infrastructure_encryption_enabled = false -> null
- location = "westeurope" -> null
- name = "db-test01" -> null
- public_network_access_enabled = true -> null
- resource_group_name = "production-rg" -> null
- sku_name = "B_Gen5_1" -> null
- ssl_enforcement = "Disabled" -> null
- ssl_enforcement_enabled = false -> null
- ssl_minimal_tls_version_enforced = "TLSEnforcementDisabled" -> null
- storage_mb = 51200 -> null
- tags = {} -> null
- version = "8.0" -> null
- storage_profile {
- auto_grow = "Enabled" -> null
- backup_retention_days = 7 -> null
- geo_redundant_backup = "Disabled" -> null
- storage_mb = 51200 -> null
}
- timeouts {}
}
# module.databases.module.test.azurerm_mysql_server.test01 will be created
+ resource "azurerm_mysql_server" "test01" {
+ administrator_login = "me"
+ administrator_login_password = (sensitive value)
+ auto_grow_enabled = true
+ backup_retention_days = 7
+ create_mode = "Default"
+ fqdn = (known after apply)
+ geo_redundant_backup_enabled = false
+ id = (known after apply)
+ infrastructure_encryption_enabled = false
+ location = "westeurope"
+ name = "db-test01"
+ public_network_access_enabled = true
+ resource_group_name = "production-rg"
+ sku_name = "B_Gen5_1"
+ ssl_enforcement = (known after apply)
+ ssl_enforcement_enabled = false
+ ssl_minimal_tls_version_enforced = "TLSEnforcementDisabled"
+ storage_mb = 51200
+ version = "8.0"
+ storage_profile {
+ auto_grow = (known after apply)
+ backup_retention_days = (known after apply)
+ geo_redundant_backup = (known after apply)
+ storage_mb = (known after apply)
}
}
As far as I know everything is exactly the same. To prevent this I also did a manual terraform import to sync the state with the remote resource.
The actual resource as defined in my main.tf:
resource "azurerm_mysql_server" "test01" {
name = "db-test01"
location = "West Europe"
resource_group_name = var.rg
administrator_login = "me"
administrator_login_password = var.root_password
sku_name = "B_Gen5_1"
storage_mb = 51200
version = "8.0"
auto_grow_enabled = true
backup_retention_days = 7
geo_redundant_backup_enabled = false
infrastructure_encryption_enabled = false
public_network_access_enabled = true
ssl_enforcement_enabled = false
}
The other odd thing is that the command below reports that everything is actually in sync:
➜ terraform git:(develop) ✗ terraform plan --refresh-only
azurerm_mysql_server.test01: Refreshing state... [id=/subscriptions/8012-4035-b8f3-860f8cb1119e/resourceGroups/firstklas-production-rg/providers/Microsoft.DBforMySQL/servers/db-test01]
No changes. Your infrastructure still matches the configuration.
Even after an actual import the same thing still happens, even though the import reports that the resource is in state:
➜ terraform git:(develop) ✗ terraform import azurerm_mysql_server.test01 /subscriptions/8012-4035-b8f3-860f8cb1119e/resourceGroups/production-rg/providers/Microsoft.DBforMySQL/servers/db-test01
azurerm_mysql_server.test01: Importing from ID "/subscriptions/8012-4035-b8f3-860f8cb1119e/resourceGroups/production-rg/providers/Microsoft.DBforMySQL/servers/db-test01"...
azurerm_mysql_server.test01: Import prepared!
Prepared azurerm_mysql_server for import
azurerm_mysql_server.test01: Refreshing state... [id=/subscriptions/8012-4035-b8f3-860f8cb1119e/resourceGroups/production-rg/providers/Microsoft.DBforMySQL/servers/db-test01]
Import successful!
The resources that were imported are shown above. These resources are now in
your Terraform state and will henceforth be managed by Terraform.
What can I do to prevent this destroy? Or even figure out why the destroy is triggered in the first place? This is happening on multiple Azure instances at this point.
NOTE: the subscription ID is spoofed, so don't worry.
Best,
Pim
Your plan output shows that Terraform is seeing two different resource addresses:
# azurerm_mysql_server.test01 will be destroyed
# module.databases.module.test.azurerm_mysql_server.test01 will be created
Notice that the one to be created is in a nested module, not in the root module.
If your intent is to import this object to the address that is shown above as needing to be created, you'll need to specify that full address in the terraform import command:
terraform import 'module.databases.module.test.azurerm_mysql_server.test01' /subscriptions/8012-4035-b8f3-860f8cb1119e/resourceGroups/production-rg/providers/Microsoft.DBforMySQL/servers/db-test01
The terraform import command tells Terraform to bind an existing remote object to a particular Terraform address, and so when you use it you need to be careful to specify the correct Terraform address to bind to.
In your case, you told Terraform to bind the object to a hypothetical resource "azurerm_mysql_server" "test01" block in the root module, but your configuration has no such block, so when you ran terraform plan, Terraform assumed that you wanted to delete that object, because removing a resource block is how we typically tell Terraform that we intend to delete something.
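If you would rather not re-import, another option (a sketch, assuming the object is still tracked in your state at the root-module address shown in the plan) is to move the existing state entry to the module address with terraform state mv:
terraform state mv 'azurerm_mysql_server.test01' 'module.databases.module.test.azurerm_mysql_server.test01'
After the move, terraform plan should no longer show the destroy/create pair for this server.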
There is another way:
Plan
Apply
terraform state rm "resource_name" (this removes the resource from the current state)
Apply again
This worked perfectly on GCP for creating 2 successive VMs using the same TF script. The only thing is that we need to write some code to collect the current resources and store them somewhere; while destroying, we can add them back using terraform state mv "resource_name".
Note: this has risk, as the very first VM did not get deleted because it is considered to have been created outside the scope of Terraform, so its cost may persist. So you have to keep backups (incremental) of your state files.
Hope this helps.
I have the following Terraform resources in a file:
resource "google_project_service" "cloud_resource_manager" {
project = var.tf_project_id
service = "cloudresourcemanager.googleapis.com"
disable_dependent_services = true
}
resource "google_project_service" "artifact_registry" {
project = var.tf_project_id
service = "artifactregistry.googleapis.com"
disable_dependent_services = true
depends_on = [google_project_service.cloud_resource_manager]
}
resource "google_artifact_registry_repository" "el" {
provider = google-beta
project = var.tf_project_id
location = var.region
repository_id = "el"
description = "Repository for extract/load docker images"
format = "DOCKER"
depends_on = [google_project_service.artifact_registry]
}
However, when I run terraform plan, I get this:
Terraform will perform the following actions:
# google_artifact_registry_repository.el will be created
+ resource "google_artifact_registry_repository" "el" {
+ create_time = (known after apply)
+ description = "Repository for extract/load docker images"
+ format = "DOCKER"
+ id = (known after apply)
+ location = "us-central1"
+ name = (known after apply)
+ project = "backbone-third-party-data"
+ repository_id = "el"
+ update_time = (known after apply)
}
# google_project_iam_member.ingest_sa_roles["cloudscheduler.serviceAgent"] will be created
+ resource "google_project_iam_member" "ingest_sa_roles" {
+ etag = (known after apply)
+ id = (known after apply)
+ member = (known after apply)
+ project = "backbone-third-party-data"
+ role = "roles/cloudscheduler.serviceAgent"
}
# google_project_iam_member.ingest_sa_roles["run.invoker"] will be created
+ resource "google_project_iam_member" "ingest_sa_roles" {
+ etag = (known after apply)
+ id = (known after apply)
+ member = (known after apply)
+ project = <my project id>
+ role = "roles/run.invoker"
}
# google_project_service.artifact_registry will be created
+ resource "google_project_service" "artifact_registry" {
+ disable_dependent_services = true
+ disable_on_destroy = true
+ id = (known after apply)
+ project = <my project id>
+ service = "artifactregistry.googleapis.com"
}
See how google_project_service.artifact_registry is created after google_artifact_registry_repository.el? I was hoping that my depends_on in resource google_artifact_registry_repository.el would make it so the service was created first. Am I misunderstanding how depends_on works? Or does the ordering of resources listed by terraform plan not actually reflect the order they are created in?
Edit: when I run terraform apply, it errors out with:
Error 403: Cloud Resource Manager API has not been used in project 521986354168 before or it is disabled
Even though it is enabled. I think it's doing this because it's running the artifact registry resource creation before creating the project services?
I don't think that it will be possible to enable this particular API this way, as the google_project_service resource depends on the Resource Manager API (and maybe also on the Service Usage API?) being enabled. So you could either enable those manually or use a null_resource with a local-exec provisioner to do it automatically:
resource "null_resource" "enable_cloudresourcesmanager_api" {
provisioner "local-exec" {
command = "gcloud services enable cloudresourcesmanager.googleapis.com cloudresourcemanager.googleapis.com --project ${var.project_id}"
}
}
Another issue you might find is that enabling an API takes some time, depending on the service. So sometimes, even though your resources depend on a resource that enables an API, you will still get the same error message. Then you can just reapply your configuration; since the API has had time to initialize, the second apply will work. In some cases this is good enough, but if you are building a reusable module you might want to avoid those reapplies. Then you can use a time_sleep resource to wait for API initialization:
resource "time_sleep" "wait_for_cloudresourcemanager_api" {
depends_on = [null_resource.enable_cloudresourcesmanager_api]
# or: depends_on = [google_project_service.some_other_api]
create_duration = "30s"
}
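A downstream resource can then wait on the time_sleep resource via depends_on. A minimal sketch, reusing the artifact registry service from the question:
resource "google_project_service" "artifact_registry" {
  project                    = var.tf_project_id
  service                    = "artifactregistry.googleapis.com"
  disable_dependent_services = true

  # Wait until the Cloud Resource Manager API has had time to initialize
  depends_on = [time_sleep.wait_for_cloudresourcemanager_api]
}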
Due to some technical issues during a migration we had to make some changes to our Azure resources directly in the portal. In order to bring our Terraform state files up to date again, we plan to import some resources.
But when doing a trial on a POC environment with just 1 resource group, we already ran into trouble.
These are the commands I'm executing:
TerraForm import -var-file="T:\_config\%SIB_Subscription%\%Core%\terraform.tfvars" "module.provision_resourcegroup.module.rg_create[\"edw-10\"].azurerm_resource_group.rg" /subscriptions/8dc72845-b367-4dcc-98f9-d9a4a933defc/resourceGroups/rg-poc-edw-010
TerraForm plan -var-file="T:\_config\%SIB_Subscription%\%Core%\terraform.tfvars" -out "T:\_CommandLine\_Logs\planfile.log"
The environment variables are set correctly, as the state file is created on the blob storage.
But when looking at the output on screen, I see this:
# module.provision_resourcegroup.module.rg_create["edw-1"].azurerm_resource_group.rg will be destroyed
- resource "azurerm_resource_group" "rg" {
- id = "/subscriptions/oooooo-zzzz-xxxx-yyyy-zzzz/resourceGroups/rg-poc-edw-001" -> null
- location = "westeurope" -> null
- name = "rg-poc-edw-001" -> null
- tags = {
- "APMId" = "00000"
- "CMDBApplicationId" = "tbd"
- "CMDBApplicationURL" = "tbd"
- "Capability" = "DAS - Data Analytics Services"
- "Das_Desc" = "DAS Common Purpose"
- "Solution" = "EDW"
} -> null
- timeouts {}
}
# module.provision_resourcegroup[0].module.rg_create["edw-1"].azurerm_resource_group.rg will be created
+ resource "azurerm_resource_group" "rg" {
+ id = (known after apply)
+ location = "westeurope"
+ name = "rg-poc-edw-001"
+ tags = {
+ "APMId" = "00000"
+ "CMDBApplicationId" = "tbd"
+ "CMDBApplicationURL" = "tbd"
+ "Capability" = "DAS - Data Analytics Services"
+ "Das_Desc" = "DAS Common Purpose"
+ "Solution" = "EDW"
}
}
So this can't be used, as the apply would delete the RG before creating the new one. Is there a way that I can see WHY TF wants to recreate it?
This is my code to create the module:
resource "azurerm_resource_group" "rg" {
name = "rg-${module.subscription.environment}-${local.rg_name_solution}-${var.rg_name_seqnr}"
location = module.location.azure
tags = {
"Das_Desc" = var.tag_Desc
"Capability" = var.tag_capability
"Solution" = var.tag_solution
"APMId" = var.tag_APMId
"CMDBApplicationURL" = var.tag_CMDBApplicationURL
"CMDBApplicationId" = var.tag_CMDBApplicationId
}
}
It appears that your outer module declaration now has a count meta-argument, so you need to move the resource to its new address in your state. You can rename resources in your state with terraform state mv <former name> <current name>:
terraform state mv 'module.provision_resourcegroup.module.rg_create["edw-1"].azurerm_resource_group.rg' 'module.provision_resourcegroup[0].module.rg_create["edw-1"].azurerm_resource_group.rg'
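If you want to verify the move before committing it, terraform state mv also accepts a -dry-run flag that only prints what would be moved without changing the state:
terraform state mv -dry-run 'module.provision_resourcegroup.module.rg_create["edw-1"].azurerm_resource_group.rg' 'module.provision_resourcegroup[0].module.rg_create["edw-1"].azurerm_resource_group.rg'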
We are attempting to build a state file for recently deployed AWS Organisation resources using Terraform v1.0.9 and aws provider v3.64.2.
The aws_organizations_organization was correctly imported with terraform import aws_organizations_organization.my_organisation [id]. Terraform doesn't want to destroy the organisation after it is imported.
However, when the unit is imported with AWS_DEFAULT_REGION=eu-west-2 terraform import -config=tf/units/infrastructure -var 'organisation_root=[id]' aws_organizations_organizational_unit.my-ou-infrastructure ou-abc0-ab0cdefg, it appears to import successfully, but on terraform plan it wants to destroy the OU and recreate it.
# aws_organizations_organizational_unit.my-ou-infrastructure will be destroyed
- resource "aws_organizations_organizational_unit" "my-ou-infrastructure" {
- accounts = [] -> null
- arn = "arn:aws:organizations::000000000000:ou/o-xxxxx/ou-xxxx-xxxxxx" -> null
- id = "ou-xxxx-xxxxxx" -> null
- name = "name" -> null
- parent_id = "id" -> null
- tags = {} -> null
}
...
# module.my_organisation_units.module.my_organisation_unit_infrastructure.aws_organizations_organizational_unit.my-ou-infrastructure will be created
+ resource "aws_organizations_organizational_unit" "my-ou-infrastructure" {
+ accounts = (known after apply)
+ arn = (known after apply)
+ id = (known after apply)
+ name = "name"
+ parent_id = "id"
}
...
Plan: 31 to add, 0 to change, 1 to destroy.
Should this be happening? From the docs, importing an OU looks as simple as it gets.
We needed to import the resource at its full module address with terraform import module.my_organisation_units.module.my_organisation_unit_infrastructure.aws_organizations_organizational_unit.my-ou-infrastructure ou-abc0-ab0cdefg.
I have created an RDS cluster with 2 instances using Terraform. When I upgrade RDS from the front end (AWS console), it modifies the cluster. But when I do the same using Terraform, it destroys the instance.
We tried create_before_destroy, but it gives an error.
We tried ignore_changes = [engine], but that didn't make any difference.
Is there any way to prevent it?
resource "aws_rds_cluster" "rds_mysql" {
cluster_identifier = var.cluster_identifier
engine = var.engine
engine_version = var.engine_version
engine_mode = var.engine_mode
availability_zones = var.availability_zones
database_name = var.database_name
port = var.db_port
master_username = var.master_username
master_password = var.master_password
backup_retention_period = var.backup_retention_period
preferred_backup_window = var.engine_mode == "serverless" ? null : var.preferred_backup_window
db_subnet_group_name = var.create_db_subnet_group == "true" ? aws_db_subnet_group.rds_subnet_group[0].id : var.db_subnet_group_name
vpc_security_group_ids = var.vpc_security_group_ids
db_cluster_parameter_group_name = var.create_cluster_parameter_group == "true" ? aws_rds_cluster_parameter_group.rds_cluster_parameter_group[0].id : var.cluster_parameter_group
skip_final_snapshot = var.skip_final_snapshot
deletion_protection = var.deletion_protection
allow_major_version_upgrade = var.allow_major_version_upgrade
lifecycle {
create_before_destroy = false
ignore_changes = [availability_zones]
}
}
resource "aws_rds_cluster_instance" "cluster_instances" {
count = var.engine_mode == "serverless" ? 0 : var.cluster_instance_count
identifier = "${var.cluster_identifier}-${count.index}"
cluster_identifier = aws_rds_cluster.rds_mysql.id
instance_class = var.instance_class
engine = var.engine
engine_version = aws_rds_cluster.rds_mysql.engine_version
db_subnet_group_name = var.create_db_subnet_group == "true" ? aws_db_subnet_group.rds_subnet_group[0].id : var.db_subnet_group_name
db_parameter_group_name = var.create_db_parameter_group == "true" ? aws_db_parameter_group.rds_instance_parameter_group[0].id : var.db_parameter_group
apply_immediately = var.apply_immediately
auto_minor_version_upgrade = var.auto_minor_version_upgrade
lifecycle {
create_before_destroy = false
ignore_changes = [engine_version]
}
}
Error:
resource \"aws_rds_cluster_instance\" \"cluster_instances\" {\n\n\n\nError: error creating RDS Cluster (aurora-cluster-mysql) Instance: DBInstanceAlreadyExists: DB instance already exists\n\tstatus code: 400, request id: c6a063cc-4ffd-4710-aff2-eb0667b0774f\n\n on
Plan output:
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
~ update in-place
+/- create replacement and then destroy
Terraform will perform the following actions:
# module.rds_aurora_create[0].aws_rds_cluster.rds_mysql will be updated in-place
~ resource "aws_rds_cluster" "rds_mysql" {
~ allow_major_version_upgrade = false -> true
~ engine_version = "5.7.mysql_aurora.2.07.1" -> "5.7.mysql_aurora.2.08.1"
id = "aurora-cluster-mysql"
tags = {}
# (33 unchanged attributes hidden)
}
# module.rds_aurora_create[0].aws_rds_cluster_instance.cluster_instances[0] must be replaced
+/- resource "aws_rds_cluster_instance" "cluster_instances" {
~ arn = "arn:aws:rds:us-east-1:account:db:aurora-cluster-mysql-0" -> (known after apply)
~ availability_zone = "us-east-1a" -> (known after apply)
~ ca_cert_identifier = "rds-ca-" -> (known after apply)
~ dbi_resource_id = "db-32432432SDF" -> (known after apply)
~ endpoint = "aurora-cluster-mysql-0.jkjk.us-east-1.rds.amazonaws.com" -> (known after apply)
~ engine_version = "5.7.mysql_aurora.2.07.1" -> "5.7.mysql_aurora.2.08.1" # forces replacement
~ id = "aurora-cluster-mysql-0" -> (known after apply)
+ identifier_prefix = (known after apply)
+ kms_key_id = (known after apply)
+ monitoring_role_arn = (known after apply)
~ performance_insights_enabled = false -> (known after apply)
+ performance_insights_kms_key_id = (known after apply)
~ port = 3306 -> (known after apply)
~ preferred_backup_window = "07:00-09:00" -> (known after apply)
~ preferred_maintenance_window = "thu:06:12-thu:06:42" -> (known after apply)
~ storage_encrypted = false -> (known after apply)
- tags = {} -> null
~ tags_all = {} -> (known after apply)
~ writer = true -> (known after apply)
# (12 unchanged attributes hidden)
}
Plan: 1 to add, 1 to change, 1 to destroy.
I see that the apply_immediately argument is not present in the aws_rds_cluster resource; can you add that and try?
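A sketch of that addition in the cluster resource (apply_immediately is a supported argument on aws_rds_cluster):
resource "aws_rds_cluster" "rds_mysql" {
  # ... existing arguments ...
  apply_immediately = var.apply_immediately
}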
Terraform is seeing the engine version change on the instances and is detecting this as an action that forces replacement.
Remove (or ignore changes to) the engine_version input for the aws_rds_cluster_instance resources.
AWS RDS upgrades the engine version for cluster instances itself when you upgrade the engine version of the cluster (this is why you can do an in-place upgrade via the AWS console).
If you exclude the engine_version input, Terraform will see no changes to the aws_rds_cluster_instance resources and will do nothing.
AWS will handle the engine upgrades for the instances internally.
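A sketch of the first option, based on the instance resource from the question, with engine_version simply omitted so the provider takes it from the cluster:
resource "aws_rds_cluster_instance" "cluster_instances" {
  count              = var.engine_mode == "serverless" ? 0 : var.cluster_instance_count
  identifier         = "${var.cluster_identifier}-${count.index}"
  cluster_identifier = aws_rds_cluster.rds_mysql.id
  instance_class     = var.instance_class
  engine             = var.engine
  # engine_version omitted: the instance follows the cluster's engine version,
  # so an in-place cluster upgrade no longer forces instance replacement
}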
If you decide to ignore changes, use the ignore_changes argument within a lifecycle block:
resource "aws_rds_cluster_instance" "cluster_instance" {
engine_version = aws_rds_cluster.main.engine_version
...
lifecycle {
ignore_changes = [engine_version]
}
}
I didn't know that, but after some Googling I found this:
https://github.com/hashicorp/terraform-provider-aws/issues/10714
i.e. a bug report against the AWS Terraform provider:
resource/aws_rds_cluster_instance is being destroyed and re-created when updating engine_version while apply_immediately is set to false
which seems to be the very same issue you are facing.
One comment there seems to point to a solution:
As of v3.63.0 (EDITED) of the provider, updates to the engine_version parameter of aws_rds_cluster_instance resources no longer forces replacement of the resource.
The original comment seems to have a typo - 3.36 vs. 3.63.
Can you try upgrading your aws Terraform provider?
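If you are on an older version, a sketch of pinning a newer provider (followed by terraform init -upgrade); adjust the constraint to whatever minimum you need:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.63.0"
    }
  }
}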
I have terraform that is producing the following log on apply:
Terraform will perform the following actions:
# aws_lambda_permission.allow_bucket must be replaced
-/+ resource "aws_lambda_permission" "allow_bucket" {
action = "lambda:InvokeFunction"
function_name = "arn:aws:lambda:us-east-1:<secret>:function:ProggyS3ObjectCreatedHandler"
~ id = "AllowExecutionFromS3Bucket" -> (known after apply)
principal = "s3.amazonaws.com"
~ source_arn = "arn:aws:s3:::bucket-example-us-east-1" -> (known after apply) # forces replacement
statement_id = "AllowExecutionFromS3Bucket"
}
# aws_s3_bucket.bucket must be replaced
-/+ resource "aws_s3_bucket" "bucket" {
+ acceleration_status = (known after apply)
acl = "private"
~ arn = "arn:aws:s3:::bucket-example-us-east-1" -> (known after apply)
~ bucket = "bucket-example-us-east-1" -> "sftp-assembly-us-east-1" # forces replacement
~ bucket_domain_name = "bucket-example-us-east-1.s3.amazonaws.com" -> (known after apply)
~ bucket_regional_domain_name = "bucket-example-us-east-1.s3.amazonaws.com" -> (known after apply)
force_destroy = false
~ hosted_zone_id = "FOOBAR" -> (known after apply)
~ id = "bucket-example-us-east-1" -> (known after apply)
~ region = "us-east-1" -> (known after apply)
~ request_payer = "BucketOwner" -> (known after apply)
~ tags = {
~ "Name" = "bucket-example-us-east-1" -> "sftp-assembly-us-east-1"
}
+ website_domain = (known after apply)
+ website_endpoint = (known after apply)
versioning {
enabled = false
mfa_delete = false
}
}
# aws_s3_bucket_notification.bucket_notification must be replaced
-/+ resource "aws_s3_bucket_notification" "bucket_notification" {
~ bucket = "bucket-example-us-east-1" -> (known after apply) # forces replacement
~ id = "bucket-example-us-east-1" -> (known after apply)
~ lambda_function {
events = [
"s3:ObjectCreated:*",
]
~ id = "tf-s3-lambda-FOOBAR" -> (known after apply)
lambda_function_arn = "arn:aws:lambda:us-east-1:<secret>:function:ProggyS3ObjectCreatedHandler"
}
}
# module.large-file-breaker-lambda-primary.aws_lambda_function.lambda will be updated in-place
~ resource "aws_lambda_function" "lambda" {
arn = "arn:aws:lambda:us-east-1:<secret>:function:ProggyLargeFileBreaker"
function_name = "ProggyLargeFileBreaker"
handler = "break-large-files"
id = "ProggyLargeFileBreaker"
invoke_arn = "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:<secret>:function:ProggyLargeFileBreaker/invocations"
last_modified = "2020-03-13T20:17:33.376+0000"
layers = []
memory_size = 3008
publish = false
qualified_arn = "arn:aws:lambda:us-east-1:<secret>:function:ProggyLargeFileBreaker:$LATEST"
reserved_concurrent_executions = -1
role = "arn:aws:iam::<secret>:role/ProggyLargeFileBreaker-role20200310215329691700000001"
runtime = "go1.x"
s3_bucket = "repo-us-east-1"
s3_key = "Proggy-large-file-breaker/1.0.10/break-large-files-1.0.10.zip"
source_code_hash = "TbwleLcqD+xL2zOYk6ZdiBWAAznCIiTS/6nzrWqYZhE="
source_code_size = 7294687
~ tags = {
"Name" = "ProggyLargeFileBreaker"
~ "TerraformRepo" = "https://git.com/wwexdevelopment/aws-terraform-projects/commits/tag/v2.0.11" -> "https://git.com/wwexdevelopment/aws-terraform-projects/commits/tag/v2.0.16"
"Version" = "1.0.10"
"account" = "main"
"environment" = "assembly"
"source" = "terraform"
"type" = "ops"
}
timeout = 360
version = "$LATEST"
environment {
variables = {
"ARCHIVE_BUCKET" = "Proggy-archive-assembly-us-east-1"
"S3_OBJECT_SIZE_LIMIT" = "15000000"
}
}
tracing_config {
mode = "PassThrough"
}
}
# module.notifier-lambda-primary.aws_lambda_function.lambda will be updated in-place
~ resource "aws_lambda_function" "lambda" {
arn = "arn:aws:lambda:us-east-1:<secret>:function:ProggyS3ObjectCreatedHandler"
function_name = "ProggyS3ObjectCreatedHandler"
handler = "s3-Proggy-notifier"
id = "ProggyS3ObjectCreatedHandler"
invoke_arn = "arn:aws:apigateway:us-east-1:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-1:<secret>:function:ProggyS3ObjectCreatedHandler/invocations"
last_modified = "2020-03-11T20:52:33.256+0000"
layers = []
memory_size = 128
publish = false
qualified_arn = "arn:aws:lambda:us-east-1:<secret>:function:ProggyS3ObjectCreatedHandler:$LATEST"
reserved_concurrent_executions = -1
role = "arn:aws:iam::<secret>:role/ProggyS3ObjectCreatedHandler-role20200310215329780600000001"
runtime = "go1.x"
s3_bucket = "repo-us-east-1"
s3_key = "s3-Proggy-notifier/1.0.55/s3-Proggy-notifier-1.0.55.zip"
source_code_hash = "4N+B1GpaUY/wB4S7tR1eWRnNuHnBExcEzmO+mqhQ5B4="
source_code_size = 6787828
~ tags = {
"Name" = "ProggyS3ObjectCreatedHandler"
~ "TerraformRepo" = "https://git.com/wwexdevelopment/aws-terraform-projects/commits/tag/v2.0.11" -> "https://git.com/wwexdevelopment/aws-terraform-projects/commits/tag/v2.0.16"
"Version" = "1.0.55"
"account" = "main"
"environment" = "assembly"
"source" = "terraform"
"type" = "ops"
}
timeout = 360
version = "$LATEST"
environment {
variables = {
"FILE_BREAKER_LAMBDA_FUNCTION_NAME" = "ProggyLargeFileBreaker"
"Proggy_SQS_QUEUE_NAME" = "Proggy_Proggy_edi-notification.fifo"
"Proggy_SQS_QUEUE_URL" = "https://sqs.us-east-1.amazonaws.com/<secret>/Proggy_Proggy_edi-notification.fifo"
"S3_OBJECT_SIZE_LIMIT" = "15000000"
}
}
tracing_config {
mode = "PassThrough"
}
}
Plan: 3 to add, 2 to change, 3 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
module.large-file-breaker-lambda-primary.aws_lambda_function.lambda: Modifying... [id=ProggyLargeFileBreaker]
module.notifier-lambda-primary.aws_lambda_function.lambda: Modifying... [id=ProggyS3ObjectCreatedHandler]
aws_lambda_permission.allow_bucket: Destroying... [id=AllowExecutionFromS3Bucket]
aws_s3_bucket_notification.bucket_notification: Destroying... [id=bucket-example-us-east-1]
module.large-file-breaker-lambda-primary.aws_lambda_function.lambda: Modifications complete after 0s [id=ProggyLargeFileBreaker]
aws_s3_bucket_notification.bucket_notification: Destruction complete after 0s
aws_lambda_permission.allow_bucket: Destruction complete after 0s
aws_s3_bucket.bucket: Destroying... [id=bucket-example-us-east-1]
module.notifier-lambda-primary.aws_lambda_function.lambda: Modifications complete after 0s [id=ProggyS3ObjectCreatedHandler]
Error: error deleting S3 Bucket (bucket-example-us-east-1): BucketNotEmpty: The bucket you tried to delete is not empty. You must delete all versions in the bucket.
status code: 409, request id: 0916517C5F1DF875, host id: +l5yzHjw7EMmdT4xdCmgg0Zx5W7zxpEil/dUWeJmnL8IvfPw2uKgvJ2Ee7utlRI0rkohdY+pjYI=
It wants to delete that bucket because I formerly told it to create it (I think). But I don't want terraform to delete that bucket. The bucket it's trying to delete is in use by other teams.
How can I tell terraform to do the apply, but not delete that bucket?
Put prevent_destroy in the S3 bucket resource to make sure it will not be deleted by accident.
lifecycle {
prevent_destroy = true
}
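A sketch of where this goes, inside the existing bucket resource (note that with prevent_destroy set, any plan that would destroy the bucket fails with an error instead of proceeding):
resource "aws_s3_bucket" "bucket" {
  # ... existing bucket arguments ...

  lifecycle {
    prevent_destroy = true
  }
}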
Remove all resource definitions other than the S3 bucket from your .tf files. Run terraform plan to see whether Terraform will destroy anything other than the S3 bucket.
If the plan shows what you expect, run terraform apply.
Then recreate the resources you need besides the S3 bucket, but place those resources somewhere else so that the S3 bucket is left out.
Or, use terraform state rm to remove the resources other than the S3 bucket from the state file. Then run terraform import to import them into new .tf files in the other location.
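For example, to drop the other resources from this state (a sketch using the resource addresses from the plan above):
terraform state rm aws_lambda_permission.allow_bucket
terraform state rm aws_s3_bucket_notification.bucket_notification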
~ bucket = "bucket-example-us-east-1" -> "sftp-assembly-us-east-1" # forces replacement
You can see the annotation from Terraform at the end of this line: it tells you that changing the bucket name forces the resource to be replaced (destroyed and created).
If this is a separate bucket, you need to create a new Terraform resource for it; call it bucket1 or whatever:
resource "aws_s3_bucket" "bucket1" {
...
}
If the bucket is now empty, it may be easier to just delete the bucket and have Terraform create it for you; that way you don't have to mess with the import commands. If the bucket already exists and contains files, you will need to import the resource into the state file. This will help you with how to import:
https://www.terraform.io/docs/import/usage.html
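If you do need to import, a sketch (assuming the new resource is named bucket1 and should point at the renamed bucket from the plan):
terraform import aws_s3_bucket.bucket1 sftp-assembly-us-east-1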
The change in the tag value is forcing a new -/+ resource "aws_s3_bucket" "bucket"
~ tags = {
~ "Name" = "bucket-example-us-east-1" -> "sftp-assembly-us-east-1"
}
You can use the lifecycle ignore_changes meta-argument to ignore the tag value changes forcing a new resource.
resource "aws_s3_bucket" "bucket" {
# ...
lifecycle {
ignore_changes = [
tags["Name"],
]
}
}
After adding tags to the ignore_changes meta-argument, if you find changes to the values of any other parameters forcing a new bucket, you can include them in ignore_changes as well.
You can also remove this resource from the state file using the command terraform state rm aws_s3_bucket.bucket. This will allow you to create a new bucket without deleting the old one. Since S3 bucket names are globally unique, you need to give the new bucket a new name.
Another option is, after removing the resource from the state file, to import it using the command terraform import aws_s3_bucket.bucket <bucket name>.