I have the following code:
resource "aws_s3_bucket" "create_5_buckets" {
  count         = length(var.name)
  bucket        = var.name[count.index]
  acl           = "private"
  region        = "us-east-2"
  force_destroy = true
  versioning {
    enabled    = true
    mfa_delete = false
  }
}
I am using Terraform version 0.12. It keeps retrying and gives me the following error:
Error creating S3 bucket: Error creating S3 bucket name-a, retrying: OperationAborted: A conflicting conditional operation is currently in progress against this resource. Please try again.
There is nothing wrong with the code. I ran the same configuration successfully:
provider "aws" {
  region                  = "us-east-2"
  shared_credentials_file = "/root/.aws/credentials"
  profile                 = "default"
}
variable "name" {
  default = ["demo-123.com", "demo-124.com", "demo-125.com"]
}
resource "aws_s3_bucket" "create_5_buckets" {
  count         = length(var.name)
  bucket        = var.name[count.index]
  acl           = "private"
  region        = "us-east-2"
  force_destroy = true
  versioning {
    enabled    = true
    mfa_delete = false
  }
}
The code seems perfectly fine to me and runs well; this error does not come from Terraform.
It is an AWS-side error: after an S3 bucket is deleted, there can be a synchronization delay before the same name becomes available again, so you need to retry after some time.
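If retrying is not enough, one workaround is to give each bucket a name that is not being recycled. This is only a sketch (the random_id resource from the hashicorp/random provider and the suffix length are my assumptions), and it only makes sense if the exact domain-style names are not required:

```hcl
# Assumes the hashicorp/random provider is available.
resource "random_id" "bucket_suffix" {
  byte_length = 4
}

resource "aws_s3_bucket" "create_5_buckets" {
  count = length(var.name)
  # Appending a fresh suffix avoids colliding with a name AWS is
  # still releasing after a recent delete.
  bucket        = "${var.name[count.index]}-${random_id.bucket_suffix.hex}"
  acl           = "private"
  force_destroy = true
}
```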
This could be a duplicate of AWS Error Message: A conflicting conditional operation is currently in progress against this resource
I am using Terraform to deploy the backend code to AWS. While configuring the Terraform environment, I ran terraform init and it worked fine. However, the next command, terraform plan, hangs for a long time: the CLI prints nothing no matter how long I wait. I would appreciate any help from you developers.
Here is my main.tf code.
provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1"
  default_tags {
    tags = {
      Owner   = "Example Owner"
      Project = "Example"
    }
  }
}
module "template_files" {
  source   = "hashicorp/dir/template"
  base_dir = "react-app/build"
  template_vars = {
    vpc_id = "vpc-abc123123123"
  }
}
resource "aws_s3_bucket" "test_tf_bucket" {
  bucket = local.test_tf_creds.bucket
  website {
    index_document = "index.html"
  }
  tags = {
    Bucket = "Example Terraform Bucket"
  }
}
resource "aws_s3_bucket_object" "build_test_tf" {
  for_each     = module.template_files.files
  bucket       = local.test_tf_creds.bucket
  key          = each.key
  content_type = each.value.content_type
  source       = each.value.source_path
  content      = each.value.content
  etag         = each.value.digests.md5
  tags = {
    Bucket-Object = "Example Bucket Object"
  }
}
I would appreciate your help in solving this problem.
Friends,
I am learning Terraform and got stuck on this issue: I can create all the resources listed below without problems, but when I run terraform destroy to delete them, Terraform returns a permission-denied error while trying to destroy the S3 bucket (where the .tfstate was being saved):
Error: deleting S3 Bucket (tfur-state-bucket): AccessDenied: Access Denied
It's also worth mentioning that the IAM user Terraform uses already has administrator permissions. I am trying to delete the resources like this:
I (1) delete the backend block from main.tf, (2) run terraform init -migrate-state, and (3) finally run terraform destroy.
The version of Terraform I am running:
Terraform v1.3.2
darwin_arm64
Any idea what I might be missing? Here is my code:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}
provider "aws" {
  region                   = "us-east-1"
  shared_credentials_files = ["~/.aws/credentials"]
  profile                  = "vscode"
}
resource "aws_s3_bucket" "tf_state" {
  bucket = "tfur-state-bucket"
  #lifecycle {
  #  prevent_destroy = true
  #}
}
resource "aws_s3_bucket_versioning" "bucket_versioning" {
  bucket = aws_s3_bucket.tf_state.id
  versioning_configuration {
    status = "Enabled"
  }
}
resource "aws_s3_bucket_server_side_encryption_configuration" "encryption_for_tf_state" {
  bucket = aws_s3_bucket.tf_state.bucket
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}
resource "aws_s3_bucket_public_access_block" "public_access" {
  bucket                  = aws_s3_bucket.tf_state.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"
  attribute {
    name = "LockID"
    type = "S"
  }
}
terraform {
  backend "s3" {
    bucket         = "tfur-state-bucket"
    key            = "global/s3/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}
When trying to add a dynamic block for the AWS S3 versioning configuration, I'm getting the error that the 'versioning_configuration' argument is required. Please find the code below and suggest the best answer.
resource "aws_s3_bucket_acl" "firehose_to_s3" {
  bucket = aws_s3_bucket.firehose_to_s3.id
  acl    = "private"
}
resource "aws_s3_bucket_versioning" "firehose_to_s3" {
  bucket = aws_s3_bucket.firehose_to_s3.id
  dynamic "versioning_configuration" {
    # value of var.replicate is false or true
    for_each = var.replicate ? ["yes"] : []
    content {
      status = "Enabled"
    }
  }
}
Below is the error I'm getting:
The argument "versioning_configuration" is required, but no definition was found.
If your goal is only to enable or disable versioning based on a boolean variable, you can do it with count:
resource "aws_s3_bucket_versioning" "firehose_to_s3" {
  count  = var.replicate ? 1 : 0
  bucket = aws_s3_bucket.firehose_to_s3.id
  versioning_configuration {
    status = "Enabled"
  }
}
This works because versioning is disabled by default, so you can simply not create the resource when you don't want it.
Or you can add a condition on the status value, like:
resource "aws_s3_bucket_versioning" "firehose_to_s3" {
  bucket = aws_s3_bucket.firehose_to_s3.id
  versioning_configuration {
    status = var.replicate ? "Enabled" : "Suspended"
  }
}
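Both variants assume a boolean input variable along these lines (the name replicate matches the question; the default value is my assumption):

```hcl
variable "replicate" {
  type    = bool
  default = false
}
```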
I am using Terraform to create multiple AWS accounts using aws_organizations_account. What I am now trying to do is create an aws_s3_bucket in each newly created account.
resource "aws_organizations_account" "this" {
  for_each  = local.all_user_ids
  name      = "Dev Sandbox ${each.value}"
  email     = "${var.manager}+sbx_${each.value}#example.com"
  role_name = "Administrator"
  parent_id = var.sandbox_organizational_unit_id
}
resource "aws_s3_bucket" "b" {
  bucket = "my-tf-test-bucket"
  acl    = "private"
}
Everything works as expected for aws_organizations_account during terraform apply, but the S3 bucket is created in my current AWS account, while I am trying to create an S3 bucket in every new AWS account.
Step 1: Create_terraform_s3_buckets.tf
# First configure the AWS provider
provider "aws" {
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
  region     = var.aws_region
}
// Then use the resource block to create all the buckets in the variable array.
// Set up your account buckets in the variable, e.g. My_Accounts_s3_buckets:
variable "My_Accounts_s3_buckets" {
  type    = list
  default = ["Testbucket1.app", "Testbucket2.app", "Testbucket3.app"]
}
Look up the s3_bucket object for more help in the Terraform reference for aws_s3_bucket.
// resource "aws_s3_bucket" "rugged_buckets" "log_bucket" { <- different types of options on your buckets
resource "aws_s3_bucket" "b" {
  count         = length(var.My_Accounts_s3_buckets) // here are your 3 buckets
  bucket        = var.My_Accounts_s3_buckets[count.index]
  acl           = "private"
  region        = "us-east-1"
  force_destroy = true
  tags = {
    Name        = "My Test bucket"
    Environment = "Dev"
  }
}
Step 2: You can now automate this with the variables file.
# Make sure you keep this order
variable "My_Accounts_s3_buckets" {
  type    = list
  default = [
    "mybucket1.app",
    "mybucket2.app" // you can add more as needed
  ]
}
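Note that the count-based approach above still creates every bucket in the calling account. To place a bucket inside a newly created member account, the AWS provider generally has to assume a role in that account. The following is only a sketch: the alias, region, ACCOUNT_ID placeholder, and bucket name are assumptions, while "Administrator" matches the role_name set on aws_organizations_account in the question:

```hcl
provider "aws" {
  alias  = "sandbox"
  region = "us-east-1"
  assume_role {
    # Replace ACCOUNT_ID with the new account's id, e.g. taken from
    # the aws_organizations_account resource's id attribute.
    role_arn = "arn:aws:iam::ACCOUNT_ID:role/Administrator"
  }
}

resource "aws_s3_bucket" "member" {
  # Created in the member account via the aliased provider.
  provider = aws.sandbox
  bucket   = "my-tf-test-bucket"
}
```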
I'm trying to configure an S3 bucket with replication using Terraform. I'm getting the following error.
Error: insufficient items for attribute "destination"; must have at least 1
on main.tf line 114, in resource "aws_s3_bucket" "ps-db-backups":
114: lifecycle_rule {
I don't understand this error message. First, in the replication section I do have destination defined. Second, the error message mentions lifecycle_rule, which does not have a destination attribute. The bucket definition is below.
resource "aws_s3_bucket" "ps-db-backups" {
  bucket = "ps-db-backups-b3bd1643-8cbf-4927-a64a-f0cf9b58dfab"
  acl    = "private"
  region = "eu-west-1"
  versioning {
    enabled = true
  }
  lifecycle_rule {
    id      = "transition"
    enabled = true
    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }
    expiration {
      days = 180
    }
  }
  replication_configuration {
    role = "${aws_iam_role.ps-db-backups-replication.arn}"
    rules {
      id     = "ps-db-backups-replication"
      status = "Enabled"
      destination {
        bucket        = "${aws_s3_bucket.ps-db-backups-replica.arn}"
        storage_class = "STANDARD_IA"
      }
    }
  }
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}
Go through the Terraform docs carefully.
You need to create a separate Terraform resource for the destination bucket, like this one:
resource "aws_s3_bucket" "destination" {
  bucket = "tf-test-bucket-destination-12345"
  region = "eu-west-1"
  versioning {
    enabled = true
  }
}
And then refer to it in your replication_configuration as:
destination {
  bucket        = "${aws_s3_bucket.destination.arn}"
  storage_class = "STANDARD"
}
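For completeness, the aws_iam_role.ps-db-backups-replication referenced in the question must be assumable by the S3 service. A minimal sketch could look like the following (the role name is an assumption, and the permissions policy that actually grants replication access to the source and destination buckets is omitted here):

```hcl
resource "aws_iam_role" "ps-db-backups-replication" {
  name = "ps-db-backups-replication"
  # Trust policy letting the S3 service assume this role for replication.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "s3.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}
```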
I hope this helps. Try and let me know.
This appears to be a bug in Terraform 0.12.
See this issue https://github.com/terraform-providers/terraform-provider-aws/issues/9048
As a side note, if you also need to enable monitoring for S3 replication, you won't be able to: Terraform does not have this implemented yet.
But there is a PR open for this; please vote with a thumbs up: https://github.com/terraform-providers/terraform-provider-aws/pull/11337