I'm trying to configure an S3 bucket with replication using Terraform, and I'm getting the following error:
Error: insufficient items for attribute "destination"; must have at least 1
on main.tf line 114, in resource "aws_s3_bucket" "ps-db-backups":
114: lifecycle_rule {
I don't understand this error message. First, in the replication section I do have destination defined. Second, the error message mentions lifecycle_rule, which does not have a destination attribute. The bucket definition is below.
resource "aws_s3_bucket" "ps-db-backups" {
  bucket = "ps-db-backups-b3bd1643-8cbf-4927-a64a-f0cf9b58dfab"
  acl    = "private"
  region = "eu-west-1"

  versioning {
    enabled = true
  }

  lifecycle_rule {
    id      = "transition"
    enabled = true

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    expiration {
      days = 180
    }
  }

  replication_configuration {
    role = "${aws_iam_role.ps-db-backups-replication.arn}"

    rules {
      id     = "ps-db-backups-replication"
      status = "Enabled"

      destination {
        bucket        = "${aws_s3_bucket.ps-db-backups-replica.arn}"
        storage_class = "STANDARD_IA"
      }
    }
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}
Go through the Terraform docs carefully. You need to create a separate Terraform resource for the destination bucket, like this one:
resource "aws_s3_bucket" "destination" {
  bucket = "tf-test-bucket-destination-12345"
  region = "eu-west-1"

  versioning {
    enabled = true
  }
}
And then refer to it in your replication_configuration as:
destination {
  bucket        = "${aws_s3_bucket.destination.arn}"
  storage_class = "STANDARD"
}
I hope this helps. Try and let me know.
This appears to be a bug in Terraform 0.12.
See this issue https://github.com/terraform-providers/terraform-provider-aws/issues/9048
As a side note, if you also need to enable monitoring for S3 replication, you won't be able to: Terraform does not have this implemented yet.
But there's a PR open for it; please vote with a thumbs up: https://github.com/terraform-providers/terraform-provider-aws/pull/11337
I am using Terraform to deploy the backend code to AWS. While configuring the Terraform environment, I ran terraform init, which works fine. However, the next command, terraform plan, hangs for a long time: it prints nothing, and no matter how long I wait, I see no message from the CLI.
Here is my main.tf code.
provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1"

  default_tags {
    tags = {
      Owner   = "Example Owner"
      Project = "Example"
    }
  }
}

module "template_files" {
  source   = "hashicorp/dir/template"
  base_dir = "react-app/build"

  template_vars = {
    vpc_id = "vpc-abc123123123"
  }
}

resource "aws_s3_bucket" "test_tf_bucket" {
  bucket = local.test_tf_creds.bucket

  website {
    index_document = "index.html"
  }

  tags = {
    Bucket = "Example Terraform Bucket"
  }
}

resource "aws_s3_bucket_object" "build_test_tf" {
  for_each = module.template_files.files

  bucket       = local.test_tf_creds.bucket
  key          = each.key
  content_type = each.value.content_type
  source       = each.value.source_path
  content      = each.value.content
  etag         = each.value.digests.md5

  tags = {
    Bucket-Object = "Example Bucket Object"
  }
}
I would appreciate any help in solving this problem.
Friends,
I am learning Terraform and got stuck on this issue: I can create all the resources listed below without problems, but when I run terraform destroy to delete them, Terraform returns a permission-denied error while trying to destroy the S3 bucket (where the .tfstate was being saved):
Error: deleting S3 Bucket (tfur-state-bucket): AccessDenied: Access Denied
It's also worth mentioning that the IAM user Terraform uses already has administrator permissions. I am trying to delete the resources like this:
I (1) delete the backend block from main.tf, (2) run terraform init -migrate-state, and (3) finally run terraform destroy.
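For reference, steps (2) and (3) above amount to this sequence (run after the backend block has been removed from main.tf):

```sh
# Migrate the state from the S3 backend back to local storage
terraform init -migrate-state

# Then destroy all resources, including the state bucket itself
terraform destroy
```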
The version of Terraform I am running:
Terraform v1.3.2
darwin_arm64
Any idea what I might be missing? Here is my code:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region                   = "us-east-1"
  shared_credentials_files = ["~/.aws/credentials"]
  profile                  = "vscode"
}

resource "aws_s3_bucket" "tf_state" {
  bucket = "tfur-state-bucket"

  #lifecycle {
  #  prevent_destroy = true
  #}
}

resource "aws_s3_bucket_versioning" "bucket_versioning" {
  bucket = aws_s3_bucket.tf_state.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "encryption_for_tf_state" {
  bucket = aws_s3_bucket.tf_state.bucket

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

resource "aws_s3_bucket_public_access_block" "public_access" {
  bucket                  = aws_s3_bucket.tf_state.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

terraform {
  backend "s3" {
    bucket         = "tfur-state-bucket"
    key            = "global/s3/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}
When trying to add a dynamic block for the AWS S3 versioning configuration, I'm getting the error that the 'versioning_configuration' argument is required. Please find the code below and suggest the best answer.
resource "aws_s3_bucket_acl" "firehose_to_s3" {
  bucket = aws_s3_bucket.firehose_to_s3.id
  acl    = "private"
}

resource "aws_s3_bucket_versioning" "firehose_to_s3" {
  bucket = aws_s3_bucket.firehose_to_s3.id

  dynamic "versioning_configuration" {
    # value of replicate is false or true
    for_each = var.replicate ? ["yes"] : []

    content {
      status = "Enabled"
    }
  }
}
Below is the error I'm getting:
The argument "versioning_configuration" is required, but no definition was
found.
If your goal is only to enable or disable versioning based on a boolean variable, you can do it with count:
resource "aws_s3_bucket_versioning" "firehose_to_s3" {
  count  = var.replicate ? 1 : 0
  bucket = aws_s3_bucket.firehose_to_s3.id

  versioning_configuration {
    status = "Enabled"
  }
}
This is because versioning is disabled by default, so you can simply skip creating the resource when you don't want it.
Or you can add a condition on the status value, like:
resource "aws_s3_bucket_versioning" "firehose_to_s3" {
  bucket = aws_s3_bucket.firehose_to_s3.id

  versioning_configuration {
    status = var.replicate ? "Enabled" : "Suspended"
  }
}
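Both options assume a boolean replicate variable declared roughly like this (the variable name comes from the question; the default value is an assumption):

```hcl
variable "replicate" {
  type    = bool
  default = false # assumed default; set to true to enable versioning
}
```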
I am particularly looking for a way to get the settings shown in the image below: I want to restrict the S3 bucket and choose to create a new origin access identity, as shown below.
It should also update the S3 bucket policy (the settings might look different in the image, though).
In a nutshell, I could not find, or maybe did not understand, the official Terraform documentation for achieving this.
You can use the configuration below as a reference:
resource "aws_cloudfront_distribution" "www" {
  origin {
    domain_name = "${var.bucket_name}.s3.amazonaws.com"
    origin_id   = "wwwS3Origin"

    s3_origin_config {
      origin_access_identity = "${aws_cloudfront_origin_access_identity.origin_access_identity.cloudfront_access_identity_path}"
    }
  }

  enabled             = true
  is_ipv6_enabled     = true
  comment             = "Some comment"
  default_root_object = "index.html"
  ......
}

resource "aws_cloudfront_origin_access_identity" "origin_access_identity" {
  comment = "S3 bucket OAI"
}
Then update the bucket policy:
data "aws_iam_policy_document" "s3_policy" {
  statement {
    actions   = ["s3:GetObject"]
    resources = ["${aws_s3_bucket.example.arn}/*"]

    principals {
      type        = "AWS"
      identifiers = ["${aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn}"]
    }
  }

  statement {
    actions   = ["s3:ListBucket"]
    resources = ["${aws_s3_bucket.example.arn}"]

    principals {
      type        = "AWS"
      identifiers = ["${aws_cloudfront_origin_access_identity.origin_access_identity.iam_arn}"]
    }
  }
}

resource "aws_s3_bucket_policy" "example" {
  bucket = "${aws_s3_bucket.example.id}"
  policy = "${data.aws_iam_policy_document.s3_policy.json}"
}
Refer to the link below for more details:
https://www.terraform.io/docs/providers/aws/r/cloudfront_origin_access_identity.html#updating-your-bucket-policy
I have the following code:
resource "aws_s3_bucket" "create_5_buckets" {
  count         = "${length(var.name)}"
  bucket        = "${var.name[count.index]}"
  acl           = "private"
  region        = "us-east-2"
  force_destroy = "true"

  versioning {
    enabled    = "true"
    mfa_delete = "false"
  }
}
I am using Terraform version 0.12. It keeps on running and gives me the following error:
Error creating S3 bucket: Error creating S3 bucket name-a, retrying: OperationAborted: A conflicting conditional operation is currently in progress against this resource. Please try again.
There is nothing wrong with the code.
provider "aws" {
  region                  = "us-east-2"
  shared_credentials_file = "/root/.aws/credentials"
  profile                 = "default"
}

variable "name" {
  default = ["demo-123.com", "demo-124.com", "demo-125.com"]
}

resource "aws_s3_bucket" "create_5_buckets" {
  count         = "${length(var.name)}"
  bucket        = "${var.name[count.index]}"
  acl           = "private"
  region        = "us-east-2"
  force_destroy = "true"

  versioning {
    enabled    = "true"
    mfa_delete = "false"
  }
}
The code seems perfectly fine to me and runs well; this error does not come from Terraform.
It is an AWS-side error: after deleting an S3 bucket there can be a synchronization delay before the name is released, so you need to try again after some time.
It could be a duplicate of AWS Error Message: A conflicting conditional operation is currently in progress against this resource