Updating a bucket created earlier in a Terraform file results in a BucketAlreadyOwnedByYou error

I need to add a policy to a bucket I create earlier in my Terraform file.
However, this errors with:
Error creating S3 bucket: BucketAlreadyOwnedByYou: Your previous
request to create the named bucket succeeded and you already own it.
How can I amend my .tf file to create the bucket and then update it?
resource "aws_s3_bucket" "bucket" {
bucket = "my-new-bucket-123"
acl = "public-read"
region = "eu-west-1"
website {
index_document = "index.html"
}
}
data "aws_iam_policy_document" "s3_bucket_policy_document" {
statement {
actions = ["s3:GetObject"]
resources = ["${aws_s3_bucket.bucket.arn}/*"]
principals {
type = "AWS"
identifiers = ["*"]
}
}
}
resource "aws_s3_bucket" "s3_bucket_policy" {
bucket = "${aws_s3_bucket.bucket.bucket}"
policy = "${data.aws_iam_policy_document.s3_bucket_policy_document.json}"
}

You should use the aws_s3_bucket_policy resource to add a bucket policy to an existing S3 bucket:
resource "aws_s3_bucket" "b" {
bucket = "my_tf_test_bucket"
}
resource "aws_s3_bucket_policy" "b" {
bucket = "${aws_s3_bucket.b.id}"
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "MYBUCKETPOLICY",
"Statement": [
{
"Sid": "IPAllow",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::my_tf_test_bucket/*",
"Condition": {
"IpAddress": {"aws:SourceIp": "8.8.8.8/32"}
}
}
]
}
POLICY
}
But if you are doing this all at the same time, it's probably worth just inlining the policy into the original aws_s3_bucket resource like this:
locals {
  bucket_name = "my-new-bucket-123"
}

resource "aws_s3_bucket" "bucket" {
  bucket = "${local.bucket_name}"
  acl    = "public-read"
  policy = "${data.aws_iam_policy_document.s3_bucket_policy_document.json}"
  region = "eu-west-1"

  website {
    index_document = "index.html"
  }
}
data "aws_iam_policy_document" "s3_bucket_policy_document" {
statement {
actions = ["s3:GetObject"]
resources = ["arn:aws:s3:::${local.bucket_name}/*"]
principals {
type = "AWS"
identifiers = ["*"]
}
}
}
This builds the S3 ARN in the bucket policy by hand to avoid a potential cycle error from trying to reference the arn attribute of the aws_s3_bucket resource.
If you had created the bucket without the policy (by applying the Terraform without the policy resource), adding the policy argument to the aws_s3_bucket resource will cause Terraform to detect the drift, and the plan will show an update to the bucket that adds the policy.
It's also worth noting that the public-read canned ACL on the aws_s3_bucket resource overlaps with your policy and is unnecessary. Either the policy or the canned ACL is enough to make the bucket readable by everyone, but the public-read ACL additionally lets your bucket contents be listed anonymously, like old-school Apache directory listings, which is not what most people want.
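Concretely, a minimal sketch of the bucket with the ACL dropped and only the object-read policy attached (same names as above) could look like this:
resource "aws_s3_bucket" "bucket" {
  bucket = "${local.bucket_name}"
  # No acl argument: objects stay readable through the policy below,
  # but anonymous listing of the bucket contents is not allowed.
  policy = "${data.aws_iam_policy_document.s3_bucket_policy_document.json}"
  region = "eu-west-1"

  website {
    index_document = "index.html"
  }
}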

When setting up Terraform to use S3 as a backend for the first time, with a config similar to the one below:
# backend.tf
terraform {
  backend "s3" {
    bucket            = "<bucket_name>"
    region            = "eu-west-2"
    key               = "state"
    dynamodb_endpoint = "https://dynamodb.eu-west-2.amazonaws.com"
    dynamodb_table    = "<table_name>"
  }
}

resource "aws_s3_bucket" "<bucket_label>" {
  bucket = "<bucket_name>"

  lifecycle {
    prevent_destroy = true
  }
}
After creating the S3 bucket manually in the AWS console, run the following command to update the Terraform state and inform it that the bucket already exists:
terraform import aws_s3_bucket.<bucket_label> <bucket_name>
The S3 bucket will now be in your Terraform state and will henceforth be managed by Terraform.
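The dynamodb_table referenced in the backend block also has to exist before the backend can lock state; the S3 backend expects a table whose partition key is a string attribute named LockID. A minimal sketch of such a table (the resource label is illustrative and the table name is the same placeholder as above):
resource "aws_dynamodb_table" "terraform_locks" {
  name         = "<table_name>" # must match dynamodb_table in the backend block
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID" # the S3 backend locks state on this key

  attribute {
    name = "LockID"
    type = "S"
  }
}
As with the bucket, if the table was created by hand it can be brought under Terraform management with terraform import aws_dynamodb_table.terraform_locks <table_name>.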

Related

Terraform Data Source: aws_s3_object can't get object from S3 bucket in another account

Hi Stack Overflow community,
I have some Terraform code that needs access to an object in a bucket that is located in a different AWS account than the one I'm deploying the Terraform to.
The AWS S3 bucket is in us-west-2 and I'm deploying the Terraform in us-east-1 (I don't think this should matter).
I set up the following bucket level policy in the S3 bucket:
{
  "Version": "2012-10-17",
  "Id": "Policy1",
  "Statement": [
    {
      "Sid": "Stmt1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<aws-account-number-where-terraform-will-be-deployed>:user/<user-deploying-terraform>"
      },
      "Action": [
        "s3:GetObject*",
        "s3:List*"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket-name>/*",
        "arn:aws:s3:::<bucket-name>"
      ]
    }
  ]
}
When I run the following AWS CLI command I'm able to get the bucket object using the user that will be deploying the Terraform:
aws s3api get-object --bucket "<bucket-name>" --key "<path-to-file>" "test.txt"
But when I run the following Terraform code:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "= 4.6.0"
    }
  }
}

data "aws_s3_object" "this" {
  bucket = "<bucket-name>"
  key    = "<path-to-file>"
}

output "test" {
  value = data.aws_s3_object.this.body
}
I get the following error:
Error: failed getting S3 Bucket (<bucket-name>) Object (<path-to-file>): BadRequest: Bad Request
status code: 400, request id: <id>, host id: <host-id>
with data.aws_s3_object.challenge_file,
on main.tf line 10, in data "aws_s3_object" "this":
10: data "aws_s3_object" "this" {
A provider configuration uses a single set of credentials, region, and so on. You need a second provider configuration, with an alias, for the other region:
provider "aws" {
alias = "us-west-2"
region = "us-west-2"
}
data "aws_s3_object" "this" {
provider = aws.us-west-2
bucket = "<bucket-name>"
key = "<path-to-file>"
}
If your supplied credentials do not have permission to retrieve information about the bucket in the other account, then that provider configuration block will also need its own credentials.
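For example, a sketch of an aliased provider that carries its own credentials via a named profile (the profile name here is purely illustrative):
provider "aws" {
  alias   = "bucket_account"
  region  = "us-west-2"
  profile = "bucket-account" # hypothetical AWS CLI profile with access to the bucket
}

data "aws_s3_object" "this" {
  provider = aws.bucket_account
  bucket   = "<bucket-name>"
  key      = "<path-to-file>"
}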

How do I use the S3 bucket arn from a terraform module output with a count index?

I have a Terraform module that creates an S3 bucket based on whether a variable creates3bucket is true or false.
The resource block looks like this.
# CodePipeline S3 bucket artifact store
resource "aws_s3_bucket" "LambdaCodePipelineBucket" {
  count  = var.creates3bucket ? 1 : 0
  bucket = var.lambdacodepipelinebucketname
}
I output the bucket arn in the outputs.tf file like this.
output "codepipelines3bucketarn"{
description = "CodePipeline S3 Bucket arn"
value = aws_s3_bucket.LambdaCodePipelineBucket[*].arn
}
From the calling module I want to pass this arn value in the bucket policy. This works fine when the bucket is not an indexed resource. But Terraform plan complains when there is a count associated with the bucket.
From the calling module I pass the bucket policy like this:
cps3bucketpolicy = jsonencode({
  Version = "2012-10-17"
  Id      = "LambdaCodePipelineBucketPolicy"
  Statement = [
    {
      Sid    = "AllowPipelineRoles"
      Effect = "Allow"
      Principal = {
        AWS = ["${module.lambdapipeline.codepipelinerolearn}"]
      }
      Action = "s3:*"
      Resource = [
        "${module.lambdapipeline.codepipelines3bucketarn}",
        "${module.lambdapipeline.codepipelines3bucketarn}/*",
      ]
    },
    {
      Sid       : "AllowSSLRequestsOnly",
      Effect    : "Deny",
      Principal : "*",
      Action    : "*",
      Resource  : [
        "${module.lambdapipeline.codepipelines3bucketarn}",
        "${module.lambdapipeline.codepipelines3bucketarn}/*",
      ],
      Condition : {
        Bool : {
          "aws:SecureTransport" : "false"
        }
      }
    }
  ]
})
Terraform plan error: for some reason, once I added the count to the S3 bucket resource, Terraform does not like the "${module.lambdapipeline.codepipelines3bucketarn}/*" in the policy.
How do I pass the bucket arn in the policy from the calling module?
Like Marko E. wrote, you need to use the indexed resource. In your case, you should use this:
output "codepipelines3bucketarn"{
description = "CodePipeline S3 Bucket arn"
value = aws_s3_bucket.LambdaCodePipelineBucket[0].arn
}
But in your case the output would be empty if the variable var.creates3bucket is false.
So I conclude that either the bucket already exists or you will create it. If that is the case, use a data source for your policy:
data "aws_s3_bucket" "LambdaCodePipelineBucket" {
bucket = var.lambdacodepipelinebucketname
}
and change, in your policy,
"${module.lambdapipeline.codepipelines3bucketarn}"
to
"${data.aws_s3_bucket.LambdaCodePipelineBucket.arn}"
Now the only "error" will be if the bucket does not exist (in that case, just set your variable to true and the data source will find the bucket once it is created).

Create an S3 bucket in each AWS account created with Terraform

I am using Terraform to create multiple AWS accounts using aws_organizations_account. What I am now trying to do is create an aws_s3_bucket in each newly created account.
resource "aws_organizations_account" "this" {
for_each = local.all_user_ids
name = "Dev Sandbox ${each.value}"
email = "${var.manager}+sbx_${each.value}#example.com"
role_name = "Administrator"
parent_id = var.sandbox_organizational_unit_id
}
resource "aws_s3_bucket" "b" {
bucket = "my-tf-test-bucket"
acl = "private"
}
Everything works as expected for aws_organizations_account during my terraform apply, but the S3 bucket is created inside my current AWS account, while I am trying to create an S3 bucket in every new AWS account.
Step 1: Create_terraform_s3_buckets.tf
# First configure the AWS provider
provider "aws" {
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
  region     = var.aws_region
}

// Then use the resource block to create all the buckets in the variable array.
// Set up your buckets in the variable, e.g. My_Accounts_s3_buckets.
variable "My_Accounts_s3_buckets" {
  type    = list
  default = ["Testbucket1.app", "Testbucket2.app", "Testbucket3.app"]
}
Look up the aws_s3_bucket object for more help in the Terraform docs: aws_s3_bucket
// resource "aws_s3_bucket" "rugged_buckets" "log_bucket" { <- different types of options on your buckets
resource "aws_s3_bucket" "b" {
count = length(var.My_Accounts_s3_buckets) // here are you 3 accounts
bucket = var.My_Accounts_s3_buckets[count.index]
acl = "private"
region = "us-east-1"
force_destroy = true
tags = {
Name = "My Test bucket"
Environment = "Dev"
}
}
Step 2: You can now automate this with the variables file.
# Make sure you keep this order
variable "My_Accounts_s3_buckets" {
  type    = list
  default = [
    "mybucket1.app",
    "mybucket2.app" // you can add more as needed
  ]
}
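Note that both snippets above still create the buckets in whatever account the single provider block points at. Reaching into each newly created member account would need an aliased provider configuration per account, typically assuming the Administrator role that aws_organizations_account creates. A heavily simplified sketch for one member account, with a made-up account ID:
provider "aws" {
  alias  = "sandbox_a"
  region = "us-east-1"

  assume_role {
    # 111111111111 is a placeholder for one of the new member account IDs;
    # "Administrator" matches role_name in the aws_organizations_account resource
    role_arn = "arn:aws:iam::111111111111:role/Administrator"
  }
}

resource "aws_s3_bucket" "sandbox_a" {
  provider = aws.sandbox_a
  bucket   = "my-tf-test-bucket-sandbox-a" # bucket names must be globally unique
}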

Setting up an S3 bucket with replication using Terraform

I'm trying to configure an S3 bucket with replication using Terraform. I'm getting the following error.
Error: insufficient items for attribute "destination"; must have at least 1
on main.tf line 114, in resource "aws_s3_bucket" "ps-db-backups":
114: lifecycle_rule {
I don't understand this error message. First, in the replication section I do have destination defined. Second, the error message mentions lifecycle_rule, which does not have a destination attribute. The bucket definition is below.
resource "aws_s3_bucket" "ps-db-backups" {
bucket = "ps-db-backups-b3bd1643-8cbf-4927-a64a-f0cf9b58dfab"
acl = "private"
region = "eu-west-1"
versioning {
enabled = true
}
lifecycle_rule {
id = "transition"
enabled = true
transition {
days = 30
storage_class = "STANDARD_IA"
}
expiration {
days = 180
}
}
replication_configuration {
role = "${aws_iam_role.ps-db-backups-replication.arn}"
rules {
id = "ps-db-backups-replication"
status = "Enabled"
destination {
bucket = "${aws_s3_bucket.ps-db-backups-replica.arn}"
storage_class = "STANDARD_IA"
}
}
}
server_side_encryption_configuration {
rule {
apply_server_side_encryption_by_default {
sse_algorithm = "AES256"
}
}
}
}
Go through the Terraform docs carefully.
You need to create a separate Terraform resource for the destination, like this one:
resource "aws_s3_bucket" "destination" {
bucket = "tf-test-bucket-destination-12345"
region = "eu-west-1"
versioning {
enabled = true
}
}
And then refer to it in your replication_configuration as:
destination {
  bucket        = "${aws_s3_bucket.destination.arn}"
  storage_class = "STANDARD"
}
I hope this helps. Try and let me know.
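The question's configuration also references aws_iam_role.ps-db-backups-replication without showing it; a minimal sketch of that role's trust policy (the replication permissions policy itself still has to be created and attached separately) could be:
resource "aws_iam_role" "ps-db-backups-replication" {
  name = "ps-db-backups-replication"

  # S3 must be able to assume this role to replicate objects on your behalf
  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "s3.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}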
This appears to be a bug in Terraform 0.12.
See this issue https://github.com/terraform-providers/terraform-provider-aws/issues/9048
As a side note, if you also need to enable monitoring for S3 replication you won't be able to: Terraform does not have this implemented.
But there's a PR open for this; please vote with a thumbs up: https://github.com/terraform-providers/terraform-provider-aws/pull/11337

Terraform s3 backend vs terraform_remote_state

According to the documentation, to use s3 and not a local terraform.tfstate file for state storage, one should configure a backend more or less as follows:
terraform {
  backend "s3" {
    bucket = "my-bucket-name"
    key    = "my-key-name"
    region = "my-region"
  }
}
I:
- was using a local (terraform.tfstate) file
- added the above snippet to my provider.tf file
- ran terraform init (again)
- was asked by Terraform to migrate my state to the above bucket
...so far so good...
But then comes this confusing part about terraform_remote_state ...
Why do I need this?
Isn't my state now saved remotely (on the aforementioned S3 bucket) already?
terraform_remote_state isn't for storage of your state; it's for retrieving outputs from another Terraform state, if that state has outputs. It is a data source. For example, if you output your Elastic IP address in one state:
resource "aws_eip" "default" {
vpc = true
}
output "eip_id" {
value = "${aws_eip.default.id}"
}
Then, if you wanted to retrieve that in another state:
data "terraform_remote_state" "remote" {
backend = "s3"
config {
bucket = "my-bucket-name"
key = "my-key-name"
region = "my-region"
}
}
resource "aws_instance" "foo" {
...
}
resource "aws_eip_association" "eip_assoc" {
instance_id = "${aws_instance.foo.id}"
allocation_id = "${data.terraform_remote_state.remote.eip_id}"
}
Edit: if you are retrieving outputs in Terraform 0.12 or later, you need to go through the outputs attribute:
data "terraform_remote_state" "remote" {
backend = "s3"
config {
bucket = "my-bucket-name"
key = "my-key-name"
region = "my-region"
}
}
resource "aws_instance" "foo" {
...
}
resource "aws_eip_association" "eip_assoc" {
instance_id = "${aws_instance.foo.id}"
allocation_id = "${data.terraform_remote_state.remote.outputs.eip_id}"
}
Remote state gives you a central location to store your infrastructure state and lets you collaborate with other team members.
Apart from that, by enabling S3 versioning on the bucket you also get versioning of the state file, so you can track changes.
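If the state bucket itself is managed from a separate bootstrap configuration, versioning is a one-line addition there. A sketch using the same inline versioning block style as the rest of this thread (newer AWS provider versions use a separate aws_s3_bucket_versioning resource instead):
resource "aws_s3_bucket" "state" {
  bucket = "my-bucket-name" # same bucket as in the backend block above

  # keep old versions of the state file so a bad apply can be rolled back
  versioning {
    enabled = true
  }
}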
