terraform - delete S3 bucket (tfur-state-bucket): access denied

Friends,
I am learning Terraform and got stuck on this issue: I can create all the resources listed below without problems, but when I run terraform destroy, Terraform returns a permission-denied error while trying to destroy the S3 bucket (where the .tfstate was being saved):
Error: deleting S3 Bucket (tfur-state-bucket): AccessDenied: Access Denied
It's also worth mentioning that the Terraform IAM user already has administrator permissions. I am trying to delete the resources like this:
I (1) delete the backend block from main.tf, (2) run terraform init -migrate-state, and (3) finally run terraform destroy.
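Spelled out as commands, that sequence looks like this (a sketch of the steps described, after manually removing the backend "s3" block from main.tf):

```shell
terraform init -migrate-state   # step (2): copy state from the S3 backend back to local
terraform destroy               # step (3): destroy everything, including the state bucket
```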
The version of Terraform I am running:
Terraform v1.3.2
darwin_arm64
Any idea what I may be missing? Here is my code:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region                   = "us-east-1"
  shared_credentials_files = ["~/.aws/credentials"]
  profile                  = "vscode"
}

resource "aws_s3_bucket" "tf_state" {
  bucket = "tfur-state-bucket"

  #lifecycle {
  #  prevent_destroy = true
  #}
}
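One detail worth checking, purely as an assumption about the cause (the error itself only says AccessDenied): a versioned bucket cannot be deleted while it still contains object versions, and this bucket holds the .tfstate history. Setting force_destroy lets Terraform empty the bucket first. A minimal sketch:

```hcl
resource "aws_s3_bucket" "tf_state" {
  bucket        = "tfur-state-bucket"
  force_destroy = true # assumption: let Terraform delete all object versions before destroying the bucket
}
```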
resource "aws_s3_bucket_versioning" "bucket_versioning" {
  bucket = aws_s3_bucket.tf_state.id

  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "encryption_for_tf_state" {
  bucket = aws_s3_bucket.tf_state.bucket

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

resource "aws_s3_bucket_public_access_block" "public_access" {
  bucket                  = aws_s3_bucket.tf_state.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

resource "aws_dynamodb_table" "terraform_locks" {
  name         = "terraform-locks"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}

terraform {
  backend "s3" {
    bucket         = "tfur-state-bucket"
    key            = "global/s3/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}

Related

Terraform plan not working for a long time with AWS S3

I am using Terraform to deploy the backend code to AWS. While configuring the Terraform environment, I ran terraform init, which works fine. However, the next command, terraform plan, hangs for a long time: it doesn't output anything, and after waiting a long while I still can't see any message in the CLI. I would love some help from you developers.
Here is my main.tf code.
provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1"

  default_tags {
    tags = {
      Owner   = "Example Owner"
      Project = "Example"
    }
  }
}

module "template_files" {
  source   = "hashicorp/dir/template"
  base_dir = "react-app/build"

  template_vars = {
    vpc_id = "vpc-abc123123123"
  }
}

resource "aws_s3_bucket" "test_tf_bucket" {
  bucket = local.test_tf_creds.bucket

  website {
    index_document = "index.html"
  }

  tags = {
    Bucket = "Example Terraform Bucket"
  }
}

resource "aws_s3_bucket_object" "build_test_tf" {
  for_each = module.template_files.files

  bucket       = local.test_tf_creds.bucket
  key          = each.key
  content_type = each.value.content_type
  source       = each.value.source_path
  content      = each.value.content
  etag         = each.value.digests.md5

  tags = {
    Bucket-Object = "Example Bucket Object"
  }
}
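For what it's worth, local.test_tf_creds is referenced in the snippet but never defined, and a plan cannot complete without it. A minimal definition might look like this (the bucket name is a placeholder, not from the question):

```hcl
locals {
  test_tf_creds = {
    bucket = "example-react-app-bucket" # placeholder value
  }
}
```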
I would really appreciate any help from you developers in solving this problem.

Terragrunt with HashiCorp Vault does not initialise & apply the Vault dependency from the root folder; need to exclusively initialise Vault from its subfolder

I am new to Terraform and Terragrunt. I am using Vault to store an Azure Service Principal and provisioning infra using Terragrunt. I am unable to initialise and apply the Vault dependency from the root folder, and need to exclusively initialise Vault from its subfolder.
Modules:
Resource Group, VM, VNET and Vault.
RG depends on Vault, VNet depends on RG, and VM depends on RG and VNet.
My repo looks like this:
.
When I run terragrunt init at the root level with the terragrunt.hcl file, it gets stuck in the initialising state because it does not receive the Vault module outputs. But when I go to the Vault-tf folder and run terragrunt init and terragrunt apply, it fetches the Vault secrets properly; after that, when I run terragrunt init and terragrunt apply at the root level, it works fine and creates the Azure resources successfully.
My root terragrunt.hcl file looks like this:
dependency "credentials" {
  config_path = "/root/terragrunt-new/BaseConfig/Vault-tf"

  mock_outputs = {
    tenant_id       = "temp-tenant-id"
    client_id       = "temp-client-id"
    client_secret   = "temp-secret-id"
    subscription_id = "temp-subscription-id"
  }
}
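One relevant Terragrunt knob, sketched here (the attribute comes from the Terragrunt docs; the exact command list is an assumption): mock outputs can be limited to the commands allowed to use them, so init can proceed before the real Vault outputs exist:

```hcl
dependency "credentials" {
  config_path = "/root/terragrunt-new/BaseConfig/Vault-tf"

  mock_outputs = {
    tenant_id = "temp-tenant-id"
    # ... remaining mock values as above
  }

  # Only these commands may fall back to the mocks; apply still requires real outputs.
  mock_outputs_allowed_terraform_commands = ["init", "validate", "plan"]
}
```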
terraform {
  source = "git::https://git link to modules//"

  extra_arguments "force_subscription" {
    commands = [
      "init",
      "apply",
      "destroy",
      "refresh",
      "import",
      "plan",
      "taint",
      "untaint"
    ]

    env_vars = {
      ARM_TENANT_ID       = dependency.credentials.outputs.tenant_id
      ARM_CLIENT_ID       = dependency.credentials.outputs.client_id
      ARM_CLIENT_SECRET   = dependency.credentials.outputs.client_secret
      ARM_SUBSCRIPTION_ID = dependency.credentials.outputs.subscription_id
    }
  }
}

inputs = {
  prefix   = "terragrunt-nbux"
  location = "centralus"
}

locals {
  subscription_id = "xxxxxxxxxx-cc3e-4014-a891-xxxxxxxxxx"
}

generate "versions" {
  path      = "versions_override.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<EOF
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.9.0"
    }
    vault = {
      source  = "hashicorp/vault"
      version = "3.7.0"
    }
  }
}

provider "vault" {
  address         = "http://xx.xx.xx.xx:8200"
  skip_tls_verify = true
  token           = "hvs.xxxxxxxxxxxxxxxxx"
}

provider "azurerm" {
  features {}
}
EOF
}

remote_state {
  backend = "azurerm"

  config = {
    subscription_id      = local.subscription_id
    key                  = "${path_relative_to_include()}/terraform.tfstate"
    resource_group_name  = "rg-terragrunt-vault"
    storage_account_name = "terragruntnbuxstorage"
    container_name       = "base-config-tfstate"
  }

  generate = {
    path      = "backend.tf"
    if_exists = "overwrite_terragrunt"
  }
}
And my Vault-tf folder's terragrunt.hcl file looks like this:
terraform {
  source = "git::https:path/terragrunt-new//Modules/Vault"
}

generate "versions" {
  path      = "versions_override.tf"
  if_exists = "overwrite_terragrunt"
  contents  = <<EOF
terraform {
  required_providers {
    vault = {
      source  = "hashicorp/vault"
      version = "3.7.0"
    }
  }
}

provider "vault" {
  address         = "http://xx.xx.xx.xx:8200"
  skip_tls_verify = true
  token           = "hvs.xxxxxxxxxxxxxxxxxxxxI"
}
EOF
}

Loop over the AWS provider to create resources in every AWS account

I have a list of objects in Terraform called accounts, with the following structure:
variable "accounts" {
  type = list(object({
    id         = string # used in assume_role
    email      = string
    is_enabled = bool
  }))
}
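A small sketch of filtering this list down to the enabled accounts with a for expression (names match the variable above):

```hcl
locals {
  # keep only the accounts that have is_enabled = true
  enabled_accounts = [for account in var.accounts : account if account.is_enabled]
}
```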
What I am trying to achieve now is to create a simple S3 bucket in each of those AWS accounts (if is_enabled is true). I was able to do it for a single account, but I am not sure whether there is a way to loop over a provider.
Code for a single account - main.tf:
provider "aws" {
  alias  = "new_account"
  region = "eu-west-3"

  assume_role {
    role_arn     = "arn:aws:iam::${aws_organizations_account.this.id}:role/OrganizationAccountAccessRole"
    session_name = "new_account_creation"
  }
}

resource "aws_s3_bucket" "bucket" {
  provider = aws.new_account
  bucket   = "new-account-bucket-${aws_organizations_account.this.id}"
  acl      = "private"
}
You need to define one provider for each aws account you want to use:
Create a module (i.e. a directory), where your bucket configuration lives:
├── main.tf
└── module
    └── bucket.tf
bucket.tf should contain the resource definition: resource "aws_s3_bucket" "bucket" {...}
In main.tf, define multiple aws providers and call the module with each of them:
provider "aws" {
  alias  = "account1"
  region = "eu-west-1"
  ...
}

provider "aws" {
  alias  = "account2"
  region = "us-west-1"
  ...
}

module "my_module" {
  source = "./module"

  providers = {
    aws.account1 = aws.account1
    aws.account2 = aws.account2
  }
}
I guess you could also get fancy by creating a variable containing the providers and passing it to the module invocation (you could probably also filter the list to take the is_enabled flag into account).
More details about providers: https://www.terraform.io/docs/language/modules/develop/providers.html
Found what I was looking for here: https://github.com/hashicorp/terraform/issues/19932
Thanks Bryan Karaffa
## Just some data... a list(map())
locals {
  aws_accounts = [
    { "aws_account_id" : "123456789012", "foo_value" : "foo", "bar_value" : "bar" },
    { "aws_account_id" : "987654321098", "foo_value" : "foofoo", "bar_value" : "barbar" },
  ]
}

## Here's the proposed magic... `provider.for_each`
provider "aws" {
  for_each = local.aws_accounts
  alias    = each.value.aws_account_id

  assume_role {
    role_arn = "arn:aws:iam::${each.value.aws_account_id}:role/TerraformAccessRole"
  }
}

## Modules reference the provider dynamically using `each.value.aws_account_id`
module "foo" {
  source   = "./foo"
  for_each = local.aws_accounts

  providers = {
    aws = "aws.${each.value.aws_account_id}"
  }

  foo = each.value.foo_value
}

module "bar" {
  source   = "./bar"
  for_each = local.aws_accounts

  providers = {
    aws = "aws.${each.value.aws_account_id}"
  }

  bar = each.value.bar_value
}
Note that for_each on a provider block is proposed syntax from that GitHub issue; it is not supported in released Terraform versions, so in practice you still need one aliased provider per account, as in the previous answer.

Unable to create 5 buckets in terraform

I have the following code:
resource "aws_s3_bucket" "create_5_buckets" {
  count         = "${length(var.name)}"
  bucket        = "${var.name[count.index]}"
  acl           = "private"
  region        = "us-east-2"
  force_destroy = "true"

  versioning {
    enabled    = "true"
    mfa_delete = "false"
  }
}
I am using Terraform version 0.12. It keeps on running and gives me the following error:
Error creating S3 bucket: Error creating S3 bucket name-a, retrying: OperationAborted: A conflicting conditional operation is currently in progress against this resource. Please try again.
Nothing wrong with the code.
provider "aws" {
  region                  = "us-east-2"
  shared_credentials_file = "/root/.aws/credentials"
  profile                 = "default"
}

variable "name" {
  default = ["demo-123.com", "demo-124.com", "demo-125.com"]
}

resource "aws_s3_bucket" "create_5_buckets" {
  count         = "${length(var.name)}"
  bucket        = "${var.name[count.index]}"
  acl           = "private"
  region        = "us-east-2"
  force_destroy = "true"

  versioning {
    enabled    = "true"
    mfa_delete = "false"
  }
}
The code seems perfectly fine to me and runs well; this error is not caused by Terraform.
It is related to an AWS error (see here): there can be a synchronization delay after deleting an S3 bucket, so you need to try again after some time.
It could be a duplicate of AWS Error Message: A conflicting conditional operation is currently in progress against this resource
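Since bucket names are global and a just-deleted name can stay reserved for a while, one common workaround (a sketch, not from the original answer) is to append a unique suffix so re-created buckets never collide with a name that is still settling:

```hcl
resource "random_id" "suffix" {
  byte_length = 4 # 8 hex characters
}

resource "aws_s3_bucket" "create_5_buckets" {
  count  = length(var.name)
  bucket = "${var.name[count.index]}-${random_id.suffix.hex}"
}
```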

Setting s3 bucket with replication using Terraform

I'm trying to configure an S3 bucket with replication using Terraform. I'm getting the following error:
Error: insufficient items for attribute "destination"; must have at least 1
on main.tf line 114, in resource "aws_s3_bucket" "ps-db-backups":
114: lifecycle_rule {
I don't understand this error message. First, in the replication section I have destination defined. Second, the error message mentions lifecycle_rule, which does not have a destination attribute. The bucket definition is below.
resource "aws_s3_bucket" "ps-db-backups" {
  bucket = "ps-db-backups-b3bd1643-8cbf-4927-a64a-f0cf9b58dfab"
  acl    = "private"
  region = "eu-west-1"

  versioning {
    enabled = true
  }

  lifecycle_rule {
    id      = "transition"
    enabled = true

    transition {
      days          = 30
      storage_class = "STANDARD_IA"
    }

    expiration {
      days = 180
    }
  }

  replication_configuration {
    role = "${aws_iam_role.ps-db-backups-replication.arn}"

    rules {
      id     = "ps-db-backups-replication"
      status = "Enabled"

      destination {
        bucket        = "${aws_s3_bucket.ps-db-backups-replica.arn}"
        storage_class = "STANDARD_IA"
      }
    }
  }

  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}
Go through the Terraform docs carefully.
You need to create a separate Terraform resource for the destination, like this one:
resource "aws_s3_bucket" "destination" {
  bucket = "tf-test-bucket-destination-12345"
  region = "eu-west-1"

  versioning {
    enabled = true
  }
}
And then refer to it in your replication_configuration as:
destination {
  bucket        = "${aws_s3_bucket.destination.arn}"
  storage_class = "STANDARD"
}
I hope this helps. Try and let me know.
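The question also references aws_iam_role.ps-db-backups-replication without showing it; for completeness, a minimal trust policy for a replication role could look like this (a sketch; the policy attachments granting the s3:Replicate* permissions are omitted):

```hcl
resource "aws_iam_role" "ps-db-backups-replication" {
  name = "ps-db-backups-replication"

  # Allow the S3 service to assume this role for replication.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "s3.amazonaws.com" }
    }]
  })
}
```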
This appears to be a bug in Terraform 0.12.
See this issue https://github.com/terraform-providers/terraform-provider-aws/issues/9048
As a side note, if you also need to enable monitoring for S3 replication, you won't be able to: Terraform does not have this implemented.
But there's a PR opened for this, please vote with a thumbs UP, https://github.com/terraform-providers/terraform-provider-aws/pull/11337
