I have a project whose infrastructure is managed by Terraform. I'd like to push the state to an S3 bucket so other teams can use it. Below is the backend configuration:
terraform {
  backend "s3" {
    bucket = "MY_BUCKET"
    key    = "tfstate"
    region = "ap-southeast-2"
  }
}
When I run terraform init, I get the error below:
AccessDenied: Access Denied
status code: 403, request id: 107E6007C9C64805, host id: kWASxeq1msxvGPZIKdi+7htg3yncMFxW9PQuXdC8ouwsEHMhx8ZPu6dKGUGWzDtblC6WRg1P1ew=
Terraform failed to load the default state from the "s3" backend.
State migration cannot occur unless the state can be loaded. Backend
modification and state migration has been aborted. The state in both the
source and the destination remain unmodified. Please resolve the
above error and try again.
It seems that Terraform tries to load existing state from the S3 bucket rather than push to it. How can I configure Terraform to push state to S3?
I have configured an AWS profile in a .tf file:
provider "aws" {
region = "ap-southeast-2"
profile = "me"
}
The credentials for the current user have admin permissions on the bucket.
I was facing the same issue and found that the bucket referenced in the backend.tf file had not been created in my AWS account. I created a bucket with the same name mentioned in backend.tf, and it worked for me.
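If the bucket doesn't exist yet, it can be created up front; a minimal sketch with the AWS CLI (bucket name and region are the placeholders from the question, and regions other than us-east-1 need the LocationConstraint):
aws s3api create-bucket \
  --bucket MY_BUCKET \
  --region ap-southeast-2 \
  --create-bucket-configuration LocationConstraint=ap-southeast-2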
For future readers:
AWS credentials can be provided as Thiago Arrais mentioned (in the backend block or via environment variables).
Another way to provide credentials in the backend block is to define a profile:
terraform {
  backend "s3" {
    profile = "me" # AWS profile
    bucket  = "MY_BUCKET"
    key     = "tfstate"
    region  = "ap-southeast-2"
  }
}
And your ~/.aws/credentials file has a profile named me with aws_access_key_id and aws_secret_access_key defined in it, as follows:
[me]
aws_access_key_id = access_key_value
aws_secret_access_key = secret_key_value
I had the exact same problem. When terraform { backend "s3" {} } is defined, that block is evaluated before the provider "aws" {} block. That's why the backend cannot find the credentials defined in the provider block.
You're not providing the S3 credentials in the backend block. You'll need to set them there (access_key and secret_key parameters) or via environment variables (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY).
You'll also need to make sure that the bucket exists and that these credentials do have access to it.
By the way, you don't need an AWS provider block. The S3 backend is usable even if you don't manage AWS resources in your Terraform config.
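For illustration, a minimal sketch of the backend block with inline credentials (placeholder values only; prefer the environment variables to keep secrets out of your config):
terraform {
  backend "s3" {
    bucket     = "MY_BUCKET"
    key        = "tfstate"
    region     = "ap-southeast-2"
    access_key = "AKIA_PLACEHOLDER"   # or omit and export AWS_ACCESS_KEY_ID
    secret_key = "SECRET_PLACEHOLDER" # or omit and export AWS_SECRET_ACCESS_KEY
  }
}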
In my case, the region in backend.tf was different from the region the bucket was actually in.
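You can check which region a bucket actually lives in with the AWS CLI (bucket name is the placeholder from above), e.g.:
aws s3api get-bucket-location --bucket MY_BUCKET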
I'm trying to set up a Terraform backend that uploads the state to an S3 bucket, so that my teammate can use my Terraform state to resume my work. This is my setup:
terraform {
  backend "s3" {
    bucket         = "username-terraform-state"
    key            = "billow/terraform.tfstate"
    region         = "us-west-2"
    dynamodb_table = "username-terraform-state-test-locks"
    encrypt        = true
  }
}

data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "username-terraform-state"
    key    = "billow/terraform.tfstate"
    region = var.region
  }
}
With this setup, I have two folders in the S3 bucket. One is billow/, with a terraform.tfstate file. There's another folder, env:/remote_s3/billow/ (remote_s3 is the name of my Terraform workspace), with another terraform.tfstate. Both are also updated when I execute a terraform import command.
What I want is that when I create a new workspace, I can pull the state file from the existing folder in the bucket and continue the project. The steps I took were placing the same .tf files in the directory and running terraform init, terraform refresh, and then terraform state pull. However, this only pulls an empty state file, and I would need to re-import all the resources again.
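Concretely, the sequence I ran looks like this (workspace name remote_s3 as above; the output redirect is just to inspect the pulled state):
terraform init
terraform workspace new remote_s3
terraform refresh
terraform state pull > pulled.tfstate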
So here are my two questions:
Why are there two folders in the bucket? I thought with my backend setup there should be only one of them.
What should I do to make it so that when I set up a new terraform workspace, I would be able to import the whole state file from my previously saved terraform state?
Thanks!
I am using Terraform Cloud to manage the state of the infrastructure provisioned in AWS.
I am trying to use terraform import to import an existing resource that is currently not managed by terraform.
I understand terraform import is a local-only command. I have set up a workspace reference as follows:
terraform {
  required_version = "~> 0.12.0"

  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "foo"

    workspaces {
      name = "bar"
    }
  }
}
The AWS credentials are configured in the remote cloud workspace, but Terraform does not appear to reference the AWS credentials from the workspace; instead it falls back to using the local credentials, which point to a different AWS account. I would like Terraform to use the credentials from the workspace variables when I run terraform import.
When I comment out the locally configured credentials, I get the error:
Error: No valid credential sources found for AWS Provider.
I would have expected terraform to use the credentials configured in the workspace.
Note that Terraform is able to use the credentials correctly when I run the plan/apply commands directly from the cloud console.
Per the backends section of the import docs, plan and apply run in Terraform Cloud whereas import runs locally. Therefore, the import command will not have access to workspace credentials set in Terraform Cloud. From the docs:
In order to use Terraform import with a remote state backend, you may need to set local variables equivalent to the remote workspace variables.
So instead of running the following locally (assuming you've provided access keys to Terraform Cloud):
terraform import aws_instance.myserver i-12345
we should run, for example:
export AWS_ACCESS_KEY_ID=abc
export AWS_SECRET_ACCESS_KEY=1234
terraform import aws_instance.myserver i-12345
where the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY have the same permissions as those configured in Terraform Cloud.
Note for AWS SSO users
If you are using AWS SSO and AWS CLI v2, support for Terraform using the SSO credential cache was added per this AWS provider issue. The steps for importing with an SSO profile are:
Ensure you've performed a login and have an active session with e.g. aws sso login --profile my-profile
Make the profile name available to terraform as an environment variable with e.g. AWS_PROFILE=my-profile terraform import aws_instance.myserver i-12345
If the following error is displayed, ensure you are using an AWS CLI version newer than 2.1.23:
Error: SSOProviderInvalidToken: the SSO session has expired or is invalid
│ caused by: expected RFC3339 timestamp: parsing time "2021-07-18T23:10:46UTC" as "2006-01-02T15:04:05Z07:00": cannot parse "UTC" as "Z07:00"
Use the terraform_remote_state data source, for example:
data "terraform_remote_state" "test" {
backend = "s3"
config = {
bucket = "BUCKET_NAME"
key = "BUCKET_KEY WHERE YOUR TERRAFORM.TFSTATE FILE IS PRESENT"
region = "CLOUD REGION"
}
}
Now you can reference the outputs of your provisioned resources. For example, to get the VPC ID:
data.terraform_remote_state.test.outputs.vpc_id
Just make sure the resource property you want to reference is exported as an output in the source configuration, so that it is stored in the terraform.tfstate file.
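For completeness, a minimal sketch of the output the source configuration would need to export (the aws_vpc.main resource name is illustrative):
output "vpc_id" {
  value = aws_vpc.main.id # hypothetical VPC resource in the source configuration
}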
I am using terraform 0.12.9 and the state is saved in an S3 bucket. I'd like to list all resources with terraform state list. Based on this document, https://www.terraform.io/docs/commands/state/list.html, it says: -state=path - Path to the state file. Defaults to "terraform.tfstate". Ignored when remote state is used. How can I pass the state file if it is in a remote S3 bucket?
You need to configure the tfstate bucket path in your terraform.tf file:
terraform {
  backend "s3" {
    bucket = "bucket_name"
    key    = "my/key/location/terraform.tfstate"
    region = "bucket region"
  }
}
and then run terraform init so that Terraform fetches the state from the remote bucket.
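After that, the state subcommands read the remote state directly, so no -state flag is needed:
terraform init
terraform state list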
I have a $HOME/.aws/credentials file like this:
[config1]
aws_access_key_id=accessKeyId1
aws_secret_access_key=secretAccesskey1
[config2]
aws_access_key_id=accessKeyId2
aws_secret_access_key=secretAccesskey2
So I was expecting that, with this configuration, Terraform would use the second set of credentials:
terraform {
  backend "s3" {
    bucket  = "myBucket"
    region  = "eu-central-1"
    key     = "path/to/terraform.tfstate"
    encrypt = true
  }
}

provider "aws" {
  profile = "config2"
  region  = "eu-central-1"
}
But when I run terraform init, it says it hasn't found any valid credentials:
Initializing the backend...
Error: No valid credential sources found for AWS Provider.
Please see https://terraform.io/docs/providers/aws/index.html for more information on
providing credentials for the AWS Provider
As a workaround, I made config2 the default in my credentials file and removed the profile field from the provider block, which works, but I really need to use something like the first approach. What am I missing here?
Unfortunately, you need to provide the IAM credential configuration to the backend configuration as well as to your AWS provider configuration.
The S3 backend configuration takes the same parameters here as the AWS provider so you can specify the backend configuration like this:
terraform {
  backend "s3" {
    bucket  = "myBucket"
    region  = "eu-central-1"
    key     = "path/to/terraform.tfstate"
    encrypt = true
    profile = "config2"
  }
}

provider "aws" {
  profile = "config2"
  region  = "eu-central-1"
}
There are a few reasons why this needs to be done separately. One is that you can independently use different IAM credentials, accounts, and regions for the S3 bucket and for the resources you will be managing with the AWS provider. You might also want to use S3 as a backend even if you are creating resources in another cloud provider, or not using a cloud provider at all; Terraform can manage resources in a lot of places that don't have a way to store Terraform state. The main reason, though, is that backends are managed by the core Terraform binary rather than by the provider binaries, and backend initialisation happens before pretty much anything else.
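As an aside, if you'd rather not hard-code the profile in the backend block, partial backend configuration lets you supply it at init time instead; a minimal sketch:
terraform init -backend-config="profile=config2"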
I want to create an RDS instance and the entire required infrastructure for it in AWS. I don't understand the security part of Terraform. I want to encrypt the sensitive data in .tfstate at least, e.g. the password/username for the RDS instance. What is the best way to store sensitive data for .tfstate? If this isn't supported, please suggest other ways to do it. Thank you.
Using the AWS provider as an example:
I recommend saving the .tfstate file to an S3 bucket and setting a policy on it so that only nominated roles have permission to access the bucket and the related KMS key.
terraform {
  required_version = "~> 0.10"

  backend "s3" {
    bucket     = "<global_unique_bucket_name>"
    key        = "development/vpc.tfstate"
    region     = "ap-southeast-2"
    kms_key_id = "alias/terraform"
    encrypt    = true
  }
}
Always enable versioning on that S3 bucket.
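For reference, versioning can be enabled with the AWS CLI (bucket name is the placeholder from the block above):
aws s3api put-bucket-versioning \
  --bucket <global_unique_bucket_name> \
  --versioning-configuration Status=Enabled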