How to list resources when using remote state in Terraform?

I am using Terraform 0.12.9 and my state is saved in an S3 bucket. I'd like to list all resources with terraform state list. The documentation (https://www.terraform.io/docs/commands/state/list.html) says: -state=path - Path to the state file. Defaults to "terraform.tfstate". Ignored when remote state is used. How can I point the command at the state file when it is in a remote S3 bucket?

You need to configure the tfstate bucket path in your Terraform configuration:
terraform {
  backend "s3" {
    bucket = "bucket_name"
    key    = "my/key/location/terraform.tfstate"
    region = "bucket region"
  }
}
and then run terraform init so that Terraform fetches the state from the remote bucket.
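With the backend configured, the CLI reads the state transparently from S3, so no -state flag is needed; a minimal sketch of the workflow (the bucket and key values above are placeholders):
# Fetch the backend configuration and remote state metadata
terraform init
# List every resource tracked in the remote state
terraform state list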

Related

Unable to pull Terraform state from AWS S3

I'm trying to set things up so that I can use a Terraform backend to upload the state to S3, allowing my teammate to use my Terraform state and resume my work. This is my setup:
terraform {
  backend "s3" {
    bucket         = "username-terraform-state"
    key            = "billow/terraform.tfstate"
    region         = "us-west-2"
    dynamodb_table = "username-terraform-state-test-locks"
    encrypt        = true
  }
}
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "username-terraform-state"
    key    = "billow/terraform.tfstate"
    region = var.region
  }
}
With this setup, I end up with two folders in the S3 bucket: billow/, containing a terraform.tfstate file, and env:/remote_s3/billow/ (remote_s3 is the name of my Terraform workspace), containing another terraform.tfstate. Both of them are also updated when I execute a terraform import command.
What I want is to be able, when I create a new workspace, to pull the state file from the existing folder in the bucket and continue the project. The steps I took were placing the same .tf file in the directory and running terraform init, terraform refresh, and then terraform state pull to fetch the state file. However, this only pulls an empty state file, and I would need to re-import all the resources again.
So here are my two questions:
Why are there two folders in the bucket? I thought that with my backend setup there should be only one.
What should I do so that, when I set up a new Terraform workspace, I can import the whole state file from my previously saved Terraform state?
Thanks!
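For reference, the env:/ prefix is where the S3 backend keeps state for non-default workspaces (the configured key holds only the default workspace), which explains the second folder. A minimal sketch of resuming work in the remote_s3 workspace named above:
terraform init
# Select the existing workspace so state commands target
# env:/remote_s3/billow/terraform.tfstate rather than the default key
terraform workspace select remote_s3
# The pull should now return the populated workspace state
terraform state pull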

Terraform wants to recreate imported resources

Locally I:
Created main.tf
Initialized with terraform init
Imported the GCP project and the Cloud Run service
Updated main.tf so that terraform plan was not trying to change anything
Checked main.tf into GitHub
I set up GitHub Actions to:
Checkout
Set up gcloud
Initialize with terraform init
Plan with terraform plan
terraform plan is trying to recreate everything.
How do I make it detect existing resources?
By default, Terraform will initialise a local state. The problem with this state is that it is available only to you, on your PC. If you execute a plan somewhere else, this state is not available there. To solve this issue, you need to set up a remote backend for Terraform, so that the state file is stored in a centralised location.
If you are using Google Cloud, you can use a Cloud Storage bucket for storing the state file. Terraform offers the gcs backend for this. You have to create a bucket and provide the bucket name in the gcs backend configuration:
terraform {
  backend "gcs" {
    bucket = "tf-state-prod"
    prefix = "terraform/state"
  }
}
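A minimal sketch of the one-time setup, assuming the tf-state-prod bucket name from the block above and an authenticated gcloud/gsutil session:
# Create the bucket that will hold the state file
gsutil mb gs://tf-state-prod
# Re-initialise; Terraform detects the new backend and prompts to copy
# the existing local terraform.tfstate into it
terraform init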

Get the .tf file from the Azure remote backend in Terraform

I am trying out Terraform operations using Azure as the remote backend. So far, I have been able to store the state remotely in Azure. I am now unable to retrieve the state from the remote backend, as I do not store the .tf or .tfstate files locally. How can I fetch them from the remote? This is what I run:
terraform init -backend-config=config.backend.tfbackend
terraform state pull # fails because no configuration files exist locally
This is my config.backend.tfbackend:
backend "azurerm" {
  resource_group_name  = "rg098"
  storage_account_name = "tstate6565"
  container_name       = "tstate"
  key                  = "test5176"
}
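For a partial configuration like this, the usual pattern is an empty backend block in a local .tf file plus bare key/value pairs in the .tfbackend file (without the backend "azurerm" { ... } wrapper shown above). A minimal sketch reusing the names from the question:

# backend.tf (must exist locally before terraform init)
terraform {
  backend "azurerm" {}
}

# config.backend.tfbackend (key/value pairs only)
resource_group_name  = "rg098"
storage_account_name = "tstate6565"
container_name       = "tstate"
key                  = "test5176"

With both files in place, terraform init -backend-config=config.backend.tfbackend succeeds and terraform state pull can read the remote state.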

How can I upload Terraform state to an S3 bucket?

I have a project whose infrastructure is managed by Terraform. I'd like to push the state to an S3 bucket so other teams can use it. Below is the backend configuration:
terraform {
  backend "s3" {
    bucket = "MY_BUCKET"
    key    = "tfstate"
    region = "ap-southeast-2"
  }
}
When I run terraform init I get the error below:
AccessDenied: Access Denied
status code: 403, request id: 107E6007C9C64805, host id: kWASxeq1msxvGPZIKdi+7htg3yncMFxW9PQuXdC8ouwsEHMhx8ZPu6dKGUGWzDtblC6WRg1P1ew=
Terraform failed to load the default state from the "s3" backend.
State migration cannot occur unless the state can be loaded. Backend
modification and state migration has been aborted. The state in both the
source and the destination remain unmodified. Please resolve the
above error and try again.
It seems that Terraform tries to load state from the S3 bucket rather than push to it. How can I configure Terraform to push the state to S3?
I have configured an AWS profile in a .tf file:
provider "aws" {
region = "ap-southeast-2"
profile = "me"
}
The credentials for the current user have admin permissions on the bucket.
I was facing the same issue and found that the bucket mentioned in the backend.tf file had not been created in my AWS console. I created a bucket with the same name mentioned in backend.tf and it worked for me.
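If the bucket is indeed missing, it can be created up front; a sketch with the AWS CLI, using the placeholder name and region from the question (real bucket names must be lowercase and globally unique):
# Create the state bucket in the region the backend expects
aws s3 mb s3://MY_BUCKET --region ap-southeast-2
# Then initialise the backend again
terraform init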
For further readers:
AWS credentials can be provided as @Thiago Arrais mentioned.
Another way to provide credentials in the backend block is to define a profile:
terraform {
  backend "s3" {
    profile = "me" # AWS profile
    bucket  = "MY_BUCKET"
    key     = "tfstate"
    region  = "ap-southeast-2"
  }
}
Your ~/.aws/credentials file then needs a profile me with the access key and secret key defined in it, as follows:
[me]
aws_access_key_id = access_key_value
aws_secret_access_key = secret_key_value
I had the exact same problem. When terraform { backend "s3" {} } is defined, that block is evaluated before the provider "aws" {} block. That's why the backend cannot find the credentials defined in the provider block.
You're not providing the S3 credentials in the backend block. You'll need to set them there (access_key and secret_key parameters) or via environment variables (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY).
You'll also need to make sure that the bucket exists and that these credentials do have access to it.
By the way, you don't need an AWS provider block. The S3 backend is usable even if you don't manage AWS resources in your Terraform config.
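A minimal sketch of the environment-variable route mentioned above (values are placeholders):
# Export credentials that the S3 backend picks up at init time
export AWS_ACCESS_KEY_ID=access_key_value
export AWS_SECRET_ACCESS_KEY=secret_key_value
terraform init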
For me, the problem was that the region in backend.tf was different from the region where my bucket actually was.

Is it expected that the terraform_remote_state data source creates state when it does not exist?

I'm defining a terraform_remote_state data source (gcs backend) in Terraform. When I run terraform plan, the state file for the remote state is created if it did not previously exist, even when I'm not referencing the state in other resources.
Terraform v0.11.14
So when I plan for the dev environment:
data "terraform_remote_state" "example" {
backend = "gcs"
workspace = "dev-us-east1"
config {
bucket = "bucket"
prefix = "global/projects/example-project"
}
}
and the file at bucket/global/projects/example-project/dev-us-east1 does not exist in GCS, then it is created as an empty state.
I expected some kind of "state not found" error, but instead the remote state is created with empty content.
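For context, the data source is normally consumed by reading one of its outputs elsewhere in the configuration; a minimal 0.11-style sketch (the project_id output name is hypothetical):
# Hypothetical consumer; project_id must be an output of the remote state
resource "google_service_account" "ci" {
  account_id = "ci-runner"
  project    = "${data.terraform_remote_state.example.project_id}"
}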
