Can't use S3 backend with Terraform - missing credentials

I have the most pedestrian of Terraform samples:
# Configure AWS provider
provider "aws" {
  region     = "us-east-1"
  access_key = "xxxxxxxxx"
  secret_key = "yyyyyyyyyyy"
}
# Terraform configuration
terraform {
  backend "s3" {
    bucket = "terraform.example.com"
    key    = "85/182/terraform.tfstate"
    region = "us-east-1"
  }
}
When I run terraform init I receive the following (traced) response:
2018/08/14 14:19:13 [INFO] Terraform version: 0.11.7 41e50bd32a8825a84535e353c3674af8ce799161
2018/08/14 14:19:13 [INFO] Go runtime version: go1.10.1
2018/08/14 14:19:13 [INFO] CLI args: []string{"C:\\cygwin64\\usr\\local\\bin\\terraform.exe", "init"}
2018/08/14 14:19:13 [DEBUG] Attempting to open CLI config file: C:\Users\judall\AppData\Roaming\terraform.rc
2018/08/14 14:19:13 [DEBUG] File doesn't exist, but doesn't need to. Ignoring.
2018/08/14 14:19:13 [INFO] CLI command args: []string{"init"}
2018/08/14 14:19:13 [DEBUG] command: loading backend config file: C:\cygwin64\home\judall\t2
2018/08/14 14:19:13 [DEBUG] command: no data state file found for backend config
Initializing the backend...
2018/08/14 14:19:13 [DEBUG] New state was assigned lineage "5113646b-318f-9612-5057-bc4803292c3a"
2018/08/14 14:19:13 [INFO] Building AWS region structure
2018/08/14 14:19:13 [INFO] Building AWS auth structure
2018/08/14 14:19:13 [INFO] Setting AWS metadata API timeout to 100ms
2018/08/14 14:19:13 [INFO] Ignoring AWS metadata API endpoint at default location as it doesn't return any instance-id
2018/08/14 14:19:13 [DEBUG] plugin: waiting for all plugin processes to complete...
Error configuring the backend "s3": No valid credential sources found for AWS Provider.
Please see https://terraform.io/docs/providers/aws/index.html for more information on
providing credentials for the AWS Provider
Please update the configuration in your Terraform files to fix this error
then run this command again.
I've been googling for hours on this. I've tried to use the 'profile' property - which yields slightly different trace logs, but the same end result. I've tried setting the AWS_ environment variables - with the same result.
I'm running terraform version 0.11.7. Any suggestions?

The provider configuration is independent of your backend configuration.
The credentials you have configured in the provider block are used to create your AWS-related resources. To access the S3 bucket that stores your remote state, you also need to provide credentials. These can be the same as in your provider config, or completely different (for example, a set of credentials with permissions only on this specific bucket, for security reasons).
You can fix it by adding the credentials to the backend block:
# Terraform configuration
terraform {
  backend "s3" {
    bucket     = "terraform.example.com"
    key        = "85/182/terraform.tfstate"
    region     = "us-east-1"
    access_key = "xxxxxxxxx"
    secret_key = "yyyyyyyyyyy"
  }
}
Alternatively, you can create an AWS (default) profile in your home directory (Docs) and remove the credentials from your Terraform code (the preferred option when you store your config in a version control system).
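For illustration, a minimal shared credentials file for the default profile in ~/.aws/credentials might look like this (the key values are placeholders); with it in place, both the provider and the s3 backend can pick up the credentials without them appearing in your .tf files:
[default]
aws_access_key_id     = xxxxxxxxx
aws_secret_access_key = yyyyyyyyyyy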

As pointed out by @JimUdall in the comments, if you are re-running init against an updated backend configuration, you need to use -reconfigure so the changed configuration is applied:
terraform init -reconfigure
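If you prefer to keep the keys out of the configuration entirely, partial backend configuration lets you pass them at init time instead (a sketch; the key values are placeholders):
terraform init -reconfigure \
  -backend-config="access_key=xxxxxxxxx" \
  -backend-config="secret_key=yyyyyyyyyyy"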

Related

Terraform Cloud in Git mode - failed to read schema, permission denied

I'm new to Terraform and trying to use a 'custom' provider with Terraform Cloud. To be clear, if I use it on my Windows machine without Terraform Cloud, everything works just fine.
On Terraform Cloud I've got a workspace synchronized to my Git repo. The custom provider is uploaded to the Git repo at \terraform.d\plugins\zscaler.com\zpa\zpa\2.0.5\linux_amd64\terraform-provider-zpa_v2.0.5.
I've run git update-index --chmod=+x to compensate for Windows' inability to mark the provider as executable:
git update-index --chmod=+x .\terraform.d\plugins\zscaler.com\zpa\zpa\2.0.5\linux_amd64\terraform-provider-zpa_v2.0.5
I've also updated the lock file to include both Windows and Linux provider hashes, to deal with the "local provider doesn't match any of the checksums" issue:
terraform providers lock -fs-mirror="C:\Users\user1\AppData\Roaming\terraform.d\plugins\" -platform=windows_amd64 -platform=linux_amd64 zscaler.com/zpa/zpa
When I run terraform plan from VSCode (on my Windows machine) on the repo that's initialized to the TCloud I get the following error:
> terraform plan -var-file terraform.tfvar
. . .
2022-02-02T10:14:28.328-0600 [INFO] cloud: starting Plan operation
Terraform v1.1.4
on linux_amd64
Configuring remote state backend...
Initializing Terraform configuration...
╷
│ Error: failed to read schema for zpa_provisioning_key.iot_edge_key in zscaler.com/zpa/zpa: failed to instantiate provider "zscaler.com/zpa/zpa" to obtain schema: fork/exec .terraform/providers/zscaler.com/zpa/zpa/2.0.5/linux_amd64/terraform-provider-zpa_v2.0.5: permission denied
Enabling debug logging doesn't give me any more clues about what's wrong. I'd appreciate any suggestions.
Thank you
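For what it's worth, the executable bit staged by git update-index only reaches Terraform Cloud once the change is committed and pushed, so a sequence roughly like the following may be needed (the path is taken from the question, written with forward slashes; the commit message is arbitrary):
git update-index --chmod=+x terraform.d/plugins/zscaler.com/zpa/zpa/2.0.5/linux_amd64/terraform-provider-zpa_v2.0.5
git commit -m "mark custom provider binary as executable"
git push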

Terragrunt: Provider "tfstate" not available for installation

When trying to provision resources for an application in an AWS Elastic Beanstalk environment with Terragrunt, it gives:
Initializing modules...
- module.db
- module.db.db_subnet_group
- module.db.db_parameter_group
- module.db.db_option_group
- module.db.db_instance
Initializing the backend...
Initializing provider plugins...
- Checking for available provider plugins on https://releases.hashicorp.com...
Provider "tfstate" not available for installation.
A provider named "tfstate" could not be found in the official repository.
This may result from mistyping the provider name, or the given provider may
be a third-party provider that cannot be installed automatically.
terragrunt version v0.18.7
Terraform v0.11.14

How can I run terraform init with Azure on my local machine

I'm trying to run Terraform locally, but it should connect to Azure. We have Azure agents that do exactly this; being able to run it locally would help me move faster.
Here is my command
terraform init -reconfigure -backend-config ~/common.tfvars
Here is the error
Initializing modules...
- module.kubernetes
- module.database
- module.trafficmanager
- module.appInsights

Initializing the backend...

Error configuring the backend "azurerm": resource_group_name and credentials must be provided when access_key is absent

Please update the configuration in your Terraform files to fix this error
then run this command again.
cat ~/common.tfvars
resource_group_name = "myproject-nst-config-RG"
storage_account_name = "myprojectnstterraform"
container_name = "tfstatemyprojectact"
key = "nstproject"
What am I missing? Is what I want even possible?
Thank you!
You need to provide credentials for Terraform to connect to Azure, usually a service principal; this will include a username/application ID, a password (client secret), and a tenant ID. Have a read of the MS documentation on using Terraform with Azure and you will see they set environment variables with these details.
If you are trying to use the Azure CLI (az login) credentials instead, you'll want to make sure you are running Terraform 0.12.
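As a sketch, the environment variables the azurerm provider and backend read for a service principal look like this (all values below are placeholders):
export ARM_CLIENT_ID="00000000-0000-0000-0000-000000000000"
export ARM_CLIENT_SECRET="<client-secret>"
export ARM_TENANT_ID="00000000-0000-0000-0000-000000000000"
export ARM_SUBSCRIPTION_ID="00000000-0000-0000-0000-000000000000"
terraform init -reconfigure -backend-config ~/common.tfvars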

Providing Terraform with credentials in Terraform files instead of env variables

I have set up a Terraform project with a remote backend on GCP. Now, when I want to deploy the infrastructure, I run into issues with credentials. I have a credentials file in
/home/mike/.config/gcloud/credentials.json
In my Terraform project I have the following data source referring to the remote state:
data "terraform_remote_state" "project_id" {
backend = "gcs"
workspace = "${terraform.workspace}"
config {
bucket = "${var.bucket_name}"
prefix = "${var.prefix_project}"
}
}
and I specify the cloud provider with the details of my credentials file:
provider "google" {
version = "~> 1.16"
project = "${data.terraform_remote_state.project_id.project_id}"
region = "${var.region}"
credentials = "${file(var.credentials)}"
}
However, this runs into
data.terraform_remote_state.project_id: data.terraform_remote_state.project_id:
error initializing backend:
storage.NewClient() failed: dialing: google: could not find default
credentials.
if I add
export GOOGLE_APPLICATION_CREDENTIALS=/home/mike/.config/gcloud/credentials.json
I do get it to run as desired. My issue is that I would like to specify the credentials in the terraform files as I am running the terraform commands in an automated way from a python script where I cannot set the environment variables. How can I let terraform know where the credentials are without setting the env variable?
I was facing the same error when trying to run terraform (version 1.1.5) commands in spite of having successfully authenticated via gcloud auth login.
Error message in my case:
Error: storage.NewClient() failed: dialing: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
It turned out that I had to also authenticate via gcloud auth application-default login and was able to run terraform commands thereafter.
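In other words, roughly:
gcloud auth login                      # authenticates the gcloud CLI itself
gcloud auth application-default login  # writes Application Default Credentials that Terraform can pick up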
I figured this out in the end: the terraform_remote_state data source also needs the credentials.
E.g.
data "terraform_remote_state" "project_id" {
backend = "gcs"
workspace = "${terraform.workspace}"
config = {
bucket = "${var.bucket_name}"
prefix = "${var.prefix_project}"
credentials = "${var.credentials}" <- added
}
}
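If the error comes from the backend block itself rather than the data source, the gcs backend accepts a credentials argument as well; a minimal sketch (bucket and prefix are placeholders, and backend blocks cannot use variables, so the path must be a literal):
terraform {
  backend "gcs" {
    bucket      = "my-state-bucket"
    prefix      = "my-project"
    credentials = "/home/mike/.config/gcloud/credentials.json"
  }
}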

Error inspecting states in the "s3" backend: NoSuchBucket: The specified bucket does not exist

I am running terraform init in my Terraform modules folder and I am getting the error below.
Error inspecting states in the "s3" backend:
NoSuchBucket: The specified bucket does not exist
status code: 404, request id: 6667B0A661F9C62F, host id: 3mC8DNrS/gGHtp7mhVMRtpIUeMaNXs2cEozEY+akZf1ixFD6x2qQx7c3mX02M1BIbyfYowYt35s=
I was having the same issue.
Using the AWS CLI commands aws s3 ls and aws s3api list-objects --bucket bucket-name,
I could list the contents, but I was still having the same issue.
Error inspecting states in the "s3" backend:
NoSuchBucket: The specified bucket does not exist
status code: 404, request id: xxxxx, host id: xxxxxx
Erasing the .terraform directory fixed my issue.
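That reset is simply the following, run from the directory holding the configuration:
rm -rf .terraform
terraform init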
I was getting a similar issue, shown below:
Error inspecting states in the "s3" backend:
AccessDenied: Access Denied
status code: 403, request id: XXXXXX, host id: XXXXX
After removing the .terraform directory it worked (in my case I was using Jenkins, so I had to remove it from the project directory on the Jenkins server). Thanks!
In my case, even removing .terraform as mentioned did not work; I had to add the profile to the s3 backend block even though the profile already existed in the provider.
Adding my configuration in case it helps others:
provider "aws" {
region = var.region
profile = "myprofile"
}
terraform {
backend "s3" {
encrypt = true
bucket = "appname-terraform-state"
region = "ap-southeast-1"
key = "terraform.tfstate"
profile = "myprofile"
}
}
In my case I just removed the whole .terraform directory and ran terraform init again; it is now working for me.
