Given the existence of first_profile in ~/.aws/credentials:
[first_profile]
aws_access_key_id=ACOYHFVDLCHVNOISYGV
aws_secret_access_key=RApidgudsphAFdIK+097dslvxchnv
and a profile backend_role in ~/.aws/config whose role_arn is arn:aws:iam::123456789101:role/roleA:
[profile backend_role]
role_arn=arn:aws:iam::123456789101:role/roleA
source_profile=first_profile
Using the AWS CLI, I confirm that first_profile can assume backend_role and has permissions to an S3 bucket and a DynamoDB table by running:
aws s3 ls s3://random-tf-state-bucket --profile backend_role
aws dynamodb describe-table --table-name random-tf-state-lock-table --profile backend_role --region us-east-2
The above commands do not return AccessDenied, thus confirming access.
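As an extra sanity check, the assumed identity can also be inspected directly:
aws sts get-caller-identity --profile backend_role
This should print the ARN of the assumed roleA session rather than the first_profile user.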
Expectation:
According to the Terraform documentation/blog, and given a main.tf file set up like the one below:
terraform {
  required_version = "1.0.4"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.53.0"
    }
  }
}
terraform {
  backend "s3" {
  }
}
provider "aws" {
region = "us-eat-1"
profile = "first_profile"
shared_credentials_file = "~/.aws/credentials"
assume_role {
role_arn = "role_arn=arn:aws:iam::123456789101:role/roleA"
}
}
and an s3.backend.tfvars file:
bucket = "random-tf-state-bucket"
key = "terraform.tfstate"
region = "us-east-2"
dynamodb_table = "random-tf-state-lock-table"
encrypt = true
running terraform init -backend-config=s3.backend.tfvars should work.
Result:
Initializing the backend...
╷
│ Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.
│
│ Please see https://www.terraform.io/docs/language/settings/backends/s3.html
│ for more information about providing credentials.
│
│ Error: NoCredentialProviders: no valid providers in chain. Deprecated.
│ For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Question:
What step in this process am I missing?
A similar issue reported here was helpful in getting to a solution.
Solution:
The key to this was realizing that the profile used to configure the S3 backend is its own thing - it is not tied to the provider block.
Thus s3.backend.tfvars ends up like this:
bucket = "random-tf-state-bucket"
key = "terraform.tfstate"
region = "us-east-2"
dynamodb_table = "random-tf-state-lock-table"
encrypt = true
profile = "roleA"
and the provider block ended up looking like:
provider "aws" {
region = var.aws_region
profile = var.some_other_profile
assume_role {
role_arn = "some_other_role_to_assume"
}
}
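The same backend setting can also be supplied directly on the command line instead of in the tfvars file, since -backend-config accepts both file paths and individual key=value pairs:
terraform init -backend-config=s3.backend.tfvars -backend-config="profile=backend_role"
Either way, the backend block resolves its own credentials, independently of whatever the provider block assumes.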
Related
I would like to run a 'terraform plan' for validation that doesn't run against the real AWS infrastructure.
My provider configuration for this test is:
provider "aws" {
region = "eu-central-1"
access_key = "mock_access_key"
secret_key = "mock_secret_key"
skip_credentials_validation = true
skip_region_validation = true
skip_requesting_account_id = true
skip_metadata_api_check = true
s3_force_path_style = true
endpoints {
apigateway = "http://localhost:4566"
cloudformation = "http://localhost:4566"
cloudwatch = "http://localhost:4566"
dynamodb = "http://localhost:4566"
es = "http://localhost:4566"
firehose = "http://localhost:4566"
iam = "http://localhost:4566"
kinesis = "http://localhost:4566"
lambda = "http://localhost:4566"
route53 = "http://localhost:4566"
redshift = "http://localhost:4566"
s3 = "http://localhost:4566"
secretsmanager = "http://localhost:4566"
ses = "http://localhost:4566"
sns = "http://localhost:4566"
sqs = "http://localhost:4566"
ssm = "http://localhost:4566"
stepfunctions = "http://localhost:4566"
sts = "http://localhost:4566"
}
}
Here is a small Terraform code example.
# terraform code
data "aws_caller_identity" "current" {}
resource "aws_iam_role" "iam_for_lambda" {
  name = "lambda-role-name"
  # assume_role_policy is a required argument of aws_iam_role; a minimal
  # Lambda trust policy is added here so the example is self-contained.
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action    = "sts:AssumeRole"
        Effect    = "Allow"
        Principal = { Service = "lambda.amazonaws.com" }
      },
    ]
  })
  inline_policy {
    name = "sts"
    policy = jsonencode({
      Version = "2012-10-17"
      Statement = [
        {
          Action   = ["sts:AssumeRole"]
          Effect   = "Allow"
          Resource = "arn:aws:iam::${data.aws_caller_identity.current.id}:role/myrole"
        },
      ]
    })
  }
}
Everything works as expected except the data resources (from the Terraform AWS provider). At the moment, Terraform tries to resolve the data resources during the plan and aborts with the following error message:
module.moduletest.data.aws_caller_identity.current: Still reading... [20s elapsed]
╷
│ Error: reading STS Caller Identity
│
│ with module.moduletest.data.aws_caller_identity.current,
│ on ../data.tf line 2, in data "aws_caller_identity" "current":
│ 2: data "aws_caller_identity" "current" {}
│
│ RequestError: send request failed
│ caused by: Post "http://localhost:4566/": dial tcp 127.0.0.1:4566: connect: connection refused
How can you mock the data resources - the lookups on AWS resources? Are any additional settings necessary in my provider config?
EDIT
Currently the AWS configuration is set via environment variables:
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalxxxxEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=eu-central-1
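Side note on the error above: it is a plain connection refused on localhost:4566, so with this provider configuration the STS lookup behind aws_caller_identity can only succeed if something is actually listening on that port - for example a LocalStack container, which exposes its edge port there by default:
docker run --rm -p 4566:4566 localstack/localstack
The image name and port are LocalStack's defaults; adjust them if your setup differs.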
I'm trying to create an EC2 instance as described in the Terraform documentation.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}
provider "aws" {
  access_key = "Acxxxxxxxxxxxxxxxxx"
  secret_key = "UxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxO"
  region     = "ap-south-1"
}
resource "aws_instance" "app_server" {
  ami           = "ami-076e3a557efe1aa9c"
  instance_type = "t2.micro"
  tags = {
    Name = "ExampleAppServerInstance"
  }
}
But I am facing this issue: error configuring Terraform AWS Provider: loading configuration: credential type source_profile profile default.
I have tried the export commands and configuring the default profile, but nothing works for me.
What am I doing wrong here?
I removed .terraform and the lock file (.terraform.lock.hcl) and tried a fresh terraform init.
Thanks for this question.
I'd rather go with the following:
Configure an AWS profile, either with
aws configure
or by editing the files directly:
vim ~/.aws/config
and then
vim ~/.aws/credentials
Write a new profile name, or use the default, as follows:
~/.aws/config
[default]
region = us-east-1
output = json
[profile TERRAFORM]
region=us-east-1
output=json
~/.aws/credentials
# Sitech
[default]
aws_access_key_id = A****
aws_secret_access_key = B*********
[TERRAFORM]
aws_access_key_id = A****
aws_secret_access_key = B*********
Use the profile argument in the Terraform provider rather than the access key and secret access key.
main.tf
provider "aws" {
profile = var.aws_profile
region = var.main_aws_region
}
terraform.tfvars
aws_profile = "TERRAFORM"
main_aws_region = "us-east-1"
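For completeness, the two variables referenced above also need to be declared in the configuration, e.g. in a variables.tf along these lines (the defaults are just illustrative):
variable "aws_profile" {
  type    = string
  default = "TERRAFORM"
}
variable "main_aws_region" {
  type    = string
  default = "us-east-1"
}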
I have created a module that I want to use across multiple providers (just two AWS providers for two regions). How can I set a resource's provider value via a variable from a calling module? I am calling a module codebuild.tf (which I want to be region-agnostic) from a MGMT module named cicd.tf. Folder structure:
main.tf
/MGMT/
-> cicd.tf
/modules/codebuild/
-> codebuild.tf
main.tf:
terraform {
  required_version = ">= 1.0.10"
  backend "s3" {
  }
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}
# default AWS provider for MGMT resources in us-east-1 and global
provider "aws" {
  region = "us-east-1"
}
# DEV Account resources in us-east-1 and global
provider "aws" {
  region = "us-east-1"
  assume_role {
    role_arn = "arn:aws:iam::accountid:role/dev-rolename"
  }
  alias = "dev_us-east-1"
}
# DEV Account resources in us-west-2 and global
provider "aws" {
  region = "us-west-2"
  assume_role {
    role_arn = "arn:aws:iam::accountid:role/dev-rolename"
  }
  alias = "dev_us-west-2"
}
module "MGMT" {
  source  = "./MGMT"
  count   = var.aws_env == "MGMT" ? 1 : 0
  aws_env = var.aws_env
}
When I build my TF, it's under the MGMT AWS account, which uses the default aws provider that doesn't have an alias. I am then trying to set a provider with an AWS IAM role (that's cross-account) when I am calling the module (I made the resource a module because I want to run it in multiple regions):
/MGMT/cicd.tf:
# DEV in cicd.tf
# create the codebuild resource in the assumed role's us-east-1 region
module "dev_cicd_iac_us_east_1" {
  source             = "../modules/codebuild/"
  input_aws_provider = "aws.dev_us-east-1"
  input_aws_env      = var.dev_aws_env
}
# create the codebuild resource in the assumed role's us-west-2 region
module "dev_cicd_iac_us_west_2" {
  source             = "../modules/codebuild/"
  input_aws_provider = "aws.dev_us-west-2"
  input_aws_env      = var.dev_aws_env
}
/modules/codebuild/codebuild.tf:
# Code Build resource here
variable "input_aws_provider" {}
variable "input_aws_env" {}
resource "aws_codebuild_project" "codebuild-iac" {
  provider = tostring(var.input_aws_provider) # trying to force a string; with just the bare variable, Terraform looks for a provider named "var"
  name     = "${var.input_aws_env}-CodeBuild-IaC"
  # etc...
}
I get the following error when I plan the above:
│ Error: Invalid provider reference
│ On modules/codebuild/codebuild.tf line 25: Provider argument requires
│ a provider name followed by an optional alias, like "aws.foo".
How can I make the provider value a proper reference to the aws provider defined in main.tf while still using a MGMT folder/module file named cicd.tf?
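For reference, provider references in Terraform are static expressions and cannot be built from strings or variables, which is why the tostring() approach cannot work. The documented way to hand an aliased provider configuration to a module is the providers meta-argument on the module block; a minimal sketch using the aliases above would be:
module "dev_cicd_iac_us_east_1" {
  source = "../modules/codebuild/"
  providers = {
    aws = aws.dev_us-east-1
  }
  input_aws_env = var.dev_aws_env
}
With that in place, the provider argument and the input_aws_provider variable inside /modules/codebuild/codebuild.tf are no longer needed; the module's aws resources use whichever configuration is passed in.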
I am doing a small POC with Terraform and I am unable to run terraform plan.
My code:
terraform {
  backend "azurerm" {
    storage_account_name = "appngqastorage"
    container_name       = "terraform"
    key                  = "qa.terraform.tfstate"
    access_key           = "my access key here"
  }
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 2.77"
    }
  }
}
provider "azurerm" {
  features {}
}
resource "azurerm_resource_group" "qa_resource_group" {
  location = "East US"
  name     = "namehere"
}
My execution:
terraform init = success
terraform validate = configuration is valid
terraform plan = throws exception
Error:
│ Error: Plugin error
│
│ with provider["registry.terraform.io/hashicorp/azurerm"],
│ on main.tf line 15, in provider "azurerm":
│ 15: provider"azurerm"{
│
│ The plugin returned an unexpected error from plugin.(*GRPCProvider).ConfigureProvider: rpc error: code = Internal desc = grpc: error while marshaling: string field contains invalid UTF-8
After digging a little deeper I was able to figure out what the issue was.
The project I am working on is used across multiple regions, so when testing I swap my region in order to properly check the data displayed in a specific region. This time, when running terraform apply, my Windows configuration was pointing at another region, not the US one.
This thread helped me understand.
1. Upgrade the Azure CLI to the latest version using az upgrade
2. Log into your Azure account using az login
3. Re-run your terraform commands!
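To confirm what the CLI is currently pointing at before re-running Terraform, the following read-only commands print the active cloud and subscription:
az cloud show
az account show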
I'm working on an AWS multi-account setup with Terraform. I've got a master account that creates several sub-accounts, and in the sub-accounts I'm referencing the master's remote state to retrieve output values.
The terraform plan command is failing for this configuration in a test main.tf:
terraform {
  required_version = ">= 0.12.0"
  backend "s3" {
    bucket = "bucketname"
    key    = "statekey.tfstate"
    region = "us-east-1"
  }
}
provider "aws" {
  region  = "us-east-1"
  version = "~> 2.7"
}
data "aws_region" "current" {}
data "terraform_remote_state" "common" {
  backend = "s3"
  config {
    bucket = "anotherbucket"
    key    = "master.tfstate"
  }
}
With the following error:
➜ test terraform plan
Error: Unsupported block type
on main.tf line 20, in data "terraform_remote_state" "common":
20: config {
Blocks of type "config" are not expected here. Did you mean to define argument
"config"? If so, use the equals sign to assign it a value.
From what I can tell from the documentation, this should be working… what am I doing wrong?
➜ test terraform -v
Terraform v0.12.2
+ provider.aws v2.14.0
It seems the related documentation wasn't updated after the upgrade to 0.12.x.
As the error prompt suggests, add = after config:
data "terraform_remote_state" "common" {
backend = "s3"
config = {
bucket = "anotherbucket"
key = "master.tfstate"
}
}
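The values exported by the master state are then read through the outputs attribute of the data source, for example (vpc_id is just an illustrative output name):
vpc_id = data.terraform_remote_state.common.outputs.vpc_id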
If this fixes the problem, I recommend raising a PR to update the documentation so others can avoid the same issue.