How to Configure the Terraform AWS Provider?

I'm trying to create an EC2 instance as mentioned in Terraform documentation.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  access_key = "Acxxxxxxxxxxxxxxxxx"
  secret_key = "UxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxO"
  region     = "ap-south-1"
}

resource "aws_instance" "app_server" {
  ami           = "ami-076e3a557efe1aa9c"
  instance_type = "t2.micro"

  tags = {
    Name = "ExampleAppServerInstance"
  }
}
But I'm facing this error: error configuring Terraform AWS Provider: loading configuration: credential type source_profile profile default.
I have tried exporting the credentials from the command line and configuring the default profile, but nothing works for me.
What am I doing wrong here?
I removed .terraform and the lock file and tried a fresh terraform init.

Thanks for this question.
I'd rather go with the following:
Configure AWS profile:
aws configure
or
vim ~/.aws/config
and then
vim ~/.aws/credentials
Write a new profile, or edit the default, as follows:
~/.aws/config
[default]
region = us-east-1
output = json

[profile TERRAFORM]
region = us-east-1
output = json
~/.aws/credentials
# Sitech
[default]
aws_access_key_id = A****
aws_secret_access_key = B*********

[TERRAFORM]
aws_access_key_id = A****
aws_secret_access_key = B*********
Then use the provider's profile argument rather than the access key and secret key:
main.tf
provider "aws" {
  profile = var.aws_profile
  region  = var.main_aws_region
}
terraform.tfvars
aws_profile = "TERRAFORM"
main_aws_region = "us-east-1"
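For completeness, the two variables referenced above also need declarations; a minimal sketch (the file name and defaults here are assumptions, not from the original answer):

```hcl
# variables.tf (assumed file name)
variable "aws_profile" {
  type        = string
  description = "AWS CLI profile for Terraform to use"
  default     = "TERRAFORM"
}

variable "main_aws_region" {
  type        = string
  description = "Region for the AWS provider"
  default     = "us-east-1"
}
```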

Related

Terraform: Set an AWS Resource's provider value via a module variable

I have created a module I want to use across multiple providers (just two AWS providers for 2 regions). How can I set a resource's provider value via variable from a calling module? I am calling a module codebuild.tf (which I want to be region agnostic) from a MGMT module named cicd.tf - Folder structure:
main.tf
/MGMT/
-> cicd.tf
/modules/codebuild/
-> codebuild.tf
main.tf:
terraform {
  required_version = ">= 1.0.10"

  backend "s3" {
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

# default AWS provider for MGMT resources in us-east-1 and global
provider "aws" {
  region = "us-east-1"
}

# DEV Account resources in us-east-1 and global
provider "aws" {
  region = "us-east-1"
  assume_role {
    role_arn = "arn:aws:iam::accountid:role/dev-rolename"
  }
  alias = "dev_us-east-1"
}

# DEV Account resources in us-west-2 and global
provider "aws" {
  region = "us-west-2"
  assume_role {
    role_arn = "arn:aws:iam::accountid:role/dev-rolename"
  }
  alias = "dev_us-west-2"
}

module "MGMT" {
  source  = "./MGMT"
  count   = var.aws_env == "MGMT" ? 1 : 0
  aws_env = var.aws_env
}
When I build my TF, it's under the MGMT AWS account, which uses the default aws provider that doesn't have an alias. I am then trying to set a provider with a cross-account AWS IAM role when calling the module (I made the resource a module because I want to run it in multiple regions):
/MGMT/cicd.tf:
# DEV in cicd.tf
# create the codebuild resource in the assumed role's us-east-1 region
module "dev_cicd_iac_us_east_1" {
  source             = "../modules/codebuild/"
  input_aws_provider = "aws.dev_us-east-1"
  input_aws_env      = var.dev_aws_env
}

# create the codebuild resource in the assumed role's us-west-2 region
module "dev_cicd_iac_us_west_2" {
  source             = "../modules/codebuild/"
  input_aws_provider = "aws.dev_us-west_2"
  input_aws_env      = var.dev_aws_env
}
/modules/codebuild/codebuild.tf:
# Code Build resource here
variable "input_aws_provider" {}
variable "input_aws_env" {}

resource "aws_codebuild_project" "codebuild-iac" {
  provider = tostring(var.input_aws_provider) # trying to make it a string, with just the var there it looks for a var provider
  name     = "${var.input_aws_env}-CodeBuild-IaC"
  # etc...
}
I get the following error when I plan the above:
│ Error: Invalid provider reference
│ On modules/codebuild/codebuild.tf line 25: Provider argument requires
│ a provider name followed by an optional alias, like "aws.foo".
How can I make the provider value a proper reference to the aws provider defined in main.tf while still using a MGMT folder/module file named cicd.tf?
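For reference, Terraform provider references are static and cannot come from a string variable, so provider = tostring(...) cannot work. The usual pattern is the providers meta-argument on the module call; a sketch against the configuration above (it also assumes the aliased configurations are visible at the point where the module is called):

```hcl
# /MGMT/cicd.tf -- pass the aliased provider instead of a string variable
module "dev_cicd_iac_us_east_1" {
  source = "../modules/codebuild/"
  providers = {
    aws = aws.dev_us-east-1
  }
  input_aws_env = var.dev_aws_env
}
```

Inside /modules/codebuild/codebuild.tf the resource then drops its provider argument entirely and uses whatever configuration was passed in as aws.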

Terraform and AWS Assume Role

Given the existence of first_profile in ~/.aws/credentials
[first_profile]
aws_access_key_id=ACOYHFVDLCHVNOISYGV
aws_secret_access_key=RApidgudsphAFdIK+097dslvxchnv
and a profile for the backend role, roleA, whose role_arn is arn:aws:iam::123456789101:role/roleA, in ~/.aws/config
[profile roleA]
role_arn = arn:aws:iam::123456789101:role/roleA
source_profile = first_profile
Using the aws cli, I confirm that first_profile can assume roleA and has permissions to an s3 bucket and dynamodb table by running:
aws s3 ls s3://random-tf-state-bucket --profile roleA
aws dynamodb describe-table --table-name random-tf-state-lock-table --profile roleA --region us-east-2
The above commands do not return (AccessDenied), thus confirming access.
Expectation:
According to terraform documentation/blog and given a main.tf file set up like the below:
terraform {
  required_version = "1.0.4"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.53.0"
    }
  }
}

terraform {
  backend "s3" {
  }
}

provider "aws" {
  region                  = "us-east-1"
  profile                 = "first_profile"
  shared_credentials_file = "~/.aws/credentials"

  assume_role {
    role_arn = "arn:aws:iam::123456789101:role/roleA"
  }
}
and s3.backend.tfvars file:
bucket         = "random-tf-state-bucket"
key            = "terraform.tfstate"
region         = "us-east-2"
dynamodb_table = "random-tf-state-lock-table"
encrypt        = true
running terraform init -backend-config=s3.backend.tfvars should work.
Result:
Initializing the backend...
╷
│ Error: error configuring S3 Backend: no valid credential sources for S3 Backend found.
│
│ Please see https://www.terraform.io/docs/language/settings/backends/s3.html
│ for more information about providing credentials.
│
│ Error: NoCredentialProviders: no valid providers in chain. Deprecated.
│ For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Question:
What step in this process am I missing?
A similar reported issue was helpful in getting to a solution.
Solution:
The key to this was realizing that the profile used to configure the S3 backend is its own thing - it is not tied to the provider block.
Thus s3.backend.tfvars ends up like this:
bucket         = "random-tf-state-bucket"
key            = "terraform.tfstate"
region         = "us-east-2"
dynamodb_table = "random-tf-state-lock-table"
encrypt        = true
profile        = "roleA"
and the provider block ended up looking like:
provider "aws" {
  region  = var.aws_region
  profile = var.some_other_profile

  assume_role {
    role_arn = "some_other_role_to_assume"
  }
}
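An alternative worth noting (a sketch, not from the original answer): rather than pre-wiring the roleA profile, the S3 backend can assume the role itself via a role_arn entry in the backend configuration, using first_profile only for base credentials:

```hcl
# s3.backend.tfvars -- variant letting the backend assume the role directly
bucket         = "random-tf-state-bucket"
key            = "terraform.tfstate"
region         = "us-east-2"
dynamodb_table = "random-tf-state-lock-table"
encrypt        = true
profile        = "first_profile"
role_arn       = "arn:aws:iam::123456789101:role/roleA"
```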

Terraform scripts throw " Invalid AWS Region: {var.AWS_REGION}"

When I run "terraform apply" I get the following error. I made sure my AMI is in the us-west-1 region.
Not sure what else could be the problem.
PS C:\terraform> terraform apply
Error: Invalid AWS Region: {var.AWS_REGION}
terraform.tfvars file
AWS_ACCESS_KEY="zzz"
AWS_SECRET_KEY="zzz"
provider.tf file
provider "aws" {
  access_key = "{var.AWS_ACCESS_KEY}"
  secret_key = "{var.AWS_SECRECT_KEY}"
  region     = "{var.AWS_REGION}"
}
vars.tf file
variable "AWS_ACCESS_KEY" {}
variable "AWS_SECRET_KEY" {}

variable "AWS_REGION" {
  default = "us-west-1"
}

variable "AMIS" {
  type = map(string)
  default = {
    us-west-1 = "ami-0948be9af4ee55d19"
  }
}
instance.tf
resource "aws_instance" "example" {
  ami           = "lookup(var.AMIS,var.AWS_REGION)"
  instance_type = "t2.micro"
}
You are literally passing the strings "{var.AWS_ACCESS_KEY}", "{var.AWS_SECRET_KEY}", and "{var.AWS_REGION}" to the provider.
Try this if you are using terraform 12+:
provider "aws" {
  access_key = var.AWS_ACCESS_KEY
  secret_key = var.AWS_SECRET_KEY
  region     = var.AWS_REGION
}
If you are using Terraform older than 0.12, then the values must be quoted interpolations using the $ sign:
provider "aws" {
  access_key = "${var.AWS_ACCESS_KEY}"
  secret_key = "${var.AWS_SECRET_KEY}"
  region     = "${var.AWS_REGION}"
}
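Note that instance.tf above has the same literal-string problem: "lookup(var.AMIS,var.AWS_REGION)" is quoted, so it is passed as plain text rather than evaluated. On Terraform 0.12+ it would need to be:

```hcl
resource "aws_instance" "example" {
  ami           = lookup(var.AMIS, var.AWS_REGION)
  instance_type = "t2.micro"
}
```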

How to inherit aws credentials from terraform in local-exec provisioner

I have a resource in terraform that I need to run an AWS command on after it is created. But I want it to run using the same AWS credentials that terraform is using. The AWS provider is using a profile which it then uses to assume a role:
provider "aws" {
  profile = "terraform"

  assume_role {
    role_arn = local.my_arn
  }
}
I had hoped that terraform would expose the necessary environment variables, but that doesn't seem to be the case. What is the best way to do this?
Could you use role assumption via the AWS configuration? Doc: Using an IAM Role in the AWS CLI
~/.aws/credentials:
[user1]
aws_access_key_id = ACCESS_KEY
aws_secret_access_key = SECRET_KEY
~/.aws/config:
[profile test-assume]
role_arn = arn:aws:iam::123456789012:role/test-assume
source_profile = user1
main.tf:
provider "aws" {
  profile = var.aws_profile
  version = "~> 2.0"
  region  = "us-east-1"
}

variable "aws_profile" {
  default = "test-assume"
}

resource "aws_instance" "instances" {
  ami           = "ami-009d6802948d06e52"
  instance_type = "t2.micro"
  subnet_id     = "subnet-002df68a36948517c"

  provisioner "local-exec" {
    command = "aws sts get-caller-identity --profile ${var.aws_profile}"
  }
}
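A small variant of the same idea (a sketch, reusing the variable above): local-exec accepts an environment map, so the profile can be exported to the child process instead of being passed as a flag:

```hcl
provisioner "local-exec" {
  command = "aws sts get-caller-identity"
  environment = {
    AWS_PROFILE = var.aws_profile
  }
}
```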
If you can't, here's a really messy way of doing it. I don't particularly recommend this method, but it will work. This has a dependency on jq, but you could also use something else to parse the output of the aws sts assume-role command.
main.tf:
provider "aws" {
  profile = var.aws_profile
  version = "~> 2.0"
  region  = "us-east-1"

  assume_role {
    role_arn = var.assume_role
  }
}

variable "aws_profile" {
  default = "default"
}

variable "assume_role" {
  default = "arn:aws:iam::123456789012:role/test-assume"
}

resource "aws_instance" "instances" {
  ami           = "ami-009d6802948d06e52"
  instance_type = "t2.micro"
  subnet_id     = "subnet-002df68a36948517c"

  provisioner "local-exec" {
    command = "aws sts assume-role --role-arn ${var.assume_role} --role-session-name Testing --profile ${var.aws_profile} --output json > test.json && export AWS_ACCESS_KEY_ID=`jq -r '.Credentials.AccessKeyId' test.json` && export AWS_SECRET_ACCESS_KEY=`jq -r '.Credentials.SecretAccessKey' test.json` && export AWS_SESSION_TOKEN=`jq -r '.Credentials.SessionToken' test.json` && aws sts get-caller-identity && rm test.json && unset AWS_ACCESS_KEY_ID && unset AWS_SECRET_ACCESS_KEY && unset AWS_SESSION_TOKEN"
  }
}

Add an `aws_acm_certificate` resource to a terraform file causes terraform to ignore vars

Using the aws_acm_certificate resources makes terraform ignore provided variables.
Here's a simple terraform file:
variable "aws_access_key_id" {}
variable "aws_secret_key" {}
variable "region" { default = "us-west-1" }

provider "aws" {
  alias      = "prod"
  region     = "${var.region}"
  access_key = "${var.aws_access_key_id}"
  secret_key = "${var.aws_secret_key}"
}

resource "aws_acm_certificate" "cert" {
  domain_name       = "foo.example.com"
  validation_method = "DNS"

  tags {
    project = "foo"
  }

  lifecycle {
    create_before_destroy = true
  }
}
Running validate, plan, or apply fails:
$ terraform validate -var-file=my.tfvars
$ cat my.tfvars
region = "us-west-2"
aws_secret_key = "secret"
aws_access_key_id = "not as secret"
There is nothing wrong with your code.
Please clean up and run again (only run the rm commands when you fully understand what you are doing):
rm -rf .terraform
rm terraform.tfstate*
terraform fmt
terraform get -update=true
terraform init
terraform plan
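If the problem persists on Terraform 0.12 or later, one thing to check (an assumption, since the actual error output is not shown above): on current AWS providers tags is an argument and takes an equals sign, so the block form tags { ... } fails validation. The resource would then look like:

```hcl
resource "aws_acm_certificate" "cert" {
  domain_name       = "foo.example.com"
  validation_method = "DNS"

  tags = {
    project = "foo"
  }

  lifecycle {
    create_before_destroy = true
  }
}
```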
