How to inherit AWS credentials from Terraform in a local-exec provisioner

I have a resource in Terraform that I need to run an AWS CLI command against after it is created, and I want that command to run with the same AWS credentials that Terraform is using. The AWS provider is configured with a profile, which it then uses to assume a role:
provider "aws" {
profile = "terraform"
assume_role {
role_arn = local.my_arn
}
}
I had hoped that Terraform would expose the necessary environment variables, but that doesn't seem to be the case. What is the best way to do this?

Could you use role assumption via the AWS CLI configuration? See the AWS docs: Using an IAM Role in the AWS CLI.
~/.aws/config (note that in this file, named profiles need the profile prefix):

[profile user1]
aws_access_key_id = ACCESS_KEY
aws_secret_access_key = SECRET_KEY

[profile test-assume]
role_arn = arn:aws:iam::123456789012:role/test-assume
source_profile = user1
main.tf:

provider "aws" {
  profile = var.aws_profile
  version = "~> 2.0"
  region  = "us-east-1"
}

variable "aws_profile" {
  default = "test-assume"
}
resource "aws_instance" "instances" {
ami = "ami-009d6802948d06e52"
instance_type = "t2.micro"
subnet_id = "subnet-002df68a36948517c"
provisioner "local-exec" {
command = "aws sts get-caller-identity --profile ${var.aws_profile}"
}
}
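Because the AWS CLI honors the AWS_PROFILE environment variable, you can also set the profile once through the provisioner's environment instead of repeating --profile on every command. A minimal sketch; the second command is just a hypothetical example and assumes the assumed role is allowed to list S3 buckets:

provisioner "local-exec" {
  command = "aws sts get-caller-identity && aws s3 ls"

  environment = {
    # AWS_PROFILE is read by the AWS CLI, so every command in this shell
    # resolves credentials through the same assume-role profile.
    AWS_PROFILE = var.aws_profile
  }
}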
If you can't, here's a really messy way of doing it. I don't particularly recommend this method, but it works. It has a dependency on jq, though you could use something else to parse the output of the aws sts assume-role command.
main.tf:

provider "aws" {
  profile = var.aws_profile
  version = "~> 2.0"
  region  = "us-east-1"

  assume_role {
    role_arn = var.assume_role
  }
}

variable "aws_profile" {
  default = "default"
}

variable "assume_role" {
  default = "arn:aws:iam::123456789012:role/test-assume"
}
resource "aws_instance" "instances" {
ami = "ami-009d6802948d06e52"
instance_type = "t2.micro"
subnet_id = "subnet-002df68a36948517c"
provisioner "local-exec" {
command = "aws sts assume-role --role-arn ${var.assume_role} --role-session-name Testing --profile ${var.aws_profile} --output json > test.json && export AWS_ACCESS_KEY_ID=`jq -r '.Credentials.AccessKeyId' test.json` && export AWS_SECRET_ACCESS_KEY=`jq -r '.Credentials.SecretAccessKey' test.json` && export AWS_SESSION_TOKEN=`jq -r '.Credentials.SessionToken' test.json` && aws sts get-caller-identity && rm test.json && unset AWS_ACCESS_KEY_ID && unset AWS_SECRET_ACCESS_KEY && unset AWS_SESSION_TOKEN"
}
}
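A somewhat tidier variant of the same idea runs everything in a single shell via a heredoc, so no temporary file or unset calls are needed. This is only a sketch under the same assumptions (the AWS CLI and jq are on the PATH):

provisioner "local-exec" {
  interpreter = ["/bin/bash", "-c"]
  command     = <<-EOT
    # Assume the role once and capture the JSON response.
    creds=$(aws sts assume-role --role-arn ${var.assume_role} \
      --role-session-name Testing --profile ${var.aws_profile} --output json)

    # Export the temporary credentials for the commands that follow;
    # they vanish when this shell exits.
    export AWS_ACCESS_KEY_ID=$(echo "$creds" | jq -r '.Credentials.AccessKeyId')
    export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | jq -r '.Credentials.SecretAccessKey')
    export AWS_SESSION_TOKEN=$(echo "$creds" | jq -r '.Credentials.SessionToken')

    aws sts get-caller-identity
  EOT
}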

Related

How to Configure Terraform AWS Provider?

I'm trying to create an EC2 instance as described in the Terraform documentation.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "aws" {
  access_key = "Acxxxxxxxxxxxxxxxxx"
  secret_key = "UxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxO"
  region     = "ap-south-1"
}

resource "aws_instance" "app_server" {
  ami           = "ami-076e3a557efe1aa9c"
  instance_type = "t2.micro"

  tags = {
    Name = "ExampleAppServerInstance"
  }
}
But I am facing this error: error configuring Terraform AWS Provider: loading configuration: credential type source_profile profile default.
I have tried exporting the credentials as environment variables and configuring the default profile, but nothing works for me.
What am I doing wrong here?
I removed .terraform and the lock file (.terraform.lock.hcl) and tried a fresh terraform init.
Thanks for this question.
I'd rather go with the following:
Configure an AWS profile, either with:
aws configure
or by editing the files directly:
vim ~/.aws/config
vim ~/.aws/credentials
and write a new profile name (or use the default) as follows:
~/.aws/config:

[default]
region = us-east-1
output = json

[profile TERRAFORM]
region = us-east-1
output = json
~/.aws/credentials:

# Sitech
[default]
aws_access_key_id = A****
aws_secret_access_key = B*********

[TERRAFORM]
aws_access_key_id = A****
aws_secret_access_key = B*********
Then use the provider's profile argument rather than an access key and secret access key:
main.tf:

provider "aws" {
  profile = var.aws_profile
  region  = var.main_aws_region
}

terraform.tfvars:

aws_profile     = "TERRAFORM"
main_aws_region = "us-east-1"
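Before running terraform init, it is worth confirming that the profile resolves to the identity you expect; a quick check, assuming the AWS CLI is installed:

aws sts get-caller-identity --profile TERRAFORM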

Terraform scripts throw "Invalid AWS Region: {var.AWS_REGION}"

When I run terraform apply I get the following error. I made sure my AMI is in the us-west-1 region; I'm not sure what else could be the problem.
PS C:\terraform> terraform apply
Error: Invalid AWS Region: {var.AWS_REGION}
terraform.tfvars file:

AWS_ACCESS_KEY = "zzz"
AWS_SECRET_KEY = "zzz"
provider.tf file:

provider "aws" {
  access_key = "{var.AWS_ACCESS_KEY}"
  secret_key = "{var.AWS_SECRET_KEY}"
  region     = "{var.AWS_REGION}"
}
vars.tf file:

variable "AWS_ACCESS_KEY" {}
variable "AWS_SECRET_KEY" {}

variable "AWS_REGION" {
  default = "us-west-1"
}

variable "AMIS" {
  type = map(string)
  default = {
    us-west-1 = "ami-0948be9af4ee55d19"
  }
}
instance.tf:

resource "aws_instance" "example" {
  ami           = "lookup(var.AMIS,var.AWS_REGION)"
  instance_type = "t2.micro"
}
You are literally passing the strings "{var.AWS_ACCESS_KEY}", "{var.AWS_SECRET_KEY}", and "{var.AWS_REGION}" to the provider.
Try this if you are using Terraform 0.12 or later:
provider "aws"{
access_key = var.AWS_ACCESS_KEY
secret_key = var.AWS_SECRET_KEY
region = var.AWS_REGION
}
If you are using a Terraform version older than 0.12, the values must be quoted interpolations using the ${...} syntax:

provider "aws" {
  access_key = "${var.AWS_ACCESS_KEY}"
  secret_key = "${var.AWS_SECRET_KEY}"
  region     = "${var.AWS_REGION}"
}
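Note that instance.tf in the question has the same problem: the lookup call is wrapped in quotes, so the AMI is set to that literal string. On Terraform 0.12+ it should be an unquoted expression:

resource "aws_instance" "example" {
  ami           = lookup(var.AMIS, var.AWS_REGION)
  instance_type = "t2.micro"
}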

How to send local files using Terraform Cloud as remote backend?

I am creating an AWS EC2 instance, and I am using Terraform Cloud as the backend.
in ./main.tf:

terraform {
  required_version = "~> 0.12"

  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "organization"
    workspaces { prefix = "test-dev-" }
  }
}
in ./modules/instances/function.tf:

resource "aws_instance" "test" {
  ami                    = "${var.ami_id}"
  instance_type          = "${var.instance_type}"
  subnet_id              = "${var.private_subnet_id}"
  vpc_security_group_ids = ["${aws_security_group.test_sg.id}"]
  key_name               = "${var.test_key}"

  tags = {
    Name     = "name"
    Function = "function"
  }

  provisioner "remote-exec" {
    inline = [
      "sudo useradd someuser"
    ]

    connection {
      host        = "${self.public_ip}"
      type        = "ssh"
      user        = "ubuntu"
      private_key = "${file("~/.ssh/mykey.pem")}"
    }
  }
}
and as a result I got the following error:
Call to function "file" failed: no file exists at /home/terraform/.ssh/...
So what is happening here is that Terraform is trying to find the file on the Terraform Cloud worker instead of on my local machine. How can I transfer a file from my local machine while still using Terraform Cloud?
There is no straightforward way to do what I asked in the question. In the end I ended up uploading the key into AWS with its CLI, like this:
aws ec2 import-key-pair --key-name "name_for_the_key" --public-key-material file:///home/user/.ssh/name_for_the_key.pub
and then referencing it like this:

resource "aws_instance" "test" {
  ami = "${var.ami_id}"
  ...
  key_name = "name_for_the_key"
  ...
}
Note: yes, file:// looks like the most Windows-ish syntax ever, but you have to use it on Linux too.
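An alternative that stays inside Terraform is to pass the public key material in as a variable (for example, as a Terraform Cloud workspace variable) and create the key pair with an aws_key_pair resource. A sketch; the variable name is just an illustration:

variable "ssh_public_key" {
  type        = string
  description = "Public key material, e.g. set as a Terraform Cloud workspace variable"
}

resource "aws_key_pair" "test" {
  key_name   = "name_for_the_key"
  public_key = var.ssh_public_key
}

resource "aws_instance" "test" {
  ami      = "${var.ami_id}"
  key_name = aws_key_pair.test.key_name
  # ...
}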

provisioner local-exec access to each.key

Terraform v0.12.6, provider.aws v2.23.0.
I am trying to create two AWS instances using the new for_each construct. I don't actually think this is an AWS provider issue; it's more of a terraform/for_each/provisioner issue.
It worked as advertised until I tried to add a local-exec provisioning step.
I don't know how to modify the local-exec example to work with the for_each variable; I am getting a Terraform error about a cycle.
locals {
  instances = {
    s1 = {
      private_ip = "192.168.47.191"
    },
    s2 = {
      private_ip = "192.168.47.192"
    },
  }
}

provider "aws" {
  profile = "default"
  region  = "us-east-1"
}

resource "aws_instance" "example" {
  for_each          = local.instances
  ami               = "ami-032138b8a0ee244c9"
  instance_type     = "t2.micro"
  availability_zone = "us-east-1c"
  private_ip        = each.value["private_ip"]

  ebs_block_device {
    device_name = "/dev/sda1"
    volume_size = 2
  }

  provisioner "local-exec" {
    command = "echo ${aws_instance.example[each.key].public_ip} >> ip_address.txt"
  }
}
But I get this error:
./terraform apply
Error: Cycle: aws_instance.example["s2"], aws_instance.example["s1"]
Should the for_each each.key variable be expected to work in a provisioning step? There are other ways to get the public_ip later, either by reading the state file or by querying AWS with the instance IDs, but accessing the resource's attributes within the local-exec provisioner would come in handy in many ways.
Try using the self object:

provisioner "local-exec" {
  command = "echo ${self.public_ip} >> ip_address.txt"
}
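self refers to the resource instance currently being provisioned, which is what breaks the cycle: the provisioner no longer has to reference aws_instance.example as a whole. each.key is still available inside the provisioner too, for example to label each line (a small variation on the same fix):

provisioner "local-exec" {
  command = "echo ${each.key}: ${self.public_ip} >> ip_address.txt"
}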
Note to readers: resource-level for_each is a relatively new feature in Terraform and requires version 0.12.6 or later.

Add an `aws_acm_certificate` resource to a terraform file causes terraform to ignore vars

Using the aws_acm_certificate resource makes Terraform ignore the provided variables.
Here's a simple Terraform file:
variable "aws_access_key_id" {}
variable "aws_secret_key" {}
variable "region" { default = "us-west-1" }
provider "aws" {
alias = "prod"
region = "${var.region}"
access_key = "${var.aws_access_key_id}"
secret_key = "${var.aws_secret_key}"
}
resource "aws_acm_certificate" "cert" {
domain_name = "foo.example.com"
validation_method = "DNS"
tags {
project = "foo"
}
lifecycle {
create_before_destroy = true
}
}
Running validate, plan, or apply fails:
$ terraform validate -var-file=my.tfvars
$ cat my.tfvars
region = "us-west-2"
aws_secret_key = "secret"
aws_access_key_id = "not as secret"
There is nothing wrong with your code.
Please clean up and run again (only run the rm commands when you fully understand what you are doing):
rm -rf .terraform
rm terraform.tfstate*
terraform fmt
terraform get -update=true
terraform init
terraform plan
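One thing to keep in mind for later: if you move to Terraform 0.12 or newer, the tags block in aws_acm_certificate must become an argument with =, or validation will fail. A sketch of the 0.12-style resource:

resource "aws_acm_certificate" "cert" {
  domain_name       = "foo.example.com"
  validation_method = "DNS"

  tags = {
    project = "foo"
  }

  lifecycle {
    create_before_destroy = true
  }
}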
