How to pass aws provider credentials to null_resource local-exec provisioner? - terraform

Has anyone come up with a decent way to do this?
In short: you have a provider "aws", configured via env vars or a profile, with or without STS; it doesn't matter. Maybe you have several.
Now you want to call out to the AWS CLI because something isn't well implemented in the aws provider. In my case, I need to generate and upload some sensitive information directly to an S3 bucket, and I do not want it in the state file. In any case it was s3 sync, so the action is idempotent.
However, there appears to be no way to pass the provider's credentials - whether static keys, env vars, a profile, or temporary STS credentials - to a null_resource block:
provider "aws" {
# set using explicit setting or profile or however
alias = "myaws"
}
resource "null_resource" "cli" {
provisioner "local-exec" {
command = "aws <do something>"
environment {
# happy to pass AWS_PROFILE or AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY here...
# if there were a way to retrieve it from the "myaws" provider
}
}
}
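Since the provider configuration can't be read back, one workaround is to define the profile once as a variable and feed it to both the provider and the provisioner's environment. A minimal sketch (the variable name is my own):
variable "aws_profile" {
  type        = string
  description = "Profile shared by the provider and the AWS CLI"
}

provider "aws" {
  alias   = "myaws"
  profile = var.aws_profile
}

resource "null_resource" "cli" {
  provisioner "local-exec" {
    command = "aws <do something>"

    # the AWS CLI reads AWS_PROFILE just like the provider does
    environment = {
      AWS_PROFILE = var.aws_profile
    }
  }
}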

You can pass an AWS role_arn into a local-exec script. For example:
variable "aws_role" {
type = string
description = "AWS role for local exec to assume"
default = "arn:aws:iam::123456789012:role/DBMigrateRole"
}
resource "null_resource" "call-db-migrate" {
provisioner "local-exec" {
interpreter = ["/bin/bash", "-c"]
command = <<EOF
set -e
CREDENTIALS=(`aws sts assume-role \
--role-arn ${var.aws_role} \
--role-session-name "db-migration-cli" \
--query "[Credentials.AccessKeyId,Credentials.SecretAccessKey,Credentials.SessionToken]" \
--output text`)
unset AWS_PROFILE
export AWS_DEFAULT_REGION=us-east-1
export AWS_ACCESS_KEY_ID="$${CREDENTIALS[0]}"
export AWS_SECRET_ACCESS_KEY="$${CREDENTIALS[1]}"
export AWS_SESSION_TOKEN="$${CREDENTIALS[2]}"
aws sts get-caller-identity
EOF
}
}
Credit to https://github.com/hashicorp/terraform-provider-aws/issues/8242#issuecomment-586687360 .

Related

How to securely supply an RDS secret retrieved from Vault (using null_resource) to the Terraform code to make a db connection?

Goal - Get a db username/pwd from Vault and supply it to the Terraform code securely to make a db connection without exposing any credentials:
I have a null resource:
resource "null_resource" "read_cred" {
  triggers = {
    trigger_condition = timestamp()
  }

  provisioner "local-exec" {
    interpreter = ["bash", "-c"]
    command     = <<EOT
"***********code to get vault token******************"
user = username
pwd = password
EOT
  }
}
How do I use the above user/pwd in my Terraform code? I am not able to get them into outputs.tf. I need to make the db connection using these credentials securely.
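Note that a local-exec provisioner's stdout is never captured as a Terraform value, which is why nothing shows up in outputs.tf. One approach that can work is the external data source, which runs a program and reads a JSON object from its stdout. A rough sketch, where get-creds.sh and the key names are my own placeholders:
data "external" "vault_creds" {
  # get-creds.sh must print a single JSON object to stdout,
  # e.g. {"user": "...", "pwd": "..."}
  program = ["bash", "${path.module}/get-creds.sh"]
}
The values are then available as data.external.vault_creds.result.user and data.external.vault_creds.result.pwd; they will still land in the state file, so the state itself must be treated as sensitive.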

Local-exec destroy triggers - ignore changes to google access token

I have a null_resource whose local-exec block makes a curl call with a Google access token.
Since that's executed during destroy, I am forced to reference the token through the triggers map.
Each time I run terraform apply, the null_resource has to be replaced because the Google access token keeps changing.
resource "null_resource" "env_to_group" {
for_each = local.map_env_group
triggers = {
env_id = google_apigee_environment.apigee[each.value.env].id
group_id = google_apigee_envgroup.apigee[each.value.group].id
access_token = data.google_client_config.current.access_token
project = var.project
group = each.value.group
env = each.value.env
}
provisioner "local-exec" {
when = destroy
command = <<EOF
curl -o /dev/null -s -w "%%{http_code}" -H "Authorization: Bearer ${self.triggers.access_token}"\
"https://apigee.googleapis.com/v1/organizations/${self.triggers.project}/envgroups/${self.triggers.group}/attachments/${self.triggers.env}" \
-X DELETE -H "content-type:application/json"
EOF
}
}
Is there a way to ignore changes to google access token, or is there a way not having to specify access token var within the triggers block?
I think you should still be able to accomplish this using the depends_on meta-argument and a separate resource for making the ephemeral access token available to the command during the destroy lifecycle.
resource "local_file" "access_token" {
content = data.google_client_config.current.access_token
filename = "/var/share/access-token"
}
resource "null_resource" "env_to_group" {
for_each = local.map_env_group
triggers = {
env_id = google_apigee_environment.apigee[each.value.env].id
group_id = google_apigee_envgroup.apigee[each.value.group].id
project = var.project
group = each.value.group
env = each.value.env
}
depends_on = [local_file.access_token]
provisioner "local-exec" {
when = destroy
command = <<EOF
curl -o /dev/null -s -w "%%{http_code}" -H "Authorization: Bearer $(cat /var/share/access-token)"\
"https://apigee.googleapis.com/v1/organizations/${self.triggers.project}/envgroups/${self.triggers.group}/attachments/${self.triggers.env}" \
-X DELETE -H "content-type:application/json"
EOF
}
}
Another option would be to pass credentials to the command through which it could obtain the access token for the relevant service account via API calls, or to rely on Application Default Credentials if they are configured; a sketch of the latter follows.
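Assuming gcloud is installed and authenticated on the machine running Terraform, the destroy-time command could fetch a fresh token itself (a sketch, untested):
  provisioner "local-exec" {
    when    = destroy
    command = <<EOF
# Fetch a fresh token at destroy time instead of storing one in triggers
TOKEN=$(gcloud auth print-access-token)
curl -o /dev/null -s -w "%%{http_code}" -H "Authorization: Bearer $TOKEN" \
  "https://apigee.googleapis.com/v1/organizations/${self.triggers.project}/envgroups/${self.triggers.group}/attachments/${self.triggers.env}" \
  -X DELETE -H "content-type:application/json"
EOF
  }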

How to inherit aws credentials from terraform in local-exec provisioner

I have a resource in terraform that I need to run an AWS command on after it is created. But I want it to run using the same AWS credentials that terraform is using. The AWS provider is using a profile which it then uses to assume a role:
provider "aws" {
profile = "terraform"
assume_role {
role_arn = local.my_arn
}
}
I had hoped that terraform would expose the necessary environment variables, but that doesn't seem to be the case. What is the best way to do this?
Could you use role assumption via the AWS configuration? Doc: Using an IAM Role in the AWS CLI
~/.aws/config:
[profile user1]
aws_access_key_id = ACCESS_KEY
aws_secret_access_key = SECRET_KEY

[profile test-assume]
role_arn = arn:aws:iam::123456789012:role/test-assume
source_profile = user1
main.tf:
provider "aws" {
profile = var.aws_profile
version = "~> 2.0"
region = "us-east-1"
}
variable "aws_profile" {
default = "test-assume"
}
resource "aws_instance" "instances" {
ami = "ami-009d6802948d06e52"
instance_type = "t2.micro"
subnet_id = "subnet-002df68a36948517c"
provisioner "local-exec" {
command = "aws sts get-caller-identity --profile ${var.aws_profile}"
}
}
If you can't, here's a really messy way of doing it. I don't particularly recommend this method, but it works. It depends on jq, although you could use something else to parse the output of the aws sts assume-role command.
main.tf:
provider "aws" {
profile = var.aws_profile
version = "~> 2.0"
region = "us-east-1"
assume_role {
role_arn = var.assume_role
}
}
variable "aws_profile" {
default = "default"
}
variable "assume_role" {
default = "arn:aws:iam::123456789012:role/test-assume"
}
resource "aws_instance" "instances" {
ami = "ami-009d6802948d06e52"
instance_type = "t2.micro"
subnet_id = "subnet-002df68a36948517c"
provisioner "local-exec" {
command = "aws sts assume-role --role-arn ${var.assume_role} --role-session-name Testing --profile ${var.aws_profile} --output json > test.json && export AWS_ACCESS_KEY_ID=`jq -r '.Credentials.AccessKeyId' test.json` && export AWS_SECRET_ACCESS_KEY=`jq -r '.Credentials.SecretAccessKey' test.json` && export AWS_SESSION_TOKEN=`jq -r '.Credentials.SessionToken' test.json` && aws sts get-caller-identity && rm test.json && unset AWS_ACCESS_KEY_ID && unset AWS_SECRET_ACCESS_KEY && unset AWS_SESSION_TOKEN"
}
}

How to restart an EC2 instance using Terraform without destroying it?

I am wondering how we can stop and restart an AWS EC2 instance created using Terraform. Is there any way to do that?
As you asked for an example, and comments have a length limit, I'm posting this as an answer using local-exec.
I assume you have already set up the AWS CLI with aws configure (or aws configure --profile test).
Here is a complete example that reboots an instance; change the VPC security group ID, subnet, key name, etc. to match your environment:
provider "aws" {
region = "us-west-2"
profile = "test"
}
resource "aws_instance" "ec2" {
ami = "ami-0f2176987ee50226e"
instance_type = "t2.micro"
associate_public_ip_address = false
subnet_id = "subnet-45454566645"
vpc_security_group_ids = ["sg-45454545454"]
key_name = "mytest-ec2key"
tags = {
Name = "Test EC2 Instance"
}
}
resource "null_resource" "reboo_instance" {
provisioner "local-exec" {
on_failure = "fail"
interpreter = ["/bin/bash", "-c"]
command = <<EOT
echo -e "\x1B[31m Warning! Restarting instance having id ${aws_instance.ec2.id}.................. \x1B[0m"
# aws ec2 reboot-instances --instance-ids ${aws_instance.ec2.id} --profile test
# To stop instance
aws ec2 stop-instances --instance-ids ${aws_instance.ec2.id} --profile test
echo "***************************************Rebooted****************************************************"
EOT
}
# this setting will trigger script every time,change it something needed
triggers = {
always_run = "${timestamp()}"
}
}
Now run terraform apply.
Once the instance is created and you later want to reboot or stop it, just run
terraform apply -target null_resource.reboot_instance
and watch the logs.
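To bring a stopped instance back up, the same pattern should work with the start-instances subcommand; a sketch along the same lines (the resource name is my own):
resource "null_resource" "start_instance" {
  provisioner "local-exec" {
    # start-instances is the counterpart of stop-instances
    command = "aws ec2 start-instances --instance-ids ${aws_instance.ec2.id} --profile test"
  }

  triggers = {
    always_run = timestamp()
  }
}
Invoke it with terraform apply -target null_resource.start_instance.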
I have found a simpler way to do it.
provisioner "local-exec" {
command = "ssh -tt -o StrictHostKeyChecking=no
someuser#${aws_eip.ec2_public_ip.public_ip} sudo 'shutdown -r'"
}
Using remote-exec:
provisioner "remote-exec" {
inline = [
"sudo /usr/sbin/shutdown -r 1"
]
}
The -r 1 delays the reboot by one minute, which prevents the remote-exec command from exiting with a non-zero code when the connection drops.

Passing in variables assigned in Shell script

I am trying to run a Terraform deployment via a shell script. Within the script I first dynamically collect the access key for my Azure storage account and assign it to a variable, then use that variable in a -var assignment on the terraform command line. This method works great when configuring the backend for remote state, but it is not working for a deployment. The other variables used in the template are pulled from a terraform.tfvars file. Below are my shell script and Terraform template:
Shell script:
#!/bin/bash
set -eo pipefail
subscription_name="Visual Studio Enterprise with MSDN"
tfstate_storage_resource_group="terraform-state-rg"
tfstate_storage_account="terraformtfstatesa"
az account set --subscription "$subscription_name"
tfstate_storage_access_key=$(
az storage account keys list \
--resource-group "$tfstate_storage_resource_group" \
--account-name "$tfstate_storage_account" \
--query '[0].value' -o tsv
)
echo $tfstate_storage_access_key
terraform apply \
-var "access_key=$tfstate_storage_access_key"
Deployment template:
provider "azurerm" {
subscription_id = "${var.sub_id}"
}
data "terraform_remote_state" "rg" {
backend = "azurerm"
config {
storage_account_name = "terraformtfstatesa"
container_name = "terraform-state"
key = "rg.stage.project.terraform.tfstate"
access_key = "${var.access_key}"
}
}
resource "azurerm_storage_account" "my_table" {
name = "${var.storage_account}"
resource_group_name = "${data.terraform_remote_state.rg.rgname}"
location = "${var.region}"
account_tier = "Standard"
account_replication_type = "LRS"
}
I have tried defining the variable in my terraform.tfvars file:
storage_account = "appastagesa"
les_table_name = "appatable
region = "eastus"
sub_id = "abc12345-099c-1234-1234-998899889988"
access_key = ""
The access_key definition appears to get ignored.
I then tried not using a terraform.tfvars file, and created the variables.tf file below:
variable "storage_account" {
  description = "Name of the storage account to create"
  default     = "appastagesa"
}

variable "les_table_name" {
  description = "Name of the App table to create"
  default     = "appatable"
}

variable "region" {
  description = "The region where resources will be deployed (ex. eastus, eastus2, etc.)"
  default     = "eastus"
}

variable "sub_id" {
  description = "The ID of the subscription to deploy into"
  default     = "abc12345-099c-1234-1234-998899889988"
}

variable "access_key" {}
I then modified my deploy.sh script to use the line below to run my terraform deployment:
terraform apply \
-var "access_key=$tfstate_storage_access_key" \
-var-file="variables.tf"
This results in the following error:
invalid value "variables.tf" for flag -var-file: multiple map declarations not supported for variables
Usage: terraform apply [options] [DIR-OR-PLAN]
After playing with this for hours... I am almost embarrassed by what the problem turned out to be, but I am also frustrated with Terraform because of the time I wasted on this issue.
I had all of my variables defined in my variables.tf file, all but one with default values; the one without a default I was passing on the command line, and the command line was where the problem was. Because of the documentation I had read, I thought I had to tell Terraform where my variable definitions were using the -var-file option. It turns out you don't: -var-file is only for .tfvars files that assign values, while variable declarations in any .tf file (such as variables.tf) are loaded automatically. All I had to do was use -var for the variable with no default, and Terraform picked up variables.tf on its own. Frustrating. I am in love with Terraform, but the one negative I would give it is that the documentation is lacking.
