How to encrypt passwords or sensitive data in Terraform?

I want to create an RDS instance and the entire infrastructure it requires in AWS, but I don't understand the security side of Terraform. I want to encrypt the sensitive data in .tfstate at least, e.g. the password/username for the RDS instance. What is the best way to store sensitive data for .tfstate? If this is not supported, please suggest other ways to do it. Thank you.

Using the AWS provider as an example: I recommend saving the .tfstate file to an S3 bucket and setting a policy on it so that only nominated roles have permission to access the bucket and the related KMS key.
terraform {
  required_version = "~> 0.10"

  backend "s3" {
    bucket     = "<global_unique_bucket_name>"
    key        = "development/vpc.tfstate"
    region     = "ap-southeast-2"
    kms_key_id = "alias/terraform"
    encrypt    = true
  }
}
Always enable versioning on that S3 bucket.
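As a sketch of how that can be managed from Terraform itself (assuming a recent AWS provider, v4 or later; the bucket name is a placeholder), versioning and default KMS encryption for the state bucket look like this:

resource "aws_s3_bucket" "tfstate" {
  bucket = "<global_unique_bucket_name>" # placeholder, as above
}

# Keep old state versions so an accidental overwrite or corruption
# can be rolled back.
resource "aws_s3_bucket_versioning" "tfstate" {
  bucket = aws_s3_bucket.tfstate.id

  versioning_configuration {
    status = "Enabled"
  }
}

# Encrypt state objects at rest with the same KMS key the backend uses.
resource "aws_s3_bucket_server_side_encryption_configuration" "tfstate" {
  bucket = aws_s3_bucket.tfstate.id

  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = "alias/terraform"
    }
  }
}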

Related

Where is the storage account access key set in Terraform for access to the state file?

We use Terraform to deploy to our Microsoft Azure tenants.
One of our environments connects to a storage account for its state file. For that storage account we need to rotate the keys, but I don't understand where I am meant to set the new key. I have been looking at the documentation and it sounds like the backend setting does this, but I don't understand how it is done exactly.
The code we have looks like:
terraform {
  backend "azurerm" {
    resource_group_name  = "ourresourcegroup"
    storage_account_name = "ourstorageaccount"
    container_name       = "tstate"
    key                  = "terraform.tfstate"
  }
}
Can someone please help me understand where I set the storage account keys?
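Not an authoritative answer, but a common pattern is to keep the key out of the .tf file entirely and supply it at init time, so rotating it never touches the code. A sketch against the backend block above:

terraform {
  backend "azurerm" {
    resource_group_name  = "ourresourcegroup"
    storage_account_name = "ourstorageaccount"
    container_name       = "tstate"
    key                  = "terraform.tfstate"
    # The storage account key could be set inline here as
    #   access_key = "<new rotated key>"
    # but it is safer to pass it out of band, e.g. via the ARM_ACCESS_KEY
    # environment variable or `terraform init -backend-config="access_key=..."`.
  }
}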

How can I connect to a Terraform state file in Azure with Terraform

I'm trying to set up a connection to Azure using Terraform. I have read that you need to use the following code if you are using a storage account key for the state file. However, what goes in the key field, and where would you put the code below? Do I have to save it in a file somewhere?
I got the info below from https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs
terraform {
  backend "azurerm" {
    storage_account_name = "tf-sa"
    container_name       = "tfstate"
    key                  = "???????"
    access_key           = "12345678examplekey"
  }
}
The state can be kept locally or in a remote location. If stored locally, the file is named terraform.tfstate; you can think of that as a key by which the state is fetched from the local filesystem. In the example you posted, Azure Blob Storage [1] is used. Since that is a remote state store and you might want to use the same storage for more than one state file, you need to define a unique key for each state file. If that were not the case, your state files would get overwritten all the time, so it is good practice to name the key something meaningful, e.g., key = "mysupercoolproject.tfstate".
In AWS S3 you can even define a key that looks like a "path", e.g., key = "path/to/my/supercool/state/file/something.tfstate". That might work with Azure Blob Storage as well.
This part of the code can be added to any file with a .tf extension within the directory you are running Terraform from. The usual convention is to call it backend.tf.
For more detailed explanations, check [2] and [3].
[1] https://www.terraform.io/language/settings/backends/azurerm
[2] https://www.terraform.io/language/settings/backends
[3] https://www.terraform.io/language/settings/backends/configuration
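To make that concrete, a minimal backend.tf for the setup in the question might look like the following (the storage account, container, and access key values are the question's placeholders; the key is named per the advice above):

terraform {
  backend "azurerm" {
    storage_account_name = "tf-sa"
    container_name       = "tfstate"
    key                  = "mysupercoolproject.tfstate" # unique per project
    access_key           = "12345678examplekey"
  }
}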

How can I upload Terraform state to an S3 bucket?

I have a project whose infra is managed by Terraform. I'd like to push the state to an S3 bucket so other teams can use it. Below is the backend configuration:
terraform {
  backend "s3" {
    bucket = "MY_BUCKET"
    key    = "tfstate"
    region = "ap-southeast-2"
  }
}
When I run terraform init I get the error below:
AccessDenied: Access Denied
status code: 403, request id: 107E6007C9C64805, host id: kWASxeq1msxvGPZIKdi+7htg3yncMFxW9PQuXdC8ouwsEHMhx8ZPu6dKGUGWzDtblC6WRg1P1ew=
Terraform failed to load the default state from the "s3" backend.
State migration cannot occur unless the state can be loaded. Backend
modification and state migration has been aborted. The state in both the
source and the destination remain unmodified. Please resolve the
above error and try again.
It seems that Terraform tries to load state from the S3 bucket rather than push to it. How can I configure Terraform to push state to S3?
I have configured the AWS profile in a .tf file:
provider "aws" {
region = "ap-southeast-2"
profile = "me"
}
The credentials for the current user have admin permissions on the bucket.
I was facing the same issue and found that the bucket mentioned in the backend.tf file had not been created in my AWS account. I created a bucket with the same name mentioned in the backend.tf file and it worked for me.
For further readers:
AWS credentials can be provided as #Thiago Arrais mentioned below. Another way to provide credentials in the backend block is to define a profile:
terraform {
  backend "s3" {
    profile = "me" # AWS profile
    bucket  = "MY_BUCKET"
    key     = "tfstate"
    region  = "ap-southeast-2"
  }
}
And your ~/.aws/credentials file has a profile named me with the access key and secret key defined in it as follows:
[me]
aws_access_key_id = access_key_value
aws_secret_access_key = secret_key_value
I had the exact same problem. When terraform { backend "s3" {} } is defined, that block is evaluated before the provider "aws" {} block. That's why the backend cannot find the credentials info defined in the provider block.
You're not providing the S3 credentials in the backend block. You'll need to set them there (access_key and secret_key parameters) or via environment variables (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY).
You'll also need to make sure that the bucket exists and that these credentials do have access to it.
By the way, you don't need an AWS provider block. The S3 backend is usable even if you don't manage AWS resources in your Terraform config.
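A sketch of the inline variant (the credential values are placeholders; the environment-variable route keeps secrets out of source control and is usually preferable):

terraform {
  backend "s3" {
    bucket     = "MY_BUCKET"
    key        = "tfstate"
    region     = "ap-southeast-2"
    access_key = "AKIAxxxxxxxxxxxx" # placeholder
    secret_key = "xxxxxxxxxxxxxxxx" # placeholder
  }
}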
For me, the issue was that the AWS region in backend.tf was different from the region the bucket was actually in.

How can I hide AWS credentials from an external program?

In my case I'm trying to hide the AWS access keys and secret access keys that are printed through outputs.
I tried to implement a solution, but unfortunately it prints the credentials in the plan. We have Terraform running in Jenkins, so whenever I push code/commits to GitHub, it spits the plan back out, exposing the credentials in the terraform plan output.
Although I have hidden them in outputs, I'm now printing them in the plan and exposing them in GitHub. I also tried to use sensitive = true in outputs, which would easily solve this problem, but my team wants to implement this solution instead :(
resource "aws_iam_access_key" "key" {
user = "${aws_iam_user.user.name}"
}
resource "null_resource" "access_key_shell" {
triggers = {
aws_user = "${aws_iam_user.user.name}" // triggering an alert on the user, since if we pass aws_iam_access_key, access key is visible in plan.
}
}
data "external" "stdout" {
depends_on = ["null_resource.access_key_shell"]
program = ["sh", "${path.module}/read.sh"]
query {
access_id = "${aws_iam_access_key.key.id}"
secret_id = "${aws_iam_access_key.key.secret}"
}
}
resource "null_resource" "contents_access" {
triggers = {
stdout = "${lookup(data.external.logstash_stdout.result, "access_key")}"
value = "${aws_iam_access_key.key.id}"
}
}
output "aws_iam_podcast_logstash_access_key" {
value = "${chomp(null_resource.contents_access.triggers["stdout"])}"
}
read.sh
#!/bin/bash
set -eu # note: `set -x` would echo the plaintext keys to stderr
# The external data source passes its query as a JSON object on stdin,
# not as environment variables, so read the two values from there.
eval "$(jq -r '@sh "access_id=\(.access_id) secret_id=\(.secret_id)"')"
access_key=$(aws kms encrypt --key-id alias/amp_key --plaintext "$access_id" --output text --query CiphertextBlob)
secret_key=$(aws kms encrypt --key-id alias/amp_key --plaintext "$secret_id" --output text --query CiphertextBlob)
# Emit a single JSON object on stdout for Terraform to parse.
jq -n --arg access_key "$access_key" --arg secret_key "$secret_key" '{access_key: $access_key, secret_key: $secret_key}'
My terraform plan :
<= data.external.stdout
id: <computed>
program.#: "2"
program.0: "sh"
program.1: "/Users/xxxx/projects/tf_iam_stage/read.sh"
query.%: "2"
query.access_id: "xxxxxxxx" ----> I want to hide these values from the plan
query.secret_id: "xxxxxxxxxxxxxxxxxxxxxx/x" ----> I want to hide these values from the plan
result.%: <computed>
Any help is appreciated, thanks in advance!
There are a couple of things going on here.
First, you are leaking your credentials because you are storing your .tfstate in GitHub. This one has an easy solution: add *.tfstate to your .gitignore, set up a remote backend, and if you use S3, check the bucket policies and ACLs to prevent public access.
Second, you are fetching the credentials at runtime, and at runtime Terraform displays everything unless you add the sensitive flag. So if you want to follow this approach, you are forced to use sensitive = true, no matter what your team says. However, why get the credentials that way? Why not add a new provider with those credentials, set an alias for this provider, and use it only for the resources that need those keys?
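A minimal sketch of that provider-alias idea, in the same 0.11-era syntax the question uses (the alias, profile, and resource names are hypothetical):

# A second provider configuration with its own credentials.
provider "aws" {
  alias   = "limited"
  region  = "ap-southeast-2"
  profile = "limited-user" # hypothetical profile holding the generated keys
}

# Only the resources that need those credentials reference the alias.
resource "aws_s3_bucket" "example" {
  provider = "aws.limited"
  bucket   = "my-example-bucket"
}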
In your scenario, you would do best to go with the Remote State approach.
Remote State allows Terraform to store the state in a remote store. Terraform supports storing state in places like Terraform Enterprise, Consul, S3, and more.
The setup is to create a bucket on AWS S3; it should not be readable or writeable by anyone except the user that will be used for Terraform.
The code I added was:
terraform {
  backend "s3" {
    bucket = "my-new-bucket"
    key    = "state/key"
    region = "eu-west-1"
  }
}
This simply tells Terraform to use S3 as the backend for storing tfstate files.
Don't forget to run terraform init; it's required, and Terraform will notice that you changed from storing state locally to storing it in S3 and offer to migrate it.
Once that is done, you can delete the local tfstate files, safe in the knowledge that your details are stored safely in S3.
Here are some useful docs: Click docs
The second approach is to use a Terraform plugin; more info here: Terraform plugin
Good luck!

Terraform profile field usage in AWS provider

I have a $HOME/.aws/credentials file like this:
[config1]
aws_access_key_id=accessKeyId1
aws_secret_access_key=secretAccesskey1
[config2]
aws_access_key_id=accessKeyId2
aws_secret_access_key=secretAccesskey2
So I was expecting that, with this configuration, Terraform would choose the second set of credentials:
terraform {
  backend "s3" {
    bucket  = "myBucket"
    region  = "eu-central-1"
    key     = "path/to/terraform.tfstate"
    encrypt = true
  }
}
provider "aws" {
profile = "config2"
region = "eu-central-1"
}
But when I try terraform init, it says it hasn't found any valid credentials:
Initializing the backend...
Error: No valid credential sources found for AWS Provider.
Please see https://terraform.io/docs/providers/aws/index.html for more information on
providing credentials for the AWS Provider
As a workaround, I renamed config2 to default in my credentials file and removed the profile field from the provider block, which works, but I really need to use something like the first approach. What am I missing here?
Unfortunately, you need to provide the IAM credential configuration to the backend configuration as well as to your AWS provider configuration.
The S3 backend configuration takes the same parameters here as the AWS provider, so you can specify the backend configuration like this:
terraform {
  backend "s3" {
    bucket  = "myBucket"
    region  = "eu-central-1"
    key     = "path/to/terraform.tfstate"
    encrypt = true
    profile = "config2"
  }
}

provider "aws" {
  profile = "config2"
  region  = "eu-central-1"
}
There are a few reasons this needs to be done separately. One is that you can independently use different IAM credentials, accounts, and regions for the S3 bucket and for the resources you manage with the AWS provider. You might also want to use S3 as a backend even if you are creating resources in another cloud provider, or not using a cloud provider at all; Terraform can manage resources in a lot of places that don't have a way to store Terraform state. The main reason, though, is that backends are managed by the core Terraform binary rather than by the provider binaries, and backend initialisation happens before pretty much anything else.
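To illustrate that independence, here is a sketch (the profiles and regions are hypothetical) where the state lives in one account and region while the resources are managed with different credentials in another:

terraform {
  backend "s3" {
    bucket  = "state-bucket-in-account-a"
    region  = "eu-central-1"
    key     = "path/to/terraform.tfstate"
    encrypt = true
    profile = "state-admin" # credentials used only for reading/writing state
  }
}

provider "aws" {
  profile = "config2"   # credentials used for the managed resources
  region  = "eu-west-1" # can differ from the backend's region
}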
