Encrypt Terraform values in GitLab - Azure

I have the following flow:
You work on Terraform code locally
You push the code to GitLab
The GitLab pipeline automatically runs terraform init and terraform plan; terraform apply is triggered manually
The question is about secrets. I need to place them into Azure Key Vault, so I have a "value" field in the Terraform code which cannot be committed to GitLab as plain text. If I put the secrets into a file and encrypt it with git-crypt, the file reaches GitLab encrypted; since the pipeline cannot decrypt it, Terraform sees the content as still encrypted and creates a secret whose value is the ciphertext.
Any ideas how to do this?
I'm creating the secret via Terraform this way:
resource "azurerm_key_vault_secret" "example" {
name = "secret-sauce"
value = "szechuan"
key_vault_id = azurerm_key_vault.example.id
}

Can you please provide the Terraform code here? If you fetch keys from Key Vault using a 'data' block, then the value should not need to appear in plain text in GitLab. I'd still ask you to post the code here so that I can understand better.
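One common pattern (a minimal sketch, not from the thread; the variable name and the TF_VAR_secret_sauce_value CI/CD variable are my assumptions) is to keep the secret out of the repository entirely and feed it in through a masked GitLab CI/CD variable, since Terraform picks up any environment variable prefixed with TF_VAR_ as an input variable:

# variables.tf -- declare the secret as a sensitive input variable
variable "secret_sauce_value" {
  type      = string
  sensitive = true # keeps the value out of plan/apply output
}

# main.tf -- no plain-text value in the repository
resource "azurerm_key_vault_secret" "example" {
  name         = "secret-sauce"
  value        = var.secret_sauce_value
  key_vault_id = azurerm_key_vault.example.id
}

With a masked variable named TF_VAR_secret_sauce_value defined under Settings > CI/CD > Variables, the pipeline's terraform plan and terraform apply receive the value without it ever being committed.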

Related

Best way to store Terraform variable values without having them in source control

We have a code repo with our IaC in Terraform. This is in GitHub, and we're going to pull the code, build it, etc. However, we don't want the values of our variables in GitHub itself. So this may be a dumb question, but where do we store the values we need for our variables? If my Terraform requires an Azure subscription ID, where would I store the subscription ID? The vars files won't be in source control. The goal is that we'll be pulling the code into an Azure DevOps pipeline, so the pipeline will have to know where to go to get the input variable values. I hope that makes sense?
You can store your secrets in Azure Key Vault and retrieve them in Terraform with the azurerm_key_vault_secret data source.
data "azurerm_key_vault_secret" "example" {
name = "secret-sauce"
key_vault_id = data.azurerm_key_vault.existing.id
}
output "secret_value" {
value = data.azurerm_key_vault_secret.example.value
}
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/data-sources/key_vault_secret
There has to be a source of truth eventually.
You can store your values in the pipeline definitions as variables themselves and pass them into the Terraform configuration.
Usually it's a combination of tfvars files (dependent on the target environment) and some variables from the pipeline. If you do have vars in your pipelines though, the pipelines should be in code.
If the variables are sensitive then you need to connect to a secret management tool to get those variables.
If you have many environments, say 20, where the infra is all the same except for a single ID, you could have one pipeline definition (normally JSON or YAML) and reference it from the 20 pipelines you build; each of those 20 would have its unique value baked in for use at execution. That var is passed through to Terraform as the missing piece (see the sketch below).
There are other key-value property tracking systems out there, but Git definitely works well for this purpose.
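As a minimal sketch of that hand-off (the environment_id name and the ENV_ID pipeline variable are my assumptions, not from the answer), the per-environment value is declared as an ordinary input variable:

# variables.tf -- the one value that differs between environments
variable "environment_id" {
  type        = string
  description = "Unique ID baked into each environment's pipeline definition"
}

Each of the 20 pipelines would then run something like terraform apply -var="environment_id=$ENV_ID", with ENV_ID set in that pipeline's definition.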
You can use Azure DevOps secure files (Pipelines -> Library) for storing your credentials for each environment. You can create a tfvars file per environment with all your credentials, upload it as a secure file in Azure DevOps, and then download it in the pipeline with a DownloadSecureFile@1 task.

Accessing existing resource info from new resources

My title might not sum up my question correctly.
I have a Terraform stack that creates a resource group and a key vault, amongst other things. This has already been run and the resources exist.
I am now adding another resource to this same Terraform stack, namely a MySQL server. I know that if I just re-run the stack it will check the state file and just add my MySQL server.
However, as part of this MySQL server creation I am providing a password, and I want to write this password to the key vault that already exists.
If I was doing this from the start, my Terraform would look like:
resource "azurerm_key_vault_secret" "sqlpassword" {
name = "flagr-mysql-password"
value = random_password.sqlpassword.result
key_vault_id = azurerm_key_vault.shared_kv.id
depends_on = [
azurerm_key_vault.shared_kv
]
}
However, I believe that because the key vault already exists, this would error: the stack wouldn't know the value azurerm_key_vault.shared_kv.id unless I destroyed the key vault and allowed Terraform to recreate it. Is that correct?
I could replace azurerm_key_vault.shared_kv.id with the actual resource ID from Azure, but then if I were ever to run this stack to create a new environment, it would write the value into my old key vault, I presume?
I have done this recently for an AWS deployment: you would run terraform import on the azurerm_key_vault.shared_kv resource to bring it under Terraform management, and then you would be able to deploy azurerm_key_vault_secret.
To import, you will first need to write the azurerm_key_vault.shared_kv resource block so that it matches the existing key vault (this will require a few iterations).
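A minimal sketch of what that might look like (the vault name, resource group, location, and subscription ID are placeholders, not values from the question):

# Must be written to match the key vault that already exists in Azure.
resource "azurerm_key_vault" "shared_kv" {
  name                = "shared-kv"  # placeholder
  resource_group_name = "shared-rg"  # placeholder
  location            = "westeurope" # placeholder
  tenant_id           = data.azurerm_client_config.current.tenant_id
  sku_name            = "standard"
}

data "azurerm_client_config" "current" {}

Then bring the existing vault under management:

terraform import azurerm_key_vault.shared_kv /subscriptions/<subscription-id>/resourceGroups/shared-rg/providers/Microsoft.KeyVault/vaults/shared-kv

Iterate on the resource block until terraform plan reports no changes for the key vault; after that, the azurerm_key_vault_secret resource can reference azurerm_key_vault.shared_kv.id as usual.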

How to update an existing cloudflare_record in Terraform and GitHub Actions

I created my project with code from the HashiCorp tutorial "Host a static website with S3 and Cloudflare", but the tutorial didn't mention GitHub Actions. So, when I put my project in GitHub Actions, even though terraform plan and terraform apply complete successfully locally, I get errors on terraform apply:
Error: expected DNS record to not already be present but already exists
with cloudflare_record.site_cname ...
with cloudflare_record.www
I have two resources in my main.tf, one for the site domain and one for www, like the following:
resource "cloudflare_record" "site_cname" {
zone_id = data.cloudflare_zones.domain.zones[0].id
name = var.site_domain
value = aws_s3_bucket.site.website_endpoint
type = "CNAME"
ttl = 1
proxied = true
}
resource "cloudflare_record" "www" {
zone_id = data.cloudflare_zones.domain.zones[0].id
name = "www"
value = var.site_domain
type = "CNAME"
ttl = 1
proxied = true
}
If I remove these lines of code from my main.tf and then run terraform apply locally, I get the warning that this will destroy my resources.
Which should I do?
add an allow_overwrite somewhere? (I don't see examples of how to use this in the docs, and the ways I've tried to add it generated errors; see the sketch after this list)
remove the lines from main.tf, knowing the GitHub Actions run will destroy my cloudflare_record.www and cloudflare_record.site_cname, and knowing I can see my zone ID and CNAME if I log into Cloudflare, so maybe this code isn't necessary after the initial setup
run terraform import somewhere? If so, where do I find the zone ID and record ID?
or something else?
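For the first option, allow_overwrite is an ordinary argument on the cloudflare_record resource; a sketch (my illustration, not from the question) would be:

resource "cloudflare_record" "www" {
  zone_id         = data.cloudflare_zones.domain.zones[0].id
  name            = "www"
  value           = var.site_domain
  type            = "CNAME"
  ttl             = 1
  proxied         = true
  allow_overwrite = true # take over an existing record with the same name/type instead of erroring
}

Note that this only works around the symptom; the answers below point at the underlying cause, which is that the GitHub Actions run cannot see your local state.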
Where is your Terraform state? Did you store it locally or in a remote location?
That would explain why you don't have any problems locally and why GitHub Actions is trying to recreate the resources.
More information about terraform backend (where the state is stored) -> https://www.terraform.io/docs/language/settings/backends/index.html
And how to create one with S3 for example ->
https://www.terraform.io/docs/language/settings/backends/s3.html
It wouldn't be a problem for Terraform to drop and re-create DNS records, but for a better result you need to ensure that GitHub Actions has access to the current workspace state.
Since Terraform Cloud provides a free plan, there is no reason not to take advantage of it. Just create a workspace through their dashboard, add a "remote" backend configuration to your project (see the sketch below), and ensure that GitHub Actions uses a Terraform API token at runtime (you would set it via GitHub repository settings > Secrets).
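A minimal sketch of such a backend block (the organization and workspace names are placeholders):

terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "your-org" # placeholder

    workspaces {
      name = "your-workspace" # placeholder; must match the workspace created in the dashboard
    }
  }
}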
You may want to check this example — Terraform Starter Kit
infra/backend.tf
infra/dns-records.tf
scripts/tf.js
Here is how you can pass the Terraform API token from the secrets.TERRAFORM_API_TOKEN GitHub secret to the Terraform CLI:
- env: { TERRAFORM_API_TOKEN: "${{ secrets.TERRAFORM_API_TOKEN }}" }
  run: |
    echo "credentials \"app.terraform.io\" { token = \"$TERRAFORM_API_TOKEN\" }" > ./.terraformrc

Secure way to store API token in GitLab

I am working on a small Terraform project that uses a GCP (Google Cloud Platform) token.json containing credentials to create resources.
The Terraform files are executed by GitLab CI/CD.
My concern is that this token.json is used by one of the Terraform files (main.tf), as below.
# Configure the backend
terraform {
  backend "gcs" {
    bucket      = "tf_backend_gcp_banuka_jana_jayarathna_k8s"
    prefix      = "terraform/gcp/boilerplate"
    credentials = "./token.json" # <-- the file I need to keep secure
  }
}
This token.json sits in the root folder next to the main.tf above.
The file is needed by that configuration, and I can't think of any other way to store it. The token can't even be put into GitLab variables, because there is no way to pass the variable's value into main.tf when the pipeline runs.
I don't want to expose token.json publicly either. Is there a way I can achieve this in GitLab?
I can't even use a tool like git-crypt, because then how would I decrypt the token.json and feed it to main.tf?
Is there a way to inject variables into Terraform files?
Thank you
You can base64-encode the file and put it into a GitLab CI variable:
cat token.json | base64
Then you can decode it and recreate the file before you run terraform apply:
terraform-apply:
  stage: deploy
  script:
    - echo $GCE_TOKEN | base64 -d > token.json
    - terraform apply
Store the token as a variable in GitLab (Settings -> CI/CD -> Variables) and make sure to enable masking.
One caveat: a backend block cannot reference Terraform variables (terraform init fails with "Variables may not be used here"), so you can't feed the token in with -var. Instead, leave the credentials out of the backend block and supply them to terraform init with -backend-config.
main.tf
terraform {
  backend "gcs" {
    bucket = "tf_backend_gcp_banuka_jana_jayarathna_k8s"
    prefix = "terraform/gcp/boilerplate"
  }
}
.gitlab-ci.yml
- echo $GCE_TOKEN | base64 -d > token.json
- terraform init -backend-config="credentials=token.json"
After a while, working on the same thing, I encountered the same problem and found a solution.
We can put the AWS access key and secret key into GitLab CI variables and export them before terraform init, terraform apply, or any other Terraform command:
before_script:
  - export AWS_ACCESS_KEY_ID=${access_key}
  - export AWS_SECRET_ACCESS_KEY=${secret_key}

How can I hide AWS credentials from external program?

In my case, I'm trying to hide the AWS access keys and secret access keys that are printed through outputs.
I tried to implement a solution, but unfortunately it prints the credentials in the plan: we have Terraform running in Jenkins, so whenever I push code/commits to GitHub, the terraform plan output is posted back with the credentials exposed.
So although I have hidden them in the outputs, I'm now printing them in the plan and exposing them in GitHub. I also tried sensitive = true on the outputs, which would easily solve this problem, but my team wants to implement this solution instead :(
resource "aws_iam_access_key" "key" {
user = "${aws_iam_user.user.name}"
}
resource "null_resource" "access_key_shell" {
triggers = {
aws_user = "${aws_iam_user.user.name}" // triggering an alert on the user, since if we pass aws_iam_access_key, access key is visible in plan.
}
}
data "external" "stdout" {
depends_on = ["null_resource.access_key_shell"]
program = ["sh", "${path.module}/read.sh"]
query {
access_id = "${aws_iam_access_key.key.id}"
secret_id = "${aws_iam_access_key.key.secret}"
}
}
resource "null_resource" "contents_access" {
triggers = {
stdout = "${lookup(data.external.logstash_stdout.result, "access_key")}"
value = "${aws_iam_access_key.key.id}"
}
}
output "aws_iam_podcast_logstash_access_key" {
value = "${chomp(null_resource.contents_access.triggers["stdout"])}"
}
read.sh
#!/bin/bash
set -eu # note: -x would echo the plaintext keys into the build logs

# The external data source passes the query arguments as a JSON object on stdin.
eval "$(jq -r '@sh "ACCESS_ID=\(.access_id) SECRET_ID=\(.secret_id)"')"

# Encrypt both values with KMS and emit a single JSON object on stdout,
# which is the output format the external data source expects.
ACCESS_KEY=$(aws kms encrypt --key-id alias/amp_key --plaintext "$ACCESS_ID" --output text --query CiphertextBlob)
SECRET_KEY=$(aws kms encrypt --key-id alias/amp_key --plaintext "$SECRET_ID" --output text --query CiphertextBlob)
jq -n --arg a "$ACCESS_KEY" --arg s "$SECRET_KEY" '{access_key: $a, secret_key: $s}'
My terraform plan:
<= data.external.stdout
id: <computed>
program.#: "2"
program.0: "sh"
program.1: "/Users/xxxx/projects/tf_iam_stage/read.sh"
query.%: "2"
query.access_id: "xxxxxxxx" ----> I want to hide these values from the plan
query.secret_id: "xxxxxxxxxxxxxxxxxxxxxx/x" ----> I want to hide these values from the plan
result.%: <computed>
Any help is appreciated. Thanks in advance!
There are a couple of things going on here.
First, you are leaking your credentials because you are storing your .tfstate in GitHub. This one has an easy solution: add *.tfstate to your .gitignore, then set up a remote backend, and if you use S3, check the bucket policies and ACLs to prevent public access.
Second, you are fetching the credentials at runtime, and at runtime Terraform displays everything unless you add the sensitive flag. So if you want to follow this approach you are forced to use sensitive = true, no matter what your team says. But why get the credentials that way at all? Why not add a new provider configured with those credentials, set an alias for it, and use it only for the resources that need those keys (see the sketch below)?
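A minimal sketch of that aliased-provider idea (my illustration, not the answerer's code; the region and bucket name are placeholders):

# A second AWS provider configured with the newly created key pair.
provider "aws" {
  alias      = "app_user"
  region     = "us-east-1" # placeholder
  access_key = "${aws_iam_access_key.key.id}"
  secret_key = "${aws_iam_access_key.key.secret}"
}

# Resources that should act as that user reference the alias; the keys
# never need to appear in an output or in the plan.
resource "aws_s3_bucket" "app_data" {
  provider = "aws.app_user"
  bucket   = "my-app-data-bucket" # placeholder
}

One caveat: deriving a provider's credentials from a resource in the same configuration means the provider cannot be configured until the key exists, which can complicate the first apply and any destroy.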
In your scenario, you would do well to go with the remote state approach.
Remote state allows Terraform to store the state in a remote store. Terraform supports storing state in places like Terraform Enterprise, Consul, S3, and more.
The setup is to create a bucket on AWS S3; it should not be readable or writeable by anyone except the user that will be used by Terraform.
The code I added was:
terraform {
  backend "s3" {
    bucket = "my-new-bucket"
    key    = "state/key"
    region = "eu-west-1"
  }
}
This simply tells Terraform to use S3 as the backend for storing things like tfstate files.
Don't forget to run terraform init; it's required, and Terraform will notice that you changed from storing state locally to storing it in S3.
Once that is done, you can delete the local tfstate files, safe in the knowledge that your details are stored in S3.
Here are some useful docs: Click docs
The second approach is to use a Terraform plugin; more info here: Terraform plugin
Good luck!
