I am working on a small Terraform project that uses a GCP (Google Cloud Platform) token.json file containing credentials to create resources.
The Terraform files are executed by GitLab CI/CD.
My concern is that this token.json is used by one of the Terraform files (main.tf) as below.
# Configure the backend
terraform {
  backend "gcs" {
    bucket      = "tf_backend_gcp_banuka_jana_jayarathna_k8s"
    prefix      = "terraform/gcp/boilerplate"
    credentials = "./token.json" # ----> file I need to keep securely
  }
}
This token.json sits in the root folder next to the main.tf shown above.
The file is needed by main.tf and I can't think of any other way to store it. I can't even put the token into GitLab CI/CD variables, because there is no way to pass the value of the token to main.tf when the pipeline runs.
I don't want to expose token.json publicly either. Is there a way I can achieve this in GitLab?
I can't even use a tool like git-crypt, because then how would I decrypt the token.json and feed it to main.tf?
Is there a way to inject variables into Terraform files?
Thank you
You can base64 encode the file and put it into a GitLab CI variable.
cat token.json | base64
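On Linux, GNU base64 wraps long output across several lines by default; a single-line value is easier to paste into a GitLab variable:
base64 -w0 token.json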
Then you can decode it and recreate the file before you run terraform apply:
terraform-apply:
  stage: deploy
  script:
    - echo $GCE_TOKEN | base64 -d > token.json
    - terraform apply
Store the token as a variable in GitLab (Settings -> CI/CD -> Variables) and make sure to enable masking. Note that masking only works for single-line values, so a base64-encoded token is easier to mask.
Define a variable for the token in your Terraform manifests. Note that the backend block itself cannot contain variables or interpolations (Terraform loads it before anything else), so the backend credentials have to be passed separately with -backend-config at init time; the variable can still feed the provider.
Supply the token variable to Terraform when you run apply.
variables.tf
variable "gce_token" {
type = string
}
main.tf
terraform {
  backend "gcs" {
    bucket = "tf_backend_gcp_banuka_jana_jayarathna_k8s"
    prefix = "terraform/gcp/boilerplate"
  }
}

provider "google" {
  # the google provider accepts either a path to, or the contents of,
  # a service account key file
  credentials = var.gce_token
}
.gitlab-ci.yml
terraform init -backend-config="credentials=token.json"
terraform apply -var="gce_token=$GCE_TOKEN"
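Combining this with the base64 approach from the first answer, a complete job could look like the following sketch (the job name and the GCE_TOKEN variable are illustrative; exporting TF_VAR_gce_token avoids shell-quoting problems with the multi-line JSON):

terraform-apply:
  stage: deploy
  script:
    - echo "$GCE_TOKEN" | base64 -d > token.json
    - export TF_VAR_gce_token="$(cat token.json)"
    - terraform init -backend-config="credentials=token.json"
    - terraform apply -auto-approve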
After a while, when I was working on the same thing, I encountered the same problem and found a solution.
We can put the AWS access key and secret key in GitLab CI variables and export them before terraform init, terraform apply, or any other Terraform command:
before_script:
  - export AWS_ACCESS_KEY_ID=${access_key}
  - export AWS_SECRET_ACCESS_KEY=${secret_key}
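Both the AWS provider and the s3 backend read these environment variables, so no credentials need to appear in the .tf files at all (the region here is only an example):

provider "aws" {
  # access_key and secret_key are omitted; they are picked up from
  # AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY in the environment
  region = "eu-west-1"
}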
Related
I have the following flow:
You work on Terraform code locally
You push the code to GitLab
The GitLab pipeline automatically runs terraform init and terraform plan; terraform apply is triggered manually
The question is about secrets. I need to place them in Azure KeyVault, so I have a "value" field in my Terraform code which cannot be stored in GitLab as plain text. If I put the secrets in a file and encrypt it with git-crypt, the file arrives in GitLab encrypted, so Terraform sees the already-encrypted content and creates a secret containing the encrypted value.
Any ideas how to do it?
I'm creating the secret via Terraform this way:
resource "azurerm_key_vault_secret" "example" {
name = "secret-sauce"
value = "szechuan"
key_vault_id = azurerm_key_vault.example.id
}
Can you please provide the TF code here? If you fetch keys from KeyVault using a 'data' block, they should not need to be stored in plain text in GitLab. Still, I would ask you to post the code so that I can understand better.
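One way to keep the value out of the repository, sketched under the assumption that the secret value can live in a masked GitLab CI variable exported as TF_VAR_secret_sauce (an illustrative name):

variable "secret_sauce" {
  type      = string
  sensitive = true
}

resource "azurerm_key_vault_secret" "example" {
  name         = "secret-sauce"
  value        = var.secret_sauce
  key_vault_id = azurerm_key_vault.example.id
}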
I have my infrastructure-as-code folder with distinct Terraform files stored on Azure in a storage account, in a resource group that is used only for storing state and secrets used for automation.
How can I place the folder in a Docker container and use it so that the secrets remain private?
Never put secrets in a Docker image. Image layers are easily extracted and aren't treated as secret.
You would normally store your Terraform files (without secrets) in a source repository that has a pipeline attached. The pipeline can have the secrets defined as "secret variables" (different pipeline tools have different terms for the same thing).
For example, say you need to provide a particular API key for Terraform to talk to a service. Often the provider supports reading the credential from an environment variable out of the box (check the provider's docs); where it doesn't, you can create a Terraform variable for it and set the secret on the pipeline as mentioned earlier.
e.g.
In terraform:
variable "key" {
type = "string"
sensitive = true
}
provider "someprovider" {
project = "..."
region = "..."
key = var.key
}
Then in the pipeline you would define something like:
TF_VAR_key=xxxx-xxxx-xxxx-xxxx
Normally within the pipeline tools you can provide variables to the various steps or docker images (such as Terraform image).
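For instance, if the pipeline runs Terraform through Docker, the secret can simply be forwarded into the container as an environment variable (the image tag and mount paths are illustrative, and terraform init is assumed to have already run):

docker run --rm -e TF_VAR_key -v "$PWD:/workspace" -w /workspace hashicorp/terraform:1.5 apply -auto-approve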
I am starting to learn Terraform/GitHub Actions. Is it possible to get Terraform to read GitHub secrets as part of a GitHub Action? For example:
My main.tf file creates an AWS EC2 instance and needs to install nginx using a provisioner. In order to do that, I need to provide my private/public key information to the provisioner so it can authenticate to the EC2 instance and install the app. I have created a GitHub secret that contains my private key.
At the moment the workflow keeps failing because I cannot get it to read the GitHub secret that contains the private key info.
How can I achieve this?
Any advice would be most welcome! Thanks.
The simplest way is to use an environment variable.
Terraform reads values for its variables from environment variables of the form TF_VAR_<name>.
The next piece is to translate the GitHub secret into such an environment variable.
In practice, if your Terraform script has a variable declaration like
variable "my_public_key" {}
and you have a GitHub secret NGINX_PUBKEY, then you can use this syntax in your workflow
steps:
  - run: terraform apply -auto-approve
    env:
      TF_VAR_my_public_key: ${{ secrets.NGINX_PUBKEY }}
That said, I would not recommend GitHub secrets for this kind of data: it is better managed in a secret-management store such as AWS Secrets Manager, Azure KeyVault, or HashiCorp Vault.
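For example, with AWS Secrets Manager the key could be looked up at apply time instead of being stored in GitHub at all (the secret name nginx-private-key is illustrative):

data "aws_secretsmanager_secret_version" "nginx_key" {
  secret_id = "nginx-private-key"
}

# the key is then available as
# data.aws_secretsmanager_secret_version.nginx_key.secret_string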
In my case I'm trying to hide the AWS access keys and secret access keys that are printed in outputs.
I tried to implement a solution, but unfortunately it prints the credentials in the plan. We have Terraform running in Jenkins, and whenever I push code/commits to GitHub it posts the plan back to GitHub, exposing the credentials.
Although I have hidden them in the outputs, they are now printed in the plan and exposed in GitHub. I also tried sensitive = true on the outputs, which would easily solve this problem, but my team wants to implement this solution :(
resource "aws_iam_access_key" "key" {
user = "${aws_iam_user.user.name}"
}
resource "null_resource" "access_key_shell" {
triggers = {
aws_user = "${aws_iam_user.user.name}" // triggering an alert on the user, since if we pass aws_iam_access_key, access key is visible in plan.
}
}
data "external" "stdout" {
depends_on = ["null_resource.access_key_shell"]
program = ["sh", "${path.module}/read.sh"]
query {
access_id = "${aws_iam_access_key.key.id}"
secret_id = "${aws_iam_access_key.key.secret}"
}
}
resource "null_resource" "contents_access" {
triggers = {
stdout = "${lookup(data.external.logstash_stdout.result, "access_key")}"
value = "${aws_iam_access_key.key.id}"
}
}
output "aws_iam_podcast_logstash_access_key" {
value = "${chomp(null_resource.contents_access.triggers["stdout"])}"
}
read.sh
#!/bin/bash
set -eu
# the external data source passes the query arguments as a JSON object on stdin
eval "$(jq -r '@sh "access_id=\(.access_id) secret_id=\(.secret_id)"')"
access_key=$(aws kms encrypt --key-id alias/amp_key --plaintext "${access_id}" --output text --query CiphertextBlob)
secret_key=$(aws kms encrypt --key-id alias/amp_key --plaintext "${secret_id}" --output text --query CiphertextBlob)
# the program must print a single JSON object on stdout for Terraform to read
jq -n --arg access_key "$access_key" --arg secret_key "$secret_key" \
  '{access_key: $access_key, secret_key: $secret_key}'
My terraform plan:
<= data.external.stdout
id: <computed>
program.#: "2"
program.0: "sh"
program.1: "/Users/xxxx/projects/tf_iam_stage/read.sh"
query.%: "2"
query.access_id: "xxxxxxxx" ----> I want to hide these values from the plan
query.secret_id: "xxxxxxxxxxxxxxxxxxxxxx/x" ----> I want to hide these values from the plan
result.%: <computed>
Any help would be appreciated!
Thanks in advance!
There are a couple of things going on here.
First, you are leaking your credentials because you are storing your .tfstate in GitHub. This one has an easy solution: add *.tfstate to your .gitignore, then set up a remote backend, and if you use S3, check the bucket policies and ACLs to prevent public access.
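For example, a minimal .gitignore for a Terraform repository:

# .gitignore
*.tfstate
*.tfstate.backup
.terraform/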
Second, your other problem is that you are fetching the credentials at runtime, and at runtime Terraform displays everything unless you add the sensitive flag. So if you want to follow this approach, you are forced to use sensitive = true, no matter what your team says. However, why get the credentials that way at all? Why not add a new provider with those credentials, set an alias for it, and use it only for the resources where you need those keys?
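A rough sketch of that aliased-provider idea, in the same 0.11-style syntax as the question (the SQS resource is purely illustrative, and configuring a provider from resource attributes only works when those values can be resolved during planning):

provider "aws" {
  alias      = "generated"
  access_key = "${aws_iam_access_key.key.id}"
  secret_key = "${aws_iam_access_key.key.secret}"
}

resource "aws_sqs_queue" "example" {
  provider = "aws.generated"
  name     = "example-queue"
}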
In your scenario, you would do well to go with the Remote State approach.
Remote State allows Terraform to store the state in a remote store. Terraform supports storing state in places like Terraform Enterprise, Consul, S3, and more.
The setup is to create a bucket on AWS S3; it should not be readable or writeable by anyone except the user that will be used by Terraform.
The code I added was:
terraform {
  backend "s3" {
    bucket = "my-new-bucket"
    key    = "state/key"
    region = "eu-west-1"
  }
}
This simply tells Terraform to use S3 as the backend for storing tfstate files.
Don't forget to run terraform init; it's a requirement, and Terraform will notice that you changed from storing state locally to storing it in S3.
Once that is done, you can delete the local tfstate files, safe in the knowledge that your details are stored on S3.
The Terraform documentation on backends covers this in more detail.
The second approach is to use a Terraform plugin; see the Terraform plugin documentation for more info.
Good luck!
I am trying to configure a Terraform Enterprise workspace in Jenkins on the fly. To do this, I need to be able to set the remote backend workspace name in my main.tf dynamically, like this:
# Using a single workspace:
terraform {
  backend "remote" {
    hostname     = "app.xxx.xxx.com"
    organization = "YYYY"

    # new workspace variable
    workspaces {
      name = "${var.workspace_name}"
    }
  }
}
Now when I run:
terraform init -backend-config="workspace_name=testtest"
I get:
Error loading backend config: 1 error(s) occurred:
* terraform.backend: configuration cannot contain interpolations
The backend configuration is loaded by Terraform extremely early, before
the core of Terraform can be initialized. This is necessary because the backend
dictates the behavior of that core. The core is what handles interpolation
processing. Because of this, interpolations cannot be used in backend
configuration.
If you'd like to parameterize backend configuration, we recommend using
partial configuration with the "-backend-config" flag to "terraform init".
Is what I want to do possible with Terraform?
You can't put variables like "${var.workspace_name}", or any other interpolation, into the backend configuration.
However, you can keep the backend values in a separate file. In main.tf it would look like this:
# Terraform backend state store
terraform {
  backend "s3" {}
}
and in a dev.backend.tfvars file, for instance:
bucket = "BUCKET_NAME"
encrypt = true
key = "BUCKET_KEY"
dynamodb_table = "DYNAMODB_NAME"
region = "AWS_REGION"
role_arn = "IAM_ROLE_ARN"
Partial configuration like this works for the s3 backend as well as the remote backend from your question.
Hope it helps.
Hey, I found the correct way to do this:
While the syntax is a little tricky, the remote backend supports partial backend initialization. This means the configuration can contain an empty backend block like this:
terraform {
  backend "remote" { }
}
And then Terraform can be initialized with a dynamically set backend configuration like this (replacing ORG and WORKSPACE with appropriate values):
terraform init -backend-config "organization=ORG" -backend-config 'workspaces=[{name="WORKSPACE"}]'