Question re Terraform and GitHub Actions / secrets

I am starting to learn Terraform/GitHub Actions. Is it possible to get Terraform to read GitHub secrets as part of a GitHub Actions workflow? For example:
My main.tf file creates an AWS EC2 instance and needs to install nginx using a provisioner. In order to do that, I need to provide my private/public key information to the provisioner so it can authenticate to the EC2 instance and install the app. I have created a GitHub secret that contains my private key.
At the moment the workflow keeps failing because I cannot get it to read the GitHub secret that contains the private key info.
How can I achieve this?
Any advice would be most welcome! Thanks

The simplest way is to use an environment variable.
Terraform reads values for its variables from environment variables of the form TF_VAR_<name>.
The remaining piece is to translate the GitHub secret into an environment variable.
In practice, if your Terraform configuration has a variable declaration like
variable "my_public_key" {}
and you have a GitHub secret named NGINX_PUBKEY, then you can use this syntax in your workflow:
steps:
  - run: terraform apply -auto-approve
    env:
      TF_VAR_my_public_key: ${{ secrets.NGINX_PUBKEY }}
That said, I would not recommend using GitHub secrets for this kind of data: it is better managed in a secret-management store such as AWS Secrets Manager, Azure Key Vault, or HashiCorp Vault.
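For example, once the key lives in AWS Secrets Manager, Terraform can read it directly with a data source instead of receiving it from the CI system. A minimal sketch, assuming a secret with the hypothetical name nginx/private-key:

data "aws_secretsmanager_secret_version" "nginx_key" {
  secret_id = "nginx/private-key"   # hypothetical secret name
}

The provisioner's connection block could then reference data.aws_secretsmanager_secret_version.nginx_key.secret_string, and no key material passes through the workflow at all.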

Related

How do I pass resources that were created by Terraform to Kustomize

I am using a combination of these tools:
Terraform - to deploy the application-specific AWS resources I need (for instance a secret)
Skaffold - to help with the inner development loop, surrounding the deployment of K8s resources to local and remote clusters
Kustomize - to help with templating of different configurations for different environments
My GitHub Actions steps are as follows:
Terraform creates the AWS resources. At this point it creates an AWS secret's ARN.
Skaffold deploys the K8s manifests. Skaffold in turn delegates K8s manifest generation to Kustomize. Within the Kustomize overlay files I need to be able to access the secret's ARN that was created earlier; this ARN needs to be injected into the container that is being deployed. How do I achieve this?
Rephrasing the question: how do I pass resources that were created by Terraform to be consumed by something like Kustomize (which is used by Skaffold)?
(P.S. I really like my choice of tools thus far, as each one excels at one thing. I realize that Terraform could possibly do all of it, but that is a choice I don't want to make unless there are no easier options.)
Here is what I have learnt:
I don't think there are any industry standards for how to share this data between the tools across different steps within GitHub Actions. That being said, here are some of the options:
Have Terraform store the secret's ARN in a parameter store, and retrieve the ARN from the parameter store in later steps. This means the steps have to share a static key (see the sketch below).
Have Terraform update the Kustomize files directly (or use a Kustomize overlay as a data source).
There could be other similar approaches, but none of these tools have a native way of passing/sharing data.
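A minimal sketch of the first option using SSM Parameter Store; the parameter name /myapp/secret-arn and the aws_secretsmanager_secret.app resource are hypothetical:

resource "aws_ssm_parameter" "secret_arn" {
  name  = "/myapp/secret-arn"                  # hypothetical static key shared between steps
  type  = "String"
  value = aws_secretsmanager_secret.app.arn    # assumes the secret is declared elsewhere
}

A later workflow step can read it back (for example with aws ssm get-parameter --name /myapp/secret-arn --query Parameter.Value --output text) and substitute the value into the Kustomize overlay before Skaffold runs.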

Best way to store Terraform variable values without having them in source control

We have a code repo with our IaC in Terraform. This is in GitHub, and we're going to pull the code, build it, etc. However, we don't want the values of our variables in GitHub itself. So this may be a dumb question, but where do we store the values we need for our variables? If my Terraform requires an Azure subscription ID, where would I store the subscription ID? The vars files won't be in source control. The goal is that we'll be pulling the code into an Azure DevOps pipeline, so the pipeline will have to know where to go to get the input variable values. I hope that makes sense?
You can store your secrets in Azure Key Vault and retrieve them in Terraform using the azurerm_key_vault_secret data source:
data "azurerm_key_vault" "existing" {
  name                = "my-key-vault"       # hypothetical name and resource group of the existing vault
  resource_group_name = "my-resource-group"
}

data "azurerm_key_vault_secret" "example" {
  name         = "secret-sauce"
  key_vault_id = data.azurerm_key_vault.existing.id
}

output "secret_value" {
  value     = data.azurerm_key_vault_secret.example.value
  sensitive = true   # the provider marks the value as sensitive, so the output must be too
}
https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/data-sources/key_vault_secret
There has to be a source of truth eventually.
You can store your values in the pipeline definitions as variables themselves and pass them into the Terraform configuration.
Usually it's a combination of tfvars files (dependent on the target environment) and some variables from the pipeline. If you do have vars in your pipelines, though, the pipelines themselves should be in code.
If the variables are sensitive, then you need to connect to a secret-management tool to get them.
If you have many environments, say 20, and the infra is all the same with the exception of a single ID, you could have one pipeline definition (normally JSON or YAML) and reference it from the 20 pipelines you build; each of those 20 would have its unique value baked in for use at execution. That var is passed through to Terraform as the missing piece.
There are other key-value property-tracking systems out there, but Git definitely works well for this purpose.
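As a rough sketch, an Azure DevOps pipeline step combining a per-environment tfvars file with a pipeline variable might look like this; the variable and file names are assumptions:

steps:
  - script: terraform plan -var-file="env/$(ENVIRONMENT).tfvars" -var "subscription_id=$(AZURE_SUBSCRIPTION_ID)"
    env:
      ARM_CLIENT_SECRET: $(ARM_CLIENT_SECRET)   # secret pipeline variable, never committed to the repo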
You can use Azure DevOps secure files (Pipelines -> Library) for storing your credentials for each environment. You can create a tfvars file for each environment with all your credentials, upload it as a secure file in Azure DevOps, and then download it in the pipeline with a DownloadSecureFile@1 task.
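For instance (a minimal sketch; the secure-file name is hypothetical):

steps:
  - task: DownloadSecureFile@1
    name: tfvars                       # exposes $(tfvars.secureFilePath)
    inputs:
      secureFile: 'production.tfvars'  # hypothetical file uploaded to the Library
  - script: terraform apply -var-file="$(tfvars.secureFilePath)" -auto-approve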

How to keep an Azure service principal secret safe

My deploy task uses a PowerShell script, which uses a service principal to connect to Azure Key Vault and pull a secret. The secret (password) is stored in the PowerShell script's code as plain text. Maybe there is another solution that would minimize exposure of the token.
I have also used PowerShell inline mode (not a separate script) with an Azure DevOps secret variable in the deploy task, but this solution is difficult to support (the script has several different operations, so you have to keep many versions of the script).
The script is stored in a Git repository; anyone who has access to it will be able to see the secret and gain access to other keys. Perhaps I don't understand this concept correctly, but if keys cannot be stored in the code, then what should I do?
In Azure DevOps you can use variable groups and configure them so that the variables are pulled directly from a selected Key Vault (provided the service principal you have selected has get/list access to the KV).
This means that you can define all secrets in Key Vault, and they will be pulled before any tasks happen in your YAML. To use them in the script, you can pass them as an environment variable or a parameter to your script and just reference $env:variable or $variable, instead of having the secret hardcoded in your script.
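A minimal sketch of the YAML side, assuming a variable group named keyvault-secrets that is linked to the vault and contains a secret called my-secret:

variables:
  - group: keyvault-secrets      # hypothetical variable group backed by Key Vault
steps:
  - powershell: ./deploy.ps1 -ApiKey $env:MY_SECRET
    env:
      MY_SECRET: $(my-secret)    # secret variables must be mapped explicitly into the environment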

GitHub Actions for Terraform - How to provide "terraform.tfvars" file with aws credentials

I am trying to set up GitHub Actions to execute a Terraform template.
My confusion is: how do I provide the *.tfvars file, which has AWS credentials? (I can't check these files in.)
What's the best practice for sharing the variable values expected by Terraform commands like plan or apply where they need aws_access_key and aws_secret_key?
Here is my GitHub project - https://github.com/samtiku/terraform-ec2
Any guidance here...
You don't need to provide all variables through a *.tfvars file. Apart from the -var-file option, the terraform command also provides the -var parameter, which you can use for passing secrets.
In general, secrets are passed to scripts through environment variables. CI tools give you an option to define environment variables in the project configuration. It's a manual step, because as you have already noticed, secrets cannot be stored in the repository.
I haven't used GitHub Actions in particular, but after setting environment variables, all you need to do is run terraform with secrets read from them:
$ terraform apply -var-file=some.tfvars -var "aws-secret=${AWS_SECRET_ENVIRONMENT_VARIABLE}"
This way no secrets are ever stored in the repository code. If you'd like to run terraform locally, you'll first need to export these variables in your shell:
$ export AWS_SECRET_ENVIRONMENT_VARIABLE="..."
Although Terraform allows providing credentials to some providers via their configuration arguments for flexibility in complex situations, the recommended way to pass credentials to providers is via some method that is standard for the vendor in question.
For AWS in particular, the main standard mechanisms are either a credentials file or via environment variables. If you configure the action to follow what is described in one of those guides then Terraform's AWS provider will automatically find those credentials and use them in the same way that the AWS CLI does.
It sounds like environment variables will be the easier way to go within GitHub Actions, in which case you can just set the necessary environment variables directly and the AWS provider should use them automatically. If you are using the S3 state storage backend, then it will also automatically use the standard AWS environment variables.
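A minimal workflow sketch under those assumptions; the secret names and action versions are illustrative:

steps:
  - uses: actions/checkout@v4
  - uses: hashicorp/setup-terraform@v3
  - run: terraform init && terraform apply -auto-approve
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}          # picked up by both the AWS provider
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}  # and the S3 backend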
If your system includes multiple AWS accounts, you may wish to review the Terraform documentation guide Multi-account AWS Architecture for ideas on how to model that. In summary, that guide recommends setting aside a special account only for your AWS users and their associated credentials, and configuring your other accounts to allow cross-account access via roles. You can then use a single set of credentials to run Terraform, while configuring each instance of the AWS provider to assume the appropriate role for whatever account that provider instance should interact with.

What is the best way to use Hashicorp Vault with GitLab pipelines?

Let's say I want to make a variable with the value from Vault.
variables:
  $SSH_PRIVATE_KEY: `vault kv get -field=private_key project/production`
before_script:
  - echo "$SSH_PRIVATE_KEY"
Is it possible?
Is there another way to use Vault secrets inside pipelines?
Original answer Jul 2019:
You can see it used in before/after script steps, with a revoked token at the end.
See gitlab.eng.cleardata.com pub/pipelines/gcp-ci.yml as an example:
# Obtains credentials via vault (the gitlab-runner authenticates to vault using its AWS credentials)
# Configures the `gcloud` sdk and `kubectl` to authenticate to our *production* cluster
#
# Note: Do not override the before_script or the after_script in your job
#
.auth-prod: &auth-prod
  image: cleardata/bionic
  before_script:
    - |
      export CLUSTER_NAME=production
      export CLUSTER_LOCATION=us-central1
      export CLUSTER_PROJECT_ID=cleardata-production-cluster
    - vault login -method=aws -path=gitlab-ci -no-print header_value=gitlab.eng.cleardata.com
    - GCP_CREDS=$(vault read -field=private_key_data gitlab-ci/gcp/cleardata-production-cluster/key/deployment-key)
    - gcloud auth activate-service-account --key-file=<(base64 -d <<<$GCP_CREDS)
    - gcloud auth configure-docker
    - gcloud beta container clusters get-credentials $CLUSTER_NAME --region $CLUSTER_LOCATION --project $CLUSTER_PROJECT_ID
  after_script:
    - vault token revoke -self
Update March 2020: This is supported with GitLab 12.9
HashiCorp Vault GitLab CI/CD Managed Application
GitLab wants to make it easy for users to have modern secrets management. We are now offering users the ability to install Vault within a Kubernetes cluster as part of the GitLab CI managed application process.
This will support the secure management of keys, tokens, and other secrets at the project level in a Helm chart installation.
See documentation and issue.
April 2020: GitLab 12.10:
Retrieve CI/CD secrets from HashiCorp Vault
In this release, GitLab adds support for lightweight JSON Web Token (JWT) authentication to integrate with your existing HashiCorp Vault.
Now, you can seamlessly provide secrets to CI/CD jobs by taking advantage of HashiCorp’s JWT authentication method rather than manually having to provide secrets as a variable in GitLab.
See documentation and issue.
See GitLab 13.4 (September 2020)
For Premium/Silver only:
Use HashiCorp Vault secrets in CI jobs
In GitLab 12.10, GitLab introduced functionality for GitLab Runner to fetch and inject secrets into CI jobs. GitLab is now expanding the JWT Vault Authentication method by building a new secrets syntax in the .gitlab-ci.yml file. This makes it easier for you to configure and use HashiCorp Vault with GitLab.
(Image: Use HashiCorp Vault secrets in CI jobs - https://about.gitlab.com/images/13_4/vault_ci.png)
See Documentation and Issue.
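The new syntax looks roughly like this (a sketch based on the GitLab documentation; the job name, path, field, and engine mount are illustrative):

job_using_vault:
  secrets:
    DATABASE_PASSWORD:
      vault: production/db/password@ops   # field "password" at path "production/db" in the "ops" KV engine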
See GitLab 13.9 (February 2021)
Vault JWT (JSON Web Token) supports GitLab environments.
To simplify integrations with HashiCorp Vault, we’ve shipped Vault JWT token support. From the launch, you could restrict access based on data in the JWT. This release gives you a new dimension for restricting access to credentials: the environment a job targets.
This release extends the existing Vault JWT token to support environment-based restrictions too. As the environment name could be supplied by the user running the pipeline, we recommend you use the new environment-based restrictions with the already-existing ref_type values for maximum security.
See Documentation and Issue.
We have a helper script baked into our builder images that can convert GitLab CI/CD job variables pointing to secrets into job env vars containing Vault secrets. In our case, we're also using the AppRole auth method to limit the validity of the temporary Vault access token.
An example use case would be:
I want a job env var MY_SECRET_TOKEN with a value from a Vault secret.
So I add a CI/CD variable called V_MY_SECRET_TOKEN="secret/<path>/<key>".
Then I insert a job step to retrieve the secret value and populate MY_SECRET_TOKEN with the value associated with the key.
Variables added to the CI/CD job setup in GitLab:
VAULT_ADDR=https://vault.example.com:8200
VAULT_ROLE_ID=db02de05-fa39-4855-059b-67221c5c2f63
VAULT_SECRET_ID=6a174c20-f6de-a53c-74d2-6018fcceff64
VAULT_VAR_FILE=/var/tmp/vault-vars.sh
Steps added to the .gitlab-ci.yml job definition:
script:
  - get-vault-secrets-by-approle > ${VAULT_VAR_FILE}
  - source ${VAULT_VAR_FILE} && rm ${VAULT_VAR_FILE}
Here is a reference to the get-vault-secrets-by-approle helper script we use. Here is a writeup of the thinking behind the design.
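The core of such a helper amounts to an AppRole login followed by KV reads. A rough sketch, not the referenced script itself; the <path>/<key> placeholders are the same ones used above:

# Exchange the role/secret IDs for a short-lived token (AppRole auth method).
export VAULT_TOKEN=$(vault write -field=token auth/approle/login \
    role_id="$VAULT_ROLE_ID" secret_id="$VAULT_SECRET_ID")
# Emit an export line per secret so the job can `source` the file afterwards.
echo "export MY_SECRET_TOKEN='$(vault kv get -field=<key> secret/<path>)'" > ${VAULT_VAR_FILE}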
The before_script option didn't fit our workflows, as we define a combination of privileged and non-privileged stages in our gitlab-ci.yml definition. The non-privileged jobs build and QA the code, while the privileged jobs package and release it. The VAULT_ROLE_ID and VAULT_SECRET_ID job variables should only be visible to the privileged package and release jobs.
I also experimented with using includes, extends, and YAML anchors, but I wanted to merge items into existing YAML maps (script: {} or before_script: {}) as opposed to replacing all the items in a map with the template.
