How to use environment variables set in Terraform Cloud?

I am using Terraform to integrate with GitHub Actions and AWS.
I want Terraform to set up AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY as GitHub repository secrets for GitHub Actions, using the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY that I've set in Terraform Cloud.
I want to write something like the following:
resource "github_actions_secret" "example_secret" {
repository = "example_repository"
secret_name = "AWS_SECRET_ACCESS_KEY"
plaintext_value = env.AWS_SECRET_ACCESS_KEY // <-- this will not work
}

Terraform Cloud will automatically export any environment variables to populate your shell environment before running any Terraform.
This is equivalent to running export TF_VAR_foo=bar if you wanted to set foo to bar.
To actually use these variables you must declare a variable block as normal and then reference it as usual.
So in your case if you have defined an AWS_SECRET_ACCESS_KEY environment variable in Terraform Cloud then you could do this:
variable "AWS_SECRET_ACCESS_KEY" {}
resource "github_actions_secret" "example_secret" {
repository = "example_repository"
secret_name = "AWS_SECRET_ACCESS_KEY"
plaintext_value = var.aws_secret_access_key
}

You have to export those env variables yourself before you run your TF apply, as shown here. This is what TF cloud is doing:
Terraform Cloud performs Terraform runs on disposable Linux worker VMs using a POSIX-compatible shell. Before running Terraform, Terraform Cloud populates the shell with environment variables using the export command.
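As a rough illustration of that mechanism (the value below is a placeholder), the local equivalent of what Terraform Cloud does before the run would be:

# Exporting with the TF_VAR_ prefix is what lets Terraform map the value
# onto the variable "AWS_SECRET_ACCESS_KEY" declared in the configuration.
export TF_VAR_AWS_SECRET_ACCESS_KEY=example-placeholder-value
terraform plan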

Related

Terraform cloud config dynamic workspace name

I'm building a CI/CD pipeline using GitHub Actions and Terraform. I have a main.tf file like the one below, which I call from a GitHub Action for multiple environments. I'm using https://github.com/hashicorp/setup-terraform to interact with Terraform in GitHub Actions. I have a MyService component and I'm deploying it to DEV, UAT and PROD environments. I would like to reuse main.tf for all of the environments and dynamically set the workspace name like so: MyService-DEV, MyService-UAT, MyService-PROD. Usage of variables is not allowed in the terraform/cloud block. I'm using HashiCorp cloud to store state.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.0"
    }
  }
  cloud {
    organization = "tf-organization"
    workspaces {
      name = "MyService-${env.envname}" #<== not allowed to use variables
    }
  }
}
Update
I finally managed to get this up and running thanks to the helpful comments. Here are my findings:
TF_WORKSPACE needs to be defined upfront, e.g. TF_WORKSPACE=service-dev
I didn't get tags to work the way I wanted when running in automation. If I define a tag in cloud.workspaces.tags as 'service', there is no way to set a second tag like 'dev' dynamically. Both tags ['service', 'dev'] are needed during init for Terraform to select the workspace service-dev automatically.
I ended up using the tfe provider to set up workspaces (with tags) automatically; see the sketch below. In the end I still needed to set TF_WORKSPACE=service-dev
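A rough sketch of what that tfe-provider bootstrap could look like (the organization, workspace and tag names here are placeholders, not taken from the original setup):

provider "tfe" {
  hostname = "app.terraform.io"
}

# Create the workspace and attach both tags so later runs can select it.
resource "tfe_workspace" "service_dev" {
  organization = "tf-organization" # placeholder organization name
  name         = "service-dev"
  tag_names    = ["service", "dev"]
}

With the workspace created this way, the CI job still exports TF_WORKSPACE=service-dev before terraform init so the right workspace is selected.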
It doesn't make sense to refer to terraform.workspace as part of the workspaces block inside a cloud block, because that block defines which remote workspaces Terraform will use and therefore dictates what final value terraform.workspace will have in the rest of your configuration.
To declare that your Terraform configuration belongs to more than one workspace in Terraform Cloud, you can assign each of those workspaces the tag "MyService" and then use the tags argument instead of the name argument:
cloud {
  organization = "tf-organization"
  workspaces {
    tags = ["MyService"]
  }
}
If you assign that tag to hypothetical MyService-dev and MyService-prod workspaces in Terraform Cloud and then initialize with the configuration above, Terraform will present those two workspaces for selection using the terraform workspace commands when working in this directory.
terraform.workspace will then appear as either MyService-dev or MyService-prod, depending on which one you have selected.
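For example, with the hypothetical workspace names above, the selection flow when working in this directory looks roughly like this:

terraform init                            # discovers the workspaces tagged "MyService"
terraform workspace list                  # e.g. MyService-dev, MyService-prod
terraform workspace select MyService-dev  # terraform.workspace is now "MyService-dev"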

Terraform: How can I reference Terraform cloud environmental variables?

I'm using Terraform cloud. I would like to take advantage of using AWS Tags with my resources. I want to tag each resource defined in Terraform with the current GIT Branch Name. That way I can separate dev from production.
Terraform Cloud has a list of environment variables that reference the Git branch name for runs in their cloud service:
TFC_CONFIGURATION_VERSION_GIT_BRANCH - This is the name of the branch that the associated Terraform configuration version was ingressed from (e.g. master).
How can I reference the TFC_CONFIGURATION_VERSION_GIT_BRANCH environmental variable in the following resource for an example VPC?
resource "aws_vpc" "example_vpc" {
cidr_block = "10.0.0.0/16"
tags = {
product = var.product
stage = var.TFC_CONFIGURATION_VERSION_GIT_BRANCH
}
}
reference: https://www.terraform.io/docs/language/values/variables.html#environment-variables
I figured it out! I wish the documentation was clearer on this for the cloud.
You will have to declare an empty variable. I defined mine in variables.tf as:
variable "TFC_CONFIGURATION_VERSION_GIT_BRANCH" {
type = string
default = ""
}
Per the documentation I linked in the question, TFC_CONFIGURATION_VERSION_GIT_BRANCH is injected automatically into the environment variables on each cloud run. Declaring a variable with the full name of the environment variable worked.
resource "aws_vpc" "example_vpc" {
cidr_block = "10.0.0.0/16"
tags = {
product = var.product
stage = var.TFC_CONFIGURATION_VERSION_GIT_BRANCH
}
}
Then the plan output was successful in the cloud:
Terraform will perform the following actions:

  # aws_vpc.example_vpc will be updated in-place
  ~ resource "aws_vpc" "example_vpc" {
        id   = "vpc-0b19679e6464b8481"
      ~ tags = {
          ~ "stage" = "None" -> "develop"
            # (1 unchanged element hidden)
        }
        # (14 unchanged attributes hidden)
    }
Terraform Cloud serves as a remote execution environment for Terraform CLI (amongst other things) and so when you configure a workspace in Terraform Cloud many of the settings are about the context where Terraform CLI will run, and how Terraform Cloud will run it.
Part of that configuration model is the idea of environment variables, which correspond with the same environment variables you might set in your shell when running Terraform CLI locally. As with local Terraform, those environment variables are not directly usable from your configuration but are instead settings for other systems that Terraform and providers will interact with, such as the AWS_ACCESS_KEY_ID environment variable conventionally used by AWS software as a way to statically configure the access key identifier to use.
Terraform Cloud also allows you to set "Terraform Variables", and those correspond with Input Variables in the Terraform language. These are the settings for your Terraform configuration itself, as opposed to other software it will interact with, and so you can refer to these by declaring them using variable blocks and then using expressions like var.example elsewhere in the root module. Internally, Terraform Cloud is passing the configured values to Terraform CLI by generating a file called terraform.tfvars, which Terraform CLI looks for as a default source of variable values.
Both of these types of variables are useful for different purposes, and so most Terraform workspaces include a mixture of environment variables for configuring external systems and "Terraform variables" for configuring the current Terraform configuration itself.
For the benefit of folks using Terraform CLI outside of Terraform Cloud, Terraform CLI actually also offers a way to set Input Variables using environment variables, and technically you can do that within Terraform Cloud too because it's ultimately just running Terraform with environment variables set in the same way as you might locally. That's not the intended way to use Terraform Cloud, and so I'm mentioning it only for completeness because the terminology overlap here might be confusing.
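To make the distinction concrete, here is a small hypothetical example (resource and variable names are illustrative only, not from the question):

# A "Terraform variable" set in the workspace (say, instance_type = "t3.micro")
# is delivered to Terraform CLI via a generated terraform.tfvars and used as:
variable "instance_type" {
  type = string
}

resource "aws_instance" "example" {
  ami           = "ami-00000000" # placeholder
  instance_type = var.instance_type
}

# An "environment variable" such as AWS_ACCESS_KEY_ID is never referenced in the
# configuration; the AWS provider reads it from the process environment instead.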
Terraform Cloud workspace variables can be set with category "terraform" or category "env". In a remote execution setup, you cannot reference workspace variables defined as "env" from your Terraform code; instead, those variables are automatically injected into the execution environment. To be able to reference a workspace variable in Terraform code, set the variable category to "terraform" but do not tick the "HCL" checkbox. Declaring a blank variable is still needed. Hope this helps someone! Unfortunately, the documentation is very unclear about this.
In Terraform Cloud, create a workspace variable:
key: example_variable
value: example_value
category: terraform
Implementation: in the configuration (.tf) file, declare a blank variable and then reference it.
# Declaring a blank variable
variable "example_variable" {}

# Referencing the variable (placeholder block type and names)
resource "example_resource" "name" {
  input = var.example_variable
}
Snowflake CI/CD Example
CI/CD pipeline for Snowflake with GitHub Actions and Terraform, configured similarly to Snowflake's quickstart guide.
I implemented a private key instead of a user password:
variable "snowflake_private_key" {}
terraform {
required_providers {
snowflake = {
source = "chanzuckerberg/snowflake"
version = "0.25.17"
}
}
backend "remote" {
organization = "terraform-snowflake"
workspaces {
name = "gh-actions"
}
}
}
provider "snowflake" {
username = "snowflake-username"
account = "snowflake-account"
region = "snowflake-region"
private_key = var.snowflake_private_key
role = "SYSADMIN"
}

Terraform cloud: Import existing resource

I am using terraform cloud to manage the state of the infrastructure provisioned in AWS.
I am trying to use terraform import to import an existing resource that is currently not managed by terraform.
I understand terraform import is a local-only command. I have set up a workspace reference as follows:
terraform {
  required_version = "~> 0.12.0"

  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "foo"

    workspaces {
      name = "bar"
    }
  }
}
The AWS credentials are configured in the remote cloud workspace, but terraform import does not appear to reference the AWS credentials from the workspace; instead it falls back to using the local credentials, which point to a different AWS account. I would like Terraform to use the credentials from the workspace variables when I run terraform import.
When I comment out the locally configured credentials, I get the error:
Error: No valid credential sources found for AWS Provider.
I would have expected terraform to use the credentials configured in the workspace.
Note that terraform is able to use the credentials correctly when I run the plan/apply commands directly from the cloud console.
Per the backends section of the import docs, plan and apply run in Terraform Cloud whereas import runs locally. Therefore, the import command will not have access to workspace credentials set in Terraform Cloud. From the docs:
In order to use Terraform import with a remote state backend, you may need to set local variables equivalent to the remote workspace variables.
So instead of running the following locally (assuming you've provided access keys to Terraform Cloud):
terraform import aws_instance.myserver i-12345
we should run for example:
export AWS_ACCESS_KEY_ID=abc
export AWS_SECRET_ACCESS_KEY=1234
terraform import aws_instance.myserver i-12345
where the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY have the same permissions as those configured in Terraform Cloud.
Note for AWS SSO users
If you are using AWS SSO and AWS CLI v2, functionality for Terraform to use the SSO credential cache was added per this AWS provider issue. The steps for importing with an SSO profile are:
Ensure you've performed a login and have an active session with e.g. aws sso login --profile my-profile
Make the profile name available to terraform as an environment variable with e.g. AWS_PROFILE=my-profile terraform import aws_instance.myserver i-12345
If the following error is displayed, ensure you are using a version of the cli > 2.1.23:
Error: SSOProviderInvalidToken: the SSO session has expired or is invalid
│ caused by: expected RFC3339 timestamp: parsing time "2021-07-18T23:10:46UTC" as "2006-01-02T15:04:05Z07:00": cannot parse "UTC" as "Z07:00"
Use the terraform_remote_state data source, for example:
data "terraform_remote_state" "test" {
backend = "s3"
config = {
bucket = "BUCKET_NAME"
key = "BUCKET_KEY WHERE YOUR TERRAFORM.TFSTATE FILE IS PRESENT"
region = "CLOUD REGION"
}
}
Now you can refer to your provisioned resources.
Example: to get the VPC ID:
data.terraform_remote_state.test.outputs.vpc_id
Just make sure the cloud resource property you want to refer to is exported as an output and stored in the terraform.tfstate file.
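For instance, the workspace that created the VPC would need to export the value roughly like this (the resource address is just an illustration):

# In the configuration whose state is being read:
output "vpc_id" {
  value = aws_vpc.example_vpc.id # placeholder resource address
}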

How to share Terraform variables across workspaces/modules?

Terraform Cloud Workspaces allow me to define variables, but I'm unable to find a way to share variables across more than one workspace.
In my example I have, let's say, two workspaces:
Database
Application
In both cases I'll be using the same AzureRM credentials for connectivity. The following are common values used by the workspaces to connect to my Azure subscription:
provider "azurerm" {
subscription_id = "00000000-0000-0000-0000-000000000000"
client_id = "00000000-0000-0000-0000-000000000000"
client_secret = "00000000000000000000000000000000"
tenant_id = "00000000-0000-0000-0000-000000000000"
}
It wouldn't make sense to duplicate values (in my case I'll have probably 10 workspaces).
Is there a way to do this?
Or the correct approach is to define "database" and "application" as a Module, and then use Workspaces (DEV, QA, PROD) to orchestrate them?
In Terraform Cloud, the Workspace object is currently the least granular location where you can specify variable values directly. There is no built in mechanism to share variable values between workspaces.
However, one way to approach this would be to manage Terraform Cloud with Terraform itself. The tfe provider (named after Terraform Enterprise for historical reasons, since it was built before Terraform Cloud launched) will allow Terraform to manage Terraform Cloud workspaces and their associated variables.
variable "workspaces" {
type = set(string)
}
variable "common_environment_variables" {
type = map(string)
}
provider "tfe" {
hostname = "app.terraform.io" # Terraform Cloud
}
resource "tfe_workspace" "example" {
for_each = var.workspaces
organization = "your-organization-name"
name = each.key
}
resource "tfe_variable" "example" {
# We'll need one tfe_variable instance for each
# combination of workspace and environment variable,
# so this one has a more complicated for_each expression.
for_each = {
for pair in setproduct(var.workspaces, keys(var.common_environment_variables)) : "${pair[0]}/${pair[1]}" => {
workspace_name = pair[0]
workspace_id = tfe_workspace.example[pair[0]].id
name = pair[1]
value = var.common_environment_variables[pair[1]]
}
}
workspace_id = each.value.workspace_id
category = "env"
key = each.value.name
value = each.value.value
sensitive = true
}
With the above configuration, you can set var.workspaces to contain the names of the workspaces you want Terraform to manage and var.common_environment_variables to the environment variables you want to set for all of them.
Note that for setting credentials on a provider the recommended approach is to set them in environment variables rather than Terraform variables, because that then makes the Terraform configuration itself agnostic to how those credentials are obtained. You could potentially apply the same Terraform configuration locally (outside of Terraform Cloud) using the integration with Azure CLI auth, while the Terraform Cloud execution environment would often use a service principal.
Therefore to provide the credentials in the Terraform Cloud environment you'd put the following environment variables in var.common_environment_variables:
ARM_CLIENT_ID
ARM_TENANT_ID
ARM_SUBSCRIPTION_ID
ARM_CLIENT_SECRET
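For example, the management workspace could receive those values through a terraform.tfvars-style snippet roughly like this (all values are placeholders):

workspaces = ["Database", "Application"]

common_environment_variables = {
  ARM_CLIENT_ID       = "00000000-0000-0000-0000-000000000000"
  ARM_TENANT_ID       = "00000000-0000-0000-0000-000000000000"
  ARM_SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
  ARM_CLIENT_SECRET   = "placeholder-secret"
}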
If you use Terraform Cloud itself to run operations on this workspace managing Terraform Cloud (naturally, you'd need to set this one up manually to bootstrap, rather than having it self-manage) then you can configure var.common_environment_variables as a sensitive variable on that workspace.
If you instead set it via Terraform variables passed into the provider "azurerm" block (as you indicated in your example) then you force any person or system running the configuration to directly populate those variables, forcing them to use a service principal vs. one of the other mechanisms and preventing Terraform from automatically picking up credentials set using az login. The Terraform configuration should generally only describe what Terraform is managing, not settings related to who is running Terraform or where Terraform is being run.
Note though that the state for the Terraform Cloud self-management workspace will include a copy of those credentials, as is normal for objects Terraform is managing, so the permissions on this workspace should be set appropriately to restrict access to it.
You can now use variable sets to reuse variables across multiple workspaces.
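A rough sketch of that approach with the tfe provider's variable-set resources (names and values are placeholders, and the argument names reflect the provider as I understand it, so check the provider docs before relying on this):

resource "tfe_variable_set" "common_credentials" {
  name         = "common-azure-credentials"
  organization = "your-organization-name"
  global       = true # or attach it to selected workspaces instead
}

resource "tfe_variable" "arm_client_id" {
  key             = "ARM_CLIENT_ID"
  value           = "00000000-0000-0000-0000-000000000000" # placeholder
  category        = "env"
  sensitive       = true
  variable_set_id = tfe_variable_set.common_credentials.id
}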

How to use environment variables for secrets in Terraform

I am trying to configure Terraform so it uses environment variables for AWS Secrets.
terraform.tfvars:
access_key = "${var.TF_VAR_AWS_AK}"
secret_key = "${var.TF_VAR_AWS_SK}"
aws_region = "eu-north-1"
main.tf:
provider "aws" {
region = "${var.aws_region}"
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
}
In the console (it's on Windows 10):
set TF_VAR_AWS_AK = asd12345asd12345
set TF_VAR_AWS_SK = asd12345asd12345
terraform plan
Error messages:
Error: Variables not allowed
on terraform.tfvars line 1:
1: access_key = "${var.TF_VAR_AWS_AK}"
Variables may not be used here.
Error: Variables not allowed
on terraform.tfvars line 2:
2: secret_key = "${var.TF_VAR_AWS_SK}"
Variables may not be used here.
Not sure where the problem is. TF docs say it is possible to use env vars for secrets.
To configure providers and backends with environment variables, you don't need to write anything special in the configuration at all. Instead, you can just set the conventional environment variables related to the provider in question.
For example, you seem to be using AWS in which case you can use either the AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY environment variables or you can populate a credentials file, the same as for the AWS SDK. You can then skip all of the declaration of variables and just reduce your provider block as follows:
provider "aws" {
region = "${var.aws_region}"
}
Terraform's AWS provider supports the same set of credentials sources that the AWS CLI does, without any Terraform-specific configuration. That is the recommended way to configure credentials for the AWS provider, because then you only need to set up your AWS credentials once and you can use the AWS SDK, Terraform, and any other software that interacts with AWS and supports its conventions.
There's more information on the AWS provider authentication options in the AWS provider documentation.
As described in the documentation: https://www.terraform.io/docs/configuration/variables.html#environment-variables
The environment variable names must be TF_VAR_<yourtfvariablename>.
With a Terraform variable like this:
variable "aws_region" {
type = string
}
Your environment variable name must be TF_VAR_aws_region.
There is no way to reference environment variables directly in Terraform (e.g. region = env.AWS_REGION); you must use the TF_VAR_ prefix to pass environment variables in as input variables.
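Putting that together, a minimal sketch of the intended pattern for the original question (values are placeholders; note there must be no spaces around = when using set on Windows):

# variables.tf -- declare the variable Terraform should read from TF_VAR_aws_region
variable "aws_region" {
  type = string
}

# main.tf -- no credentials in the configuration; the AWS provider reads
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the environment.
provider "aws" {
  region = var.aws_region
}

Then in the Windows console:

set TF_VAR_aws_region=eu-north-1
set AWS_ACCESS_KEY_ID=asd12345asd12345
set AWS_SECRET_ACCESS_KEY=asd12345asd12345
terraform plan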
