Using GitHub Actions with Terraform - terraform

I am using GitHub Actions to deploy my code with Terraform. Whenever code is pushed to the Testing branch, a GitHub Action is triggered that builds the code and runs terraform apply. This works well.
The problem is that now I want a Prod environment too. Whenever code is pushed to the Prod branch, it should be deployed using its own S3 remote backend and its own AWS account. What I'm not sure about is how to configure my Terraform files so that the Terraform GitHub Action uses the Prod backend to store the state file. Can anyone help? Here is a sample of my code:
name: "Terraform-Apply-Action"
on:
push:
branches:
- prod
jobs:
terraform:
name: "Terraform"
runs-on: ubuntu-latest
env:
AWS_DEFAULT_REGION: "us-east-1"
AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY}}
steps:
- name: Checkout
uses: actions/checkout#v2
- name: Setup Terraform
uses: hashicorp/setup-terraform#v1
- name: Terraform Init
id: Init
run: terraform init
- name: Terraform Plan
id: plan
if: github.event_name == 'push'
run: terraform plan -no-color
continue-on-error: true
- name: Terraform Plan Status
if: steps.plan.outcome == 'failure'
run: exit 1
- name: Terraform Apply
run: terraform apply -auto-approve

The S3 backend key for Terraform needs to be different across branches if you are doing it like that.
Testing:

terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "testing/path/to/my/key"
    region = "us-east-1"
  }
}

Prod:

terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "prod/path/to/my/key"
    region = "us-east-1"
  }
}
There are numerous other ways to solve this - but this is a very quick and easy solution.
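One of those other ways, especially if Prod should also use its own bucket and AWS account, is Terraform's partial backend configuration: leave the backend block empty and pass the environment-specific settings at init time. A minimal sketch; the per-environment file names (testing.s3.tfbackend, prod.s3.tfbackend) and the branch check are assumptions, not part of the original question.

backend.tf (shared by both environments):

terraform {
  backend "s3" {}
}

prod.s3.tfbackend (one such settings file per environment):

bucket = "my-prod-bucket"
key    = "prod/path/to/my/key"
region = "us-east-1"

Workflow Init step, replacing the plain terraform init above:

      - name: Terraform Init
        id: init
        run: |
          # Pick the backend settings for the branch that triggered the run.
          if [ "${GITHUB_REF_NAME}" = "prod" ]; then
            terraform init -backend-config=prod.s3.tfbackend
          else
            terraform init -backend-config=testing.s3.tfbackend
          fi

The Prod AWS credentials would then come from their own repository secrets (or a separate GitHub environment), so each branch authenticates against the right account.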

Related

gitlab variables not fetching CI_VARIABLES when in multiple accounts/stages

I have a GitLab repository and am currently trying to set up CI/CD by following this link: https://www.nvisia.com/insights/terraform-automation-with-gitlab-aws. I was able to complete most of the tasks; the gitlab-runner is set up on an EC2 instance and is running. In the gitlab-ci.yaml I have deploy_staging and deploy_production stages. I have also created the required CI variables in the GitLab console as per the URL.
I have also created local variables in my gitlab-ci.yaml so as to pass these values to terraform init -backend-config.
The issue I'm facing is that the pipeline works perfectly fine when deployed to the AWS Staging account, but it doesn't show/replace the variables with actual values when testing against the AWS Production account. Not sure if it's the Terraform version I'm running, v0.11.7.
I would appreciate any help, since I've been stuck here for a day with no clue.
Console screenshot when deployed to STAGING ACCOUNT showing correct values in echo
Console screenshot when deployed to PRODUCTION ACCOUNT
The gitlab-ci.yaml is as follows:
cache:
  paths:
    - .terraform/plugins

stages:
  - plan
  - deploy

deploy_staging:
  stage: deploy
  tags:
    - terraform
  variables:
    TF_VAR_DEPLOY_INTO_ACCOUNT_ID: ${STAGING_ACCOUNT_ID}
    TF_VAR_ASSUME_ROLE_EXTERNAL_ID: ${STAGING_ASSUME_ROLE_EXTERNAL_ID}
    TF_VAR_AWS_REGION: ${STAGING_AWS_REGION}
    TF_VAR_AWS_BKT_KEY: ${STAGING_TERRAFORM_BUCKET_KEY}
  script:
    - echo ${TF_VAR_DEPLOY_INTO_ACCOUNT_ID}
    - echo ${TF_VAR_AWS_REGION}
    - echo ${TF_VAR_ASSUME_ROLE_EXTERNAL_ID}
    - aws configure list
    - terraform init -backend-config="bucket=${STAGING_TERRAFORM_S3_BUCKET}"
      -backend-config="key=${TF_VAR_AWS_BKT_KEY}"
      -backend-config="region=${TF_VAR_AWS_REGION}"
      -backend-config="role_arn=arn:aws:iam::${TF_VAR_DEPLOY_INTO_ACCOUNT_ID}:role/S3BackendRole"
      -backend-config="external_id=${TF_VAR_ASSUME_ROLE_EXTERNAL_ID}"
      -backend-config="session_name=TerraformBackend" terraform-configuration
    - terraform apply -auto-approve -input=false terraform-configuration
  environment:
    name: staging
    url: https://staging.example.com
    on_stop: stop_staging
  only:
    variables:
      - $DEPLOY_TO == "staging"

stop_staging:
  stage: deploy
  tags:
    - terraform
  variables:
    TF_VAR_DEPLOY_INTO_ACCOUNT_ID: ${STAGING_ACCOUNT_ID}
    TF_VAR_ASSUME_ROLE_EXTERNAL_ID: ${STAGING_ASSUME_ROLE_EXTERNAL_ID}
    TF_VAR_AWS_REGION: ${STAGING_AWS_REGION}
    TF_VAR_AWS_BKT_KEY: ${STAGING_TERRAFORM_BUCKET_KEY}
  script:
    - terraform init -backend-config="bucket=${STAGING_TERRAFORM_S3_BUCKET}"
      -backend-config="key=${TF_VAR_AWS_BKT_KEY}"
      -backend-config="region=${TF_VAR_AWS_REGION}"
      -backend-config="role_arn=arn:aws:iam::${TF_VAR_DEPLOY_INTO_ACCOUNT_ID}:role/S3BackendRole"
      -backend-config="external_id=${TF_VAR_ASSUME_ROLE_EXTERNAL_ID}"
      -backend-config="session_name=TerraformBackend" terraform-configuration
    - terraform destroy -input=false -auto-approve terraform-configuration
  when: manual
  environment:
    name: staging
    action: stop
  only:
    variables:
      - $DEPLOY_TO == "staging"

plan_production:
  stage: plan
  tags:
    - terraform
  variables:
    PTF_VAR_DEPLOY_INTO_ACCOUNT_ID: "${PRODUCTION_ACCOUNT_ID}"
    PTF_VAR_ASSUME_ROLE_EXTERNAL_ID: "${PRODUCTION_ASSUME_ROLE_EXTERNAL_ID}"
    PTF_VAR_AWS_REGION: "${PRODUCTION_AWS_REGION}"
    PTF_VAR_AWS_BKT_KEY: "${PRODUCTION_TERRAFORM_BUCKET_KEY}"
  artifacts:
    paths:
      - production_plan.txt
      - production_plan.bin
    expire_in: 1 week
  script:
    - echo "${PTF_VAR_DEPLOY_INTO_ACCOUNT_ID}"
    - echo "${PTF_VAR_AWS_REGION}"
    - echo "${PTF_VAR_ASSUME_ROLE_EXTERNAL_ID}"
    - echo "${PTF_VAR_AWS_BKT_KEY}"
    - aws configure list
    - terraform init -backend-config="bucket=${PRODUCTION_TERRAFORM_S3_BUCKET}"
      -backend-config="key=${PTF_VAR_AWS_BKT_KEY}"
      -backend-config="region=${PTF_VAR_AWS_REGION}"
      -backend-config="role_arn=arn:aws:iam::${PTF_VAR_DEPLOY_INTO_ACCOUNT_ID}:role/S3BackendRole"
      -backend-config="external_id=${PTF_VAR_ASSUME_ROLE_EXTERNAL_ID}"
      -backend-config="session_name=TerraformBackend" terraform-configuration
    - terraform plan -input=false -out=production_plan.bin terraform-configuration
    - terraform plan -no-color production_plan.bin > production_plan.txt
  only:
    variables:
      - $DEPLOY_TO == "production"

deploy_production:
  stage: deploy
  when: manual
  tags:
    - terraform
  variables:
    TF_VAR_DEPLOY_INTO_ACCOUNT_ID: ${PRODUCTION_ACCOUNT_ID}
    TF_VAR_ASSUME_ROLE_EXTERNAL_ID: ${PRODUCTION_ASSUME_ROLE_EXTERNAL_ID}
    TF_VAR_AWS_REGION: ${PRODUCTION_AWS_REGION}
    TF_VAR_AWS_BKT_KEY: ${PRODUCTION_TERRAFORM_BUCKET_KEY}
  script:
    - terraform init -backend-config="bucket=${PRODUCTION_TERRAFORM_S3_BUCKET}"
      -backend-config="key=${TF_VAR_AWS_BKT_KEY}"
      -backend-config="region=${TF_VAR_AWS_REGION}"
      -backend-config="role_arn=arn:aws:iam::${TF_VAR_DEPLOY_INTO_ACCOUNT_ID}:role/S3BackendRole"
      -backend-config="external_id=${TF_VAR_ASSUME_ROLE_EXTERNAL_ID}"
      -backend-config="session_name=TerraformBackend" terraform-configuration
    - terraform apply -auto-approve -input=false production_plan.bin
  environment:
    name: production
    url: https://production.example.com
    on_stop: stop_production
  only:
    variables:
      - $DEPLOY_TO == "production"

stop_production:
  stage: deploy
  tags:
    - terraform
  variables:
    TF_VAR_DEPLOY_INTO_ACCOUNT_ID: ${PRODUCTION_ACCOUNT_ID}
    TF_VAR_ASSUME_ROLE_EXTERNAL_ID: ${PRODUCTION_ASSUME_ROLE_EXTERNAL_ID}
    TF_VAR_AWS_REGION: ${PRODUCTION_AWS_REGION}
    TF_VAR_AWS_BKT_KEY: ${PRODUCTION_TERRAFORM_BUCKET_KEY}
  script:
    - terraform init -backend-config="bucket=${PRODUCTION_TERRAFORM_S3_BUCKET}"
      -backend-config="key=${TF_VAR_AWS_BKT_KEY}"
      -backend-config="region=${TF_VAR_AWS_REGION}"
      -backend-config="role_arn=arn:aws:iam::${TF_VAR_DEPLOY_INTO_ACCOUNT_ID}:role/S3BackendRole"
      -backend-config="external_id=${TF_VAR_ASSUME_ROLE_EXTERNAL_ID}"
      -backend-config="session_name=TerraformBackend" terraform-configuration
    - terraform destroy -input=false -auto-approve terraform-configuration
  when: manual
  environment:
    name: production
    action: stop
  only:
    variables:
      - $DEPLOY_TO == "production"
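One way to narrow down where the values disappear is to echo the source PRODUCTION_* CI/CD variables directly, before any job-level mapping. The job below is a debugging sketch only, not part of the original pipeline:

debug_production_vars:
  stage: plan
  tags:
    - terraform
  script:
    # If these print nothing, the PRODUCTION_* variables are not reaching this
    # pipeline at all (for example because they are marked protected while the
    # branch is not protected, or are scoped to a different environment).
    # If they do print values, the problem is in the job-level variable mapping.
    - echo "${PRODUCTION_ACCOUNT_ID}"
    - echo "${PRODUCTION_AWS_REGION}"
    - echo "${PRODUCTION_ASSUME_ROLE_EXTERNAL_ID}"
    - echo "${PRODUCTION_TERRAFORM_BUCKET_KEY}"
  only:
    variables:
      - $DEPLOY_TO == "production"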

How to use terraform debugging in a github action workflow?

When my Terraform code fails locally, I can see a detailed error message explaining why it failed and, with that information, fix it. However, when the same Terraform code fails in a GitHub Actions workflow, it doesn't give a detailed reason for the failure, only "error exit code 1". How can I enable Terraform debugging at the workflow level so that it produces a detailed log only when a step fails? I don't want to configure debugging at the repository level.
name: Testing push to branches

on:
  push:
    branches-ignore:
      - main

env:
  TERRAFORM_VERSION: "latest"
  TERRAGRUNT_VERSION: "latest"
  TERRAFORM_WORKING_DIR: './test'

jobs:
  plan:
    name: "Terragrunt Plan"
    runs-on: ubuntu-20.04
    defaults:
      run:
        working-directory: ${{ env.TERRAFORM_WORKING_DIR }}
    steps:
      - name: 'Checkout'
        uses: actions/checkout@v2
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1.3.2
        with:
          terraform_version: ${{ env.TERRAFORM_VERSION }}
          terraform_wrapper: true
      - name: Setup Terragrunt
        uses: autero1/action-terragrunt@v1.1.0
        with:
          terragrunt_version: ${{ env.TERRAGRUNT_VERSION }}
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1.6.1
        with:
          aws-region: us-east-1
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      - name: Terragrunt Init
        id: init
        run: terragrunt run-all init -no-color --terragrunt-non-interactive
      - name: Terragrunt Validate
        id: validate
        run: terragrunt run-all validate -no-color --terragrunt-non-interactive
      - name: Terragrunt Plan
        id: plan
        run: terragrunt run-all plan -no-color --terragrunt-non-interactive
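One way to get a detailed log only when a step fails is the TF_LOG environment variable, which Terraform reads (and which Terragrunt passes through to Terraform): add a follow-up step that re-runs the failing command with debug logging enabled. Below is a minimal sketch against the plan step above; the extra step and the re-run approach are assumptions, not something from the original workflow:

      - name: Terragrunt Plan (debug re-run on failure)
        # Runs only when the plan step above failed; TF_LOG=DEBUG makes this
        # single re-run emit detailed Terraform core/provider logs without
        # enabling debug output for the rest of the workflow or repository.
        if: failure() && steps.plan.outcome == 'failure'
        env:
          TF_LOG: DEBUG
        run: terragrunt run-all plan -no-color --terragrunt-non-interactive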

Github Actions Terraform Init initialized empty directory

I am new to GitHub Actions and to writing YAML files.
Currently, I have set up Terraform Cloud with GitHub Actions for my Datadog POC.
I ran into this issue:
terraform init
/home/runner/work/_temp/85297372-6fed-4b1d-88f8-3c6b5527569f/terraform-bin init
Terraform initialized in an empty directory!
The directory has no Terraform configuration files. You may begin working
with Terraform immediately by creating Terraform configuration files.
The current GitHub Actions YAML file is below; I am using the standard Terraform GitHub Actions workflow file:
name: 'Terraform'

on:
  push:
    branches:
      - "main"
  pull_request:

permissions:
  contents: read

jobs:
  terraform:
    name: 'Terraform'
    runs-on: ubuntu-latest
    environment: production

    # Use the Bash shell regardless whether the GitHub Actions runner is ubuntu-latest, macos-latest, or windows-latest
    defaults:
      run:
        shell: bash

    steps:
      # Checkout the repository to the GitHub Actions runner
      - name: Checkout
        uses: actions/checkout@v3

      # Install the latest version of Terraform CLI and configure the Terraform CLI configuration file with a Terraform Cloud user API token
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1
        with:
          cli_config_credentials_token: ${{ secrets.TF_API_TOKEN }}

      # Initialize a new or existing Terraform working directory by creating initial files, loading any remote state, downloading modules, etc.
      - name: Terraform Init
        run: terraform init

      # Checks that all Terraform configuration files adhere to a canonical format
      - name: Terraform Format
        run: terraform fmt -check

      # Generates an execution plan for Terraform
      - name: Terraform Plan
        run: terraform plan -input=false

      # On push to "main", build or change infrastructure according to Terraform configuration files
      # Note: It is recommended to set up a required "strict" status check in your repository for "Terraform Cloud". See the documentation on "strict" required status checks for more information: https://help.github.com/en/github/administering-a-repository/types-of-required-status-checks
      - name: Terraform Apply
        if: github.ref == 'refs/heads/"main"' && github.event_name == 'push'
        run: terraform apply -auto-approve -input=false
What should I do to fix this?
I tried changing the directory used for terraform init to:
run:
  working-directory: ./DataDog-Demo/terraform
but I also received an error.
Thank you.
You will have to cd into the directory which has all the Terraform files. Something like this:

- name: Build Docker image
  run: |
    cd dir

Alternatively, you can also set the working directory like this:

steps:
  - uses: actions/checkout@v1
  - name: Setup and run tests
    working-directory: ./app
    run: |
      cp .env .env

OR

jobs:
  unit:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ./app
    steps:
      - uses: actions/checkout@v1
      - name: Do stuff
https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idstepsrun
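Applied to the layout mentioned in the question, and assuming the Terraform configuration really lives under ./DataDog-Demo/terraform in the repository, the job-level default from the answer above would look roughly like this (a sketch, not a confirmed fix):

jobs:
  terraform:
    name: 'Terraform'
    runs-on: ubuntu-latest
    environment: production
    defaults:
      run:
        shell: bash
        # Run every `run:` step (init, fmt, plan, apply) from the directory
        # that contains the .tf files instead of the repository root.
        working-directory: ./DataDog-Demo/terraform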

How to set KUBECONFIG from terraform cloud generate file in github actions

I am trying to set up GitHub Actions to run CI with Terraform and Kubernetes. I am connecting to Terraform Cloud to run the terraform commands, and it appears to generate the kubeconfig during the apply process. I get this in the output:
local_file.kubeconfig: Creation complete after 0s
In the next step, I try to run kubectl to see the resources that were built, but the command fails because it can't find the configuration file. Specifically:
error: Missing or incomplete configuration info.
So my question is, how do I use the newly generated local_file.kubeconfig in my kubectl commands?
My first attempt was to expose the KUBECONFIG as an environment variable in the github action step, but I didn't know how I would get the value from terraform cloud into the github actions. So instead, I tried to set the variable in my terraform file with a provisioner definition. But this doesn't seem to work.
Is there an easier way to load that value?
GitHub Actions steps:

steps:
  - name: Checkout code
    uses: actions/checkout@v2
    with:
      ref: 'privatebeta-kubes'
  - name: Setup Terraform
    uses: hashicorp/setup-terraform@v1
    with:
      cli_config_credentials_token: ${{ secrets.TERRAFORM_API_TOKEN }}
  - name: Terraform Init
    run: terraform init
  - name: Terraform Format Check
    run: terraform fmt -check -v
  - name: Terraform Plan
    run: terraform plan
    env:
      LINODE_TOKEN: ${{ secrets.LINODE_TOKEN }}
  - name: Terraform Apply
    run: terraform apply -auto-approve
    env:
      LINODE_TOKEN: ${{ secrets.LINODE_TOKEN }}
  # this step fails because kubectl can't find the token
  - name: List kube nodes
    run: kubectl get nodes
and my main.tf file has this definition:

provider "kubernetes" {
  kubeconfig = "${local_file.kubeconfig.content}"
}
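For reference, the environment-variable route mentioned in the question could be sketched like this, assuming the apply actually executes on the GitHub runner. With Terraform Cloud remote execution, local_file is created on Terraform Cloud's workers, so the file never lands on the runner; the workspace would need local execution mode, or the kubeconfig would have to be exposed another way. The file name below is an assumption and must match the filename argument of the local_file.kubeconfig resource:

  # Added after the Terraform Apply step: point kubectl at the generated file.
  - name: Export KUBECONFIG
    run: echo "KUBECONFIG=$(pwd)/kubeconfig.yaml" >> $GITHUB_ENV
  - name: List kube nodes
    run: kubectl get nodes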

Error: Apply not allowed for workspaces with a VCS connection

Error: Apply not allowed for workspaces with a VCS connection
I am getting this error when trying to apply a Terraform plan via GitHub Actions.
Github Action (terraform apply)
- name: Terraform Apply Dev
  id: apply_dev
  if: github.ref == 'refs/heads/master' && github.event_name == 'push'
  run: TF_WORKSPACE=dev terraform apply -auto-approve deployment/
Terraform workspace
The workspace was created on Terraform Cloud as a Version control workflow and is called app-infra-dev
Terraform backend
# The configuration for the `remote` backend.
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "my-org-name"

    workspaces {
      prefix = "app-infra-"
    }
  }
}
So because I called my workspace app-infra-dev, my workspace prefix in the backend file is app-infra- and TF_WORKSPACE=dev is set in my GH Action. I would have hoped that would have been enough to make it work.
Thanks for any help!
Your workspace type must be "API driven workflow".
https://learn.hashicorp.com/tutorials/terraform/github-actions
I had the same issue because I initially created it as "Version control workflow", which makes sense, but it doesn't work as expected.
Extracted from the documentation:
In the UI and VCS workflow, every workspace is associated with a specific branch of a VCS repo of Terraform configurations. Terraform Cloud registers webhooks with your VCS provider when you create a workspace, then automatically queues a Terraform run whenever new commits are merged to that branch of the workspace's linked repository.
https://www.terraform.io/docs/cloud/run/ui.html#summary
Instead of if: github.ref == 'refs/heads/master' && github.event_name == 'push', you might consider triggering the apply on the GitHub event itself, as in this example:
name: terraform apply

# Controls when the action will run.
on:
  # Triggers the workflow on push or pull request events but only for the main branch
  push:
    branches: [ master ]
In that example, you can see the terraform apply used at the end of a terraform command sequence:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  apply:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
      - uses: hashicorp/setup-terraform@v1
        with:
          terraform_wrapper: true
          terraform_version: 0.14.0
      # Runs a single command using the runners shell
      - name: create credentials
        run: echo "$GOOGLE_APPLICATION_CREDENTIALS" > credentials.json
        env:
          GOOGLE_APPLICATION_CREDENTIALS: ${{ secrets.GOOGLE_APPLICATION_CREDENTIALS }}
      - name: export GOOGLE_APPLICATION_CREDENTIALS
        run: |
          echo "GOOGLE_APPLICATION_CREDENTIALS=`pwd`/credentials.json" >> $GITHUB_ENV
      - name: terraform init
        run: terraform init
      - name: terraform workspace new
        run: terraform workspace new dev-tominaga
        continue-on-error: true
      - name: terraform workspace select
        run: terraform workspace select dev-tominaga
        continue-on-error: true
      - name: terraform init
        run: terraform init
      - name: terraform workspace show
        run: terraform workspace show
      - name: terraform apply
        id: apply
        run: terraform apply -auto-approve
Check if you can adapt that to your workflow.
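If you keep the prefix-based remote backend from the question, the same idea can also be written by exporting TF_WORKSPACE for the whole job instead of prefixing each command. This is a sketch only; it still requires the app-infra-dev workspace to be of the API-driven type, as the first answer explains, and TF_API_TOKEN is an assumed secret name:

jobs:
  apply:
    runs-on: ubuntu-latest
    env:
      # With `prefix = "app-infra-"` in the backend block, this selects the
      # Terraform Cloud workspace named app-infra-dev.
      TF_WORKSPACE: dev
    steps:
      - uses: actions/checkout@v2
      - uses: hashicorp/setup-terraform@v1
        with:
          cli_config_credentials_token: ${{ secrets.TF_API_TOKEN }}
      - run: terraform init
      - run: terraform apply -auto-approve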
