I am new to GitHub Actions and to writing YAML files.
Currently, I am setting up Terraform Cloud with GitHub Actions for my Datadog POC.
I ran into this issue:
terraform init
/home/runner/work/_temp/85297372-6fed-4b1d-88f8-3c6b5527569f/terraform-bin init
Terraform initialized in an empty directory!
The directory has no Terraform configuration files. You may begin working
with Terraform immediately by creating Terraform configuration files.
My current GitHub Actions YAML file, based on the standard Terraform workflow, is:
name: 'Terraform'

on:
  push:
    branches:
      - "main"
  pull_request:

permissions:
  contents: read

jobs:
  terraform:
    name: 'Terraform'
    runs-on: ubuntu-latest
    environment: production

    # Use the Bash shell regardless of whether the GitHub Actions runner is ubuntu-latest, macos-latest, or windows-latest
    defaults:
      run:
        shell: bash

    steps:
      # Checkout the repository to the GitHub Actions runner
      - name: Checkout
        uses: actions/checkout@v3

      # Install the latest version of Terraform CLI and configure the Terraform CLI configuration file with a Terraform Cloud user API token
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1
        with:
          cli_config_credentials_token: ${{ secrets.TF_API_TOKEN }}

      # Initialize a new or existing Terraform working directory by creating initial files, loading any remote state, downloading modules, etc.
      - name: Terraform Init
        run: terraform init

      # Checks that all Terraform configuration files adhere to a canonical format
      - name: Terraform Format
        run: terraform fmt -check

      # Generates an execution plan for Terraform
      - name: Terraform Plan
        run: terraform plan -input=false

      # On push to "main", build or change infrastructure according to Terraform configuration files
      # Note: It is recommended to set up a required "strict" status check in your repository for "Terraform Cloud". See the documentation on "strict" required status checks for more information: https://help.github.com/en/github/administering-a-repository/types-of-required-status-checks
      - name: Terraform Apply
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        run: terraform apply -auto-approve -input=false
What should I do to fix this?
I tried changing the directory used by terraform init with
  run:
    working-directory: ./DataDog-Demo/terraform
but I also received an error.
Thank you
You will have to cd into the directory that contains all the Terraform files. Something like this:
- name: Terraform Init
  run: |
    cd dir
    terraform init
Alternatively, you can also set the working directory like this:
steps:
  - uses: actions/checkout@v1
  - name: Setup and run tests
    working-directory: ./app
    run: |
      cp .env .env
Or:
jobs:
  unit:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ./app
    steps:
      - uses: actions/checkout@v1
      - name: Do stuff
https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idstepsrun
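Applied to the workflow in the question, a minimal sketch could look like the following (this assumes the Terraform configuration really lives under ./DataDog-Demo/terraform, as mentioned in the question; adjust the path to your repository layout):
jobs:
  terraform:
    name: 'Terraform'
    runs-on: ubuntu-latest
    environment: production
    defaults:
      run:
        shell: bash
        # Run every terraform command from the folder that contains the .tf files
        working-directory: ./DataDog-Demo/terraform
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v1
        with:
          cli_config_credentials_token: ${{ secrets.TF_API_TOKEN }}
      - name: Terraform Init
        run: terraform init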
Related
I am trying to set up GitHub Actions to run a CI with Terraform and Kubernetes. I am connecting to Terraform Cloud to run the Terraform commands, and it appears to be generating the kubeconfig during the apply process. I get this in the output:
local_file.kubeconfig: Creation complete after 0s
In the next step, I try to run kubectl to see the resources that were built, but the command fails because it can't find the configuration file. Specifically:
error: Missing or incomplete configuration info.
So my question is, how do I use the newly generated local_file.kubeconfig in my kubectl commands?
My first attempt was to expose KUBECONFIG as an environment variable in the GitHub Actions step, but I didn't know how to get the value from Terraform Cloud into the GitHub Actions workflow. So instead, I tried to set the variable in my Terraform file with a provisioner definition, but this doesn't seem to work.
Is there an easier way to load that value?
GitHub Actions steps:
steps:
  - name: Checkout code
    uses: actions/checkout@v2
    with:
      ref: 'privatebeta-kubes'
  - name: Setup Terraform
    uses: hashicorp/setup-terraform@v1
    with:
      cli_config_credentials_token: ${{ secrets.TERRAFORM_API_TOKEN }}
  - name: Terraform Init
    run: terraform init
  - name: Terraform Format Check
    run: terraform fmt -check -v
  - name: Terraform Plan
    run: terraform plan
    env:
      LINODE_TOKEN: ${{ secrets.LINODE_TOKEN }}
  - name: Terraform Apply
    run: terraform apply -auto-approve
    env:
      LINODE_TOKEN: ${{ secrets.LINODE_TOKEN }}
  # this step fails because kubectl can't find the token
  - name: List kube nodes
    run: kubectl get nodes
and my main.tf file has this definition:
provider "kubernetes" {
  kubeconfig = "${local_file.kubeconfig.content}"
}
I am trying to get terraform to perform terraform init in a specific root directory, but somehow the pipeline doesn't recognize it. Might there be something wrong with the structure of my gitlab-ci.yml file? I have tried moving everything to the root directory, which works fine, but I'd like to have a bit of a folder structure in the repository, in order to make it more readable for future developers.
default:
  tags:
    - aws
  image:
    name: hashicorp/terraform:light
    entrypoint:
      - '/usr/bin/env'
      - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

variables:
  # If not using GitLab's HTTP backend, remove this line and specify TF_HTTP_* variables
  TF_STATE_NAME: default
  TF_CACHE_KEY: default
  # If your terraform files are in a subdirectory, set TF_ROOT accordingly
  TF_ROOT: ./src/envs/infrastruktur

before_script:
  - rm -rf .terraform
  - terraform --version
  - export AWS_ACCESS_KEY_ID
  - export AWS_ROLE_ARN
  - export AWS_DEFAULT_REGION
  - export AWS_ROLE_ARN

stages:
  - init
  - validate
  - plan
  - pre-apply

init:
  stage: init
  script:
    - terraform init
Everything is fine up to and including the validate stage, but as soon as the pipeline reaches the plan stage, it says that it cannot find any config files.
validate:
  stage: validate
  script:
    - terraform validate

plan:
  stage: plan
  script:
    - terraform plan -out "planfile"
  dependencies:
    - validate
  artifacts:
    paths:
      - planfile

apply:
  stage: pre-apply
  script:
    - terraform apply -input=false "planfile"
  dependencies:
    - plan
  when: manual
You need to cd into your configuration folder in every job, and after each job you need to pass the contents of ./src/envs/infrastruktur (the directory Terraform operates in) to the next job via artifacts. I have omitted the remainder of your pipeline for brevity.
before_script:
  - rm -rf .terraform
  - terraform --version
  - cd $TF_ROOT
  - export AWS_ACCESS_KEY_ID
  - export AWS_ROLE_ARN
  - export AWS_DEFAULT_REGION
  - export AWS_ROLE_ARN

stages:
  - init
  - validate
  - plan
  - pre-apply

init:
  stage: init
  script:
    - terraform init
  artifacts:
    paths:
      - $TF_ROOT

validate:
  stage: validate
  script:
    - terraform validate
  artifacts:
    paths:
      - $TF_ROOT

plan:
  stage: plan
  script:
    - terraform plan -out "planfile"
  dependencies:
    - validate
  artifacts:
    paths:
      - planfile
      - $TF_ROOT
I'm trying to set up Terraform validation on GitLab CI.
However, the build fails with the error: "Terraform has no command named "sh". Did you mean "show"?"
Why does this happen, and how can it be fixed?
My .gitlab-ci.yml
image: hashicorp/terraform:light

before_script:
  - terraform init

validate:
  script:
    - terraform validate
You need to override the entrypoint in the terraform image so you have access to the shell.
image:
  name: hashicorp/terraform:light
  entrypoint: [""]

before_script:
  - terraform init

validate:
  script:
    - terraform validate
You can also take a look at the official GitLab documentation on how to integrate Terraform with GitLab, as they have a template for that.
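As a rough sketch, including GitLab's built-in Terraform template looks something like this (the exact template name and its contents depend on your GitLab version, so treat this as an assumption to verify against the documentation):
include:
  # Pulls in preconfigured Terraform jobs with the correct image and entrypoint
  - template: Terraform/Base.gitlab-ci.yml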
Error: Apply not allowed for workspaces with a VCS connection
I am getting this error when trying to apply a Terraform plan via GitHub Actions.
GitHub Action (terraform apply):
- name: Terraform Apply Dev
  id: apply_dev
  if: github.ref == 'refs/heads/master' && github.event_name == 'push'
  run: TF_WORKSPACE=dev terraform apply -auto-approve deployment/
Terraform workspace
The workspace was created on Terraform Cloud as a Version control workflow and is called app-infra-dev
Terraform backend
# The configuration for the `remote` backend.
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "my-org-name"

    workspaces {
      prefix = "app-infra-"
    }
  }
}
So because I called my workspace app-infra-dev, my workspace prefix in the backend file is app-infra- and TF_WORKSPACE=dev is set in my GH Action. I would have hoped that would have been enough to make it work.
Thanks for any help!
Your workspace type must be "API driven workflow".
https://learn.hashicorp.com/tutorials/terraform/github-actions
I had the same issue because I initially created it as "Version control workflow", which makes sense, but it doesn't work as expected.
Extracted from the documentation:
In the UI and VCS workflow, every workspace is associated with a
specific branch of a VCS repo of Terraform configurations. Terraform
Cloud registers webhooks with your VCS provider when you create a
workspace, then automatically queues a Terraform run whenever new
commits are merged to that branch of the workspace's linked repository.
https://www.terraform.io/docs/cloud/run/ui.html#summary
Instead of if: github.ref == 'refs/heads/master' && github.event_name == 'push', you might consider triggering the apply on the GitHub event itself, as in this example:
name: terraform apply

# Controls when the action will run.
on:
  # Triggers the workflow on push or pull request events but only for the main branch
  push:
    branches: [ master ]
In that example, you can see the terraform apply used at the end of a terraform command sequence:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "apply"
  apply:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
      - uses: hashicorp/setup-terraform@v1
        with:
          terraform_wrapper: true
          terraform_version: 0.14.0
      # Runs a single command using the runner's shell
      - name: create credentials
        run: echo "$GOOGLE_APPLICATION_CREDENTIALS" > credentials.json
        env:
          GOOGLE_APPLICATION_CREDENTIALS: ${{ secrets.GOOGLE_APPLICATION_CREDENTIALS }}
      - name: export GOOGLE_APPLICATION_CREDENTIALS
        run: |
          echo "GOOGLE_APPLICATION_CREDENTIALS=`pwd`/credentials.json" >> $GITHUB_ENV
      - name: terraform init
        run: terraform init
      - name: terraform workspace new
        run: terraform workspace new dev-tominaga
        continue-on-error: true
      - name: terraform workspace select
        run: terraform workspace select dev-tominaga
        continue-on-error: true
      - name: terraform init
        run: terraform init
      - name: terraform workspace show
        run: terraform workspace show
      - name: terraform apply
        id: apply
        run: terraform apply -auto-approve
Check if you can adapt that to your workflow.
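For example, adapted to the prefix-based backend from the question, a hedged sketch of the final steps might look like this (it assumes the app-infra-dev workspace has been switched to the API-driven workflow so that CLI applies are allowed):
- name: terraform init
  run: terraform init -input=false
  env:
    TF_WORKSPACE: dev   # "app-infra-" prefix + "dev" resolves to the app-infra-dev workspace
- name: terraform apply
  if: github.ref == 'refs/heads/master' && github.event_name == 'push'
  run: terraform apply -auto-approve deployment/
  env:
    TF_WORKSPACE: dev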
I have defined the following simple pipeline:
image:
  name: hashicorp/terraform:light
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

variables:
  PLAN: dbrest.tfplan
  STATE: dbrest.tfstate

cache:
  paths:
    - .terraform

before_script:
  - terraform --version
  - terraform init

stages:
  - validate
  - build
  - deploy
  - destroy

validate:
  stage: validate
  script:
    - terraform validate

plan:
  stage: build
  script:
    - terraform plan -state=$STATE -out=$PLAN
  artifacts:
    name: plan
    paths:
      - $PLAN
      - $STATE

apply:
  stage: deploy
  environment:
    name: production
  script:
    - terraform apply -state=$STATE -input=false $PLAN
    - terraform state show aws_instance.bastion
  dependencies:
    - plan
  when: manual
  only:
    - master

destroy:
  stage: destroy
  environment:
    name: production
  script:
    - terraform destroy -state=$STATE -auto-approve
  dependencies:
    - apply
  when: manual
  only:
    - master
When I run it, everything succeeds wonderfully, but the destroy stage doesn't in fact destroy the environment I created in the apply stage. This is what I see:
Running with gitlab-runner 10.5.0 (80b03db9)
on ip-10-74-163-110 5cf66672
Using Docker executor with image hashicorp/terraform:light ...
Pulling docker image hashicorp/terraform:light ...
Using docker image sha256:5d5c9faad78b96bb84555a584fe729260d7ff7d3fb973e105690ddc0dab48fb5 for hashicorp/terraform:light ...
Running on runner-5cf66672-project-1136-concurrent-0 via ip-10-197-79-116...
Fetching changes...
Removing .terraform/
Removing dbrest.tfplan
Removing dbrest.tfstate
HEAD is now at f798b05 Update .gitlab-ci.yml
Checking out f798b05a as master...
Skipping Git submodules setup
Checking cache for default-1...
Successfully extracted cache
$ terraform --version
Terraform v0.12.13
+ provider.aws v2.34.0
$ terraform init
Initializing the backend...
Initializing provider plugins...
The following providers do not have any version constraints in configuration,
so the latest version was installed.
To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.
* provider.aws: version = "~> 2.34"
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
$ terraform destroy -state=$STATE -auto-approve
Destroy complete! Resources: 0 destroyed.
Creating cache default-1...
.terraform: found 5 matching files
Created cache
Job succeeded
It seems obvious that something is missing in the way I call terraform destroy, but I don't know what. Can somebody shed some light on this, please?
You aren't passing the state from the apply job on to the destroy job because you haven't set up artifacts the way you did for plan -> apply. Your apply job should look like this:
apply:
  stage: deploy
  environment:
    name: production
  script:
    - terraform apply -state=$STATE -input=false $PLAN
    - terraform state show aws_instance.bastion
  artifacts:
    name: apply
    paths:
      - $STATE
  dependencies:
    - plan
  when: manual
  only:
    - master
A better solution, however, would be to avoid file-based state here and instead use proper remote state (e.g. S3 if you're using AWS); otherwise you're going to have a ton of problems later on when multiple users (including CI as a potentially self-concurrent user) run Terraform. Remote state lets you take advantage of state locking and also allows you to version the state file in case things go wrong during a Terraform operation, such as moving state as part of a refactor.
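As an illustration only (the bucket, key, table, and region below are hypothetical and not taken from the question), a remote S3 backend with DynamoDB-based state locking might look like this:
terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"  # hypothetical bucket name
    key            = "dbrest/terraform.tfstate"   # hypothetical state path
    region         = "eu-west-1"                  # hypothetical region
    dynamodb_table = "terraform-locks"            # hypothetical lock table for state locking
    encrypt        = true
  }
}
With a remote backend in place, the -state=$STATE flags and the state-file artifacts can be dropped, since every job reads and writes the same remote state.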