I am trying to get the pipeline to run terraform init in a specific root directory, but somehow it doesn't pick it up. Might there be something wrong with the structure of my gitlab-ci.yml file? I have tried moving everything to the root directory, which works fine, but I'd like to keep some folder structure in the repository to make it more readable for future developers.
default:
  tags:
    - aws
  image:
    name: hashicorp/terraform:light
    entrypoint:
      - '/usr/bin/env'
      - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

variables:
  # If not using GitLab's HTTP backend, remove this line and specify TF_HTTP_* variables
  TF_STATE_NAME: default
  TF_CACHE_KEY: default
  # If your terraform files are in a subdirectory, set TF_ROOT accordingly
  TF_ROOT: ./src/envs/infrastruktur

before_script:
  - rm -rf .terraform
  - terraform --version
  - export AWS_ACCESS_KEY_ID
  - export AWS_ROLE_ARN
  - export AWS_DEFAULT_REGION
  - export AWS_ROLE_ARN

stages:
  - init
  - validate
  - plan
  - pre-apply

init:
  stage: init
  script:
    - terraform init
Everything is fine up to and including the validate stage, but as soon as the pipeline reaches the plan stage, it says that it cannot find any config files.
validate:
  stage: validate
  script:
    - terraform validate

plan:
  stage: plan
  script:
    - terraform plan -out "planfile"
  dependencies:
    - validate
  artifacts:
    paths:
      - planfile

apply:
  stage: pre-apply
  script:
    - terraform pre-apply -input=false "planfile"
  dependencies:
    - plan
  when: manual
You need to cd into your configuration folder in every job, and after each job you need to pass the content of /src/envs/infrastruktur, where Terraform operates, on to the next job via artifacts. I omitted the remainder of your pipeline for brevity; a sketch of the apply job follows after the snippet below.
before_script:
  - rm -rf .terraform
  - terraform --version
  - cd $TF_ROOT
  - export AWS_ACCESS_KEY_ID
  - export AWS_ROLE_ARN
  - export AWS_DEFAULT_REGION
  - export AWS_ROLE_ARN

stages:
  - init
  - validate
  - plan
  - pre-apply

init:
  stage: init
  script:
    - terraform init
  artifacts:
    paths:
      - $TF_ROOT

validate:
  stage: validate
  script:
    - terraform validate
  artifacts:
    paths:
      - $TF_ROOT

plan:
  stage: plan
  script:
    - terraform plan -out "planfile"
  dependencies:
    - validate
  artifacts:
    paths:
      - planfile
      - $TF_ROOT
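For completeness, the final apply job can follow the same pattern. This is only a sketch (not tested); it assumes the planfile produced inside $TF_ROOT travels along with the plan job's artifacts and that the shared before_script has already run cd $TF_ROOT:

apply:
  stage: pre-apply
  script:
    # "planfile" was written inside $TF_ROOT by the plan job and arrives here via artifacts
    - terraform apply -input=false "planfile"
  dependencies:
    - plan
  when: manual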
I tried to run a manual apply (i.e. of the plan) for a pipeline whose build I had run a few days earlier, but the manual apply failed; it said this
I'm pretty sure this happened because I originally ran a build, then someone else created a build for the same repo and applied it in the time between me running my build and actually doing the manual apply. So I think there were two plans, but I still can't get my head around what changed in the plan in the meantime to make my dependency lock file inconsistent.
Can anybody help? I hope I've explained myself clearly.
The CI file is like this:
image:
  name: $<redacted>

variables:
  PLAN: plan.tfplan

cache:
  paths:
    - .terraform

before_script:
  - export <redacted>
  - export <redacted>
  - export <redacted>
  - terraform --version
  - terraform init -upgrade

stages:
  - validate
  - plan
  - apply
  - destroy

validate:
  stage: validate
  only:
    - branches
  except:
    - master
    - main
  script:
    - terraform validate

plan:
  stage: plan
  script:
    - terraform plan -out=$PLAN
    - echo \`\`\`diff > plan.txt
    - terraform show -no-color ${PLAN} | tee -a plan.txt
    - echo \`\`\` >> plan.txt
    - sed -i -e 's/ +/+/g' plan.txt
    - sed -i -e 's/ ~/~/g' plan.txt
    - sed -i -e 's/ -/-/g' plan.txt
    - MESSAGE=$(cat plan.txt)
    - >-
      curl -X POST -g -H <token stuff>
      --data-urlencode "body=${MESSAGE}"
      "$<ci url project stuff>"
  artifacts:
    name: plan
    paths:
      - $PLAN
  only:
    - merge_requests

build:
  stage: plan
  script:
    - terraform plan -out=$PLAN
  artifacts:
    name: plan
    paths:
      - $PLAN
  only:
    - branches

.apply-template:
  script:
    - terraform apply -input=false $PLAN

Manual Apply:
  extends: .apply-template
  stage: apply
  dependencies:
    - build
  when: manual
  except:
    - schedules
  only:
    - master
    - main

Scheduled Apply:
  extends: .apply-template
  stage: apply
  dependencies:
    - build
  only:
    - schedules

destroy:
  stage: destroy
  script:
    - terraform destroy -auto-approve
  when: manual
  only:
    - master
    - main
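One observation about the config above, offered as a hedged guess rather than a confirmed diagnosis: before_script runs terraform init -upgrade in every job, so an apply started days after its build may resolve newer provider versions than the ones recorded when the plan (and its dependency lock file) was created. A minimal sketch of a more reproducible setup, assuming .terraform.lock.hcl is committed to the repository:

before_script:
  - export <redacted>
  - terraform --version
  # A plain init respects the committed .terraform.lock.hcl instead of upgrading providers
  - terraform init

When you actually want newer providers, regenerate the lock file deliberately (for example with terraform providers lock, or a local terraform init -upgrade) and commit the result.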
I have a GitLab repository and am currently trying to set up CI/CD by following this link: https://www.nvisia.com/insights/terraform-automation-with-gitlab-aws. I was able to complete most of the tasks, and gitlab-runner is set up and running on an EC2 instance. In the gitlab-ci.yaml I have deploy_staging and deploy_production stages, and I have created the required CI variables in the GitLab console as per the URL.
I have also created local variables in my gitlab-ci.yaml so that their values can be passed to terraform init -backend-config.
The issue I'm facing is that the pipeline works perfectly when deploying to the AWS staging account, but the variables are not shown/replaced with actual values when testing against the AWS production account. I'm not sure whether it's the Terraform version I'm running, v0.11.7.
I'd appreciate any help, since I've been stuck here for a day with no clue.
Console screenshot when deployed to STAGING ACCOUNT showing correct values in echo
Console screenshot when deployed to PRODUCTION ACCOUNT
The gitlab-ci.yaml is as follows:
cache:
  paths:
    - .terraform/plugins

stages:
  - plan
  - deploy

deploy_staging:
  stage: deploy
  tags:
    - terraform
  variables:
    TF_VAR_DEPLOY_INTO_ACCOUNT_ID: ${STAGING_ACCOUNT_ID}
    TF_VAR_ASSUME_ROLE_EXTERNAL_ID: ${STAGING_ASSUME_ROLE_EXTERNAL_ID}
    TF_VAR_AWS_REGION: ${STAGING_AWS_REGION}
    TF_VAR_AWS_BKT_KEY: ${STAGING_TERRAFORM_BUCKET_KEY}
  script:
    - echo ${TF_VAR_DEPLOY_INTO_ACCOUNT_ID}
    - echo ${TF_VAR_AWS_REGION}
    - echo ${TF_VAR_ASSUME_ROLE_EXTERNAL_ID}
    - aws configure list
    - terraform init -backend-config="bucket=${STAGING_TERRAFORM_S3_BUCKET}"
      -backend-config="key=${TF_VAR_AWS_BKT_KEY}"
      -backend-config="region=${TF_VAR_AWS_REGION}" -backend-config="role_arn=arn:aws:iam::${TF_VAR_DEPLOY_INTO_ACCOUNT_ID}:role/S3BackendRole"
      -backend-config="external_id=${TF_VAR_ASSUME_ROLE_EXTERNAL_ID}"
      -backend-config="session_name=TerraformBackend" terraform-configuration
    - terraform apply -auto-approve -input=false terraform-configuration
  environment:
    name: staging
    url: https://staging.example.com
    on_stop: stop_staging
  only:
    variables:
      - $DEPLOY_TO == "staging"

stop_staging:
  stage: deploy
  tags:
    - terraform
  variables:
    TF_VAR_DEPLOY_INTO_ACCOUNT_ID: ${STAGING_ACCOUNT_ID}
    TF_VAR_ASSUME_ROLE_EXTERNAL_ID: ${STAGING_ASSUME_ROLE_EXTERNAL_ID}
    TF_VAR_AWS_REGION: ${STAGING_AWS_REGION}
    TF_VAR_AWS_BKT_KEY: ${STAGING_TERRAFORM_BUCKET_KEY}
  script:
    - terraform init -backend-config="bucket=${STAGING_TERRAFORM_S3_BUCKET}"
      -backend-config="key=${TF_VAR_AWS_BKT_KEY}"
      -backend-config="region=${TF_VAR_AWS_REGION}" -backend-config="role_arn=arn:aws:iam::${TF_VAR_DEPLOY_INTO_ACCOUNT_ID}:role/S3BackendRole"
      -backend-config="external_id=${TF_VAR_ASSUME_ROLE_EXTERNAL_ID}"
      -backend-config="session_name=TerraformBackend" terraform-configuration
    - terraform destroy -input=false -auto-approve terraform-configuration
  when: manual
  environment:
    name: staging
    action: stop
  only:
    variables:
      - $DEPLOY_TO == "staging"

plan_production:
  stage: plan
  tags:
    - terraform
  variables:
    PTF_VAR_DEPLOY_INTO_ACCOUNT_ID: "${PRODUCTION_ACCOUNT_ID}"
    PTF_VAR_ASSUME_ROLE_EXTERNAL_ID: "${PRODUCTION_ASSUME_ROLE_EXTERNAL_ID}"
    PTF_VAR_AWS_REGION: "${PRODUCTION_AWS_REGION}"
    PTF_VAR_AWS_BKT_KEY: "${PRODUCTION_TERRAFORM_BUCKET_KEY}"
  artifacts:
    paths:
      - production_plan.txt
      - production_plan.bin
    expire_in: 1 week
  script:
    - echo "${PTF_VAR_DEPLOY_INTO_ACCOUNT_ID}"
    - echo "${PTF_VAR_AWS_REGION}"
    - echo "${PTF_VAR_ASSUME_ROLE_EXTERNAL_ID}"
    - echo "${PTF_VAR_AWS_BKT_KEY}"
    - aws configure list
    - terraform init -backend-config="bucket=${PRODUCTION_TERRAFORM_S3_BUCKET}"
      -backend-config="key=${PTF_VAR_AWS_BKT_KEY}"
      -backend-config="region=${PTF_VAR_AWS_REGION}" -backend-config="role_arn=arn:aws:iam::${PTF_VAR_DEPLOY_INTO_ACCOUNT_ID}:role/S3BackendRole"
      -backend-config="external_id=${PTF_VAR_ASSUME_ROLE_EXTERNAL_ID}"
      -backend-config="session_name=TerraformBackend" terraform-configuration
    - terraform plan -input=false -out=production_plan.bin terraform-configuration
    - terraform plan -no-color production_plan.bin > production_plan.txt
  only:
    variables:
      - $DEPLOY_TO == "production"

deploy_production:
  stage: deploy
  when: manual
  tags:
    - terraform
  variables:
    TF_VAR_DEPLOY_INTO_ACCOUNT_ID: ${PRODUCTION_ACCOUNT_ID}
    TF_VAR_ASSUME_ROLE_EXTERNAL_ID: ${PRODUCTION_ASSUME_ROLE_EXTERNAL_ID}
    TF_VAR_AWS_REGION: ${PRODUCTION_AWS_REGION}
    TF_VAR_AWS_BKT_KEY: ${PRODUCTION_TERRAFORM_BUCKET_KEY}
  script:
    - terraform init -backend-config="bucket=${PRODUCTION_TERRAFORM_S3_BUCKET}"
      -backend-config="key=${TF_VAR_AWS_BKT_KEY}"
      -backend-config="region=${TF_VAR_AWS_REGION}" -backend-config="role_arn=arn:aws:iam::${TF_VAR_DEPLOY_INTO_ACCOUNT_ID}:role/S3BackendRole"
      -backend-config="external_id=${TF_VAR_ASSUME_ROLE_EXTERNAL_ID}"
      -backend-config="session_name=TerraformBackend" terraform-configuration
    - terraform apply -auto-approve -input=false production_plan.bin
  environment:
    name: production
    url: https://production.example.com
    on_stop: stop_production
  only:
    variables:
      - $DEPLOY_TO == "production"

stop_production:
  stage: deploy
  tags:
    - terraform
  variables:
    TF_VAR_DEPLOY_INTO_ACCOUNT_ID: ${PRODUCTION_ACCOUNT_ID}
    TF_VAR_ASSUME_ROLE_EXTERNAL_ID: ${PRODUCTION_ASSUME_ROLE_EXTERNAL_ID}
    TF_VAR_AWS_REGION: ${PRODUCTION_AWS_REGION}
    TF_VAR_AWS_BKT_KEY: ${PRODUCTION_TERRAFORM_BUCKET_KEY}
  script:
    - terraform init -backend-config="bucket=${PRODUCTION_TERRAFORM_S3_BUCKET}"
      -backend-config="key=${TF_VAR_AWS_BKT_KEY}"
      -backend-config="region=${TF_VAR_AWS_REGION}" -backend-config="role_arn=arn:aws:iam::${TF_VAR_DEPLOY_INTO_ACCOUNT_ID}:role/S3BackendRole"
      -backend-config="external_id=${TF_VAR_ASSUME_ROLE_EXTERNAL_ID}"
      -backend-config="session_name=TerraformBackend" terraform-configuration
    - terraform destroy -input=false -auto-approve terraform-configuration
  when: manual
  environment:
    name: production
    action: stop
  only:
    variables:
      - $DEPLOY_TO == "production"
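A side note on the production behaviour described above, offered only as a hedged guess: if the PRODUCTION_* variables are marked Protected or are scoped to a particular environment under Settings -> CI/CD -> Variables, they expand to empty strings in pipelines that don't match, which would explain the blank echo output. A hypothetical debug job (the job name and checks are illustrative and not part of the original pipeline) to confirm what the runner actually sees:

debug_production_vars:
  stage: plan
  tags:
    - terraform
  script:
    # Empty output here means the PRODUCTION_* variables are not visible to this pipeline
    # (check the Protected flag and environment scope under Settings -> CI/CD -> Variables)
    - test -n "${PRODUCTION_ACCOUNT_ID}" || echo "PRODUCTION_ACCOUNT_ID is empty in this context"
    - test -n "${PRODUCTION_AWS_REGION}" || echo "PRODUCTION_AWS_REGION is empty in this context"
  only:
    variables:
      - $DEPLOY_TO == "production"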
I'm trying to deploy a Kubernetes cluster on Azure using the following GitLab pipeline:
image:
  name: hashicorp/terraform:1.2.3
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

variables:
  TF_ROOT: ${CI_PROJECT_DIR}/infrastructure
  TF_ADDRESS: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${CI_PROJECT_NAME}

cache:
  key: my-services
  paths:
    - ${TF_ROOT}/.terraform

before_script:
  - cd ${TF_ROOT}
  - rm -rf .terraform
  - terraform --version
  - terraform init

stages:
  - terraform_validate
  - terraform_plan
  - terraform_apply

terraform_validate_dev:
  stage: terraform_validate
  environment:
    name: development
  script:
    - terraform validate
  rules:
    - if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH

terraform_plan_dev:
  stage: terraform_plan
  environment:
    name: development
  script:
    - terraform plan
    - terraform plan-json
  dependencies:
    - terraform_validate_dev
  artifacts:
    name: plan deployment
    paths:
      - ${TF_ROOT}/plan.cache
    reports:
      terraform: ${TF_ROOT}/plan.json
  rules:
    - if: $CI_COMMIT_BRANCH == "development"

terraform_apply_dev:
  stage: terraform_apply
  environment:
    name: development
  script:
    - terraform apply
  dependencies:
    - terraform_plan_dev
  rules:
    - if: $CI_COMMIT_BRANCH == "development"
      when: manual
but during the terraform_plan stage, I receive the following error:
"Error: building AzureRM Client: please ensure you have installed Azure CLI version 2.0.79 or newer. Error parsing json result from the Azure CLI: launching Azure CLI: exec: "az": executable file not found in $PATH."
Any idea?
Finally, I was able to find the problem.
Unfortunately, the solution proposed by @sytech did not solve it, but it helped me discover the real cause.
As a good practice, automated tools that deploy or use Azure services should always use service principals, so I created a service principal in Azure and was trying to use it with my Terraform code.
As the documentation says, to use the service principal we need to create the following environment variables:
ARM_CLIENT_ID
ARM_CLIENT_SECRET
ARM_SUBSCRIPTION_ID
ARM_TENANT_ID
Once I added these environment variables the terraform_plan stage was able to complete its work.
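For reference, a minimal sketch of how these could be wired into the pipeline above, assuming the secrets are stored as masked CI/CD variables with hypothetical names such as AZURE_CLIENT_ID (defining the ARM_* variables directly under Settings -> CI/CD -> Variables works just as well); the block would be merged into the existing top-level variables:

variables:
  # The azurerm provider reads these to authenticate as a service principal
  ARM_CLIENT_ID: ${AZURE_CLIENT_ID}
  ARM_CLIENT_SECRET: ${AZURE_CLIENT_SECRET}
  ARM_SUBSCRIPTION_ID: ${AZURE_SUBSCRIPTION_ID}
  ARM_TENANT_ID: ${AZURE_TENANT_ID}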
As the error message explains, you must install the Azure CLI.
For example:
# ...
before_script:
  - curl -L https://aka.ms/InstallAzureCli | bash
# ...
I'm trying to get the build_backend stage to pass before build_djangoapp with a Dockerfile on GitLab, but it fails with this error:
/busybox/sh: eval: line 111: apk: not found
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 127
GitLab CI/CD project
.gitlab-ci.yml
# Official image for Hashicorp's Terraform. It uses light image which is Alpine
# based as it is much lighter.
#
# Entrypoint is also needed as image by default set `terraform` binary as an
# entrypoint.
image:
  name: hashicorp/terraform:light
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

# Default output file for Terraform plan
variables:
  GITLAB_TF_ADDRESS: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${CI_PROJECT_NAME}
  PLAN: plan.tfplan
  PLAN_JSON: tfplan.json
  TF_ROOT: ${CI_PROJECT_DIR}
  GITLAB_TF_PASSWORD: ${CI_JOB_TOKEN}

cache:
  paths:
    - .terraform

before_script:
  - apk --no-cache add jq
  - alias convert_report="jq -r '([.resource_changes[]?.change.actions?]|flatten)|{\"create\":(map(select(.==\"create\"))|length),\"update\":(map(select(.==\"update\"))|length),\"delete\":(map(select(.==\"delete\"))|length)}'"
  - cd ${TF_ROOT}
  - terraform --version
  - echo ${GITLAB_TF_ADDRESS}
  - terraform init -backend-config="address=${GITLAB_TF_ADDRESS}" -backend-config="lock_address=${GITLAB_TF_ADDRESS}/lock" -backend-config="unlock_address=${GITLAB_TF_ADDRESS}/lock" -backend-config="username=${MY_GITLAB_USERNAME}" -backend-config="password=${MY_GITLAB_ACCESS_TOKEN}" -backend-config="lock_method=POST" -backend-config="unlock_method=DELETE" -backend-config="retry_wait_min=5"

stages:
  - validate
  - build
  - test
  - deploy
  - app_deploy

validate:
  stage: validate
  script:
    - terraform validate

plan:
  stage: build
  script:
    - terraform plan -out=$PLAN
    - terraform show --json $PLAN | convert_report > $PLAN_JSON
  artifacts:
    name: plan
    paths:
      - ${TF_ROOT}/plan.tfplan
    reports:
      terraform: ${TF_ROOT}/tfplan.json

# Separate apply job for manual launching Terraform as it can be destructive
# action.
apply:
  stage: deploy
  environment:
    name: production
  script:
    - terraform apply -input=false $PLAN
  dependencies:
    - plan
  when: manual
  only:
    - master

build_backend:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - echo "{\"auths\":{\"https://gitlab.amixr.io:4567\":{\"username\":\"gitlab-ci-token\",\"password\":\"$CI_JOB_TOKEN\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --cache=true --context ./djangoapp --dockerfile ./djangoapp/Dockerfile --destination $CONTAINER_IMAGE:$CI_COMMIT_REF_NAME

# https://github.com/GoogleContainerTools/kaniko#pushing-to-google-gcr
build_djangoapp:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  before_script:
    - echo 1
  script:
    - export GOOGLE_APPLICATION_CREDENTIALS=$TF_VAR_gcp_creds_file
    - /kaniko/executor --cache=true --context ./djangoapp --dockerfile ./djangoapp/Dockerfile --destination gcr.io/{TF_VAR_gcp_project_name}/djangoapp:$CI_COMMIT_REF_NAME
  when: manual
  only:
    - master
  needs: []

app_deploy:
  image: google/cloud-sdk
  stage: app_deploy
  before_script:
    - echo 1
  environment:
    name: production
  script:
    - gcloud auth activate-service-account --key-file=${TF_VAR_gcp_creds_file}
    - gcloud container clusters get-credentials my-cluster --region us-central1 --project ${TF_VAR_gcp_project_name}
    - kubectl apply -f hello-kubernetes.yaml
  when: manual
  only:
    - master
  needs: []
I looked at your project and it appears you have figured this one out already.
Your .gitlab-ci.yml has a global before_script block that tries to install packages with apk, but your build_backend job is based on the kaniko image, which uses BusyBox and has neither apk nor apt-get. In a later commit you overrode the before_script for build_backend, which resolved the problem.
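For readers hitting the same error, the override can be as simple as giving the kaniko job its own before_script so the global one never runs in that container; a sketch, reusing the job from the question:

build_backend:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  # A job-level before_script replaces the global one, so no apk/terraform calls run here
  before_script:
    - echo "skipping global before_script"
  script:
    - echo "{\"auths\":{\"https://gitlab.amixr.io:4567\":{\"username\":\"gitlab-ci-token\",\"password\":\"$CI_JOB_TOKEN\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --cache=true --context ./djangoapp --dockerfile ./djangoapp/Dockerfile --destination $CONTAINER_IMAGE:$CI_COMMIT_REF_NAME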
I have created a pipeline in GitLab with:
image:
  name: hashicorp/terraform:light
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

variables:
  PLAN: dbrest.tfplan
  STATE: dbrest.tfstate

cache:
  paths:
    - .terraform

before_script:
  - terraform --version
  - terraform init

stages:
  - validate
  - build
  - deploy
  - destroy

validate:
  stage: validate
  script:
    - terraform validate

plan:
  stage: build
  script:
    - terraform plan -state=$STATE -out=$PLAN
  artifacts:
    name: plan
    paths:
      - $PLAN
      - $STATE

apply:
  stage: deploy
  environment:
    name: production
  script:
    - terraform apply -state=$STATE -input=false $PLAN
    - terraform state show aws_instance.bastion
  dependencies:
    - plan
  when: manual
  only:
    - master

destroy:
  stage: destroy
  environment:
    name: production
  script:
    - terraform destroy -state=$STATE -auto-approve
  dependencies:
    - apply
  when: manual
  only:
    - master
I have also created a variable under Settings -> CI/CD -> Variables. I was under the impression that when I came to the manual deploy stage, GitLab would pause and ask me to input a value for that variable, but this does not happen. What is missing?
You have mixed up a job with when: manual with triggering a pipeline manually. This is what you want:
https://docs.gitlab.com/ee/ci/pipelines/#run-a-pipeline-manually
You could use this together with an only condition on some variable. Something like:
...
apply:
  stage: deploy
  environment:
    name: production
  script:
    - terraform apply -state=$STATE -input=false $PLAN
    - terraform state show aws_instance.bastion
  dependencies:
    - plan
  only:
    refs:
      - master
    variables:
      - $RELEASE == "yes"

destroy:
  stage: destroy
  environment:
    name: production
  script:
    - terraform destroy -state=$STATE -auto-approve
  dependencies:
    - apply
  only:
    refs:
      - master
    variables:
      - $RELEASE == "yes"
With something like this, you can have jobs that never run normally, but only when you manually start a new pipeline on the master branch and set the variable $RELEASE to yes. I haven't tested this, so my apologies if it doesn't work!
I saw your comment and I still think more is needed for a complete answer, but I do believe this will work for what you're asking. I've also included what I've used in the past, MODE, but then you'd need to update the rules section on your jobs to reflect when you want each job to run based on MODE vs RELEASE (see the sketch after the snippet below).
variables:
  PLAN: dbrest.tfplan
  STATE: dbrest.tfstate
  RELEASE:
    description: "Provide YES or NO to trigger jobs"
    default: "NO"
  MODE:
    description: "Actions for terraform to perform, ie: PLAN, APPLY, DESTROY, PLAN_APPLY, ALL"
    default: "PLAN"