The -var and -var-file options cannot be used when applying a saved plan file - gitlab

I am using the GitLab Terraform integration, and my YAML file for the build and deploy stages is as below:
build:
  extends: .terraform:build
  script:
    - cd "${TF_ROOT}"
    - gitlab-terraform plan --var-file=local.tfvars --var-file=common.tfvars
    - gitlab-terraform plan-json --var-file=local.tfvars --var-file=common.tfvars

deploy:
  extends: .terraform:deploy
  script:
    - cd "${TF_ROOT}"
    - gitlab-terraform apply --var-file=local.tfvars --var-file=common.tfvars
  environment:
    name: $TF_STATE_NAME
The first time it deployed perfectly. However, on the second run my GitLab log shows the following error:
│ Error: Can't set variables when applying a saved plan
The -var and -var-file options cannot be used when applying a saved plan
file, because a saved plan includes the variable values that were set when
it was created.
ERROR: Job failed: exit code 1
I have been searching the internet but have found nothing helpful so far.
Can someone please suggest the Terraform command I should use to avoid this error?
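The error message itself describes the constraint: a saved plan file already embeds the variable values that were set when it was created, so -var/-var-file may only be passed at plan time. A minimal sketch of the deploy job under that reading, assuming gitlab-terraform apply picks up the plan cache saved by the build stage:
deploy:
  extends: .terraform:deploy
  script:
    - cd "${TF_ROOT}"
    # No --var-file flags here: the saved plan from the build stage
    # already contains the variable values, so apply must not set them again.
    - gitlab-terraform apply
  environment:
    name: $TF_STATE_NAME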

Related

plan.cache: no matching files, unable to attach Terraform report to the MR

I have the following plan stage in my GitLab CI, which I took from here. Unlike gitlab-terraform, terraform v1.1.0 doesn't have any plan-json option, so I am trying to reproduce the same behaviour with the following. The aim is to attach the plan changes as a report to GitLab merge requests.
plan:
  stage: plan
  script:
    - terraform plan -out=plan.json
    - terraform show -json plan.json
  artifacts:
    name: plan
    paths:
      - ${TF_ROOT}/plan.cache
    reports:
      terraform: ${TF_ROOT}/plan.json
However, in the MR all I see is "1 Terraform report failed to generate", and in the complete job log I see the following:
plan.cache: no matching files
ERROR: No files to upload
My question is: what is the difference between artifacts:paths and the path given under artifacts:reports:terraform?
And where can I find/generate plan.cache?
$ terraform -v
Terraform v1.1.0
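The two settings serve different consumers: artifacts:paths simply uploads files for later jobs (here the binary plan, which nothing in this job ever writes to plan.cache), while artifacts:reports:terraform tells GitLab to parse a JSON plan summary for the MR widget. A sketch of a plan job that produces both files, loosely following GitLab's documented example and assuming jq is available in the job image:
plan:
  stage: plan
  script:
    # Write the binary plan to the name referenced by artifacts:paths.
    - terraform plan -out=plan.cache
    - |
      # Reduce the JSON plan to the create/update/delete counts that
      # the merge request widget displays.
      terraform show -json plan.cache | jq -r '([.resource_changes[]?.change.actions?] | flatten) | {create: (map(select(. == "create")) | length), update: (map(select(. == "update")) | length), delete: (map(select(. == "delete")) | length)}' > plan.json
  artifacts:
    name: plan
    paths:
      - ${TF_ROOT}/plan.cache
    reports:
      terraform: ${TF_ROOT}/plan.json
In other words, plan.cache is not something Terraform generates on its own; it is just the file name the -out flag is pointed at.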

Array variable in gitlab CI/CD yml

I am writing a CI/CD pipeline for Terraform to deploy GCP resources.
The Terraform code covers many functional areas, and Network is one of them. The folder structure of Network is:
Network
  VPC
  LoadBalancer
  DNS
  VPN
So I want to loop the terraform init, plan, and apply commands over all the sub-folders of the Network folder. The yml file looks like:
image:
  name: registry.gitlab.com/gitlab-org/terraform-images/stable:latest

variables:
  TF_ROOT: ${CI_PROJECT_DIR}
  env: 'prod'
  network_services: ""

stages:
  - init

init:
  stage: init
  script:
    - |
      network_services = ['vpc' 'vpn']
      for service in $network_services[#]
      do
        echo The path is var/$env/terraform.tfvars
      done
The above gives me the error:
$ network_services = ['vpc' 'vpn'] # collapsed multi-line command
/bin/sh: eval: line 103: network_services: not found
Please suggest a way to declare an array variable in GitLab CI/CD yml.
Try changing your job to something like this:
init:
  stage: init
  script:
    # No spaces around '=' in shell assignments, and bash array syntax
    # uses parentheses with "${arr[@]}" for expansion.
    - network_services=('vpc' 'vpn')
    - for service in "${network_services[@]}"; do echo "The path is var/$env/terraform.tfvars"; done
I don't believe you can do multi-line commands like that due to the way the gitlab-runner evals the entries in the script array, though combining them with ; has worked for me when I want them on one line.
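One caveat, hinted at by the /bin/sh path in the asker's error output: the terraform-images containers appear to run scripts under POSIX sh, which has no arrays at all. Under that assumption, a portable sketch is to fall back on word-splitting a plain string:
init:
  stage: init
  script:
    - |
      # POSIX sh has no arrays; an unquoted space-separated string
      # gives the same loop without requiring bash.
      network_services="vpc vpn"
      for service in $network_services
      do
        echo "The path is var/$env/terraform.tfvars"
      done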

Terraform apply being run twice in gitlab CI - how to prevent?

We have a pipeline that includes "terraform plan" and "terraform apply" as separate CI steps, so that in production we can manually review changes before applying (however in review apps / staging we're happy for them to run automatically). The plan is passed as an artifact between the jobs.
We've had a couple of issues where developers have re-run the "terraform apply" job without re-running "terraform plan". I'm trying to work out how to identify this and prevent it.
I'm surprised that the terraform plan file doesn't, for example, include a hash of the Terraform state, so that apply could detect that the state has changed and refuse to continue.
Is there a suggested way to fix this? We've tried:
Searching for options in terraform to avoid this (nothing so far)
Searching for options in gitlab to avoid this (nothing so far)
We're currently looking into taking our own checksum of the tfstate file in the plan stage, and then checking that at the start of the apply stage - but I can't help feeling this ought to be out there already.
(State is stored in an S3 bucket. We also use dynamodb for locking)
Cut down .gitlab-ci.yml for illustration:
stages:
  - plan
  - apply

terraform plan:
  image: hashicorp/terraform:0.12.26
  stage: plan
  script:
    - terraform init
    - terraform plan -out terraform.plan
  artifacts:
    paths:
      - terraform.plan

terraform apply:
  image: hashicorp/terraform:0.12.26
  stage: apply
  script:
    - terraform apply -auto-approve terraform.plan
  rules:
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"
      when: manual
    - when: on_success
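A minimal sketch of the checksum idea described in the question, assuming terraform state pull works against the S3 backend and sha256sum exists in the job image: the plan job records a digest of the state next to the plan artifact, and the apply job refuses to run if the live state no longer matches.
terraform plan:
  image: hashicorp/terraform:0.12.26
  stage: plan
  script:
    - terraform init
    - terraform plan -out terraform.plan
    # Fingerprint the state the plan was computed against.
    - terraform state pull | sha256sum | cut -d' ' -f1 > state.sha256
  artifacts:
    paths:
      - terraform.plan
      - state.sha256

terraform apply:
  image: hashicorp/terraform:0.12.26
  stage: apply
  script:
    - terraform init
    - |
      # Abort if the state changed since the plan artifact was produced.
      current=$(terraform state pull | sha256sum | cut -d' ' -f1)
      if [ "$current" != "$(cat state.sha256)" ]; then
        echo "State has drifted since plan was created; re-run the plan job." >&2
        exit 1
      fi
    - terraform apply -auto-approve terraform.plan
  rules:
    - if: $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "master"
      when: manual
    - when: on_success
Note also that newer Terraform releases detect this case themselves and refuse with a "Saved plan is stale" error, so upgrading may remove the need for the checksum entirely.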

Gitlab CI: terraform destroy doesn't destroy?

I have defined the following, simple pipeline:
image:
  name: hashicorp/terraform:light
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

variables:
  PLAN: dbrest.tfplan
  STATE: dbrest.tfstate

cache:
  paths:
    - .terraform

before_script:
  - terraform --version
  - terraform init

stages:
  - validate
  - build
  - deploy
  - destroy

validate:
  stage: validate
  script:
    - terraform validate

plan:
  stage: build
  script:
    - terraform plan -state=$STATE -out=$PLAN
  artifacts:
    name: plan
    paths:
      - $PLAN
      - $STATE

apply:
  stage: deploy
  environment:
    name: production
  script:
    - terraform apply -state=$STATE -input=false $PLAN
    - terraform state show aws_instance.bastion
  dependencies:
    - plan
  when: manual
  only:
    - master

destroy:
  stage: destroy
  environment:
    name: production
  script:
    - terraform destroy -state=$STATE -auto-approve
  dependencies:
    - apply
  when: manual
  only:
    - master
When I run it, everything succeeds wonderfully - but the destroy stage doesn't in fact destroy the environment I've created in the apply stage. This is what I see:
Running with gitlab-runner 10.5.0 (80b03db9)
on ip-10-74-163-110 5cf66672
Using Docker executor with image hashicorp/terraform:light ...
Pulling docker image hashicorp/terraform:light ...
Using docker image sha256:5d5c9faad78b96bb84555a584fe729260d7ff7d3fb973e105690ddc0dab48fb5 for hashicorp/terraform:light ...
Running on runner-5cf66672-project-1136-concurrent-0 via ip-10-197-79-116...
Fetching changes...
Removing .terraform/
Removing dbrest.tfplan
Removing dbrest.tfstate
HEAD is now at f798b05 Update .gitlab-ci.yml
Checking out f798b05a as master...
Skipping Git submodules setup
Checking cache for default-1...
Successfully extracted cache
$ terraform --version
Terraform v0.12.13
+ provider.aws v2.34.0
$ terraform init
Initializing the backend...
Initializing provider plugins...
The following providers do not have any version constraints in configuration,
so the latest version was installed.
To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.
* provider.aws: version = "~> 2.34"
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
$ terraform destroy -state=$STATE -auto-approve
Destroy complete! Resources: 0 destroyed.
Creating cache default-1...
.terraform: found 5 matching files
Created cache
Job succeeded
It seems obvious that something is missing in the way I call terraform destroy, but I don't know what - can somebody shed some light on this, please?
You aren't correctly passing the state from the apply job because you haven't set the artifacts up like you did for plan -> apply. Your apply job should look like this:
apply:
  stage: deploy
  environment:
    name: production
  script:
    - terraform apply -state=$STATE -input=false $PLAN
    - terraform state show aws_instance.bastion
  artifacts:
    name: apply
    paths:
      - $STATE
  dependencies:
    - plan
  when: manual
  only:
    - master
A better solution, however, would be to not use file-based state here and instead use proper remote state (e.g. S3 if you're using AWS), or you're going to have a ton of problems later on when multiple users (including CI as a potentially self-concurrent user) run Terraform. Remote state lets you take advantage of state locking and also allows versioning of the state file in case things go wrong during a Terraform operation, such as moving state as part of a refactor.
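A minimal sketch of such a backend block, with the bucket, region, and lock-table names as placeholder assumptions (the dynamodb_table argument is what supplies state locking):
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"   # assumed bucket name
    key            = "dbrest/terraform.tfstate"
    region         = "us-west-2"            # assumed region
    dynamodb_table = "terraform-locks"      # assumed lock table
    encrypt        = true
  }
}
With a remote backend configured, the -state=$STATE flags and the $STATE artifact drop out of the pipeline entirely.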

Terraform apply command fails when run twice in Gitlab pipeline and resources were already created [duplicate]

This question already has an answer here:
Terraform destroy fails in CircleCI
(1 answer)
Closed 3 years ago.
I'm quite new to Terraform and I'm trying to replicate in my Terraform configuration the stack I have already built for production (basically: API Gateway - Lambda - DynamoDB).
If I run terraform init, terraform plan and then terraform apply from my local host, everything is created as I want.
The problem arises in my Gitlab CI/CD pipeline, where Terraform complains about the existing resources (the first time it runs properly; the second time it complains and throws an error).
My Terraform steps in my .gitlab-ci.yml file:
plan:
  stage: plan
  image:
    name: hashicorp/terraform:light
    entrypoint:
      - '/usr/bin/env'
      - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
  script:
    - cd terraform
    - rm -rf .terraform
    - terraform --version
    - terraform init
    - terraform plan

deploy:
  stage: deploy
  image:
    name: hashicorp/terraform:light
    entrypoint:
      - '/usr/bin/env'
      - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
  script:
    - cd terraform
    - terraform init
    - terraform apply -auto-approve
  dependencies:
    - plan
  when: manual
My pipeline console then shows an error complaining that the resources already exist. So after some Googling I saw that the terraform import command might help, and added it to my .gitlab-ci.yml:
script:
  - cd terraform
  - terraform init
  - terraform import aws_dynamodb_table.demo-dynamodb-table demo-dynamodb-table
  - terraform apply -auto-approve
That change produced another error in the Gitlab console, and trying this last change locally failed with an error as well.
So, to summarize: I need to know the right way to use Terraform so that the apply command can run in my Gitlab CI/CD pipeline without conflicting with the resources created by a previous run of the same pipeline.
As others have stated, you need to store the Terraform state.
In my GitLab projects, I use an S3 bucket to store the Terraform state, but I have the CI pipeline fill in the key based on the GitLab project's path by setting the TF_CLI_ARGS_init environment variable.
terraform {
  backend "s3" {
    bucket = "bucket-name-here"
    region = "us-west-2"
    # key = $CI_PROJECT_PATH_SLUG
  }
}
I also set the Terraform workspace based on the project (this can be modified to support branches), set the name variable to the project name for use in the Terraform configuration, and set input to false so that the CI job doesn't hang waiting on user prompts.
variables:
  TF_INPUT: "false"
  TF_WORKSPACE: "$CI_PROJECT_NAME"
  TF_VAR_name: "$CI_PROJECT_NAME"
  TF_CLI_ARGS_init: "-upgrade=true"
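The key itself isn't shown being injected in this variables block; presumably it rides along in the same variable. A hypothetical sketch using terraform init's standard -backend-config argument:
variables:
  # Hypothetical: fill in the backend key per project via init's CLI args.
  TF_CLI_ARGS_init: "-upgrade=true -backend-config=key=$CI_PROJECT_PATH_SLUG"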
For destroys, I also make sure to delete the workspace, so that there isn't stuff left over in the bucket.
.destroy:
  extends: .terraform
  stage: Cleanup
  script:
    - terraform init
    - terraform destroy -auto-approve
    - export WORKSPACE=$TF_WORKSPACE
    - export TF_WORKSPACE=default
    - terraform workspace delete "$WORKSPACE"
