GitLab Pipeline is throwing an Azure CLI error - terraform

I'm running a very lightweight GitLab pipeline which executes a number of Terraform configurations. However, I have hit an absolute roadblock: the pipeline throws an Azure CLI error (screenshot below) when it attempts to run terraform init, and I simply can't seem to resolve this. Any ideas?
This error happens at the pipeline stage: validate.
Prior to that, however, I have another pipeline stage, deploy, where I am able to install the Azure CLI successfully using the commands below:
deploy:
  stage: deploy
  image: mcr.microsoft.com/dotnet/core/sdk:3.1
  script:
    - curl -sL https://aka.ms/InstallAzureCLIDeb | bash
So after some further investigation, it turns out that this error only occurs when I include my Terraform backend.tf file, which configures an Azure backend for storing my Terraform state file. Exclude this file and everything runs smoothly. I'm at a complete loss, as I definitely require that state file in Azure.
It appears that the successful Azure CLI install in the deploy stage (above) isn't picked up by the backend.tf configuration.
Below is the content of my backend.tf file:
terraform {
  backend "azurerm" {
    resource_group_name  = "rg_xxx"
    storage_account_name = "stxxxxtfstate"
    container_name       = "terraform"
    key                  = "terraform.tfstate"
  }
}
And below is the YAML snippet from the pipeline deploy stage of my .gitlab-ci.yml file where I call terraform init and apply.
deploy:
  stage: deploy
  script:
    - terraform init
    - terraform plan -var-file=dev-settings.tfvars -out=plan.out
    - terraform apply -auto-approve plan.out

Thank you Cdub for pointing the OP in the right direction. Posting your valuable discussion as an answer to help other community members.
Since the backend.tf file references the Azure Resource Manager (azurerm) backend from the Terraform configuration, the Azure CLI installation step can be moved into the deploy stage where the terraform commands are executed, right before terraform init.
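For illustration, a minimal sketch of that combined stage might look like the following (the image choice and the authentication comment are assumptions, not the OP's confirmed setup):
deploy:
  stage: deploy
  image: mcr.microsoft.com/dotnet/core/sdk:3.1   # any Debian/Ubuntu-based image works for the aka.ms install script
  script:
    # install the Azure CLI first, so the azurerm backend can find it during init
    - curl -sL https://aka.ms/InstallAzureCLIDeb | bash
    # authentication (e.g. az login or ARM_* variables) is assumed to be provided via CI/CD variables
    - terraform init
    - terraform plan -var-file=dev-settings.tfvars -out=plan.out
    - terraform apply -auto-approve plan.out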
Please refer to terraform-gitlab-image-no-azure-cli, which shows another way to install the Azure CLI in a GitLab pipeline.

Related

Is there a handy way to use Bicep modules in an Azure DevOps pipeline?

I am setting up my Azure DevOps pipeline (an Azure CLI task) with the intention of deploying a resource group and several resources within it. So far I have been able to deploy and validate from my local PC with no issues; however, when I configure my pipeline in DevOps I get the following error message:
C:\devops_work\11\s\main_v2.bicep(55,29) : Error BCP091: An error occurred reading file. Could not find a part of the path 'C:\devops_work\11\isv-bicep\storage_account.bicep'.
For context, 'main_v2.bicep' is my "main file" where the modules are called; in this case, "storage_account.bicep".
The same error occurs for all other modules. A couple of details regarding my pipeline:
I am using my own agent pool
My code sits in an Azure Repository
I have tried checking 'Checkout submodules' (Any nested submodules within)
The files all sit at the root level of the repository
My pipeline is not a YAML pipeline
Any help or insight into this is duly appreciated
You need to specify the working directory and ensure that your repository is being cloned.
steps:
  - checkout: self
  - powershell: |
      az deployment group create `
        -f "your-bicep-file.bicep" `
        -g "your-resource-group-name"
    workingDirectory: $(Build.SourcesDirectory)
If the pipeline is not in the same repository as your Bicep files, replace self in the checkout step with the repository alias, and append that alias to the working directory path (by default the repository is cloned into $(Build.SourcesDirectory), but if you check out more than one repo an extra directory per repository is added).
steps:
  - checkout: <your repo alias>
  - powershell: |
      az deployment group create `
        -f "your-bicep-file.bicep" `
        -g "your-resource-group-name"
    workingDirectory: $(Build.SourcesDirectory)/<your repo alias>
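For completeness, if this were a YAML pipeline checking out a second repository, the alias used in the checkout step would come from a repository resource declaration; a sketch with placeholder names:
resources:
  repositories:
    - repository: <your repo alias>        # alias referenced by "checkout:"
      type: git                            # Azure Repos Git
      name: <YourProject>/<YourRepository>
steps:
  - checkout: <your repo alias>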

GitLab Pipeline Throws Invalid Provider Configuration Error on Terraform Configuration

I have some Terraform configuration which has, up until now, run successfully locally without any issues and without me explicitly inserting any provider block.
I transferred this very same configuration to a GitLab pipeline with four stages (Validate, Plan, Apply and Destroy) declared in my .gitlab-ci.yml file, and suddenly the terraform plan command fails at the Plan stage with the "Invalid provider configuration" error below.
This is all despite the fact that the terraform validate command runs successfully at the initial Validate stage.
Any idea why my configuration would suddenly throw this error in my GitLab pipeline when it has been perfectly fine when run locally?
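No answer is included above, but for reference, with the azurerm provider (an assumption; the question doesn't name the provider) this error commonly means Terraform refuses the implied empty provider configuration and wants an explicit block such as:
provider "azurerm" {
  # the azurerm provider requires an explicit (even if empty) features block
  features {}
}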

plan.cache: no matching files, unable to attach Terraform report to the MR

I have the following plan stage in my GitLab CI, which I took from here, but unlike gitlab-terraform, terraform v1.1.0 doesn't have a plan-json option, so I am trying to reproduce the same behaviour with the following. The aim is to attach the plan changes as a report to GitLab merge requests.
plan:
  stage: plan
  script:
    - terraform plan -out=plan.json
    - terraform show -json plan.json
  artifacts:
    name: plan
    paths:
      - ${TF_ROOT}/plan.cache
    reports:
      terraform: ${TF_ROOT}/plan.json
However, in the MR all I see is "1 Terraform report failed to generate", and in the complete job log I see the following:
plan.cache: no matching files
ERROR: No files to upload
My question is: what is the difference between artifacts:paths and the path given under artifacts:reports:terraform?
And where can I find/generate plan.cache?
$ terraform -v
Terraform v1.1.0
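No answer is included above, but for context, the plan.cache / plan.json pairing mirrors GitLab's Terraform template: the binary plan is written to plan.cache and the JSON report for the MR widget is generated from it with terraform show. A sketch of that pattern (assuming TF_ROOT points at the Terraform working directory):
plan:
  stage: plan
  script:
    - cd "${TF_ROOT}"
    - terraform plan -out=plan.cache                # binary plan, uploaded as a plain artifact
    - terraform show -json plan.cache > plan.json   # JSON report rendered in the merge request
  artifacts:
    name: plan
    paths:
      - ${TF_ROOT}/plan.cache
    reports:
      terraform: ${TF_ROOT}/plan.json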

Terraform apply command fails when run twice in GitLab pipeline and resources were already created [duplicate]

This question already has an answer here:
Terraform destroy fails in CircleCI
(1 answer)
Closed 3 years ago.
I'm quite new to Terraform and I'm trying to replicate in my Terraform configuration the stack I have already built for production (basically: API Gateway - Lambda - DynamoDB).
If I run terraform init, terraform plan and then terraform apply from my local host, everything is created as I want.
The problem arises when it comes to my GitLab CI/CD pipeline, as Terraform complains about the existing resources (the first run works properly, the second run complains and throws an error).
My Terraform steps in my .gitlab-ci.yml file:
plan:
  stage: plan
  image:
    name: hashicorp/terraform:light
    entrypoint:
      - '/usr/bin/env'
      - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
  script:
    - cd terraform
    - rm -rf .terraform
    - terraform --version
    - terraform init
    - terraform plan

deploy:
  stage: deploy
  image:
    name: hashicorp/terraform:light
    entrypoint:
      - '/usr/bin/env'
      - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
  script:
    - cd terraform
    - terraform init
    - terraform apply -auto-approve
  dependencies:
    - plan
  when: manual
I see the following error in my pipeline console:
So after some Googling I saw that the terraform import command might help.
I then added this import command to my .gitlab-ci.yml:
script:
  - cd terraform
  - terraform init
  - terraform import aws_dynamodb_table.demo-dynamodb-table demo-dynamodb-table
  - terraform apply -auto-approve
And the error in the GitLab console was:
In the meantime I also tried this last change locally, and the error was:
So to summarize: I need to know how to use Terraform correctly so that I can run the apply command in my GitLab CI/CD pipeline without conflicts with the resources created in a previous run of the same pipeline.
As others have stated, you need to store the Terraform state remotely.
In my GitLab projects, I use an S3 bucket to store the Terraform state, but I have the CI pipeline fill in the key based on the GitLab project's path by setting the TF_CLI_ARGS_init environment variable.
terraform {
  backend "s3" {
    bucket = "bucket-name-here"
    region = "us-west-2"
    # key = $CI_PROJECT_PATH_SLUG
  }
}
I also set the Terraform workspace based on the project; this can be modified to support branches. I also set the name variable to the project name for use in the Terraform configuration, and set input to false so that the CI job doesn't hang waiting on user prompts.
variables:
  TF_INPUT: "false"
  TF_WORKSPACE: "$CI_PROJECT_NAME"
  TF_VAR_name: "$CI_PROJECT_NAME"
  TF_CLI_ARGS_init: "-upgrade=true"
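As written, TF_CLI_ARGS_init above only passes -upgrade=true; one way to also fill in the backend key from the project path, as described, is to extend that variable (a sketch, not necessarily the author's exact value):
variables:
  # appended to every "terraform init"; the key is derived from the GitLab project path
  TF_CLI_ARGS_init: "-upgrade=true -backend-config=key=$CI_PROJECT_PATH_SLUG/terraform.tfstate"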
For destroys, I also make sure to delete the workspace, so that there isn't stuff left over in the bucket.
.destroy:
  extends: .terraform
  stage: Cleanup
  script:
    - terraform init
    - terraform destroy -auto-approve
    - export WORKSPACE=$TF_WORKSPACE
    - export TF_WORKSPACE=default
    - terraform workspace delete "$WORKSPACE"

Terraform destroy fails in CircleCI

I am currently using CircleCI as my CI tool to build AWS infrastructure using Terraform.
My flow is:
Create an AWS instance using Terraform
Install Docker and run an Nginx image on it
Destroy the infrastructure
My CircleCI config is as follows:
version: 2
jobs:
  terraform_apply:
    working_directory: ~/tmp
    docker:
      - image: hashicorp/terraform:light
      - image: ubuntu:16.04
    steps:
      - checkout
      - run:
          name: terraform apply
          command: |
            terraform init
            terraform apply -auto-approve
      - store_artifacts:
          path: terraform.tfstate
  terraform_destroy:
    working_directory: ~/tmp
    docker:
      - image: hashicorp/terraform:light
      - image: ubuntu:16.04
    steps:
      - checkout
      - run:
          name: terraform destroy
          command: |
            terraform init
            terraform destroy -auto-approve
workflows:
  version: 2
  terraform:
    jobs:
      - terraform_apply
      - click_here_to_delete:
          type: approval
          requires:
            - terraform_apply
      - terraform_destroy:
          requires:
            - click_here_to_delete
Here I am using two jobs in the CircleCI workflow, one for creation and one for deletion.
My first job runs successfully, but when I start the second it starts from scratch, so it doesn't have the state from the previous terraform apply and therefore Terraform cannot destroy the infrastructure it already created.
I am looking for a solution where I can save the state file and pass it to the next job so that Terraform can destroy the previously created infrastructure.
You should be using remote state.
Local state is only ever useful if you are always running Terraform from the same machine and don't care about losing your state file if you accidentally delete something, etc.
You can mix and match any of the available state backends, but as you're using AWS already it probably makes most sense to use the S3 backend.
You will need to define the state configuration for each location, which can be done entirely hardcoded in config, entirely by command line flags, or partially with both.
As an example, you should have something like this block in each of the directories you would run Terraform in:
terraform {
  backend "s3" {}
}
You could then finish configuring this during terraform init:
terraform init -backend-config="bucket=uniquely-named-terraform-state-bucket" \
               -backend-config="key=state-key/terraform.tfstate"
Once you have run terraform init, Terraform will fetch the state from S3 for any plans. Then on a terraform apply or terraform destroy it will update the state file as necessary.
This will then allow you to share the state easily among colleagues and also with CI/CD machines. You should also consider state locking using DynamoDB to prevent the state from being corrupted by multiple people modifying it at the same time. Equally, you should consider enabling versioning on the S3 bucket used for storing your state so you can always get back to an earlier version in the event of any issues.
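A sketch of what such a backend block might look like, with placeholder bucket and table names (the dynamodb_table argument expects a table whose partition key is LockID):
terraform {
  backend "s3" {
    bucket         = "uniquely-named-terraform-state-bucket"
    key            = "state-key/terraform.tfstate"
    region         = "us-west-2"
    dynamodb_table = "terraform-state-locks"   # enables state locking
    encrypt        = true                      # encrypt the state object at rest
  }
}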
