GitLab Pipeline Throws Invalid Provider Configuration Error on Terraform Configuration

I have some Terraform configuration which has, up until now, run successfully locally without any issues and without me explicitly inserting any provider block.
I transferred this same configuration to a GitLab pipeline with four stages, Validate, Plan, Apply and Destroy, declared in my .gitlab-ci.yml file, and suddenly the terraform plan command fails at the Plan stage with the "Invalid provider configuration" error below.
This is despite the fact that the terraform validate command runs successfully at the initial Validate stage.
Any idea why my configuration would suddenly throw this error in my GitLab pipeline when it has been perfectly fine when run locally?
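
One common difference between a local run and a multi-stage pipeline is that each GitLab job starts from a fresh environment, so the Plan job needs its own terraform init and has to run in the directory that actually contains the .tf files. A minimal sketch of a Plan job along those lines (the TF_ROOT variable and the working-directory layout are assumptions, not details taken from the question):

plan:
  stage: plan
  variables:
    TF_ROOT: ${CI_PROJECT_DIR}            # assumed location of the .tf files
  script:
    - cd "${TF_ROOT}"
    - terraform init                       # every job starts clean, so init again here
    - terraform plan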

Related

GitLab Pipeline is throwing an Azure CLI error

I'm running a very lightweight GitLab pipeline which executes a number of Terraform configurations. However, I have hit an absolute roadblock: the pipeline throws an Azure CLI error (screenshot below) when it attempts to run terraform init, and I simply can't seem to resolve this. Any ideas?
This error happens at the pipeline stage: validate.
Prior to that, however, I have another pipeline stage, deploy, where I am able to install the Azure CLI successfully using the commands below:
deploy:
  stage: deploy
  image: mcr.microsoft.com/dotnet/core/sdk:3.1
  script:
    - curl -sL https://aka.ms/InstallAzureCLIDeb | bash
So after some further investigation, it turns out that this error only occurs when I include my Terraform backend.tf file, which configures an Azure backend for storing my Terraform state file. Exclude this file and everything runs smoothly. I'm at a complete loss, as I definitely require that state file in Azure.
It does appear to me that the successful Azure CLI install at the pipeline deploy stage (above) isn't picked up by the backend.tf configuration.
Below is the content of my backend.tf file:
terraform {
  backend "azurerm" {
    resource_group_name  = "rg_xxx"
    storage_account_name = "stxxxxtfstate"
    container_name       = "terraform"
    key                  = "terraform.tfstate"
  }
}
And below is the YAML snippet from the pipeline deploy stage of my .gitlab-ci.yml file where I call terraform init and apply.
deploy:
  stage: deploy
  script:
    - terraform init
    - terraform plan -var-file=dev-settings.tfvars -out=plan.out
    - terraform apply -auto-approve plan.out
Thank you, Cdub, for pointing the OP in the right direction. Posting your valuable discussion as an answer to help other community members.
If the backend.tf file references the Azure Resource Manager provider from the Terraform configuration, then it's possible to move the Azure CLI installation step into the deploy stage where the Terraform commands are executed, right before terraform init.
Please refer to terraform-gitlab-image-no-azure-cli, which shows another way to install the Azure CLI in a GitLab pipeline.
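
Putting the two snippets from the question together, the deploy stage could then look roughly like this (a sketch only; the image and the install command are taken from the question's own deploy stage):

deploy:
  stage: deploy
  image: mcr.microsoft.com/dotnet/core/sdk:3.1
  script:
    - curl -sL https://aka.ms/InstallAzureCLIDeb | bash     # install the Azure CLI first
    - terraform init                                         # the azurerm backend can now authenticate
    - terraform plan -var-file=dev-settings.tfvars -out=plan.out
    - terraform apply -auto-approve plan.out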

plan.cache: no matching files, unable to attach Terraform report to the MR

I have the following plan stage in my GitLab CI, which I took from here, but unlike gitlab-terraform, terraform v1.1.0 doesn't have a plan-json option, so I am trying to reproduce the same behaviour with the following. The aim is to attach the plan changes as a report to GitLab merge requests.
plan:
  stage: plan
  script:
    - terraform plan -out=plan.json
    - terraform show -json plan.json
  artifacts:
    name: plan
    paths:
      - ${TF_ROOT}/plan.cache
    reports:
      terraform: ${TF_ROOT}/plan.json
However, in the MR all I see is "1 Terraform report failed to generate", and in the complete job log I see the following:
plan.cache: no matching files
ERROR: No files to upload
My question is: what is the difference between artifacts:paths and the path given under artifacts:reports:terraform?
And, where can I find/generate plan.cache?
$ terraform -v
Terraform v1.1.0
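
For reference, the gitlab-terraform plan-json helper writes the binary plan to plan.cache and converts the terraform show -json output into a small create/update/delete summary that the merge request widget understands. A sketch of reproducing that with plain terraform v1.1.0 (it assumes jq is available in the job image and that the job runs inside ${TF_ROOT}):

plan:
  stage: plan
  script:
    - terraform plan -out=plan.cache
    - terraform show -json plan.cache | jq -r '([.resource_changes[]?.change.actions?]|flatten)|{"create":(map(select(.=="create"))|length),"update":(map(select(.=="update"))|length),"delete":(map(select(.=="delete"))|length)}' > plan.json
  artifacts:
    name: plan
    paths:
      - ${TF_ROOT}/plan.cache
    reports:
      terraform: ${TF_ROOT}/plan.json

As for the two keys: artifacts:paths simply stores files so that later jobs (or a manual download) can use them, for example handing plan.cache on to an apply job, while artifacts:reports:terraform tells GitLab to parse the named file as a Terraform report and render it in the merge request. plan.cache is not something you find; it is whatever file name you choose when running terraform plan -out=.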

GitLab, Terraform, and String Interpolation

I want to use a GitLab Runner to deploy to AWS with Terraform. I have set up AWS credentials in GitLab "Variables" (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY). I must be misunderstanding how .gitlab-ci.yml performs string interpolation, because I cannot get the credentials to populate.
The stage in question looks like this:
validate:
  stage: validate
  dependencies:
    - lint
    - unit
  image:
    name: hashicorp/terraform:light
    entrypoint:
      - "/usr/bin/env"
      - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
      - "AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}"
      - "AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}"
  before_script:
    - rm -rf terraform/test/.terraform
    - terraform --version
    - terraform init -input=false -backend-config="access_key=${AWS_ACCESS_KEY_ID}" -backend-config="secret_key=${AWS_SECRET_ACCESS_KEY}" terraform/test
  script:
    - terraform validate
The pipeline fails without fail on the terraform init command. However, just to confirm I'm not crazy, I did try a pipeline run with the credentials hardcoded and it worked (I also immediately learned about how to permanently delete commits and pipelines).
From the relevant GitLab documentation on variable usage, I don't see anything obviously wrong.
Error message:
Initializing the backend...
Error: error using credentials to get account ID: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
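
One detail worth noting: GitLab already exports project CI/CD variables into each job's environment, and both the AWS provider and the S3 backend read AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the environment. If the ${...} references in the image entrypoint are not being expanded, they would set those variables to literal strings and mask the injected values. That is an assumption about the cause, not a confirmed diagnosis, but a sketch of the job without the entrypoint overrides looks like this:

validate:
  stage: validate
  dependencies:
    - lint
    - unit
  image:
    name: hashicorp/terraform:light
    entrypoint:
      - "/usr/bin/env"
      - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
  before_script:
    - rm -rf terraform/test/.terraform
    - terraform --version
    # AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY come from the GitLab CI/CD variables
    - terraform init -input=false -backend-config="access_key=${AWS_ACCESS_KEY_ID}" -backend-config="secret_key=${AWS_SECRET_ACCESS_KEY}" terraform/test
  script:
    - terraform validate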

Terraform apply command fails when run twice in GitLab pipeline and resources were already created [duplicate]

This question already has an answer here:
Terraform destroy fails in CircleCI
(1 answer)
Closed 3 years ago.
I'm quite new to Terraform and I'm trying to replicate in my Terraform configuration the stack I have already built for production (basically: API Gateway - Lambda - DynamoDB).
If I run terraform init, terraform plan and then terraform apply from my local host, everything is created as I want.
The problem arises when it comes to my GitLab CI/CD pipeline, as Terraform complains about the existing resources (the first time it runs properly, the second time it complains and throws an error).
My Terraform steps in my .gitlab-ci.yml file:
plan:
  stage: plan
  image:
    name: hashicorp/terraform:light
    entrypoint:
      - '/usr/bin/env'
      - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
  script:
    - cd terraform
    - rm -rf .terraform
    - terraform --version
    - terraform init
    - terraform plan

deploy:
  stage: deploy
  image:
    name: hashicorp/terraform:light
    entrypoint:
      - '/usr/bin/env'
      - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'
  script:
    - cd terraform
    - terraform init
    - terraform apply -auto-approve
  dependencies:
    - plan
  when: manual
I see in my pipeline console the following error:
So after some Googling I saw that maybe the terraform import command could help.
I then added this import command to my .gitlab-ci.yml:
script:
  - cd terraform
  - terraform init
  - terraform import aws_dynamodb_table.demo-dynamodb-table demo-dynamodb-table
  - terraform apply -auto-approve
And the error in the GitLab console was:
In the meantime I also tried this last change locally, and the error was:
So to summarize: I would need to know how to use Terraform in the right way to be able to run the apply command in my GitLab CI/CD pipeline without conflicts with the resources that were created in the previous run of this same pipeline.
As others have stated, you need to persist the Terraform state between pipeline runs.
In my GitLab projects, I use an S3 bucket to store the Terraform state, but have the CI pipeline fill in the key based on the GitLab project's path by setting the TF_CLI_ARGS_init environment variable.
terraform {
  backend "s3" {
    bucket = "bucket-name-here"
    region = "us-west-2"
    # key = $CI_PROJECT_PATH_SLUG
  }
}
I also set the Terraform workspace based on the project (this can be modified to support branches), set the name variable to the project name for use in the Terraform configuration, and set input to false so that the CI job doesn't get hung up on user prompts.
variables:
  TF_INPUT: "false"
  TF_WORKSPACE: "$CI_PROJECT_NAME"
  TF_VAR_name: "$CI_PROJECT_NAME"
  TF_CLI_ARGS_init: "-upgrade=true"
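
The block above only shows -upgrade=true; the backend key mentioned earlier would be passed through the same variable. A hypothetical illustration of what that could look like (the exact flag value is an assumption, not taken from the answer):

variables:
  # Hypothetical: adds a partial backend config so each project gets its own state key
  TF_CLI_ARGS_init: "-upgrade=true -backend-config=key=$CI_PROJECT_PATH_SLUG"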
For destroys, I also make sure to delete the workspace, so that there isn't stuff left over in the bucket.
.destroy:
  extends: .terraform
  stage: Cleanup
  script:
    - terraform init
    - terraform destroy -auto-approve
    # switch to the default workspace before deleting, since the active workspace can't be deleted
    - export WORKSPACE=$TF_WORKSPACE
    - export TF_WORKSPACE=default
    - terraform workspace delete "$WORKSPACE"

Azure DevOps pipeline build locally with YAML

How can I simulate the build process of an Azure DevOps pipeline on my local machine before pushing to the branch, so I can test for possible errors?
The solution builds locally correctly with no errors or warnings; MSBuild from the VS command line also builds the solution with no errors, but on some pushes the pipeline build throws many errors, mostly related to preprocessor definitions and precompiled headers.
I want to know how I can test the same process locally on my machine without pushing to the repo.
azure-pipelines.yml
-------------------
pool:
  vmImage: 'vs2017-win2016'
steps:
- task: MSBuild@1
  displayName: 'Build solution'
  inputs:
    platform: 'Win32'
    configuration: 'release'
    solution: 'mysolution.sln'
- task: VSTest@2
  displayName: 'Run Test'
  inputs:
    platform: 'Win32'
    configuration: 'release'
    testAssemblyVer2: |
      **\*.Test.dll
      !**\*TestAdapter.dll
      !**\obj\**
    runSettingsFile: project.Test/test.runsettings
    codeCoverageEnabled: true
If you are using a Git repository you can create another branch and make a pull request. As long as the pull request is not set to auto-complete, the code will not get committed to the repository.
If you are using a TFVC repository you can set up a gated build that is configured to fail. The pipeline should be a copy of your original pipeline, but add a PowerShell task at the end of the build pipeline that throws a terminating error. Be sure to set up this gated build on a separate branch so it does not block normal development.
Write-Error "Fail here" -ErrorAction 'Stop'
You can now make pull requests or trigger a gated build without the code actually being committed.
You can use AzurePipelinesPS to install an agent on your local machine with the Install-APAgent command if you need another agent.
I'm only a few hours into development with Azure, but I think I found a solution that would work for you; I happen to already have it in place. Use Gradle: the default YAML then just runs Gradle, and you don't have to worry too much about it after the first run. In the Gradle file you could also spin up a Docker image if you want and build on that.
The issue you have is most likely related to the difference between your local environment and the one on the build agent where this YAML pipeline executes the build. Testing it locally (even if it were possible) would not help, as it would run in your environment, where you already know that every component required for a successful build is installed. In the environment where the build agent runs the build, on the other hand, there seem to be missing components (or different versions), which cause your build to fail. Try to compare the list of installed components and environment variables (like PATH) on your local machine and on the build agent; there might be some differences between them.