Error: Missing required provider in next stage even after init - gitlab

I have the following CI configuration:
...
cache:
  key: ${CI_PROJECT_NAME}
  paths:
    - ${TF_ROOT}/.terraform

before_script:
  - echo -e "credentials \"$CI_SERVER_HOST\" {\n token = \"$CI_JOB_TOKEN\"\n}" > $TF_CLI_CONFIG_FILE
  - cd ${TF_ROOT}
  - export TF_LOG_CORE=TRACE
  - export TF_LOG_PATH=terraform_logs.txt

stages:
  - initialize
  - validate

init:
  stage: initialize
  script:
    - terraform -v
    - terraform init
    #- terraform validate

validate:
  stage: validate
  script:
    - terraform validate
My init runs totally fine; however, I get the following in the next stage, i.e. validate:
$ terraform validate
╷
│ Error: Missing required provider
│
│ This configuration requires provider registry.terraform.io/datadog/datadog,
│ but that provider isn't available. You may be able to install it
│ automatically by running:
│ terraform init
in provider.tf:
terraform {
  required_version = ">= 0.14"

  required_providers {
    datadog = {
      source  = "DataDog/datadog"
      version = "2.24.0"
    }
  }
}
in config.toml:
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "some runner"
  url = "****
  token = "***"
  executor = "shell"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
If I run validate as a subsequent command in the init stage itself, it works fine; it just fails in a different stage.
If I do ls -al in the next stage before validate, I can even see the .terraform folder present, which should have the providers inside.
My second guess was a caching issue, but I believe I have specified the cache correctly: ${TF_ROOT}/.terraform.
I am running gitlab-runner with the shell executor.
Any idea what is wrong here?
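For completeness, one workaround I am considering (an assumption on my part, not something confirmed yet): since init succeeds and the cache restores ${TF_ROOT}/.terraform, re-running a cheap terraform init in the validate job should re-link the cached providers:

validate:
  stage: validate
  script:
    - terraform init   # cheap when ${TF_ROOT}/.terraform is restored from the cache
    - terraform validate

Alternatively, ${TF_ROOT}/.terraform could be passed from the init job as an artifact instead of a cache.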

Related

Failed to create my first OpenStack VM by way of terraform

I am trying to create OpenStack VMs with Terraform for the first time, but so far no luck.
Here is what I have in my main.tf file:
...
terraform {
  required_providers {
    openstack = {
      source = "terraform-provider-openstack/openstack"
    }
  }
}

provider "openstack" {
  cloud = "osp_admin" # cloud defined in cloud.yml file
}

# Variables
variable "keypair" {
  type    = string
  default = "ubuntu" # name of keypair created
}

variable "network" {
  type    = string
  default = "Public_External_1" # default network to be used
}

variable "security_groups" {
  type    = list(string)
  default = ["default"] # name of default security group
}

# Data sources
## Get flavor id
data "openstack_compute_flavor_v2" "flavor" {
  name = "mt.small" # flavor to be used
}

## Get image ID
data "openstack_images_image_v2" "image" {
  name        = "Debian-10" # name of image to be used
  most_recent = true
}
The "Debian-10" image has been created; I have verified it in the image list. Now, if I run this on my command line:
terraform plan
I get this message in return:
data.openstack_images_image_v2.image: Reading...
data.openstack_compute_flavor_v2.flavor: Reading...
╷
│ Error: Error creating OpenStack compute client: Post "http://van3-st-vn-01.corp.<domain_name>.com:5000/v3/auth/tokens": OpenStack connection error, retries exhausted. Aborting. Last error was: EOF
│
│ with data.openstack_compute_flavor_v2.flavor,
│ on main.tf line 32, in data "openstack_compute_flavor_v2" "flavor":
│ 32: data "openstack_compute_flavor_v2" "flavor" {
│
╵
╷
│ Error: Error creating OpenStack image client: Post "http://van3-st-vn-01.corp.<domain_name>.com:5000/v3/auth/tokens": OpenStack connection error, retries exhausted. Aborting. Last error was: EOF
│
│ with data.openstack_images_image_v2.image,
│ on main.tf line 37, in data "openstack_images_image_v2" "image":
│ 37: data "openstack_images_image_v2" "image" {
│
╵
I am running Terraform on Ubuntu 22.04; here is the version output:
/usr/bin/terraform --version
Terraform v1.2.9
on linux_amd64
+ provider registry.terraform.io/terraform-provider-openstack/openstack v1.48.0
If I log in to OpenStack directly, I am able to create this instance with the same set of parameters.
Any idea what I did wrong here?
Thanks,
Chun
Update:
curl -v http://van3-st-vn-01.corp.<domain_name>.com:5000/v3/auth/tokens
* Trying 10.95.36.130:5000...
* TCP_NODELAY set
* Connected to van3-st-vn-01.corp.<domain_name>.com (10.95.36.130) port 5000 (#0)
> GET /v3/auth/tokens HTTP/1.1
> Host: van3-st-vn-01.corp.<domain_name>.com:5000
> User-Agent: curl/7.68.0
> Accept: */*
>
* Empty reply from server
* Connection #0 to host van3-st-vn-01.corp.<domain_name>.com left intact
curl: (52) Empty reply from server
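One thing I notice re-reading the output: Terraform's failures are POST requests to /v3/auth/tokens, while my curl above issued a GET; Keystone issues tokens via POST with a JSON body, so a closer manual reproduction (user, domain and password are placeholders) would be:

curl -v -X POST http://van3-st-vn-01.corp.<domain_name>.com:5000/v3/auth/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth":{"identity":{"methods":["password"],"password":{"user":{"name":"USER","domain":{"name":"Default"},"password":"PASS"}}}}}'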

How to add a label to my vm instance in gcp via terraform/terragrunt

I have an issue in our environment where I cannot add a label to a VM instance in GCP via Terraform/Terragrunt after creation. We have a Google repository that is set up via Terraform, and we use Git to clone and update from a local repository; this activates a Cloud Build trigger that pushes the changes to the repo. We do not use terraform/terragrunt commands at all; it is all controlled via Git. The labels are referenced in our compute module as shown:
variable "labels" {
description = "Labels to add."
type = map(string)
default = {}
}
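Inside the compute module, that variable is presumably passed straight through to the instance resource; a minimal sketch of the wiring (the resource and its other arguments here are assumptions, not our actual module code):

resource "google_compute_instance" "vm" {
  # ... name, machine_type, disks, network interfaces ...
  labels = var.labels # labels is a map(string) argument on this resource, not a nested block
}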
OK, on to the issue. We have a mix of lift-and-shift and cloud-native VM instances in our environment. We recently decided to add an additional label in the code to identify whether the instance is under Terraform control, i.e. terraform = "true/false":
labels = {
  application      = "demo-test"
  businessunit     = "homes"
  costcentre       = "90imt"
  createdby        = "ab"
  department       = "it"
  disasterrecovery = "no"
  environment      = "rnd"
  contact          = "abriers"
  terraform        = "false"
}
So I add the label and use the usual git add/commit/push commands, which triggers Cloud Build as usual. The problem is, the label does not appear in the console when viewing the instance.
It's as if Cloud Build or Terraform/Terragrunt isn't recognising it as a change. I can change the value of an existing label no problem, but I cannot add or remove a label after the VM has been created.
It has been suggested to run terraform/terragrunt plan in VS Code, but as mentioned, this has all been set up to use Git, so those commands do not work.
For example, I run terragrunt init in the directory and get this error:
PS C:\Cloudrepos\placesforpeople> terragrunt init
time=2022-07-27T09:56:27+01:00 level=error msg=Error reading file at path C:/Cloudrepos/placesforpeople/terragrunt.hcl: open C:/Cloudrepos/placesforpeople/terragrunt.hcl: The system cannot find the
file specified.
time=2022-07-27T09:56:27+01:00 level=error msg=Unable to determine underlying exit code, so Terragrunt will exit with error code 1
PS C:\Cloudrepos\placesforpeople> cd org
PS C:\Cloudrepos\placesforpeople\org> cd rnd
PS C:\Cloudrepos\placesforpeople\org\rnd> cd adam_play_area
PS C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area> ls
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 20/07/2022 14:18 modules
d----- 20/07/2022 14:18 test_project_001
PS C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area> cd test_project_001
PS C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area\test_project_001> cd compute
PS C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area\test_project_001\compute> ls
Directory: C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area\test_project_001\compute
Mode LastWriteTime Length Name
---- ------------- ------ ----
d----- 07/07/2022 15:51 start_stop_schedule
d----- 20/07/2022 14:18 umig
-a---- 07/07/2022 16:09 1308 .terraform.lock.hcl
-a---- 27/07/2022 09:56 2267 terragrunt.hcl
PS C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area\test_project_001\compute> terragrunt init
Initializing modules...
- data_disk in ..\compute_data_disk
Initializing the backend...
Successfully configured the backend "gcs"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing provider plugins...
- Reusing previous version of hashicorp/google from the dependency lock file
- Reusing previous version of hashicorp/google-beta from the dependency lock file
╷
│ Warning: Backend configuration ignored
│
│ on ..\compute_data_disk\backend.tf line 3, in terraform:
│ 3: backend "gcs" {}
│
│ Any selected backend applies to the entire configuration, so Terraform
│ expects provider configurations only in the root module.
│
│ This is a warning rather than an error because it's sometimes convenient to
│ temporarily call a root module as a child module for testing purposes, but
│ this backend configuration block will have no effect.
╵
╷
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider
│ hashicorp/google: could not connect to registry.terraform.io: Failed to
│ request discovery document: Get
│ "https://registry.terraform.io/.well-known/terraform.json": Proxy
│ Authorization Required
╵
╷
│ Error: Failed to query available provider packages
│
│ Could not retrieve the list of available versions for provider
│ hashicorp/google-beta: could not connect to registry.terraform.io: Failed
│ to request discovery document: Get
│ "https://registry.terraform.io/.well-known/terraform.json": Proxy
│ Authorization Required
╵
time=2022-07-27T09:57:40+01:00 level=error msg=Hit multiple errors:
Hit multiple errors:
exit status 1
PS C:\Cloudrepos\placesforpeople\org\rnd\adam_play_area\test_project_001\compute>
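As an aside on the init output above: the "Proxy Authorization Required" responses from registry.terraform.io look like a corporate proxy rejecting unauthenticated requests; a sketch of pointing the process at the proxy from PowerShell (host and credentials are placeholders) would be:

$env:HTTP_PROXY  = "http://user:password@proxy.example.com:8080"
$env:HTTPS_PROXY = "http://user:password@proxy.example.com:8080"
terragrunt init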
But as mentioned, we don't use and have never used these commands to push changes.
I cannot work out why these labels won't add/remove after the VM has already been created.
I have tried making a change to an instance to trigger the change, such as increasing the disk size.
I have also tried to create a block in the module for all the labels needed, but this doesn't work, as you cannot declare labels as a block in this module:
labels {
  application      = var.labels.application
  businessunit     = var.labels.businessunit
  costcentre       = var.labels.costcentre
  createdby        = var.labels.createdby
  department       = var.labels.department
  disasterrecovery = var.labels.disasterrecovery
  environment      = var.labels.environment
  contact          = var.labels.contact
  terraform        = var.labels.terraform
}
Any ideas? I know you cannot add a label to a project after creation; does the same apply to VM instances? Is there an alternative method I can test?
As requested, this is the code for the VM instance:
terraform {
  source = "../../modules//compute_instance_static_ip/"
}

# Include all settings from the root terragrunt.hcl file
include {
  path = find_in_parent_folders("org.hcl")
}

dependency "project" {
  config_path = "../project"

  # Configure mock outputs for the terraform commands that are returned when
  # there are no outputs available (e.g. the module hasn't been applied yet).
  mock_outputs_allowed_terraform_commands = ["plan", "validate"]
  mock_outputs = {
    project_id = "project-not-created-yet"
  }
}

prevent_destroy = false

inputs = {
  gcp_instance_sa_email = "testprj-compute@gc-r-prj-testprj-0001-9627.iam.gserviceaccount.com" # this tells GCP to use the default GCE service account
  instance_name         = "rnd-demo-test1"
  network               = "projects/gc-a-prj-vpchost-0001-3312/global/networks/gc-r-vpc-0001"
  subnetwork            = "projects/gc-a-prj-vpchost-0001-3312/regions/europe-west2/subnetworks/gc-r-snet-middleware-0001"
  zone                  = "europe-west2-c"
  region                = "europe-west2"
  project               = dependency.project.outputs.project_id
  os_image              = "debian-10-buster-v20220118"
  machine_type          = "n1-standard-4"
  boot_disk_size        = 100
  instance_scope        = ["cloud-platform"]
  instance_tags         = ["demo-test"]
  deletion_protection   = "false"

  metadata = {
    windows-startup-script-ps1 = "Set-TimeZone -Id 'GMT Standard Time' -PassThru"
  }

  ip_address_region = "europe-west2"
  ip_address_type   = "INTERNAL"

  attached_disks = {
    data = {
      size = 60
      type = "pd-standard"
    }
  }

  /*
  instance_schedule_policy = {
    name = "start-stop"
    #region = "europe-west2"
    vm_start_schedule = "30 07 * * *"
    vm_stop_schedule  = "00 18 * * *"
    time_zone         = "GMT"
  }
  */

  labels = {
    application      = "demo-test"
    businessunit     = "homes"
    costcentre       = "90imt"
    createdby        = "ab"
    department       = "it"
    disasterrecovery = "no"
    environment      = "rnd"
    contact          = "abriers"
    terraform        = "false"
  }
}
The terragrunt validate-inputs result is below:
PS C:\Cloudrepos\placesforpeople\org\rnd> terragrunt validate-inputs
time=2022-07-27T14:25:19+01:00 level=warning msg=The following inputs passed in by terragrunt are unused:
prefix=[C:\Cloudrepos\placesforpeople\org\rnd]
time=2022-07-27T14:25:19+01:00 level=warning msg= - billing_account prefix=[C:\Cloudrepos\placesforpeople\org\rnd]
time=2022-07-27T14:25:19+01:00 level=warning msg= - host_project_id prefix=[C:\Cloudrepos\placesforpeople\org\rnd]
time=2022-07-27T14:25:19+01:00 level=warning prefix=[C:\Cloudrepos\placesforpeople\org\rnd]
time=2022-07-27T14:25:19+01:00 level=info msg=All required inputs are passed in by terragrunt. prefix=[C:\Cloudrepos\placesforpeople\org\rnd]
time=2022-07-27T14:25:19+01:00 level=error msg=Terragrunt configuration has misaligned inputs
time=2022-07-27T14:25:19+01:00 level=error msg=Unable to determine underlying exit code, so Terragrunt will exit with error code 1
PS C:\Cloudrepos\placesforpeople\org\rnd>
I have found the culprit!
In the compute instance module I discovered this block of code. I removed labels from ignore_changes and voilà, the extra labels now appear. Thanks for the assistance and advice on post formatting.
lifecycle {
  ignore_changes = [
    boot_disk.0.initialize_params.0.image,
    attached_disk, labels
  ]
}
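For clarity, the corrected block, with labels dropped from ignore_changes so that label edits are picked up again, is:

lifecycle {
  ignore_changes = [
    boot_disk.0.initialize_params.0.image,
    attached_disk,
  ]
}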

Issue installing Terratest using Go task's YAML Azure pipeline - issue triggering Terratest tests in sub-folders

I'm facing this issue while installing Terratest from an Azure YAML pipeline:
C:\hostedtoolcache\windows\go\1.17.1\x64\bin\go.exe install -v github.com/gruntwork-io/terratest@v0.40.6
go: downloading github.com/gruntwork-io/terratest v0.40.6
go install: github.com/gruntwork-io/terratest@v0.40.6: module github.com/gruntwork-io/terratest@v0.40.6 found, but does not contain package github.com/gruntwork-io/terratest
##[error]The Go task failed with an error: Error: The process 'C:\hostedtoolcache\windows\go\1.17.1\x64\bin\go.exe' failed with exit code 1
Finishing: Install Go Terratest module - v0.40.6
My installation code is below:
- task: Go@0
  displayName: Install Go Terratest module - v$(TERRATEST_VERSION)
  inputs:
    command: custom
    customCommand: install
    arguments: $(TF_LOG) github.com/gruntwork-io/terratest@v$(TERRATEST_VERSION)
    workingDirectory: $(pipeline_artefact_folder_extract)/$(pathToTerraformRootModule)
But perhaps I am misusing Terratest.
Below is a screenshot of my code tree:
I have Terraform code in, for example, the Terraform\azure_v2_X\ResourceModules sub-directory, and Terratest tests in the Terraform\azure_v2_X\Tests_Unit_ResourceModules subdirectories (in the screenshot, app_configuration tests for the app_configuration resource module).
In my Terratest module, I call my resource module as in the following code:
#### test in an isolated Resource Group defined in locals
module "app_configuration_tobetested" {
  source = "../../ResourceModules/app_configuration"

  resource_group_name = local.rg_name
  location            = local.location
  environment         = var.ENVIRONMENT
  sku                 = "standard"

  // rem: here the app_service_shared prefix and the app_config_shared prefix are the same!
  app_service_prefix = module.app_configuration_list_fortests.settings.frontEnd_prefix
  # stage            = var.STAGE
  app_config_list    = module.app_configuration_list_fortests.settings.list_app_config
}
And in my Go file, I check the module's result against the expected result I want:
package RM_app_configuration_Test

import (
    "os"
    "testing"

    // "github.com/gruntwork-io/terratest/modules/azure"
    "github.com/gruntwork-io/terratest/modules/terraform"
    "github.com/stretchr/testify/assert"
)

var (
    globalBackendConf = make(map[string]interface{})
    globalEnvVars     = make(map[string]string)
)

func TestTerraform_RM_app_configuration(t *testing.T) {
    t.Parallel()

    // terraform directory
    fixtureFolder := "./"

    // backend specification
    strlocal := "RMapCfg_"

    // input values
    inputStage := "sbx_we"
    inputEnvironment := "SBX"
    inputApplication := "DEMO"

    // expected values
    expectedRsgName := "z-adf-ftnd-shrd-sbx-ew1-rgp01"
    // expectedAppCfgPrefix := "z-adf-ftnd-shrd"
    expectedAppConfigReader_ID := "[/subscriptions/f04c8fd5-d013-41c3-9102-43b25880d2e2/resourceGroups/z-adf-ftnd-shrd-sbx-ew1-rgp01/providers/Microsoft.AppConfiguration/configurationStores/z-adf-ftnd-shrd-sbx-ew1-blue-sbx-cfg01 /subscriptions/f04c8fd5-d013-41c3-9102-43b25880d2e2/resourceGroups/z-adf-ftnd-shrd-sbx-ew1-rgp01/providers/Microsoft.AppConfiguration/configurationStores/z-adf-ftnd-shrd-sbx-ew1-green-sbx-cfg01]"

    /*
        Go and Terraform use two different methods for Azure authentication.
        ** Terraform authentication is explained below:
           - https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/service_principal_client_secret#configuring-the-service-principal-in-terraform
        ** Go authentication is explained below:
           - https://learn.microsoft.com/en-us/azure/developer/go/azure-sdk-authorization#use-environment-based-authentication
        ** Terratest uses both authentication methods depending on the work to be done:
           - Azure existence tests use the Go Azure authentication:
             https://github.com/gruntwork-io/terratest/blob/master/modules/azure/authorizer.go#L11
           - terraform commands use the Terraform authentication:
             https://github.com/gruntwork-io/terratest/blob/0d654bd2ab781a52e495f61230cf892dfba9731b/modules/terraform/cmd.go#L12
             https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/service_principal_client_secret#configuring-the-service-principal-in-terraform
        So both authentication methods have to be implemented.
    */
    // getting Terraform env vars from the Azure Go environment variables
    ARM_CLIENT_ID := os.Getenv("AZURE_CLIENT_ID")
    ARM_CLIENT_SECRET := os.Getenv("AZURE_CLIENT_SECRET")
    ARM_TENANT_ID := os.Getenv("AZURE_TENANT_ID")
    ARM_SUBSCRIPTION_ID := os.Getenv("ARM_SUBSCRIPTION_ID")
    if ARM_CLIENT_ID != "" {
        globalEnvVars["ARM_CLIENT_ID"] = ARM_CLIENT_ID
        globalEnvVars["ARM_CLIENT_SECRET"] = ARM_CLIENT_SECRET
        globalEnvVars["ARM_SUBSCRIPTION_ID"] = ARM_SUBSCRIPTION_ID
        globalEnvVars["ARM_TENANT_ID"] = ARM_TENANT_ID
    }

    // getting the terraform backend from environment variables
    resource_group_name := os.Getenv("resource_group_name")
    storage_account_name := os.Getenv("storage_account_name")
    container_name := os.Getenv("container_name")
    key := strlocal + os.Getenv("key")
    if resource_group_name != "" {
        globalBackendConf["resource_group_name"] = resource_group_name
        globalBackendConf["storage_account_name"] = storage_account_name
        globalBackendConf["container_name"] = container_name
        globalBackendConf["key"] = key
    }

    // use Terratest to deploy the infrastructure
    terraformOptions := terraform.WithDefaultRetryableErrors(t, &terraform.Options{
        // website::tag::1::Set the path to the Terraform code that will be tested.
        TerraformDir: fixtureFolder,
        // Variables to pass to our Terraform code using -var options
        Vars: map[string]interface{}{
            "STAGE":       inputStage,
            "ENVIRONMENT": inputEnvironment,
            "APPLICATION": inputApplication,
        },
        EnvVars: globalEnvVars,
        // backend values to set when initializing Terraform
        BackendConfig: globalBackendConf,
        // Disable colors in Terraform commands so it's easier to parse stdout/stderr
        NoColor: true,
    })

    // website::tag::4::Clean up resources with "terraform destroy". Using "defer" runs the command
    // at the end of the test, whether the test succeeds or fails.
    defer terraform.Destroy(t, terraformOptions)

    // website::tag::2::Run "terraform init" and "terraform apply"; fail the test if there are any errors.
    terraform.InitAndApply(t, terraformOptions)

    // test the resource group of the app_configuration
    /*
        actualAppConfigReaderPrefix := terraform.Output(t, terraformOptions, "app_configuration_tested_prefix")
        assert.Equal(t, expectedAppCfgPrefix, actualAppConfigReaderPrefix)
    */
    actualRSGReaderName := terraform.Output(t, terraformOptions, "app_configuration_tested_RG_name")
    assert.Equal(t, expectedRsgName, actualRSGReaderName)

    actualAppConfigReader_ID := terraform.Output(t, terraformOptions, "app_configuration_tobetested_id")
    assert.Equal(t, expectedAppConfigReader_ID, actualAppConfigReader_ID)
}
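(For a single module, I trigger its tests with something like the commands below; the -timeout flag is my own habit, since terraform apply/destroy can easily exceed Go's default 10-minute test timeout:)

cd Terraform\azure_v2_X\Tests_Unit_ResourceModules\app_configuration
go test -v -timeout 60m -run TestTerraform_RM_app_configuration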
The fact is, locally I can run the following command from my main folder Terraform\Azure_v2_X\Tests_Unit_ResourceModules to trigger all my tests in a row (since Go v1.11):
go test ./...
With Go 1.12, I could set GO111MODULE=auto to get the same results.
But with Go 1.17, I now have to set GO111MODULE=off to trigger my tests.
For now, I have two main questions nagging me:
How can I import Terratest (and other) Go modules from an Azure pipeline?
What do I have to do to correctly use Go modules with Terratest?
I have no Go code in my main folder Terraform\Azure_v2_X\Tests_Unit_ResourceModules and would like to trigger all the sub-folder Go tests with a single command line in my Azure pipeline.
Thank you for any help you could give.
Best regards,
I will once again answer my own question. :D
So, for now, using the following versions:
-- GOVERSION: 1.17.1
-- TERRAFORM_VERSION: 1.1.7
-- TERRATEST_VERSION: 0.40.6
The folder hierarchy regarding Terratest tests has changed as follows:
I no longer try to import my Terratest module with Go (so point 1 above is answered, obviously).
I now just have to:
- go mod init each of my Terratest modules
- trigger each of them individually, one by one, using a script
So my pipeline became the following:
- task: ms-devlabs.custom-terraform-tasks.custom-terraform-installer-task.TerraformInstaller@0
  displayName: Install Terraform $(TERRAFORM_VERSION)
  inputs:
    terraformVersion: $(TERRAFORM_VERSION)

- task: GoTool@0
  displayName: 'Use Go $(GOVERSION)'
  inputs:
    version: $(GOVERSION)
    goPath: $(GOPATH)
    goBin: $(GOBIN)

- task: PowerShell@2
  displayName: run Terratest for $(pathToTerraformRootModule)
  inputs:
    targetType: 'filePath'
    filePath: $(pipeline_artefact_folder_extract)/$(pathToTerraformRootModule)/$(Run_Terratest_script)
    workingDirectory: $(pipeline_artefact_folder_extract)/$(pathToTerraformRootModule)
  env:
    # see https://learn.microsoft.com/en-us/azure/developer/go/azure-sdk-authorization#use-environment-based-authentication
    # for Azure authentication with Go
    ARM_SUBSCRIPTION_ID: $(TF_VAR_ARM_SUBSCRIPTION_ID)
    AZURE_CLIENT_ID: $(TF_VAR_ARM_CLIENT_ID)
    AZURE_TENANT_ID: $(TF_VAR_ARM_TENANT_ID)
    AZURE_CLIENT_SECRET: $(TF_VAR_ARM_CLIENT_SECRET) # set as pipeline secret
    resource_group_name: $(storageAccountResourceGroup)
    storage_account_name: $(storageAccount)
    container_name: $(stateBlobContainer)
    key: '$(MODULE)-$(TF_VAR_APPLICATION)-$(TF_VAR_ENVIRONMENT).tfstate'
    GO111MODULE: 'auto'
And in my main folder for my Terratest sub-folders, I have the run_terratests.ps1 script and the Terratests list file as below:
run_terratests.ps1
# this file is based on https://github.com/google/go-cloud/blob/master/internal/testing/runchecks.sh
#
# This script runs all Go Terratest suites,
# compatibility checks, consistency checks, Wire, etc.
$moduleListFile = "./Terratests"

# regex filter: lines that do not begin with '#'
$regexFilter = "^[^#]"

# read the module list file
[object] $arrayFromFile = Get-Content -Path $moduleListFile | Where-Object { $_ -match $regexFilter } | ConvertFrom-String -PropertyNames folder, totest

$result = 0 # no error by default

# remember the current folder
$main_path = Get-Location | Select-Object -ExpandProperty Path

# walk the array and run the tests flagged "yes"
foreach ($line in $arrayFromFile) {
    # Write-Host $line
    if ($line.totest -eq "yes") {
        $path = $line.folder
        Set-Location $main_path\$path
        $myPath = Get-Location
        # Write-Host $myPath
        # trigger terratest for this module
        go test ./...
    }
    if ($false -eq $?) {
        $result = 1
    }
}

# back to school :D
Set-Location $main_path

if ($result -eq 1) {
    Write-Error "Msbuild exit code indicates test failure."
    Write-Host "##vso[task.logissue type=error]Msbuild exit code indicates test failure."
    exit(1)
}
The code
if ($false -eq $?) {
    $result = 1
}
is useful to make the pipeline fail on a test error without skipping the remaining tests.
Terratests
# this file lists all the modules to be tested in the "Tests_Unit_ConfigHelpers" repository.
# it is used by the "run_terratests.ps1" powershell script to trigger terratest for each test.
#
# Any line that doesn't begin with a '#' character and isn't empty is treated
# as a path relative to the top of the repository that has a module in it.
# The 'tobetested' field specifies whether this module has to be tested.
#
# this file is based on https://github.com/google/go-cloud/blob/master/allmodules

# module-directory                    tobetested
azure_constants                       yes
configure_app_srv_etc                 yes
configure_frontdoor_etc               yes
configure_hostnames                   yes
constants                             yes
FrontEnd_AppService_slots/_main       yes
FrontEnd_AppService_slots/settings    yes
merge_maps_of_strings                 yes
name                                  yes
name_template                         yes
network/hostname_generator            yes
network/hostnames_generator           yes
replace_2vars_into_string_etc         yes
replace_var_into_string_etc           yes
sorting_map_with_an_other_map         yes
And the change in each Terratest folder is that I add the go.mod and go.sum files:
$ go mod init mytest
go: creating new go.mod: module mytest
go: to add module requirements and sums:
go mod tidy
and
$ go mod tidy
# link each of the go modules needed for your terratest module
So, with that, the go test ./... from the PowerShell script will download the needed Go modules and run the tests for that particular module.
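For reference, a minimal go.mod for one of these test folders might look like this after go mod tidy (the testify version is an assumption; the Terratest version comes from the pipeline variables above):

module mytest

go 1.17

require (
    github.com/gruntwork-io/terratest v0.40.6
    github.com/stretchr/testify v1.7.0
)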
Thanks for reading and vote if you think that can help :)

Jenkins console shows a permission denied error when terraform init runs in the pipeline

Please find below the pipeline script and the Jenkins console output from running the job:
pipeline {
    agent any
    parameters {
        string(name: 'myvmname', defaultValue: 'az-k8s', description: 'VM name')
        string(name: 'mymodulename', defaultValue: 'aks', description: 'Module name')
        string(name: 'operation', defaultValue: 'apply', description: 'terraform operation apply/destroy')
        booleanParam(name: 'AKS', defaultValue: true, description: 'AKS Cluster Creation')
    }
    stages {
        stage('IAC AKS - Git Clone') {
            when {
                expression { params.AKS == true }
            }
            steps {
                git branch: 'main', credentialsId: 'xxxx-xxxx-xxxx-xxx', url: 'http://xxx.xx.xx.x/root/rapidopsiacaks.git'
            }
        }
        stage('AKS Cluster Creation') {
            when {
                expression { params.AKS == true }
            }
            steps {
                sh '''echo "AKS cluster creation"
                mkdir -p /var/lib/jenkins/workspace/VM_Prov_Terraform/${myvmname}
                cp -r /var/lib/jenkins/workspace/TerraformAKS/* /var/lib/jenkins/workspace/VM_Prov_Terraform/${myvmname}
                cd /var/lib/jenkins/workspace/VM_Prov_Terraform/${myvmname}/${mymodulename}
                terraform init
                terraform plan
                terraform ${operation} -auto-approve
                '''
            }
        }
    }
}
When I run / trigger the job via Jenkins, I get the below error (sharing the complete console output):
+ cd /var/lib/jenkins/workspace/VM_Prov_Terraform/az-k8s/aks
+ terraform init
There are some problems with the configuration, described below.

The Terraform configuration must be valid before initialization so that
Terraform can determine which modules and providers need to be installed.

Error: Reserved argument name in module block

  on main.tf line 24, in module "aks":
  24:   depends_on = [azurerm_resource_group.rg]

The name "depends_on" is reserved for use in a future version of Terraform.

Error: Invalid version constraint

  on provider.tf line 3, in terraform:
   3:   azurerm = {
   4:     source  = "hashicorp/azurerm"
   5:     version = "~> 2.65"
   6:   }

A string value is required for azurerm.
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE
Also, when I run terraform init in a terminal, I get the below error:
[devopsadmin@jenkins1 aks]$ terraform init
Initializing modules...
╷
│ Error: Failed to update module manifest
│
│ Unable to write the module manifest file: open .terraform/modules/modules.json: permission denied
Please help!
Thanks in advance!
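For context: both configuration errors in the console are what a pre-0.13 Terraform binary prints when it parses 0.13-style syntax (depends_on inside a module block and the object form of required_providers were both introduced in Terraform 0.13), so the binary on the Jenkins agent is likely older than the configuration expects; the separate "permission denied" from the terminal run looks like filesystem ownership on .terraform/, which is an independent issue. A sketch of a provider.tf that makes the version requirement explicit (assuming azurerm ~> 2.65 is the intent) would be:

terraform {
  required_version = ">= 0.13"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.65"
    }
  }
}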

Why is TF build failing with "Error refreshing state: HTTP remote state endpoint requires auth"?

My pipeline builds, plans and applies just fine for my dev branch. When I push to my master branch, I get "Error refreshing state: HTTP remote state endpoint requires auth". See the pipeline logs below (and, since someone will ask: the token "$onbaord_cloud_account_into_monitoring" has complete read/write access to the scoped project API)...
Running with gitlab-runner 13.4.1 (REDACTED)
on runner-gitlab-runner-REDACTED-REDACTED REDACTED
Resolving secrets
00:00
Preparing the "kubernetes" executor
00:00
Using Kubernetes namespace: gitlab-managed-apps
Using Kubernetes executor with image my-gitlab.io:5005/team-cloud-platform-team/terraform-modules/root-module-deployment/python-terraform14 ...
Preparing environment
00:03
Waiting for pod gitlab-managed-apps/runner-REDACTED-project-REDACTED-concurrent-REDACTED to be running, status is Pending
Running on runner-REDACTED-project-REDACTED-concurrent-REDACTED via runner-gitlab-runner-REDACTED-REDACTED...
Getting source from Git repository
00:02
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/team-cloud-platform-team/teamcloudv2/workers/onboard-cloud-account-into-monitoring/.git/
Created fresh repository.
Checking out REDACTED as master...
Skipping Git submodules setup
Restoring cache
00:00
Checking cache for REDACTEDREDACTEDREDACTEDREDACTED...
No URL provided, cache will not be downloaded from shared cache server. Instead a local version of cache will be extracted.
Successfully extracted cache
Executing "step_script" stage of the job script
00:05
$ git config --global credential.helper cache
$ git fetch
From https://my-gitlab.io/team-cloud-platform-team/teamcloudv2/workers/onboard-cloud-account-into-monitoring
* [new branch] dev -> origin/dev
$ if [ "$CI_COMMIT_REF_NAME" == "master" ]; then TF_ACCOUNT="REDACTED";fi
$ if [ "$CI_COMMIT_REF_NAME" == "dev" ]; then TF_ACCOUNT="REDACTED";fi
$ if [ "$CI_COMMIT_REF_NAME" == "master" ]; then ORG_ACCOUNT="REDACTED";fi
$ if [ "$CI_COMMIT_REF_NAME" == "dev" ]; then ORG_ACCOUNT="REDACTED";fi
$ echo ${TF_ACCOUNT}
REDACTED
$ echo ${TF_ADDRESS}
https://my-gitlab.io/api/v4/projects/10849/terraform/state/onboard-cloud-account-into-monitoring-master
$ echo TF_HTTP_ADDRESS=${TF_ADDRESS}
TF_HTTP_ADDRESS=https://my-gitlab.io/api/v4/projects/10849/terraform/state/onboard-cloud-account-into-monitoring-master
$ echo TF_HTTP_LOCK_ADDRESS=${TF_LOCK}
TF_HTTP_LOCK_ADDRESS=https://my-gitlab.io/api/v4/projects/10849/terraform/state/onboard-cloud-account-into-monitoring-master/lock
$ echo TF_HTTP_UNLOCK_ADDRESS=${TF_UNLOCK}
TF_HTTP_UNLOCK_ADDRESS=https://my-gitlab.io/api/v4/projects/10849/terraform/state/onboard-cloud-account-into-monitoring-master/lock
$ echo TF_ADDRESS=${TF_ADDRESS}
TF_ADDRESS=https://my-gitlab.io/api/v4/projects/10849/terraform/state/onboard-cloud-account-into-monitoring-master
$ export TF_HTTP_ADDRESS=${TF_ADDRESS}
$ export TF_HTTP_LOCK_ADDRESS=${TF_LOCK}
$ export TF_HTTP_UNLOCK_ADDRESS=${TF_UNLOCK}
$ export TF_HTTP_LOCK_METHOD="POST"
$ export TF_HTTP_UNLOCK_METHOD="DELETE"
$ export TF_HTTP_RETRY_WAIT_MIN='5'
$ export TF_HTTP_USERNAME='JOHN.DOE'
$ export TF_HTTP_PASSWORD=${onbaord_cloud_account_into_monitoring}
$ export TF_VAR_ACCOUNT=${TF_ACCOUNT}
$ export TF_VAR_ORG_ACCOUNT=${ORG_ACCOUNT}
$ export AWS_DEFAULT_REGION='us-east-1'
$ terraform init
Initializing modules...
Downloading git::https://my-gitlab.io/team-cloud-platform-team/teamcloudv2/terraform/lambda-modules.git?ref=dev for lambda...
- lambda in .terraform/modules/lambda
Downloading git::https://my-gitlab.io/team-cloud-platform-team/teamcloudv2/terraform/iam-modules/lambda-iam.git?ref=dev for lambda_iam...
- lambda_iam in .terraform/modules/lambda_iam
Initializing the backend...
Successfully configured the backend "http"! Terraform will automatically
use this backend unless the backend configuration changes.
Error refreshing state: HTTP remote state endpoint requires auth
Cleaning up file based variables
00:00
ERROR: Job failed: command terminated with exit code 1
A peek at my gitlab-ci.yml:
image:
  name: my-gitlab.io:5005/team-cloud-platform-team/terraform-modules/root-module-deployment/python-terraform14
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

stages:
  - build
  - plan
  - apply

variables:
  PLAN: plan.tfplan
  JSON_PLAN_FILE: tfplan.json
  WORKSPACE: "dev"
  TF_IN_AUTOMATION: "true"
  ZIP_FILE: "lambda_package.zip"
  STATE_FILE: ${CI_PROJECT_NAME}-${CI_COMMIT_BRANCH}
  TF_ACCOUNT: ""
  ORG_ACCOUNT: ""
  TF_ADDRESS: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${CI_PROJECT_NAME}-${CI_COMMIT_BRANCH}
  TF_LOCK: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${CI_PROJECT_NAME}-${CI_COMMIT_BRANCH}/lock
  TF_UNLOCK: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${CI_PROJECT_NAME}-${CI_COMMIT_BRANCH}/lock

cache:
  key: "$CI_COMMIT_SHA"
  paths:
    - .terraform

before_script:
  - git config --global credential.helper cache
  - git fetch
  - if [ "$CI_COMMIT_REF_NAME" == "master" ]; then TF_ACCOUNT="REDACTED"; fi
  - if [ "$CI_COMMIT_REF_NAME" == "dev" ]; then TF_ACCOUNT="REDACTED"; fi
  - if [ "$CI_COMMIT_REF_NAME" == "master" ]; then ORG_ACCOUNT="REDACTED"; fi
  - if [ "$CI_COMMIT_REF_NAME" == "dev" ]; then ORG_ACCOUNT="REDACTED"; fi
  - echo ${TF_ACCOUNT}
  - echo ${TF_ADDRESS}
  - echo TF_HTTP_ADDRESS=${TF_ADDRESS}
  - echo TF_HTTP_LOCK_ADDRESS=${TF_LOCK}
  - echo TF_HTTP_UNLOCK_ADDRESS=${TF_UNLOCK}
  - echo TF_ADDRESS=${TF_ADDRESS}
  - export TF_HTTP_ADDRESS=${TF_ADDRESS}
  - export TF_HTTP_LOCK_ADDRESS=${TF_LOCK}
  - export TF_HTTP_UNLOCK_ADDRESS=${TF_UNLOCK}
  - export TF_HTTP_LOCK_METHOD="POST"
  - export TF_HTTP_UNLOCK_METHOD="DELETE"
  - export TF_HTTP_RETRY_WAIT_MIN='5'
  - export TF_HTTP_USERNAME='MATTHEW.FETHEROLF'
  - export TF_HTTP_PASSWORD=${onbaord_cloud_account_into_monitoring}
  - export TF_VAR_ACCOUNT=${TF_ACCOUNT}
  - export TF_VAR_ORG_ACCOUNT=${ORG_ACCOUNT}
  - export AWS_DEFAULT_REGION='us-east-1'
  - terraform init

# -----BUILD-----
# 1 build job per lambda function
buildLambda:
  stage: build
  tags:
    - cluster
  script:
    - echo "Beginning Build"
    - cd lambda_code/
    - echo "Switched into Lambda dir"
    - pip3 install -r requirements.txt --target .
    - echo "Requirements installed"
    - echo $ZIP_FILE
    - zip -r $ZIP_FILE *
    - echo "Zip file packaged up"
  artifacts:
    paths:
      - lambda_code/
  only:
    - dev
    - master
    - merge_requests

# -----PLAN-----
planMerge:
  stage: plan
  tags:
    - cluster
  script:
    - terraform plan
  dependencies:
    - buildLambda
  only:
    - merge_requests

planCommit:
  stage: plan
  tags:
    - cluster
  script:
    - terraform plan
  dependencies:
    - buildLambda

# -----APPLY-----
# dev branch will auto deploy
applyDev:
  stage: apply
  tags:
    - cluster
  script:
    - echo "Running Terraform Apply"
    - terraform apply -auto-approve
  dependencies:
    - buildLambda
  only:
    - dev
  #when: manual

# prod branch will require manual deployment approval
applyProd:
  stage: apply
  tags:
    - cluster
  script:
    - echo "Running Terraform Apply"
    - terraform apply -auto-approve
  when: manual
  dependencies:
    - buildLambda
  only:
    - master
backend.tf:
terraform {
  backend "http" {
  }
}
main.tf:
locals {
  common_tags = {
    SVC_ACCOUNT_ID         = "REDACTED",
    CLOUD_PLATFORM_PROD_ID = "REDACTED"
  }
}

module "lambda" {
  source                 = "git::https://my-gitlab.io/team-cloud-platform-team/teamcloudv2/terraform/lambda-modules.git?ref=dev"
  lambda_name            = var.name
  lambda_role            = "arn:aws:iam::${var.ACCOUNT}:role/${var.lambda_role}"
  lambda_handler         = var.handler
  lambda_runtime         = var.runtime
  default_lambda_timeout = var.timeout
  env                    = local.common_tags
  ACCOUNT                = var.ACCOUNT
}

module "lambda_iam" {
  source        = "git::https://my-gitlab.io/team-cloud-platform-team/teamcloudv2/terraform/iam-modules/lambda-iam.git?ref=dev"
  lambda_policy = var.lambda_policy
  ACCOUNT       = var.ACCOUNT
  lambda_role   = var.lambda_role
}
inputs.tf:
variable "handler" {
type = string
default = "handler.lambda_handler"
}
variable "runtime" {
type = string
default = "python3.8"
}
variable "name" {
type = string
default = "onboard-cloud-account-into-monitoring"
}
variable "timeout"{
type = string
default = "120"
}
variable "lambda_role" {
type = string
default = "cloud-platform-onboard-to-monitoring"
}
variable "ACCOUNT" {
type = string
}
variable "ORG_ACCOUNT" {
type = string
}
variable "lambda_policy" {
default = "{\"Version\": \"2012-10-17\",\"Statement\": [{\"Sid\": \"VisualEditor0\",\"Effect\": \"Allow\",\"Action\": [\"logs:CreateLogStream\",\"logs:CreateLogGroup\"],\"Resource\": \"*\"},{\"Sid\": \"VisualEditor1\",\"Effect\": \"Allow\",\"Action\": \"logs:PutLogEvents\",\"Resource\": \"*\"},{\"Sid\": \"VisualEditor2\",\"Effect\": \"Allow\",\"Action\": \"sts:AssumeRole\",\"Resource\": \"*\"},{\"Sid\": \"VisualEditor3\",\"Effect\": \"Allow\",\"Action\": \"secretsmanager:GetSecretValue\",\"Resource\": \"*\"}]}"
}
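(Side note: I know the escaped one-line policy string above is hard to read; an equivalent sketch using jsonencode, assuming the policy could move from a variable default into a local, would be:)

locals {
  lambda_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      { Sid = "VisualEditor0", Effect = "Allow", Action = ["logs:CreateLogStream", "logs:CreateLogGroup"], Resource = "*" },
      { Sid = "VisualEditor1", Effect = "Allow", Action = "logs:PutLogEvents", Resource = "*" },
      { Sid = "VisualEditor2", Effect = "Allow", Action = "sts:AssumeRole", Resource = "*" },
      { Sid = "VisualEditor3", Effect = "Allow", Action = "secretsmanager:GetSecretValue", Resource = "*" },
    ]
  })
}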
provider.tf:
provider "aws" {
region = "us-east-1"
assume_role {
role_arn ="arn:aws:iam::${var.ACCOUNT}:role/team-platform-onboard-to-monitoring"
}
default_tags {
tags = {
owner = "REDACTED#REDACTED.com"
altowner = "REDACTED#REDACTED.com"
blc = "REDACTED"
costcenter = "REDACTED"
itemid = "PlatformAutomation"
}
}
}
Perhaps I also need to share the terraform modules being invoked?
We found the problem:
The pipeline referenced a protected environment variable, but the master branch was not a protected branch. The solution was to unprotect the environment variable or protect the branch; that instantly solved the problem.
