Why is my TF build failing with "Error refreshing state: HTTP remote state endpoint requires auth"? (GitLab)

My pipeline builds, plans, and applies just fine for my dev branch, but when I push to my master branch I get "Error refreshing state: HTTP remote state endpoint requires auth". See the pipeline logs below (and, since someone will ask: the token "$onbaord_cloud_account_into_monitoring" has complete read/write access to the scoped project API).
Running with gitlab-runner 13.4.1 (REDACTED)
on runner-gitlab-runner-REDACTED-REDACTED REDACTED
Resolving secrets
00:00
Preparing the "kubernetes" executor
00:00
Using Kubernetes namespace: gitlab-managed-apps
Using Kubernetes executor with image my-gitlab.io:5005/team-cloud-platform-team/terraform-modules/root-module-deployment/python-terraform14 ...
Preparing environment
00:03
Waiting for pod gitlab-managed-apps/runner-REDACTED-project-REDACTED-concurrent-REDACTED to be running, status is Pending
Running on runner-REDACTED-project-REDACTED-concurrent-REDACTED via runner-gitlab-runner-REDACTED-REDACTED...
Getting source from Git repository
00:02
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/team-cloud-platform-team/teamcloudv2/workers/onboard-cloud-account-into-monitoring/.git/
Created fresh repository.
Checking out REDACTED as master...
Skipping Git submodules setup
Restoring cache
00:00
Checking cache for REDACTEDREDACTEDREDACTEDREDACTED...
No URL provided, cache will not be downloaded from shared cache server. Instead a local version of cache will be extracted.
Successfully extracted cache
Executing "step_script" stage of the job script
00:05
$ git config --global credential.helper cache
$ git fetch
From https://my-gitlab.io/team-cloud-platform-team/teamcloudv2/workers/onboard-cloud-account-into-monitoring
* [new branch] dev -> origin/dev
$ if [ "$CI_COMMIT_REF_NAME" == "master" ]; then TF_ACCOUNT="REDACTED";fi
$ if [ "$CI_COMMIT_REF_NAME" == "dev" ]; then TF_ACCOUNT="REDACTED";fi
$ if [ "$CI_COMMIT_REF_NAME" == "master" ]; then ORG_ACCOUNT="REDACTED";fi
$ if [ "$CI_COMMIT_REF_NAME" == "dev" ]; then ORG_ACCOUNT="REDACTED";fi
$ echo ${TF_ACCOUNT}
REDACTED
$ echo ${TF_ADDRESS}
https://my-gitlab.io/api/v4/projects/10849/terraform/state/onboard-cloud-account-into-monitoring-master
$ echo TF_HTTP_ADDRESS=${TF_ADDRESS}
TF_HTTP_ADDRESS=https://my-gitlab.io/api/v4/projects/10849/terraform/state/onboard-cloud-account-into-monitoring-master
$ echo TF_HTTP_LOCK_ADDRESS=${TF_LOCK}
TF_HTTP_LOCK_ADDRESS=https://my-gitlab.io/api/v4/projects/10849/terraform/state/onboard-cloud-account-into-monitoring-master/lock
$ echo TF_HTTP_UNLOCK_ADDRESS=${TF_UNLOCK}
TF_HTTP_UNLOCK_ADDRESS=https://my-gitlab.io/api/v4/projects/10849/terraform/state/onboard-cloud-account-into-monitoring-master/lock
$ echo TF_ADDRESS=${TF_ADDRESS}
TF_ADDRESS=https://my-gitlab.io/api/v4/projects/10849/terraform/state/onboard-cloud-account-into-monitoring-master
$ export TF_HTTP_ADDRESS=${TF_ADDRESS}
$ export TF_HTTP_LOCK_ADDRESS=${TF_LOCK}
$ export TF_HTTP_UNLOCK_ADDRESS=${TF_UNLOCK}
$ export TF_HTTP_LOCK_METHOD="POST"
$ export TF_HTTP_UNLOCK_METHOD="DELETE"
$ export TF_HTTP_RETRY_WAIT_MIN='5'
$ export TF_HTTP_USERNAME='JOHN.DOE'
$ export TF_HTTP_PASSWORD=${onbaord_cloud_account_into_monitoring}
$ export TF_VAR_ACCOUNT=${TF_ACCOUNT}
$ export TF_VAR_ORG_ACCOUNT=${ORG_ACCOUNT}
$ export AWS_DEFAULT_REGION='us-east-1'
$ terraform init
Initializing modules...
Downloading git::https://my-gitlab.io/team-cloud-platform-team/teamcloudv2/terraform/lambda-modules.git?ref=dev for lambda...
- lambda in .terraform/modules/lambda
Downloading git::https://my-gitlab.io/team-cloud-platform-team/teamcloudv2/terraform/iam-modules/lambda-iam.git?ref=dev for lambda_iam...
- lambda_iam in .terraform/modules/lambda_iam
Initializing the backend...
Successfully configured the backend "http"! Terraform will automatically
use this backend unless the backend configuration changes.
Error refreshing state: HTTP remote state endpoint requires auth
Cleaning up file based variables
00:00
ERROR: Job failed: command terminated with exit code 1
A peek at my gitlab-ci.yml:
image:
  name: my-gitlab.io:5005/team-cloud-platform-team/terraform-modules/root-module-deployment/python-terraform14
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

stages:
  - build
  - plan
  - apply

variables:
  PLAN: plan.tfplan
  JSON_PLAN_FILE: tfplan.json
  WORKSPACE: "dev"
  TF_IN_AUTOMATION: "true"
  ZIP_FILE: "lambda_package.zip"
  STATE_FILE: ${CI_PROJECT_NAME}-${CI_COMMIT_BRANCH}
  TF_ACCOUNT: ""
  ORG_ACCOUNT: ""
  TF_ADDRESS: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${CI_PROJECT_NAME}-${CI_COMMIT_BRANCH}
  TF_LOCK: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${CI_PROJECT_NAME}-${CI_COMMIT_BRANCH}/lock
  TF_UNLOCK: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${CI_PROJECT_NAME}-${CI_COMMIT_BRANCH}/lock

cache:
  key: "$CI_COMMIT_SHA"
  paths:
    - .terraform
before_script:
  - git config --global credential.helper cache
  - git fetch
  - if [ "$CI_COMMIT_REF_NAME" == "master" ]; then TF_ACCOUNT="REDACTED";fi
  - if [ "$CI_COMMIT_REF_NAME" == "dev" ]; then TF_ACCOUNT="REDACTED";fi
  - if [ "$CI_COMMIT_REF_NAME" == "master" ]; then ORG_ACCOUNT="REDACTED";fi
  - if [ "$CI_COMMIT_REF_NAME" == "dev" ]; then ORG_ACCOUNT="REDACTED";fi
  - echo ${TF_ACCOUNT}
  - echo ${TF_ADDRESS}
  - echo TF_HTTP_ADDRESS=${TF_ADDRESS}
  - echo TF_HTTP_LOCK_ADDRESS=${TF_LOCK}
  - echo TF_HTTP_UNLOCK_ADDRESS=${TF_UNLOCK}
  - echo TF_ADDRESS=${TF_ADDRESS}
  - export TF_HTTP_ADDRESS=${TF_ADDRESS}
  - export TF_HTTP_LOCK_ADDRESS=${TF_LOCK}
  - export TF_HTTP_UNLOCK_ADDRESS=${TF_UNLOCK}
  - export TF_HTTP_LOCK_METHOD="POST"
  - export TF_HTTP_UNLOCK_METHOD="DELETE"
  - export TF_HTTP_RETRY_WAIT_MIN='5'
  - export TF_HTTP_USERNAME='MATTHEW.FETHEROLF'
  - export TF_HTTP_PASSWORD=${onbaord_cloud_account_into_monitoring}
  - export TF_VAR_ACCOUNT=${TF_ACCOUNT}
  - export TF_VAR_ORG_ACCOUNT=${ORG_ACCOUNT}
  - export AWS_DEFAULT_REGION='us-east-1'
  - terraform init
# -----BUILD-----
# 1 build job per lambda function
buildLambda:
  stage: build
  tags:
    - cluster
  script:
    - echo "Beginning Build"
    - cd lambda_code/
    - echo "Switched into Lambda dir"
    - pip3 install -r requirements.txt --target .
    - echo "Requirements installed"
    - echo $ZIP_FILE
    - zip -r $ZIP_FILE *
    - echo "Zip file packaged up"
  artifacts:
    paths:
      - lambda_code/
  only:
    - dev
    - master
    - merge_requests

# -----PLAN-----
planMerge:
  stage: plan
  tags:
    - cluster
  script:
    - terraform plan
  dependencies:
    - buildLambda
  only:
    - merge_requests

planCommit:
  stage: plan
  tags:
    - cluster
  script:
    - terraform plan
  dependencies:
    - buildLambda

# -----APPLY-----
# dev branch will auto deploy
applyDev:
  stage: apply
  tags:
    - cluster
  script:
    - echo "Running Terraform Apply"
    - terraform apply -auto-approve
  dependencies:
    - buildLambda
  only:
    - dev
  #when: manual

# prod branch will require manual deployment approval
applyProd:
  stage: apply
  tags:
    - cluster
  script:
    - echo "Running Terraform Apply"
    - terraform apply -auto-approve
  when: manual
  dependencies:
    - buildLambda
  only:
    - master
backend.tf:
terraform {
  backend "http" {
  }
}
main.tf:
locals {
  common_tags = {
    SVC_ACCOUNT_ID         = "REDACTED",
    CLOUD_PLATFORM_PROD_ID = "REDACTED"
  }
}

module "lambda" {
  source                 = "git::https://my-gitlab.io/team-cloud-platform-team/teamcloudv2/terraform/lambda-modules.git?ref=dev"
  lambda_name            = var.name
  lambda_role            = "arn:aws:iam::${var.ACCOUNT}:role/${var.lambda_role}"
  lambda_handler         = var.handler
  lambda_runtime         = var.runtime
  default_lambda_timeout = var.timeout
  env                    = local.common_tags
  ACCOUNT                = var.ACCOUNT
}

module "lambda_iam" {
  source        = "git::https://my-gitlab.io/team-cloud-platform-team/teamcloudv2/terraform/iam-modules/lambda-iam.git?ref=dev"
  lambda_policy = var.lambda_policy
  ACCOUNT       = var.ACCOUNT
  lambda_role   = var.lambda_role
}
inputs.tf:
variable "handler" {
type = string
default = "handler.lambda_handler"
}
variable "runtime" {
type = string
default = "python3.8"
}
variable "name" {
type = string
default = "onboard-cloud-account-into-monitoring"
}
variable "timeout"{
type = string
default = "120"
}
variable "lambda_role" {
type = string
default = "cloud-platform-onboard-to-monitoring"
}
variable "ACCOUNT" {
type = string
}
variable "ORG_ACCOUNT" {
type = string
}
variable "lambda_policy" {
default = "{\"Version\": \"2012-10-17\",\"Statement\": [{\"Sid\": \"VisualEditor0\",\"Effect\": \"Allow\",\"Action\": [\"logs:CreateLogStream\",\"logs:CreateLogGroup\"],\"Resource\": \"*\"},{\"Sid\": \"VisualEditor1\",\"Effect\": \"Allow\",\"Action\": \"logs:PutLogEvents\",\"Resource\": \"*\"},{\"Sid\": \"VisualEditor2\",\"Effect\": \"Allow\",\"Action\": \"sts:AssumeRole\",\"Resource\": \"*\"},{\"Sid\": \"VisualEditor3\",\"Effect\": \"Allow\",\"Action\": \"secretsmanager:GetSecretValue\",\"Resource\": \"*\"}]}"
}
provider.tf:
provider "aws" {
region = "us-east-1"
assume_role {
role_arn ="arn:aws:iam::${var.ACCOUNT}:role/team-platform-onboard-to-monitoring"
}
default_tags {
tags = {
owner = "REDACTED#REDACTED.com"
altowner = "REDACTED#REDACTED.com"
blc = "REDACTED"
costcenter = "REDACTED"
itemid = "PlatformAutomation"
}
}
}
Perhaps I also need to share the terraform modules being invoked?

We found the problem:
The pipeline referenced a protected CI/CD variable, but the master branch was not a protected branch, so GitLab never exposed the variable to jobs on master and TF_HTTP_PASSWORD ended up empty. The solution was to either unprotect the variable or protect the branch; that instantly solved the problem.
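If you hit the same symptom, a cheap guard is to fail fast before terraform init whenever the token variable came through empty; a minimal sketch for the before_script (using the same variable name as above, adapt it to yours):
before_script:
  - |
    # fail early with a readable message instead of an auth error at init time
    if [ -z "${onbaord_cloud_account_into_monitoring}" ]; then
      echo "State token variable is empty - protected variable on an unprotected branch?" >&2
      exit 1
    fi
GitLab's documentation also shows authenticating its managed Terraform state backend with the built-in job token (TF_HTTP_USERNAME=gitlab-ci-token, TF_HTTP_PASSWORD=$CI_JOB_TOKEN), which sidesteps the protected-variable pitfall entirely.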

Related

Everything runs fine, but at the end the following error comes up

Code:
#!/usr/bin/env bash
# Exit immediately if command returns bad exit code
set -e

ENABLED_SLACK_REFS=(develop qa main prod)

export CI_COMMIT_AUTHOR=$(git log --format="%an" -n1)
export CI_COMMIT_MESSAGE=$(git log --format="%B" -n1)
export CI_COMMIT_URL="${CI_PROJECT_URL}/commit/${CI_COMMIT_SHA}"

main() {
  if fn_exists $1; then
    ${1}
  else
    echo "Function $1 doesn't exist" && exit 1
  fi
}

#------------------------------------------
# Job Tasks
#------------------------------------------
function build_docker_for_pipeline() {
  # Pull latest version of image for branch / tag, for caching purposes (speed up the build)
  # docker pull ${CI_REGISTRY_IMAGE}:${CI_BRANCH_SLUG} || true
  # Build and tag with this pipeline ID so we can use it later, in test & release & deploy
  docker build --build-arg CI=${CI} -t ${CI_REGISTRY_IMAGE}:${CI_PIPELINE_ID} .
  # docker push ${CI_REGISTRY_IMAGE}
}
function lint() {
  # docker pull ${CI_REGISTRY_IMAGE}:${CI_PIPELINE_ID}
  docker run --rm -e CI=${CI} ${CI_REGISTRY_IMAGE}:${CI_PIPELINE_ID} lint
}

function test() {
  # Pull image built within this pipeline previously
  docker pull ${CI_REGISTRY_IMAGE}:${CI_PIPELINE_ID}
  # Run npm test, via entrypoint.sh test action and generate coverage
  mkdir coverage && docker run --rm -e CI=${CI} \
    --mount type=bind,source="$(pwd)"/coverage,target=/app/coverage \
    ${CI_REGISTRY_IMAGE}:${CI_PIPELINE_ID} test --coverage
}

function security() {
  # Pull image built within this pipeline previously
  docker pull ${CI_REGISTRY_IMAGE}:${CI_PIPELINE_ID}
  # Audit our dependencies for security vulnerabilities
  docker run --rm -e CI=${CI} ${CI_REGISTRY_IMAGE}:${CI_PIPELINE_ID} yarn audit
}

function release_docker_for_branch() {
  # Pull image built in this pipeline
  docker pull ${CI_REGISTRY_IMAGE}:${CI_PIPELINE_ID}
  # Tag as latest image for branch / tag
  docker tag ${CI_REGISTRY_IMAGE}:${CI_PIPELINE_ID} ${CI_REGISTRY_IMAGE}:${CI_BRANCH_SLUG}
  # Push to GitLab Container Registry
  docker push ${CI_REGISTRY_IMAGE}
}
function deploy() {
  # Run npm build, setting REACT_APP_* env variables
  export REACT_APP_ENVIRONMENT=${CI_ENVIRONMENT_NAME:-dev}
  export REACT_APP_SENTRY_VERSION="${CI_PROJECT_NAME}@$(jq -r '.version' package.json)-${CI_JOB_ID}"
  echo $REACT_APP_SENTRY_VERSION
  echo "Building app via Docker, injecting following environment variables:"
  get_react_env_vars_raw_names
  docker run --rm -v source=/home/halo-solutions-ltd/actions-runner/_work/halo-web/halo-web/,target=/app \
    $(get_react_env_vars_for_docker) \
    ${CI_REGISTRY_IMAGE}:${CI_PIPELINE_ID} build
  # create_sentry_release
  # Set up required environment variables for deployment
  export_deploy_ci_variables
  aws configure set default.s3.max_concurrent_requests 20
  # Sync build output with what is currently deployed
  # This also deletes files in bucket that aren't present in build output (cleans old releases)
  # Set a blanket cache policy to cache for up to 12hrs - used for versioned static assets
  aws s3 sync build s3://${CI_DEPLOY_BUCKET} --delete --cache-control "public,max-age=43200" --exclude "*.map"
  # Disable caching for specific files. In particular, ones that reference versioned assets
  # This means we can cache bust our versioned assets!
  export CACHE_DISABLED_PARAMS="--metadata-directive REPLACE --cache-control max-age=0,no-cache,no-store,must-revalidate --acl public-read"
  aws s3 cp s3://${CI_DEPLOY_BUCKET}/service-worker.js s3://${CI_DEPLOY_BUCKET}/service-worker.js --content-type application/javascript ${CACHE_DISABLED_PARAMS}
  aws s3 cp s3://${CI_DEPLOY_BUCKET}/index.html s3://${CI_DEPLOY_BUCKET}/index.html --content-type text/html ${CACHE_DISABLED_PARAMS}
  aws s3 cp s3://${CI_DEPLOY_BUCKET}/manifest.json s3://${CI_DEPLOY_BUCKET}/manifest.json --content-type application/json ${CACHE_DISABLED_PARAMS}
  aws s3 cp s3://${CI_DEPLOY_BUCKET}/asset-manifest.json s3://${CI_DEPLOY_BUCKET}/asset-manifest.json --content-type application/json ${CACHE_DISABLED_PARAMS}
  # finalize_sentry_release
}
function deploy_qa_if_updated() {
  last_qa_release=$(AWS_DEFAULT_REGION=eu-west-1 AWS_ACCESS_KEY_ID=AKIAJ6355LQD3QU4IK3A AWS_SECRET_ACCESS_KEY=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 \
    aws dynamodb get-item --table-name gitlab-react-qa-releases --key "{\"ProjectId\": {\"N\": \"${CI_PROJECT_ID}\"}}" \
    --output text --query "Item.[LastReleaseSha.S,LastReleaseTimestamp.N]"
  )
  echo "Last QA Release: ${last_qa_release}"
  # If first QA release
  if [ "${last_qa_release}" = "None" ]; then
    deploy && update_last_qa_release && deploy_success_slack
    return 0
  fi
  last_qa_release_sha=$(echo "${last_qa_release}" | awk '{print $1}')
  last_qa_release_timestamp=$(echo "${last_qa_release}" | awk '{print $2}')
  # If current commit is same as last release, or last release isn't in history (not sure how)
  if [ "${CI_COMMIT_SHA}" = "${last_qa_release_sha}" ] || ! git merge-base --is-ancestor ${last_qa_release_sha} HEAD; then
    no_qa_updates_slack ${last_qa_release_sha} ${last_qa_release_timestamp}
    return 0
  fi
  # If last release occurred previously in branch
  deploy && update_last_qa_release && deploy_success_slack
  return 0
}

function update_last_qa_release() {
  AWS_DEFAULT_REGION=eu-west-1 AWS_ACCESS_KEY_ID=AKIAJ6355LQD3QU4IK3A AWS_SECRET_ACCESS_KEY=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 \
  aws dynamodb put-item --table-name gitlab-react-qa-releases --item \
    "{\
      \"ProjectId\": {\"N\": \"${CI_PROJECT_ID}\"}, \
      \"LastReleaseSha\": {\"S\": \"${CI_COMMIT_SHA}\"}, \
      \"LastReleaseTimestamp\": {\"N\": \"$(date +%s)\"} \
    }"
}
#------------------------------------------
# Slack functions
#------------------------------------------
function slack_enabled_for_ref() {
  local ref
  ref=${CI_COMMIT_REF_NAME:-none}
  echo "${ENABLED_SLACK_REFS[*]}" | grep -F -q -w "$ref";
}

function deploy_success_slack() {
  if [ ! -z ${CI_SLACK_WEBHOOK_URL+x} ] && slack_enabled_for_ref; then
    local text attachments
    export_deploy_ci_variables
    text="Successful deployment of <${CI_PIPELINE_URL}|${CI_PROJECT_NAME}> to <${CF_URL}|${CI_ENVIRONMENT_NAME}>"
    attachments="{\"fallback\":\"${text}\",\"color\":\"good\", \"text\": \"${text}\",\
      \"fields\": [\
      {\"title\": \"Git Commit\", \"value\": \"${CI_COMMIT_MESSAGE}\"},\
      {\"title\": \"Git Author\", \"value\": \"${CI_COMMIT_AUTHOR}\"}\
      ],\
      \"actions\": [\
      {\"type\": \"button\", \"text\": \"View Pipeline\", \"style\": \"primary\", \"url\": \"${CI_PIPELINE_URL}\"}\
      ]\
      }"
    curl -s -X POST --data-urlencode \
      "payload={\"channel\": \"${CI_SLACK_CHANNEL}\", \"username\": \"React GitLab CICD\",\
      \"attachments\": [${attachments}], \"icon_emoji\": \":reactjs:\" }" \
      "${CI_SLACK_WEBHOOK_URL}"
  fi
}
function deploy_failure_slack() {
  if [ ! -z ${CI_SLACK_WEBHOOK_URL+x} ] && slack_enabled_for_ref; then
    local text attachments
    text="Failed to deploy <${CI_PIPELINE_URL}|${CI_PROJECT_NAME}> to ${CI_ENVIRONMENT_NAME}"
    attachments="{\"fallback\":\"${text}\",\"color\":\"danger\", \"text\": \"${text}\",\
      \"fields\": [\
      {\"title\": \"Git Commit\", \"value\": \"${CI_COMMIT_MESSAGE}\"},\
      {\"title\": \"Git Author\", \"value\": \"${CI_COMMIT_AUTHOR}\"}\
      ],\
      \"actions\": [\
      {\"type\": \"button\", \"text\": \"View Pipeline\", \"style\": \"primary\", \"url\": \"${CI_PIPELINE_URL}\"}\
      ]\
      }"
    curl -s -X POST --data-urlencode \
      "payload={\"channel\": \"${CI_SLACK_CHANNEL}\", \"username\": \"React GitLab CICD\",\
      \"attachments\": [${attachments}], \"icon_emoji\": \":reactjs:\" }" \
      "${CI_SLACK_WEBHOOK_URL}"
  fi
}

function job_failure_slack() {
  if [ ! -z ${CI_SLACK_WEBHOOK_URL+x} ] && slack_enabled_for_ref; then
    local text attachments
    text="The ${CI_JOB_NAME} job failed in the <${CI_PIPELINE_URL}|${CI_PROJECT_NAME}> pipeline."
    attachments="{\"fallback\":\"${text}\",\"color\":\"danger\", \"text\": \"${text}\",\
      \"fields\": [\
      {\"title\": \"Git Author\", \"value\": \"${CI_COMMIT_AUTHOR}\", \"short\": true},\
      {\"title\": \"Git Branch\", \"value\": \"${CI_COMMIT_REF_NAME}\", \"short\": true}\
      ],\
      \"actions\": [\
      {\"type\": \"button\", \"text\": \"View Pipeline\", \"style\": \"primary\", \"url\": \"${CI_PIPELINE_URL}\"}\
      ]\
      }"
    curl -s -X POST --data-urlencode \
      "payload={\"channel\": \"${CI_SLACK_CHANNEL}\", \"username\": \"React GitLab CICD\",\
      \"attachments\": [${attachments}], \"icon_emoji\": \":reactjs:\" }" \
      "${CI_SLACK_WEBHOOK_URL}"
  fi
}

function no_qa_updates_slack() {
  # $1 = Last Release Sha, $2 = Last Release Timestamp
  if [ ! -z ${CI_SLACK_WEBHOOK_URL+x} ] && slack_enabled_for_ref; then
    local text attachments
    text="No unreleased changes found to make a scheduled QA release. The last release was at $(date --date="@$2" "+%a %d %b %T UTC")"
    attachments="{\"fallback\":\"${text}\",\"color\":\"good\", \"text\": \"${text}\",\
      \"fields\": [\
      {\"title\": \"Last Commit Message\", \"value\": \"$(git show -s --format=%B $1)\"},\
      {\"title\": \"Last Commit Hash\", \"value\": \"<${CI_PROJECT_URL}/commit/$1|$(git rev-parse --short $1)>\"}\
      ]\
      }"
    curl -s -X POST --data-urlencode \
      "payload={\"channel\": \"${CI_SLACK_CHANNEL}\", \"username\": \"React GitLab CICD\",\
      \"attachments\": [${attachments}], \"icon_emoji\": \":reactjs:\" }" \
      "${CI_SLACK_WEBHOOK_URL}"
  fi
}
#------------------------------------------
# utils
#------------------------------------------
# Create a Sentry release and upload source maps
function create_sentry_release() {
  export SENTRY_URL="${SENTRY_URL:-https://sentry.io}"
  export SENTRY_AUTH_TOKEN="${SENTRY_AUTH_TOKEN}"
  export SENTRY_ORG="${SENTRY_ORG:-halo-solutions-ltd}"
  export SENTRY_PROJECT="${SENTRY_PROJECT:-$CI_PROJECT_NAME}"
  export SENTRY_DISABLE_UPDATE_CHECK="true"
  export SENTRY_LOG_LEVEL="info"
  sentry-cli releases new $REACT_APP_SENTRY_VERSION
  #TODO: sentry-cli releases set-commits --auto $REACT_APP_SENTRY_VERSION
  sentry-cli releases files $REACT_APP_SENTRY_VERSION upload-sourcemaps -x .js -x .map --validate --rewrite --url-prefix '~/static/js/' ./build/static/js/
}

# Finalize a Sentry release (call once deployed)
function finalize_sentry_release() {
  sentry-cli releases finalize $REACT_APP_SENTRY_VERSION
}

# Get the REACT_APP env vars for this build.
# That is, env vars beginning with REACT_APP_ or <ENV>_REACT_APP_
function get_react_env_vars() {
  env | egrep "(^REACT_APP_)|(^${CI_ENVIRONMENT_NAME}_REACT_APP_)"
}

# Get the raw names of REACT_APP env vars we will be injecting into build
# That is, names of all env vars starting with REACT_APP_ or <ENV>_REACT_APP_
function get_react_env_vars_raw_names() {
  get_react_env_vars | egrep -oh "^(.*=)*" | cut -d '=' -f 1
}

# Get react env vars in format ready to pass to Docker as env variables
# all <ENV>_REACT_APP_ vars renamed to REACT_APP_ format
# Prepended with -e so it can be passed to Docker build as env vars
function get_react_env_vars_for_docker() {
  get_react_env_vars | sed "s/^${CI_ENVIRONMENT_NAME}_REACT_APP/REACT_APP/" \
    | sed "s/^/-e /"
}

# Export CI variables needed for deployment
function export_deploy_ci_variables() {
  export AWS_ACCESS_KEY_ID=$(A=${CI_ENVIRONMENT_NAME}_CI_DEPLOY_ACCESS_KEY_ID; echo ${!A})
  export AWS_SECRET_ACCESS_KEY=$(A=${CI_ENVIRONMENT_NAME}_CI_DEPLOY_SECRET_ACCESS_KEY; echo ${!A})
  export AWS_DEFAULT_REGION=$(A=${CI_ENVIRONMENT_NAME}_CI_DEPLOY_REGION; echo ${!A})
  export CI_DEPLOY_BUCKET=$(S3=${CI_ENVIRONMENT_NAME}_CI_DEPLOY_BUCKET; echo ${!S3})
  export CI_CF_DISTRIBUTION=$(CFD=${CI_ENVIRONMENT_NAME}_CI_CF_DISTRIBUTION; echo ${!CFD})
  export CF_URL=$(aws cloudfront get-distribution --id ${CI_CF_DISTRIBUTION} \
    --query "Distribution.[DomainName, DistributionConfig.Aliases.Items[0]]" \
    --output text | awk '{if($2 == "None"){print "http://"$1} else {print "http://"$2}}')
}

function fn_exists() {
  # the appended double quote is an ugly trick to make sure we get a string -- if $1 is not a known command, type does not output anything
  [ `type -t $1`"" == 'function' ]
}

main "$@"
Now here is the working GitHub workflow:
name: halo solutions web-admin deployment
# Modified for Halo post HHL handover 21/05/22 ML
# develop -> dev
# qa -> qa
# main -> uat
# prod -> prod/live
on:
  workflow_dispatch:
  push:
    branches:
      - feature/*
      - issue/*
      - develop
      - qa
      - main
      - prod
  pull_request:
    branches:
      - develop
      - qa
      - main
      - prod

env:
  PROJECT: halo-web
  CI_REGISTRY_IMAGE: halo-solutions-ltd/halo-web
  CI_ENVIRONMENT_NAME: DEV
  CI_SLACK_CHANNEL: ${{ secrets.CI_SLACK_CHANNEL }}
  CI_SLACK_WEBHOOK_URL: ${{ secrets.CI_SLACK_WEBHOOK_URL }}
  CI_JOB_ID: ${{ github.run_id }}
  CI_PIPELINE_ID: ${{ github.run_id }}

jobs:
  lint_test_build_release:
    runs-on: [ubuntu-latest]
    container:
      image: ghcr.io/halo-solutions-ltd/docker-aws:latest
    steps:
      - name: Checkout repo content
        uses: actions/checkout@v2
      - name: Build
        uses: ./.github/common/build
        with:
          command: "./ci build_docker_for_pipeline || { ./ci job_failure_slack; exit 1; }"
      # TODO: Need to implement a composite action so we can avoid repeating the code
      - name: Dev deploy
        if: github.ref == 'refs/heads/develop'
        run: |
          export CI_COMMIT_REF_NAME=$(echo ${GITHUB_REF#refs/heads/} | sed -e 's/[^A-Za-z0-9._-]/_/g')
          export CI_ENVIRONMENT_NAME=DEV
          export DEV_REACT_APP_ENV_NAME=${{ secrets.DEV_REACT_APP_ENV_NAME }}
          export DEV_REACT_APP_JS_KEY=${{ secrets.DEV_REACT_APP_JS_KEY }}
          export DEV_REACT_APP_KEY=${{ secrets.DEV_REACT_APP_KEY }}
          export DEV_REACT_APP_PARSE_SERVER=${{ secrets.DEV_REACT_APP_PARSE_SERVER }}
          export DEV_REACT_APP_SERVER=${{ secrets.DEV_REACT_APP_SERVER }}
          export DEV_CI_DEPLOY_ACCESS_KEY_ID=${{ secrets.DEV_CI_DEPLOY_ACCESS_KEY_ID }}
          export DEV_CI_DEPLOY_SECRET_ACCESS_KEY=${{ secrets.DEV_CI_DEPLOY_SECRET_ACCESS_KEY }}
          export DEV_CI_DEPLOY_REGION=${{ secrets.DEV_CI_DEPLOY_REGION }}
          export DEV_CI_DEPLOY_BUCKET=${{ secrets.DEV_CI_DEPLOY_BUCKET }}
          export DEV_CI_CF_DISTRIBUTION=${{ secrets.DEV_CI_CF_DISTRIBUTION }}
          ./ci deploy && ./ci deploy_success_slack || { ./ci deploy_failure_slack; exit 1; }
      - name: QA deploy
        if: github.ref == 'refs/heads/qa'
        run: |
          export CI_COMMIT_REF_NAME=$(echo ${GITHUB_REF#refs/heads/} | sed -e 's/[^A-Za-z0-9._-]/_/g')
          export CI_ENVIRONMENT_NAME=QA
          export QA_REACT_APP_ENV_NAME=${{ secrets.QA_REACT_APP_ENV_NAME }}
          export QA_REACT_APP_JS_KEY=${{ secrets.QA_REACT_APP_JS_KEY }}
          export QA_REACT_APP_KEY=${{ secrets.QA_REACT_APP_KEY }}
          export QA_REACT_APP_PARSE_SERVER=${{ secrets.QA_REACT_APP_PARSE_SERVER }}
          export QA_REACT_APP_SERVER=${{ secrets.QA_REACT_APP_SERVER }}
          export QA_CI_DEPLOY_ACCESS_KEY_ID=${{ secrets.QA_CI_DEPLOY_ACCESS_KEY_ID }}
          export QA_CI_DEPLOY_SECRET_ACCESS_KEY=${{ secrets.QA_CI_DEPLOY_SECRET_ACCESS_KEY }}
          export QA_CI_DEPLOY_REGION=${{ secrets.QA_CI_DEPLOY_REGION }}
          export QA_CI_DEPLOY_BUCKET=${{ secrets.QA_CI_DEPLOY_BUCKET }}
          export QA_CI_CF_DISTRIBUTION=${{ secrets.QA_CI_CF_DISTRIBUTION }}
          ./ci deploy && ./ci deploy_success_slack || { ./ci deploy_failure_slack; exit 1; }
      - name: UAT deploy
        if: github.ref == 'refs/heads/main'
        run: |
          export CI_COMMIT_REF_NAME=$(echo ${GITHUB_REF#refs/heads/} | sed -e 's/[^A-Za-z0-9._-]/_/g')
          export CI_ENVIRONMENT_NAME=UAT
          export UAT_REACT_APP_ENV_NAME=${{ secrets.UAT_REACT_APP_ENV_NAME }}
          export UAT_REACT_APP_JS_KEY=${{ secrets.UAT_REACT_APP_JS_KEY }}
          export UAT_REACT_APP_KEY=${{ secrets.UAT_REACT_APP_KEY }}
          export UAT_REACT_APP_PARSE_SERVER=${{ secrets.UAT_REACT_APP_PARSE_SERVER }}
          export UAT_REACT_APP_SERVER=${{ secrets.UAT_REACT_APP_SERVER }}
          export UAT_CI_DEPLOY_ACCESS_KEY_ID=${{ secrets.UAT_CI_DEPLOY_ACCESS_KEY_ID }}
          export UAT_CI_DEPLOY_SECRET_ACCESS_KEY=${{ secrets.UAT_CI_DEPLOY_SECRET_ACCESS_KEY }}
          export UAT_CI_DEPLOY_REGION=${{ secrets.UAT_CI_DEPLOY_REGION }}
          export UAT_CI_DEPLOY_BUCKET=${{ secrets.UAT_CI_DEPLOY_BUCKET }}
          export UAT_CI_CF_DISTRIBUTION=${{ secrets.UAT_CI_CF_DISTRIBUTION }}
          ./ci deploy && ./ci deploy_success_slack || { ./ci deploy_failure_slack; exit 1; }
      - name: LIVE deploy
        if: github.ref == 'refs/heads/prod'
        run: |
          export CI_COMMIT_REF_NAME=$(echo ${GITHUB_REF#refs/heads/} | sed -e 's/[^A-Za-z0-9._-]/_/g')
          export CI_ENVIRONMENT_NAME=LIVE
          export LIVE_REACT_APP_ENV_NAME=${{ secrets.LIVE_REACT_APP_ENV_NAME }}
          export LIVE_REACT_APP_KEY=${{ secrets.LIVE_REACT_APP_KEY }}
          export LIVE_REACT_APP_PARSE_SERVER=${{ secrets.LIVE_REACT_APP_PARSE_SERVER }}
          export LIVE_REACT_APP_SERVER=${{ secrets.LIVE_REACT_APP_SERVER }}
          export LIVE_CI_DEPLOY_ACCESS_KEY_ID=${{ secrets.LIVE_CI_DEPLOY_ACCESS_KEY_ID }}
          export LIVE_CI_DEPLOY_SECRET_ACCESS_KEY=${{ secrets.LIVE_CI_DEPLOY_SECRET_ACCESS_KEY }}
          export LIVE_CI_DEPLOY_REGION=${{ secrets.LIVE_CI_DEPLOY_REGION }}
          export LIVE_CI_DEPLOY_BUCKET=${{ secrets.LIVE_CI_DEPLOY_BUCKET }}
          export LIVE_CI_CF_DISTRIBUTION=${{ secrets.LIVE_CI_CF_DISTRIBUTION }}
          ./ci deploy && ./ci deploy_success_slack || { ./ci deploy_failure_slack; exit 1; }
These are the errors:
Done in 86.82s.
The user-provided path build does not exist.
fatal: unsafe repository ('/__w/halo-web/halo-web' is owned by someone else)
To add an exception for this directory, call:
git config --global --add safe.directory /__w/halo-web/halo-web
fatal: unsafe repository ('/__w/halo-web/halo-web' is owned by someone else)
To add an exception for this directory, call:
git config --global --add safe.directory /__w/halo-web/halo-web
Error: Process completed with exit code 3.
Everything else seems to run fine, but these errors keep appearing at the end and fail the job. Any suggestions on what is going wrong here?
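A hedged suggestion based on the log: newer Git releases refuse to touch a checkout owned by a different UID than the user running inside the container, and the message itself prints the fix. Marking the workspace safe before any git commands run should clear the two fatal lines; a minimal sketch (the path is copied from the log, adjust if your runner mounts the workspace elsewhere):
      - name: Mark workspace as safe for git
        run: git config --global --add safe.directory /__w/halo-web/halo-web
The "user-provided path build does not exist" line looks like a separate problem: the build output directory was apparently never created in that step's filesystem, so aws s3 sync has nothing to upload.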

Issue installing Terratest using the Go task in a YAML Azure pipeline - issue triggering Terratest tests in sub-folders

I'm facing this issue while installing Terratest via an Azure YAML pipeline:
C:\hostedtoolcache\windows\go\1.17.1\x64\bin\go.exe install -v github.com/gruntwork-io/terratest@v0.40.6
go: downloading github.com/gruntwork-io/terratest v0.40.6
go install: github.com/gruntwork-io/terratest@v0.40.6: module github.com/gruntwork-io/terratest@v0.40.6 found, but does not contain package github.com/gruntwork-io/terratest
##[error]The Go task failed with an error: Error: The process 'C:\hostedtoolcache\windows\go\1.17.1\x64\bin\go.exe' failed with exit code 1
Finishing: Install Go Terratest module - v0.40.6
My code for the installation is below:
- task: Go@0
  displayName: Install Go Terratest module - v$(TERRATEST_VERSION)
  inputs:
    command: custom
    customCommand: install
    arguments: $(TF_LOG) github.com/gruntwork-io/terratest@v$(TERRATEST_VERSION)
    workingDirectory: $(pipeline_artefact_folder_extract)/$(pathToTerraformRootModule)
But perhaps I made mistakes in my use of Terratest.
Below is a screenshot of my code tree:
I have Terraform code in (for example) the Terraform\azure_v2_X\ResourceModules sub-directory, and Terratest tests in the Terraform\azure_v2_X\Tests_Unit_ResourceModules subdirectories (in the screenshot, the app_configuration tests for the app_configuration resource modules).
In my Terratest module, I call my resource module as in the following code:
# test in an isolated Resource Group defined in locals
module "app_configuration_tobetested" {
  source              = "../../ResourceModules/app_configuration"
  resource_group_name = local.rg_name
  location            = local.location
  environment         = var.ENVIRONMENT
  sku                 = "standard"
  // rem: here the app_service_shared prefix and the app_config_shared prefix are the same!
  app_service_prefix = module.app_configuration_list_fortests.settings.frontEnd_prefix
  # stage = var.STAGE
  app_config_list = module.app_configuration_list_fortests.settings.list_app_config
}
And in my Go file, I test the module's results against the expected values:
package RM_app_configuration_Test

import (
    "os"
    "testing"

    // "github.com/gruntwork-io/terratest/modules/azure"
    "github.com/gruntwork-io/terratest/modules/terraform"
    "github.com/stretchr/testify/assert"
)

var (
    globalBackendConf = make(map[string]interface{})
    globalEnvVars     = make(map[string]string)
)

func TestTerraform_RM_app_configuration(t *testing.T) {
    t.Parallel()
    // Terraform directory
    fixtureFolder := "./"
    // backend specification
    strlocal := "RMapCfg_"
    // input values
    inputStage := "sbx_we"
    inputEnvironment := "SBX"
    inputApplication := "DEMO"
    // expected values
    expectedRsgName := "z-adf-ftnd-shrd-sbx-ew1-rgp01"
    // expectedAppCfgPrefix := "z-adf-ftnd-shrd"
    expectedAppConfigReader_ID := "[/subscriptions/f04c8fd5-d013-41c3-9102-43b25880d2e2/resourceGroups/z-adf-ftnd-shrd-sbx-ew1-rgp01/providers/Microsoft.AppConfiguration/configurationStores/z-adf-ftnd-shrd-sbx-ew1-blue-sbx-cfg01 /subscriptions/f04c8fd5-d013-41c3-9102-43b25880d2e2/resourceGroups/z-adf-ftnd-shrd-sbx-ew1-rgp01/providers/Microsoft.AppConfiguration/configurationStores/z-adf-ftnd-shrd-sbx-ew1-green-sbx-cfg01]"
    // getting env vars from environment variables
    /*
        Go and Terraform use two different methods for Azure authentication.
        ** Terraform authentication is explained below:
        - https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/service_principal_client_secret#configuring-the-service-principal-in-terraform
        ** Go authentication is explained below:
        - https://learn.microsoft.com/en-us/azure/developer/go/azure-sdk-authorization#use-environment-based-authentication
        ** Terratest uses both authentication methods depending on the work to be done:
        - Azure existence tests use Go Azure authentication:
          https://github.com/gruntwork-io/terratest/blob/master/modules/azure/authorizer.go#L11
        - Terraform commands use Terraform authentication:
          https://github.com/gruntwork-io/terratest/blob/0d654bd2ab781a52e495f61230cf892dfba9731b/modules/terraform/cmd.go#L12
          https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/guides/service_principal_client_secret#configuring-the-service-principal-in-terraform
        so both authentication methods have to be implemented
    */
    // getting Terraform env vars from the Azure Go environment variables
    ARM_CLIENT_ID := os.Getenv("AZURE_CLIENT_ID")
    ARM_CLIENT_SECRET := os.Getenv("AZURE_CLIENT_SECRET")
    ARM_TENANT_ID := os.Getenv("AZURE_TENANT_ID")
    ARM_SUBSCRIPTION_ID := os.Getenv("ARM_SUBSCRIPTION_ID")
    if ARM_CLIENT_ID != "" {
        globalEnvVars["ARM_CLIENT_ID"] = ARM_CLIENT_ID
        globalEnvVars["ARM_CLIENT_SECRET"] = ARM_CLIENT_SECRET
        globalEnvVars["ARM_SUBSCRIPTION_ID"] = ARM_SUBSCRIPTION_ID
        globalEnvVars["ARM_TENANT_ID"] = ARM_TENANT_ID
    }
    // getting the Terraform backend from environment variables
    resource_group_name := os.Getenv("resource_group_name")
    storage_account_name := os.Getenv("storage_account_name")
    container_name := os.Getenv("container_name")
    key := strlocal + os.Getenv("key")
    if resource_group_name != "" {
        globalBackendConf["resource_group_name"] = resource_group_name
        globalBackendConf["storage_account_name"] = storage_account_name
        globalBackendConf["container_name"] = container_name
        globalBackendConf["key"] = key
    }
    // Use Terratest to deploy the infrastructure
    terraformOptions := terraform.WithDefaultRetryableErrors(t, &terraform.Options{
        // website::tag::1::Set the path to the Terraform code that will be tested.
        // The path to where our Terraform code is located
        TerraformDir: fixtureFolder,
        // Variables to pass to our Terraform code using -var options
        Vars: map[string]interface{}{
            "STAGE":       inputStage,
            "ENVIRONMENT": inputEnvironment,
            "APPLICATION": inputApplication,
        },
        EnvVars: globalEnvVars,
        // backend values to set when initializing Terraform
        BackendConfig: globalBackendConf,
        // Disable colors in Terraform commands so it's easier to parse stdout/stderr
        NoColor: true,
    })
    // website::tag::4::Clean up resources with "terraform destroy". Using "defer" runs the command at the end of the test, whether the test succeeds or fails.
    // At the end of the test, run `terraform destroy` to clean up any resources that were created
    defer terraform.Destroy(t, terraformOptions)
    // website::tag::2::Run "terraform init" and "terraform apply".
    // This will run `terraform init` and `terraform apply` and fail the test if there are any errors
    terraform.InitAndApply(t, terraformOptions)
    // test the resource group for the app_configuration
    /*
        actualAppConfigReaderPrefix := terraform.Output(t, terraformOptions, "app_configuration_tested_prefix")
        assert.Equal(t, expectedAppCfgPrefix, actualAppConfigReaderPrefix)
    */
    actualRSGReaderName := terraform.Output(t, terraformOptions, "app_configuration_tested_RG_name")
    assert.Equal(t, expectedRsgName, actualRSGReaderName)
    actualAppConfigReader_ID := terraform.Output(t, terraformOptions, "app_configuration_tobetested_id")
    assert.Equal(t, expectedAppConfigReader_ID, actualAppConfigReader_ID)
}
The fact is, locally I can run the following command from my main folder Terraform\Azure_v2_X\Tests_Unit_ResourceModules to trigger all my tests in a row:
(from Go v1.11)
go test ./...
With Go version 1.12, I could set GO111MODULE=auto to get the same results.
But with Go 1.17, I now have to set GO111MODULE=off to trigger my tests.
For now, I have 2 main questions nagging me:
How can I import Terratest (and other Go) modules from an Azure pipeline?
What do I have to do to correctly use Go modules with Terratest?
I have no Go code in my main folder Terraform\Azure_v2_X\Tests_Unit_ResourceModules and would like to trigger all the sub-folder Go tests with a single command line in my Azure pipeline.
Best regards,
I will once again answer my own question. :D
So, for now, using the following versions:
-- GOVERSION: 1.17.1
-- TERRAFORM_VERSION: 1.1.7
-- TERRATEST_VERSION: 0.40.6
The folder hierarchy has changed as follows regarding the Terratest tests:
I no longer try to Go-import my Terratest module (so point 1 above is answered, obviously).
I now just have to:
go mod each of my Terratest modules
trigger each of them individually, one by one, using a script
So my pipeline just became the following:
- task: ms-devlabs.custom-terraform-tasks.custom-terraform-installer-task.TerraformInstaller@0
  displayName: Install Terraform $(TERRAFORM_VERSION)
  inputs:
    terraformVersion: $(TERRAFORM_VERSION)
- task: GoTool@0
  displayName: 'Use Go $(GOVERSION)'
  inputs:
    version: $(GOVERSION)
    goPath: $(GOPATH)
    goBin: $(GOBIN)
- task: PowerShell@2
  displayName: run Terratest for $(pathToTerraformRootModule)
  inputs:
    targetType: 'filePath'
    filePath: $(pipeline_artefact_folder_extract)/$(pathToTerraformRootModule)/$(Run_Terratest_script)
    workingDirectory: $(pipeline_artefact_folder_extract)/$(pathToTerraformRootModule)
  env:
    # see https://learn.microsoft.com/en-us/azure/developer/go/azure-sdk-authorization#use-environment-based-authentication
    # for Azure authentication with Go
    ARM_SUBSCRIPTION_ID: $(TF_VAR_ARM_SUBSCRIPTION_ID)
    AZURE_CLIENT_ID: $(TF_VAR_ARM_CLIENT_ID)
    AZURE_TENANT_ID: $(TF_VAR_ARM_TENANT_ID)
    AZURE_CLIENT_SECRET: $(TF_VAR_ARM_CLIENT_SECRET) # set as pipeline secret
    resource_group_name: $(storageAccountResourceGroup)
    storage_account_name: $(storageAccount)
    container_name: $(stateBlobContainer)
    key: '$(MODULE)-$(TF_VAR_APPLICATION)-$(TF_VAR_ENVIRONMENT).tfstate'
    GO111MODULE: 'auto'
And in the main folder for my Terratest sub-folders, I have the run_terratests.ps1 script and the Terratests list file, as below:
run_terratests.ps1
# this file is based on https://github.com/google/go-cloud/blob/master/internal/testing/runchecks.sh
#
# This script runs all Go Terratest suites,
# compatibility checks, consistency checks, Wire, etc.
$moduleListFile = "./Terratests"

# regex filter: keep lines that do not begin with '#'
$regexFilter = "^[^#]"

# read the module list file
[object] $arrayFromFile = Get-Content -Path $moduleListFile | Where-Object { $_ -match $regexFilter } | ConvertFrom-String -PropertyNames folder, totest

$result = 0 # no error by default

# remember the current folder
$main_path = Get-Location | select -ExpandProperty "Path"

# walk the array and run the modules flagged for testing
foreach ($line in $arrayFromFile) {
    # Write-Host $line
    if ($line.totest -eq "yes") {
        $path = $line.folder
        Set-Location $main_path\$path
        $myPath = Get-Location
        # Write-Host $myPath
        # trigger terratest for this module's files
        go test ./...
    }
    if ($false -eq $?) {
        $result = 1
    }
}

# back to school :D
Set-Location $main_path

if ($result -eq 1) {
    Write-Error "Msbuild exit code indicates test failure."
    Write-Host "##vso[task.logissue type=error]Msbuild exit code indicates test failure."
    exit(1)
}
the code
if ($false -eq $?) {
    $result = 1
}
is useful to make the pipeline fail on a test error without skipping the remaining tests.
Terratests
# this file lists all the modules to be tested in the "Tests_Unit_ConfigHelpers" repository.
# it is used by the "run_terratest.ps1" powershell script to trigger terratest for each test.
#
# Any line that doesn't begin with a '#' character and isn't empty is treated
# as a path relative to the top of the repository that has a module in it.
# The 'tobetested' field specifies whether the module has to be tested.
#
# this file is based on https://github.com/google/go-cloud/blob/master/allmodules
# module-directory tobetested
azure_constants yes
configure_app_srv_etc yes
configure_frontdoor_etc yes
configure_hostnames yes
constants yes
FrontEnd_AppService_slots/_main yes
FrontEnd_AppService_slots/settings yes
merge_maps_of_strings yes
name yes
name_template yes
network/hostname_generator yes
network/hostnames_generator yes
replace_2vars_into_string_etc yes
replace_var_into_string_etc yes
sorting_map_with_an_other_map yes
And the change in each Terratest folder is that I now add the go.mod and go.sum files:
$ go mod init mytest
go: creating new go.mod: module mytest
go: to add module requirements and sums:
go mod tidy
and
$ go mod tidy
# link each of the go modules needed for your terratest module
So, with that, the go test ./... from the PowerShell script will download the needed Go modules and run the tests for that particular module.
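For reference, the resulting go.mod in one of the test folders looks roughly like this (the module name matches the go mod init above; the exact require list and versions are whatever go mod tidy resolves for your imports, so treat these as illustrative):
module mytest

go 1.17

require (
    github.com/gruntwork-io/terratest v0.40.6
    github.com/stretchr/testify v1.7.0
)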
Thanks for reading and vote if you think that can help :)

Error: Missing required provider in next stage even after init

I have the following CI configuration:
...
cache:
  key: ${CI_PROJECT_NAME}
  paths:
    - ${TF_ROOT}/.terraform

before_script:
  - echo -e "credentials \"$CI_SERVER_HOST\" {\n token = \"$CI_JOB_TOKEN\"\n}" > $TF_CLI_CONFIG_FILE
  - cd ${TF_ROOT}
  - export TF_LOG_CORE=TRACE
  - export TF_LOG_PATH=terraform_logs.txt

stages:
  - initialize
  - validate

init:
  stage: initialize
  script:
    - terraform -v
    - terraform init
    #- terraform validate

validate:
  stage: validate
  script:
    - terraform validate
My init stage runs totally fine; however, I get the following in the next stage, i.e. validate:
$ terraform validate
╷
│ Error: Missing required provider
│
│ This configuration requires provider registry.terraform.io/datadog/datadog,
│ but that provider isn't available. You may be able to install it
│ automatically by running:
│ terraform init
in provider.tf:
terraform {
  required_version = ">= 0.14"
  required_providers {
    datadog = {
      source  = "DataDog/datadog"
      version = "2.24.0"
    }
  }
}
in config.toml:
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "some runner"
  url = "****
  token = "***"
  executor = "shell"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
If I run validate as a subsequent command within the init stage itself, it works fine, just not in a different stage.
If I do ls -al in the next stage before validate, I can even see the .terraform folder present, which should contain the providers.
My second guess was a caching issue, however I believe I have specified the cache correctly: ${TF_ROOT}/.terraform.
I am running gitlab-runner with the shell executor.
Any idea what is wrong here?
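One hedged idea for anyone with the same symptom: since Terraform 0.14, init records its provider selections in .terraform.lock.hcl as well as in .terraform/, so caching only the directory can leave validate with an inconsistent view, and re-running init in the validate job is nearly free once the plugins are cached. A sketch combining both (paths follow the config above):
cache:
  key: ${CI_PROJECT_NAME}
  paths:
    - ${TF_ROOT}/.terraform
    - ${TF_ROOT}/.terraform.lock.hcl

validate:
  stage: validate
  script:
    - terraform init -input=false # near no-op when the cache is warm
    - terraform validate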

GitHub Action: write to a repo in Node with @actions/core or @actions/github?

Learning GitHub Actions, I'm finally able to call an action from a secondary repo. Example:
org/action-playground
.github/workflows/test.yml
name: Test Write Action
on:
  push:
    branches: [main]
jobs:
  test_node_works:
    runs-on: ubuntu-latest
    name: Test if Node works
    strategy:
      matrix:
        node-version: [12.x]
    steps:
      - uses: actions/checkout@v2
        with:
          repository: org/write-file-action
          ref: main
          token: ${{ secrets.ACTION_TOKEN }} # stored in GitHub secrets created from profile settings
          args: 'TESTING'
      - name: action step
        uses: ./ # Uses an action in the root directory
        id: foo
        with:
          who-to-greet: 'Darth Vader'
      - name: output time
        run: |
          echo "The details are ${{ steps.foo.outputs.repo }}"
          echo "The time was ${{ steps.foo.outputs.time }}"
          echo "time: ${{ steps.foo.outputs.time }}" >> ./foo.md
        shell: bash
and the action is a success.
org/write-file-action
action.yml:
## https://docs.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions
name: 'Write File Action'
description: 'workflow testing'
inputs:
  who-to-greet: # id of input
    description: 'Who to greet'
    required: true
    default: './'
outputs:
  time: # id of output
    description: 'The time we greeted you'
  repo:
    description: 'user and repo'
runs:
  using: 'node12'
  main: 'dist/index.js'
branding:
  color: 'green'
  icon: 'truck' ## https://docs.github.com/en/actions/creating-actions/metadata-syntax-for-github-actions#brandingicon
index.js that is built to dist/index.js
const fs = require('fs')
const core = require('@actions/core')
const github = require('@actions/github')

try {
  // `who-to-greet` input defined in action metadata file
  const nameToGreet = core.getInput('who-to-greet')
  console.log(`Hello ${nameToGreet}!`)
  const time = new Date().toTimeString()
  core.setOutput('time', time)
  const repo = github.context.payload.repository.full_name
  console.log(`full name: ${repo}!`)
  core.setOutput('repo', repo)
  // Get the JSON webhook payload for the event that triggered the workflow
  const payload = JSON.stringify(github.context.payload, undefined, 2)
  console.log(`The event payload: ${payload}`)
  fs.writeFileSync('payload.json', payload) // Doesn't write to repo
} catch (error) {
  core.setFailed(error.message)
}
package.json:
{
  "name": "wite-file-action",
  "version": "1.0.0",
  "description": "workflow testing",
  "main": "dist/index.js",
  "scripts": {
    "build": "ncc build ./index.js"
  },
  "dependencies": {
    "@actions/core": "^1.4.0",
    "@actions/github": "^5.0.0"
  },
  "devDependencies": {
    "@vercel/ncc": "^0.28.6",
    "prettier": "^2.3.2"
  }
}
but with the current workflow, nothing is created in action-playground. The only way I'm able to write to the repo is from a module using the API via github-api, with something like:
const GitHub = require('github-api')

const gh = new GitHub({
  token: config.app.git_token,
}, githubUrl)

const repo = gh.getRepo(config.app.repoOwner, config.app.repoName)
const branch = config.app.repoBranch
const path = 'README.md'
const content = '#Foo Bar\nthis is foo bar'
const message = 'add foo bar to the readme'
const options = {}

repo.writeFile(
  branch,
  path,
  content,
  message,
  options
).then((r) => {
  console.log(r)
})
and passing in the repo, org, or user from github.context.payload. My end goal is to eventually check whether a README.md exists and, if so, overwrite it, writing a badge dynamically:
`![${github.context.payload.workflow}](https://github.com/${github.context.payload.user}/${github.context.payload.repo}/actions/workflows/${github.context.payload.workflow}.yml/badge.svg?branch=main)`
My second goal is to create a markdown file (like foo.md or payload.json), but I can't run an echo command from the action to write to the repo, which I get is Bash and not Node.
Is there a way, without using the API, to write to the repo that is calling the action from Node? Or is this only available from Bash when using run:
- name: output
  shell: bash
  run: |
    echo "time: ${{ steps.foo.outputs.time }}" >> ./time.md
If so, how do I do it?
Research:
Passing variable argument to .ps1 script not working from Github Actions
How to pass variable between two successive GitHub Actions jobs?
GitHub Action: Pass Environment Variable to into Action using PowerShell
How to create outputs on GitHub actions from bash scripts?
Self-updating GitHub Profile README with JavaScript
Workflow syntax for GitHub Actions
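Not an authoritative answer, but one pattern that avoids the REST API entirely: a JavaScript action runs inside the workflow's checkout, so it can write files into the workspace with fs, and a later workflow step commits them back with plain git. A minimal sketch of the committing step (branch, file names, and commit message are placeholders):
      - name: Commit files written by the action
        shell: bash
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add payload.json foo.md
          git commit -m "chore: update generated files" || echo "nothing to commit"
          git push
Note that in the workflow above, actions/checkout was pointed at org/write-file-action rather than the calling repo, so files written by the action land in that checkout instead; checking out the calling repository first (the default checkout) is part of the assumption here.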

GitHub Action for reading JSON file

I would like to write a GitHub Action that uses JSON data as a parameter.
The plan is to save the Terraform output as a JSON file in one step and access it in a subsequent step.
name: Test and Terraform
on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
jobs:
  Unit-Tests:
    ...
  Terraform:
    runs-on: ubuntu-18.04
    needs: [ Unit-Tests, S3-Sync ]
    steps:
      ...
      - name: Terraform Apply
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        run: |
          cd terraform
          terraform apply -auto-approve
          terraform output --json > output.json
      - name: Set Terraform Output
        id: output
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        run: |
          content=`cat ./terraform/output.json`
          # the following lines are only required for multi line json
          content="${content//'%'/'%25'}"
          content="${content//$'\n'/'%0A'}"
          content="${content//$'\r'/'%0D'}"
          # end of optional handling for multi line json
          echo "::set-output name=terraform::$content"
      - name: Gatsby Cloud Sync
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        run: |
          echo "${{fromJson(steps.output.outputs.terraform)}}"
The current error I get with this is that the final step is formatted improperly.
The template is not valid. .github/workflows/main.yml (Line: 154, Col: 14): Unexpected character encountered while parsing value: c. Path '', line 1, position 1.
How is the last step, Gatsby Cloud Sync, formatted incorrectly?
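A hedged reading of that error: the ${{ fromJson(...) }} expression is evaluated by the workflow template parser, and "Unexpected character encountered while parsing value: c" means the string it received does not start with valid JSON. A first debugging step is to print the raw captured output without fromJson, and if a parsed value is needed, index a single field rather than echoing the whole object; two sketches (some_output is a placeholder for one of your actual Terraform output names):
      - name: Gatsby Cloud Sync
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        run: |
          # print the raw JSON string exactly as captured by set-output
          echo '${{ steps.output.outputs.terraform }}'
          # or extract one field from the parsed object
          echo "${{ fromJson(steps.output.outputs.terraform).some_output.value }}"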
