Jenkins console shows a permission denied error when terraform init runs in the pipeline (Azure)

Please find below the Jenkins pipeline script, followed by the console output produced when the job runs:
pipeline {
    agent any
    parameters {
        string(name: 'myvmname', defaultValue: 'az-k8s', description: 'VM name')
        string(name: 'mymodulename', defaultValue: 'aks', description: 'Module name')
        string(name: 'operation', defaultValue: 'apply', description: 'terraform operation apply/destroy')
        booleanParam(name: 'AKS', defaultValue: true, description: 'AKS Cluster Creation')
    }
    stages {
        stage('IAC AKS -Git Clone') {
            when {
                expression { params.AKS == true }
            }
            steps {
                git branch: 'main', credentialsId: 'xxxx-xxxx-xxxx-xxx', url: 'http://xxx.xx.xx.x/root/rapidopsiacaks.git'
            }
        }
        stage('AKS Cluster Creation') {
            when {
                expression { params.AKS == true }
            }
            steps {
                sh '''echo "AKS cluster creation"
                mkdir -p /var/lib/jenkins/workspace/VM_Prov_Terraform/${myvmname}
                cp -r /var/lib/jenkins/workspace/TerraformAKS/* /var/lib/jenkins/workspace/VM_Prov_Terraform/${myvmname}
                cd /var/lib/jenkins/workspace/VM_Prov_Terraform/${myvmname}/${mymodulename}
                terraform init
                terraform plan
                terraform ${operation} -auto-approve
                '''
            }
        }
    }
}
When I run or trigger the job via Jenkins, I get the error below (sharing the complete console output):
+ cd /var/lib/jenkins/workspace/VM_Prov_Terraform/az-k8s/aks
+ terraform init
There are some problems with the configuration, described below.

The Terraform configuration must be valid before initialization so that
Terraform can determine which modules and providers need to be installed.

Error: Reserved argument name in module block

  on main.tf line 24, in module "aks":
  24:   depends_on = [azurerm_resource_group.rg]

The name "depends_on" is reserved for use in a future version of Terraform.

Error: Invalid version constraint

  on provider.tf line 3, in terraform:
   3:   azurerm = {
   4:     source  = "hashicorp/azurerm"
   5:     version = "~> 2.65"
   6:   }

A string value is required for azurerm.
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
ERROR: script returned exit code 1
Finished: FAILURE
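(For context: both messages are what Terraform releases older than 0.13 print when they parse 0.13-style syntax. Module-level depends_on and the object form of required_providers were introduced in Terraform 0.13, so the terraform binary on the Jenkins agent appears to be older than the configuration expects. For comparison, a minimal provider.tf in the newer object form, not taken from the asker's repository, looks like this:)

# provider.tf, Terraform 0.13+ object syntax (the form the errors above are rejecting)
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.65"
    }
  }
}

# On Terraform 0.12 the same constraint has to be a plain string instead:
#   azurerm = "~> 2.65"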
Also, when I run the terraform init command directly in a terminal on the Jenkins server, I get the error below:
[devopsadmin@jenkins1 aks]$ terraform init
Initializing modules...
╷
│ Error: Failed to update module manifest
│
│ Unable to write the module manifest file: open .terraform/modules/modules.json: permission denied
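(For context, .terraform/modules/modules.json is the module manifest that terraform init writes inside the working directory. A quick way to see which user owns that directory, using the path from the pipeline output above, is:)

# Run on the Jenkins server; compare the owner with the user that runs the job
ls -la /var/lib/jenkins/workspace/VM_Prov_Terraform/az-k8s/aks
ls -la /var/lib/jenkins/workspace/VM_Prov_Terraform/az-k8s/aks/.terraform
# If .terraform (or modules/ inside it) is owned by another user, init cannot
# rewrite modules.json and fails with exactly this permission denied error.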
Please help!
Thanks in advance!

Related

How to load local file with cdktf for helm release?

I'd like to reference a local yaml file when creating a helm chart with cdktf.
I have the following cdktf config:
{
  "language": "typescript",
  "app": "npx ts-node main.ts",
  "projectId": "...",
  "terraformProviders": [
    "hashicorp/aws@~> 3.42",
    "hashicorp/kubernetes@~> 2.7.0",
    "hashicorp/http@~> 2.1.0",
    "hashicorp/tls@~> 3.1.0",
    "hashicorp/helm@~> 2.4.1",
    "hashicorp/random@~> 3.1.0",
    "gavinbunney/kubectl@~> 1.14.0"
  ],
  "terraformModules": [
    {
      "name": "secrets-store-csi",
      "source": "app.terraform.io/goldsky/secrets-store-csi/aws",
      "version": "0.1.5"
    }
  ],
  "context": {
    "excludeStackIdFromLogicalIds": "true",
    "allowSepCharsInLogicalIds": "true"
  }
}
Note npx ts-node main.ts as the app.
In main.ts I have the following helm release
new helm.Release(this, "datadog-agent", {
  chart: "datadog",
  name: "datadog",
  repository: "https://helm.datadoghq.com",
  version: "3.1.3",
  set: [
    {
      name: "datadog.clusterChecks.enabled",
      value: "true",
    },
    {
      name: "clusterAgent.enabled",
      value: "true",
    },
  ],
  values: ["${file(\"datadog-values.yaml\")}"],
});
Note that I'm referencing a yaml file called datadog-values.yaml similar to this example from the helm provider.
datadog-values.yaml is a sister file to main.ts
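(For completeness, the helm namespace used above comes from the provider bindings that cdktf get generates from the terraformProviders list in cdktf.json; with the default output directory the import would typically look like the line below, which is an assumption about the project layout rather than something shown in the question:)

// Assumed import of the generated helm provider bindings (default .gen output path)
import * as helm from "./.gen/providers/helm";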
However, when I try to deploy this with cdktf deploy I get the following error
goldsky-infra-dev ╷
│ Error: Invalid function argument
│
│ on cdk.tf.json line 1017, in resource.helm_release.datadog-agent (datadog-agent).values:
│ 1017: "${file(\"datadog-values.yaml\")}"
│
│ Invalid value for "path" parameter: no file exists at
│ "datadog-values.yaml"; this function works only with files that are
│ distributed as part of the configuration source code, so if this file will
│ be created by a resource in this configuration you must instead obtain this
│ result from an attribute of that resource.
To run a deployment I execute npm run deploy:dev, which is a custom script in my package.json:
"build": "tsc",
"deploy:dev": "npm run build && npx cdktf deploy",
How can I reference my datadog yaml file in a helm release like in the example shown by the helm provider?
To reference local files in CDKTF, you need to use assets. Assuming at the root level of your project there's a values folder where you store your values yaml file:
const valuesAsset = new TerraformAsset(this, 'values-asset', {
  path: `${process.cwd()}/values/${this.chartValues}`,
  type: AssetType.FILE,
});

new HelmRelease(this, 'helm-release', {
  name: this.releaseName,
  chart: this.chartName,
  repository: this.chartRepository,
  values: [Fn.file(valuesAsset.path)],
});
Note that I've used the file Terraform function to read the content of the file.
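(For completeness, TerraformAsset, AssetType, and Fn in the snippet above are all exported by the cdktf package, so the corresponding import, assuming an otherwise standard CDKTF TypeScript project, is:)

// Imports used by the snippet above; all three come from the cdktf package
import { TerraformAsset, AssetType, Fn } from "cdktf";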

Error: Missing required provider in next stage even after init

I have the following CI configuration:
...
cache:
  key: ${CI_PROJECT_NAME}
  paths:
    - ${TF_ROOT}/.terraform

before_script:
  - echo -e "credentials \"$CI_SERVER_HOST\" {\n token = \"$CI_JOB_TOKEN\"\n}" > $TF_CLI_CONFIG_FILE
  - cd ${TF_ROOT}
  - export TF_LOG_CORE=TRACE
  - export TF_LOG_PATH=terraform_logs.txt

stages:
  - initialize
  - validate

init:
  stage: initialize
  script:
    - terraform -v
    - terraform init
    #- terraform validate

validate:
  stage: validate
  script:
    - terraform validate
My init stage runs totally fine; however, I get the following in the next stage, i.e. validate:
$ terraform validate
╷
│ Error: Missing required provider
│
│ This configuration requires provider registry.terraform.io/datadog/datadog,
│ but that provider isn't available. You may be able to install it
│ automatically by running:
│ terraform init
in provider.tf:
terraform {
  required_version = ">= 0.14"

  required_providers {
    datadog = {
      source  = "DataDog/datadog"
      version = "2.24.0"
    }
  }
}
in config.toml:
concurrent = 1
check_interval = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "some runner"
  url = "****
  token = "***"
  executor = "shell"
  [runners.custom_build_dir]
  [runners.cache]
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
If I run validate as a subsequent command in the init stage itself, it works fine, just not in a different stage.
If I do ls -al in the next stage before validate, I can even see the .terraform folder present, which should contain the providers.
My second guess was a caching issue, but I believe I have specified the cache correctly - ${TF_ROOT}/.terraform?
I am running the gitlab-runner as a shell executor.
Any idea what is wrong here?
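(A hypothetical way to narrow this down: Terraform 0.14 installs provider binaries under .terraform/providers, which sits inside the cached path, so listing that directory at the start of the validate job shows whether the cache restore actually brought the providers across stages. A diagnostic sketch, assuming TF_ROOT is set as in the config above:)

validate:
  stage: validate
  script:
    # diagnostic only: confirm the cached provider binaries survived between stages
    - ls -al ${TF_ROOT}/.terraform/providers || echo "provider directory missing after cache restore"
    - terraform validate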

Why is TF build failing with "Error refreshing state: HTTP remote state endpoint requires auth"?

My pipeline builds, plans and applies just fine for my dev branch. When I push to my master branch, I get "Error refreshing state: HTTP remote state endpoint requires auth". See the pipeline logs below (and since someone will ask: the token "$onbaord_cloud_account_into_monitoring" has complete read/write access to the scoped project API)...
Running with gitlab-runner 13.4.1 (REDACTED)
on runner-gitlab-runner-REDACTED-REDACTED REDACTED
Resolving secrets
00:00
Preparing the "kubernetes" executor
00:00
Using Kubernetes namespace: gitlab-managed-apps
Using Kubernetes executor with image my-gitlab.io:5005/team-cloud-platform-team/terraform-modules/root-module-deployment/python-terraform14 ...
Preparing environment
00:03
Waiting for pod gitlab-managed-apps/runner-REDACTED-project-REDACTED-concurrent-REDACTED to be running, status is Pending
Running on runner-REDACTED-project-REDACTED-concurrent-REDACTED via runner-gitlab-runner-REDACTED-REDACTED...
Getting source from Git repository
00:02
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/team-cloud-platform-team/teamcloudv2/workers/onboard-cloud-account-into-monitoring/.git/
Created fresh repository.
Checking out REDACTED as master...
Skipping Git submodules setup
Restoring cache
00:00
Checking cache for REDACTEDREDACTEDREDACTEDREDACTED...
No URL provided, cache will not be downloaded from shared cache server. Instead a local version of cache will be extracted.
Successfully extracted cache
Executing "step_script" stage of the job script
00:05
$ git config --global credential.helper cache
$ git fetch
From https://my-gitlab.io/team-cloud-platform-team/teamcloudv2/workers/onboard-cloud-account-into-monitoring
* [new branch] dev -> origin/dev
$ if [ "$CI_COMMIT_REF_NAME" == "master" ]; then TF_ACCOUNT="REDACTED";fi
$ if [ "$CI_COMMIT_REF_NAME" == "dev" ]; then TF_ACCOUNT="REDACTED";fi
$ if [ "$CI_COMMIT_REF_NAME" == "master" ]; then ORG_ACCOUNT="REDACTED";fi
$ if [ "$CI_COMMIT_REF_NAME" == "dev" ]; then ORG_ACCOUNT="REDACTED";fi
$ echo ${TF_ACCOUNT}
REDACTED
$ echo ${TF_ADDRESS}
https://my-gitlab.io/api/v4/projects/10849/terraform/state/onboard-cloud-account-into-monitoring-master
$ echo TF_HTTP_ADDRESS=${TF_ADDRESS}
TF_HTTP_ADDRESS=https://my-gitlab.io/api/v4/projects/10849/terraform/state/onboard-cloud-account-into-monitoring-master
$ echo TF_HTTP_LOCK_ADDRESS=${TF_LOCK}
TF_HTTP_LOCK_ADDRESS=https://my-gitlab.io/api/v4/projects/10849/terraform/state/onboard-cloud-account-into-monitoring-master/lock
$ echo TF_HTTP_UNLOCK_ADDRESS=${TF_UNLOCK}
TF_HTTP_UNLOCK_ADDRESS=https://my-gitlab.io/api/v4/projects/10849/terraform/state/onboard-cloud-account-into-monitoring-master/lock
$ echo TF_ADDRESS=${TF_ADDRESS}
TF_ADDRESS=https://my-gitlab.io/api/v4/projects/10849/terraform/state/onboard-cloud-account-into-monitoring-master
$ export TF_HTTP_ADDRESS=${TF_ADDRESS}
$ export TF_HTTP_LOCK_ADDRESS=${TF_LOCK}
$ export TF_HTTP_UNLOCK_ADDRESS=${TF_UNLOCK}
$ export TF_HTTP_LOCK_METHOD="POST"
$ export TF_HTTP_UNLOCK_METHOD="DELETE"
$ export TF_HTTP_RETRY_WAIT_MIN='5'
$ export TF_HTTP_USERNAME='JOHN.DOE'
$ export TF_HTTP_PASSWORD=${onbaord_cloud_account_into_monitoring}
$ export TF_VAR_ACCOUNT=${TF_ACCOUNT}
$ export TF_VAR_ORG_ACCOUNT=${ORG_ACCOUNT}
$ export AWS_DEFAULT_REGION='us-east-1'
$ terraform init
Initializing modules...
Downloading git::https://my-gitlab.io/team-cloud-platform-team/teamcloudv2/terraform/lambda-modules.git?ref=dev for lambda...
- lambda in .terraform/modules/lambda
Downloading git::https://my-gitlab.io/team-cloud-platform-team/teamcloudv2/terraform/iam-modules/lambda-iam.git?ref=dev for lambda_iam...
- lambda_iam in .terraform/modules/lambda_iam
Initializing the backend...
Successfully configured the backend "http"! Terraform will automatically
use this backend unless the backend configuration changes.
Error refreshing state: HTTP remote state endpoint requires auth
Cleaning up file based variables
00:00
ERROR: Job failed: command terminated with exit code 1
A peek at my gitlab-ci.yml:
image:
  name: my-gitlab.io:5005/team-cloud-platform-team/terraform-modules/root-module-deployment/python-terraform14
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

stages:
  - build
  - plan
  - apply

variables:
  PLAN: plan.tfplan
  JSON_PLAN_FILE: tfplan.json
  WORKSPACE: "dev"
  TF_IN_AUTOMATION: "true"
  ZIP_FILE: "lambda_package.zip"
  STATE_FILE: ${CI_PROJECT_NAME}-${CI_COMMIT_BRANCH}
  TF_ACCOUNT: ""
  ORG_ACCOUNT: ""
  TF_ADDRESS: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${CI_PROJECT_NAME}-${CI_COMMIT_BRANCH}
  TF_LOCK: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${CI_PROJECT_NAME}-${CI_COMMIT_BRANCH}/lock
  TF_UNLOCK: ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/terraform/state/${CI_PROJECT_NAME}-${CI_COMMIT_BRANCH}/lock

cache:
  key: "$CI_COMMIT_SHA"
  paths:
    - .terraform

before_script:
  - git config --global credential.helper cache
  - git fetch
  - if [ "$CI_COMMIT_REF_NAME" == "master" ]; then TF_ACCOUNT="REDACTED";fi
  - if [ "$CI_COMMIT_REF_NAME" == "dev" ]; then TF_ACCOUNT="REDACTED";fi
  - if [ "$CI_COMMIT_REF_NAME" == "master" ]; then ORG_ACCOUNT="REDACTED";fi
  - if [ "$CI_COMMIT_REF_NAME" == "dev" ]; then ORG_ACCOUNT="REDACTED";fi
  - echo ${TF_ACCOUNT}
  - echo ${TF_ADDRESS}
  - echo TF_HTTP_ADDRESS=${TF_ADDRESS}
  - echo TF_HTTP_LOCK_ADDRESS=${TF_LOCK}
  - echo TF_HTTP_UNLOCK_ADDRESS=${TF_UNLOCK}
  - echo TF_ADDRESS=${TF_ADDRESS}
  - export TF_HTTP_ADDRESS=${TF_ADDRESS}
  - export TF_HTTP_LOCK_ADDRESS=${TF_LOCK}
  - export TF_HTTP_UNLOCK_ADDRESS=${TF_UNLOCK}
  - export TF_HTTP_LOCK_METHOD="POST"
  - export TF_HTTP_UNLOCK_METHOD="DELETE"
  - export TF_HTTP_RETRY_WAIT_MIN='5'
  - export TF_HTTP_USERNAME='MATTHEW.FETHEROLF'
  - export TF_HTTP_PASSWORD=${onbaord_cloud_account_into_monitoring}
  - export TF_VAR_ACCOUNT=${TF_ACCOUNT}
  - export TF_VAR_ORG_ACCOUNT=${ORG_ACCOUNT}
  - export AWS_DEFAULT_REGION='us-east-1'
  - terraform init

# -----BUILD-----
# 1 build job per lambda function
buildLambda:
  stage: build
  tags:
    - cluster
  script:
    - echo "Beginning Build"
    - cd lambda_code/
    - echo "Switched into Lambda dir"
    - pip3 install -r requirements.txt --target .
    - echo "Requirements installed"
    - echo $ZIP_FILE
    - zip -r $ZIP_FILE *
    - echo "Zip file packaged up"
  artifacts:
    paths:
      - lambda_code/
  only:
    - dev
    - master
    - merge_requests

# -----PLAN-----
planMerge:
  stage: plan
  tags:
    - cluster
  script:
    - terraform plan
  dependencies:
    - buildLambda
  only:
    - merge_requests

planCommit:
  stage: plan
  tags:
    - cluster
  script:
    - terraform plan
  dependencies:
    - buildLambda

# -----APPLY-----
# dev branch will auto deploy
applyDev:
  stage: apply
  tags:
    - cluster
  script:
    - echo "Running Terraform Apply"
    - terraform apply -auto-approve
  dependencies:
    - buildLambda
  only:
    - dev
  #when: manual

# prod branch will require manual deployment approval
applyProd:
  stage: apply
  tags:
    - cluster
  script:
    - echo "Running Terraform Apply"
    - terraform apply -auto-approve
  when: manual
  dependencies:
    - buildLambda
  only:
    - master
backend.tf:
terraform {
  backend "http" {
  }
}
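(For context, the empty http backend block is configured entirely through the TF_HTTP_* variables exported in before_script. Written out explicitly, with placeholder values rather than the real ones, the equivalent backend configuration would look roughly like this:)

terraform {
  backend "http" {
    address        = "https://my-gitlab.io/api/v4/projects/<PROJECT_ID>/terraform/state/<STATE_NAME>"
    lock_address   = "https://my-gitlab.io/api/v4/projects/<PROJECT_ID>/terraform/state/<STATE_NAME>/lock"
    unlock_address = "https://my-gitlab.io/api/v4/projects/<PROJECT_ID>/terraform/state/<STATE_NAME>/lock"
    lock_method    = "POST"
    unlock_method  = "DELETE"
    retry_wait_min = 5
    username       = "<GITLAB_USERNAME>"
    password       = "<PROJECT_ACCESS_TOKEN>"   # supplied via TF_HTTP_PASSWORD in CI
  }
}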
main.tf:
locals {
  common_tags = {
    SVC_ACCOUNT_ID         = "REDACTED",
    CLOUD_PLATFORM_PROD_ID = "REDACTED"
  }
}

module "lambda" {
  source                 = "git::https://my-gitlab.io/team-cloud-platform-team/teamcloudv2/terraform/lambda-modules.git?ref=dev"
  lambda_name            = var.name
  lambda_role            = "arn:aws:iam::${var.ACCOUNT}:role/${var.lambda_role}"
  lambda_handler         = var.handler
  lambda_runtime         = var.runtime
  default_lambda_timeout = var.timeout
  env                    = local.common_tags
  ACCOUNT                = var.ACCOUNT
}

module "lambda_iam" {
  source        = "git::https://my-gitlab.io/team-cloud-platform-team/teamcloudv2/terraform/iam-modules/lambda-iam.git?ref=dev"
  lambda_policy = var.lambda_policy
  ACCOUNT       = var.ACCOUNT
  lambda_role   = var.lambda_role
}
inputs.tf:
variable "handler" {
type = string
default = "handler.lambda_handler"
}
variable "runtime" {
type = string
default = "python3.8"
}
variable "name" {
type = string
default = "onboard-cloud-account-into-monitoring"
}
variable "timeout"{
type = string
default = "120"
}
variable "lambda_role" {
type = string
default = "cloud-platform-onboard-to-monitoring"
}
variable "ACCOUNT" {
type = string
}
variable "ORG_ACCOUNT" {
type = string
}
variable "lambda_policy" {
default = "{\"Version\": \"2012-10-17\",\"Statement\": [{\"Sid\": \"VisualEditor0\",\"Effect\": \"Allow\",\"Action\": [\"logs:CreateLogStream\",\"logs:CreateLogGroup\"],\"Resource\": \"*\"},{\"Sid\": \"VisualEditor1\",\"Effect\": \"Allow\",\"Action\": \"logs:PutLogEvents\",\"Resource\": \"*\"},{\"Sid\": \"VisualEditor2\",\"Effect\": \"Allow\",\"Action\": \"sts:AssumeRole\",\"Resource\": \"*\"},{\"Sid\": \"VisualEditor3\",\"Effect\": \"Allow\",\"Action\": \"secretsmanager:GetSecretValue\",\"Resource\": \"*\"}]}"
}
provider.tf:
provider "aws" {
region = "us-east-1"
assume_role {
role_arn ="arn:aws:iam::${var.ACCOUNT}:role/team-platform-onboard-to-monitoring"
}
default_tags {
tags = {
owner = "REDACTED#REDACTED.com"
altowner = "REDACTED#REDACTED.com"
blc = "REDACTED"
costcenter = "REDACTED"
itemid = "PlatformAutomation"
}
}
}
Perhaps I also need to share the terraform modules being invoked?
We found the problem:
The pipeline referenced a protected environment variable, but the master branch was not a protected branch. The solution was to either unprotect the environment variable or protect the branch; that instantly solved the problem.
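(The mechanism behind this: GitLab only injects protected CI/CD variables into jobs running on protected branches or tags, so on an unprotected master the token variable arrives empty and the http backend has no password to send. A hypothetical fail-fast check, not part of the original pipeline, would make that visible:)

before_script:
  # hypothetical guard: stop early if the protected variable was not injected on this branch
  - |
    if [ -z "${onbaord_cloud_account_into_monitoring}" ]; then
      echo "State token variable is empty on ${CI_COMMIT_REF_NAME}; is it protected while the branch is not?"
      exit 1
    fi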

Jenkins. Invalid agent type "docker" specified. Must be one of [any, label, none]

My Jenkinsfile looks like:
pipeline {
    agent {
        docker {
            image 'node:12.16.2'
            args '-p 3000:3000'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'node --version'
                sh 'npm install'
                sh 'npm run build'
            }
        }
        stage('Deliver') {
            steps {
                sh 'readlink -f ./package.json'
            }
        }
    }
}
I used to run Jenkins locally and this configuration worked, but after deploying it to a remote server I get the following error:
WorkflowScript: 3: Invalid agent type "docker" specified. Must be one of [any, label, none] @ line 3, column 9.
docker {
I could not find a solution to this problem on the internet; please help me.
You have to install two plugins: Docker plugin and Docker Pipeline.
Go to the Jenkins root page > Manage Jenkins > Manage Plugins > Available and search for the plugins.
instead of

agent {
    docker {
        image 'node:12.16.2'
        args '-p 3000:3000'
    }
}

try

agent {
    any {
        image 'node:12.16.2'
        args '-p 3000:3000'
    }
}

that worked for me.
For those using CasC (Configuration as Code), you might want to include the following in your plugin declaration:
docker:latest
docker-commons:latest
docker-workflow:latest
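(A minimal sketch of baking those plugins into a custom image, assuming the official jenkins/jenkins base image and its bundled jenkins-plugin-cli. Note that the Docker plugin's short name on plugins.jenkins.io is docker-plugin, while Docker Pipeline is docker-workflow:)

# plugins.txt
docker-plugin:latest
docker-commons:latest
docker-workflow:latest

# Dockerfile
FROM jenkins/jenkins:lts
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN jenkins-plugin-cli --plugin-file /usr/share/jenkins/ref/plugins.txt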

Getting permission denied when trying to run a shell script on jenkins pipeline

I'm trying to set up a Jenkins pipeline to build an Ionic app on a Linux server (an EC2 instance on Amazon Web Services). The first stage in my Jenkinsfile runs npm install, but it returns permission denied.
I've tried setting permissions on the folder using:
chmod 777 /home/ec2-user/.nvm/versions/node/v10.16.0/bin
I've also tried adding the Jenkins user to the group that has permissions on it. None of these seemed to work.
This is my Jenkinsfile
pipeline {
    agent any
    environment {
        PATH='/usr/local/bin:/usr/bin:/bin'
    }
    stages {
        stage('NPM Setup') {
            steps {
                sh '/home/ec2-user/.nvm/versions/node/v10.16.0/bin/npm install'
            }
        }
        stage('Android Build') {
            steps {
                sh 'ionic cordova build android --release'
            }
        }
        stage('APK Sign') {
            steps {
                echo "Sign Android APK Action"
            }
        }
        stage('Zip APK') {
            steps {
                echo "Zip the APK Action"
            }
        }
    }
}
I get this output:
[Pipeline] // stage
[Pipeline] withEnv
[Pipeline] {
[Pipeline] withEnv
[Pipeline] {
[Pipeline] stage
[Pipeline] { (NPM Setup)
[Pipeline] sh
+ /home/ec2-user/.nvm/versions/node/v10.16.0/bin/npm install
/var/lib/jenkins/workspace/p-ionic4_borderapp_ionic4_master@tmp/durable-9b0ecc49/script.sh: line 1: /home/ec2-user/.nvm/versions/node/v10.16.0/bin/npm: Permission denied
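(A hypothetical way to narrow this down, using the path from the output above: namei lists the owner and permissions of every directory leading to the npm binary. A single non-traversable directory is enough to cause this error, and on an EC2 host /home/ec2-user is typically mode 700, so the jenkins user cannot get past it:)

# Run on the agent as root; -l prints a long listing with owners and permissions
namei -l /home/ec2-user/.nvm/versions/node/v10.16.0/bin/npm
# npm under nvm is usually a symlink into ../lib/node_modules/npm, so that target
# must also be readable and traversable by the jenkins user.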
