Terraform output value fails to be formatted by jq in GitHub Actions - terraform

What do I want to achieve?
Store a Terraform output value (an IP address) as an environment variable in GitHub Actions and use it when updating a network security group.
What have I done?
Based on Github Actions, how to share a calculated value between job steps?:
- name: Setup Terraform
  uses: hashicorp/setup-terraform@v1
- name: Extract gateway ip
  run: |
    terraform init
    echo "IP_GAT=$(terraform output -json gatewayStaticIp | jq -r .)" >> $GITHUB_ENV
  working-directory: ${{ env.my_dir }}
- name: Update security group
  run: |
    ip=${{ env.IP_GAT }}
    az network nsg rule update -g myGroup --nsg-name myName -n myRuleName --source-address-prefix $ip
Apparently there is some problem with jq, even though this seems to be exactly like the example in the docs (https://www.terraform.io/docs/commands/output.html):
Error: write EPIPE
Any ideas?
Thanks in advance

The hashicorp/setup-terraform@v1 action uses a wrapper to execute terraform, and that wrapper mangles the output when it is redirected (as you do with the shell pipe). There's an issue describing the problem in their repo.
Disabling the wrapper will make it work, but you'll lose some functionality for reusing stdout, stderr and the exit code from the GitHub integration.
- name: Terraform setup
  uses: hashicorp/setup-terraform@v1
  with:
    terraform_version: 0.13.5
    terraform_wrapper: false
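Putting the two pieces together, a minimal sketch of the fixed steps from the question (only the wrapper setting is new; the output name gatewayStaticIp and env.my_dir come from the question):
- name: Setup Terraform
  uses: hashicorp/setup-terraform@v1
  with:
    terraform_version: 0.13.5
    terraform_wrapper: false   # disable the wrapper so stdout can be piped to jq
- name: Extract gateway ip
  run: |
    terraform init
    echo "IP_GAT=$(terraform output -json gatewayStaticIp | jq -r .)" >> $GITHUB_ENV
  working-directory: ${{ env.my_dir }}
On Terraform 0.14 and later, terraform output -raw gatewayStaticIp would return the bare string without needing jq at all.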

Related

Replace kubernetes yaml environment variables in Azure pipeline?

I have an Azure DevOps pipeline that I'm using to deploy a Kubernetes project to Rancher. In my k8s deployment.yaml file I have environment variables defined like this:
containers:
  - name: frontend
    env:
      - name: GIT_HASH
        value: dummy_value
I want to be able to replace the GIT_HASH with a value created in the Azure YAML pipeline. Specifically, I have a script to get a short git commit hash, e.g.:
- task: Bash@3
  displayName: Set the short git hash as a variable
  inputs:
    targetType: 'inline'
    script: |
      short_hash=$(git rev-parse --short=7 HEAD)
      echo "##vso[task.setvariable variable=git-hash;]$short_hash"
And I want to be able to inject this value into Kubernetes as the GIT_HASH. Is there a way to do this? I've tried using the qetza.replacetokens.replacetokens-task.replacetokens@3 task but can't get it to work.
Posting a comment as the community wiki answer for better visibility.
Have you considered using Helm or Kustomize to template your deployment manifests instead of relying on token replacement?
More information can be found in the official Azure documentation.
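For illustration, a minimal sketch of the Helm route suggested above; the chart path, release name and gitHash value name are hypothetical:
# templates/deployment.yaml (excerpt): the env value becomes a chart parameter
env:
  - name: GIT_HASH
    value: {{ .Values.gitHash | quote }}

# azure-pipelines.yml: consume the variable set by the earlier Bash@3 step
- task: Bash@3
  displayName: Deploy with the short git hash
  inputs:
    targetType: 'inline'
    script: |
      helm upgrade --install frontend ./chart --set gitHash=$(git-hash)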

How to set KUBECONFIG from a Terraform Cloud generated file in GitHub Actions

I am trying to set up GitHub Actions to run a CI with Terraform and Kubernetes. I am connecting to Terraform Cloud to run the terraform commands and it appears to be generating the kubeconfig during the apply process. I get this in the output:
local_file.kubeconfig: Creation complete after 0s
In the next step, I try to run kubectl to see the resources that were built, but the command fails because it can't find the configuration file. Specifically:
error: Missing or incomplete configuration info.
So my question is, how do I use the newly generated local_file.kubeconfig in my kubectl commands?
My first attempt was to expose KUBECONFIG as an environment variable in the GitHub Actions step, but I didn't know how I would get the value from Terraform Cloud into GitHub Actions. So instead, I tried to set the variable in my Terraform file with a provisioner definition. But this doesn't seem to work.
Is there an easier way to load that value?
Github Actions Steps
steps:
  - name: Checkout code
    uses: actions/checkout@v2
    with:
      ref: 'privatebeta-kubes'
  - name: Setup Terraform
    uses: hashicorp/setup-terraform@v1
    with:
      cli_config_credentials_token: ${{ secrets.TERRAFORM_API_TOKEN }}
  - name: Terraform Init
    run: terraform init
  - name: Terraform Format Check
    run: terraform fmt -check -v
  - name: Terraform Plan
    run: terraform plan
    env:
      LINODE_TOKEN: ${{ secrets.LINODE_TOKEN }}
  - name: Terraform Apply
    run: terraform apply -auto-approve
    env:
      LINODE_TOKEN: ${{ secrets.LINODE_TOKEN }}
  # this step fails because kubectl can't find the token
  - name: List kube nodes
    run: kubectl get nodes
and my main.tf file has this definition:
provider "kubernetes" {
kubeconfig = "${local_file.kubeconfig.content}"
}
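One possible direction, sketched only and not taken from an answer: expose the kubeconfig content as a Terraform output (the output name kubeconfig below is an assumption), then write it to a file in a later step and point KUBECONFIG at it through $GITHUB_ENV.
# main.tf: assumed additional output exposing the generated kubeconfig
output "kubeconfig" {
  value     = local_file.kubeconfig.content
  sensitive = true
}

# workflow steps after Terraform Apply (terraform_wrapper must be disabled for the pipe to work, see the first question)
- name: Export kubeconfig
  run: |
    mkdir -p $HOME/.kube
    terraform output -json kubeconfig | jq -r . > $HOME/.kube/config
    echo "KUBECONFIG=$HOME/.kube/config" >> $GITHUB_ENV
- name: List kube nodes
  run: kubectl get nodes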

Error: Apply not allowed for workspaces with a VCS connection

Error: Apply not allowed for workspaces with a VCS connection
I am getting this error when trying to apply a terraform plan via Github Actions.
Github Action (terraform apply)
- name: Terraform Apply Dev
  id: apply_dev
  if: github.ref == 'refs/heads/master' && github.event_name == 'push'
  run: TF_WORKSPACE=dev terraform apply -auto-approve deployment/
Terraform workspace
The workspace was created on Terraform Cloud as a Version control workflow and is called app-infra-dev
Terraform backend
# The configuration for the `remote` backend.
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "my-org-name"

    workspaces {
      prefix = "app-infra-"
    }
  }
}
So because I called my workspace app-infra-dev, my workspace prefix in the backend file is app-infra- and TF_WORKSPACE=dev is set in my GH Action. I would have hoped that would have been enough to make it work.
Thanks for any help!
Your workspace type must be "API driven workflow".
https://learn.hashicorp.com/tutorials/terraform/github-actions
I had the same issue because I initially created it as "Version control workflow", which makes sense, but it doesn't work as expected.
Extracted from the documentation:
In the UI and VCS workflow, every workspace is associated with a
specific branch of a VCS repo of Terraform configurations. Terraform
Cloud registers webhooks with your VCS provider when you create a
workspace, then automatically queues a Terraform run whenever new
commits are merged to that branch of the workspace's linked repository.
https://www.terraform.io/docs/cloud/run/ui.html#summary
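As an aside, once the workspace is recreated as an API-driven workspace, the backend can also pin it by name instead of combining a prefix with TF_WORKSPACE; a minimal sketch using the names from the question:
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "my-org-name"

    workspaces {
      # pin a single workspace instead of selecting one via TF_WORKSPACE
      name = "app-infra-dev"
    }
  }
}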
Instead of if: github.ref == 'refs/heads/master' && github.event_name == 'push', you might consider triggering the apply on the GitHub event itself, as in this example:
name: terraform apply

# Controls when the action will run.
on:
  # Triggers the workflow on push or pull request events but only for the main branch
  push:
    branches: [ master ]
In that example, you can see the terraform apply used at the end of a terraform command sequence:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "apply"
  apply:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest

    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
      - uses: hashicorp/setup-terraform@v1
        with:
          terraform_wrapper: true
          terraform_version: 0.14.0

      # Runs a single command using the runners shell
      - name: create credentials
        run: echo "$GOOGLE_APPLICATION_CREDENTIALS" > credentials.json
        env:
          GOOGLE_APPLICATION_CREDENTIALS: ${{ secrets.GOOGLE_APPLICATION_CREDENTIALS }}
      - name: export GOOGLE_APPLICATION_CREDENTIALS
        run: |
          echo "GOOGLE_APPLICATION_CREDENTIALS=`pwd`/credentials.json" >> $GITHUB_ENV
      - name: terraform init
        run: terraform init
      - name: terraform workspace new
        run: terraform workspace new dev-tominaga
        continue-on-error: true
      - name: terraform workspace select
        run: terraform workspace select dev-tominaga
        continue-on-error: true
      - name: terraform init
        run: terraform init
      - name: terraform workspace show
        run: terraform workspace show
      - name: terraform apply
        id: apply
        run: terraform apply -auto-approve
Check if you can adapt that to your workflow.

Array variable in gitlab CI/CD yml

I am writing a CI/CD pipeline for Terraform to deploy GCP resources.
The Terraform code covers several functional areas, and Network is one of them. The folder structure of Network is:
Network
  VPC
  LoadBalancer
  DNS
  VPN
So, I want to loop the terraform init, plan and apply commands over all the sub-folders of the Network folder. The yml file looks like this:
image:
  name: registry.gitlab.com/gitlab-org/terraform-images/stable:latest

variables:
  TF_ROOT: ${CI_PROJECT_DIR}
  env: 'prod'
  network_services: ""

stages:
  - init

init:
  stage: init
  script:
    - |
      network_services = ['vpc' 'vpn']
      for service in $network_services[#]
      do
        echo The path is var/$env/terraform.tfvars
      done
The above gives me the error:
$ network_services = ['vpc' 'vpn'] # collapsed multi-line command
/bin/sh: eval: line 103: network_services: not found
Please suggest a way to declare an array variable in GitLab CI/CD yml.
Try changing your job to something like this:
init:
  stage: init
  script:
    - network_services=('vpc' 'vpn')
    - for service in "${network_services[@]}"; do echo "The path is var/$env/terraform.tfvars"; done
I don't believe you can do multi-line commands like that due to the way the gitlab-runner evals the entries in the script array, though combining them with ; has worked for me when I want them on one line.
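Note that the error output shows /bin/sh, and a plain POSIX sh has no arrays, so if the bash-style array still fails, a space-separated string is a possible workaround (a sketch, not part of the original answer):
init:
  stage: init
  script:
    - network_services="vpc vpn"
    - for service in $network_services; do echo "The path is var/$env/terraform.tfvars"; done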

Ansible return code error: 'dict object' has no attribute 'rc'

I am using Ansible to automate the installation, configuration and deployment of an application server which uses JBoss, so I need to use the built-in jboss-cli to deploy packages.
This Ansible task is literally the last stage to run; it simply needs to check if a deployment already exists, and if it does, undeploy it and redeploy it (to be idempotent).
Running the commands below manually on the server and checking the return code after each command works as expected; something, somewhere in Ansible refuses to read the return codes correctly!
# BLAZE RMA DEPLOYMENT
- name: Check if Blaze RMA has been assigned to dm-server-group from a previous Ansible run
  shell: "./jboss-cli.sh --connect '/server-group=dm-server-group/deployment={{ blaze_deployment_version }}:read-resource()' | grep -q success"
  args:
    chdir: "{{ jboss_home_dir }}/bin"
  register: blaze_deployment_status
  failed_when: blaze_deployment_status.rc == 2
  tags: # We always need to check this as the output determines whether or not we need to undeploy an existing deployment.
    - skip_ansible_lint

- name: Undeploy Blaze RMA if it has already been assigned to dm-server-group from a previous Ansible run
  command: "./jboss-cli.sh --connect 'undeploy {{ blaze_deployment_version }} --all-relevant-server-groups'"
  args:
    chdir: "{{ jboss_home_dir }}/bin"
  when: blaze_deployment_status.rc == 0
  register: blaze_undeployment_status

- name: Deploy Blaze RMA once successfully undeployed
  command: "./jboss-cli.sh --connect 'deploy {{ jboss_deployment_dir }}/{{ blaze_deployment_version }} --server-groups=dm-server-group'"
  args:
    chdir: "{{ jboss_home_dir }}/bin"
  when: blaze_undeployment_status.rc == 0 or blaze_deployment_status.rc == 1
Any advice would be appreciated.
Your second command contains a when clause. If it is skipped, Ansible still registers the variable, but there is no rc attribute in the data.
You need to take this into consideration when using the variable in the next task. The following condition on the last task should fix your issue:
when: blaze_undeployment_status.rc | default('') == 0 or blaze_deployment_status.rc == 1
The same situation as the author's can also occur when you run ansible with --check.
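An alternative, sketched here and not taken from the answer above, is to test the registered result with Ansible's built-in skipped test instead of defaulting the missing rc:
- name: Deploy Blaze RMA once successfully undeployed
  command: "./jboss-cli.sh --connect 'deploy {{ jboss_deployment_dir }}/{{ blaze_deployment_version }} --server-groups=dm-server-group'"
  args:
    chdir: "{{ jboss_home_dir }}/bin"
  # run when the undeploy task actually ran and succeeded,
  # or when the initial check reported the deployment as absent
  when: >
    (blaze_undeployment_status is not skipped and blaze_undeployment_status.rc == 0)
    or blaze_deployment_status.rc == 1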
