Using the external data source to receive a variable from a bash script in Terraform

I want to fetch an SSH key using a DigitalOcean token with the get_sshkey.sh script:
do_token=$1
curl -X GET -s -H "Authorization: Bearer ${do_token//\"}" "https://api.digitalocean.com/v2/account/keys?page=1" | jq -r --arg queryname "User's key" '.ssh_keys[] | select(.name == $queryname).public_key'
I have declared a variable for the DO token, var.do-token, and I am trying to use the bash $1 positional parameter to pass it to get_sshkey.sh, running it inside a Terraform data "external" block as follows:
data "external" "fetchssh" {
program = ["sh", "/input/get_sshkey.sh `echo "var.do-token" | terraform -chdir=/input console -var-file terraform.auto.tfvars`"]
}
but I get the error: Expected a comma to mark the beginning of the next item.

In order to include literal quotes in your shell command, you'll need to escape them so that Terraform can see that you don't intend to end the quoted string template:
data "external" "fetchssh" {
program = ["sh", "/input/get_sshkey.sh `echo \"var.do-token\" | terraform -chdir=/input console -var-file terraform.auto.tfvars`"]
}
Using the external data source in Terraform to recursively run another Terraform process is a pretty irregular thing to do, but as long as there's such a variable defined in that configuration, I suppose it could work.
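For reference, a sketch of the more conventional approach: the external data source's query argument is serialized as JSON and written to the program's stdin, so the token can be passed without any shell interpolation. This assumes get_sshkey.sh is adapted to read the token from stdin, e.g. with do_token=$(jq -r '.do_token'), and to print its result as a JSON object such as {"public_key": "..."}:

```hcl
data "external" "fetchssh" {
  program = ["sh", "/input/get_sshkey.sh"]

  # Serialized as a JSON object and written to the script's stdin
  query = {
    do_token = var.do-token
  }
}
```

The script's JSON output is then available elsewhere in the configuration as data.external.fetchssh.result.public_key.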

Related

parse error when trying to parse terraform output in GitHub Actions workflow with jq

I have a GitHub Actions workflow that reads output from a terraform configuration. I'm trying to do this:
terraform -chdir=terraform/live/dev output -json > /tmp/output.json
APP_URL=$(cat /tmp/output.json | jq -r '.app_url.value')
I'm getting the following error in the GitHub Action logs:
parse error: Invalid numeric literal at line 1, column 9
I added the following to debug this:
# debugging output.json file
echo "output.json:"
cat /tmp/output.json
And I'm finding that output of cat /tmp/output.json is:
/home/runner/work/_temp/2b622f60-be99-4a29-a295-593b06dde9a8/terraform-bin -chdir=terraform/live/dev output -json
{
"app_url": {
"sensitive": false,
"type": "string",
"value": "https://app.example.com"
}
}
This tells me that jq can't parse the temporary file that I wrote the terraform JSON output to because it seems to be adding the command to the file itself:
/home/runner/work/_temp/2b622f60-be99-4a29-a295-593b06dde9a8/terraform-bin -chdir=terraform/live/dev output -json
How can I get the terraform output as JSON and write it to a file without the extra header line that is causing the parse error?
When I run the same commands locally, I do not get this parse error.
Here's the code for the section of my GitHub Action workflow that is producing this error: https://github.com/briancaffey/django-step-by-step/blob/main/.github/workflows/terraform_frontend_update.yml#L72-L74
Things I have tried
using cd terraform/live/dev instead of -chdir=terraform/live/dev - this resulted in the same error
I was able to fix this issue by adding the following setting to the setup-terraform@v1 action:
- uses: hashicorp/setup-terraform@v1
  with:
    terraform_version: 1.1.7
    terraform_wrapper: false # <-- added this
More documentation about this setting can be found here: https://github.com/hashicorp/setup-terraform#inputs
Whilst @briancaffey's answer is exactly right, if you need to use the terraform_wrapper for other parts of your workflow (e.g. using the output), you can switch the problematic terraform calls to terraform-bin instead.
If you also want to run the script outside GitHub Actions, the following workaround will do the trick:
tf=terraform-bin
type "$tf" >/dev/null 2>&1 || tf=terraform
$tf output -json > /tmp/output.json
See https://github.com/hashicorp/setup-terraform/issues/167#issuecomment-1090760365 for an issue that mentions the same workaround.
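Putting the above together, a minimal workflow sketch based on the question's commands (step names and the /tmp path are illustrative):

```yaml
- uses: hashicorp/setup-terraform@v1
  with:
    terraform_version: 1.1.7
    terraform_wrapper: false   # disable the wrapper so output is clean JSON

- name: Read terraform output
  run: |
    terraform -chdir=terraform/live/dev output -json > /tmp/output.json
    APP_URL=$(jq -r '.app_url.value' /tmp/output.json)
    echo "APP_URL=$APP_URL" >> "$GITHUB_ENV"
```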

How to add an input variable in terraform terraform.tfvars from JSON file?

How to add an input variable in terraform terraform.tfvars from a JSON file?
I have written Terraform code for Snowflake. I have defined the input variables manually in terraform.tfvars, but I would like to provide the input in JSON format as input variables for terraform.tfvars.
Does anyone have an example of taking input variables in JSON format, including them in terraform.tfvars, and running the Terraform code?
Thanks for your support.
The easiest way would be json2hcl. All you need is to install the tool:
curl -SsL https://github.com/kvz/json2hcl/releases/download/v0.0.6/json2hcl_v0.0.6_darwin_amd64 \
| sudo tee /usr/local/bin/json2hcl > /dev/null && sudo chmod 755 /usr/local/bin/json2hcl && json2hcl -version
and then run it as follows:
$ json2hcl < fixtures/infra.tf.json
"output" "arn" {
"value" = "${aws_dynamodb_table.basic-dynamodb-table.arn}"
}
... rest of HCL truncated
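It's also worth noting that Terraform can read variable files written in JSON natively: a file named terraform.tfvars.json (or any *.auto.tfvars.json file) is loaded automatically, with no conversion tool required. A minimal sketch, with made-up variable names:

```json
{
  "snowflake_database": "ANALYTICS_DB",
  "snowflake_warehouse": "COMPUTE_WH"
}
```

Saved as terraform.tfvars.json next to the configuration, these values are picked up just like a terraform.tfvars file.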

Get All AWS Lambda Functions and Their Tags and Output to CSV

My bash script runs to retrieve lambda functions and their tags.
It runs OK and does what it needs to do; however, I need the output written to a .txt or .csv file in a readable format.
Below is the script I have;
#!/bin/bash
while read -r name; do
aws lambda list-functions | jq -r ".Functions[].FunctionArn" | xargs -I {} aws lambda list-tags --resource {} --query '{"{}":Tags}' --output text
done
Below is what a returned value looks like after the script runs;
ARN:AWS:LAMBDA:EU-WEST-1:1939999:FUNCTION:example-lambda EXX dev example-lambda False release-1.1.9 False True
I need to get all the items returned and lined up neatly in a txt or csv file. Any help would be appreciated.
I would recommend using the resourcegroupstaggingapi API to solve this problem. This API allows you to get all resources of a specific type along with their tags.
To get all your Lambda functions for your default region and their tags you can run the following command:
aws resourcegroupstaggingapi get-resources --resource-type-filters "lambda"
The output of this command can now be parsed with jq. The great thing about jq is that you can manipulate the output to be CSV.
To get CSV output with two columns (ARN, Tags) you can run the following command:
aws \
resourcegroupstaggingapi \
get-resources \
--resource-type-filters "lambda" \
| jq -r '.ResourceTagMappingList[] | [.ResourceARN, ((.Tags | map([.Key, .Value] | join("="))) | join(","))] | @csv'
The advantage of this approach is that you only have a single HTTP call making it relatively fast. The disadvantage is that you only get the ARN and the tags.
As shimo mentioned in a comment to the question, a way to save the output of a command to a file is using the > operator.
The > operator replaces the existing content of the file with the output of the command. If you want to save the output of multiple commands to the same file, you should use the >> operator instead.
You can also use a pipe and the tee command: the output will be printed to your screen and written to a file (use tee -a to append to the end of an existing file).
I found this tutorial helpful. Based on what you've written, you could pipe the output of your command into a CSV file, or concatenate it into an array and then write it to a file with a newline character at the end of each line.
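To see what the jq filter above produces, and how to write it to a file, here is the same expression run over a small hand-written sample of get-resources output (the ARN and tags are made up):

```shell
# Feed a fabricated get-resources response through the CSV filter
# and save the result with the > operator
cat <<'EOF' | jq -r '.ResourceTagMappingList[] | [.ResourceARN, ((.Tags | map([.Key, .Value] | join("="))) | join(","))] | @csv' > /tmp/lambda_tags.csv
{
  "ResourceTagMappingList": [
    {
      "ResourceARN": "arn:aws:lambda:eu-west-1:123456789012:function:example-lambda",
      "Tags": [
        {"Key": "Environment", "Value": "dev"},
        {"Key": "Release", "Value": "1.1.9"}
      ]
    }
  ]
}
EOF
cat /tmp/lambda_tags.csv
# "arn:aws:lambda:eu-west-1:123456789012:function:example-lambda","Environment=dev,Release=1.1.9"
```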

Unable to register Patch baselineid with Patch Group by aws cli

I am registering a patch baseline with a patch group using the AWS CLI, and I am getting the error mentioned in the subject.
The command used to create the patch baseline is below:
baselineid=$(aws ssm create-patch-baseline --name "Test-NonProd-Baseline" --operating-system "WINDOWS" --tags "Key=Environment,Value=Production" --approval-rules "PatchRules=[{PatchFilterGroup={PatchFilters=[{Key=MSRC_SEVERITY,Values=[Critical,Important]},{Key=CLASSIFICATION,Values=[SecurityUpdates,Updates,ServicePacks,UpdateRollups,CriticalUpdates]}]},ApproveAfterDays=7}]" --description "Baseline containing all updates approved for production systems" --query BaselineId)
I used then above id to register patch baseline with patch group as like below
aws ssm register-patch-baseline-for-patch-group --baseline-id $baselineid --patch-group "Group A"
Unfortunately, I get the error below:
An error occurred (ValidationException) when calling the RegisterPatchBaselineForPatchGroup operation: 1 validation error detected: Value '"pb-08a507ce98777b410"' at 'baselineId' failed to satisfy constraint: Member must satisfy regular expression pattern: ^[a-zA-Z0-9_\-:/]{20,128}$.
Note: Even if I use double quotes around "$baselineid", I still get the same error.
Variable $baselineid has a valid value, please see below:
[ec2-user@ip-172-31-40-59 ~]$ echo $baselineid
"pb-08a507ce98777b410"
I want to understand what the issue is, given that I'm getting a legit value, and how to resolve it.
You can perhaps use this in your second command, basically using the tr utility to strip the double quotes:
echo $baselineid | tr -d '"'
The solution to the double quote issue has many possible fixes in this SO post:
Shell script - remove first and last quote (") from a variable
So a possible solution command could be something like this:
aws ssm register-patch-baseline-for-patch-group --baseline-id $(echo $baselineid | tr -d '"') --patch-group "Group A"
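For completeness, the quoting can also be avoided at the source: the quotes appear because --query alone returns a JSON-encoded string, and adding --output text to the create-patch-baseline command makes the CLI print the bare ID instead. A small sketch of the tr cleanup, with a simulated value in place of the real API call:

```shell
# Simulate the JSON-quoted value the asker saw from --query BaselineId
baselineid='"pb-08a507ce98777b410"'

# tr -d '"' deletes every double quote from the value
clean=$(echo "$baselineid" | tr -d '"')
echo "$clean"   # pb-08a507ce98777b410
```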

How do I parse output from result of CURL command?

I have a Jenkins console output that looks like this:
Started by remote host 10.16.17.13
Building remotely on ep9infrajen201 (ep9) in workspace d:\Jenkins\workspace\Tools\Provision
[AWS-NetProvision] $ powershell.exe -NonInteractive -ExecutionPolicy ByPass "& 'C:\Users\user\AppData\Local\Temp\jenkins12345.ps1'"
Request network range: 10.1.0.0/13
{
    "networks":  [
                     "10.1.0.0/24"
                 ]
}
Finished: SUCCESS
I get this from a curl command that I run to check JENKINS_JOB_URL/lastBuild/consoleText.
My question is, for the sake of some other automation I am doing, how do I get just "10.1.0.0/24" so I can assign it to a shell variable using Linux tools?
Thank you
Since you listed jq among the tags of your duplicate question, I'll assume you have jq installed. You have to clean up your output to get JSON first, then get to the part of JSON you need. awk does the former, jq the latter.
.... | awk '/^{$/{p=1}{if(p){print}}/^}$/{p=0}' | jq -r .networks[0]
The awk script looks for { on a line by itself to turn on a flag p; prints the current line while the flag is set; and switches the flag off when it encounters } on a line by itself.
EDIT: Since this output was generated on a DOS machine, it has DOS line endings (\r\n). To convert those before awk, additionally pipe through dos2unix.
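To illustrate, here is the same awk-and-jq pipeline run over a trimmed copy of the console output from the question:

```shell
# Extract the JSON block from the console text, then pull out the first network
cat <<'EOF' | awk '/^{$/{p=1}{if(p){print}}/^}$/{p=0}' | jq -r '.networks[0]'
Started by remote host 10.16.17.13
Request network range: 10.1.0.0/13
{
    "networks":  [
                     "10.1.0.0/24"
                 ]
}
Finished: SUCCESS
EOF
# prints: 10.1.0.0/24
```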
