I'm using jq to parse some output from AWS cli.
I'm trying to use 'select(startswith("$variable"))', but for some reason it doesn't work, even though it works properly when I substitute the variable's value directly.
Example:
itai#MacBook-Pro ~/src/Scripts - $ echo $App
Analytics
itai#MacBook-Pro ~/src/Scripts - $ aws cloudformation describe-stacks --stack-name $StackName --region $region | jq -r '.Stacks[].Outputs[].OutputKey | select(startswith("Analytics") )'
AnalyticsAutoScalingGroup
itai#MacBook-Pro ~/src/Scripts - $ aws cloudformation describe-stacks --stack-name $StackName --region $region | jq -r '.Stacks[].Outputs[].OutputKey | select(startswith("$App") )'
itai#MacBook-Pro ~/src/Scripts - $
I know I can use grep like so:
aws cloudformation describe-stacks --stack-name $StackName --region $region | jq -r '.Stacks[].Outputs[].OutputKey' | grep $App
But I prefer to use jq all along the command.
Am I doing it wrong or is it just not possible?
You can also do it without using --arg or --argjson. In your case, since your variable holds a string value, you can do it like this:
aws cloudformation describe-stacks --stack-name $StackName --region $region |\
jq -r '.Stacks[].Outputs[].OutputKey | select(startswith('\"$App\"') )'
Notice that the single-quoted jq program is closed just before the variable, and the expanded value is wrapped in escaped double quotes, so jq sees it as a string literal.
See the second answer under "How can environment variables be passed to a jq program? How can a jq program be parameterized?" in the jq FAQ: https://github.com/stedolan/jq/wiki/FAQ
Generally, the best way to pass the value of shell variables to a jq program is using the --arg and/or --argjson command-line options, as described in the manual. Environment variables can be passed in using env.
See also the jq FAQ: https://github.com/stedolan/jq/wiki/FAQ
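Applied to the command from the question, the --arg approach looks like this (--arg App "$App" makes the shell variable available inside the jq program as $App):
aws cloudformation describe-stacks --stack-name "$StackName" --region "$region" |
  jq -r --arg App "$App" '.Stacks[].Outputs[].OutputKey | select(startswith($App))'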
For example, I run this command:
aws ecs list-tasks --cluster aic-prod
It returns the following:
{
    "taskArns": [
        "arn:aws:ecs:ap-northeast-1:678100228133:task/aic-prod-cn/ae340032378f4155bd2d0eb4ee60b5c7"
    ]
}
Then I run the next command, using the ae340032378f4155bd2d0eb4ee60b5c7 part of the output:
aws ecs execute-command --cluster aic-prod-cn --container AicDjangoContainer --interactive --command '/bin/bash' --task ae340032378f4155bd2d0eb4ee60b5c7
I want to do all of this in one command, or in a shell script. Is it possible?
I googled for a regular expression to extract the ID, but I'm still unclear how to do it. Something like:
aws ecs list-tasks --cluster aic-prod | grep taskArns | (regular expression??)
Could you give me some help?
Manipulate JSON with jq
jq is the best tool for manipulating JSON in shell scripts. It has a pretty sophisticated query language.
Here's how you could use it to extract the string you need. I've built the query up piece by piece so you can see what's happening one step at a time:
❯ aws ecs list-tasks --cluster aic-prod | jq .
{
  "taskArns": [
    "arn:aws:ecs:ap-northeast-1:678100228133:task/aic-prod-cn/ae340032378f4155bd2d0eb4ee60b5c7"
  ]
}
❯ aws ecs list-tasks --cluster aic-prod | jq '.taskArns[0]'
"arn:aws:ecs:ap-northeast-1:678100228133:task/aic-prod-cn/ae340032378f4155bd2d0eb4ee60b5c7"
❯ aws ecs list-tasks --cluster aic-prod | jq '.taskArns[0] | split(":")[-1]'
"task/aic-prod-cn/ae340032378f4155bd2d0eb4ee60b5c7"
❯ aws ecs list-tasks --cluster aic-prod | jq '.taskArns[0] | split(":")[-1] | split("/")[-1]'
"ae340032378f4155bd2d0eb4ee60b5c7"
Capture output with $(...)
The next step is to add -r so it prints the string raw without quotes, and use $(...) to capture the output so we can reuse it in a second command.
task_id=$(aws ecs list-tasks --cluster aic-prod | jq -r '.taskArns[0] | split(":")[-1] | split("/")[-1]')
aws ecs execute-command --cluster aic-prod-cn --container AicDjangoContainer --interactive --command '/bin/bash' --task "$task_id"
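If you'd rather have it as a single command, the same capture can be inlined with command substitution, reusing the query built above:
aws ecs execute-command --cluster aic-prod-cn --container AicDjangoContainer \
  --interactive --command '/bin/bash' \
  --task "$(aws ecs list-tasks --cluster aic-prod | jq -r '.taskArns[0] | split(":")[-1] | split("/")[-1]')"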
Or use xargs
Another way to write this is with xargs, which takes the output of one command and passes it as an argument to the next.
aws ecs list-tasks --cluster aic-prod |
jq -r '.taskArns[0] | split(":")[-1] | split("/")[-1]' |
xargs aws ecs execute-command --cluster aic-prod-cn --container AicDjangoContainer --interactive --command '/bin/bash' --task
I have around 100 S3 buckets and I want to enable SSE encryption for them using the AWS CLI.
I've gone through some AWS docs for this. It seems I can use the command below:
aws s3api put-bucket-encryption \
    --bucket my-bucket \
    --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'
But I want to exclude a few buckets. How can I do that?
You say you're running on Linux, so you can use a shell loop.
First, store the list of buckets in a file (the sed command is needed because aws s3 ls prefixes each bucket name with timestamp information):
aws s3 ls | sed -e 's/.* //' > /tmp/$$
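For reference, the raw aws s3 ls output looks like this (the dates and names shown are illustrative), which is why the timestamp columns have to be stripped:
2021-06-14 10:12:00 my-bucket-1
2021-06-14 10:12:05 my-bucket-2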
Then, edit this file and delete any buckets that you don't want to update.
Finally, run your command in a loop:
for b in $(cat /tmp/$$) ; do YOUR_COMMAND_HERE ; done
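For example, with the put-bucket-encryption command from the question filled in:
for b in $(cat /tmp/$$) ; do
    aws s3api put-bucket-encryption --bucket "$b" --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'
done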
In general, this should be done carefully, because it will affect all buckets except the excluded ones. Make sure you know what you're doing.
#!/bin/bash
# excluded buckets list
excluded_list="my-excluded-bucket-1|my-excluded-bucket-2|my-excluded-bucket-3"
# preview the buckets that will be updated (everything except the excluded ones)
aws s3 ls | awk '{print $NF}' | grep -vE "$excluded_list"
echo "#############################################################"
echo "# WARNING: The above s3 buckets encryption will be updated. #"
echo "#############################################################"
read -p "Continue (y/n)?" choice
case "$choice" in
y|Y ) echo "yes";;
n|N ) echo "no";exit;;
* ) echo "invalid";exit;;
esac
for b in $(aws s3 ls | awk '{print $NF}' | grep -vE "$excluded_list"); do
    aws s3api put-bucket-encryption --bucket "$b" --server-side-encryption-configuration '{"Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}'
done
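If you want to spot-check a bucket afterwards, get-bucket-encryption returns its current configuration (my-bucket is a placeholder):
aws s3api get-bucket-encryption --bucket my-bucket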
I have a YAML anchor that deploys logic apps. I want the pipeline to look for logic apps in subdirectories, loop through each one, and deploy it. Here's my deploy-logicapp anchor:
- step: &deploy-logicapp
name: Deploy logic app
script:
- source environment.sh
- pipe: microsoft/azure-arm-deploy:1.0.2
variables:
AZURE_APP_ID: $AZURE_CLIENT_ID
AZURE_PASSWORD: $AZURE_SECRET
AZURE_TENANT_ID: $AZURE_TENANT
AZURE_LOCATION: $AZURE_LOCATION
AZURE_RESOURCE_GROUP: $AZURE_RESOURCE_GROUP
AZURE_DEPLOYMENT_TEMPLATE_FILE: 'logic-apps/$DIR/template.$DEPLOYMENT_SLOT.json'
So in my pipeline I loop through all the subdirectories, and this works; it echoes each $DIR:
- step:
    script:
      - cd logic-apps
      - for DIR in $(ls -l | grep '^d' | awk '{print $9}'); do echo $DIR ; done
What I want to do is call my YAML anchor with the $DIR environment variable from inside this loop. I have tried a number of ways. The problem is that the for loop lives in bash, not YAML, so I cannot call the anchor from it.
Any guidance will be much appreciated.
As it turns out, I need to do everything from the Azure command line. Here's the bash script that loops through all the directories and deploys them:
#!/bin/bash
az login --service-principal -u "$AZURE_CLIENT_ID" -p "$AZURE_SECRET" --tenant "$AZURE_TENANT"
# loop over every subdirectory (one per logic app) and deploy its template
for DIR in $(ls -l | grep '^d' | awk '{print $9}'); do
    az deployment group create --resource-group "$AZURE_RESOURCE_GROUP" --template-file "$DIR/template.$DEPLOYMENT_SLOT.json" --name "$DIR"
done
Next is to only deploy those that have changed :)
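One possible sketch for that last part, assuming the pipeline clones the repo with enough git history for HEAD~1 and the script runs from the repo root (both assumptions, not part of the setup above):
#!/bin/bash
# deploy only the logic-app directories touched since the previous commit
for DIR in $(git diff --name-only HEAD~1 HEAD -- logic-apps/ | cut -d/ -f2 | sort -u); do
    az deployment group create --resource-group "$AZURE_RESOURCE_GROUP" --template-file "logic-apps/$DIR/template.$DEPLOYMENT_SLOT.json" --name "$DIR"
done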
I have a file hosted on an s3 bucket that has the following format:
var1=value1
var2=value2
var3=value3
I wish to create a bash script on my Linux box that, when executed, sets environment variables from the remote file. So far, I have tried the following:
#!/bin/sh
export $(aws s3 cp s3://secret-bucket/file.txt - | sed -e /^$/d -e /^#/d | xargs)
#!/bin/sh
eval $(aws s3 cp s3://secret-bucket/file.txt - | sed 's/^/export /')
But neither of them seems to work: when I run printenv afterwards, the variables I need do not show up. Any help would be very much appreciated.
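Note that a script executed with ./script.sh runs in a child process, so its exports can never reach the calling shell; for the variables to show up in printenv, the script has to be sourced. A minimal sketch, assuming the first variant above is saved as set-env.sh (a hypothetical filename):
# run in the current shell so the exports persist
. ./set-env.sh      # or: source ./set-env.sh
printenv | grep var1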
Is it possible to call a remote function defined in bash (for example, one added in a script stored under /etc/profile.d) via an Ansible ad-hoc command (using the shell or command modules)?
For example, I have the following function that shows apt history:
function apt-history(){
    case "$1" in
        install)
            cat /var/log/dpkg.log | grep 'install '
            ;;
        upgrade|remove)
            cat /var/log/dpkg.log | grep "$1"
            ;;
        rollback)
            cat /var/log/dpkg.log | grep upgrade | \
                grep "$2" -A10000000 | \
                grep "$3" -B10000000 | \
                awk '{print $4"="$5}'
            ;;
        *)
            cat /var/log/dpkg.log
            ;;
    esac
}
Is it possible to call this function directly, by name, via an ad-hoc command using one of the existing Ansible modules? I know it would be possible to create a new script and call that remotely, but this is not what I want to achieve here. Any suggestions appreciated.
You have to instantiate bash on the remote side, using the command or shell module, like this:
ansible localhost -m command -a 'bash -lc apt-history'
The -l flag makes bash act as a login shell, so the scripts under /etc/profile.d (where the function is defined) are sourced before apt-history runs. This is a common trick if you need environment variables to be set up.
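To pass arguments to the function, or to run it against real remote hosts, the same pattern applies; webservers here is a hypothetical inventory group:
ansible webservers -m shell -a 'bash -lc "apt-history install"'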