How to fetch the tags for ec2-describe-instances in a shell script - linux

I want to extract the instance ID and the tags from the result of the ec2-describe-instances command and store the result in a text file. The result set gives:
But I also want the owner and cost.centre tags to be fetched.
Kindly guide me how to do that.

This will help find the instance IDs:
$ aws ec2 describe-instances --filters Name=vpc-id,Values=vpc-xxx | awk '{ print $8 }' | sort -n | grep "i-"
i-4115d38c
i-5d534697
i-6e679a45
i-7a659851
i-8d6bae40
i-cd6f9000
i-d264ad1e
i-d5888618
i-e2332e2e
P.S. This assumes you have already configured the CLI by running "aws configure".

If I understand the question correctly, I think you just need to add expressions to your 2nd grep:
ec2-describe-instances | grep -i "tag" | grep -i -e "name" -e "owner" -e "cost.centre"

This is going to be unnecessarily complicated to do in a shell script. Here are some suggestions:
You are using ec2cli. Don't use that; use the AWS CLI instead. Parsing the output of ec2cli is a pain, whereas the AWS CLI provides output in JSON, which is much easier to parse. Also, AWS is going to support only the AWS CLI henceforth.
The information you need is a perfect use case for a hash. You can run AWS CLI commands from a Perl script and capture the output in a hash; Perl is very powerful for handling such data structures.
Or you can use one of the SDKs from AWS (I use the Ruby SDK), capture the whole response in a hash, and print it the way you want.
Bottom line: capture the tags in a hash to make your life easier. This becomes more and more important when you have multiple tags.
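The hash idea above can be sketched in a few lines of Python (the instance ID and tag values in the sample JSON below are made up, but the shape matches what `aws ec2 describe-instances` returns):

```python
import json

# Hypothetical sample of the JSON returned by `aws ec2 describe-instances`.
sample = json.loads("""
{"Reservations": [{"Instances": [
  {"InstanceId": "i-4115d38c",
   "Tags": [{"Key": "Name", "Value": "web-1"},
            {"Key": "owner", "Value": "alice"},
            {"Key": "cost.centre", "Value": "CC-42"}]}
]}]}
""")

# Capture each instance's tags in a hash (dict) keyed by instance ID.
tags_by_instance = {}
for reservation in sample["Reservations"]:
    for instance in reservation["Instances"]:
        tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
        tags_by_instance[instance["InstanceId"]] = tags

for iid, tags in tags_by_instance.items():
    print(iid, tags.get("owner"), tags.get("cost.centre"))
```

With the tags in a dict, looking up owner or cost.centre per instance is a single key access, and missing tags come back as None instead of breaking the parse.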

Using awk
ec2-describe-instances |awk 'BEGIN{IGNORECASE=1}/(name|owner|cost.center)/&&/tag/'
TAG instance i-c4 Name Rii_Win_SAML
TAG instance i-c42 Owner Rii Pandey

Here is another way to do it without jq or other parsing tools:
aws ec2 describe-instances --query 'Reservations[*].Instances[*].[InstanceId,Tags[?Key==`Name`].Value | [0],Tags[?Key==`cost.centre`].Value | [0]]' --output text

This should work:
aws ec2 describe-instances | jq '.Reservations[].Instances[] | select(contains({Tags: [{Key: "owner"},{Key: "costcenter"}]}))|"Instance ID: \(.InstanceId) Owner: \(.Tags[]|select(.Key=="owner")|.Value), Cost Center: \(.Tags[]|select(.Key=="costcenter")|.Value)"'
TL;DR: The way AWS does tags is a living nightmare, even with jq
Also, use AWS CLI, not old, unsupported tools.

Related

Get All AWS Lambda Functions and Their Tags and Output to CSV

My bash script retrieves lambda functions and their tags.
It runs OK and does what it needs to do; however, I need the output written to a .txt or .csv file in a readable format.
Below is the script I have:
#!/bin/bash
aws lambda list-functions | jq -r '.Functions[].FunctionArn' | xargs -I {} aws lambda list-tags --resource {} --query '{"{}":Tags}' --output text
Below is what a returned value looks like after the script runs;
ARN:AWS:LAMBDA:EU-WEST-1:1939999:FUNCTION:example-lambda EXX dev example-lambda False release-1.1.9 False True
I need to get all the items returned and lined up neatly in a txt or csv file. Any help would be appreciated.
I would recommend using the resourcegroupstaggingapi API to solve this problem. This API allows you to get all resources of a specific type along with their tags.
To get all your Lambda functions in your default region and their tags, you can run the following command:
aws resourcegroupstaggingapi get-resources --resource-type-filters "lambda"
The output of this command can then be parsed with jq. The great thing about jq is that you can reshape the output into CSV.
To get CSV output with two columns (ARN, Tags), you can run the following command:
aws \
resourcegroupstaggingapi \
get-resources \
--resource-type-filters "lambda" \
| jq -r '.ResourceTagMappingList[] | [.ResourceARN, ((.Tags | map([.Key, .Value] | join("="))) | join(","))] | @csv'
The advantage of this approach is that it makes only a single HTTP call, which is relatively fast. The disadvantage is that you only get the ARN and the tags.
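The same transformation as the jq filter above can be sketched in Python, which may be easier to debug (the ARN and tag values in the sample below are made up):

```python
import csv
import io
import json

# Hypothetical sample of `aws resourcegroupstaggingapi get-resources` output.
sample = json.loads("""
{"ResourceTagMappingList": [
  {"ResourceARN": "arn:aws:lambda:eu-west-1:123456789012:function:example-lambda",
   "Tags": [{"Key": "env", "Value": "dev"},
            {"Key": "release", "Value": "1.1.9"}]}
]}
""")

# Two columns, as in the jq filter: the ARN, then "key=value" pairs joined by commas.
buf = io.StringIO()
writer = csv.writer(buf)
for item in sample["ResourceTagMappingList"]:
    tags = ",".join(f'{t["Key"]}={t["Value"]}' for t in item["Tags"])
    writer.writerow([item["ResourceARN"], tags])
print(buf.getvalue())
```

The csv module handles quoting automatically, so the comma-joined tag list stays in a single CSV field.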
As shimo mentioned in a comment on the question, one way to save the output of a command to a file is the > operator.
The > operator replaces the existing content of the file with the output of the command. If you want to save the output of multiple commands to the same file, use the >> operator, which appends instead.
You can also pipe into the tee command: the output is printed on your screen and written to a file (tee -a appends to the end of the file).
I found this tutorial helpful. Based on what you've written, you could pipe the output of your command into a CSV file, or concatenate the results into an array and then write it to a file with a newline character at the end of each line.

How to execute svn command along with grep on windows?

Trying to execute an svn command on a Windows machine and capture its output.
Code:
import subprocess
cmd = "svn log -l1 https://repo/path/trunk | grep ^r | awk '{print \$3}'"
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
'grep' is not recognized as an internal or external command,
operable program or batch file.
I do understand that grep is not a Windows utility.
Is this only possible on Linux? Can the same be executed on Windows? Is my code right?
For Windows, your command will look something like the following:
svn log -l1 https://repo/path/trunk | find "string_to_find"
You need to use the find utility on Windows to get the same effect as grep. For example:
svn --version | find "ra"
* ra_svn : Module for accessing a repository using the svn network protocol.
* ra_local : Module for accessing a repository on local disk.
* ra_serf : Module for accessing a repository via WebDAV protocol using serf.
Use svn log --search FOO instead of grep-ing the command's output.
grep and awk are certainly available for Windows as well, but there is really no need to install them -- the code is easy to replace with native Python.
import subprocess

p = subprocess.run(["svn", "log", "-l1", "https://repo/path/trunk"],
                   capture_output=True, text=True)
for line in p.stdout.splitlines():
    # grep ^r
    if line.startswith('r'):
        # awk '{ print $3 }'
        print(line.split()[2])
Because we don't need a pipeline, and just run a single static command, we can avoid shell=True.
Because we don't want to do the necessary plumbing (which you forgot anyway) for Popen(), we prefer subprocess.run(). With capture_output=True we conveniently get its output in the resulting object's stdout attribute; because we expect text output, we pass text=True (in older Python versions you might need to switch to the old, slightly misleading synonym universal_newlines=True).
I guess the intent is to search for the committer in each revision's output, but this will incorrectly grab the third token on any line which starts with an r (so if you have a commit message like "refactored to use Python native code" the code will extract use from that). A better approach altogether is to request machine-readable output from svn and parse that (but it's unfortunately rather clunky XML, so there's another not entirely trivial rabbithole for you). Perhaps as middle ground implement a more specific pattern for finding those lines -- maybe look for a specific number of fields, and static strings where you know where to expect them.
if line.startswith('r'):
    fields = line.split()
    if len(fields) == 14 and fields[1] == '|' and fields[3] == '|':
        print(fields[2])
You could also craft a regular expression to look for a date stamp in the third |-separated field, and the number of changed lines in the fourth.
For the record, a complete commit message from Subversion looks like
------------------------------------------------------------------------
r16110 | tripleee | 2020-10-09 10:41:13 +0300 (Fri, 09 Oct 2020) | 4 lines
refactored to use native Python instead of grep + awk
(which is a useless use of grep anyway; see http://www.iki.fi/era/unix/award.html#grep)
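The regular-expression idea suggested above can be sketched like this, matching the header line of the sample commit message (the pattern below is one possible shape, not the only correct one):

```python
import re

# Matches log header lines like:
# r16110 | tripleee | 2020-10-09 10:41:13 +0300 (Fri, 09 Oct 2020) | 4 lines
header = re.compile(
    r"^r\d+ \| (?P<committer>[^|]+?) \| "   # revision and committer
    r"\d{4}-\d{2}-\d{2} [^|]+ \| "           # date stamp field
    r"\d+ lines?$"                           # changed-lines count
)

line = "r16110 | tripleee | 2020-10-09 10:41:13 +0300 (Fri, 09 Oct 2020) | 4 lines"
m = header.match(line)
if m:
    print(m.group("committer"))
```

Unlike the plain startswith('r') check, a commit-message line such as "refactored to use native Python" will not match this pattern, so only real header lines yield a committer.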

How to list out all the EBS volumes in CLI

I am using the CLI to get the list of all EBS volumes with some specific tags.
When I use the specific tag, I get None in my output.
I need to list all the volumes tagged Key: Environment, Value: Prod.
I need the output in table format with headings.
I don't know why I am getting None in the Environment column.
As of now I am using a query like:
aws ec2 describe-volumes --filter Name=tag:Environment,Values=prod --query 'Volumes[*].Attachments[].{VolumeID:VolumeId,InstanceID:InstanceId,State:State,Environment:Environment}'
I am getting output like:
-----------------------------------------------------------------------
|                           DescribeVolumes                           |
+-------------+-----------------------+-----------+-------------------+
| Environment | InstanceID            | State     | VolumeID          |
+-------------+-----------------------+-----------+-------------------+
| None        | i-xxxxxxxxxxxxxxxxxx  | attached  | vol-xxxxxxxxxx    |
Please help me
When tinkering with parameters in the AWS CLI, I highly recommend reading:
JMESPath Tutorial
JMESPath Specification
Here's a version of your command that extracts the specific tag:
aws ec2 describe-volumes --filter Name=tag:Environment,Values=prod --query "Volumes[*].{VolumeID:Attachments[0].VolumeId,InstanceID:Attachments[0].InstanceId,State:Attachments[0].State,Environment:Tags[?Key=='Environment']|[0].Value}"
It basically says "Include the Value of the tag that has a Key of Environment".
You might need to play with the quote characters. This worked for me on a Mac, but Windows needs different quotes (eg single vs double).
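What that Tags[?Key=='Environment']|[0].Value expression does can be sketched in Python on made-up data, which is a handy way to sanity-check a JMESPath query:

```python
import json

# Made-up sample of one volume from `aws ec2 describe-volumes`.
volume = json.loads("""
{"Attachments": [{"VolumeId": "vol-0abc", "InstanceId": "i-0def",
                  "State": "attached"}],
 "Tags": [{"Key": "Name", "Value": "data-disk"},
          {"Key": "Environment", "Value": "prod"}]}
""")

# Equivalent of Tags[?Key=='Environment']|[0].Value: first matching tag, or None.
env = next((t["Value"] for t in volume.get("Tags", [])
            if t["Key"] == "Environment"), None)

att = volume["Attachments"][0]
row = {"VolumeID": att["VolumeId"], "InstanceID": att["InstanceId"],
       "State": att["State"], "Environment": env}
print(row)
```

The key point, as in the corrected query, is that the tag value lives under the volume's Tags list, not under Attachments, which is why the original query printed None.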

Azure CLI choose a option from a table

I am trying to get a table output with numbers in the Azure CLI, which would give this as output:
Number Location Name
---------- ----------- -------------
1 somewhere ResourceGroup1
2 somewhere ResourceGroup2
The code I have right now is
az group list --query '[].{location:location, name:name}'
The output I'm getting right now is
Location Name
---------- ---------------
somewhere ResourceGroup1
somewhere ResourceGroup2
My end goal is that choosing number 1 selects the name, so I can use it later in the script.
For your issue, there is no Azure CLI command that can achieve this, but you can make it work with a script. For example, a shell script:
#!/bin/bash
az group list --query '[].{location: location, name: name}' -o table > output.txt
# This command just adds a line number to each line of the file; it's optional.
cat -n output.txt > result.txt
# You can then get the group name on a specific line (pass the line number to awk):
awk -v line=4 '{if (NR == line) print $3}' result.txt
Hope this will be helpful.
You can use a contains expression (JMESPath) in the filter to filter the results:
filter=resource_group_name
filterExpression="[?contains(name, '$filter')].name"
az group list --query "$filterExpression" -o tsv
which is a much better way compared to the already present answers.
more reading:
http://jmespath.org/specification.html#filterexpressions
http://jmespath.org/specification.html#built-in-functions
From what I understand, you are trying to create a variable from the output to use later. You do not need to put it in a table first. Using the same example, you could do something like this:
gpname="$(az group list --query [0].name --output tsv)"
az group show -n $gpname
Good Luck.....
Information in Comments::
What you are looking for is more Linux than Azure. I am not a Linux CLI expert, but here is a basic script that you can build on:
#!/bin/bash
gpnames="$(az group list --query [].name --output tsv)"
PS3='Select a number: '
select gpname in $gpnames
do
    az group show -n $gpname
    break
done
Hope this helps......
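The numbered-menu idea can also be sketched in plain Python, independent of the shell's select builtin (the group names below are placeholders standing in for `az group list ... -o tsv` output):

```python
def pick(names, number):
    """Print a numbered menu and return the name for a 1-based choice, or None."""
    for i, name in enumerate(names, start=1):
        print(f"{i:>6}  {name}")
    if 1 <= number <= len(names):
        return names[number - 1]
    return None

groups = ["ResourceGroup1", "ResourceGroup2"]  # placeholder group names
chosen = pick(groups, 1)
print("selected:", chosen)
```

An out-of-range choice returns None rather than raising, mirroring how select re-prompts on invalid input.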

How do I parse output from result of CURL command?

I have a Jenkins console output that looks like this:
Started by remote host 10.16.17.13
Building remotely on ep9infrajen201 (ep9) in workspace d:\Jenkins\workspace\Tools\Provision
[AWS-NetProvision] $ powershell.exe -NonInteractive -ExecutionPolicy ByPass "& 'C:\Users\user\AppData\Local\Temp\jenkins12345.ps1'"
Request network range: 10.1.0.0/13
{
    "networks":  [
                     "10.1.0.0/24"
                 ]
}
Finished: SUCCESS
I get this from a curl command that I run to check JENKINS_JOB_URL/lastBuild/consoleText.
My question is, for the sake of some other automation I am doing, how do I get just "10.1.0.0/24" so I can assign it to a shell variable using Linux tools?
Thank you
Since you listed jq among the tags of your duplicate question, I'll assume you have jq installed. You have to clean up your output to get JSON first, then get to the part of JSON you need. awk does the former, jq the latter.
.... | awk '/^{$/{p=1}{if(p){print}}/^}$/{p=0}' | jq -r .networks[0]
The AWK script looks for { on its own on a line to turn on a flag p; prints the current line if the flag is set; and switches off the flag when it encounters } all by itself.
EDIT: Since this output was generated on a DOS machine, it has DOS line endings (\r\n). To convert those before awk, additionally pipe through dos2unix.
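The same flag-based extraction the awk script does can be sketched in Python, using console text in the shape shown in the question (the addresses are the question's sample values):

```python
import json

# Made-up console output matching the shape from the question.
console = """Started by remote host 10.16.17.13
Request network range: 10.1.0.0/13
{
    "networks":  [
                     "10.1.0.0/24"
                 ]
}
Finished: SUCCESS
"""

# Same idea as the awk flag p: keep lines from a lone "{" through a lone "}".
keep, lines = False, []
for line in console.splitlines():
    if line.strip() == "{":
        keep = True
    if keep:
        lines.append(line)
    if line.strip() == "}":
        keep = False

network = json.loads("\n".join(lines))["networks"][0]
print(network)
```

Because strip() also removes a trailing carriage return, this version tolerates DOS line endings without a separate dos2unix pass.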
