EC2-Describe-Instances for DNS - linux

I found a script online a few months ago that I modified to come up with the solution below.
It runs ec2-describe-instances and uses Perl to collect the instance names and IP addresses, then updates Route 53.
It works, but it's a bit inefficient. I'm more of a .NET programmer and a little out of my depth here, so hopefully someone can help or point me in the right direction.
What I'm thinking is: save a copy of the ec2-describe-instances output from the last run, get a fresh copy, compare the two, and only run the Route 53 update for instances whose IP has changed. Any ideas?
#!/bin/bash
root=$(dirname "$0")
ec2-describe-instances -O ###### -W ##### --region eu-west-1 |
perl -ne '/^INSTANCE\s+(i-\S+).*?(\S+\.amazonaws\.com)/
and do { $dns = $2; print "$1 $dns\n" }; /^TAG.+\sName\s+(\S+)/
and print "$1 $dns\n"' |
perl -ane 'print "$F[0] CNAME $F[1] --replace\n"' |
grep -v '^i-' |
xargs --verbose -n 4 -I myvar /bin/sh -c '{ /usr/local/bin/cli53 rrcreate -x 300 contoso.com 'myvar'; sleep 1; printf "\n\n"; }'
--edit--
Basically what I need is a way to compare a saved file with the output of EC2-Describe-Instances and then only return lines that contain differences to be fed back into the rest of the code.
Something like:
ChangedLines(File.txt, "ec2-describe-instances -O ###### -W ##### --region eu-west-1") | perl......
If
File 1 =
ABC
DEF
GHI
JKL
Output =
ABC
DEF
GHJ
JKL
Return =
GHJ
Example of ec2-describe-instances output:
PROMPT> ec2-describe-instances
RESERVATION r-1a2b3c4d 111122223333 my-security-group
INSTANCE i-1a2b3c4d ami-1a2b3c4d ec2-203-0-113-25.compute-1.amazonaws.com ip-10-251-50-12.ec2.internal running my-key-pair 0 t1.micro YYYY-MM-DDTHH:MM:SS+0000 us-west-2a aki-1a2b3c4d monitoring-disabled 184.73.10.99 10.254.170.223 ebs paravirtual xen ABCDE1234567890123 sg-1a2b3c4d default false
BLOCKDEVICE /dev/sda1 vol-1a2b3c4d YYYY-MM-DDTHH:MM:SS.SSSZ true
RESERVATION r-2a2b3c4d 111122223333 another-security-group
INSTANCE i-2a2b3c4d ami-2a2b3c4d ec2-203-0-113-25.compute-1.amazonaws.com ip-10-251-50-12.ec2.internal running my-key-pair 0 t1.micro YYYY-MM-DDTHH:MM:SS+0000 us-west-2c windows monitoring-disabled 50.112.203.9 10.244.168.218 ebs hvm xen ABCDE1234567890123 sg-2a2b3c4d default false
BLOCKDEVICE /dev/sda1 vol-2a2b3c4d YYYY-MM-DDTHH:MM:SS.SSSZ true
I need to capture the lines in which the IP address has changed from the previous run.
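The comparison described above can be sketched with comm over sorted copies of the old and new output (a sketch; the file handling is mine, not from the original script):

```shell
#!/usr/bin/env bash
# Print lines present in the new output but not in the old one,
# i.e. instances that are new or whose IP/DNS name changed.
changed_lines() {
    old="$1"
    new="$2"
    # comm -13 suppresses lines unique to the first file (-1) and lines
    # common to both (-3), leaving only lines unique to the second file.
    comm -13 <(sort "$old") <(sort "$new")
}
```

With the ABC/DEF/GHI/JKL example above, `changed_lines old.txt new.txt` prints only `GHJ`. The same pattern works on the ec2-describe-instances output: save each run to a file, diff it against the previous run's file, and pipe only the changed lines into the Perl/cli53 stages.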

It sounds like your actual goal is to update Amazon Route 53 for newly launched Amazon EC2 instances. There are a few different approaches you could take.
List instances launched during a given period
Use the AWS Command-Line Interface (CLI) to list instances that were recently launched. I found this example on https://github.com/aws/aws-cli/issues/1209:
aws ec2 describe-instances --query 'Reservations[].Instances[?LaunchTime>=`2015-03-01`][].{id: InstanceId, type: InstanceType, launched: LaunchTime}'
Modified for your needs:
aws ec2 describe-instances --query 'Reservations[].Instances[?LaunchTime>=`2015-03-01`][].{id: InstanceId, ip: PrivateIpAddress}' --output text
Let the instance update itself
Thinking about it a different way: why not have the instances update Amazon Route 53 themselves? Use a startup script (via User Data) that calls the AWS CLI to update Route 53 directly!
Instances can retrieve their public IP address via instance metadata:
curl http://169.254.169.254/latest/meta-data/public-ipv4
Then call aws route53 change-resource-record-sets to update records.
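A minimal sketch of that self-update approach (the hosted zone ID, record name, and TTL below are assumptions; substitute your own):

```shell
#!/usr/bin/env bash
# Build the JSON change batch for a single A-record UPSERT.
build_change_batch() {
    name="$1"
    ip="$2"
    printf '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"%s","Type":"A","TTL":300,"ResourceRecords":[{"Value":"%s"}]}}]}' "$name" "$ip"
}

# On the instance itself (only works inside EC2):
#   ip=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)
#   aws route53 change-resource-record-sets \
#       --hosted-zone-id Z123EXAMPLE \
#       --change-batch "$(build_change_batch web1.contoso.com. "$ip")"
```

UPSERT creates the record if it doesn't exist and replaces it if it does, so the same script works for first boot and for IP changes.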

Related

In Prometheus Pushgateway getting metrics of stopped MIG instance also along with running instance

MIG instances in GCP keep getting scaled up and down, an instance gets a new name every time it scales up, and MIGs don't have static IPs, so I am using the Prometheus Pushgateway to collect metrics. But after setting everything up, I am getting metrics from scaled-down instances as well as running ones, and the scaled-down instances' metrics show up as flat lines. I don't want metrics from scaled-down instances; is there a way to automatically remove only the stopped instances' metrics without disturbing the running ones?
I created a .sh script that checks for stopped instances and skips sending their metrics, but it is not working. I also created a crontab entry to run the script, but that is not working either.
#!/bin/bash
# Get the list of active MIG instances
active_instances=$(gcloud compute instances list --filter="status=RUNNING" --format="value(name)")
PUSHGATEWAY_SERVER=http://Address   # actual address used in the real script
# Get the list of existing metrics from the push gateway
existing_metrics=$(curl $PUSHGATEWAY_SERVER/metrics 2>/dev/null | grep "node-exporter" | awk '{print $NF}')
# Loop through each active instance and scrape its metrics using Prometheus
for instance in $active_instances; do
    NODE_NAME=$instance
    # Check if the instance is part of a MIG and if it is still running
    if gcloud compute instances describe $NODE_NAME --format="value(mig,status)" | grep -q "mig"; then
        instance_status=$(gcloud compute instances describe $NODE_NAME --format="value(status)")
        if [ "$instance_status" == "RUNNING" ]; then
            # Collect and push the metrics
            curl -s localhost:9100/metrics | curl --data-binary @- $PUSHGATEWAY_SERVER/metrics/job/node-exporter/instance/$NODE_NAME
        else
            # If the instance is not running, delete its metrics from the push gateway
            metric_to_delete=$(echo $existing_metrics | grep "$NODE_NAME")
            if [ -n "$metric_to_delete" ]; then
                curl -X DELETE $PUSHGATEWAY_SERVER/metrics/job/node-exporter/instance/$NODE_NAME
            fi
        fi
    fi
done
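One thing the script above never does is delete metric groups for instances that have disappeared entirely: scaled-down instances are no longer returned by gcloud at all, so the for loop never visits them. A sketch of the missing cleanup pass, assuming the same $PUSHGATEWAY_SERVER and job name (the helper function is mine):

```shell
#!/usr/bin/env bash
# Return 0 if word $1 appears in the whitespace-separated list $2.
is_active() {
    case " $2 " in
        *" $1 "*) return 0 ;;
        *)        return 1 ;;
    esac
}

# Cleanup pass (sketch): walk the instance labels already pushed to the
# Pushgateway and delete any group whose instance is no longer active.
#   for inst in $pushed_instances; do
#       is_active "$inst" "$active_instances" ||
#           curl -s -X DELETE "$PUSHGATEWAY_SERVER/metrics/job/node-exporter/instance/$inst"
#   done
```

Deleting the group (rather than filtering on the Prometheus side) removes the flat lines at the source.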

Script for checking if a created AMI is available or not

I am trying to write a script that takes a backup of an AMI (Amazon Machine Image) and, once it's completed and its status shows 'available', emails us to say so.
I have the first part covered but am having trouble with the second part, i.e. continuously checking for when the image becomes available and emailing us. To check whether the status is 'available', I am using the following command:
/usr/bin/aws ec2 describe-images --image-ids=$AMI_ID --query "Images[*].{st:State}" | grep -e "available" | wc -l
This returns 1 when the AMI is available, but I'm having trouble writing a loop that runs the above command repeatedly and checks whether the output equals 1.
Please help me figure out this loop.
PS: image creation takes anywhere from 10 to 30 minutes, or even more in some cases.
You could use an infinite loop:
while true
do
    if /usr/bin/aws ec2 describe-images --image-ids "$AMI_ID" --query "Images[*].{st:State}" | grep -q "available"; then
        break
    fi
    sleep 60
done
(grep -q exits 0 only when "available" is found, so the loop breaks as soon as the AMI is ready; the sleep avoids hammering the API.)
You can try the below as well (update sleepTime as needed).
Notice I've added the flag --executable-users self to your command to list the images available to you, and --image-ids so only the AMI in question is counted:
sleepTime=60   # sleep time in seconds
while true ; do
    count=$(aws ec2 describe-images --image-ids "$AMI_ID" --executable-users self --query "Images[*].{st:State}" | grep -e "available" | wc -l)
    if [[ $count == 1 ]] ; then
        echo "Image is ready... Add your emailing code here"
        exit 0
    fi
    sleep $sleepTime
    printf "."
done
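The polling pattern in both answers can be factored into a small reusable helper (a sketch; the AWS command and email address in the usage comment are placeholders):

```shell
#!/usr/bin/env bash
# Poll a command until its output equals the wanted value, then return 0.
wait_until() {
    check_cmd="$1"
    wanted="$2"
    interval="${3:-60}"
    while true; do
        state=$(eval "$check_cmd")
        [ "$state" = "$wanted" ] && return 0
        sleep "$interval"
    done
}

# Usage sketch:
#   wait_until 'aws ec2 describe-images --image-ids "$AMI_ID" \
#       --query "Images[0].State" --output text' available 60 &&
#     echo "AMI $AMI_ID is available" | mail -s "AMI ready" you@example.com
```

Using `--query "Images[0].State" --output text` yields the bare state string, which is easier to compare than counting grep matches.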
From the AWS image-exists docs, you could include the following code in the script:
aws ec2 wait image-exists \
    --image-ids ami-0abcdef1234567890
(replace ami-0abcdef1234567890 with the AMI ID you are waiting for)
According to the documentation, image-exists produces no output; once the image is found, the wait returns and the script continues. Note there is also aws ec2 wait image-available, which waits until the image's state is actually 'available' rather than merely existing, which matches the goal here more closely.

Nagios NSCA 4th variable always $OUTPUT$

I have implemented nsca in Nagios for distributed monitoring purposes, and everything seems to be working, except for one oddity that I can't seem to find an answer to anywhere.
The passive checks are sent and received, but the output shows the 4th variable to always be uninitialized, and thus it shows up as $OUTPUT$. It appears as though the checks are showing the proper information on the non-central server, but when it's sent, it doesn't seem to be interpolating properly.
commands.cfg
define command{
command_name submit_check_result
command_line /usr/share/nagios3/plugins/eventhandlers/submit_check_result $HOSTNAME$ '$SERVICEDESC$' $SERVICESTATE$ '$OUTPUT$'
}
submit_check_result
#!/bin/sh
return_code=-1
case "$3" in
OK)
return_code=0
;;
WARNING)
return_code=1
;;
CRITICAL)
return_code=2
;;
UNKNOWN)
return_code=-1
;;
esac
/usr/bin/printf "%s\t%s\t%s\t%s\n" "$1" "$2" "$return_code" "$4" | /usr/sbin/send_nsca 192.168.40.168 -c /etc/send_nsca.cfg
Example service
define service {
host_name example_host
service_description PING
check_command check_icmp
active_checks_enabled 1
passive_checks_enabled 0
obsess_over_service 1
max_check_attempts 5
normal_check_interval 5
retry_check_interval 3
check_period 24x7
notification_interval 30
notification_period 24x7
notification_options w,c,r
contact_groups admins
}
The output from the log on the non-central server shows:
Nov 29 22:52:52 nagios-server nagios3: SERVICE ALERT: example_host;PING;OK;HARD;5;OK - 192.168.1.1: rta nan, lost 0%
The output from the log on the central server shows:
EXTERNAL COMMAND: PROCESS_SERVICE_CHECK_RESULT;example_host;PING;0;$OUTPUT$
Status information on the central server (web interface) shows:
PING OK 2016-11-29 22:54:50 0d 0h 54m 6s 1/5 $OUTPUT$
It's not just this service, either. It's all services, including those that are essentially preconfigured for the Nagios server itself (check_load, check_procs, etc.).
Any assistance would be appreciated.
I found the issue. Turns out the submit_check_result script above is not formatted properly for submitting check results to a remote server. It will do it, but it doesn't account for the status properly. Below is the proper script:
#!/bin/sh
# SUBMIT_CHECK_RESULT_VIA_NSCA
# Written by Ethan Galstad (egalstad@nagios.org)
# Last Modified: 10-15-2008
#
# This script will send passive check results to the
# nsca daemon that runs on the central Nagios server.
# If you simply want to submit passive checks from the
# same machine that Nagios is running on, look at the
# submit_check_result script.
#
# Arguments:
# $1 = host_name (Short name of host that the service is
# associated with)
# $2 = svc_description (Description of the service)
# $3 = return_code (An integer that determines the state
# of the service check, 0=OK, 1=WARNING, 2=CRITICAL,
# 3=UNKNOWN).
# $4 = plugin_output (A text string that should be used
# as the plugin output for the service check)
#
#
# Note:
# Modify the NagiosHost parameter to match the name or
# IP address of the central server that has the nsca
# daemon running.
printfcmd="/usr/bin/printf"
NscaBin="/usr/sbin/send_nsca"
NscaCfg="/etc/send_nsca.cfg"
NagiosHost="central_host_IP_address"
# Fire the data off to the NSCA daemon using the send_nsca script
$printfcmd "%s\t%s\t%s\t%s\n" "$1" "$2" "$3" "$4" | $NscaBin -H $NagiosHost -c $NscaCfg
# EOF
Much better results.
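If you want to verify what is being sent without a running NSCA daemon, the tab-separated payload can be built and inspected separately (a sketch; the helper name is mine, not part of the original script):

```shell
#!/bin/sh
# Build the host<TAB>service<TAB>return_code<TAB>output line that
# send_nsca expects on stdin for a passive service check.
nsca_payload() {
    printf '%s\t%s\t%s\t%s\n' "$1" "$2" "$3" "$4"
}

# Inspect the raw bytes (tabs show as ^I):
#   nsca_payload example_host PING 0 "OK - 192.168.1.1: rta 0.5ms" | cat -A
# Send it (requires the nsca daemon on the central server):
#   nsca_payload example_host PING 0 "OK" | /usr/sbin/send_nsca -H central_host -c /etc/send_nsca.cfg
```

Inspecting with `cat -A` makes it easy to spot a missing field, such as an uninterpolated `$OUTPUT$` macro.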

Linux command for public ip address

I want a command to get a Linux machine's (Amazon) external/public IP address.
I tried hostname -I and other commands from blogs and Stack Overflow, like
ifconfig | sed -En 's/127.0.0.1//;s/.*inet (addr:)?(([0-9]*\.){3}[0-9]*).*/\2/p'
and many more, but they all give me the internal IP address.
Then I found some sites which provide an API for this.
Example: curl http://ipecho.net/plain; echo
But I don't want to rely on a third-party website service. So, is there any command-line tool available to get the external IP address?
The simplest of all would be:
curl ifconfig.me
A cleaner output
ifconfig eth0 | awk '/inet / { print $2 }' | sed 's/addr://'
You could use this script
#!/bin/bash
#
echo 'Your external IP is: '
curl -4 icanhazip.com
But that is relying on a third party albeit a reliable one.
I don't know if you can get your external IP without asking someone/somesite i.e. some third party for it, but what do I know.
You can also just run:
curl -4 icanhazip.com
This does the same thing as the command above; the -4 flag requests the output as IPv4.
You can use this command to get both the public and private IP (the second line is the private IP; the third line is the public IP):
ip addr | awk '/inet / {sub(/\/.*/, "", $2); print $2}'
I would suggest using the command external-ip (sudo apt-get install miniupnpc), as it (I'm almost sure) uses the UPnP protocol to ask the router instead of an external website, so it should be faster; of course, the router has to have UPnP enabled.
You can simply do this :
curl https://ipinfo.io/ip
It might not work on Amazon because you might be using NAT or something for the server to access the rest of the world (and for you to SSH into it). If you are unable to SSH into the IP listed in ifconfig, then you are either on a different network or don't have SSH enabled.
This is the best I can do (only relies on my ISP):
ISP=`traceroute -M 2 -m 2 -n -q 1 8.8.8.8 | grep -m 1 -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'`
extIP=`ping -R -c 1 -t 1 -s 1 -n $ISP | grep RR | grep -m 1 -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'`
echo $extIP
Or, the functionally same thing on one line:
ISP=`traceroute -M 2 -m 2 -n -q 1 8.8.8.8 | grep -m 1 -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'` | ping -R -c 1 -t 1 -s 1 -n $ISP | grep RR | grep -m 1 -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'
To save it to a temporary, hidden file, add > .extIP to the end of the last line, then cat .extIP to see it.
If your ISP's address never changes (honestly, I'm not sure whether it would or not), then you could fetch it once and replace $ISP in line two with it.
This has been tested on a Mac with wonderful success.
The only adjustment on Linux that I've found so far is that the traceroute -M flag might need to be -f instead.
It also relies heavily on ping's -R flag, which tells it to send back the "Record Route" information; this isn't always supported by the host. But it's worth a try!
The only other way to do this without relying on any external servers is to get it by curl'ing your modem's status page. I've done this successfully with our Frontier DSL modem, but it's dirty, slow, unreliable, and requires hard-coding your modem's password.
Here's the "process" for that:
curl http://[user]:[password]@[modem's LAN address]/[status.html] | grep 'WanIPAddress =' | grep -m 1 -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'
That fetches the raw html, searches for any lines containing "WanIpAddress =" (change that so it's appropriate for your modem's results), and then narrows down those results to an IPv4 style address.
Hope that helps!
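The IPv4-narrowing step used throughout these pipelines can be isolated as a small filter (a sketch; the function name is mine, and the modem URL in the comment is a placeholder):

```shell
#!/bin/sh
# Print the first IPv4-looking token found on stdin.
first_ipv4() {
    grep -m 1 -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}'
}

# e.g. against a modem status page (credentials are placeholders):
#   curl -s http://user:password@192.168.1.1/status.html \
#       | grep 'WanIPAddress =' | first_ipv4
```

Note the pattern matches any dotted quad, including invalid ones like 999.1.1.1, so it relies on the upstream grep having already selected the right line.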
As others suggested, we have to rely on a third-party service, which I don't feel safe doing. So I found the Amazon metadata API in this answer:
$ curl http://169.254.169.254/latest/meta-data/public-ipv4
54.232.200.77
For more details, https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html#instancedata-data-retrieval
The super-easy way is to use the glances tool. You can install it on Ubuntu using:
$ sudo apt install glances
then using it with:
$ glances
and at the top of the terminal it highlights your public IP address, along with lots of other information about your system (like what htop does) and network status.
For a formatted output, use:
dig TXT +short o-o.myaddr.l.google.com @ns1.google.com
It'll give you formatted output like this:
"30.60.10.11"
Also, FYI, dig is much faster than curl and wget.
The following works as long as you have curl installed.
curl ifconfig.me

Shell scripting support -- multiple nslookups on different URLs

My company hosts two datacenters, and traffic is expected to be routed in a round-robin fashion. We have a bunch of URLs, and traffic is expected to be served from both DCs. I check whether traffic is hitting both DCs by doing a continuous nslookup:
for i in {1..100}; do nslookup www.xyz.com ; done | grep 'Address:' | grep -v 10 | sort | uniq | wc -l
If the count is 1, I know traffic is going to only one DC, which is an error; if the output is 2, I know everything is working as expected.
Currently I have a bunch of sites listed in a file. I want to write a script that will cat the file, do an nslookup against each of the entries, and echo each entry along with the count. I'm hoping for output like:
www.xyz.com ==> 2 DCs active
www.123.com ==> 1 DC active
I couldn't come up with the logic to attain this output. I'd appreciate your support.
Disclaimer: I'm assuming there is no anycast involved here.
First, it would be good to specify the DNS server you are querying; a wild nslookup may give you cached data.
So, assuming they're all being served from the same DNS server (and these are NOT anycast), you can easily look up the records using dig @DNSSERVER +short <query>.
So first make a text file like this (the first field on each line is the domain; after the comma is the DNS server you want to look up against):
domains.txt:
google.com,4.2.2.3
sampledomain.org,8.8.4.4
joeblogs.net,
I've intentionally left joeblogs with no DNS server to look up against; it will fall back to the default DNS server or cache on your workstation.
Now I've made a simple script that:
- loops over the file, line by line
- summarises the results of the DNS lookups
- starts all over again every 10 seconds
dig.sh
#!/usr/bin/env bash
DEFAULT_DNS_SERVER=4.2.2.3
while true ; do
    while read line
    do
        domain="$(cut -d "," -f 1 <<<"$line")"
        server="$(cut -d "," -f 2 <<<"$line")"
        if [ "X$server" == "X" ]; then
            export server="$DEFAULT_DNS_SERVER"
        fi
        result="$(dig +short @"$server" "$domain" | wc -l)"
        echo "$domain ==> ${result} DCs active"
    done < domains.txt
    sleep 10
done
And run it, e.g.:
stark@fourier ~ $ ./dig.sh
google.com ==> 6 DCs active
sampledomain.org ==> 4 DCs active
joeblogs.net ==> 1 DCs active
google.com ==> 11 DCs active
sampledomain.org ==> 4 DCs active
joeblogs.net ==> 1 DCs active
....etc...
To install dig on a modern Ubuntu-based distro (I use Mint Cinnamon):
sudo apt-get install dnsutils
Good luck
You can do something like this:
#!/bin/bash
readarray -t HOSTS < hosts_file
for HOST in "${HOSTS[@]}"; do
    COUNT=$(for i in {1..100}; do nslookup "$HOST"; done | awk '/Address/ && !/10/ && !a[$0]++ { ++count } END { print count }')
    echo "$HOST ==> $COUNT DCs active"
done
