Shell scripting support -- multiple nslookups on different URLs - linux

My company hosts two datacenters, and traffic is expected to be routed between them in a round-robin fashion. We have a bunch of URLs, and traffic for each should be served from both DCs. I check whether traffic is hitting both DCs by doing a continuous nslookup:
for i in {1..100}; do nslookup www.xyz.com ; done | grep 'Address:' | grep -v 10 | sort | uniq | wc -l
If the count is 1, I know traffic is going to only one DC, which is an error; if the output is 2, everything is working as expected.
Currently I have a bunch of sites listed in a file. I want to write a script that reads the file, runs the nslookup check against each entry, and echoes the entry along with the count. I'm hoping for output that looks like:
www.xyz.com ==> 2 DCs active
www.123.com ==> 1 DC active
I couldn't work out the logic to get this output and would appreciate your help.

Disclaimer: I'm assuming there is no anycast involved here.
First, it would be good to specify which DNS server you are asking; a plain nslookup may be giving you cached data.
So, assuming the names are all being served from the same DNS server (and are NOT anycast), you can easily check the answers using dig @DNSSERVER +short <query>.
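For example (the resolver and hostname here are only placeholders), asking one specific server directly and counting the distinct answers would look like this:
# query 8.8.8.8 directly, bypassing any locally cached answer
dig @8.8.8.8 +short www.xyz.com | sort -u | wc -l
A result of 2 would mean that server is handing out both DC addresses.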
So first make a text file like this (the first field on each line is the domain; after the comma is the DNS server you want to look it up against):
domains.txt:
google.com,4.2.2.3
sampledomain.org,8.8.4.4
joeblogs.net,
I've intentionally left joeblogs.net with no DNS server to look up against; it will fall back to the default DNS server (or the cache) on your workstation.
Now I've made a simple script that:
loops over the file, line by line
summarises the results of the DNS lookups
starts all over again every 10 seconds
dig.sh
#!/usr/bin/env bash

DEFAULT_DNS_SERVER=4.2.2.3

while true ; do
    while read -r line ; do
        domain="$(cut -d "," -f 1 <<<"$line")"
        server="$(cut -d "," -f 2 <<<"$line")"
        # fall back to the default resolver when no server is given after the comma
        if [ -z "$server" ]; then
            server="$DEFAULT_DNS_SERVER"
        fi
        result="$(dig +short @"$server" "$domain" | wc -l)"
        echo "$domain ==> ${result} DCs active"
    done < domains.txt
    sleep 10
done
And run it, e.g.:
stark@fourier ~ $ ./dig.sh
google.com ==> 6 DCs active
sampledomain.org ==> 4 DCs active
joeblogs.net ==> 1 DCs active
google.com ==> 11 DCs active
sampledomain.org ==> 4 DCs active
joeblogs.net ==> 1 DCs active
....etc...
To install "dig" on a modern ubuntu based distro (I use Mint Cinnamon):
sudo apt-get install dnsutils
Good luck

You can do something like this:
#!/bin/bash

# read the list of hostnames (one per line) into an array
readarray -t HOSTS < hosts_file

for HOST in "${HOSTS[@]}"; do
    # count the distinct non-10.x "Address" lines seen across 100 lookups
    COUNT=$(for i in {1..100}; do nslookup "$HOST"; done | awk '/Address/ && !/10/ && !a[$0]++ { ++count } END { print count }')
    echo "$HOST ==> $COUNT DCs active"
done
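Here hosts_file is assumed to be a plain list of hostnames, one per line, matching the file you described, e.g.:
www.xyz.com
www.123.com
readarray -t loads each line into the HOSTS array with the trailing newline stripped, so the loop sees clean hostnames.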

Related

Why echo isn't able to access a variable in Ubuntu 20.04

I can't find anything that helps with this issue, even though the answer may be very simple.
I'm trying to get Squid to redirect traffic through a 3G dongle, so I need to record the IP in a routing table and change it in squid.conf every time it changes.
I'm using this code in /etc/ppp/ip-up.local so it's launched every time the connection comes up (or reconnects) with a new IP.
#!/bin/bash
if [[ "$PPP_IFACE" == "ppp0" ]] ; then
    TABLE=uplink2
    /bin/sed -i "s&\(tcp_outgoing_address\).*\(very1\)&\1 $PPP_LOCAL \2&" /etc/squid/squid.conf
fi
## generate ip subnet for route
baseip=`echo "$PPP_LOCAL" | cut -d"." -f1-3`
/usr/sbin/ip route add "$baseip".0/24 dev "$PPP_IFACE" src "$PPP_LOCAL" table "$TABLE"
/usr/sbin/ip route add default via "$PPP_REMOTE" table "$TABLE"
/usr/sbin/ip rule add from "$PPP_LOCAL" table "$TABLE"
/usr/sbin/squid -k reconfigure
/usr/bin/systemctl restart squid
exit 0
The problem is that baseip=`echo "$PPP_LOCAL" | cut -d"." -f1-3` cannot use $PPP_LOCAL.
I tried adding echo $PPP_LOCAL >> file.txt, but it just adds an empty line.
What's odd to me is that sed does access the variable and correctly modifies squid.conf with the new address.
How can I fix this?
I also have a "sub-question", I'm a complete newbie just starting to learn and I'm not sure whether or not I should add an ip-down code to remove the table rules
Thanks everyone for the help

Add time and IP before bash cursor on Linux server SSH

I was thinking about syntax looking like this:
IP: 123.123.123 | 28.10.2016 17:24 | root@vps:~$
Is it possible?
I want to log my bash history with this data for debugging and backup.
I tried this, but the time is static and I don't know how to print the IP:
echo "force_color_prompt=yes" >> /root/.bashrc
echo "PS1='$(date +%T) | ${debian_chroot:+($debian_chroot)}\[\033[01;31m\]\u#\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '" >> /root/.bashrc
Maybe the IP could be printed only the first time after SSH login; is that possible?
Thanks
To get the IP printed out, add the following lines to your .bashrc:
ip=`ip a | grep wlan0 | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b" | head -1`
echo $ip
unset ip
Now whenever you open a new terminal, you will get the IP printed out for you. The same goes for SSH.
Please note that I'm using my wlan0 adapter to get the IP; you may need to change that to eth0 depending on your environment.
Set ip with Farhad's answer, then:
PS1='IP: $ip | \D{%d.%m.%G %H:%M} | \u@\h:\W$'
The time is dynamic.
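Putting the two answers together, the relevant part of /root/.bashrc could look roughly like this (a sketch; wlan0 is still an assumption, so use whichever interface carries your SSH traffic):
# grab the first IPv4 address on the chosen interface
ip=$(ip a | grep wlan0 | grep -oE "\b([0-9]{1,3}\.){3}[0-9]{1,3}\b" | head -1)
# single quotes on purpose: $ip and \D{...} are re-expanded every time the prompt is drawn
PS1='IP: $ip | \D{%d.%m.%G %H:%M} | \u@\h:\W$ '
Note that $ip has to stay set for the prompt to keep showing it, so skip the unset from the first answer if you combine them like this.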
References
Controlling the Prompt - Bash Reference Manual
STRFTIME(3) - Linux Programmer's Manual

Using SSH to grep keywords from multiple servers

I am completely new to scripting and am having some trouble piecing this together from some other online resources.
What I want to do is run a bash script that greps for a keyword domain in the /etc/hosts file on multiple servers. In the output file, I am looking for a list of the servers that contain this keyword; I am not looking to make any changes, simply to find which machines have this value. Since there are a lot of machines in question, checking each server by hand won't work, but the machine I am doing this from does have SSH keys for all of them.
I have the servers I want to query listed in three files (one per environment) on the machine I am going to run this script from.
Linux.prod.dat
Linux.qa.dat
Linux.dev.dat
Each file is simply a list of server names in the environment. For example..
server1
server2
server3
etc...
I am totally lost here and would appreciate any help.
Here is an example:
KEYWORD=foo
SERVERLIST=Linux.prod.dat
OUTPUTLIST=output.dat

for host in $(cat ${SERVERLIST}); do
    # record the host only if the remote grep finds the keyword in /etc/hosts
    if [[ -n "$(ssh ${host} grep "${KEYWORD}" /etc/hosts && echo Y)" ]]; then
        echo ${host} >> ${OUTPUTLIST}
    fi
done
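Since you have one server list per environment, you could also wrap the same idea in an outer loop. This is only a sketch reusing the three file names from your question; the output file names are made up:
KEYWORD=foo
for SERVERLIST in Linux.prod.dat Linux.qa.dat Linux.dev.dat; do
    while read -r host; do
        # -n stops ssh from swallowing the rest of the server list on stdin;
        # grep -q: we only care whether the keyword is present on that host
        if ssh -n -o BatchMode=yes "$host" grep -q "$KEYWORD" /etc/hosts; then
            echo "$host" >> "output.$SERVERLIST"
        fi
    done < "$SERVERLIST"
done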
Try GNU parallel
parallel --tag ssh {} grep -l "KEYWORD" /etc/hosts :::: Linux.prod.dat
parallel runs the command multiple times, substituting {} with the lines from the Linux.prod.dat file.
The --tag switch prepends the value from Linux.prod.dat to each line of output, so the output of the command will look like:
server1 /etc/hosts
server5 /etc/hosts
server7 /etc/hosts
Here server1, server5, etc. are the names of the servers whose /etc/hosts contains KEYWORD (grep -l prints only the file name when it finds a match).
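If you only want the server names, and want to cover all three environments in one pass, a variation along these lines should also work (treat it as a sketch; KEYWORD and the file names are taken from the question):
# print just the host name whenever the remote grep finds the keyword
cat Linux.prod.dat Linux.qa.dat Linux.dev.dat |
    parallel 'ssh -n {} grep -q KEYWORD /etc/hosts && echo {}'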

My IP changes dynamically. How can I stay updated with the latest IP?

My ISP provides dynamic IP addresses. I have forwarded a port to a Raspberry Pi, access it through SSH, and also use it as a web server. The problem is that the IP changes every 3-4 days. Is there any way, or a script, so that I can be informed of or updated with the new IP address?
Thank You.
You can write a script like:
============
#!/bin/bash
OUT=$(wget http://checkip.dyndns.org/ -O - -o /dev/null | cut -d: -f 2 | cut -d\< -f 1)
echo $OUT > /root/ipfile
============
Set a cron job to execute this every 3 hours or so, and configure your MTA to send the file /root/ipfile to your email address (that can be done from cron too). mutt can be a useful tool to attach the file and handle the email delivery.
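For instance, a root crontab entry (crontab -e) along these lines would run the script every 3 hours and mail the file; the script path and the address are placeholders, and it assumes mutt plus a working local mailer:
# m h dom mon dow  command
0 */3 * * * /root/checkip.sh && mutt -s "Current public IP" -a /root/ipfile -- you@example.com < /dev/null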

How to grep for the next instance of a variable in a logfile?

So I am trying to parse FTP logs and see if a certain user is logging in securely. So far I have this to pull the few lines leading up to the user logging in:
cat proftpd.log.2 | grep -B 3 "USER $sillyvariable"
and this is a sample output it creates
::ffff:127.0.0.0 UNKNOWN ftp [04/Jan/2013:11:03:06 -0800] "AUTH TLS" 234 -
::ffff:127.0.0.0 UNKNOWN ftp [04/Jan/2013:11:03:06 -0800] "USER $sillyvariable" 331 -
Now this is a perfect example of what I want: it displays the AUTH TLS message and the IPs match. However, this is not always the case, as many users are constantly logging in and out and most of the time the output is jumbled.
Is there a way I can grep for the USER $sillyvariable and check that the matching IP has an "AUTH TLS" entry in a preceding line, so I know they logged in securely? I guess you could say I want to grep for the user and then grep backwards to see if the connection they originated from (matching IPs) was secure. I'm kind of stuck on this and could really use some help.
Thanks!
$ grep -B3 'USER $sillyvariable' proftpd.log.2 |
tac | awk 'NR==1 {IP=$1} $1==IP {print}' | tac
::ffff:127.0.0.0 UNKNOWN ftp [04/Jan/2013:11:03:06 -0800] "AUTH TLS" 234 -
::ffff:127.0.0.0 UNKNOWN ftp [04/Jan/2013:11:03:06 -0800] "USER $sillyvariable" 331 -
This uses tac to reverse the lines in the grep result. It then keeps only the lines whose IP address matches the one on the USER line (which is the first line once the output is reversed). Finally it runs tac again to put the lines back in the original order.
I realize I am very late to this party, but the comment I made about the AUTH statement possibly being more than 3 lines earlier left me wondering.
I took a slightly different approach, in which I make minimal assumptions (based on limited knowledge of the contents of your log file):
There is one user per IP address (may not be true if they are behind a firewall)
For every AUTH entry there should be exactly one "good" USER entry from the same IP address
A sorted list of IP addresses which have entries in the log file will show more "USER" than "AUTH" requests for any IP address from which a "bad" request was made
If those assumptions are reasonable, then a simple bash script does quite a nice job of giving you a list of the users that didn't log in properly (which is not exactly what you asked for, but close):
#!/bin/bash
# first, find all the "correct" IP addresses that did the login "right", and sort by IP address:
grep -F "AUTH TLS" $1 | awk '{print $1}' | sort > goodLogins
# now find all the lines in the log file with USER and sort by IP address
grep USER $1 | awk '{print $1}' | sort > userLogins
# now see if there were user logins that didn't correspond to a "good" login:
echo The following lines in the log file did not have a corresponding AUTH statement:
echo
sdiff goodLogins userLogins | grep "[<>]" | awk '{print $2 ".*USER"}' > badUsers
grep -f badUsers $1
echo -----
Note that this leaves you with three temporary files (goodLogins, userLogins, badUsers) which you might want to remove. I assume you know how to create a text file with the above code, make it executable (chmod u+x scrubLog), and run it with the name of the log file as the parameter (./scrubLog proftpd.log.2).
Enjoy!
PS - I am not sure what you mean by "logging in correctly", but there are other ways to enforce good behaviors. For example, you could block port 21 so only sftp (port 22) requests come through, you could block anonymous ftp, ... But that's not what you were asking about.
