WHOIS bash script sometimes not fetching data - Linux

My whois bash script works for a few domains but not for others.
When I run the same command directly in my terminal for one of the failing domains, I do see output. Also, sometimes the script does not run properly and gets stuck, and I have to interrupt it.
Why is that, and how can I fix it?
Let's say the domain.txt file contains: gmail.com, zoom.us, facebook.com, bank.com etc.
The script is:
#!/bin/bash
echo "Please enter the full path of txt file"
read path
filename=$path
while read line
do
    echo "Checking domain $line"
    a=$(whois $line | grep -i -e "Creation Date" | head -1)
    b=$(whois $line | grep -i -e "no match" | head -1)
    echo "$a"$line >> /root/outputdomain.csv
done < $filename
echo "file has been processed successfully."
A sample input txt file is:
linkedin.com
zoom.us
sbi.co.in
facebook.com
sap.com
hsbc.com
Expected Output is:
Creation Date: 2002-11-02T15:38:11Z linkedin.com
Creation Date: 2002-04-24T15:03:39Z zoom.us
What is working for me currently:
Creation Date: 2002-11-02T15:38:11Z linkedin.com
Creation Date: 1997-03-29T05:00:00Z facebook.com
But there is no output for zoom.us or sbi.co.in.
If I run the command below, I am able to fetch the required data:
$ whois zoom.us | grep -E "Creation Date" | head -1
Creation Date: 2002-04-24T15:03:39Z

I don't use/know whois, but based on your post this is what I came up with.
#!/usr/bin/env bash
shopt -s extglob    # enable extended globs for the +([[:blank:]]) pattern below
echo "Please enter the full path of txt file"
read -r path
filename=$path
while read -r line; do
    printf 'Processing %s...\n' "$line"
    # -F: fixed string, -i: case-insensitive, -m1: stop after the first match
    if a=$(whois "$line" | grep --line-buffered -Fi -m1 "creation date"); then
        # strip the leading blanks whois indents its output with
        printf '%s %s\n' "${a##+([[:blank:]])}" "$line" >> outputdomain.csv
    else
        printf '%s No domain match\n' "$line" >> outputdomain.csv
    fi
    sleep 5    # pause between queries; some whois servers throttle rapid requests
done < "$filename"
-m might not be POSIX, but it is in both GNU and BSD grep.
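If you need to stay portable to a grep without -m, a rough equivalent (an untested sketch) is to keep the head -1 idea from your original script and test the captured value instead of grep's exit status, since the pipe through head discards it:
# Portable variant: take the first match with head and test whether
# anything was captured (grep's exit status is lost in the pipe).
a=$(whois "$line" | grep -i "creation date" | head -n 1)
if [ -n "$a" ]; then
    printf '%s %s\n' "$a" "$line" >> outputdomain.csv
else
    printf '%s No domain match\n' "$line" >> outputdomain.csv
fi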

Related

I have to read a config file and, after reading it, run an scp command to fetch a file from each of the servers listed in the config.

I have a config file that has details like
#pem_file username ip destination
./test.pem ec2-user 00.00.00.11 /Desktop/new/
./test1.pem ec2-user 00.00.00.22 /Desktop/new/
Now I need to know how I can fix the below script to fetch everything using scp:
while read "$(cat $conf | awk '{split($0,array,"\n")} END{print array[]}')"; do
scp -i array[1] array[2]@array[3]:/home/ubuntu/documents/xyz.xml array[4]
done
please help me.
Build your while read like this:
#!/bin/bash
while read -r file user ip destination
do
    echo "$file"
    echo "$user"
    echo "$ip"
    echo "$destination"
    echo ""
done < <(grep -Ev "^#" "$conffile")
Use these variables to build your scp command, as sketched below.
The grep is there to strip commented-out lines.
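For example, a minimal sketch of the scp call built from those variables (the remote path /home/ubuntu/documents/xyz.xml is taken from your question):
#!/bin/bash
while read -r file user ip destination
do
    # key, user, ip and target directory all come from one config line
    scp -i "$file" "$user@$ip:/home/ubuntu/documents/xyz.xml" "$destination"
done < <(grep -Ev "^#" "$conffile")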
If you prefer using an array, you can do this:
#!/bin/bash
while read -r -a line
do
    echo "${line[0]}"
    echo "${line[1]}"
    echo "${line[2]}"
    echo "${line[3]}"
    echo ""
done < <(grep -Ev "^#" "$conffile")
See https://mywiki.wooledge.org/BashFAQ/001 for looping on files and commands output using while.

if condition for when a token is not found in a shell script

I am trying to construct an if-else block where one of the branches echoes a message if, when running a grep command on a text file, the specified token cannot be found.
The grep command is
grep -i -n "token" file | cut -d':' -f 1
If the token is found, it will return the line number as usual. I want to know how to account for the case when the token does not exist in the text file and the terminal simply outputs nothing when the command is executed.
i.e.
if []
then
echo "This token does not exist in the file"
fi
I hope that's what you need:
result=$(grep -i -n "token" file | cut -d':' -f 1)
if [[ -z "$result" ]]; then
    echo "Not found"
else
    echo "$result"
fi
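If you only need to know whether the token exists at all, you can also test grep's exit status directly and skip capturing the output:
# grep -q prints nothing and exits 0 on a match, non-zero otherwise
if ! grep -qi "token" file; then
    echo "This token does not exist in the file"
fi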

Remove all but the latest X files from sftp via bash-script

I have a working bash script to create backups and upload them as a tar archive to a remote sftp server.
After the upload, the script should remove all but the latest 20 backup files. I can't use pipes, grep, or anything like that inside sftp itself, and I haven't managed to get the file-listing result back into my bash script.
export SSHPASS=$(cat /etc/backup/pw)
SFTPCONNECTION=$(cat /etc/backup/sftp-connection)
sshpass -e sftp $SFTPCONNECTION - << SOMEDELIMITER
ls -lt backup-*.tar
quit
SOMEDELIMITER
There is this nice one-liner, but I did not figure out how to use it in my case (sftp).
This script deletes all tar files in the given directory except the latest 20. The -t flag sorts by time and date. The <<< redirection expands $RESULT and feeds it into the stdin of the while loop. I'm not entirely pleased with it as it has to create multiple connections, but with sftp I don't believe there is another way.
RESULT=`echo "ls -t path/to/old_backups/" | sftp -i ~/.ssh/your_ssh_key user@server.com | grep tar`
i=0
max=20
while read -r line; do
    (( i++ ))
    if (( i > max )); then
        echo "DELETE $i...$line"
        echo "rm $line" | sftp -i ~/.ssh/your_ssh_key user@server.com
    fi
done <<< "$RESULT"
Thanks to codelitt I went with this solution:
export SSHPASS=$(cat /etc/backup/pw)
SFTPCONNECTION="username@host"
RESULT=`echo "ls -tl backup*.tar" | sshpass -e sftp $SFTPCONNECTION | grep -oP "backup.*\.tar"`
i=0
max=24
while read -r line; do
    # echo "$line"
    (( i++ ))
    if (( i > max )); then
        echo "DELETE $i...$line"
        echo "rm $line" | sshpass -e sftp $SFTPCONNECTION
    fi
done <<< "$RESULT"
It's a slight modification of his version:
it counts/removes only files named backup*.tar
it uses ls -l (for line-based listings)
I had to use sshpass instead of a certificate-based authentication. The sftp password is inside /etc/backup/pw
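One connection per deleted file adds up. If that bothers you, a possible variant (an untested sketch, building on the script above) is to collect all the rm commands first and send them over a single sftp session:
# Collect every rm into one batch, then open a single sftp session for all of them
batch=""
i=0
max=24
while read -r line; do
    (( i++ ))
    if (( i > max )); then
        batch+="rm $line"$'\n'
    fi
done <<< "$RESULT"
if [ -n "$batch" ]; then
    printf '%s' "$batch" | sshpass -e sftp $SFTPCONNECTION
fi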

Shell script for remote copy and then processing the file

The below script works fine. But when I try to add a command to remote-copy the file and then assign the variable FILENAME to the file received from the remote copy, the while loop doesn't work. I am quite new to scripting, so I'm not able to find out what I'm missing. Please help!
#!/bin/sh
#SCRIPT: File processing
#PURPOSE: Process a file line by line with redirected while-read loop.
SSID=$1
ASID=$2
##rcp server0:/oracle/v11//dbs/${SSID}_ora_dir.lst /users/global/rahul/${ASID}_clone_dir.lst
##FILENAME=/users/global/rahul/${ASID}_clone_dir.lst
count=0
while read LINE
do
    echo $LINE | sed -e "s/${SSID}/${ASID}/g"
    count=`expr $count + 1`
done < $FILENAME
echo -e "\nTotal $count Lines read"
grep -v -e "pattern3" -e "pattern5" -e "pattern6" -e "pattern7" -e "pattern8" -e "pattern9" -e "pattern10" -e "pattern11" -e "
pattern12" ${ASID}_.lst > test_remote.test
When you say "the while loop doesn't work": if you get an error message, you should include it in your question to give us a clue.
Are you sure the rcp command is successful? Does the file /users/global/rahul/${ASID}_clone_dir.lst exist after the rcp has completed?
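A minimal sketch of that check, assuming the rcp line from your script is uncommented:
# Abort early if the remote copy fails or the file never arrived
if ! rcp server0:/oracle/v11//dbs/${SSID}_ora_dir.lst /users/global/rahul/${ASID}_clone_dir.lst; then
    echo "rcp failed" >&2
    exit 1
fi
FILENAME=/users/global/rahul/${ASID}_clone_dir.lst
[ -f "$FILENAME" ] || { echo "$FILENAME does not exist" >&2; exit 1; }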
Btw your while loop is inefficient. This should be equivalent:
sed -e "s/${SSID}/${ASID}/g" < "$FILENAME"
count=$(wc -l "$FILENAME" | awk '{print $1}')
echo -e "\nTotal $count Lines read"
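If you'd rather drop the awk step, redirecting the file into wc makes it print the count alone (some wc implementations pad the number with spaces, which is harmless here):
count=$(wc -l < "$FILENAME")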

Bash | curl | curls 2 URLs then stops

I am trying to write a simple bash script that will take a list of URLs from a text document and curl each one in order to see what its contents are. It lets me curl 2 sites and creates the output files for the rest, but only the first 2 actually get downloaded. I have already managed to write a script that pulls their IPs and places them in a separate file using grep. At first I tried:
#!/bin/bash
for var in `cat host.txt`; do
curl -s $var >> /tmp/ping/html/$var.html
done
I have tried with and without the silent switch. I then tried the following:
#!/bin/bash
for var in `head -2 host.txt`; do
curl $var >> /tmp/ping/html/$var.html
wait
done
for var in `head -4 host.txt | tail -2`; do
curl $var >> /tmp/ping/html/$var.html
done
This would try to do them all at the same time, again stopping after 2:
#!/bin/bash
for var in `head -2 host.txt`; do
curl $var >> /tmp/ping/html/$var.html
done
wait
for var in `head -4 host.txt | tail -2`; do
curl $var >> /tmp/ping/html/$var.html
done
This would do the same. I am new to bash scripting and only know some of the basics; any help would be appreciated.
Start with the simple: verify that you are in fact iterating over the entire list:
# This is the recommended way to iterate over the file. See
# http://mywiki.wooledge.org/BashFAQ/001
while read -r var; do
    echo "$var"
done < host.txt
Then add in the call to curl, checking its exit status:
while read -r var; do
    echo "$var"
    curl "$var" >> "/tmp/ping/html/$var.html" || echo "curl failed: $?"
done < host.txt
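Note that curl exits 0 even when the server returns an error page; if you want HTTP errors to trip that exit-status check, the -f/--fail option makes curl return a non-zero status on them:
curl -f "$var" >> "/tmp/ping/html/$var.html" || echo "curl failed for $var: $?"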
You redirect into a file named after $var, which can result in an invalid filename because of the two slashes in the URL. Additionally, I would quote the URL. For example, it works with the basename of the URL:
#!/bin/bash
for var in `cat host.txt`; do
    name=$(basename "$var")
    curl -v -s "$var" -o "/tmp/ping/html/$name.html"
done
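Be aware that basename keeps only the last path component, so two URLs ending in the same name would overwrite each other. A possible alternative (my own sketch, not part of the answer above) is to flatten the whole URL into the filename:
# Replace every / in the URL with _ so each URL maps to a unique filename
name=${var//\//_}
curl -s "$var" -o "/tmp/ping/html/$name.html"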
You may also want to skip blank lines and comments (#):
#!/bin/bash
file="host.txt"
curl="curl"
while read -r line
do
    # skip comments and blank lines
    [[ $line = \#* ]] || [[ -z "${line}" ]] && continue
    filename=$(basename "$line")
    $curl -s "$line" >> "/tmp/ping/html/$filename.html"
done < "$file"
