Testing active ssh keys on the local network - linux

I am currently trying to write a bash script that will check whether the SSH keys on a server are still linked to known hosts that are active on the local area network. Below is the beginning of my bash script:
#!/bin/bash
# LAN SSH KEYS DISCOVERY SCRIPT
# TRYING TO FIND THOSE SSH KEYS NOW
cat /etc/passwd | grep /bin/bash > bash_users
cat bash_users | cut -d ":" -f 6 > cutted.bash_users_home_dir
for bash_users in $(cat cutted.bash_users_home_dir)
do
ls -al $bash_users/.ssh/*id_* >> ssh-keys.txt
done
# DISCOVERING THE KNOWN_HOSTS NOW
for known_hosts in $(cat cutted.bash_users_home_dir)
do
cat "$known_hosts"/.ssh/known_hosts | awk '{print $1}' | sort -u >> hosts_known.txt
sleep 2
done
hosts_known=$(wc -l < hosts_known.txt)
echo "We have $hosts_known known hosts that could be still active via SSH
keys"
# TIME TO TEST WHICH SSH servers are still active with the SSH keys
# AND THIS IS WHERE I AM FROZEN...
# Would love to have bash script that could
# ssh -l $users_that_have_/bin/bash -i $ssh_keys $ssh_servers
# Would also be very nice if it could save active
# SSH servers with the valid keys in output.txt in the format
# username:local-IP:/path/to/SSH_key
Please feel free to edit/modify the bash script above if that better serves the goals described.
Any help would be much appreciated.
Thanks

The following works cool:
</etc/passwd \
grep /bin/bash |
cut -d: -f6 |
sudo xargs -i -- sh -c '
[ -e "$1" ] && cat "$1"
' -- {}/.ssh/known_hosts |
cut -d' ' -f1 |
tr ',' '\n' |
sed '
/^\[/{
s/\[\(.*\)\]:\(.*\)/\1 \2/;
t;
};
s/$/ 22/;
' |
sort -u |
xargs -l1 -- sh -c '
if echo "~" | nc -q1 -w3 "$1" "$2" | grep -q "^SSH"; then
echo "#### SUCCESS $1 $2";
else
echo "#### ERROR $1 $2";
fi
' --
So:
Start with /etc/passwd
Filter all "bash_users" as you call them
Extract only the user home directories with cut -d: -f6
For each user home directory, sudo xargs -i -- runs a small shell that:
Checks if the file .ssh/known_hosts inside that home directory exists
If it does, prints it
Keep only the host-name field
Multiple host names may share the same key and are separated by commas, so replace each comma with a newline
Now a sed script:
If a line starts with a [ that means it has a format of [host]:port and I want to replace it with host port
If the line does not start with a [ I add 22 to the end of the line so it's host 22
Then I sort -u
Now for each line:
I get the SSH version banner: echo "~" | nc hostname port returns something like "SSH-2.0-OpenSSH_6.0" + newline + "Protocol mismatch".
So if the line returned by nc hostname port starts with SSH, that means an SSH server is running on the other side.
I added a timeout for unresponsive hosts; the nc -w timeout option may be used for that, and nc -q 1 should probably also be specified.
Now the real fun: when you add the --max-procs option to the last xargs call, you can check all the hosts simultaneously. On my host I have 47 unique addresses and xargs -P30 checks them ALL in about 2 seconds.
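For instance, the last stage with 30 parallel probes might look like this (the same test as above, only with --max-procs added):
xargs -P30 -l1 -- sh -c '
if echo "~" | nc -q1 -w3 "$1" "$2" | grep -q "^SSH"; then
echo "#### SUCCESS $1 $2";
else
echo "#### ERROR $1 $2";
fi
' --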
But there really are some problems. The script needs root to read every user's known_hosts. Worse, the known_hosts entries may be hashed. It would be better to first know the list of hosts on your network and then generate known_hosts from it; that would look like ssh-keyscan -f list_of_hosts > ~/.ssh/known_hosts or similar. Generally, ssh-keygen -F hostname should be used to check whether a host exists in known_hosts; sadly there is no listing command. The known_hosts file format can be found in the ssh documentation.
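For illustration, a rough sketch of that approach, assuming list_of_hosts is a hypothetical file with one host name or address per line:
# Regenerate known_hosts from a known list of hosts (this overwrites the current file):
ssh-keyscan -f list_of_hosts > ~/.ssh/known_hosts
# ssh-keygen -F also matches hashed entries; exit status 0 means the host was found:
while read -r host; do
    ssh-keygen -F "$host" > /dev/null && echo "$host is present in known_hosts"
done < list_of_hosts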

Related

Using ssh inside a script to run another script that itself calls ssh

I'm trying to write a script that builds a list of nodes, then sshes into the first node of that list
and runs a checknodes.sh script, which itself is just a for loop that calls checknode.sh.
The first 2 lines seem to work OK and the list builds successfully, but then I either get just the echo line of checknodes.sh printed out, or an error saying cat: gpcnodes.txt: No such file or directory
MYSCRIPT.sh:
#gets the master node for the job
MASTERNODE=`qstat -t -u \* | grep $1 | awk '{print$8}' | cut -d'@' -f 2 | cut -d'.' -f 1 | sed -e 's/$/.com/' | head -n 1`
#builds list of nodes in job
ssh -qt $MASTERNODE "qstat -t -u \* | grep $1 | awk '{print$8}' | cut -d'@' -f 2 | cut -d'.' -f 1 | sed -e 's/$/.com/' > /users/issues/slow_job_starts/gpcnodes.txt"
ssh -qt $MASTERNODE cd /users/issues/slow_job_starts/
ssh -qt $MASTERNODE /users/issues/slow_job_starts/checknodes.sh
checknodes.sh
for i in `cat gpcnodes.txt `
do
echo "### $i ###"
ssh -qt $i /users/issues/slow_job_starts/checknode.sh
done
checknode.sh
str=`hostname`
cd /tmp
time perf record qhost >/dev/null 2>&1 | sed -e 's/^/${str}/'
perf report --pretty=raw | grep % | head -20 | grep -c kernel.kallsyms | sed -e "s/^/`hostname`:/"
When ssh -qt $MASTERNODE cd /users/issues/slow_job_starts/ is finished, the changed directory is lost.
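Each ssh invocation starts a fresh shell on the remote host, so the cd does not carry over to the next command. One way around this, sketched with the paths from the question, is to chain the commands in a single session:
ssh -qt $MASTERNODE "cd /users/issues/slow_job_starts/ && ./checknodes.sh"
Alternatively, put the cd at the top of checknodes.sh itself.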
With the backquotes replaced by $(..) (not an error here, but get used to it), the script would be something like
for i in $(cat /users/issues/slow_job_starts/gpcnodes.txt)
do
echo "### $i ###"
ssh -nqt $i /users/issues/slow_job_starts/checknode.sh
done
or better
while read -r i; do
echo "### $i ###"
ssh -nqt $i /users/issues/slow_job_starts/checknode.sh
done < /users/issues/slow_job_starts/gpcnodes.txt
Perhaps you would also like to change your last script (start with cd /users/issues/slow_job_starts)
You will find more problems, like sed -e 's/^/${str}/' (the ${str} inside single quotes won't be expanded to the hostname), but this should get you started.
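For example, with double quotes the shell expands ${str} before sed sees the expression (a small sketch, not the complete fix):
str=$(hostname)
echo "example output line" | sed -e "s/^/${str}:/"   # double quotes, so ${str} is expanded by the shell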
EDIT:
I added option -n to the ssh call.
Redirects stdin from /dev/null (actually, prevents reading from stdin).
Without this option only one node is checked.

echo text to multiple files in bash script

I am working on a bash script that uses pssh to run external commands, then join the output of the commands with the IP of each server. pssh has an option -o that writes a file for each server into a specified directory, but if the commands do not run, you just have an empty file. What I am having issues with is updating these empty files with something like "Server Unreachable" so that I know there was a connection issue reaching the server and to not cause problems with the rest of the script.
Here is what I have so far:
#!/bin/bash
file="/home/user/tools/test-host"
now=$(date +"%F")
folder="./cnxhwinfo-$now/"
empty="$(find ./cnxhwinfo-$now/ -maxdepth 1 -type f -name '*' -size 0 -printf '%f%2d')"
command="echo \$(uptime | awk -F'( |,|:)+' '{d=h=m=0; if (\$7==\"min\") m=\$6; else {if (\$7~/^day/) {d=\$6;h=\$8;m=\$9} else {h=\$6;m=\$7}}} {print d+0,\"days\",h+0,\"hours\",m+0,\"minutes\"}'), \$(hostname | awk '{print \$1}'), \$(sudo awk -F '=' 'FNR == 2 {print \$2}' /etc/connex-release/version.txt), \$(lscpu | awk -F: 'BEGIN{ORS=\", \";} NR==4 || NR==6 || NR==15 {print \$2}' | sed 's/ *//g') \$(free -k | awk '/Mem:/{print \$2}'), \$(df -Ph | awk '/var_lib/||/root/ {print \$2,\",\"\$5,\",\"}')"
pssh -h $file -l user -t 10 -i -o /home/user/tools/cnxhwinfo-$now -x -tt $command
echo "Server Unreachable" | tee "./cnxhwinfo-$now/$empty"
ls ./cnxhwinfo-$now >> ./cnx-data-$now
cat ./cnxhwinfo-$now/* >> ./cnx-list-$now
paste -d, ./cnx-data-$now ./cnx-list-$now >>./cnx-data-"$(date +"%F").csv"
I was trying to use find to locate the empty files and write "Server Unreachable" to them using tee, like this:
echo "Server Unreachable" | tee "./cnxhwinfo-$now/$empty"
if the folder specified doesn't already exist i get this error:
tee: ./cnxhwinfo-2019-09-03/: Is a directory
And if it does exist (ie, i run the script again), it instead creates a file named after the IP addresses returned by the find command, like this:
192.168.1.2 192.168.1.3 192.168.1.4 1
I've also tried:
echo "Server Unreachable" | tee <(./cnxhwinfo-$now/$empty)
The find command outputs the IP addresses on a single line with a space in between each one, so I thought that would be fine for tee to use, but I feel like I am either running into syntax issues, or am going about this the wrong way. I have another version of this same script that uses regular ssh and works great, just much slower than using pssh.
empty should be an array, assuming none of the file names contain any whitespace.
readarray -t empty < <(find ...)
echo "Server unreachable" | (cd ./cnxhwinfo-$now/; tee "${empty[#]}" > /dev/null)
Otherwise, you are building a single file name by concatenating the empty file names.
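Put together with the find call from the question, that could look like the following sketch (assumes GNU find and bash 4+; the -printf format is changed to '%f\n' so each file name lands on its own line):
readarray -t empty < <(find "./cnxhwinfo-$now/" -maxdepth 1 -type f -size 0 -printf '%f\n')
if (( ${#empty[@]} > 0 )); then
    echo "Server Unreachable" | (cd "./cnxhwinfo-$now/" && tee "${empty[@]}" > /dev/null)
fi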

Multiple ssh in a Single command

I need to pipe multiple ssh commands in order to run commands on a remote machine.
The commands are working fine with a single ssh but not after piping ssh.
E.g.
ssh abc@remotemachine1.com "a=hello ; echo \$a"
returns hello
but
ssh abc@remotemachine1.com ssh abc@remotemachine2.com"a=hello ; echo \$a"
produces no output.
Similarly:
ssh abc#remotemachine1.com "mountedDir=\$(df \tmp | grep -vi filesystem | rev | cut -d ' ' -f 1); mount | grep -w \$mountedDir"
This is working fine, producing the following output:
/dev/sda2 on / type xfs (rw,relatime,attr2,inode64,noquota)
but
ssh abc@remotemachine1.com ssh abc@remotemachine2.com "mountedDir=\$(df \tmp | grep -vi filesystem | rev | cut -d ' ' -f 1); mount | grep -w \$mountedDir"
is throwing the following error:
Usage: grep [OPTION]... PATTERN [FILE]...
Try 'grep --help' for more information.
Note: Passwordless ssh is established from my machine to remotemachine1.com and from remotemachine1.com to remotemachine2.com
If for some reason you do not want to modify your ssh_config file, you need to use ssh -t which will cause a real TTY to be allocated on machine 2, like so:
ssh -t abc@remotemachine1.com ssh abc@remotemachine2.com"a=hello ; echo \$a"
Be wary, as using this method implies that all the SSH login authentication procedures will happen at remotemachine1.com, so if you have security concerns, you are better off with @allo's answer.
ssh abc@remotemachine1.com ssh abc@remotemachine2.com"a=hello ; echo \$a"
Looks wrong. If you want to jump from remotemachine1 to remotemachine2 have a look at the ProxyJump option in the ssh config. You can give it on the command line using the -o option of the ssh binary.
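For example, something along these lines (a sketch; ProxyJump requires OpenSSH 7.3 or newer):
ssh -o ProxyJump=abc@remotemachine1.com abc@remotemachine2.com "a=hello ; echo \$a"
# or with the -J shorthand for ProxyJump:
ssh -J abc@remotemachine1.com abc@remotemachine2.com "a=hello ; echo \$a"
Since the command then runs directly on remotemachine2, only one level of quoting is needed.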
It finally worked after I added multiple escape characters:
ssh abc@remotemachine1.com " ssh abc@remotemachine2.com \" a=hello ;echo \\\$a \" "
And
ssh abc#remotemachine1.com " ssh abc#remotemachine2.com \" mountedDir=\\\$(df /var | grep -vi filesystem | rev | cut -d ' ' -f 1); mount | grep -w \\\$mountedDir | grep -vi 'noexec' \" "

bash script while loop

Hi, I am new to bash scripting.
This is my script. It uses a while loop and works up to the point of pinging the IPs listed in the server file, but then I want to use those IPs to create a file for each IP, as I am doing below, and that part has some issue; I think it may need more loops. As it is, it only takes one IP as input and makes only one file, and the further additions to the required file are not applied to the whole input: say there are 5 IPs in the file, it only makes the file for the first IP.
#!/bin/bash
l2=$(tail -1 /root/serverfile | grep hadoop | tr ' ' '\n' | grep hadoop)
awk '{print $1}' < serverFile.txt | while read ip; do
if ping -c1 $ip >/dev/null 2>&1; then
cd /usr/local/nagios/etc/objects/Hadoop
cp Hadoop-node.cfg $l2.cfg
sed -i 's/192.168.0.1/'$ip'/' $l2.cfg
sed -i 's/Hadoop-node/'$l2'/' $l2.cfg
echo "cfg_file=/usr/local/nagios/etc/objects/Hadoop/$l2.cfg" >> /usr/local/nagios/etc/nagios.cfg
service nagios restart
echo " Node is added successfull"
echo $ip IS UP
else
echo $ip IS DOWN NOT PINGING
fi
done
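One thing worth noting: $l2 is computed once, before the loop, so every iteration overwrites the same $l2.cfg file and restarts nagios. A hedged sketch of a per-IP version, assuming each line of serverFile.txt holds an IP followed by a host name (that format is an assumption), with the restart moved after the loop:
#!/bin/bash
cd /usr/local/nagios/etc/objects/Hadoop || exit 1
# read an IP and a host name from each line of serverFile.txt (assumed format)
while read -r ip name; do
    if ping -c1 "$ip" >/dev/null 2>&1; then
        cp Hadoop-node.cfg "$name.cfg"
        sed -i "s/192.168.0.1/$ip/" "$name.cfg"
        sed -i "s/Hadoop-node/$name/" "$name.cfg"
        echo "cfg_file=/usr/local/nagios/etc/objects/Hadoop/$name.cfg" >> /usr/local/nagios/etc/nagios.cfg
        echo "$ip IS UP"
    else
        echo "$ip IS DOWN NOT PINGING"
    fi
done < serverFile.txt
# restart nagios once, after all config files have been generated
service nagios restart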

Open as many terminals as the number of ssh-s logged out and close the terminals in which ssh-s were logged out

There are several terminals in a single localhost in which I have ssh-ed into the same user and same IP address. I want to find all the terminals in which a remote host has been logged, terminate all processes running in those and log out of that remote host. I succeeded using the following shell script.
#Find list of terminals in which the remote host is logged in.
openedTerminals=`ssh $user@$publicIP "ps -aux | grep -i $user@pts | grep -v grep | cut -d' ' -f 3"`
#close all the ssh sessions to that remote host
i=1
terminalPID=`echo $openedTerminals | cut -d' ' -f $i`
while [[ -n "$terminalPID" ]]
do
ssh $user#$publicIP "kill $terminalPID"
i=`expr $i + 1`
terminalPID=`echo $openedTerminals | cut -d' ' -f $i`
done
I used the following command to open a new terminal and ssh into a remote host which worked fine when executed from the command prompt:
gnome-terminal --window-with-profile=NOCLOSEPROFILE -e "ssh -X $user@$publicIP"
Apart from doing the work of the 1st code, I want to open a new terminal (by ssh-ing into another remote machine) for every remote machine which was terminated by the 1st code. So I tried to insert the above command in the 1st code as:
#Find list of terminals in which the remote host is logged in.
openedTerminals=`ssh $user@$publicIP "ps -aux | grep -i $user@pts | grep -v grep | cut -d' ' -f 3"`
#close all the ssh sessions to that remote host
i=1
terminalPID=`echo $openedTerminals | cut -d' ' -f $i`
while [[ -n "$terminalPID" ]]
do
ssh $user#$publicIP "kill $terminalPID"
gnome-terminal -window-with-profile=NOCLOSEPROFILE -e "ssh -X $newUser#$newPublicIP"
i=`expr $i + 1`
terminalPID=`echo $openedTerminals | cut -d' ' -f $i`
done
But this starts running in an infinite loop and opens an infinite number of new terminals.
Please tell me where I am wrong and suggest a way to correct it in order to get the desired solution.
Also, I wish to add a command in the same shell script (1st code) to close the terminals in which the remote machine was logged out. Can anyone please guide me on this?
Thanks in advance,
Saeya
When only one terminal that is ssh-ed to the remote machine is open, this runs in an infinite loop because of the cut command: if the line contains no space delimiter, cut prints the whole line for any field number, so terminalPID never becomes empty. If there is a separate case to handle a single terminal, this will work fine.
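A sketch that avoids the field indexing entirely by iterating over whatever PIDs came back (same variables as in the script above; $openedTerminals is left unquoted on purpose so it word-splits into one PID per iteration):
for terminalPID in $openedTerminals; do
    ssh "$user@$publicIP" "kill $terminalPID"
    gnome-terminal --window-with-profile=NOCLOSEPROFILE -e "ssh -X $newUser@$newPublicIP"
done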
