Executing two commands on a remote server and storing the output on the local server using shell scripting - linux

I'm trying to clean up my shell script. I am trying to read a list of servers from
a text file "serverlist", ssh to each server to get the outcome of a command, and store it in a file "result_segfault". I managed to do that; however, for each server I wanted to list the name of the server with the output below it:
Example:
----------------
Servername
----------------
Output of the command
----------------
Servername2
----------------
Output of the command
This is my code:
#!/bin/bash
for HOSTNAME in $(cat serverlist); do
    SCRIPT="cat /var/log/messages | grep 'segfault at 0000000000000098'"
    for HOSTNAME in ${HOSTNAME}; do
        ssh ${HOSTNAME} "${SCRIPT}" > "result_segfault"
    done
done
I don't know how to add the server name and the separator.

Replace the ssh line inside your inner loop with a grouped block:
{
    echo "----------------"
    echo "$HOSTNAME"
    echo "----------------"
    ssh "$HOSTNAME" "${SCRIPT}"
} >> "result_segfault"
The braces {...} group the statements together so that their stdout can be collected with a single redirect. I used ">>" in place of ">" because it appeared from your sample that you wanted to append the results from each host rather than overwrite the file each time a new host is queried.

With various stylistic fixes (note the ssh -n, which keeps ssh from swallowing the rest of serverlist on stdin):
#!/bin/bash
while read HOSTNAME; do
    echo "----------------"
    echo "$HOSTNAME"
    echo "----------------"
    ssh -n "$HOSTNAME" grep 'segfault at 0000000000000098' /var/log/messages
done < serverlist > result_segfault

Related

How to connect input/output to SSH session

What is a good way to send directly to STDIN and receive from STDOUT of a process? I'm specifically interested in SSH, as I want to do the following:
[ssh into a remote server]
[run remote commands]
[run local commands]
[run remote commands]
etc...
For example, let's say I have a local script "localScript" that will output the next command I want to run remotely, depending on the output of "remoteScript". I could do something like:
output=$(ssh myServer "./remoteScript")
nextCommand=$(./localScript $output)
ssh myServer "$nextCommand"
But it would be nice to do this without closing/reopening the SSH connection at every step.
You can redirect SSH input and output to FIFOs and then use these for two-way communication.
For example local.sh:
#!/bin/sh
SSH_SERVER="myServer"
# Redirect SSH input and output to temporary named pipes (FIFOs)
SSH_IN=$(mktemp -u)
SSH_OUT=$(mktemp -u)
mkfifo "$SSH_IN" "$SSH_OUT"
ssh "$SSH_SERVER" "./remote.sh" < "$SSH_IN" > "$SSH_OUT" &
# Open the FIFOs and clean up the files
exec 3>"$SSH_IN"
exec 4<"$SSH_OUT"
rm -f "$SSH_IN" "$SSH_OUT"
# Read and write
counter=0
echo "PING${counter}" >&3
cat <&4 | while read line; do
    echo "Remote responded: $line"
    sleep 1
    counter=$((counter+1))
    echo "PING${counter}" >&3
done
And a simple remote.sh:
#!/bin/sh
while read line; do
    echo "$line PONG"
done
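To try it out, copy remote.sh to myServer (e.g. with scp remote.sh myServer:) and run ./local.sh locally; you should see a "Remote responded: PING0 PONG" line roughly once per second, with the counter increasing, all over a single SSH connection.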
The method you are using works, but I don't think you can reuse the same connection every time. You can, however, do this using screen, tmux or nohup, but that would greatly increase the complexity of your script because you would have to emulate keypresses/shortcuts. I'm not even sure that is possible directly in bash; you would likely have to run the script in a new X terminal and use xdotool to emulate the keypresses.
Another method is to delegate the whole script to the SSH server by running the script on the remote server itself:
ssh root@MachineB 'bash -s' < local_script.sh
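If keeping one connection open is the main goal, OpenSSH's built-in connection multiplexing (the ControlMaster, ControlPath and ControlPersist options) avoids reopening the connection without any FIFO plumbing. A minimal sketch, assuming the myServer/localScript/remoteScript names from the question:
# open a master connection once; it lingers in the background for 5 minutes
ssh -o ControlMaster=auto -o ControlPath=/tmp/ssh-%r@%h:%p -o ControlPersist=5m myServer true
# these calls reuse the master's socket instead of reconnecting
output=$(ssh -o ControlPath=/tmp/ssh-%r@%h:%p myServer "./remoteScript")
nextCommand=$(./localScript "$output")
ssh -o ControlPath=/tmp/ssh-%r@%h:%p myServer "$nextCommand"
Each command still gets its own remote shell, but the TCP and authentication handshake happens only once.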

Adding a logfile command puts the script into a loop?

I'm trying to create a logfile for the output/terminal messages of a script.
Why does the script go into a loop?
The script is used for remotely changing passwords over ssh, using an imported .txt file with a list of addresses. The script works fine until I add a line for logging at the end, after done:
#!/bin/bash
echo "Enter the username for which you want to change the password"
read USER
sleep 2
echo "Enter the password that you would like to set for $USER"
read PASSWORD
sleep 2
echo "Enter the file name that contains a list of servers. Ex: ip.txt"
read FILE
sleep 2
for HOST in $(cat $FILE)
do
    ssh root@$HOST "echo $'$PASSWORD\n$PASSWORD' | passwd $USER"
done
I tried adding the following log-creating commands:
/root/passwordchange.sh | tee -a /root/output.log
logsave -a /root/output.log /root/passwordchange.sh
/root/passwordchange.sh >> /root/output.log
Adding the logging line puts the entire program into a loop instead of letting it finish.
I need to SIGKILL the script to end the process.
The output file is created as normal, with all the information provided.
This is my first question here on Stack; thank you in advance for all answers.

Execute Bash script remotely via cURL

I have a simple Bash script that takes input and prints a few lines using that input.
fortinetTest.sh:
read -p "Enter SSC IP: $ip " ip && ip=${ip:-1.1.1.1}
printf "\n"
#check IP validation
if [[ $ip =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
    echo "SSC IP: $ip"
    printf "\n"
else
    echo "Enter a valid SSC IP address. Ex. 1.1.1.1"
    exit
fi
I uploaded it to my server, then tried to run it via curl.
I am not sure why the input prompt never kicks in when I use cURL/wget.
Am I missing anything?
With the curl ... | bash form, bash's stdin is reading the script, so stdin is not available for the read command.
Try using a Process Substitution to invoke the remote script like a local file:
bash <( curl -s ... )
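For example (the URL here is a placeholder for wherever you uploaded the script):
bash <( curl -s https://example.com/fortinetTest.sh )
The process substitution hands the script to bash as a file on its own descriptor, so stdin stays attached to your terminal and read works normally.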
Your issue can be reproduced simply by running the script like this:
$ cat test.sh | bash
Enter a valid SSC IP address. Ex. 1.1.1.1
This is because the bash you launch with a pipe is not attached to a TTY: when you do a read -p, it reads from stdin, which in this case is the content of test.sh itself. So the issue is not with curl; the issue is that read is not reading from the tty.
So the fix is to make sure you read from the tty:
read < /dev/tty -p "Enter SSC IP: $ip " ip && ip=${ip:-1.1.1.1}
printf "\n"
#check IP validation
if [[ $ip =~ ^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
    echo "SSC IP: $ip"
    printf "\n"
else
    echo "Enter a valid SSC IP address. Ex. 1.1.1.1"
    exit
fi
Once you do that, even curl will start working:
vagrant@vagrant:/var/www/html$ curl -s localhost/test.sh | bash
Enter SSC IP: 2.2.2.2
SSC IP: 2.2.2.2
I personally prefer the source <(curl -s localhost/test.sh) option. While it is similar to bash <(...), one significant difference is how the processes are handled.
bash results in a new process being spun up, and that process invokes the commands from the script.
source, on the other hand, uses the current process to invoke the commands from the script.
In some cases that can play a key role, though I admit that is not very often.
To demonstrate do the following:
### Open Two Terminals
# In the first terminal run:
echo "sleep 5" > ./myTest.sh
bash ./myTest.sh
# Switch to the second terminal and run:
ps -efjh
## Repeat the same with _source_ command
# In the first terminal run:
source ./myTest.sh
# Switch to the second terminal and run:
ps -efjh
Results should look similar to this (screenshots omitted): before execution, just the baseline process list; with bash, the main shell plus two subprocesses; with source, the main shell plus one subprocess.
UPDATE:
Difference in variable handling between bash and source:
source uses your current environment: upon execution, all changes and variable declarations made by the script remain available in your prompt.
bash, on the other hand, runs as a separate process, so all of its variables are discarded when that process exits.
I think everyone will agree there are benefits and drawbacks to each method; you just have to decide which one is better for your use case.
## Test for variables declared by the script:
echo "test_var3='Some Other Value'" > ./myTest3.sh
bash ./myTest3.sh
echo $test_var3     # prints nothing: the variable died with the bash subprocess
source ./myTest3.sh
echo $test_var3     # prints 'Some Other Value': source ran in the current shell
## Test for usability of current environment variables:
test_var="Some Value"               # set a variable (note: not exported)
echo 'echo $test_var' > myTest2.sh  # single quotes, so expansion happens at run time
chmod +x ./myTest2.sh               # add execute permission
## Executing:
. myTest2.sh        # prints the variable (runs in the current shell)
bash ./myTest2.sh   # prints an empty line: the child shell never saw test_var
source ./myTest2.sh # prints the variable (same mechanism as .)
./myTest2.sh        # prints an empty line, for the same reason
## Only the source/. invocations print the variable, since test_var was never exported.
I hope this helps.

Parallel SSH with Custom Parameters to Each Host

There are plenty of threads and documentation about parallel ssh, but I can't find anything on passing custom parameters to each host. Using pssh as an example, the hosts file is defined as:
111.111.111.111
222.222.222.222
However, I want to pass custom parameters to each host via a shell script, like this:
111.111.111.111 param1a param1b ...
222.222.222.222 param2a param2b ...
Or, better, the hosts and parameters would be split between 2 files.
Because this isn't common, is this a misuse of parallel ssh? Should I just create many ssh processes from my script? How should I approach this?
You could use GNU parallel.
Suppose you have a file argfile:
111.111.111.111 param1a param1b ...
222.222.222.222 param2a param2b ...
Then running
parallel --colsep ' ' ssh {1} prog {2} {3} ... :::: argfile
would run prog on each host with the corresponding parameters. It is important that the number of parameters be the same for each host.
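For instance, assuming a hypothetical remote script /usr/local/bin/report that takes exactly two per-host parameters:
# run the (hypothetical) report script on each host with its own two
# parameters from argfile, at most four connections at a time
parallel -j4 --colsep ' ' ssh {1} /usr/local/bin/report {2} {3} :::: argfile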
Here is a solution that you can use, after tailoring it to suit your needs:
#!/bin/bash
#filename: example.sh
#usage: ./example.sh <par1> <par2> <par3> ... <par6>

#set your ip addresses
traf1=1.1.1.1
traf2=2.2.2.2
traf3=3.3.3.3

#set some custom parameters for your scripts and use them as you wish.
#In this example, I use the first 6 command line parameters passed when running example.sh
echo "Fetching data from traffic server 1..."
ssh -T $traf1 -l username "/export/home/path/to/script.sh $1 $2" 1>traf1.txt 2>/dev/null &
echo "Fetching data from traffic server 2..."
ssh -T $traf2 -l username "/export/home/path/to/script.sh $3 $4" 1>traf2.txt 2>/dev/null &
echo "Fetching data from traffic server 3..."
ssh -T $traf3 -l username "/export/home/path/to/script.sh $5 $6" 1>traf3.txt 2>/dev/null &

#the script will block on this line, and will only continue once all
#3 remotely executed scripts have completed
wait
Keep in mind that the above requires passwordless login between the machines; otherwise the script will stall waiting for password input.
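A minimal sketch of that setup, assuming the username and addresses from the example above:
# generate a key pair once (an empty passphrase allows unattended use)
ssh-keygen -t ed25519
# install the public key on each traffic server
ssh-copy-id username@1.1.1.1
ssh-copy-id username@2.2.2.2
ssh-copy-id username@3.3.3.3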
If you can use Perl:
use Net::OpenSSH::Parallel;
use Data::Dumper;
my $pssh = Net::OpenSSH::Parallel->new;
$pssh->add_host('111.111.111.111');
$pssh->add_host('222.222.222.222');
$pssh->push('111.111.111.111', command => $cmd, $param11, $param12);
$pssh->push('222.222.222.222', command => $cmd, $param21, $param22);
$pssh->run;
if (my %errors = $pssh->get_errors) {
    print STDERR "ssh errors:\n", Dumper \%errors;
}

Run scripts remotely via SSH

I need to collect user information from 100 remote servers. We have a public/private key infrastructure for authentication, and I have configured ssh-agent key forwarding, meaning I can log in to any server without a password prompt (auto login).
Now I want to run a script on all servers to collect user information (how many user accounts we have on each server).
This is my script to collect user info:
#!/bin/bash
_l="/etc/login.defs"
_p="/etc/passwd"
## get mini UID limit ##
l=$(grep "^UID_MIN" $_l)
## get max UID limit ##
l1=$(grep "^UID_MAX" $_l)
awk -F':' -v "min=${l##UID_MIN}" -v "max=${l1##UID_MAX}" '{ if ( $3 >= min && $3 <= max && $7 != "/sbin/nologin" ) print $0 }' "$_p"
I don't know how to run this script over ssh without interaction.
Since you need to log into the remote machine, there is AFAICT no way to do this "without ssh". However, ssh accepts a command to execute on the remote machine once logged in (instead of the shell it would normally start). So if you can save your script on the remote machine, e.g. as ~/script.sh, you can execute it without starting an interactive shell with
$ ssh remote_machine ~/script.sh
Once the script terminates the connection will automatically be closed (if you didn't configure that away purposely).
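If the script only exists locally, a minimal sketch (reusing the ~/script.sh name above) is to copy it over first:
$ scp script.sh remote_machine:script.sh && ssh remote_machine ./script.sh
Alternatively, feed the script to a remote shell over stdin, as another answer below shows with ssh remoteserver.example /bin/bash < localscript.bash.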
Sounds like something you can do using expect.
http://linux.die.net/man/1/expect
Expect is a program that "talks" to other interactive programs according to a script. Following the script, Expect knows what can be expected from a program and what the correct response should be.
If you've got a key on each machine and can ssh remotehost from your monitoring host, you've got all that's required to collect the information you've asked for.
#!/bin/bash
servers=(wopr gerty mother)
fmt="%s\t%s\t%s\n"
printf "$fmt" "Host" "UIDs" "Highest"
printf "$fmt" "----" "----" "-------"
count='awk "END {print NR}" /etc/passwd' # avoids whitespace problems from `wc`
highest="awk -F: '\$3>n&&\$3<60000{n=\$3} END{print n}' /etc/passwd"
for server in "${servers[@]}"; do
    printf "$fmt" "$server" "$(ssh "$server" "$count")" "$(ssh "$server" "$highest")"
done
Results for me:
$ ./doit.sh
Host UIDs Highest
---- ---- -------
wopr 40 2020
gerty 37 9001
mother 32 534
Note that this makes TWO ssh connections to each server to collect each datum. If you'd like to do this a little more efficiently, you can bundle the information into a single, slightly more complex collection script:
#!/usr/local/bin/bash
servers=(wopr gerty mother)
fmt="%s\t%s\t%s\n"
printf "$fmt" "Host" "UIDs" "Highest"
printf "$fmt" "----" "----" "-------"
gather="awk -F: '\$3>n&&\$3<60000{n=\$3} END{print NR,n}' /etc/passwd"
for server in "${servers[@]}"; do
    read count highest < <(ssh "$server" "$gather")
    printf "$fmt" "$server" "$count" "$highest"
done
(Identical results.)
ssh remoteserver.example /bin/bash < localscript.bash
(Note: the "proper" way to authenticate without manually entering in password is to use SSH keys. Storing password in plaintext even in your local scripts is a potential security vulnerability)
You can run expect as part of your bash script. Here's a quick example that you can hack into your existing script:
login=user
IP=127.0.0.1
password='your_password'
remote_side_script=your_script.sh   # path of the script on the remote host
expect_sh=$(expect -c "
spawn ssh $login@$IP
expect \"password:\"
send \"$password\r\"
expect \"#\"
send \"./$remote_side_script\r\"
expect \"#\"
send \"cd /lib\r\"
expect \"#\"
send \"cat file_name\r\"
expect \"#\"
send \"exit\r\"
")
echo "$expect_sh"
You can also use pscp to copy files back and forth as part of a script so you don't need to manually supply the password as part of the interaction:
Install putty-tools:
$ sudo apt-get install putty-tools
Using pscp in your script:
pscp -scp -pw $password file_to_copy $login@$IP:$dest_dir
Maybe you'd like to try the expect command as follows:
#!/usr/bin/expect
set timeout 30
spawn ssh -p ssh_port -l ssh_username ssh_server_host
expect "password:"
send "your_passwd\r"
interact
The expect command will catch the "password:" prompt and then automatically fill in the password sent above.
Remember to replace ssh_port, ssh_username, ssh_server_host and your_passwd with your own configuration.
