Pass variables out of an interactive session from bash script - linux

Hello People of the world,
I am trying to write a script in bash that will allow a user to fail over apps between sites.
Our applications are controlled by Pacemaker, and I thought I would be able to write a function that takes in the necessary variables and acts: stop on one site, start on the other. Once I have ssh'd to the remote machine, I am unable to get the value of the grep/awk command back for the status of the application in PCS.
I am encountering a few issues, and have tried answers from stackoverflow and other sites.
I redirect the ssh command to /dev/null 2>&1 because banners that the unix admins have set up for the local user pop up on screen and -q does not deal with them. Does this stop anything being returned?
When using awk '{print \\\\\\$4}' in the code, I get a "backslash not last character on line" error.
To get round this, I tried result=$(sudo pcs status | grep nds_$resource), however this resulted in a password error on sudo.
I have tried >/dev/tty and >$(tty).
I tried not suppressing the ssh (removing /dev/null 2>&1) and putting the output into a variable at the function call, removing the awk from the sudo pcs status entry.
result=$(pcs_call "$site1" "1" "2" "disable" "pmr")
echo $result | grep systemd
This was OK, but when I added | awk '{print \\\$4}' I then got the fourth word in the banner.
Any help would be appreciated as I have been going at this for a few days now.
I have been looking at this answer from Bruno, but I'm unsure how to implement it as I have multiple sudo commands.
Below is my stripped-down version of the function code for testing on one machine:
site1=lon
site2=ire
function pcs_call()
{
site=$1
serverA=$2
serverB=$3
activity=$4
resource=$5
ssh -tt ${site}servername0${serverA} <<SSH > /dev/null 2>&1
sudo pcs resource ${activity} proc_${resource}
sleep 10
sudo pcs status | grep proc_$resource | awk '{print \\\$4}' | tee $output
exit
SSH
echo $output
}
echo ====================================================================================
echo Shutting Down PMR in $site1
pcs_call "$site1" "1" "2" "disable" "pmr"

I'd say start by pasting the whole thing into ShellCheck.net and fixing errors until there are no suggestions, but there are some serious issues here that shellcheck is not going to be able to handle alone.
> /dev/null says "throw away into the bitbucket any data that is returned". 2>&1 says "send any useful error reporting on stderr wherever stdout is going". Your initial statement, intended to retrieve information from a remote system, is immediately discarding it. Unless you just want something to occur on the remote system that you don't want to know more about locally, you're wasting your time with anything after that, because you've dumped whatever it had to say.
You only need one backslash in that awk statement to quote the dollar sign on $4.
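To illustrate, inside an unquoted heredoc a single backslash is enough to keep the $4 for the remote awk. A minimal sketch (the host and resource names are placeholders):
ssh -tt remotehost <<SSH
sudo pcs status | grep proc_pmr | awk '{print \$4}'
exit
SSH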
Unless you have passwordless sudo on the remote system, this is not going to work out for you. I think we need more info on that before we discuss it any deeper.
As long as the ssh call is throwing everything to /dev/null, nothing inside the block of code being passed is going to give you any results on the calling system.
In your code you are using $output, but it looks as if you intend for tee to be setting it? That's not how that works. tee's argument is a filename into which it expects to write a copy of the data, which it also streams to stdout (tee as in a "T"-joint, in plumbing) but it does NOT assign variables.
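If the goal is to land that text in a shell variable on whichever machine the pipeline runs, command substitution is the tool; tee only copies the stream into a file. A minimal local sketch (the file path is just an example):
output=$(sudo pcs status | awk '/proc_pmr/ {print $4}' | tee /tmp/pcs_status.txt)
echo "$output"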
(As an aside, you aren't even using serverB yet, but you can add that back in when you get past the current issues.)
At the end you echo $output, which is empty because nothing ever set it locally, so it's basically just echo, which won't print anything but a newline; meanwhile whatever tee produced on the remote side went straight into that /dev/null, so it's all kind of pointless....
Let's clean up
sudo pcs status | grep proc_$resource | awk '{print \\\$4}' | tee $output
and try it a little differently, yes?
First, I'm going to assume you have passwordless sudo, otherwise there's a whole other conversation to work that out.
Second, it's generally an antipattern to use both grep AND awk in a pipeline, as they are both basically regex engines at heart. Choose one. If you can make grep do what you want, it's pretty efficient. If not, awk is super flexible. Please read the documentation pages on the tools you are using when something isn't working. A quick search for "bash man grep" or "awk manual" will quickly give you great resources, and you're going to want them if you're trying to do things this complex.
So, let's look at a rework, making some assumptions...
function pcs_call() {
local site="$1" serverA="$2" activity="$3" resource="$4" # make local and quotes habits you only break on purpose
ssh -qt ${site}servername0${serverA} "
sudo pcs resource ${activity} proc_${resource}; sleep 10; sudo pcs status;
" 2>&1 | awk -v resource="$resource" '$0~"proc_"resource { print $4 }'
}
pcs_call "$site1" 1 disable pmr # should print the desired field
If you want to catch the data in a variable to use later -
var1="$( pcs_call "$site1" 1 disable pmr )"
Addendum
Addressing your question - use $(seq 1 10) or just {1..10}.
ssh -qt chis03 '
for i in {1..10}; do sudo pcs resource disable ipa $i; done;
sleep 10; sudo pcs status;
' 2>&1 | awk -v resource=ipa '$0~"proc_"resource { print $2" "$4 }'
It's reporting the awk first, because order of elements in a pipeline is "undefined", but the stdout of the ssh is plugged into the stdin of the awk (and since it was duped to stdout, so is the stderr), so they are running asynchronously/simultaneously.
Yes, since these are using literals, single quotes is simpler and effectively "better". If abstracting with vars, it doesn't change much, but switch back to double quotes.
# assuming my vars (svr, verb, target) preset in the context
ssh -qt $svr "
for i in {1..10}; do sudo pcs resource $verb $target \$i; done;
sleep 10; sudo pcs status;
" 2>&1 | awk -v resource="$target" '$0~"proc_"resource { print $2" "$4 }'
Does that help?

Related

Monitoring multiple Linux Systems or Servers Script

I want to modify my script so that it can monitor the CPU, memory and disk usage on 4 servers on my network. The script below monitors a single server. Is there a way to check or modify my script below if I have the hosts, username and password?
printf "Memory\t\tDisk\t\tCPU\n"
end=$((SECONDS+3600))
while [ $SECONDS -lt $end ]; do
    MEMORY=$(free -m | awk 'NR==2{printf "%.2f%%\t\t", $3*100/$2 }')
    DISK=$(df -h | awk '$NF=="/"{printf "%s\t\t", $4}')
    CPU=$(top -bn1 | grep load | awk '{printf "%.2f%%\t\t\n", $(NF-2)}')
    echo "$MEMORY$DISK$CPU"
    sleep 5
done
Any ideas or suggestions?
A simple, naive implementation might look like:
for server in host1 host2 host3 host4; do
ssh "$server" bash -s <<'EOF'
...your script here...
EOF
done
...with RSA keys preconfigured for passwordless authentication. That could be made slightly less naive by leveraging ControlMaster/ControlSocket functionality in ssh, so you're keeping the same transport up between multiple ssh sessions and reusing it wherever possible.
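Inside the loop above, that might look roughly like this, using standard OpenSSH options (the ControlPath location is only an example):
# reuse one connection across repeated ssh calls to the same host
ssh -o ControlMaster=auto \
    -o ControlPath=~/.ssh/cm-%r@%h:%p \
    -o ControlPersist=5m \
    "$server" bash -s <<'EOF'
...your script here...
EOF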
However -- rolling your own system monitoring tools is a fool's errand, at least until you've been around the block with the existing established ones, know their strengths, know their weaknesses, and can make a reasoned argument as to why they aren't a good fit for you. Use something off-the-shelf maintained by people who've been doing this for a while.

run the disk space checking script in linux without login

I wrote a shell script that checks the disk space on a Linux machine, and I run it when I am logged in.
Now, if I am not logged in on that machine but I still need an alert when disk usage goes over a threshold (e.g. 80%), how can I check it?
1. Using ssh (remote command execution)
2. Running it as a background script
Which one is more efficient?
Or is there any other way to do this?
I do not want to do any kind of login, directly or indirectly, i.e. I don't even want to use ssh-keygen to store keys. It should work like web pages do, with any new systems as well, but without any kind of security tuning.
Please let me know.
The cron program is a good place to start. It is available on any Linux system and can be set up to run programs at regular intervals. If the program produces output or error messages, those are normally emailed to the account that created the cron job.
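As a sketch, assuming the script lives at /home/user/diskcheck.sh (a made-up path) and you want it run hourly, the crontab entry might look like this; MAILTO controls where cron sends that output:
MAILTO=you@example.com
# min hour day-of-month month day-of-week  command
0 * * * * /home/user/diskcheck.sh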
Script for checking the disk usage and cpu usage
ALERT=60
The df command is used to list the available filesystems and their attributes. Here we get the filesystem information and take the filesystem name and its usage through awk. Next, to get the value, we cut off the % symbol and keep the integer only. Then we check the condition for the alert message.
df -H | grep -vE '^Filesystem' | awk '{ print $5 " " $1 }' | while read output; do
    usep=$(echo $output | awk '{ print $1}' | cut -d '%' -f1)
    partition=$(echo $output | awk '{ print $2 }')
    if [ $usep -ge $ALERT ]; then
        echo "Alert: Almost out of disk space $usep $partition $HOSTNAME"
    fi
done
This is the script I have written for checking the disk space.
Then I ran crontab -e in the terminal and added the script to the file, so the new cron job is in place. I set the cron job to run every hour to monitor the disk space.
But now I can't get the notification message. If I am not logged in on that machine, how do I get the information from the disk space check?
Is there anything that needs to be improved in the script?

Run scripts remotely via SSH

I need to collect user information from 100 remote servers. We have a public/private key infrastructure for authentication, and I have configured ssh-agent to forward my key, meaning I can log in to any server without a password prompt (auto login).
Now I want to run a script on all the servers to collect user information (how many user accounts we have on all servers).
This is my script to collect user info.
#!/bin/bash
_l="/etc/login.defs"
_p="/etc/passwd"
## get mini UID limit ##
l=$(grep "^UID_MIN" $_l)
## get max UID limit ##
l1=$(grep "^UID_MAX" $_l)
awk -F':' -v "min=${l##UID_MIN}" -v "max=${l1##UID_MAX}" '{ if ( $3 >= min && $3 <= max && $7 != "/sbin/nologin" ) print $0 }' "$_p"
I don't know how to run this script over ssh without any interaction.
Since you need to log into the remote machine there is AFAICT no way to do this "without ssh". However, ssh accepts a command to execute on the remote machine once logged in (instead of the shell it would start). So if you can save your script on the remote machine, e.g. as ~/script.sh, you can execute it without starting an interactive shell with
$ ssh remote_machine ~/script.sh
Once the script terminates the connection will automatically be closed (if you didn't configure that away purposely).
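If you'd rather not copy the script onto each machine first, another common pattern (a sketch; here script.sh is the local copy) is to feed it to a shell on the remote end over the same connection:
ssh remote_machine 'bash -s' < script.sh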
Sounds like something you can do using expect.
http://linux.die.net/man/1/expect
Expect is a program that "talks" to other interactive programs according to a script. Following the script, Expect knows what can be expected from a program and what the correct response should be.
If you've got a key on each machine and can ssh remotehost from your monitoring host, you've got all that's required to collect the information you've asked for.
#!/bin/bash
servers=(wopr gerty mother)
fmt="%s\t%s\t%s\n"
printf "$fmt" "Host" "UIDs" "Highest"
printf "$fmt" "----" "----" "-------"
count='awk "END {print NR}" /etc/passwd' # avoids whitespace problems from `wc`
highest="awk -F: '\$3>n&&\$3<60000{n=\$3} END{print n}' /etc/passwd"
for server in "${servers[@]}"; do
printf "$fmt" "$server" "$(ssh "$server" "$count")" "$(ssh "$server" "$highest")"
done
Results for me:
$ ./doit.sh
Host UIDs Highest
---- ---- -------
wopr 40 2020
gerty 37 9001
mother 32 534
Note that this makes TWO ssh connections to each server to collect each datum. If you'd like to do this a little more efficiently, you can bundle the information into a single, slightly more complex collection script:
#!/usr/local/bin/bash
servers=(wopr gerty mother)
fmt="%s\t%s\t%s\n"
printf "$fmt" "Host" "UIDs" "Highest"
printf "$fmt" "----" "----" "-------"
gather="awk -F: '\$3>n&&\$3<60000{n=\$3} END{print NR,n}' /etc/passwd"
for server in "${servers[@]}"; do
read count highest < <(ssh "$server" "$gather")
printf "$fmt" "$server" "$count" "$highest"
done
(Identical results.)
ssh remoteserver.example /bin/bash < localscript.bash
(Note: the "proper" way to authenticate without manually entering a password is to use SSH keys. Storing a password in plaintext, even in your local scripts, is a potential security vulnerability.)
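For completeness, the usual key setup is roughly this (the user and host names are placeholders):
ssh-keygen -t ed25519                    # generate a key pair locally, accept the defaults
ssh-copy-id user@remoteserver.example    # install the public key on the remote host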
You can run expect as part of your bash script. Here's a quick example that you can hack into your existing script:
login=user
IP=127.0.0.1
password='your_password'
expect_sh=$(expect -c "
spawn ssh $login@$IP
expect \"password:\"
send \"$password\r\"
expect \"#\"
send \"./$remote_side_script\r\"
expect \"#\"
send \"cd /lib\r\"
expect \"#\"
send \"cat file_name\r\"
expect \"#\"
send \"exit\r\"
")
echo "$expect_sh"
You can also use pscp to copy files back and forth as part of a script so you don't need to manually supply the password as part of the interaction:
Install putty-tools:
$ sudo apt-get install putty-tools
Using pscp in your script:
pscp -scp -pw $password file_to_copy $login@$IP:$dest_dir
Maybe you'd like to try the expect command as follows:
#!/usr/bin/expect
set timeout 30
spawn ssh -p ssh_port -l ssh_username ssh_server_host
expect "password:"
send "your_passwd\r"
interact
The expect command will catch the "password:" prompt and then automatically fill in the password you send above.
Remember to replace ssh_port, ssh_username, ssh_server_host and your_passwd with your own configuration.

using a variable in a BASH command?

I have 20 machines, each running a process. The machines are named:
["machine1", "machine2", ...., "machine20"]
To inspect how the process is doing on machine1, I issue the following command from a remote machine:
ssh machine1 cat log.txt
For machine2, I issue the following command:
ssh machine2 cat log.txt
Similarly, for machine20, I issue the following command:
ssh machine20 cat log.txt
Is there a bash command that will allow me to view the output from all machines using one command?
If the machines are nicely numbered like in your example:
for i in {1..20} ; do ssh machine$i cat log.txt; done
If you have the list of machines in a file, you can use:
cat machinesList.txt | xargs -i ssh {} cat log.txt
You could store all your machine names in an array or text file, and loop through it.
declare -a machineList=('host1' 'host2' 'otherHost') # and more...
for machine in "${machineList[@]}"
do
ssh $machine cat log.txt
done
I assume your machines aren't literally named 'machine1', 'machine2', etc.
Some links:
bash Array Tutorial
GNU Bash Array Documentation
for i in {1..20}
do
ssh machine$i cat log.txt
done
Use a loop?
for i in {1..20}
do
ssh machine$i cat log.txt
done
But note that you're running cat within a remote shell session, not the current one, so this might not quite work as you expect. Try it and see.
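If the combined output becomes hard to tell apart, one small variation (just a sketch) is to label each host's log as you go:
for i in {1..20}; do
    echo "== machine$i =="
    ssh "machine$i" cat log.txt
done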
Put your hosts in a file and use a while loop as shown below. Note the use of the -n flag on ssh: it stops ssh from reading stdin, which would otherwise swallow the rest of the hosts-file inside the loop:
while read host; do ssh -n $host cat log.txt; done < hosts-file
Alternatively you can use PSSH:
pssh -h hosts-file -i "cat log.txt"
I would recommend using a program called Shmux. Despite the name, it works really well. I've used it with more than 100 machines with good results. It also gracefully handles machine failures for you which could be a disadvantage with a bash for loop approach.
I think the coolest thing about this program is the ability to run your commands in multiple threads, which lets you run them on all 20 machines in parallel.
Aside from the suggestions for using a loop, you might want to take a look at tools, like pssh or dsh, designed for running commands on multiple clients.

Most reliable way to identify the current user through a sudo

I have an application that may or may not be run while users are sudo'ed to a shared user account. I would like to reliably identify who the real user is for a sort of "honor-system" ACL. I think there's some way by tracing parent/group/session process ids the way that the pstree command does, but I'm not sure how to do that best or if there are better alternatives.
I tried getlogin() originally. That works if ./myapp is used, but it fails with `cat input | ./myapp` (because the "controlling terminal" is a pipe owned by the shared account).
I'd rather not trust environment variables, as I don't want my "honor system" to be completely thwarted by a simply unset, when the information is still available elsewhere.
I'd also like to avoid forcing a lookup in the password database, as that is a remote RPC (NIS or LDAP) and I'm pretty sure wtmp already contains the information I need.
For a shell script, you might use this to get the sudo'ing user:
WHO=$(who am i | sed -e 's/ .*//')
and extract the id from the login using:
ID_WHO=$(id -u $WHO)
I'll ferret out the C library equivalent later.
sudo sets the environment variables SUDO_USER, SUDO_UID, and SUDO_GID.
You can test this with:
$ sudo env
[sudo] password for shteef:
TERM=xterm
# [...snip...]
SHELL=/bin/bash
LOGNAME=root
USER=root
USERNAME=root
SUDO_COMMAND=/usr/bin/env
SUDO_USER=shteef
SUDO_UID=1000
SUDO_GID=1000
But if your users have shell access on the shared account, then I suppose you cannot blindly trust this either.
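That said, a minimal sketch of reading it from a shell, with a fallback for when the process isn't under sudo at all:
# SUDO_USER is set by sudo; otherwise fall back to the current user
real_user="${SUDO_USER:-$(id -un)}"
echo "effective user: $(id -un), invoking user: $real_user"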
How about:
#!/usr/bin/ksh
username=`id | cut -d"=" -f2 | cut -d" " -f1`
if [ $username == "0(root)" ]
then
    print "Yes, the user is root"
else
    print "Sorry! the user $username, is not a root"
fi
