My first Bash script does not look so efficient - Linux

This is my first bash script, written using some resources I found online. I think there is a better way to write it, perhaps using some other form of conditionals (if/then vs. control operators).
It is a script that checks whether a host is up or down (i.e., whether it is ping-able). You dump all the IP addresses you want into a file, then run the script on that file. The text file looks like this:
8.8.8.8
4.8.8.8
4.4.4.4
127.0.0.1
The actual script looks like this. Is the 2>&1 necessary? It worked without it. I had to play around with the brackets a lot.
#!/bin/bash
while read line
do
    A=$(ping -c 1 $line)
    ((echo $A | grep "64 bytes") > /dev/null 2>&1 && (echo "UP - "$line)) || echo "DOWN - "$line
done < $1
Thank you!

You can do it entirely without brackets:
while read -r address; do
    ping -c 1 $address >/dev/null 2>&1 && echo "UP - $address" || echo "DOWN - $address"
done < file
The >/dev/null 2>&1 redirects both STDOUT and STDERR to /dev/null, meaning that whatever ping outputs won't be printed to your terminal.
You can then use the && and || operators to echo a message in case of success (ping exits with 0) or failure (ping exits with >0).
You could use if..then..else if you prefer:
while read -r address; do
    if ping -c 1 $address > /dev/null 2>&1; then
        echo "UP - $address"
    else
        echo "DOWN - $address"
    fi
done < file

arco444 has the right answer. Some other notes:
- With the form A && B || C, if A succeeds then B executes; if B then fails, C will also execute. That will not occur with if A; then B; else C; fi (see the sketch after this list).
- Proper indentation is extremely helpful for identifying errors (none here, but in general).
- Use better variable names: $ip is more meaningful than $line.
- Variables go inside the quotes: echo "UP - $line", not echo "UP - "$line.
- Parentheses introduce subshells, which will reduce performance. Use them only when necessary.
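A quick illustration of the first note, using true and false in place of real commands (a minimal sketch):
true && false || echo "C runs"                 # prints "C runs": A succeeded, but B failed, so C ran too
if true; then false; else echo "C runs"; fi    # prints nothing: only A's status selects the branch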

Related

Bash one-liner to list all occupied IPs in a CIDR?

I need a Bash one-liner that can print all occupied IPs in a CIDR block that I give it, and I have not managed to find out how to do it. I have a script that can do this, but I could not get it to run as a one-liner. The script:
#!/bin/sh
pingf(){
    if ping -w 2 -q -c 1 10.5.99."$1" > /dev/null ;
    then
        printf "IP %s is up\n" 10.5.99."$1"
    fi
}
main(){
    NUM=1
    while [ $NUM -lt 255 ]; do
        pingf "$NUM" &
        NUM=$(expr "$NUM" + 1)
    done
    wait
}
main
Any help will be appreciated!
Use a for loop with a sequence expression.
Replace the if statement with && to combine the ping and printf.
for ip in 10.5.99.{1..255}; do ping -w 2 -q -c 1 "$ip" >/dev/null && printf "IP %s is up\n" "$ip" & done
wait
Sequence expressions are a bash extension, so you'll need to change #!/bin/sh to #!/bin/bash.
Note that this just tests a single /24, not an arbitrary CIDR block. I can't think of a way to generalize this to CIDR blocks in a one-liner.
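It is not a one-liner, but for reference, a sketch that expands an arbitrary IPv4 CIDR block using bash integer arithmetic might look like the following. The function name cidr_ping is made up for illustration; it assumes a prefix of /30 or wider (so there are host addresses to scan) and it skips the network and broadcast addresses:
#!/bin/bash
# Sketch: expand an IPv4 CIDR block and ping every host address in it.
cidr_ping() {
    local cidr=$1 a b c d base size i n addr
    local ip=${cidr%/*} prefix=${cidr#*/}
    IFS=. read -r a b c d <<< "$ip"
    # Dotted quad -> 32-bit integer, masked down to the network address.
    base=$(( (a<<24 | b<<16 | c<<8 | d) & (0xFFFFFFFF << (32 - prefix)) ))
    size=$(( 1 << (32 - prefix) ))
    for (( i = 1; i < size - 1; i++ )); do   # skip network and broadcast
        n=$(( base + i ))
        printf -v addr '%d.%d.%d.%d' $(( n>>24 & 255 )) $(( n>>16 & 255 )) $(( n>>8 & 255 )) $(( n & 255 ))
        ping -w 2 -q -c 1 "$addr" >/dev/null && printf 'IP %s is up\n' "$addr" &
    done
    wait
}
cidr_ping 10.5.99.0/24   # same /24 as the example above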

Unable to use variable values outside of a while loop in shell script [duplicate]

Please explain to me why the very last echo statement is blank. I expect XCODE to be incremented in the while loop to a value of 1:
#!/bin/bash
OUTPUT="name1 ip ip status" # normally the output of another command with multi-line output
if [ -z "$OUTPUT" ]
then
    echo "Status WARN: No messages from SMcli"
    exit $STATE_WARNING
else
    echo "$OUTPUT"|while read NAME IP1 IP2 STATUS
    do
        if [ "$STATUS" != "Optimal" ]
        then
            echo "CRIT: $NAME - $STATUS"
            echo $((++XCODE))
        else
            echo "OK: $NAME - $STATUS"
        fi
    done
fi
echo $XCODE
I've tried using the following statement instead of the ++XCODE method
XCODE=`expr $XCODE + 1`
and it too won't print outside of the while statement. I think I'm missing something about variable scope here, but the ol' man page isn't showing it to me.
Because you're piping into the while loop, a sub-shell is created to run the while loop.
Now this child process has its own copy of the environment and can't pass any
variables back to its parent (as in any unix process).
Therefore you'll need to restructure so that you're not piping into the loop.
Alternatively you could run in a function, for example, and echo the value you
want returned from the sub-process.
http://tldp.org/LDP/abs/html/subshells.html#SUBSHELL
The problem is that processes put together with a pipe are executed in subshells (and therefore have their own environment). Whatever happens within the while does not affect anything outside of the pipe.
Your specific example can be solved by rewriting the pipe to
while ... do ... done <<< "$OUTPUT"
or perhaps
while ... do ... done < <(echo "$OUTPUT")
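Applied to the script in the question, a minimal rewrite using the here-string might look like this (same logic, just no pipe, so XCODE survives the loop):
#!/bin/bash
OUTPUT="name1 ip ip status"
XCODE=0
while read -r NAME IP1 IP2 STATUS
do
    if [ "$STATUS" != "Optimal" ]
    then
        echo "CRIT: $NAME - $STATUS"
        XCODE=$((XCODE + 1))
    else
        echo "OK: $NAME - $STATUS"
    fi
done <<< "$OUTPUT"
echo $XCODE   # now prints 1, because no subshell was involved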
This should work as well (because the echo and the while are in the same subshell):
#!/bin/bash
cat /tmp/randomFile | (while read line
do
    LINE="$LINE $line"
done && echo $LINE )
One more option:
#!/bin/bash
cat /some/file | while read line
do
    var="abc"
    echo $var | xsel -i -p # redirect stdin to the X primary selection
done
var=$(xsel -o -p) # redirect back to stdout
echo $var
EDIT:
Here, xsel is a requirement (install it).
Alternatively, you can use xclip:
xclip -i -selection clipboard
instead of
xsel -i -p
I got around this when I was making my own little du:
ls -l | sed '/total/d ; s/ */\t/g' | cut -f 5 |
( SUM=0; while read SIZE; do SUM=$(($SUM+$SIZE)); done; echo "$(($SUM/1024/1024/1024))GB" )
The point is that I make a subshell with ( ) containing my SUM variable and the while, but I pipe into the whole ( ) instead of into the while itself, which avoids the gotcha.
#!/bin/bash
 OUTPUT="name1 ip ip status"
+export XCODE=0
 if [ -z "$OUTPUT" ]
 ...
         echo "CRIT: $NAME - $STATUS"
-        echo $((++XCODE))
+        export XCODE=$(( $XCODE + 1 ))
 else
 ...
 echo $XCODE
see if those changes help
Another option is to output the results into a file from the subshell and then read it back in the parent shell. Something like:
#!/bin/bash
EXPORTFILE=/tmp/exportfile${RANDOM}
cat /tmp/randomFile | while read line
do
    LINE="$LINE $line"
    echo $LINE > $EXPORTFILE
done
LINE=$(cat $EXPORTFILE)

Searching for a substring in a bash script will not work

I have been writing a bash script, called in my .bashrc file, to print the result of whatis for a random command in my /usr/bin folder. I wanted to exclude commands that returned "nothing appropriate" in the result, but even when I use grep, wc, expr, or ==, nothing seems to work. I have pretty much used every example here and here with no progress. This is what I have so far, but it fails to do what I want when it finds something that contains "nothing appropriate." If anyone could figure out how to get it to work, or suggest what a good solution would be in this situation, I would be grateful.
#! /bin/bash
echo "Did you know that:";
while :
do
    RESULT=$(whatis $(ls /usr/bin | shuf -n 1))
    if [[ $RESULT != *"nothing appropriate"* ]]
    then
        echo $RESULT
        break
    fi
done
whatis prints the "nothing appropriate" message on the standard error stream. This stream is not captured by the $( ). That is the reason for your issue.
This is a way to fix it:
#! /bin/bash
echo "Did you know that:";
while :
do
    RESULT=$(whatis $(ls /usr/bin | shuf -n 1) 2>&1 | cat - )
    if [[ $RESULT != *"nothing appropriate"* ]]
    then
        echo $RESULT
        break
    fi
done
The 2>&1 | cat - addition does the trick.
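For what it's worth, the | cat - part looks redundant; the 2>&1 alone should merge whatis's stderr into the captured output. A minimal variant, assuming nothing else depends on the pipe:
RESULT=$(whatis $(ls /usr/bin | shuf -n 1) 2>&1)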

I keep getting a 'while syntax' error on the output of the at job in Unix and I have no idea why

#!/usr/dt/bin/dtksh
while getopts w:m: option
do
    case $option in
        w) wflag=1
           wval="$OPTARG";;
        m) mflag=1
           mval="$OPTARG";;
        ?) printf 'BAD\n' $0
           exit 2;;
    esac
done
if [ ! -z "$wflag" ]; then
    printf "W and -w arg is $wval\n"
fi
if [ ! -z "$mflag" ]; then
    printf "M and -m arg is $mval\n"
fi
shift $(($OPTIND - 1))
printf "Remaining arguments are: $* \n"
at $wval <<ENDMARKER
echo $* >> Search_List
tr " " "\n" <Search_List >Usr_List
while true; do
    if [ -s Usr_List ]; then
        for i in $(cat Usr_List); do
            if finger -m | grep $i; then
                echo '$i is online' | elm user
                sed '/$i/d' <Usr_List >tmplist
                mv tmplist Usr_List
            fi
        done
    else
        break
    fi
done
ENDMARKER
Essentially I want the script to keep searching through the list until it is empty. Each time an element of the list is found, it is deleted. Once the list is empty, quit.
There are no error messages when I first run the command, it only shows up in an email containing the output of the at job.
Thanks in advance for any advice
EDIT: The script uses getopts and takes one argument for -w and one for -m, the w value is set as the time for the at job, the m still has to be used. Any arguments after the one for m are sent to a file called Search_List, Search_List is edited and saved as Usr_List. Then in the while loop, while Usr_List is not empty, the script checks the results of finger -m against the names in Usr_List. If a name is found, it is removed from Usr_List. Once Usr_List is empty, the program should stop.
elm is a way to send an email, so elm user sends an email to user.
The error is:
while: Expression syntax
at uses /bin/sh by default.
at now <<ENDMARKER
<code here>
ENDMARKER
All of this executes under /bin/sh, which on some systems can be the Bourne shell (Solaris, for example).
You need to figure out what /bin/sh is on your system, then modify things accordingly. Plus, read the guarantees about what is and is not in your at environment; I think the problem lies there. You have both the UNIX and Linux tags, so I cannot give much more help than that.
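To see what /bin/sh actually is on your machine, something like this usually tells you (the symlink target varies by system, and readlink -f is a GNU extension):
ls -l /bin/sh        # often a symlink, e.g. to dash, bash, or ksh
readlink -f /bin/sh  # resolve the link chain to the real binary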
You can enable logging -- the way YOU need it -- of the at code chunk:
exec > /tmp/somefile.log 2>&1
Then write debugging messages to stdout or stderr.
Your HEREDOC is being interpolated. Try quoting the delimiter:
at $wval << 'ENDMARKER'
Although (I haven't looked closely) it appears that you want some interpolation. But you definitely do not want it on the line in which you reference $i, so escape that $ if you do not quote the entire heredoc:
if finger -m | grep \$i; then
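A quick demonstration of the difference, runnable in any Bourne-like shell; the unquoted delimiter expands variables before the heredoc body is handed to the command, while the quoted one passes them through literally:
i=outer
cat <<EOF
unquoted: $i
EOF
cat <<'EOF'
quoted: $i
EOF
This prints "unquoted: outer" followed by "quoted: $i".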
You need to pass the -k option to at:
...
at -k $wval <<ENDMARKER
...
at is otherwise defaulting to your login shell, which is csh or one of its derivatives.
It turns out that the while command and the if command needed to be combined.
while [[ -s Usr_List ]]; do
    ......
done

Is it possible to make a bash shell script interact with another command line program?

I am using an interactive command line program in a Linux terminal running the bash shell. I have a definite sequence of commands that I input to the program. The program writes its output to standard output. One of these commands is a 'save' command that writes the output of the previously run command to a file on disk.
A typical cycle is:
$prog
$$cmdx
$$<some output>
$$save <filename>
$$cmdy
$$<again, some output>
$$save <filename>
$$q
$<back to bash shell>
$ is the bash prompt
$$ is the program's prompt
q is the quit command for prog
prog is such that it appends the output of the previous command to filename
How can I automate this process? I would like to write a shell script that can start this program, cycle through the steps by feeding it the commands one by one, and then quit, with the save command still working correctly.
If your command doesn't care how fast you give it input, and you don't really need to interact with it, then you can use a heredoc.
Example:
#!/bin/bash
prog <<EOD
cmdx
save filex
cmdy
save filey
q
EOD
If you need branching based on the output of the program, or if your program is at all sensitive to the timing of your commands, then Expect is what you want.
I recommend you use Expect. This tool is designed to automate interactive shell applications.
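For the session in the question, a minimal Expect sketch might look like this (untested; assumes expect is installed and that prog really prompts with a literal $$; the heredoc is quoted so bash leaves $$ alone):
#!/bin/bash
# Drive prog with expect, reading the expect commands from stdin.
expect <<'EOF'
spawn prog
expect "$$"
send "cmdx\r"
expect "$$"
send "save filex\r"
expect "$$"
send "q\r"
expect eof
EOF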
Where there's a need, there's a way! I think that it's a good bash lesson to see
how process management and ipc works. The best solution is, of course, Expect.
But the real reason is that pipes can be tricky and many commands are designed
to wait for data, meaning that the process will become a zombie for reasons that
may be difficult to predict. But learning how and why reminds us of what is
going on under the hood.
When two processes engage in a conversation, the danger is that one or both will
try to read data that will never arrive. The rules of engagement have to be
crystal clear. Things like CRLF and character encoding can kill the party.
Luckily, two close partners like a bash script and its child process are
relatively easy to keep in line. The easiest thing to miss is that bash is
launching a child process for just about everything it does. If you can make it
work with bash, you thoroughly know what you're doing.
The point is that we want to talk to another process. Here's a server:
# a really bad SMTP server
# a hint at courtesy to the client
shopt -s nocasematch
echo "220 $HOSTNAME SMTP [$$]"
while true
do
    read
    [[ "$REPLY" =~ ^helo\ [^\ ] ]] && break
    [[ "$REPLY" =~ ^quit ]] && echo "Later" && exit
    echo 503 5.5.1 Nice guys say hello.
done
NAME=`echo "$REPLY" | sed -r -e 's/^helo //i'`
echo 250 Hello there, $NAME
while read
do
    [[ "$REPLY" =~ ^mail\ from: ]] && { echo 250 2.1.0 Good guess...; continue; }
    [[ "$REPLY" =~ ^rcpt\ to: ]] && { echo 250 2.1.0 Keep trying...; continue; }
    [[ "$REPLY" =~ ^quit ]] && { echo Later, $NAME; exit; }
    echo 502 5.5.2 Please just QUIT
done
echo Pipe closed, exiting
Now, the script that hopefully does the magic.
# Talk to a subprocess using named pipes
rm -fr A B # don't use old pipes
mkfifo A B
# server will listen to A and send to B
./smtp.sh < A > B &
# If we write to A, the pipe will be closed.
# That doesn't happen when writing to a file handle.
exec 3>A
read < B
echo "$REPLY"
# send an email, so long as response codes look good
while read L
do
    echo "> $L"
    echo $L > A
    read < B
    echo $REPLY
    [[ "$REPLY" =~ ^2 ]] || break
done <<EOF
HELO me
MAIL FROM: me
RCPT TO: you
DATA
Subject: Nothing
Message
.
EOF
# This is tricky, and the reason sane people use Expect. If we
# send QUIT and then wait on B (ie. cat B) we may have trouble.
# If the server exits, the "Later" response in the pipe might
# disappear, leaving the cat command (and us) waiting for data.
# So, let cat have our STDOUT and move on.
cat B &
# Now, we should wait for the cat process to get going before we
# send the QUIT command. If we don't, the server will exit, the
# pipe will empty and cat will miss its chance to show the
# server's final words.
echo -n > B # also, 'sleep 1' will probably work.
echo "> quit"
echo "quit" > A
# close the file handle
exec 3>&-
rm A B
Notice that we are not simply dumping the SMTP commands on the server. We check
each response code to make sure things are OK. In this case, things will not be
OK and the script will bail.
I use Expect to interact with the shell for switch and router backups. A bash script calls the expect script with the correct variables.
for i in <list of machines>; do expect_script.sh "$i"; done
This will ssh to each box, run the backup commands, copy out the appropriate files, and then move on to the next box.
For simple use cases you may use a combination of a subshell, echo, and sleep:
# in Terminal.app
telnet localhost 25
helo localhost
ehlo localhost
quit
(sleep 5; echo "helo localhost"; sleep 5; echo "ehlo localhost"; sleep 5; echo quit ) |
telnet localhost 25
echo "cmdx\nsave\n...etc..." | prog
..?
