Bash Script: options - case: range of multiple numbers - linux

I'm working on a script in Linux Bash, with different kinds of options to use.
Basically, the program is going to ping the given IP address.
Now, I want to enable the user to pass a range of IP addresses in the terminal, which the program will then ping.
E.g.: bash pingscript 25 - 125
The script will then ping all the addresses between 192.168.1.25 and 192.168.1.125.
That's not too hard; I just need to write a little case with
[0-9]-[0-9] ) ping (rest of code)
Now the problem is: this piece of code will only let me ping single-digit numbers, e.g. 0 - 9, and not 10 - 25.
For that I'd need to write:
[0-9][0-9]-[0-9][0-9] (e.g.: ping 25 - 50)
But then there's the possibility of having one digit on one side and two on the other: [0-9]-[0-9][0-9] (e.g.: ping 1 - 25)
or: [0-9]-[0-9][0-9][0-9] (e.g.: ping 1 - 125)
and so on... That means there are a lot of possibilities.
There's probably another way to write it, but how?
I also don't want any letters in the arguments, but I can't even begin to cover that with this approach of looping through every pattern.

How about this:
for i in 192.168.1.{25..125}; do ping -qnc1 $i; done
Or in a script with variables as arguments:
for i in $(seq -f "192.168.1.%g" $1 $2); do ping -qnc1 -W1 $i; done
Where the first argument is the number where to begin and the second argument where to end. Call the script like this:
./script 25 125
The ping options:
-q: quiet output; ping prints only the summary
-n: no DNS lookups
-c1: send only 1 packet
-W1: timeout of 1 second (can be increased, of course)

You can use extended pattern matching in your script by enabling the extglob shell option:
shopt -s extglob
So you can use a parenthesized pattern list with the + quantifier (meaning "one or more") like this:
#!/bin/bash
shopt -s extglob
case $1 in
+([0-9])-+([0-9]) )
# do stuff
;;
esac
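Putting the two answers together, here is a minimal sketch, assuming the 192.168.1.x prefix from the question; `ping_range` and the `PING_CMD` override are names invented for this example (set `PING_CMD=echo` for a dry run):

```shell
#!/bin/bash
# Sketch: validate "START-END" with an extglob pattern, then ping the range.
# PING_CMD is a hypothetical hook so the loop can be dry-run with echo.
shopt -s extglob

ping_range() {
  case $1 in
    +([0-9])-+([0-9]) )
      local start=${1%%-*}   # digits before the dash
      local end=${1##*-}     # digits after the dash
      local i
      for i in $(seq "$start" "$end"); do
        ${PING_CMD:-ping -qnc1 -W1} "192.168.1.$i"
      done
      ;;
    * )
      echo "usage: ping_range START-END" >&2
      return 1
      ;;
  esac
}
```

Called as `PING_CMD=echo ping_range 25-125` it only prints the addresses; without `PING_CMD` it pings them, and anything containing letters falls through to the usage message.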

Related

Pass variables out of an interactive session from bash script

Hello People of the world,
I am trying to write a script that will allow user to failover apps between sites in bash.
Our applications are controlled by Pacemaker and I thought I would be able to write a function that would take in the necessary variables and act. Stop on one site, start on another. Once I have ssh'd to the remote machine, I am unable to get the value of the grep/awk command back for the status of the application in PCS.
I am encountering a few issues, and have tried answers from stackoverflow and other sites.
I send the ssh command to /dev/null 2>&1 as banners pop up on screen that unix admin have on the local user and -q does not deal with it - Does this stop anything being returned?
when using awk '{print \\\\\\$4}' in the code, I get a "backslash not last character on line" error
To get round this, I tried result=$(sudo pcs status | grep nds_$resource), however this resulted in a password error on sudo
I have tried >/dev/tty and >$(tty)
I tried to not suppress the ssh (remove /dev/null 2>&1) and put the output in variable at function call, removing the awk from the sudo pcs status entry.
result=$(pcs_call "$site1" "1" "2" "disable" "pmr")
echo $result | grep systemd
This was OK, but when I added | awk '{print \\\$4}' I then got the fourth word in the banner.
Any help would be appreciated as I have been going at this for a few days now.
I have been looking at this answer from Bruno, but unsure how to implement as I have multiple sudo commands.
Below is my stripped-down version of the function code, for testing on one machine;
site1=lon
site2=ire
function pcs_call()
{
site=$1
serverA=$2
serverB=$3
activity=$4
resource=$5
ssh -tt ${site}servername0${serverA} <<SSH > /dev/null 2>&1
sudo pcs resource ${activity} proc_${resource}
sleep 10
sudo pcs status | grep proc_$resource | awk '{print \\\$4}' | tee $output
exit
SSH
echo $output
}
echo ====================================================================================
echo Shutting Down PMR in $site1
pcs_call "$site1" "1" "2" "disable" "pmr"
I'd say start by pasting the whole thing into ShellCheck.net and fixing errors until there are no suggestions, but there are some serious issues here that shellcheck is not going to be able to handle alone.
> /dev/null says "throw away into the bitbucket any data that is returned". 2>&1 says "send any useful error reporting on stderr wherever stdout is going". Your initial statement, intended to retrieve information from a remote system, is immediately discarding it. Unless you just want something to occur on the remote system that you don't need to know more about locally, you're wasting your time with anything after that, because you've dumped whatever it had to say.
You only need one backslash in that awk statement to quote the dollar sign on $4.
Unless you have passwordless sudo on the remote system, this is not going to work out for you. I think we need more info on that before we discuss it any deeper.
As long as the ssh call is throwing everything to /dev/null, nothing inside the block of code being passed is going to give you any results on the calling system.
In your code you are using $output, but it looks as if you intend for tee to be setting it? That's not how that works. tee's argument is a filename into which it expects to write a copy of the data, which it also streams to stdout (tee as in a "T"-joint, in plumbing) but it does NOT assign variables.
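A quick local illustration of that difference (the temp file comes from mktemp; nothing here is from the original script):

```shell
# tee's argument is a FILE it writes into; it does not assign variables.
f=$(mktemp)
echo hello | tee "$f" >/dev/null   # a copy of "hello" lands in the file
# Capturing output into a variable is done with command substitution:
msg=$(cat "$f")
echo "$msg"
rm -f "$f"
```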
(As an aside, you aren't even using serverB yet, but you can add that back in when you get past the current issues.)
At the end you echo $output, which is probably empty, so it's basically just echo which won't send anything but a newline, which would just be sent back to the origin server and dumped in /dev/null, so it's all kind of pointless....
Let's clean up
sudo pcs status | grep proc_$resource | awk '{print \\\$4}' | tee $output
and try it a little differently, yes?
First, I'm going to assume you have passwordless sudo, otherwise there's a whole other conversation to work that out.
Second, it's generally an antipattern to use both grep AND awk in a pipeline, as they are both basically regex engines at heart. Choose one. If you can make grep do what you want, it's pretty efficient. If not, awk is super flexible. Please read the documentation pages on the tools you are using when something isn't working. A quick search for "bash man grep" or "awk manual" will quickly give you great resources, and you're going to want them if you're trying to do things this complex.
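To illustrate that point with a line shaped like pcs status output (the resource and node names here are made up):

```shell
# A fake pcs status line for demonstration purposes only.
line='proc_pmr (systemd:pmr): Started lon1'
# Two regex engines in a row:
echo "$line" | grep proc_pmr | awk '{ print $4 }'
# One awk doing both the filtering and the field extraction:
echo "$line" | awk '/proc_pmr/ { print $4 }'
```

Both pipelines print the fourth field, but the second one does it with a single process.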
So, let's look at a rework, making some assumptions...
function pcs_call() {
local site="$1" serverA="$2" activity="$3" resource="$4" # make local and quotes habits you only break on purpose
ssh -qt ${site}servername0${serverA} "
sudo pcs resource ${activity} proc_${resource}; sleep 10; sudo pcs status;
" 2>&1 | awk -v resource="$resource" '$0~"proc_"resource { print $4 }'
}
pcs_call "$site1" 1 disable pmr # should print the desired field
If you want to catch the data in a variable to use later -
var1="$( pcs_call "$site1" 1 disable pmr )"
addendum
Addressing your question - use $(seq 1 10) or just {1..10}.
ssh -qt chis03 '
for i in {1..10}; do sudo pcs resource disable ipa $i; done;
sleep 10; sudo pcs status;
' 2>&1 | awk -v resource=ipa '$0~"proc_"resource { print $2" "$4 }'
It's reporting the awk output first because the start order of elements in a pipeline is unspecified, but the stdout of the ssh is plugged into the stdin of the awk (and since stderr was duped to stdout, so is the stderr), so they are running asynchronously/simultaneously.
Yes, since these are using literals, single quotes is simpler and effectively "better". If abstracting with vars, it doesn't change much, but switch back to double quotes.
# assuming my vars (svr, verb, target) preset in the context
ssh -qt $svr "
for i in {1..10}; do sudo pcs resource $verb $target \$i; done;
sleep 10; sudo pcs status;
" 2>&1 | awk -v resource="$target" '$0~"proc_"resource { print $2" "$4 }'
Does that help?
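The awk -v matching used in these snippets can be exercised locally, without ssh; the sample lines below are invented to resemble pcs status output plus a login banner:

```shell
# Feed awk a fake status line and a banner line; only the matching
# line's 2nd and 4th fields come back.
printf '%s\n' 'proc_ipa (systemd:ipa): Started ire1' 'login banner text' |
  awk -v resource=ipa '$0~"proc_"resource { print $2" "$4 }'
```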

CRON on Rpi simply will not run

I am using a Raspberry pi.
I need to turn on a LED whenever I'm connected to the net and turn off the LED if the connection ever fails. I want to use a cron job running once per minute to do this.
I wrote and compiled two programs in 'C' (ledon, ledoff) that handles the GPIO pin. Those programs work.
I am logged in as 'pi'.
I used crontab -e to write the following:
*/1 * * * * /home/pi/cron_scripts/nettest
I was informed by someone that the first asterisk must have '/1' in order to run properly at the once-per-minute rate that I want. There is no space to the left of the first '/1' and one space after the '1' and each '*' thereafter.
FOR TESTING ONLY, The contents of /home/pi/cron_scripts/nettest is -
#!/bin/bash
ping -c 1 -q 8.8.8.8
if [ "$?" -eq 0 ]; then
printf "%s\n\n" "SUCCESS\n"
else
printf "%s\n\n" "FAIL\n"
fi
exit 0
I used sudo chmod +x /home/pi/cron_scripts/nettest
to make the script executable.
I will replace the printf lines with "ledon" and "ledoff" for the final version.
BUT IT WILL NOT RUN!
echo $(ping -c 1 -q 8.8.8.8)
etc.

Ping with a variable in linux script

I want my script to ping the IP addresses
192.168.0.45
192.168.0.17
192.168.0.108
by doing this:
bash Script.sh 45 17 108
I want to pass the last octets to the script so it pings these IP addresses.
I don't know how to do this. Do I have to work with a 'case' in a do-while loop, or something else?
#!/bin/bash
for i in $*; do
ping 192.168.0.$i
done
I want to pass the last octets to the script so it pings these IP addresses.
I presume you want to ping the addresses simultaneously. In that case you can do this:
Script.sh:
#!/bin/bash
ping 192.168.0.$1 & ping 192.168.0.$2 & ping 192.168.0.$3 &
This will send all of the three ping commands to background where they will be executed simultaneously and print continuous output on terminal.
You can do this with a for loop too:
#!/bin/bash
for i in $*;do
ping 192.168.0.$i &
done
The for-loop method can take any number of arguments.
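A slightly hardened variant of that loop, as a sketch: "$@" preserves the arguments verbatim, and wait keeps the script alive until every background ping exits. `ping_all` and the `PING_CMD` hook are names invented here (set `PING_CMD=echo` to dry-run):

```shell
#!/bin/bash
# Ping all given last octets in parallel, then wait for every ping to finish.
# PING_CMD is a hypothetical override for testing without network access.
ping_all() {
  local i
  for i in "$@"; do
    ${PING_CMD:-ping} "192.168.0.$i" &
  done
  wait
}
```

Usage: `ping_all 45 17 108`.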

ping + how to minimize the time of the ping command

I want to create a bash script that will verify a list of IPs by ping.
The problem is that pinging any address takes a few seconds (when there is no answer), even though I defined the ping as follows:
ping -c 1 126.78.6.23
The example above performs the ping only once, but the problem is the time spent waiting for ping to finish when there is no answer.
In my case this is critical because I need to check more than 150 IPs (usually more than 90% of the IPs are not alive),
so to check 150 IPs I need more than 500 seconds.
Please advise if there is a good way to perform the pings quickly.
Remark: my script needs to run on both OSes (Linux and Solaris).
The best idea is to run ping in parallel
and then save the result in a file.
In that case your script will run no longer than about a second.
for ip in $(< list)
do
( ping -c1 "$ip" || echo "$ip" >> not-reachable ) &
done
Update: in Solaris, -c has a different meaning, so on Solaris you need to
run ping another way:
ping $ip 57 1
(Here, 57 is the size of the packet and 1 is the number of packets to be sent.)
Ping's syntax in Solaris:
/usr/sbin/ping -s [-l | -U] [-adlLnrRv] [-A addr_family]
[-c traffic_class] [-g gateway [ -g gateway...]]
[-F flow_label] [-I interval] [-i interface] [-P tos]
[-p port] [-t ttl] host [data_size] [npackets]
You can make a function that aggregates the two methods:
myping()
{
if [ "$(uname)" = Linux ]; then
ping -c 1 "$1"
else
ping "$1" 57 1
fi
}
for ip in $(< list)
do
( myping "$ip" || echo "$ip" >> not-reachable ) &
done
Another option, don't use ping directly but use ICMP module from some language.
You can use, for example, Perl with the Net::Ping module (note that the "icmp" protocol requires root privileges):
perl -e 'use Net::Ping; my $host = shift; my $p = Net::Ping->new("icmp", 0.5) or die "bye"; print "$host is alive\n" if $p->ping($host); $p->close;' 126.78.6.23
Does Solaris ship with coreutils OOTB these days? Then you can use timeout to specify an upper limit:
timeout 0.2s ping -c 1 www.doesnot.exist >/dev/null 2>&1
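A sketch of how timeout's exit status can drive the aliveness check; status 124 means the time limit was hit. The `alive` helper is a name invented here, and the sleep below just demonstrates the status without touching the network:

```shell
#!/bin/bash
# Wrap ping in coreutils timeout; success within the limit means an answer.
alive() {
  timeout 0.5 ping -c 1 -q "$1" >/dev/null 2>&1
}
# timeout exits with status 124 when it has to kill the command:
timeout 0.2 sleep 5
echo "status: $?"
```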
You could use hping3, which is scriptable (in Tcl).
As already stated, a simple way to overcome the timing issue is to run the ping commands in parallel.
You already have the syntax for Linux (iputils) ping.
With Solaris, the proper option to send a single ping would be
ping -s 126.78.6.23 64 1
Installing nmap from sources would provide a more powerful alternative though.

Parallel SSH with Custom Parameters to Each Host

There are plenty of threads and documentation about parallel ssh, but I can't find anything on passing custom parameters to each host. Using pssh as an example, the hosts file is defined as:
111.111.111.111
222.222.222.222
However, I want to pass custom parameters to each host via a shell script, like this:
111.111.111.111 param1a param1b ...
222.222.222.222 param2a param2b ...
Or, better, the hosts and parameters would be split between 2 files.
Because this isn't common, is this misuse of parallel ssh? Should I just create many ssh processes from my script? How should I approach this?
You could use GNU parallel.
Suppose you have a file argfile:
111.111.111.111 param1a param1b ...
222.222.222.222 param2a param2b ...
Then running
parallel --colsep ' ' ssh {1} prog {2} {3} ... :::: argfile
would run prog on each host with the corresponding parameters. It is important that the number of parameters be the same for each host.
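If GNU parallel is not available, a plain-bash sketch can read the same argfile and fan the calls out itself. `fanout` and the `SSH_CMD` hook are names invented for this example (set `SSH_CMD=echo` to dry-run), and prog stands for whatever remote command you run:

```shell
#!/bin/bash
# Launch one ssh per argfile line in the background, then wait for all.
# SSH_CMD is a hypothetical override so the fan-out can be tested locally.
fanout() {
  local host p1 p2 rest
  while read -r host p1 p2 rest; do
    ${SSH_CMD:-ssh} "$host" prog "$p1" "$p2" &
  done < "$1"
  wait
}
```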
Here is a solution that you can use, after tailoring it to suit your needs:
#!/bin/bash
#filename: example.sh
#usage: ./example.sh <par1> <par2> <par3> ... <par6>
#set your ip addresses
traf1=1.1.1.1
traf2=2.2.2.2
traf3=3.3.3.3
#set some custom parameters for your scripts and use them as you wish.
#In this example, I use the first 6 command-line parameters passed when running example.sh
ssh -T $traf1 -l username "/export/home/path/to/script.sh $1 $2" 1>traf1.txt 2>/dev/null &
echo "Fetching data from traffic server 2..."
ssh -T $traf2 -l username "/export/home/path/to/script.sh $3 $4" 1> traf2.txt 2>/dev/null &
echo "Fetching data from traffic server 3..."
ssh -T $traf3 -l username "/export/home/path/to/script.sh $5 $6" 1> traf3.txt 2>/dev/null &
#your application will block on this line, and will only continue if all
#3 remotely executed scripts will complete
wait
Keep in mind that the above requires that you set up passwordless login between the machines; otherwise the solution will break by prompting for password input.
If you can use Perl:
use Net::OpenSSH::Parallel;
use Data::Dumper;
my $pssh = Net::OpenSSH::Parallel->new;
$pssh->add_host('111.111.111.111');
$pssh->add_host('222.222.222.222');
$pssh->push('111.111.111.111', $cmd, $param11, $param12);
$pssh->push('222.222.222.222', $cmd, $param21, $param22);
$pssh->run;
if (my %errors = $pssh->get_errors) {
print STDERR "ssh errors:\n", Dumper \%errors;
}
