ping + how to minimize the time of the ping command - linux

I want to create a bash script that will verify a list of IPs by ping.
The problem is that pinging an address takes a few seconds when there is no answer, even though I invoke ping as follows:
ping -c 1 126.78.6.23
The example above sends only one ping, but the problem is the time: it still waits a few seconds before giving up when there is no answer.
In my case this is critical, because I need to check more than 150 IPs (usually more than 90% of them are not alive),
so checking 150 IPs takes more than 500 seconds.
Please advise if there is a good way to perform the pings quickly.
Remark: my script needs to run on both Linux and Solaris.

The best idea is to run the pings in parallel
and then save the results to a file.
That way the loop itself finishes almost immediately, and the whole batch takes roughly the time of a single ping timeout instead of 150 of them.
for ip in $(< list)
do
    ( ping -c 1 "$ip" > /dev/null || echo "$ip" >> not-reachable ) &
done
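If the rest of the script needs the results, add a wait after the loop so every background ping has finished before not-reachable is read:
wait
cat not-reachable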
Update: on Solaris, -c has a different meaning, so there you need to run ping another way:
ping -s $ip 57 1
(Here, 57 is the size of the packet and 1 is the number of packets to be sent; note the -s, since per the synopsis below data_size and npackets are only accepted in that mode.)
Ping's syntax in Solaris:
/usr/sbin/ping -s [-l | -U] [-adlLnrRv] [-A addr_family]
[-c traffic_class] [-g gateway [ -g gateway...]]
[-F flow_label] [-I interval] [-i interface] [-P tos]
[-p port] [-t ttl] host [data_size] [npackets]
You can make a function that wraps the two variants:
myping()
{
    if [ "$(uname)" = Linux ]; then
        ping -c 1 "$1"
    else
        ping -s "$1" 57 1
    fi
}
for ip in $(< list)
do
    ( myping "$ip" > /dev/null || echo "$ip" >> not-reachable ) &
done
Another option: don't use ping directly, but an ICMP module from some language.
For example, Perl with the Net::Ping module (ICMP mode needs root):
perl -e 'use Net::Ping; $host = shift; $p = Net::Ping->new("icmp", 0.5) or die "bye"; print "$host is alive\n" if $p->ping($host); $p->close;' 126.78.6.23
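A sketch wiring that into the same parallel loop as above (assuming perl with Net::Ping is installed on both systems and the script runs as root for raw ICMP):
for ip in $(< list)
do
    ( perl -e 'use Net::Ping; $p = Net::Ping->new("icmp", 0.5) or die; exit($p->ping(shift) ? 0 : 1)' "$ip" || echo "$ip" >> not-reachable ) &
done
wait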

Does Solaris ship with GNU coreutils out of the box these days? If so, you can use timeout to put an upper limit on each ping:
timeout 0.2s ping -c 1 www.doesnot.exist >/dev/null 2>&1
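Combined with the parallel loop from the first answer, a sketch might look like this (assuming timeout is on the PATH of both systems, which is the open question above):
for ip in $(< list)
do
    ( timeout 0.2s ping -c 1 "$ip" > /dev/null 2>&1 || echo "$ip" >> not-reachable ) &
done
wait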

You could use hping3, which is scriptable (in Tcl).

As already stated, a simple way to overcome the timing issue is to run the ping commands in parallel.
You already have the syntax for Linux (iputils) ping.
With Solaris, the proper option to send a single ping would be
ping -s 126.78.6.23 64 1
Installing nmap from source would provide a more powerful alternative, though.
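For example, one nmap ping sweep can test the whole list in a single run (a sketch: -sn does host discovery only, -iL reads targets from a file; on older nmap versions -sn is spelled -sP):
nmap -sn -iL list -oG - | awk '/Status: Up/ {print $2}' > alive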

Related

How can I conduct Syn flood attack with incremental packet size using hping3

I am conducting penetration testing. I am trying to increment the packet number without manually exiting the ping and pinging again. I tried with "sleep 5," but the ping doesn't end after 5 seconds; I have to press ^C before the incremented command executes. Any suggestions? My host and attacker are on separate virtual machines.
for i in {1..10000}; do sudo hping3 -c $i -d 120 -S -w 64 -p <port_number> --flood --rand-source <ip_address> --traceroute; date; sleep 5; done
Edit: for those facing the same problem, use timeout.
In my case: for i in {1..1000}; do sudo timeout 60 hping3 -c $i -d 120 -S -w 64 -p 80 --flood --rand-source 192.168.189.135 --tr-stop; date; sleep 1; done

Stopping the Ping process in bash script?

I created a bash script to ping my local network to see which hosts are up, and I have a problem stopping the ping process with Ctrl+C once it has started.
Suspending it is the only thing I found that works, and even the kill command doesn't work with the PID of the ping.
submask=100
for i in ${submask -le 110}
do
ping -n 2 192.168.1.$submask
((submask++))
done
Ctrl+C exits the current ping, but then the next ping starts. So you can use trap.
#!/bin/bash
exit_()
{
    exit
}
trap exit_ INT   # install the handler once, before the loop
submask=100
while [ $submask -le 110 ]
do
    fping -c 2 192.168.77.$submask
    ((submask++))
done
I suggest you limit the number of packets sent by ping with the -c option.
I also corrected the bash syntax, guessing what you intended to do.
Finally, it is faster to run all the ping processes in parallel with &.
Try:
for submask in {100..110}
do
    ping -c 1 192.168.1.$submask &
done

bash ping ip run command on reply [duplicate]

This question already has answers here:
Checking host availability by using ping in bash scripts
(11 answers)
Closed 5 years ago.
I want to check the ping replies from two IP addresses, and if both are up, execute a command.
For example:
ping 8.8.8.8 on response do
ping 8.8.4.4 on response
execute command
Is there a simple bash script to do this?
According to the manpage on ping:
If ping does not receive any reply packets at all it will exit with code 1. If a packet count and deadline are both specified, and fewer than count packets are received by the time the deadline has arrived, it will also exit with code 1. On other error it exits with code 2. Otherwise it exits with code 0. This makes it possible to use the exit code to see if a host is alive or not.
Thus you can rely on the exit code to determine whether to continue in your script.
ping -c 1 8.8.8.8
if [ $? = 0 ]
then
    echo ok
else
    echo ng
fi
Try pinging only once with the -c 1 option; change the count to any number you like.
$? is the exit code of the previous command, so you can inspect ping's exit status through it.
Modify the snippet above to fit what you want.
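For the two-address case in the question, a minimal sketch chaining the exit codes with && (assuming Linux iputils ping, where -W sets the reply timeout in seconds; your-command is a placeholder):
if ping -c 1 -W 1 8.8.8.8 > /dev/null 2>&1 && ping -c 1 -W 1 8.8.4.4 > /dev/null 2>&1
then
    your-command
fi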
Bash commands to return yes or no if a host is up.
Try hitting a site that doesn't exist:
eric@dev ~ $ ping -c 1 does_not_exist.com > /dev/null 2>&1; echo $?
2
Try hitting a site that does exist:
eric@dev /var/www/sandbox/eric $ ping -c 1 google.com > /dev/null 2>&1; echo $?
0
If it returns 0, the host responded. Anything other than zero means the host was determined to be unreachable or down.

How to check if a server is running

I want to use ping to check whether a server is up. How would I do the following:
ping $URL
if [ $? -eq 0 ]; then
    echo "server live"
else
    echo "server down"
fi
How would I accomplish the above? Also, how would I make it return 0 upon the first ping response, or return an error if the first ten pings fail? Or is there a better way to accomplish what I am trying to do?
I'd recommend not relying on ping alone. It can tell you whether a server is online in general, but it cannot check a specific service on that server.
Better to use one of these alternatives:
curl
man curl
You can use curl and check the HTTP response code of a web service like this:
check=$(curl -s -w "%{http_code}\n" -L "${HOST}${PORT}/" -o /dev/null)
if [[ $check == 200 || $check == 403 ]]
then
# Service is online
echo "Service is online"
exit 0
else
# Service is offline or not working correctly
echo "Service is offline or not working correctly"
exit 1
fi
where
HOST = [ip or dns-name of your host]
(optional) PORT = [a port; don't forget the leading colon, e.g. :8080]
200 is the normal success response code
403 (Forbidden) is also acceptable, e.g. for a page behind a login, and most probably means the service runs correctly
-s silent or quiet mode
-L follow redirects (the Location header)
-w the format in which to display the response
-> %{http_code}\n: we only want the HTTP status code
-o the output file
-> /dev/null: redirect the page body to /dev/null so it isn't written to stdout or captured in the check variable; otherwise you would get the complete HTML source before the status code
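For instance, with hypothetical values for the placeholders above:
HOST="example.com"
PORT=":8080"   # optional; note the leading colon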
nc
man nc
While curl seems to me the best option for web services, since it actually checks that the service's page works correctly,
nc can be used to rapidly check only whether a specific port on the target is reachable (assuming this also means the service behind it is up).
The advantage here is the settable timeout of e.g. 1 second, while curl might take a bit longer to fail; and of course you can also check services that are not web pages, like port 22 for SSH.
nc -4 -d -z -w 1 ${HOST} ${PORT} &> /dev/null
if [[ $? == 0 ]]
then
# Port is reached
echo "Service is online!"
exit 0
else
# Port is unreachable
echo "Service is offline!"
exit 1
fi
where
HOST = [ip or dns-name of your host]
PORT = [the port; NOT optional here]
-4 force IPv4 (or -6 for IPv6)
-d do not attempt to read from stdin
-z only scan for a listening daemon, don't send any data
-w timeout
If a connection and stdin are idle for more than timeout seconds, the connection is silently closed (nc then exits 1 -> failure).
(optional) -n if you only use an IP: do not do any DNS or service lookups on any specified addresses, hostnames or ports
&> /dev/null don't print any output of the command
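As a usage sketch, the same check can loop over several ports on one host (the host and port list here are just examples):
HOST="example.com"
for PORT in 22 80 443
do
    if nc -4 -d -z -w 1 "$HOST" "$PORT" &> /dev/null
    then
        echo "port $PORT open"
    else
        echo "port $PORT closed or filtered"
    fi
done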
You can use something like this:
serverResponse=`wget --server-response --max-redirect=0 ${URL} 2>&1`
if [[ $serverResponse == *"Connection refused"* ]]
then
echo "Unable to reach given URL"
exit 1
fi
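To make the check fail fast rather than hang on an unresponsive host, wget's timeout options can be added (a sketch; --timeout covers the DNS, connect and read timeouts, --tries limits retries):
serverResponse=$(wget --server-response --max-redirect=0 --timeout=5 --tries=1 ${URL} 2>&1)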
Use the -c option with ping; it will ping the host only the given number of times (or until the deadline expires):
if ping -c 10 $URL; then
echo "server live"
else
echo "server down"
fi
Short form:
ping -c5 $SERVER || echo 'Server down'
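For the sub-question about returning success on the first reply and failing only after ten misses, a sketch (assuming Linux iputils ping; -W sets the per-attempt timeout in seconds):
for i in $(seq 1 10)
do
    if ping -c 1 -W 1 "$SERVER" > /dev/null 2>&1
    then
        echo 'server live'
        exit 0
    fi
done
echo 'server down'
exit 1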
Do you need it for some other script, or are you trying to put together a simple monitoring tool? In that case, you may want to take a look at Pingdom: https://www.pingdom.com/.
I use the following script function to check whether servers are online. It's useful when you want to check multiple servers. The function hides the ping output, and you can handle the server-live and server-down cases separately.
#!/bin/bash
# number of ping attempts per server
RETRYCOUNT=1
# pingServer: ping one host and report whether it is up
# $1: server hostname to ping
function pingServer {
    ping -c $RETRYCOUNT "$1" > /dev/null 2>&1
    if [ $? -ne 0 ]
    then
        echo "$1 down"
    else
        echo "$1 live"
    fi
}
#usage example, pinging some host
pingServer google.com
pingServer server1
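To check a whole list, a sketch reading hostnames from a file (servers.txt is a hypothetical one-host-per-line file):
while read -r host
do
    pingServer "$host"
done < servers.txt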
One good solution is to use MRTG (a simple graphing tool for *nix) with a ping-probe script; look it up on Google.
See the MRTG documentation to get started.

Parallel Processes Results Written To Single File

I am new to Linux and was recently introduced to &. I have to run several traceroutes and store them in a single file, and I am curious whether I can kick off these traceroutes in parallel.
I tried the following, but the results in the generated file are not kept apart; they appear interleaved, or at least that is how it looks to me.
traceroute -n -z 100 www.yahoo.com >> theLog.log &
traceroute -n -z 100 www.abc.com >> theLog.log &
Is what I am asking even possible? If so, what commands should I use?
Thanks for any direction.
Perhaps you could investigate parallel (and tell us about your experience)?
If you are on Ubuntu, you can run sudo apt-get install moreutils to obtain parallel.
If you want them to run in parallel, it is better to keep the intermediate results in separate files and join them at the end. The steps are: start each traceroute writing to its own log file and record its PID, wait for all of them to finish, then join the results, something like the following:
traceroute -n -z 100 www.yahoo.com > theLog.1.log & PID1=$!
traceroute -n -z 100 www.abc.com > theLog.2.log & PID2=$!
wait $PID1 $PID2
cat theLog.1.log theLog.2.log > theLog.log
rm theLog.1.log theLog.2.log
With the following command they are not really run in parallel, but you can continue using your terminal, and the results are kept apart:
{ traceroute -n -z 100 www.yahoo.com; traceroute -n -z 100 www.abc.com; } >> theLog.log &
As you have it written, the behavior is undefined: the two appends can interleave. You might try what enzotib posted, or have each traceroute write to its own file and cat them together at the end.
The traceroutes in @enzotib's answer are executed one at a time, in sequence.
You can execute the traceroutes in parallel using the parallel utility suggested by @rmk:
$ /usr/bin/time parallel traceroute -n -z 100 <hosts.txt >> parallel.log
24.78user 0.63system 1:24.04elapsed 30%CPU (0avgtext+0avgdata 37456maxresident)k
72inputs+72outputs (2major+28776minor)pagefaults 0swaps
The sequential equivalent is about 5 times slower:
$ /usr/bin/time ./sequential.sh
24.63user 0.51system 7:19.09elapsed 5%CPU (0avgtext+0avgdata 5296maxresident)k
112inputs+568outputs (1major+8759minor)pagefaults 0swaps
Where sequential.sh is:
#!/bin/bash
( while read host; do traceroute -n -z 100 $host; done; ) <hosts.txt >>sequential.log
And hosts.txt is:
www.yahoo.com
www.abc.com
www.google.com
stackoverflow.com
facebook.com
youtube.com
live.com
baidu.com
wikipedia.org
blogspot.com
qq.com
twitter.com
msn.com
yahoo.co.jp
taobao.com
google.co.in
sina.com.cn
amazon.com
google.de
google.com.hk
