Linux Bash script to ping multiple hosts simultaneously

I have a text file with a list of 500 server names. I need to ping all of them simultaneously instead of one by one in a loop, and put the pingable ones in one file and the unpingable ones in another file.
Can I run each ping in the background or spawn a new process for each ping? What is the quickest and most efficient way to achieve this?

You can control the parallelism by using xargs:
cat file-of-ips | xargs -n 1 -I ^ -P 50 ping ^
Here we're keeping at most 50 pings going at a time. Each IP is substituted for the ^, so you can put other arguments before and after it.
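Note that plain ping on Linux runs until interrupted, so for 500 hosts you'll want -c to cap the probe count. Here's a minimal sketch that also sorts the results into the two files the question asks for (hosts.txt, reachable.txt and unreachable.txt are illustrative names; -W 1 is the iputils per-reply timeout):
#!/bin/bash
# Ping each host once with a 1-second timeout, 50 in parallel,
# and append each name to reachable.txt or unreachable.txt.
: > reachable.txt
: > unreachable.txt
xargs -P 50 -I {} sh -c \
  'ping -c 1 -W 1 "$1" >/dev/null 2>&1 \
     && echo "$1" >> reachable.txt \
     || echo "$1" >> unreachable.txt' _ {} \
  < hosts.txt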

Related

Bash script in parallel with multiple IP addresses

I want to create a script that logs information from different IPs and writes the logs to different files at the same time. It should run in a while-true style loop, but when I start the script it only logs the first IP address to a text file. What I have already tried:
#!/bin/bash
IP=`cat IP.txt`
for i in $IP
do
  /usr/bin/logclient -l all -f /root/$i.log $i 19999
done
IP.txt file contains:
x.x.x.x
x.x.x.x
x.x.x.x
x.x.x.x
It looks like your script should work as-is, and if logclient works like I think it does, it'll just create a different log for each IP address. Running ls /root/*.log should reveal all the logs generated.
Parallelizing execution isn't something bash is particularly good at. It has job control for backgrounding tasks, but keeping track of those processes and not overloading your CPU/RAM can be tough.
GNU Parallel
If your system has it installed, I'd strongly suggest using GNU parallel. It will kick off one process for each CPU core to make parallelizing jobs much easier. parallel only exits when all the children exit.
parallel /usr/bin/logclient -l all -f /root/{}.log {} 19999 :::: IP.txt
# all jobs finished, post-process the log (if wanted)
cat /root/*.log >> /root/all-ips.log
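If you want to see the exact command lines GNU parallel would run without executing them, its --dry-run flag prints them instead:
parallel --dry-run /usr/bin/logclient -l all -f /root/{}.log {} 19999 :::: IP.txt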
Rather use while than for, and background each client so they actually run in parallel. Something like this:
while read -r LINE; do /usr/bin/logclient -l all -f /root/$LINE.log $LINE 19999 & done < IP.txt; wait
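If you're worried about starting too many clients at once, here's a minimal plain-bash sketch that caps the number of concurrent background jobs (assumes bash 4.3+ for wait -n; the limit of 50 is illustrative):
#!/bin/bash
MAX_JOBS=50
while read -r ip; do
  /usr/bin/logclient -l all -f "/root/$ip.log" "$ip" 19999 &
  # Throttle: once the cap is reached, block until one job exits.
  while (( $(jobs -rp | wc -l) >= MAX_JOBS )); do
    wait -n
  done
done < IP.txt
wait   # let the remaining clients finish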

Best method to output log content to listening port

I am outputting the content of a log via netcat to an application over the network. I don't know if what I'm doing is the most efficient, especially since I notice the netcat session becomes non-responsive. I have to stop netcat and start it again for the application to work again.
The command I run is:
/bin/tail -n1 -f /var/log/custom_output.log | /bin/nc -l -p 5020 --keep-open
This needs to run like this 24/7. Is this the most efficient way of doing it? How can I improve on it so I don't have to restart the process daily?
EDIT
So I realised that when the log is rotated, netcat stays locked onto a file that's no longer being written to. I can deal with this easily enough.
The question still stands. Is this the best way to do something like this?
It's been 6 years, but maybe this will come in handy for someone.
To account for log rotation, use tail with the -F flag.
nc (aka netcat) variant
LOG_FILE="/var/log/custom_output.log"
PORT=5020
tail -n0 -F "$LOG_FILE" | nc -k -l -p $PORT
Notes:
The -k flag in nc is analogous to --keep-open in "the OpenBSD rewrite of netcat";
Multiple clients can connect to nc at the same time, but only the first one will receive appended log lines;
tail will run immediately, so it will collect appended log lines even if no client is connected. Thus, the first client can receive some buffered data - all log lines that have been appended since tail was run.
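To read the stream from another machine, any TCP client will do (the hostname here is hypothetical):
nc log-server.example.com 5020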
socat variant
LOG_FILE="/var/log/custom_output.log"
PORT=5020
socat TCP-LISTEN:$PORT,fork,reuseaddr SYSTEM:"tail -n0 -F \"$LOG_FILE\" </dev/null"
Note: here socat will fork (clone itself) on each client connection and start a separate tail process. Thus:
Each connected client will receive appended log lines at the same time;
Clients will not receive any lines previously buffered by tail.
Additional notes
You can redirect stderr to stdout in the tail process by adding 2>&1 (in both variants). In this case, clients will receive auxiliary message lines, e.g.:
tail: /var/log/custom_output.log: file truncated;
tail: '/var/log/custom_output.log' has become inaccessible: No such file or directory - printed when the log file has been removed or renamed, only if -F is used;
tail: '/var/log/custom_output.log' has appeared; following new file - printed when a new log file is created, only if -F is used.
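For reference, this is the nc variant with stderr merged into the stream (same variables as above):
tail -n0 -F "$LOG_FILE" 2>&1 | nc -k -l -p $PORT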

Kill a process with a certain value in the command lying between a specific range

I run a lot of curl processes through a script. These curl processes specify the local ports to be used. Now I need to kill some of these processes based on their local ports. For example, I want to kill the processes with local ports lying between 30000 and 30100.
Now how do I kill only the processes with local ports between 30000 and 30100?
I believe I can write a Perl script to parse the output, extract the values of the local port, and then kill the processes satisfying my conditions, but is there a way to do it with a single nested Linux command, perhaps using awk?
You can do:
ps aux | awk '$14>=30000 && $14<=30100 && $0~/curl/ { print $2 }' | xargs kill -9
Based on your screenshot, the port values appear in the 14th column ($14 holds this value); adding the check $0~/curl/ grabs only the lines containing curl, effectively removing the need for grep. print $2 prints the process ID. We then pipe the output to xargs and kill.
You can use
kill `lsof -i TCP@<your-ip-address>:30000-30100 -t`
to kill the processes attached to those ports, where <your-ip-address> must be the IP address that those connections use on the local side (this could be "localhost" or the external IP address of your host, depending).
If you leave the IP address out, you risk killing unrelated processes (that are connected to a destination port in the given range).
See this post for the background on lsof.
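To preview which processes would be affected before killing anything, run the lsof part on its own (localhost here stands in for your local-side address):
lsof -i TCP@localhost:30000-30100 -t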
You can use the pkill command like so:
pkill -f -- 'curl.*local-port 30(0[0-9][0-9]|100)'
A less strict regular expression of course works, too, if you are sure you won't kill unrelated processes. You can do pgrep -fa -- <regexp> first to check if your regexp is correct, if you think that is necessary.
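For example, with the pattern above:
pgrep -fa -- 'curl.*local-port 30(0[0-9][0-9]|100)'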
Note that matching number ranges is not one of the strengths of regular expressions.

Passing Arguments to Running Bash Script

I have a bash script that takes a list of IP addresses and pings them every 15 seconds to test connectivity. Some of these IP addresses belong to servers and computers that I control. I would like to be able to do something like the following:
Run The Bash File
It pings non-controlled IP Addresses
It will list the controlled Computers
When a computer turns off, it sends my script a response saying it turned off
The script outputs accordingly
I have the code all set up to ping these computers every 15 seconds and display the results. What I wish to achieve is to NOT ping my controlled computers. They will send a command to the bash script instead. I know this can be done by writing to a file and reading that file, but I would like a way that changes the display AS IT HAPPENS. Would mkfifo be a viable option?
Yes, mkfifo is ok for this task. For instance, this:
mkfifo ./commandlist
while read f < ./commandlist; do
  # Actions here
  echo "$f"
done
will wait until a new line can be read from the FIFO commandlist, read it into $f, and execute the body.
From the outside, write to the FIFO with:
echo 42 > ./commandlist
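To keep the 15-second ping loop running while still reacting to commands as they arrive, one option is to hold the FIFO open and poll it with a read timeout (a sketch, assuming bash; the display update and ping logic are left as stubs):
#!/bin/bash
[ -p ./commandlist ] || mkfifo ./commandlist
# Opening the FIFO read-write keeps the open from blocking
# when no writer is connected.
exec 3<> ./commandlist
while true; do
  if read -t 15 -u 3 cmd; then
    echo "command received: $cmd"   # update the display here
  fi
  # ... ping the non-controlled hosts here ...
done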
But why not let the remote server call this script, perhaps via SSH or even CGI? You could set up a /notify-disconnect CGI script with no parameters and get the IP address of the peer from the REMOTE_ADDR environment variable.

How do I save the feedback/results of several running ping threads to their related output files?

I have written the script below in Linux shell to ping several routers in parallel and save the output to files, which another script analyzes for packet loss on the links. As you can see, all pings run in the background to simulate parallelism or multithreading.
for ips in 100.28.139.5 100.20.12.90 100.23.13.74 100.25.131.10
do
  ping $ips -s 500 -c 500 &> ${ips}.500.text &
  ping $ips -s 1500 -c 500 &> ${ips}.1500.text &
  ping $ips -s 4500 -c 500 &> ${ips}.4500.text &
done
I have tried rewriting it in Java, but it came out big (>100 lines) and I wasn't able to save each thread's results to the related ping output file.
I need a dedicated logger for each thread to save the outputs.
How do I save the feedback/results of several running ping threads to their related output files?
When you create your thread, you pass it certain data via the constructor: say, the URL to be pinged. Using that information, each thread creates its own file on disk and writes the data coming from its ping feedback there.
