Bash script in parallel with multiple IP addresses - linux

I want to create a script that logs information from different IPs and at the same time writes the logs to different files. It should run forever (like while true), but when I start the script it only logs the first IP address to a text file. What I already tried:
#!/bin/bash
IP=`cat IP.txt`
for i in $IP
do
/usr/bin/logclient -l all -f /root/$i.log $i 19999
done
IP.txt file contains:
x.x.x.x
x.x.x.x
x.x.x.x
x.x.x.x

It looks like your script should work as-is; if logclient works the way I think it does, it will simply create a separate log for each IP address. Running ls /root/*.log should reveal all the logs generated.
Parallelizing execution isn't something bash is particularly good at. It has job control for backgrounding tasks, but keeping track of those processes and not overloading your CPU/RAM can be tough.
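For example, a rough sketch of that job-control approach applied to your loop (untested, and assuming logclient is happy to run in the background):
#!/bin/bash
# start one logclient per IP in the background
while read -r ip; do
    /usr/bin/logclient -l all -f "/root/$ip.log" "$ip" 19999 &
done < IP.txt
# wait here until every backgrounded logclient has exited
wait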
GNU Parallel
If your system has it installed, I'd strongly suggest using GNU parallel. It will kick off one process per CPU core, which makes parallelizing jobs much easier. parallel only exits when all the children exit.
parallel /usr/bin/logclient -l all -f /root/{}.log {} 19999 :::: IP.txt
# all jobs finished, post-process the log (if wanted)
cat /root/*.log >> /root/all-ips.log
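Note that parallel defaults to one job per CPU core; since logclient jobs presumably spend most of their time waiting on the network, you may want more jobs than cores. The -j flag controls that (-j0 means run as many as possible):
parallel -j0 /usr/bin/logclient -l all -f /root/{}.log {} 19999 :::: IP.txt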

I'd rather use while than for. Something like this:
while read -r LINE; do /usr/bin/logclient -l all -f "/root/$LINE.log" "$LINE" 19999; done < IP.txt

Related

Taking sequentially output of multi ssh with bash scripting

I wrote a bash script and I don't have the option of downloading pssh. However, I need to run multiple ssh commands in parallel. There is no problem so far, but when I run the ssh commands I want to see each remote machine's output sequentially. I mean, each ssh produces multiple lines of output, and they get mixed up because more than one ssh is running.
#!/bin/bash
pid_list=""
while read -r list
do
ssh user@$list 'commands' &
c_pid=$!
pid_list="$pid_list $c_pid"
done < list.txt
for pid in $pid_list
do
wait $pid
done
What should I add to the code so that the output doesn't get mixed up?
The most obvious way to me would be to write each host's output to its own file and cat the files at the end:
#!/bin/bash
me=$$
pid_list=""
while read -r list
do
ssh -n user@$list 'hostname; sleep $((RANDOM%5)); hostname ; sleep $((RANDOM%5))' > /tmp/log-${me}-$list &
c_pid=$!
pid_list="$pid_list $c_pid"
done < list.txt
for pid in $pid_list
do
wait $pid
done
cat /tmp/log-${me}-*
rm /tmp/log-${me}-* 2> /dev/null
I didn't handle stderr because that wasn't in your question. Nor did I address the order of output, because that isn't specified either, or whether the output should appear as each host finishes. If you want those aspects covered, please improve your question.
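That said, if you do want stderr captured as well, a minimal tweak (keeping the same per-host naming scheme) is to fold it into the same file:
ssh -n user@$list 'commands' > /tmp/log-${me}-$list 2>&1 &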

Bash or systemd method to run shell script continuously after current task is finished

I'm using a program to scan a huge IP segment (a /8 network) and save the output in a Linux directory.
To run the program, I wrote a script named scan.sh that looks like this:
#!/bin/bash
while IFS= read -r IP || [[ -n "$IP" ]]; do
    /path/to/program "$IP" > "/output/path/$IP.xml"
done < ip.txt
Since the address range is so huge, it is not feasible to fit all the IPs in one file, so I split them into chunks and run the script concurrently like this. Let's name this allscan.sh:
#!/bin/bash
/path/script/0-4/scan.sh & /path/script/5-10/scan.sh & /path/script/11-15/scan.sh
Obviously, the real script is much longer than the above, but you get the idea. The 0-4, 5-10, etc. folders represent the IP addresses split into small chunks.
The first run took 28 days to finish.
My question is: how do I keep it running continuously, starting over as soon as the current run is finished?
I don't think a monthly cronjob is suitable for this, because if a run takes more than 30 days, cron will start the script again on top of the one still running and create unnecessary load on the server.
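One bash-only sketch (reusing the chunk paths from allscan.sh above, and assuming that layout): background every chunk, wait for all of them, then loop around and start the next pass immediately.
#!/bin/bash
# rescan.sh - start a new full scan as soon as the previous one completes
while true; do
    /path/script/0-4/scan.sh &
    /path/script/5-10/scan.sh &
    /path/script/11-15/scan.sh &
    wait    # block until every chunk has finished
done
A systemd service with Restart=always pointed at a script that waits for its chunks would achieve the same effect without the outer loop.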

Monitoring multiple Linux Systems or Servers Script

I want to modify my script so that it can monitor CPU, memory and disk on 4 servers on my network. The script below monitors a single server. Is there a way I can modify it, given that I have the hosts, usernames and passwords?
printf "Memory\t\tDisk\t\tCPU\n"
end=$((SECONDS+3600))
while [ $SECONDS -lt $end ]; do
MEMORY=$(free -m | awk 'NR==2{printf "%.2f%%\t\t", $3*100/$2 }')
DISK=$(df -h | awk '$NF=="/"{printf "%s\t\t", $4}')
CPU=$(top -bn1 | grep load | awk '{printf "%.2f%%\t\t\n", $(NF-2)}')
echo "$MEMORY$DISK$CPU"
sleep 5
done
any ideas or suggestions?
A simple, naive implementation might look like:
for server in host1 host2 host3 host4; do
ssh "$server" bash -s <<'EOF'
...your script here...
EOF
done
...with RSA keys preconfigured for passwordless authentication. That could be made slightly less naive by leveraging ControlMaster/ControlSocket functionality in ssh, so you're keeping the same transport up between multiple ssh sessions and reusing it wherever possible.
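For reference, a ControlMaster block in ~/.ssh/config might look something like this (host names and timeout are only examples):
Host host1 host2 host3 host4
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m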
However -- rolling your own system monitoring tools is a fool's errand, at least until you've been around the block with the existing established ones, know their strengths, know their weaknesses, and can make a reasoned argument as to why they aren't a good fit for you. Use something off-the-shelf maintained by people who've been doing this for a while.

Using SSH to grep keywords from multiple servers

I am completely new to scripting and am having some trouble piecing this together from some other online resources.
What I want to do is run a bash script that will grep for a keyword domain in the /etc/hosts file on multiple servers. In the output file, I am looking for a list of the servers that contain this keyword but am not looking to make any changes. Simply looking for which machines have this value. Since there are a bunch of machines in question, listing the servers I am looking to search for won't work, but the machine I am doing this from does have SSH keys for all of the ones in question.
I have the servers I want to query listed in three files (one for each environment) on the machine I am going to run this script from.
Linux.prod.dat
Linux.qa.dat
Linux.dev.dat
Each file is simply a list of server names in the environment. For example..
server1
server2
server3
etc...
I am totally lost here and would appreciate any help.
Here is an example:
KEYWORD=foo
SERVERLIST=Linux.prod.dat
OUTPUTLIST=output.dat
for host in $(cat ${SERVERLIST}); do
    if [[ -n "$(ssh ${host} grep "${KEYWORD}" /etc/hosts && echo Y)" ]]; then
        echo ${host} >> ${OUTPUTLIST}
    fi
done
Try GNU parallel
parallel --tag ssh {} grep -l "KEYWORD" /etc/hosts :::: Linux.prod.dat
parallel runs the command multiple times, substituting {} with lines from the Linux.prod.dat file.
The --tag switch prepends the value from Linux.prod.dat to each line of output, so the output of the command will look like:
server1 /etc/hosts
server5 /etc/hosts
server7 /etc/hosts
Where server1, server5, etc. are the names of the servers whose /etc/hosts contains KEYWORD.
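If, as in the question, you just want the matching server names collected in a file, you can strip the second column, e.g.:
parallel --tag ssh {} grep -l "KEYWORD" /etc/hosts :::: Linux.prod.dat | awk '{print $1}' > output.dat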

using a variable in a BASH command?

I have 20 machines, each running a process. The machines are named:
["machine1", "machine2", ...., "machine20"]
To inspect how the process is doing on machine1, I issue the following command from a remote machine:
ssh machine1 cat log.txt
For machine2, I issue the following command:
ssh machine2 cat log.txt
Similarly, for machine20, I issue the following command:
ssh machine20 cat log.txt
Is there a bash command that will allow me to view the output from all machines using one command?
If the machines are nicely numbered like in your example:
for i in {1..20} ; do ssh machine$i cat log.txt; done
If you have the list of machines in a file, you can use:
cat machinesList.txt | xargs -i ssh {} cat log.txt
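GNU xargs can also fan the ssh calls out in parallel via -P (the -n on ssh is just a safety net so ssh never tries to read from stdin):
cat machinesList.txt | xargs -P 4 -I {} ssh -n {} cat log.txt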
You could store all your machine names in an array or text file, and loop through it.
declare -a machineList=('host1' 'host2' 'otherHost') # and more...
for machine in "${machineList[@]}"
do
ssh $machine cat log.txt
done
I assume your machines aren't literally named 'machine1', 'machine2', etc.
Some links:
bash Array Tutorial
GNU Bash Array Documentation
for i in {1..20}
do
ssh machine$i cat log.txt
done
Use a loop?
for i in {1..20}
do
ssh machine$i cat log.txt
done
But note that you're running cat within a remote shell session, not the current one, so this might not quite work as you expect. Try it and see.
Put your hosts in a file and use a while loop as shown below. Note the use of the -n flag on ssh: it stops ssh from reading stdin, which would otherwise swallow the remaining lines of the hosts-file:
while read host; do ssh -n $host cat log.txt; done < hosts-file
Alternatively you can use PSSH:
pssh -h hosts-file -i "cat log.txt"
I would recommend using a program called Shmux. Despite the name, it works really well. I've used it with more than 100 machines with good results. It also gracefully handles machine failures for you which could be a disadvantage with a bash for loop approach.
I think the coolest thing about this program is its ability to run your commands in multiple threads, which lets you run them on all 20 machines in parallel.
Aside from the suggestions for using a loop, you might want to take a look at tools, like pssh or dsh, designed for running commands on multiple clients.
