Call a function in parallel in a shell script - Linux

I have a serverlist file. I read it to get the server names and perform some operations on those servers in a for loop, using the command: /usr/local/bin/sshcmd -q -u $userName -s $serverName. A single execution of the command takes 5-7 minutes on one server.
I don't want to run the command one server at a time; I need to run it on at least 15 servers in parallel to save time.

You can run commands in the background by adding '&' at the end of the command.
For example:
/usr/local/bin/sshcmd -q -u $userName1 -s $serverName1 &
/usr/local/bin/sshcmd -q -u $userName2 -s $serverName2 &
This runs two copies of sshcmd in parallel.
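To cap the number of concurrent runs at 15, one approach is to launch the commands in batches and wait between batches. A minimal sketch, assuming the server names sit one per line in a file called serverlist and that $userName is already set (both are assumptions about your setup):
#!/bin/bash
userName="youruser"   # assumption: replace with your actual user
maxJobs=15            # run on at most 15 servers at a time

while IFS= read -r serverName; do
    /usr/local/bin/sshcmd -q -u "$userName" -s "$serverName" &
    # Once 15 jobs are in flight, wait for the whole batch before starting more.
    if (( $(jobs -r -p | wc -l) >= maxJobs )); then
        wait
    fi
done < serverlist
wait   # wait for the final batch to finish
GNU parallel or xargs -P can do the same thing more flexibly if they are available on your machines.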

Related

BASH: simultaneous execution of a multiloop function without waiting

Use case:
I need to transfer binary files (1 GB) to an array of IPs and start executing them as soon as they arrive at their destinations, without waiting for all binaries to be transferred/executed. A sort of parallel mode.
Situation:
I have 2 functions - transfer and execution (depending on the approach, this can be shortened to 1 function with 2 loops).
for N in "${NODES[@]}"; do
    rsync -Pcz -e "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" --timeout=10 $FILE user@$N
done
and
for N in "${NODES[@]}"; do
    ssh user@$N "cd ~/; ./exec.sh"
done
The point is that in this case I have to wait until all transfers finish first (and there can sometimes be tens of addresses) and only afterwards start the execution.
If I combine the loops into a single one, I have to wait again - this time for transfer+execution per node.
Expectation:
I'd like to transfer a file to the first node, start its execution, and switch to the second node with the same process, and so on. So timing would count for the transfers only, whereas each node executes the file on its own in parallel.
Obstacles:
1- I need to be able to get the execution output from each node.
2- Additional packages, like screen, are not an option.
What did I try:
I was thinking about injecting some script into the remote nodes via the loop to control the execution from there. But I'm sure there must be a less barbaric option.
What can be done here?
You should be able to use a single loop, and run the ssh command with a & suffix, which runs it in the background (i.e. without waiting for it to finish), and then after the loop use wait to wait for all of them to finish. Collecting output will be more interesting... I think you'll need to collect each run's output into a file, and then print the files at the end. Something like this (note that I have not tested this properly):
tmpdir="$(mktemp -qd -t "$(basename "$0")")" || {
echo "Error creating temporary directory" >&2
exit 1
}
for nodenum in "${!NODES[#]}"; do
# The ${!array[#]} idiom gets a list of array *indexes*, not elements; get the element by index:
N=${NODES[nodenum]}
# Copy file, and wait for copy to finish:
rsync -Pcz -e "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" --timeout=10 $FILE user#$N
# Start the script, and *don't* wait for it to finish:
ssh user#$N "cd ~/ sh exec.sh" >"$tmpdir/$nodenum.out" 2>&1 &
done
# Wait for all of the scripts to finish
wait
# Print all of the outputs (in order)
for nodenum in "${!NODES[#]}"; do
echo
echo "Output from ${NODES[nodenum]}:"
cat "$tmpdir/$nodenum.out"
done
# Clean up the temp directory
rm -R "$tmpdir"
BTW, the remote command "cd ~/ sh exec.sh" doesn't make sense. Is there supposed to be a semicolon in there? Also, I recommend using lower or mixed-case variable names to avoid conflicts with the many all-caps variables that have some sort of special meaning, and putting double-quotes around variable references (i.e. rsync ... "$FILE" "user@$N" instead of rsync ... $FILE user@$N).
EDIT: this assumes you want to start the script on each host as soon as that particular copy is done; if you want to wait until all copies are done, then fire all scripts at once, use two loops: one to do the copies, then a second that does the ssh commands in the background (collecting output as above), then wait for those to all finish, then print all of the outputs.
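A sketch of that two-loop variant, reusing $tmpdir and NODES from above and assuming the intended remote command is "cd ~/; ./exec.sh" (untested, like the code above):
# First loop: copy the file to every node, waiting for each copy to finish.
for nodenum in "${!NODES[@]}"; do
    N=${NODES[nodenum]}
    rsync -Pcz -e "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" --timeout=10 "$FILE" "user@$N"
done
# Second loop: start every remote script in the background, collecting output per node.
for nodenum in "${!NODES[@]}"; do
    N=${NODES[nodenum]}
    ssh "user@$N" "cd ~/; ./exec.sh" >"$tmpdir/$nodenum.out" 2>&1 &
done
# Wait for all remote scripts to finish, then print the outputs as in the first version.
wait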
You could do the transfer and script as a single background task, so that the script on a particular host starts as soon as its transfer is complete
for N in "${NODES[@]}"; do
    (rsync -Pcz -e "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" --timeout=10 $FILE user@$N
    ssh user@$N "cd ~/; ./exec.sh") > "${N}.log" 2>&1 &
done
You can then collect all of the hostname.log files.
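For example, after the loop (a sketch):
wait   # wait for every transfer+execution pair to finish
for N in "${NODES[@]}"; do
    echo "=== output from $N ==="
    cat "${N}.log"
done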

Linux: how to change maximum number of files a process can open?

I have to execute a process on a cluster of machines. The cluster size is on the order of 100, so I cannot start the processes manually; I have to start them via a script (which uses ssh; currently I am using python-paramiko for this). The number of TCP sockets these processes open is more than 1024 (the default limit on Linux), so I need to raise it with ulimit -n 10000. That changes the limit for that shell session only, and the command works only as the root user, so my script is not able to do it.
I tried to execute this command
sudo su && ulimit -n 10000 && <commandToExecuteMyProcess>
But this didn't work. The commands after "sudo su" didn't execute at all; they ran only after I logged out of the su session.
This article shows a way to make the change permanent, but when I open limits.conf I don't find anything there; it only has some commented notes.
Please suggest a way to increase the limit permanently, or to change it from a script for each session.
That's not how it works: sudo su just opens a new shell so you can enter commands as root, and after you exit that shell it executes the rest of the line as the normal user.
Second: this is a special case because ulimit is not actually a program but a bash shell built-in command, so it must be used within bash. That is why something like sudo ulimit -n 10000 won't work: sudo can't find that program because it doesn't exist.
So, the only alternative is a bit ugly but works:
sudo bash -c 'ulimit -n 10000 && <command>'
Everything inside '...' will execute in a bash session of the root user.
Note that you can replace && with ; in this case: that's because it is being executed as root and ulimit -n 10000 will always complete successfully.
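For the permanent change the question also asks about, the usual form of the entries in /etc/security/limits.conf is shown below (an illustration; replace myuser with the account that runs the processes, and note it only takes effect for new login sessions):
# /etc/security/limits.conf   format: <domain> <type> <item> <value>
myuser    soft    nofile    10000
myuser    hard    nofile    10000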

inotifywait misses events while script is running

I am running an inotifywait script that triggers a bash script to call a function to synchronize my database of files whenever files are modified, created, or deleted.
#!/bin/sh
while inotifywait -r -e modify -e create -e delete /var/my/path/Documents; do
    cd /var/scripts/
    ./sync.sh
done
This actually works quite well, except that during the 10 seconds it takes my sync script to run, the watch doesn't pick up any additional changes. There are instances where the sync has already looked at a directory and an additional change occurs that isn't detected by inotifywait because it hasn't re-established its watches.
Is there any way for inotifywait to trigger the script and still maintain the watch?
Use the -m option so that it runs continuously, instead of exiting after each event.
inotifywait -q -m -r -e modify -e create -e delete /var/my/path/Documents | \
while read event; do
    cd /var/scripts
    ./sync.sh
done
This would actually have the opposite problem: if multiple changes occur while the sync script is running, it will run it again that many times. You might want to put something in the sync.sh script that prevents it from running again if it has run too recently.
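One way to coalesce a burst of events into a single run, a sketch assuming bash and that a one-second quiet period is enough to group related changes:
inotifywait -q -m -r -e modify -e create -e delete /var/my/path/Documents |
while read -r event; do
    # Drain any events that queued up while the last sync was running,
    # so a burst of changes triggers one sync instead of one run per event.
    while read -r -t 1 _; do :; done
    /var/scripts/sync.sh
done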

Simple bash script runs asynchronously when run as a cron job

I have a backup script written that will do the following in this order:
Zip up files via SSH on a remote backup server
Dump my local database
Transfer my local database via SSH rsync to the backup server
Now when I run this script from the command line in RHEL it works a-ok perfectly fine.
BUT when I set this script to run via a cron job, the script does run, but from what I can tell it somehow runs those 3 commands simultaneously. Because of that, things get done out of order (my local database finishes dumping and is transferred before the #1 zip job is actually complete).
Has anyone run across such a strange scenario? As the most simple fix, is there a way to force a script to run synchronously? Maybe add some kind of command to wait for the prior line to complete before moving on?
EDIT: I added an example version of my backup script. It seems that the second line of my script runs at the same time as the first line, so while the SSH command has been issued, it has not yet completed before my second line triggers and an SQL dump begins.
#!/bin/bash
THEDIR="sample"
THEDBNAME="mydatabase"
ssh -i /rsync/mirror-rsync-key sample@sample.com "tar zcvpf /$THEDIR/old-1.tar /$THEDIR/public_html/*"
mysqldump --opt -Q $THEDBNAME > mySampleDb
/usr/bin/rsync -avz --delete --exclude=**/stats --exclude=**/error -e "ssh -i /rsync/mirror-rsync-key" /$THEDIR/public_html/ sample@sample.com:/$THEDIR/public_html/
/usr/bin/rsync -avz --delete --exclude=**/stats --exclude=**/error -e "ssh -i /rsync/mirror-rsync-key" /$THEDIR/ sample@sample.com:/$THEDIR/
Unless you're explicitly using backgrounding (&), everything should run one by one, each command waiting until the previous one finishes.
Perhaps you are actually seeing overlapping executions from earlier cron runs? If so, you can prevent multiple simultaneous executions by calling your script with flock,
e.g. change the midnight cron entry from
0 0 * * * backup.sh
to
0 0 * * * flock -n /tmp/backup.lock -c backup.sh
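If you would rather keep the locking inside the script itself instead of in the crontab entry, a sketch (assuming /tmp/backup.lock is an acceptable lock-file path):
#!/bin/bash
# Refuse to start if a previous run still holds the lock.
exec 200>/tmp/backup.lock
flock -n 200 || { echo "previous backup still running, exiting" >&2; exit 1; }
# ... the rest of the backup commands go here ...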
If you want to run commands in sequential order, you can use the ; operator.
; – semicolon operator
This operator runs multiple commands in one go, but in sequential order. If we take three commands separated by semicolons, the second command runs only after the first command completes, and the third runs only after the second completes. One point to note is that the second command does not depend on the first command's exit status; it runs regardless.
Execute the ls, pwd, and whoami commands on one line, sequentially one after the other:
ls;pwd;whoami
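To illustrate the exit-status point:
false ; echo "this still runs"      # ';' ignores the failure of 'false'
false && echo "this never runs"     # '&&' runs the second command only on success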
Please correct me if I am not understanding your question correctly.

How can I trigger a delayed system shutdown from in a shell script?

On my private network I have a backup server, which runs a bacula backup every night. To save energy I use a cron job to wake the server, but I haven't found out how to properly shut it down after the backup is done.
By means of the bacula-director configuration I can call a script during the processing of the last backup job (i.e. the backup of the file catalog). I tried to use this script:
#!/bin/bash
# shutdown server in 10 minutes
#
# ps, 17.11.2013
bash -c "nohup /sbin/shutdown -h 10" &
exit 0
The script shuts down the server, but apparently it only returns once the shutdown actually happens,
and as a consequence that last backup job hangs until the shutdown. How can I make the script fire off the shutdown and return immediately?
Update: After an extensive search I came up with an (albeit pretty ugly) solution:
The script run by bacula looks like this:
#!/bin/bash
at -f /root/scripts/shutdown_now.sh now + 10 minutes
And the second script (shutdown_now.sh) looks like this:
#!/bin/bash
shutdown -h now
Actually I found no obvious way to pass the required parameters to shutdown within the syntax of the 'at' command. Maybe someone can give me some advice here.
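For what it's worth, parameters can be handed to shutdown by piping the whole command line into at instead of using -f, for example (a sketch of that approach):
echo "/sbin/shutdown -h now" | at now + 10 minutes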
Depending on your backup server’s OS, the implementation of shutdown might behave differently. I have tested the following two solutions on Ubuntu 12.04 and they both worked for me:
As the root user I have created a shell script with the following content and called it in a bash shell:
shutdown -h 10 &
exit 0
The exit code of the script in the shell was correct (tested with echo $?). The shutdown was still in progress (tested with shutdown -c).
This bash function call in a second shell script worked equally well:
my_shutdown() {
    shutdown -h 10
}
my_shutdown &
exit 0
No need to create a second BASH script to run the shutdown command. Just replace the following line in your backup script:
bash -c "nohup /sbin/shutdown -h 10" &
with this:
echo "/sbin/poweroff" | /usr/bin/at now + 10 min >/dev/null 2>&1
Feel free to adjust the time interval to suit your preference.
If you can become root (either log in as root, or use sudo -i), this works (tested on Ubuntu 14.04):
# shutdown -h 20:00 &    # halts the machine at 8pm
No shell script needed. I can then log out and log back in, and the process is still there. Interestingly, if I try this with sudo on the command line, then when I log out, the process does go away!
BTW, just to note that I also use this command to do occasional reboots after everyone has gone home.
# shutdown -r 20:00 &    # reboots the machine at 8pm
