Parallelism in a bash script - multithreading

I have a script that starts up some virtual machines. After the deployment I want to install a few things on the VMs. Because these installations can take up to 6 minutes per VM, it would be much more efficient to run them in parallel. In Java I would probably use threads, but in a bash script I do not know how. My first approach was something like this:
function install {
    plink -ssh -i /var/lib/one/Downloads/id_rsa_ubuntu_putty..ppk root@$1 wget https://www.dropbox.com/s/xdhnx/install.sh
    plink -ssh -i /var/lib/one/Downloads/id_rsa_ubuntu_putty..ppk root@$1 chmod 4500 install.sh
    plink -ssh -i /var/lib/one/Downloads/id_rsa_ubuntu_putty..ppk root@$1 ./install.sh
    echo $1 triggered
}
echo -------------------------------------------------
echo All VMs successfully deployed
for i in "${IParray[@]}"
do
    install $i &
done
wait
I created a function and tried to launch the function calls from the for-loop with "&", which should create subprocesses. But somehow this is not working properly. Can anybody help me out?
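For reference, a minimal self-contained sketch of the background-and-wait pattern this is aiming for, with a placeholder install body and per-job exit-status collection (the sleep stands in for the real plink steps):
#!/bin/bash
install() {
    echo "installing on $1"
    sleep 2              # placeholder for the real plink/install steps
}
IParray=(192.168.0.1 192.168.0.2)
pids=()
for ip in "${IParray[@]}"; do
    install "$ip" &      # one background job per VM
    pids+=($!)
done
for pid in "${pids[@]}"; do
    wait "$pid" || echo "job $pid failed"
done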

Maybe use GNU Parallel like this:
#!/bin/bash
IParray=(192.168.0.1 192.168.0.2)
function install {
    echo $1
    # plink...
}
# Make install() visible to GNU Parallel
export -f install
# Run a bunch of installs in parallel
parallel install ::: "${IParray[@]}"
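If hitting every VM at once is too aggressive, GNU Parallel can also cap the number of concurrent jobs with -j; the value here is only an example:
parallel -j 4 install ::: "${IParray[@]}"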

Related

How to run a script on server from local machine? [duplicate]

In my .bashrc I define a function which I can use on the command line later:
function mycommand() {
    ssh user@123.456.789.0 cd testdir;./test.sh "$1"
}
When using this command, just the cd command is executed on the remote host; the test.sh command is executed on the local host. This is because the semicolon separates two different commands: the ssh command and the test.sh command.
I tried defining the function as follows (note the single quotes):
function mycommand() {
    ssh user@123.456.789.0 'cd testdir;./test.sh "$1"'
}
I tried to keep the cd command and the test.sh command together, but the argument $1 is not resolved, regardless of what I pass to the function. It always tries to execute the command
./test.sh $1
on the remote host.
How do I properly define mycommand, so the script test.sh is executed on the remote host after changing into the directory testdir, with the ability to pass on the argument given to mycommand to test.sh?
Do it this way instead:
function mycommand {
    ssh user@123.456.789.0 "cd testdir;./test.sh \"$1\""
}
You still have to pass the whole command as a single string, but within that string you need $1 to be expanded before it is sent to ssh, so you have to use double quotes for it.
Update
Another proper way to do this actually is to use printf %q to properly quote the argument. This would make the argument safe to parse even if it has spaces, single quotes, double quotes, or any other character that may have a special meaning to the shell:
function mycommand {
    printf -v __ %q "$1"
    ssh user@123.456.789.0 "cd testdir;./test.sh $__"
}
When declaring a function with function, () is not necessary.
Don't comment back about it just because you're a POSIXist.
Starting Bash version 4.4, it can also be simplified to this:
function mycommand {
    ssh user@123.456.789.0 "cd testdir;./test.sh ${1@Q}"
}
See the ${parameter@operator} section in Shell Parameter Expansion.
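A quick illustration of the @Q operator (the sample value is made up):
arg="it's a test"
echo "${arg@Q}"
# prints: 'it'\''s a test'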
I'm using the following to execute commands on the remote host from my local computer:
ssh -i ~/.ssh/$GIT_PRIVKEY user@$IP "bash -s" < localpath/script.sh $arg1 $arg2
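For reference, the arguments after the input redirection end up as the remote script's positional parameters. A tiny self-contained check (the script, host, and arguments are made up):
cat > /tmp/hello.sh <<'EOF'
#!/bin/bash
echo "running on $(hostname) with args: $1 $2"
EOF
ssh user@example.com "bash -s" < /tmp/hello.sh one two
# prints: running on <remote hostname> with args: one two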
This is an example that works on AWS. The scenario is that a machine booted by autoscaling needs to perform some action on another server, passing the newly spawned instance's DNS name via SSH:
# Get the public DNS of the current machine (AWS specific)
MY_DNS=`curl -s http://169.254.169.254/latest/meta-data/public-hostname`
ssh \
-o StrictHostKeyChecking=no \
-i ~/.ssh/id_rsa \
user@remotehost.example.com \
<< EOF
cd ~/
echo "Hey I was just SSHed by ${MY_DNS}"
run_other_commands
# Newline is important before final EOF!
EOF
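One detail worth noting: because the EOF delimiter above is unquoted, ${MY_DNS} is expanded locally before the text is sent. Quoting the delimiter sends the lines literally, so the remote shell does the expanding instead, for example:
ssh user@remotehost.example.com << 'EOF'
echo "This line is expanded on the remote side: $HOSTNAME"
EOF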
Reviving an old thread, but this pretty clean approach was not listed.
function mycommand() {
    ssh user@123.456.789.0 <<+
cd testdir;./test.sh "$1"
+
}
Solution: you want to be able to connect to a machine remotely over SSH and trigger/run some actions there.
On ssh, use the -t flag; from the documentation:
-t Force pseudo-terminal allocation.
This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g.
when implementing menu services. Multiple -t options force tty allocation, even if ssh has no local tty.
formula:
ssh -i <key-path> <user>@<remote-machine> -t '<action>'
Example: as an administrator I want to be able to connect remotely to EC2 machines and trigger a revert process for a bad deployment on several machines in a row; ideally you would implement this action as an automation script that takes IPs as arguments and runs against the different machines in parallel.
ssh -i /home/admin/.ssh/key admin@10.20.30.40 -t 'cd /home/application && make revert'
A little trick of mine: with "bash -s" you can pass positional arguments, but apparently the first one is already reserved as $0 for whatever reason, so passing the same argument twice does the trick:
ssh user@host "bash -s" < ./start_app.sh -e test -e test -f docker-compose.services.yml

How to run a script in the background (Linux OpenWrt)?

I have this script:
#!/bin/sh
while true ; do
    urlfile=$( ls /root/wget/wget-download-link.txt | head -n 1 )
    dir=$( cat /root/wget/wget-dir.txt )
    if [ "$urlfile" = "" ] ; then
        sleep 30
        continue
    fi
    url=$( head -n 1 $urlfile )
    if [ "$url" = "" ] ; then
        mv $urlfile $urlfile.invalid
        continue
    fi
    mv $urlfile $urlfile.busy
    wget -b $url -P $dir -o /www/wget.log -c -t 100 -nc
    mv $urlfile.busy $urlfile.done
done
The script basically checks wget-download-link.txt for new URLs every 30 seconds, and if there's a new URL it downloads it with wget. The problem is that when I try to run this script from PuTTY like this:
/root/wget/wget_download.sh --daemon
it still runs in the foreground and I can still see the terminal output. How do I make it run in the background?
In OpenWrt, neither nohup nor screen is available by default, so a solution using only shell builtins is to start a subshell with parentheses and put it in the background with &:
(/root/wget/wget_download.sh >/dev/null 2>&1 )&
You can test this structure easily on your desktop, for example with
(notify-send one && sleep 15 && notify-send two)&
... and then close your console before those 15 seconds are over; you will see the commands in the parentheses continue executing after the console is closed.
The following command will also work:
((/root/wget/wget_download.sh)&)&
This way you don't have to install the nohup command in the tight memory space of the router running OpenWrt.
I found this somewhere several years ago. It works.
The & at the end of the script should be enough. If you see output from the script, it means that stdout and/or stderr is not closed or not redirected to /dev/null.
You can use this answer:
How to redirect all output to /dev/null
I am using OpenWrt Merlin and the only way to get it working was to use the cru cron manager [1]. nohup and screen are not available as solutions.
cru a pinggw "0 * * * * /bin/ping -c 10 -q 192.168.2.254"
Works like a charm.
[1] https://www.cyberciti.biz/faq/how-to-add-cron-job-on-asuswrt-merlin-wifi-router/
https://openwrt.org/packages/pkgdata/coreutils-nohup
opkg update
opkg install coreutils-nohup
nohup yourscript.sh &
You can use nohup.
nohup yourscript.sh
or
nohup yourscript.sh &
Your script will keep running even if you close your PuTTY session, and all the output will be written to a text file in the same directory.
nohup is often used in combination with the nice command to run processes at a lower priority.
nohup nice yourscript.sh &
See: http://en.wikipedia.org/wiki/Nohup
For BusyBox on an OpenWrt Merlin system, I got a better solution which combines the cru and date commands:
cru a YOUR_UNIQUE_CRON_NAME "`date -D '%s' +'%M %H %d %m *' -d $(( \`date +%s\`+2*60 ))` YOUR_CMD_HERE"
which adds a cron job that runs 2 minutes later, and only runs once.
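To see what that backtick expression expands to, a rough desktop equivalent with GNU date is the following (the BusyBox flags above differ; the output shown is just an example):
date -d '+2 minutes' +'%M %H %d %m *'
# prints something like: 17 14 05 03 *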
Inspired by PlagTag's idea, here is another way you could try:
ssh admin#192.168.1.1 "/jffs/your_script.sh &"
Simple, and without any extra programs like nohup or screen...
(BTW: worked on Asus-Merlin firmware)
Try this:
nohup /root/wget/wget_download.sh >/dev/null 2>&1 &
It will go to the background, so when you close your PuTTY session it will still be running, and it won't send messages to the terminal.

Using Cron to make Forever.js Reboot-Proof

I'm trying to execute Forever.js on system restart, using a bash script (named starter.sh) to check whether my app is running or not:
#!/bin/sh
if [ $(ps -e -o uid,cmd | grep $UID | grep node | grep -v grep | wc -l | tr -s "\n") -eq 0 ]
then
export PATH=/usr/local/bin:$PATH
forever start --sourceDir ~/var/www/mysite app.js >> ~/var/www/mysite/log.txt 2>&1
fi
Then I've appended the following code to crontab:
@reboot ~/var/www/mysite/starter.sh
but after restarting the system (sudo reboot) Forever.js doesn't start.
In the log file I receive the following messages:
/root/var/www/mysite/starter.sh: 6:
/root/var/www/mysite/starter.sh: forever: not found
Any idea?
P.S.
If I call forever from the command line (forever start --sourceDir ~/var/www/mysite app.js), everything works properly.
I would look into something like upstart to start/stop your node scripts on reboot. This post goes into a lot of detail about doing exactly what you're after, and you can possibly simplify the setup a bit for your needs:
https://www.exratione.com/2013/02/nodejs-and-forever-as-a-service-simple-upstart-and-init-scripts-for-ubuntu/
But if you're not running Ubuntu or similar, each environment has its own start/stop services mechanism. On Mac OS X you can use launchd instead. launchd has a lot of features, but hopefully this post can guide you in the right direction:
http://paul.annesley.cc/2012/09/mac-os-x-launchd-is-cool/
the missing piece:
n=$(which node);n=${n%/bin/node}; chmod -R 755 $n/bin/*; sudo cp -r $n/{bin,lib,share} /usr/local
This series of commands puts forever into /usr/local/bin/.
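An alternative sketch that avoids copying files around is to give the cron-invoked script an explicit PATH, or to call forever by its absolute path (the paths below are assumptions; check where node and forever actually live):
#!/bin/sh
# cron runs with a minimal PATH, so be explicit (assumed locations)
PATH=/usr/local/bin:/usr/bin:/bin
export PATH
/usr/local/bin/forever start --sourceDir ~/var/www/mysite app.js >> ~/var/www/mysite/log.txt 2>&1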

How to write a bash shell script to ssh to a remote machine, change user, export an env variable, and run other commands

I have a webservice that runs on multiple different remote Red Hat machines. Whenever I want to update the service, I sync down the new webservice source code (written in Perl) from a version control depot (I use Perforce) and restart the service using that newly synced code. I find it too tedious to log in to the remote machines one by one and run that series of commands to restart the service manually. So I wrote a bash script update.sh like the one below in order to "do it one time in one place, and update all machines". I run this shell script on my local machine. But it seems that it doesn't work: it only executes the first command, "sudo -u webservice_username -i", as far as I can tell from the command line on my local machine. (The code below only shows how it updates one of the remote webservices. The "export P4USER=myname" is for the Perforce client.)
#!/bin/sh
ssh myname@remotehost1 'sudo -u webservice_username -i ; export P4USER=myname; cd dir ; p4 sync ; cd bin ; ./prog --domain=config_file restart ; tail -f ../logs/service.log'
How do I know that only the first command is executed? Because after I enter the password for ssh on my local machine, it shows:
Your environment has been modified. Please check /tmp/webservice.env.
And it just gets stuck there; it never returns.
As suggested by a commenter, I added "-t" to ssh:
#!/bin/sh
ssh -t myname@remotehost1 'sudo -u webservice_username -i ; export P4USER=myname; cd dir ; p4 sync ; cd bin ; ./prog --domain=config_file restart ; tail -f ../logs/service.log'
This lets the local command line return. But it seems weird: it cannot cd to that "dir" (it says "cd: dir: No such file or directory") and it also says "p4: command not found". So it looks like the sudo -u command executes with no effect, and the export command either has not executed or executed with no effect.
The detailed local log looks like this:
Your environment has been modified. Please check /tmp/dir/.env.
bash: line 0: cd: dir: No such file or directory
bash: p4: command not found
bash: line 0: cd: bin: No such file or directory
bash: ./prog: No such file or directory
tail: cannot open `../logs/service.log' for reading: No such file or directory
tail: no files remaining
Instead of connecting via ssh and then immediately changing users, can you not use something like ssh -t webservice_username@remotehost1 to connect with the desired username to begin with? That would avoid needing to sudo altogether.
If that isn't a possibility, try wrapping up all of the commands that you want to run in a shell script and store it on the remote machine. If you can get your task working from a script, then your ssh call becomes much simpler and should encounter fewer problems:
ssh myname@remotehost1 '/path/to/script'
For easily updating this script, you can write a short script for your local machine that uploads the most recent version via scp and then uses ssh to invoke it.
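A minimal sketch of that upload-then-run flow (the script name is a placeholder):
scp update_remote.sh myname@remotehost1:update_remote.sh
ssh -t myname@remotehost1 'chmod +x ~/update_remote.sh && ~/update_remote.sh'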
Note that when you run:
#!/bin/sh
ssh myname@remotehost1 'sudo -u webservice_username -i ; export P4USER=myname; cd dir ; p4 sync ; cd bin ; ./prog --domain=config_file restart ; tail -f ../logs/service.log'
Your ssh session runs sudo -u webservice_username -i, waits for it to exit, and then runs the rest of the commands; it does not execute sudo and then run the following commands inside it. This has to do with the context in which you're running the series of commands. All the commands get executed in the shell of myname@remotehost1, and all sudo -u webservice_username -i does is start a shell for webservice_username; it doesn't actually run any of the other commands.
Really the best solution here is like bta said; write a script and then rsync/scp it to the destination and then run that using sudo.
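If it has to stay a one-liner, one way to make the whole sequence actually run as the service user is to hand it to sudo as a single command string (the paths mirror the question and are assumptions):
ssh -t myname@remotehost1 "sudo -u webservice_username bash -c 'export P4USER=myname; cd dir && p4 sync && cd bin && ./prog --domain=config_file restart'"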
The export command simply doesn't work with ssh like this; what you want to do is modify the remote ~/.bashrc, which gets sourced each time you log in over ssh.

Execute Remote Perl Script on a Windows Box from a Linux Box

We recently got SSH set up on our Windows boxes so we could eliminate the need for disc mounts on our Linux machines. We are using Pentaho, and I am writing a shell script that will, from a Linux box, SSH into the Windows box and execute a Perl script.
I have been able to SSH into the Windows box and switch to the directory that holds the Perl scripts I need to execute; I just can't figure out how to actually execute them.
This is what I have:
#!/bin/sh
ssh -t xxxxx@xxxxx "cd /path/to/script/ /path/to/perl.exe HelloWorld.pl"
I have also tried:
#!/bin/sh
ssh -t xxxxx@xxxxx "cd /path/to/directory/with/perl/script" \
"/path/to/perl.exe HelloWorld.pl"
Both attempts result in a short delay, then a "disconnected from xxxxx", and the Perl does not run. I can do all of these steps manually through a shell, but can't seem to get them working in script form. As a note, the only way I've been able to execute the Perl scripts is if the shell is in the directory the Perl script is in.
You need either a semicolon to separate your statements, or to execute everything as one statement.
try the following:
ssh xxxxx#xxxxx "cd /path/to/script/; /path/to/perl.exe HelloWorld.pl"
or:
ssh xxxxx#xxxxx "/path/to/perl.exe /path/to/script/HelloWorld.pl"
In the Windows command shell, you can use && just like in a Unix shell. If you expect the first command to succeed,
ssh -t xxxxx@xxxxx "cd /path/to/script/ && /path/to/perl.exe HelloWorld.pl"
will work.