Running Vagrant SSH causes BASH loop to terminate prematurely [duplicate] - linux

This question already has answers here:
ssh breaks out of while-loop in bash [duplicate]
(2 answers)
Closed 7 years ago.
I have a bash script that fetches running selenium nodes, grabs their ID, and SSHs into them to perform configuration tasks.
#!/bin/bash
# retrieve list of running vagrant machines, filter them to selenium nodes, and
# process provisioning for each
vagrant global-status --prune | grep "selenium_node" | while read -ra line ; do
echo -e "\033[32mBEGININNG ${line[1]} PROVISIONING\033[37m"
# adding this statement causes the loop to exit after 1 iteration
vagrant ssh ${line[0]} -- "
echo 'it runs'
"
echo -e "\033[32mEND ${line[1]} PROVISION\033[37m"
done
My problem is that running vagrant ssh causes the loop to terminate after the first iteration. I confirmed this by removing vagrant ssh: with it gone, both the BEGINNING and END echo commands ran successfully for every iteration (in my case, two iterations).
What's stranger is that the loop DOES complete its first iteration (as evidenced by the END echo line completing); it just doesn't run any further iterations.
Also, I've confirmed that it's not just neglecting to show the output from the other iterations. It never performs any operations on the other machines.

ssh - including vagrant ssh - consumes standard input, so if you run it inside a while read loop, it won't leave anything for the next loop iteration to read.
You can fix that by either telling ssh not to read standard input (ssh -n) or by using a different construct than while read. In this case, since vagrant ssh doesn't support the -n option, I suggest running it with its input redirected from /dev/null:
</dev/null vagrant ssh ${line[0]} -- "
echo 'it runs'
"

Related

SSH the output to different terminals [duplicate]

This question already has answers here:
how do i start commands in new terminals in BASH script
(2 answers)
Closed 17 days ago.
I am using a for loop to SSH into multiple hosts
#!/usr/bin/bash
bandit=$(cat /root/Desktop/bandit.txt)
for host in {1..2}
do
echo "inside the loop"
ssh bandit$host@$bandit -p 2220
echo "After the loop"
done
#ssh bandit0@bandit.labs.overthewire.org -p 2220
bandit.txt has the following content " bandit.labs.overthewire.org"
I am getting the SSH prompts one at a time: for example, first I get the "bandit1" host login prompt, and only after closing the "bandit1" session do I get the second ssh session, for "bandit2".
I would like to get two different terminals for each SSH session.
But there is no such thing as a "terminal window" in bash (well, there is a tty, yours; but you can't just open a new window, because bash is not aware that it is running inside a specific program that emulates the behavior of a terminal in a GUI window).
So it can't be as easy as you might think.
Of course, you can choose a terminal emulator, and run it yourself.
For example
for host in {1..2}
do
xterm -e ssh bandit$host@$bandit -p 2220 &
done
may be what you are looking for, if you have the xterm program installed.
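If you want to guard against xterm being missing, here is a minimal sketch (my addition; substitute whatever terminal emulator you prefer):
#!/usr/bin/bash
bandit=$(cat /root/Desktop/bandit.txt)
# bail out early if no terminal emulator is available
if ! command -v xterm > /dev/null; then
    echo "xterm not found; install it or use another terminal emulator" >&2
    exit 1
fi
for host in {1..2}
do
    # each xterm runs one ssh session and is backgrounded so the loop continues
    xterm -e ssh bandit$host@$bandit -p 2220 &
done
# keep the script alive until every terminal window is closed
wait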
Maybe, with some additional error checking, something like this - writing the commands out to other already-open terminal devices:
scr=( /dev/stdout
$(ps -fu $USERNAME |
awk '$4~/^pty/{lst[$4]} END{for (pty in lst) print pty}' ) )
for host in {1..2}; do echo ssh bandit$host@etc... >> /dev/${scr[$host]}; done
There are a lot of variations and kinks to work out, though. tty or pty? What if there's no other window open? etc... But with luck it will give you something to work from.

Taking sequential output from multiple ssh sessions with bash scripting

I wrote a bash script, and I don't have the option of downloading pssh. However, I need to run multiple ssh commands in parallel. There is no problem so far, but when I run the ssh commands, I want to see the outputs sequentially from each remote machine. I mean one ssh has multiple lines of output, and they get mixed up because more than one ssh is running.
#!/bin/bash
pid_list=""
while read -r list
do
ssh user@$list 'commands' &
c_pid=$!
pid_list="$pid_list $c_pid"
done < list.txt
for pid in $pid_list
do
wait $pid
done
What should I add to the code to take the output unmixed?
The most obvious way to me would be to write the outputs to per-host files and cat the files at the end:
#!/bin/bash
me=$$
pid_list=""
while read -r list
do
ssh user@$list 'hostname; sleep $((RANDOM%5)); hostname ; sleep $((RANDOM%5))' > /tmp/log-${me}-$list &
c_pid=$!
pid_list="$pid_list $c_pid"
done < list.txt
for pid in $pid_list
do
wait $pid
done
cat /tmp/log-${me}-*
rm /tmp/log-${me}-* 2> /dev/null
I didn't handle stderr because that wasn't in your question. Nor did I address the order of output because that isn't specified either. Nor is whether the output should appear as each host finishes. If you want those aspects covered, please improve your question.
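If stderr should be captured as well, a small tweak (my addition, untested) merges it into the same per-host log inside the loop:
# capture both stdout and stderr in the per-host log file
ssh user@$list 'commands' > /tmp/log-${me}-$list 2>&1 &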

BASH: simultaneous execution of a multiloop function without waiting

Usecase:
I need to transfer binary files (1 GB) to an array of IPs and start executing them upon arrival at their destinations, without waiting for all the binaries to be transferred/executed. A sort of parallel mode.
Situation:
I have 2 functions - transfer and execution (depending on approach it can be shortened to 1 with 2 loops).
for N in "${NODES[#]}"; do
rsync -Pcz -e "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" --timeout=10 $FILE user#$N
done
and
for N in "${NODES[#]}"; do
ssh user#$N "cd ~/; ./exec.sh"
done
The point is that in this case I have to wait until all the transfers finish first (and there can sometimes be tens of addresses), and only afterwards start the execution.
If I combine the loops into a single one, I have to wait again - this time for transfer+execution per node.
Expectation:
I'd like to transfer a file to the first node, start its execution, and switch to the second node with the same process, and so on. So timing would count for the transfers only, whereas each node executes the file on its own in parallel.
Obstacles:
1- I need to be able to capture the execution output from each node
2- additional packages, like screen, are not an option
What did i try:
I was thinking about injecting a script into the remote nodes via the loop, to control the execution from there. But I'm sure there must be some less barbaric option.
What can be done here?
You should be able to use a single loop, and run the ssh command with a & suffix, which runs it in the background (i.e. without waiting for it to finish), and then after the loop use wait to wait for all of them to finish. Collecting output will be more interesting... I think you'll need to collect each run's output into a file, and then print the files at the end. Something like this (note that I have not tested this properly):
tmpdir="$(mktemp -qd -t "$(basename "$0")")" || {
echo "Error creating temporary directory" >&2
exit 1
}
for nodenum in "${!NODES[#]}"; do
# The ${!array[#]} idiom gets a list of array *indexes*, not elements; get the element by index:
N=${NODES[nodenum]}
# Copy file, and wait for copy to finish:
rsync -Pcz -e "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" --timeout=10 $FILE user@$N
# Start the script, and *don't* wait for it to finish:
ssh user#$N "cd ~/ sh exec.sh" >"$tmpdir/$nodenum.out" 2>&1 &
done
# Wait for all of the scripts to finish
wait
# Print all of the outputs (in order)
for nodenum in "${!NODES[@]}"; do
echo
echo "Output from ${NODES[nodenum]}:"
cat "$tmpdir/$nodenum.out"
done
# Clean up the temp directory
rm -R "$tmpdir"
BTW, the remote command "cd ~/ sh exec.sh" doesn't make sense. Is there supposed to be a semicolon in there? Also, I recommend using lower or mixed-case variable names to avoid conflicts with the many all-caps variables that have some sort of special meaning, and putting double-quotes around variable references (i.e. rsync ... "$FILE" "user@$N" instead of rsync ... $FILE user@$N).
EDIT: this assumes you want to start the script on each host as soon as that particular copy is done; if you want to wait until all copies are done, then fire all scripts at once, use two loops: one to do the copies, then a second that does the ssh commands in the background (collecting output as above), then wait for those to all finish, then print all of the outputs.
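A minimal sketch of that two-loop variant (untested; it reuses the $tmpdir setup from above and the question's original remote command):
# Pass 1: copy the file to every node, sequentially
for nodenum in "${!NODES[@]}"; do
    rsync -Pcz -e "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" --timeout=10 "$FILE" "user@${NODES[nodenum]}"
done
# Pass 2: start every script at once, collecting output per node
for nodenum in "${!NODES[@]}"; do
    ssh "user@${NODES[nodenum]}" "cd ~/; ./exec.sh" > "$tmpdir/$nodenum.out" 2>&1 &
done
# Wait for all of the scripts to finish, then print the outputs as before
wait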
You could do the transfer and script as a single background task, so that the script on a particular host starts as soon as its transfer is complete
for N in "${NODES[#]}"; do
(rsync -Pcz -e "ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" --timeout=10 $FILE user#$N
ssh user#$N "cd ~/; ./exec.sh") > ${N}.log 2>&1 &
done
You then collect all of the per-host ${N}.log files.
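Note that the loop only starts the jobs; you need a wait before reading the logs, so the background tasks have actually finished. Something like (my addition):
wait   # let every transfer+execute job finish first
for N in "${NODES[@]}"; do
    echo "Output from $N:"
    cat "${N}.log"
done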

Ending an mpirun process terminates a bash loop

I'm trying to schedule a series of mpi jobs on an Ubuntu 14.04 LTS machine using a bash script. Basically, I want a simulation to run on every core for a certain amount of time, then terminate and move on to the next case once that time has elapsed.
My issue arises when mpi exits at the end of the first job - it breaks the loop and returns the terminal to my control instead of heading onto the next iteration of the loop.
My script is included below. The file "case_names" is just a text file of directory names. I've tested the script with other commands and it works fine until I uncomment the mpirun call.
#!/bin/bash
while read line;
do
# Access case directory
cd $line
echo "Case $line accessed"
# Start simulation
echo "Case $line starting: $(date)"
mpirun -q -np 8 dsmcFoamPlus -parallel > log.dsmcFoamPlus &
# Wait for 10 hour runtime
sleep 36000
# Kill job
pkill mpirun > /dev/null
echo "Case $line terminated: $(date)"
# Return to parent directory
cd ..
done < case_names
Does anyone know of a way to stop mpirun from breaking the loop like this?
So far I've tried GNOME task scheduler and task-spooler, but neither have worked (likely due to aliases that have to be invoked before the commands I use become available). I'd really rather not have to resort to setting up slurm. I've also tried using the disown command to separate the mpi process from the shell I'm running the scheduling script in, and have even written a separate script just to kill processes which the scheduling script runs remotely.
Many thanks in advance!
I've managed to find a workaround that allows me to schedule tasks with a bash script like I wanted. Since this solves my issue, I'm posting it as an answer (although I would still welcome an explanation as to why mpi behaves in this way in loops).
The solution lay in writing a separate script for both calling and then killing mpi, which would itself be called by the scheduling script. Since this child bash process has no loops in it, there are no issues with mpi breaking them after being killed. Also, once this script has exited, the scheduling loop can continue unimpeded.
My (now working) code is included below.
Scheduling script:
while read line;
do
cd $line
echo "CWD: $(pwd)"
echo "Case $line accessed"
bash ../run_job
echo "Case $line terminated: $(date)"
cd ..
done < case_names
Execution script (run_job):
mpirun -q -np 8 dsmcFoamPlus -parallel > log.dsmcFoamPlus &
echo "Case $line starting: $(date)"
sleep 600
pkill mpirun
I hope someone will find this useful.
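As a likely explanation (my guess, in line with the other answers on this page): mpirun reads standard input and forwards it to rank 0, so inside a while read loop fed from case_names it can swallow the remaining lines, just as ssh does. If that is the cause, redirecting its input should let the original single-script loop survive:
# untested: give mpirun an empty stdin so it can't consume case_names
mpirun -q -np 8 dsmcFoamPlus -parallel < /dev/null > log.dsmcFoamPlus &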

Shell script while only loops once [duplicate]

This question already has answers here:
ssh breaks out of while-loop in bash [duplicate]
(2 answers)
Closed 8 years ago.
I want to get all the servers' time with ssh, but my script only loops once and then exits. What's the reason?
servers.txt
192.168.1.123
192.168.1.124
192.168.1.125
192.168.1.126
gettime.sh
#!/bin/bash
while read host
do
ssh root@${host} date
done < "servers.txt"
OUTPUT
Tue Feb 3 09:56:54 CST 2015
This happens because ssh starts reading the input you intended for your loop. You can add < /dev/null to prevent it:
#!/bin/bash
while read host
do
ssh root@${host} date < /dev/null
done < "servers.txt"
The same thing tends to happen with ffmpeg and mplayer, and can be solved the same way.
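ffmpeg, for instance, has a dedicated -nostdin flag for exactly this; a hedged sketch (the file names are made up):
while read -r f; do
    # -nostdin stops ffmpeg from reading the loop's input
    ffmpeg -nostdin -i "$f" "${f%.mkv}.mp4"
done < videos.txt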
PS: This and other issues are automatically caught by shellcheck:
In yourscript line 4:
ssh root@${host} date
^-- SC2095: Add < /dev/null to prevent ssh from swallowing stdin.
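For reference, running it yourself is as simple as (assuming shellcheck is installed; there is also a web version at shellcheck.net):
shellcheck gettime.sh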
