How to send to stdin in bash 10 seconds after starting the process? - linux

What I want to do is:
run a process
wait 10 seconds
send a string to the stdin of the process
This should be done in a bash script.
I've tried:
./script&
pid=$!
sleep 10
echo $string > /proc/${pid}/fd/0
It works when typed interactively in a shell, but not when I run it from a script.

( sleep 10; echo "how you doin?" ) | ./script
Your approach might work on Linux if your script's stdin is, for example, a FIFO:
myscript(){ tr a-z A-Z; }
rm -f p
mkfifo p
exec 3<>p
myscript <&3 &
pid=$!
echo :waiting
sleep 0.5
echo :writing
file /proc/$pid/fd/0
echo hi > /proc/$pid/fd/0
exec 3>&-
But this /proc/$pid/fd stuff behaves differently on different Unices.
It doesn't work for your case because your script's stdin is a terminal.
With default terminal settings, the terminal driver puts background processes that try to read from it to sleep (by sending them the SIGTTIN signal), and writes to a terminal file descriptor just get echoed -- they won't wake up a sleeping background process that was stopped while trying to read from the terminal.
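If the requirement is simply to start the process, wait, and then feed it a line from within a script, the pipe shown at the top can be used directly (a minimal sketch; the delay and the string are placeholders):
#!/bin/bash
# Feed a line to the script's stdin 10 seconds after starting it,
# without going through /proc/$pid/fd/0 (which fails when stdin is a terminal).
string="how you doin?"
{ sleep 10; printf '%s\n' "$string"; } | ./script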

What about this (as OP requested it to be done in the script):
#! /bin/bash
./script&
pid=$!
sleep 10
echo $string > /proc/${pid}/fd/0
Just proposing the missing element, not commenting on coding style ;-)

Related

nohup append the executed command at the top of the output file

Let's say that we invoke the nohup in the following way:
nohup foo.py -n 20 2>&1 &
This will write the output to nohup.out.
How could we arrange for the whole command nohup foo.py -n 20 2>&1 & to sit at the top of nohup.out (or any other specified output file), after which the regular output of the executed command is written to that file?
The reason for this is purely for debugging purposes, as there will be thousands of commands like this executed and very often some of them will crash for various reasons. It's like a basic report kept in a file: the executed command at the top, followed by the output of that command.
A straightforward alternative would be something like:
myNohup() {
    (
        set +m                          # disable job control
        [[ -t 0 ]] && exec </dev/null   # redirect stdin away from tty
        [[ -t 1 ]] && exec >nohup.out   # redirect stdout away from tty
        [[ -t 2 ]] && exec 2>&1         # redirect stderr away from tty
        set -x                          # enable trace logging of all commands run
        "$@"                            # run our arguments as a command
    ) & disown -h "$!"                  # do not forward any HUP signal to the child process
}
To test this, define a command:
waitAndWrite() { sleep 5; echo "finished"; }
...and run:
myNohup waitAndWrite
...which will return immediately and, after five seconds, leave the following in nohup.out:
+ waitAndWrite
+ sleep 5
+ echo finished
finished
If you only want to write the exact command run without the side effects of xtrace, replace the set -x with (assuming bash 5.0 or newer) printf '%s\n' "${*@Q}".
For older versions of bash, you might instead consider printf '%q ' "$@"; printf '\n'.
This does differ a little from what the question proposes:
Redirections and other shell directives are not logged by set -x. When you run nohup foo 2>&1 &, the 2>&1 is not passed as an argument to nohup; instead, it's something the shell does before nohup is started. Similarly, the & is not an argument but an instruction to the shell not to wait() for the subprocess to finish before going on to future commands.
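Putting those pieces together, a variant of the helper above that writes the literal command line at the top of the log before running it might look like this (a sketch only; the function name and the nohup.out file name are illustrative, and the printf '%q ' form is used so it also works on bash older than 5.0):
myNohupWithHeader() {
    (
        set +m                          # disable job control
        [[ -t 0 ]] && exec </dev/null   # detach stdin from the tty
        [[ -t 1 ]] && exec >nohup.out   # detach stdout from the tty
        [[ -t 2 ]] && exec 2>&1         # detach stderr from the tty
        printf '%q ' "$@"; printf '\n'  # write the command itself as the first line
        "$@"                            # then run it
    ) & disown -h "$!"                  # do not forward HUP to the child
}
# usage: myNohupWithHeader foo.py -n 20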

Launch two processes simultaneously and collect results from the process finished earlier

Suppose I want to run two commands c1 and c2, which essentially process (but not modify) the same piece of data on Linux.
Right now I would like to launch them simultaneously and see which one finishes more quickly. Once one process has finished, I will collect its output (which could be dumped into a file with c1 >> log1.txt) and terminate the other process.
Note that the processing times of the two processes could differ greatly and hence be observable; say one takes ten seconds while the other takes sixty.
Update
I tried the following set of scripts, but it causes an infinite loop on my machine:
import os
os.system("./launch.sh")
launch.sh:
#!/usr/bin/env bash
rm /tmp/smack-checker2
mkfifo /tmp/smack-checker2
setsid bash -c "./sleep60.sh ; echo 1 > /tmp/run-checker2" &
pid0=$!
setsid bash -c "./sleep10.sh ; echo 2 > /tmp/run-checker2" &
pid1=$!
read line </tmp/smack-checker2
printf "Process %d finished earlier\n" "$line"
rm /tmp/smack-checker2
eval kill -- -\$"pid$((line ^ 1))"
sleep60.sh:
#!/usr/bin/env bash
sleep 60
sleep10.sh:
#!/usr/bin/env bash
sleep 10
Use wait -n to wait for either process to exit. Ignoring race conditions and pid number wrapping,
c1 & P1=$!
c2 & P2=$!
wait -n # wait for either one to exit
if ! kill $P1; then
    # failure to kill $P1 indicates c1 finished first
    kill $P2
    # collect c1 results...
else
    # c2 finished first
    kill $P1
    # collect c2 results...
fi
See help wait or man bash for documentation.
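If you can rely on bash 5.1 or newer, wait -n also takes a -p option that stores the identifier of whichever job finished, which avoids probing with kill (a sketch under that assumption; c1 and c2 are the commands from the question):
c1 & P1=$!
c2 & P2=$!
wait -n -p WINNER          # bash 5.1+: WINNER gets the PID of the job that exited first
if [ "$WINNER" = "$P1" ]; then
    kill $P2 2>/dev/null   # c1 finished first; stop c2
    # collect c1 results...
else
    kill $P1 2>/dev/null   # c2 finished first; stop c1
    # collect c2 results...
fi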
I would run the two processes and make them write to a shared named pipe after they finish. Reading from a named pipe is a blocking operation, so you don't need funny sleep instructions inside a loop. It would be:
#!/usr/bin/env bash
mkfifo /tmp/run-checker
(./sleep60.sh ; echo 0 > /tmp/run-checker) &
(./sleep10.sh ; echo 1 > /tmp/run-checker) &
read line </tmp/run-checker
printf "Process %d finished earlier\n" "$line"
rm /tmp/run-checker
kill -- -$$
sleep60.sh:
#!/usr/bin/env bash
sleep 60
sleep10.sh:
#!/usr/bin/env bash
sleep 10
EDIT:
If you're going to call the script from a Python script like this:
#!/usr/bin/env python3
import os
os.system("./parallel.sh")
print("Done")
you'll get:
Process 1 finished earlier
./parallel.sh: line 11: kill: (-13807) - No such process
Done
This is because kill -- -$$ tries to send the TERM signal to the process group, as specified in man 1 kill:
-n
    where n is larger than 1. All processes in process group n are
    signaled. When an argument of the form '-n' is given, and it is
    meant to denote a process group, either a signal must be specified
    first, or the argument must be preceded by a '--' option, otherwise
    it will be taken as the signal to send.
It works when you run parallel.sh from the terminal because $$ is the PID of the subshell and also of the process group. I used it because it's very convenient to kill parallel.sh, process0, or process1 and all their children in one shot. However, when parallel.sh is called from a Python script, $$ no longer denotes the process group and kill -- fails.
You could modify parallel.sh like that:
#!/usr/bin/env bash
mkfifo /tmp/run-checker
setsid bash -c "./sleep60.sh ; echo 0 > /tmp/run-checker" &
pid0=$!
setsid bash -c "./sleep10.sh ; echo 1 > /tmp/run-checker" &
pid1=$!
read line </tmp/run-checker
printf "Process %d finished earlier\n" "$line"
rm /tmp/run-checker
eval kill -- -\$"pid$((line ^ 1))"
It will now also work when called from a Python script. The last line
eval kill -- -\$"pid$((line ^ 1))"
kills pid1's process group if process 0 finished earlier, or pid0's process group if process 1 finished earlier, using the ^ (XOR) operator to convert 0 to 1 and vice versa. If you don't like it, you can use a slightly more verbose form:
if [ "$line" -eq "$pid0" ]
then
echo kill "$pid1"
kill -- -"$pid1"
else
echo kill "$pid0"
kill -- -"$pid0"
fi
Can this snippet give you some idea?
#!/bin/sh
runproc1() {
    sleep 5
    touch proc1   # file created when terminated
    exit
}
runproc2() {
    sleep 10
    touch proc2   # file created when terminated
    exit
}
# remove flags
rm proc1
rm proc2
# run processes concurrently
runproc1 &
runproc2 &
# wait until one of them is finished
while [ ! -f proc1 -a ! -f proc2 ]; do
    sleep 1
    echo -n "."
done
The idea is to wrap the two processes in two functions which, at the end, touch a file to signal that the computation has finished. The functions are executed in the background, after removing the files used as flags. The last step is to watch for either file to show up. At that point, anything can be done: continue to wait for the other process, or kill it.
Running this exact script, it takes about 5 seconds and then terminates. I see that the file proc1 is created, with no proc2. After a few more seconds (5, to be precise), proc2 is also created. This shows that even after the script has terminated, any unfinished job keeps running.
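For example, recording the background PIDs makes it easy to kill the slower process once a flag file shows up (a sketch built on the snippet above; note that killing a subshell does not necessarily kill its own children, which is what the setsid/process-group approach earlier addresses):
runproc1 & pid1=$!
runproc2 & pid2=$!
while [ ! -f proc1 -a ! -f proc2 ]; do
    sleep 1
done
if [ -f proc1 ]; then
    kill "$pid2" 2>/dev/null   # proc1 finished first; stop the other one
else
    kill "$pid1" 2>/dev/null   # proc2 finished first; stop the other one
fi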

Bash: Using SSH to start a long-running remote command and collect its PID

When I do the following, I have to press CTRL-C afterwards or the shell acts weird: the left/right arrow keys, for example, don't move correctly and the text gets messed up.
# read -r pid < <(ssh 10.10.10.46 'sleep 50 & echo $!') ; echo $pid
2135
# Killed by signal 2.
^C
#
I need this for a script, so I'd like to know why CTRL-C is needed and whether it is possible to work around it.
Update
It looks like it opens an extra Bash shell, and that is the one that needs to be exited.
The command I am actually interested in is:
read -r pid < <(ssh 10.10.10.46 "mbuffer -4 -v 0 -q -I 8023 > /tmp/mtest & echo $!"); echo $pid
Try this instead:
read -r pid \
< <(ssh 10.10.10.46 'nohup mbuffer >/tmp/mtest </dev/null 2>/tmp/mtest.err & echo $!')
Three important changes:
Use of nohup (you could also get a similar effect with the bash built-in disown)
Redirection of stdin, stdout, and stderr away from the terminal (preventing the remote process from holding handles that connect, eventually, to your terminal).
Use of single quotes for the remote command (with double-quotes, expansions happen before ssh is started, so the $! you get is the PID of the most recently started local background process).
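Once the PID has been captured this way, the rest of the script can use it over further ssh calls, for example to stop the remote process when done (a usage sketch with the host from the question):
read -r pid \
  < <(ssh 10.10.10.46 'nohup mbuffer -4 -v 0 -q -I 8023 >/tmp/mtest </dev/null 2>/tmp/mtest.err & echo $!')
echo "remote mbuffer is running as PID $pid"
# ...later, when the transfer is finished:
ssh 10.10.10.46 "kill $pid"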

Can I detect early exit from a long-running, backgrounded process?

I'm trying to improve the startup scripts for several servers running in a cluster environment. The server processes should run indefinitely but occasionally fail on startup, throwing e.g. Address already in use exceptions.
I'd like the exit code for the startup script to reflect these early terminations by, say, waiting for 1 second and telling me if the server seems to have started okay. I also need the server PID echoed.
Here's my best shot so far:
$ cat startup.sh
# start the server in the bg but if it fails in the first second,
# then kill startup.sh.
CMD="start_server -option1 foo -option2 bar"
eval "($CMD >> cc.log 2>&1 || kill -9 $$ &)"
SERVER_PID=$!
# the `kill` above only has 1 second to kill me-- otherwise my exit code is 0
sleep 1
echo $SERVER_PID
The exit code works fine but two problems remain:
If the server is long-running but eventually encounters an error, the parent startup.sh will have exited already and the $$ PID may have been reused by an unrelated process which this script will then kill off.
The SERVER_PID isn't correct since it's the PID of the subshell rather than of the start_server command (which in this case is a grandchild of the startup.sh script).
Is there a simpler way to background the start_server process, get its PID, and use a timeout'ed check for error codes? I looked into the bash builtin wait and the timeout utility, but they don't seem to work for processes that aren't supposed to exit.
I can't change the server code and the startup script should not run indefinitely.
You can also use coproc (and look, I'm putting the command in an array, and also with proper quoting!):
#!/bin/bash
cmd=( start_server -option1 foo -option2 bar )
coproc mycoprocfd { "${cmd[@]}" >> cc.log 2>&1 ; }
server_pid=$!
sleep 1
if [[ -z "${mycoprocfd[@]}" ]]; then
    echo >&2 "Failure detected when starting server! Server died before 1 second."
    exit 1
else
    echo $server_pid
fi
The trick is that coproc puts the file descriptors of the redirections of stdin and stdout in a prescribed array (here mycoprocfd) and empties the array when the process exits. So you don't need to do clumsy stuff with the PID itself.
You can hence check for the server to never exit as so:
#!/bin/bash
cmd=( start_server -option1 foo -option2 bar )
coproc mycoprocfd { "${cmd[@]}" >> cc.log 2>&1 ; }
server_pid=$!
read -u "${mycoprocfd[0]}"
echo >&2 "Oh dear, the server with PID $server_pid died after $SECONDS seconds."
exit 1
That's because read will read on the file descriptor given by coproc (but nothing is ever read here, since the stdout of your command has been redirected to a file!), and read exits when the file descriptor is closed, i.e., when the command launched by coproc exits.
I'd say this is a really elegant solution!
Now, this script will live as long as the coproc lives. I understood that's not what you want. In this case, you can time out the read with its -t option, and then use the fact that read's exit status is greater than 128 if it timed out. E.g., for a 4.5 second timeout:
#!/bin/bash
timeout=4.5
cmd=( start_server -option1 foo -option2 bar )
coproc mycoprocfd { "${cmd[@]}" >> cc.log 2>&1 ; }
server_pid=$!
read -t $timeout -u "${mycoprocfd[0]}"
if (($?>128)); then
    echo "$server_pid <-- all is good, it's still alive after $timeout seconds."
else
    echo >&2 "Oh dear, the server with PID $server_pid died after $timeout seconds."
    exit 1
fi
exit 0 # Yay
This is also very elegant :).
Use, extend, and adapt to your needs! (but with good practices!)
Hope this helps!
Remarks.
coproc is a bash-builtin that appeared in bash 4.0. The solutions shown here are 100% pure bash (except the first one, with sleep, which is not the best one at all!).
The use of coproc in scripts is almost always superior to putting jobs in background with & and doing clumsy and awkward stuff with sleep and checking $!.
If you want coproc to keep quiet, whatever happens (e.g., if there's an error launching the command, which is fine here since you're handling everything yourself), do:
coproc mycoprocfd { "${cmd[@]}" >> cc.log 2>&1 ; } > /dev/null 2>&1
Twenty more minutes of googling revealed https://stackoverflow.com/a/6756971/494983 and the kill -0 $PID trick from https://stackoverflow.com/a/14296353/494983.
So it seems I can use:
$ cat startup.sh
CMD="start_server -option1 foo -option2 bar"
eval "$CMD >> cc.log 2>&1 &"
SERVER_PID=$!
sleep 1
kill -0 $SERVER_PID
if [ $? != 0 ]; then
    echo "Failure detected when starting server! PID $SERVER_PID doesn't exist!" 1>&2
    exit 1
else
    echo $SERVER_PID
fi
This wouldn't work for processes that I can't send signals to but works well enough in my case (where startup.sh starts the server itself).

Bash: How do I make sub-processes of a script terminate when the script is terminated?

The question applies to a script such as the following:
Script
#!/bin/sh
SRC="/tmp/my-server-logs"
echo "STARTING GREP JOBS..."
for f in `find ${SRC} -name '*log*2011*' | sort --reverse`
do
    (
        OUT=`nice grep -ci -E "${1}" "${f}"`
        if [ "${OUT}" != "0" ]
        then
            printf '%7s : %s\n' "${OUT}" "${f}"
        else
            printf '%7s %s\n' "(none)" "${f}"
        fi
    ) &
done
echo "WAITING..."
wait
echo "FINISHED!"
Current behavior
Pressing Ctrl+C in console terminates the script but not the already running grep processes.
Write a trap for Ctrl+C and, in the trap, kill all of the subprocesses. Put this before your wait command.
function handle_sigint()
{
    for proc in `jobs -p`
    do
        kill $proc
    done
}
trap handle_sigint SIGINT
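A more compact equivalent inlines the handler; the 2>/dev/null merely hides errors for jobs that have already exited (a sketch; as noted above, it belongs before the wait):
trap 'kill $(jobs -p) 2>/dev/null' INT   # on Ctrl+C, kill every background job of this shell
# ... start the background subshells with & as in the question ...
wait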
A simple alternative is using a cat pipe. The following worked for me:
echo "-" > test.text;
for x in 1 2 3; do
    ( sleep $x; echo $x | tee --append test.text; ) &
done | cat
All child processes are killed if I press Ctrl-C before the last number is printed to stdout. It also works if the text-generating command is something long-running such as "find /", i.e. it is not only the connection to stdout through cat that is killed but the child process itself.
For large scripts that make extensive use of subprocesses, the easiest way to ensure the intended Ctrl-C behaviour is to wrap the whole script in such a subshell, e.g.
#!/usr/bin/bash
(
...
) | cat
I am not sure, though, whether this has exactly the same effect as Andrew's answer (i.e. I'm not sure what signal is sent to the subprocesses). Also, I only tested this with Cygwin, not with a native Linux shell.
