Printing the PID of a just-started process before its output - Linux

I am controlling a remote Linux machine via SSH. I need to know the PID of a process while it is running, and its exit status after the run.
My attempt has been to issue this command via SSH:
my_cmd & echo $!; wait $!; echo $?;
The output is then the following, which is exactly what I need:
pid
...stdout...
exit_status
Now it sometimes happens that the command is apparently too fast, so I get something like this instead:
...stdout...
pid
exit_status
Is there a way to prevent this behavior?

When you run a program in the background, it is an independent process, and some synchronization is necessary if output in a defined order is required. But this particular case can be solved easily via exec and an additional shell script.
The first script, let's call it start:
#!/bin/bash
start2 &
wait $!
echo $?
The second script, start2:
#!/bin/bash
echo $$
exec my_cmd
Now the first script starts the second one and waits for the result. The second script prints its own PID and then execs the program, which therefore runs with the same PID as the second script.
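The same idea can be collapsed into a single command line (a sketch only; it relies on bash's $BASHPID, and my_cmd stands for your command):
{ echo "$BASHPID"; exec my_cmd; } & wait "$!"; echo "$?"
Because the background child itself prints its PID before exec'ing my_cmd, the PID line is always written before any of my_cmd's output.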

Yes, when you use & you are invoking a background process with its output going to stdout, as you showed, and that's the point: the process executes and prints to stdout faster than the PID can be queried and printed. So what I recommend is redirecting stdout to a temporary file and then printing it exactly when you want the output to appear. Example:
$ o=/tmp/_o; your_cmd >$o 2>&1 & echo "pid is $!"; wait $!; r=$?; cat $o; echo "return is $r"; rm -f $o
First we set the output variable o to the temporary file /tmp/_o.
Then we run your_cmd in the background, redirecting all of its output (stdout and stderr) to $o (which points to /tmp/_o).
Then we show the PID.
Then we wait for that PID.
Then we save the return status in the variable r (so we can show it after the output).
Then we cat the output file, showing the output at the point you want it.
Then we show the return status.
Then we remove the temp file.
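A slightly safer variant of the same approach uses mktemp instead of a fixed name in /tmp (a sketch; your_cmd again stands for your command):
o=$(mktemp) || exit 1    # create a unique temporary file
your_cmd >"$o" 2>&1 &    # run in the background, all output into the temp file
echo "pid is $!"
wait "$!"; r=$?          # wait and remember the exit status
cat "$o"                 # now print the captured output
echo "return is $r"
rm -f "$o"               # clean up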
Hope this works for your case and doesn't feel too complex; maybe someone else can suggest a simpler workaround.

Related

How to capture error messages from a program that fails only outside the terminal?

On a Linux server, I have a script here that will work fine when I start it from the terminal, but fail when started and then detached by another process. So there is probably a difference in the script's environment to fix.
The trouble is, the other process integrating that script does not provide access to its error messages when the script fails. What is an easy (and ideally generic) way to see the output of such a script when it's failing?
Let's assume I have no easy way to change the code of the process calling this script. The failure happens right at the start of the script's run, so there is not enough time to manually attach to it with strace to see its output.
(The specifics should not matter, but for what it's worth: the failing script is the backup script of Discourse, a widespread open source forum software. Discourse and this script are written in Ruby.)
The idea is to substitute the original script with a wrapper that calls the original script and saves its stdout and stderr to files. The wrapper may look like this:
#!/bin/bash
exec /path/to/original/script "$@" 1> >(tee /tmp/out.log) 2> >(tee /tmp/err.log >&2)
1> >(tee /tmp/out.log) redirects stdout to the input of tee /tmp/out.log running in a process substitution; tee /tmp/out.log passes it through to stdout but saves a copy to the file.
2> >(tee /tmp/err.log >&2) redirects stderr to the input of tee /tmp/err.log; tee /tmp/err.log >&2 passes it back to stderr but saves a copy to the file.
If the script is invoked multiple times, you may want to append stdout and stderr to the files; use tee -a in that case.
The remaining problem is how to force the caller to execute the wrapper script instead of the original one.
If the caller invokes the script in a way that searches PATH, you can put the wrapper script in a separate directory and provide a modified PATH to the caller. For example, if the script name is script, put the wrapper at /some/dir/script and run the caller as
$ PATH="/some/dir:$PATH" caller
Note that /path/to/original/script in the wrapper must be absolute.
If the caller invokes the script from a specific path, you can rename the original script, e.g. to original-script, and name the wrapper script. In this case the wrapper should call /path/to/original/original-script.
Another problem may arise if the script behaves differently depending on the name it is called with; in that case exec -a ... may be needed.
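For example, a sketch of such a wrapper, using the hypothetical paths from above and tee -a so that repeated runs append:
#!/bin/bash
# Sketch only: delegate to the renamed original while preserving the name
# the wrapper was invoked as (argv[0]), in case the script inspects it.
exec -a "$(basename "$0")" /path/to/original/original-script "$@" \
    1> >(tee -a /tmp/out.log) 2> >(tee -a /tmp/err.log >&2)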
You can use a shell script that (1) busy-waits until it sees the targeted process, and then (2) immediately attaches to it with strace and prints its output to the terminal.
#!/bin/sh
# Adapt to a regex that matches only your target process' full command.
name_pattern="bin/ruby.*spawn_backup_restore.rb"
# Wait for a process to start, based on its name, and capture its PID.
# Inspiration and details: https://unix.stackexchange.com/a/410075
pid=
while [ -z "$pid" ] ; do
    pid="$(pgrep --full "$name_pattern" | head -n 1)"
    # Set the delay for the next check to 1 ms to try capturing all output.
    # Remove it completely if this is not enough to capture output from the start.
    sleep 0.001
done
echo "target process has started, pid is $pid"
# Print all stdout and stderr output of the process we found.
# Source and explanations: https://unix.stackexchange.com/a/58601
strace -p "$pid" -s 9999 -e write
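If you also want to keep a copy of what strace captures, you could pipe the watcher's output through tee (a sketch; watch-output.sh is a hypothetical name for the script above, and strace -p may need root or relaxed ptrace permissions):
sudo ./watch-output.sh 2>&1 | tee /tmp/failing-script-output.log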

How can I make bash block on getting a line of stdout from a job that I have spawned?

I need to launch a process within a shell script. (It is a special logging process.) It needs to live for most of the shell script, while some other processes will run, and then at the end we will kill it.
A problem that I am having is that I need to launch this process, and wait for it to "warm up", before proceeding to launch more processes.
I know that I can wait for a line of input from a pipe using read, and I know that I can spawn a child process using &. But when I use them together, it doesn't work like I expect.
As a mockup:
When I run this (sequential):
(sleep 1 && echo "foo") > read
my whole shell blocks for 1 second, and the echo of foo is consumed by read, as I expect.
I want to do something very similar, except that I run the "foo" job in parallel:
(sleep 1 && echo "foo" &) > read
But when I run this, my shell doesn't block at all; it returns instantly. I don't understand why the read doesn't wait for a line to be printed on the pipe.
Is there some easy way to combine "spawning of a job" (&) with capturing the stdout pipe within the original shell?
An example that is very close to what I actually need, and which I need to rephrase somehow, is:
(sleep 1 && echo "foo" && sleep 20 &) > read; echo "bar"
and I need for it to print "bar" after exactly one second, and not immediately, or 21 seconds later.
Here's an example using named pipes, pretty close to what I used in the end. Thanks to Luis for his comments suggesting named pipes.
#!/bin/sh
# Set up temporary fifo
FIFO=/tmp/test_fifo
rm -f "$FIFO"
mkfifo "$FIFO"
# Spawn a second job that writes to FIFO after some time
{ sleep 1 && echo "foo" && sleep 20; } >$FIFO &
# Block the main job on getting a line from the FIFO
read line <$FIFO
# So that we can see when the main job exits
echo $line
Thanks also to commenter Emily E.: the misbehaving example I posted was indeed writing to a file called read instead of using the shell builtin command read.
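For comparison, a bash-only sketch (not part of the original answers) that avoids the named pipe by using process substitution to hand read a file descriptor connected to the background job:
#!/bin/bash
# Connect FD 3 to the stdout of a background job via process substitution.
exec 3< <(sleep 1 && echo "foo" && sleep 20)
# Block only until the first line arrives (about one second here).
read -r line <&3
echo "$line"    # the main script continues here while 'sleep 20' keeps running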

shell prompt seemingly does not reappear after running a script that uses exec with tee to send stdout output to both the terminal and a file

I have a shell script which writes all output to a logfile
and to the terminal; this part works fine. But if I execute the script,
a new shell prompt only appears if I press Enter. Why is that, and how do I fix it?
#!/bin/bash
exec > >(tee logfile)
echo "output"
First, when I test this, there always is a new shell prompt; it's just that sometimes the string output comes after it, so the prompt isn't the last thing on the screen. Did you happen to overlook it? If so, there seems to be a race where the shell prints the prompt before the tee in the background completes.
Unfortunately, that cannot be fixed by waiting in the shell for tee; see this question on unix.stackexchange.com. Fragile workarounds aside, the easiest way I see to solve this is to put your whole script inside a group command:
{
your-code-here
} | tee logfile
If I run the following script (suppressing the newline from the echo), I see the prompt, but not "output". The string is still written to the file.
#!/bin/bash
exec > >(tee logfile)
echo -n "output"
What I suspect is this: you have three different file descriptors trying to write to the same file (that is, the terminal): standard output of the shell, standard error of the shell, and the standard output of tee. The shell writes synchronously: first the echo to standard output, then the prompt to standard error, so the terminal is able to sequence them correctly. However, the third file descriptor is written to asynchronously by tee, so there is a race condition. I don't quite understand how my modification affects the race, but it appears to upset some balance, allowing the prompt to be written at a different time and appear on the screen. (I expect output buffering to play a part in this).
You might also try running your script after running the script command, which will log everything written to the terminal; if you wade through all the control characters in the file, you may notice the prompt in the file just prior to the output written by tee. In support of my race condition theory, I'll note that after running the script a few times, it was no longer displaying "abnormal" behavior; my shell prompt was displayed as expected after the string "output", so there is definitely some non-deterministic element to this situation.
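For reference, capturing such a transcript might look like this (a sketch; yourscript stands for the script under test):
script /tmp/transcript.txt    # start recording everything written to the terminal
./yourscript                  # run the script that shows the odd prompt ordering
exit                          # end the recording
less -R /tmp/transcript.txt   # inspect the transcript, control characters and all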
@chepner's answer provides great background information.
Here's a workaround - works on Ubuntu 12.04 (Linux 3.2.0) and on OS X 10.9.1:
#!/bin/bash
exec > >(tee logfile)
echo "output"
# WORKAROUND - place LAST in your script.
# Execute an executable (as opposed to a builtin) that outputs *something*
# to make the prompt reappear normally.
# In this case we use the printf *executable* to output an *empty string*.
# Use of `$ec` is to ensure that the script's actual exit code is passed through.
ec=$?; $(which printf) ''; exit $ec
Alternatives:
@user2719058's answer shows a simple alternative: wrapping the entire script body in a group command ({ ... }) and piping it to tee logfile.
An external solution, as @chepner has already hinted at, is to use the script utility to create a "transcript" of your script's output in addition to displaying it:
script -qc yourScript /dev/null > logfile # Linux syntax
This, however, will also capture stderr output; if you wanted to avoid that, use:
script -qc 'yourScript 2>/dev/null' /dev/null > logfile
Note, however, that this will suppress stderr output altogether.
As others have noted, it's not that there's no prompt printed -- it's that the last of the output written by tee can come after the prompt, making the prompt no longer visible.
If you have bash 4.4 or newer, you can wait for your tee process to exit, like so:
#!/usr/bin/env bash
case $BASH_VERSION in ''|[0-3].*|4.[0-3]) echo "ERROR: Bash 4.4+ needed" >&2; exit 1;; esac
exec {orig_stdout}>&1 {orig_stderr}>&2    # back up the original stdout and stderr
exec > >(tee -a "_install_log"); tee_pid=$!    # track the PID of tee after starting it
cleanup() {    # define a function we'll call during shutdown
    retval=$?
    exec >&$orig_stdout     # copy the original stdout back to FD 1, overwriting the pipe to tee
    exec 2>&$orig_stderr    # if something redirected stderr to also go through tee, fix that too
    wait "$tee_pid"         # now, wait until tee exits
    exit "$retval"          # and complete the exit with our original exit status
}
trap cleanup EXIT    # configure the function above to be called during shutdown
echo "Writing something to stdout here"

bash "&" without printing "[1]+ Done "

I call a script in my .bashrc to print the number of new messages I have when I open a terminal. I want the call to be non-blocking, as it accesses the network and sometimes takes a few seconds, which means I can't use the terminal until it completes.
However, if I put:
mailcheck &
in my .bashrc, it works fine, but then it prints an empty line, and when I press Enter it prints
[1]+ Done ~/bin/mailcheck
This is very messy; is there a way around this?
That message isn't coming from mailcheck; it's bash's job control telling you about your backgrounded job. The way to avoid it is to tell bash you don't want the job managed by job control:
mailcheck &
disown $!
This seems to work:
(mailcheck &)
You can call your script like this:
(exec mailcheck & )
Try redirecting stderr to /dev/null:
mailcheck 2>/dev/null &
Thinking about it for a few minutes, another way might be to use write.
Pipe the output of the background task to yourself; that way it can complete at any time, and you can discard any additional output from stdout as well as stderr:
mailcheck | write "$(whoami)" > /dev/null 2>&1 &
This was the last page I checked before I fixed an issue I was having, so I figured I would leave my finished command, which had the same issue as the OP:
nohup bash -c '{anycommand};echo "CommandFinished"' 1> "output$(date +"%Y%m%d%H%M").out" 2> /dev/null & disown
This runs {anycommand} with nohup, sends its stdout to a uniquely named file, its stderr to /dev/null, and the rest (the job number and PID) to the console for scraping. The stdout file is monitored by another process looking for CommandFinished, or whatever unique string you choose (see the sketch below).
Bash would later print this:
[1]+ Done nohup bash -c ....
Adding disown to the end stopped bash's job control from printing that to the console.
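A minimal sketch of such a monitoring loop, assuming the output file pattern and the CommandFinished marker used above (the five-second poll interval is arbitrary):
#!/bin/bash
outfile=$(ls -t output*.out | head -n 1)    # most recent output file
until grep -q "CommandFinished" "$outfile" 2>/dev/null; do
    sleep 5                                 # poll every few seconds
done
echo "command finished, output is in $outfile"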

invoking a bash script by itself

I need a bash script to invoke itself with a different set of arguments that make it run as a background process, so I am using something like:
if [[ $a == $b ]]
then
$0 -v &> /dev/null
fi
The issue is that although I am invoking the same script as a background process using & as a suffix and redirecting all output to /dev/null, the terminal from which I invoke the script is not released. I assume that is because the script that was initially invoked still has a process running in the foreground. So the question is: how can a bash script call itself such that the process responsible for running the script the first time is killed, the console is released, and the second call to itself runs as a background process?
You're not running it as a background process using &. &> is a completely separate token that redirects stdout and stderr at the same time. If you wanted to put that command in the background it would be $0 -v &>/dev/null &.
Try something like this:
nohup $0 -v &
The nohup command does the job of detaching the background job and ignoring signals, see the man page.
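Putting the pieces together, the self-invocation might look like this sketch (keeping the $a/$b condition from the question):
if [[ $a == $b ]]; then
    # Re-invoke this script in the background, silence its output,
    # detach it with nohup, and let the current foreground instance exit
    # so that the terminal is released.
    nohup "$0" -v &>/dev/null &
    disown
    exit 0
fi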
