Filtering shell script output within itself; the script does not terminate - linux

I want a Bash script to generate some output messages. The script is supposed to capture those messages, do some filtering and transformation, and then print them to the screen.
The filtered results in the output are correct, but the script does not terminate on its own; I must press Return to finish it. How do I fix this?
Demo script:
#!/bin/bash
exec &> >(
    {
        while read line; do
            [ "$line" = "exit" ] && break
            echo "`date +%H:%M:%S.%N` $line"
        done
        echo "while finish"
    } )
for ((i=3;i--;)); do
    echo "text $i"
done
echo "exit"

The script does terminate, but it finishes before the background process that writes the output does, so the prompt is displayed first and the output after it, leaving your terminal on a blank line that looks as if the script were still running. You could type any command instead of hitting Return, and that command would execute.
To avoid this, you need to run the while loop in an explicit background job that you can wait on before exiting your script.
mkfifo logpipe
trap 'wait $logger_pid; rm logpipe' EXIT
while read line; do
    [ "$line" = "exit" ] && break
    echo "$(date +%H:%M:%S.%N) $line"
done < logpipe &
logger_pid=$!
exec &> logpipe
# ==========
for ((i=3;i--;)); do
    echo "text $i"
done
echo "exit"
The while loop runs in the background, reading its input from the named pipe logpipe. Once that is running, you can redirect all your output to the pipe and start your "main" script. The exit trap ensures that your script doesn't actually exit until the while loop completes; it also cleans up the named pipe for you.
You might not have noticed yet, but there is no guarantee that the while loop will receive the merged standard output and standard error in the exact order in which things are written to them. For instance,
echo out1
echo err1 >&2
echo out2
echo err2 >&2
may end up being read as
out1
err1
err2
out2
Each stream itself will remain in order, but the two could be arbitrarily merged.
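If keeping the two streams apart matters, one workaround, sketched here and not part of the original answer (the fifo names outpipe and errpipe are made up), is to give each stream its own named pipe and reader, so each keeps its own ordering:
#!/bin/bash
mkfifo outpipe errpipe
trap 'rm -f outpipe errpipe' EXIT

# One reader per stream; each tags and prints lines in the order that stream produced them.
while read -r line; do echo "OUT $line"; done < outpipe & out_pid=$!
while read -r line; do echo "ERR $line"; done < errpipe & err_pid=$!

exec > outpipe 2> errpipe

echo "to stdout"
echo "to stderr" >&2

# Close our write ends so the readers see end-of-file, then wait for them to drain.
exec >&- 2>&-
wait "$out_pid" "$err_pid"
The two streams are still printed by independent readers, so they are not synchronized with each other, but neither is reordered internally.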

Related

Why can the command 'exec' remove the blocking state of a fifo file?

I am studying how to use multiple threads to process tasks, and I noticed that a fifo file can help with that. Here is the effect:
#!/bin/bash
my_cmd(){
    echo "process $1"
    sleep 3
}
ff="d:/myfifo/$$.fifo"
mkfifo $ff
exec 7<>$ff
for i in {1..10};do echo;done >&7
for i in {1..1000};do {
    read -u 7
    my_cmd $i
    echo >&7
}& done
rm $ff
wait
echo "end"
This shell script runs normally (processing 1000 commands, 10 at a time). Then I modified the script slightly:
#!/bin/bash
my_cmd(){
    echo "process $1"
    sleep 3
}
ff="d:/myfifo/$$.fifo"
mkfifo $ff
exec 7<>$ff
for i in {1..10};do echo;done >$ff # modified
for i in {1..1000};do {
    read <$ff # modified
    my_cmd $i
    echo >$ff # modified
}& done
wait
rm $ff
echo "end"
As expected, the second script also runs normally. But I ran into an error when I modified it again:
#!/bin/bash
my_cmd(){
    echo "process $1"
    sleep 3
}
ff="d:/myfifo/$$.fifo"
mkfifo $ff
# exec 7<>$ff modified
for i in {1..10};do echo;done >$ff
for i in {1..1000};do {
    read <$ff
    my_cmd $i
    echo >$ff
}& done
wait
rm $ff
echo "end"
The script waits for input on the fifo file, because the fifo entered a blocking state. It seems that the command 'exec 7<>$ff' lifted the blocking state of the fifo. Is this the case?
On Linux, at least (not sure about other OSes, and POSIX leaves the behaviour undefined), opening a fifo for both reading and writing succeeds immediately, without blocking to wait for the other end of the pipe to be opened.
So when you commented out the exec 7<>$ff line, the next line, for i in {1..10};do echo;done >$ff, opens the fifo for writing and blocks, waiting for something else to open it for reading before going on. With the original version using exec, the fifo was already open for reading, so there was no need to block.
The Linux fifo(7) documentation does note
A process that uses both ends of the connection in order to communicate with itself should be very careful to avoid deadlocks.
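A minimal sketch of that behaviour (assuming Linux semantics, as described above; the path and the fd number 7 are arbitrary choices for illustration):
#!/bin/bash
fifo=/tmp/demo.$$.fifo
mkfifo "$fifo"

exec 7<>"$fifo"    # read-write open: returns at once, no peer needed
echo "token" >&7   # does not block, because a reader (fd 7 itself) already exists
read -u 7 line
echo "read back: $line"

exec 7>&-          # close fd 7
rm "$fifo"
A write-only open (> "$fifo") in the same position would block until some other process opened the fifo for reading, which is exactly the difference between the two scripts above.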

Why can't I change the input data if the variable declaration is inside a for loop in bash?

So I'm trying to make an infinite loop that creates directories, but the file variable takes input only once.
code:
for (( ; ; ))
do
    file=${1?Error: no input}
    mkdir "$file"
    sleep 1
done
There's nothing in that loop that asks for input. $1 is provided once by the user as they run the script (before the loop even starts). The standard way to request input in a shell script is with the read command. Something like this:
while read -p "Enter a directory to create: " file; do
    mkdir "$file"
done
This loop will terminate when it receives an end-of-file, which means the user must press Control-D to exit it. If you want to exit if the user just presses return without entering anything, you could do this:
while read -p "Enter a directory to create: " file; do
    if [ -z "$file" ]; then
        echo "Error: no input" >&2
        break # This exits the while loop
    fi
    mkdir "$file"
done
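If you also want the loop to report mkdir failures (for example, when the directory already exists) instead of silently continuing, a small sketched variation keeps the same shape:
while read -r -p "Enter a directory to create: " file; do
    if [ -z "$file" ]; then
        echo "Error: no input" >&2
        break
    fi
    # -- stops option parsing, so names starting with a dash still work
    if ! mkdir -- "$file"; then
        echo "Error: could not create '$file'" >&2
    fi
done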

Exit from STDIN in a bash script when the user wants to close it

I'm automating file creation from a bash script. I generated a file rc_notes.txt, which has the commit messages between two tags, and I want to rewrite it into a new file, rc_device.txt.
I want the user to write the customer release notes and then exit from the Bash STDIN prompt in the terminal.
The problem in my script is that I'm not able to trap the close of the file.
I'm wondering how to do this. I don't want to trap the close signal; I want to enter a magic string, for example "Done" or some other string, that triggers the closure of STDIN and exits the script gracefully.
My script:
#!/bin/bash
set -e
echo "Creating the release candidate text"
rc_file=rc_updater_notes.txt
echo "=========Reading the released commit message file=========="
cat $rc_file
echo "=========End of the commit message file=========="
echo "Now write the release notes"
#exec < /dev/tty
while read line
do
    echo "$line"
done < "${1:-/dev/stdin}" > rc_file.txt
It does create the file, but I need to exit manually by pressing Ctrl+D or Ctrl+Z. I don't want to do that. Any suggestions?
To break the loop when "Done" is entered
while read line
do
    if [[ $line = Done ]]; then
        break
    fi
    echo "$line"
done < "${1:-/dev/stdin}" > rc_file.txt
or
while read line && [[ $line != Done ]]
do
    echo "$line"
done < "${1:-/dev/stdin}" > rc_file.txt

Redirecting stdout only if command failed?

I'm writing a bash script that is supposed to be "transparent" to the user. It reads commands from the user and intercepts them, allowing only some of them to be executed by bash, depending on some criteria. It (basically) works like this:
while true; do
    read COMMAND
    can_be_done $COMMAND
    if [ $? == 0 ]; then
        eval $COMMAND
        if [ $? != 0 ]; then
            echo "Error: command not found"
        fi
    fi
done
The problem is, when the command fails, you also get stuff printed to the console. BUT, if I keep the result in a variable and only print it when it doesn't fail, like so:
RESULT=$(eval $COMMAND)
Then there's another problem: The special formatting gets lost (for example, "ls --color" doesn't show colors anymore)
My question is: Is there a way to have the command print to STDOUT if successful, but to /dev/null if it fails?
Do you really need the second part, replacing the output of the command with an error message? Linux commands print their own error messages, which aren't necessarily "command not found". You'd be hiding the true error (permission denied, file not found, out of memory, segfault, etc.) with an oftentimes incorrect error message (command not found).
If you remove that check, you could simplify the loop to something like this:
while true; do
    read -e COMMAND
    if can_be_done "$COMMAND"; then
        eval "$COMMAND"
    fi
done
read -e uses readline to obtain the command, making the prompt a lot more shell-like (↑ and ↓ for history, for instance).
command; if [ $? == 0 ]; then is more idiomatically written as if <command>; then.
Quoting makes sure special characters and whitespace are handled properly.
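For illustration, a sketch of the same loop with a visible prompt that exits cleanly on Ctrl-D (can_be_done is assumed to be defined elsewhere, as in the question):
#!/bin/bash
while read -e -p "> " COMMAND; do
    if can_be_done "$COMMAND"; then
        eval "$COMMAND"
    fi
done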
I would argue strongly that you should not do this. If you do not want to see output, redirect it to /dev/null. If you do want to see errors, do not redirect stderr. If you are using a program that prints its error messages on stdout instead of stderr, FIX THE PROGRAM! Error messages belong on stderr. Note that this means your program is broken, as it ought to read:
echo "Error: command not found" >&2
I'm not sure if it is rule number 1, but it certainly belongs in the top 10, and it may be the most often violated rule: Error messages belong on stderr. A program which prints error messages on stdout is broken.
if false > /dev/null;then echo 1; else echo 2; fi 2> /dev/null
Will output 2
if true > /dev/null;then echo 1; else echo 2; fi 2> /dev/null
Will output 1
Remove the > /dev/null to also print the command's output to stdout. For example,
if echo 123;then echo 1; else echo 2; fi 2> /dev/null
will output:
123
1
Assuming that the command is not very expensive to run, you can do this:
test `ls /mooo 2>/dev/null` || echo moo not found
test succeeds here only if the command substitution produces non-empty output; with errors discarded to /dev/null, ls prints something only when the path actually exists. You could put this in an if statement too, like so:
if [ `ls /moo 2>/dev/null` ];then
    echo moo is a folder
fi
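If you really do want the literal behaviour asked for, showing stdout only when the command succeeds, the closest sketch is to buffer the output and print it afterwards, accepting the formatting loss the question already mentions:
# Run the command, keep its stdout, and print it only if it exited with 0.
if OUTPUT=$(eval "$COMMAND" 2>/dev/null); then
    printf '%s\n' "$OUTPUT"
else
    echo "Error: command failed" >&2
fi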

Using named pipes with bash - Problem with data loss

I did some searching online and found simple 'tutorials' on using named pipes. However, when I do anything with background jobs, I seem to lose a lot of data.
[[Edit: found a much simpler solution, see reply to post. So the question I put forward is now academic - in case one might want a job server]]
Using Ubuntu 10.04 with Linux 2.6.32-25-generic #45-Ubuntu SMP Sat Oct 16 19:52:42 UTC 2010 x86_64 GNU/Linux
GNU bash, version 4.1.5(1)-release (x86_64-pc-linux-gnu).
My bash function is:
function jqs
{
    pipe=/tmp/__job_control_manager__
    trap "rm -f $pipe; exit" EXIT SIGKILL

    if [[ ! -p "$pipe" ]]; then
        mkfifo "$pipe"
    fi

    while true
    do
        if read txt <"$pipe"
        then
            echo "$(date +'%Y'): new text is [[$txt]]"
            if [[ "$txt" == 'quit' ]]
            then
                break
            fi
        fi
    done
}
I run this in the background:
> jqs&
[1] 5336
And now I feed it:
for i in 1 2 3 4 5 6 7 8
do
(echo aaa$i > /tmp/__job_control_manager__ && echo success$i &)
done
The output is inconsistent.
I frequently don't get all the success echoes.
I get at most as many "new text" echoes as success echoes, sometimes fewer.
If I remove the '&' from the 'feed', it seems to work, but then I am blocked until the output is read. Hence I want the sub-processes to get blocked, but not the main process.
The aim being to write a simple job control script so I can run say 10 jobs in parallel at most and queue the rest for later processing, but reliably know that they do run.
Full job manager below:
function jq_manage
{
    export __gn__="$1"

    pipe=/tmp/__job_control_manager_"$__gn__"__

    trap "rm -f $pipe" EXIT
    trap "break" SIGKILL

    if [[ ! -p "$pipe" ]]; then
        mkfifo "$pipe"
    fi

    while true
    do
        date
        jobs
        if (($(jobs | egrep "Running.*echo '%#_Group_#%_$__gn__'" | wc -l) < $__jN__))
        then
            echo "Waiting for new job"
            if read new_job <"$pipe"
            then
                echo "new job is [[$new_job]]"
                if [[ "$new_job" == 'quit' ]]
                then
                    break
                fi
                echo "In group $__gn__, starting job $new_job"
                eval "(echo '%#_Group_#%_$__gn__' > /dev/null; $new_job) &"
            fi
        else
            sleep 3
        fi
    done
}

function jq
{
    # __gn__ = first parameter to this function, the job group name (the pool within which to allocate __jN__ jobs)
    # __jN__ = second parameter to this function, the maximum number of jobs to run concurrently
    export __gn__="$1"
    shift
    export __jN__="$1"
    shift

    export __jq__=$(jobs | egrep "Running.*echo '%#_GroupQueue_#%_$__gn__'" | wc -l)
    if (($__jq__ < 1))
    then
        eval "(echo '%#_GroupQueue_#%_$__gn__' > /dev/null; jq_manage $__gn__) &"
    fi

    pipe=/tmp/__job_control_manager_"$__gn__"__

    echo $@ >$pipe
}
Calling
jq <name> <max processes> <command>
jq abc 2 sleep 20
will start one process.
That part works fine. Start a second one, fine.
Starting them one by one by hand seems to work fine.
But starting 10 in a loop seems to lose data, as in the simpler example above.
Any hints as to what I can do to solve this apparent loss of IPC data would be greatly appreciated.
Regards,
Alain.
Your problem is the if statement below:
while true
do
if read txt <"$pipe"
....
done
What is happening is that your job queue server is opening and closing the pipe each time around the loop. This means that some of the clients are getting a "broken pipe" error when they try to write to the pipe - that is, the reader of the pipe goes away after the writer opens it.
To fix this, change the loop in your server to open the pipe once for the entire loop:
while true
do
if read txt
....
done < "$pipe"
Done this way, the pipe is opened once and kept open.
You will need to be careful of what you run inside the loop, as all processing inside the loop will have stdin attached to the named pipe. You will want to make sure you redirect stdin of all your processes inside the loop from somewhere else, otherwise they may consume the data from the pipe.
Edit: With the problem now being that you are getting EOF on your reads when the last client closes the pipe, you can use jilles method of duping the file descriptors, or you can just make sure you are a client too and keep the write side of the pipe open:
while true
do
if read txt
....
done < "$pipe" 3> "$pipe"
This will hold the write side of the pipe open on fd 3. The same caveat applies to this file descriptor as to stdin: you will need to close it so child processes don't inherit it. It probably matters less than with stdin, but it would be cleaner.
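For example, a sketch of that cleanup, with process_job standing in for whatever work each line triggers:
while read txt; do
    # </dev/null keeps the child from consuming data meant for the loop,
    # and 3>&- keeps it from holding the pipe's write end open.
    process_job "$txt" < /dev/null 3>&- &
done < "$pipe" 3> "$pipe"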
As said in other answers you need to keep the fifo open at all times to avoid losing data.
However, once all writers have gone away after the fifo has been opened (so there was at least one writer), reads return immediately (and poll() returns POLLHUP). The only way to clear this state is to reopen the fifo.
POSIX does not provide a solution to this but at least Linux and FreeBSD do: if reads start failing, open the fifo again while keeping the original descriptor open. This works because in Linux and FreeBSD the "hangup" state is local to a particular open file description, while in POSIX it is global to the fifo.
This can be done in a shell script like this:
while :; do
    exec 3<tmp/testfifo
    exec 4<&-
    while read x; do
        echo "input: $x"
    done <&3
    exec 4<&3
    exec 3<&-
done
Just for those that might be interested, [[re-edited]] following comments by camh and jilles, here are two new versions of the test server script.
Both versions now work exactly as hoped.
camh's version for pipe management:
function jqs # Job queue manager
{
    pipe=/tmp/__job_control_manager__
    trap "rm -f $pipe; exit" EXIT TERM

    if [[ ! -p "$pipe" ]]; then
        mkfifo "$pipe"
    fi

    while true
    do
        if read -u 3 txt
        then
            echo "$(date +'%Y'): new text is [[$txt]]"
            if [[ "$txt" == 'quit' ]]
            then
                break
            else
                sleep 1
                # process $txt - remember that if this is to be a spawned job, we should close fd 3 and 4 beforehand
            fi
        fi
    done 3< "$pipe" 4> "$pipe" # 4 is just to keep the pipe open so any real client does not end up causing read to return EOF
}
jilles' version for pipe management:
function jqs # Job queue manager
{
    pipe=/tmp/__job_control_manager__
    trap "rm -f $pipe; exit" EXIT TERM

    if [[ ! -p "$pipe" ]]; then
        mkfifo "$pipe"
    fi

    exec 3< "$pipe"
    exec 4<&-

    while true
    do
        if read -u 3 txt
        then
            echo "$(date +'%Y'): new text is [[$txt]]"
            if [[ "$txt" == 'quit' ]]
            then
                break
            else
                sleep 1
                # process $txt - remember that if this is to be a spawned job, we should close fd 3 and 4 beforehand
            fi
        else
            # Close the pipe and reconnect it so that the next read does not end up returning EOF
            exec 4<&3
            exec 3<&-
            exec 3< "$pipe"
            exec 4<&-
        fi
    done
}
Thanks to all for your help.
As camh and Dennis Williamson say, don't break the pipe.
Now I have smaller examples, direct on the command line:
Server:
(
    for i in {0,1,2,3,4}{0,1,2,3,4,5,6,7,8,9};
    do
        if read s;
        then echo ">>$i--$s//";
        else echo "<<$i";
        fi;
    done < tst-fifo
)&
Client:
(
    for i in {%a,#b}{1,2}{0,1};
    do
        echo "Test-$i" > tst-fifo;
    done
)&
You can replace the key line with:
(echo "Test-$i" > tst-fifo&);
All client data sent to the pipe gets read, though with option two of the client one may need to start the server a couple of times before all data is read.
But although the read waits for data in the pipe to start with, once data has been pushed, it reads the empty string forever.
Any way to stop this?
Thanks for any insights again.
On the one hand the problem is worse than I thought:
Now there seems to be a case in my more complex example (jq_manage) where the same data is being read over and over again from the pipe (even though no new data is being written to it).
On the other hand, I found a simple solution (edited following Dennis' comment):
function jqn # compute the number of jobs running in that group
{
    __jqty__=$(jobs | egrep "Running.*echo '%#_Group_#%_$__groupn__'" | wc -l)
}

function jq
{
    __groupn__="$1"; shift # job group name (the pool within which to allocate $__jmax__ jobs)
    __jmax__="$1"; shift   # maximum number of jobs to run concurrently

    jqn
    while (($__jqty__ >= $__jmax__))
    do
        sleep 1
        jqn
    done

    eval "(echo '%#_Group_#%_$__groupn__' > /dev/null; $@) &"
}
Works like a charm.
No socket or pipe involved.
Simple.
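Usage is the same as shown earlier, for example:
jq abc 2 sleep 20
jq abc 2 sleep 20
jq abc 2 sleep 20   # this third call waits in the while loop until one of the first two finishes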
run say 10 jobs in parallel at most and queue the rest for later processing, but reliably know that they do run
You can do this with GNU Parallel. You will not need this scripting.
http://www.gnu.org/software/parallel/man.html#options
You can set max-procs ("Number of jobslots. Run up to N jobs in parallel."). There is an option to set the number of CPU cores you want to use. You can save the list of executed jobs to a log file, but that is a beta feature.
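For example, a sketch of an equivalent invocation (the options are GNU Parallel's own; the joblog path is made up for illustration):
# Run at most 10 jobs at a time, queueing the rest, and record what ran.
parallel --max-procs 10 --joblog /tmp/jobs.log 'sleep {}; echo "job {} done"' ::: {1..100}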
