Shell redirection - dialog - linux

I cannot interpret the code below:
exec 3>&1
INTERFACE=`echo $DIALOG_TAGS | xargs dialog --title 'Test' --menu ' Text comes here :' 0 0 0 2>&1 1>&3`
exec 3>&-
: > $TMPDIR/._tmp.conf
My question is: FD #3 is created with exec 3>&1 and pointed to wherever FD #1 goes. In the second line, we have 2>&1 1>&3, which confuses me.
Another part is the last line, where : and a > redirection take place.
Any explanations will be highly appreciated!
Thanks

exec 3>&1
FD #3 is created, writing to FD #1 (stdout).
INTERFACE=`echo $DIALOG_TAGS | xargs dialog --title 'Test' --menu ' Text comes here :' 0 0 0 2>&1 1>&3`
The command in backticks is executed. (In this case, a dialog menu displaying the tag / item pairs defined in DIALOG_TAGS.)
The FD #2 (stderr) is redirected to FD #1 (stdout, which gets assigned to INTERFACE due to the backticks). The FD #1 (stdout) is redirected to FD #3.
The idea is as follows:
dialog displays a menu from which you can select an item. The menu, obviously, needs stdout to write to. The result (the selection) is, by default, written to stderr.
The author wants to capture the result, so he redirects stderr to FD #1, where the backticks pick it up and assign it to INTERFACE.
However, he wants only the result, not the whole menu. So instead of capturing both, he redirects the menu (stdout) to the previously created copy of stdout, FD #3, which still points at the terminal regardless of the backticks. That leaves only the result going to FD #1, and thus into INTERFACE, while the menu is displayed on the terminal via FD #3.
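Here is a minimal sketch of the same pattern, with a brace group standing in for dialog (it prints its "menu" on stdout and its "result" on stderr; the names are illustrative only):
exec 3>&1                                  # fd 3 -> current stdout (the terminal)
result=$( { echo "menu (terminal)"; echo "selection" >&2; } 2>&1 1>&3 )
exec 3>&-                                  # close fd 3 again
echo "captured: $result"                   # prints: captured: selection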
Once that is done...
exec 3>&-
FD #3 is closed, as it is no longer needed.
: > $TMPDIR/._tmp.conf
This one is simpler than it looks. The colon is a builtin command that does nothing, but its redirections are still performed, and > truncates: the line creates $TMPDIR/._tmp.conf if it does not exist and empties it if it does, guaranteeing an empty file. Why the script wants that here (perhaps a later step appends to the file) cannot be told from the snippet.
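You can watch the truncation happen (a quick sketch, using /tmp/demo.conf as a stand-in path):
echo "old contents" > /tmp/demo.conf
: > /tmp/demo.conf        # ':' runs no command, but the '>' still truncates
wc -c < /tmp/demo.conf    # prints 0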

Related

Is the redirect in bash hierarchical?

There's an example in the Bash manual explaining how the order of redirections affects the final result:
Note that the order of redirections is significant. For example, the command
ls > dirlist 2>&1
directs both standard output (file descriptor 1) and standard error (file descriptor 2) to the file dirlist, while the command
ls 2>&1 > dirlist
directs only the standard output to file dirlist, because the standard error was made a copy of the standard output before the standard output was redirected to dirlist.
In the second case, if my understanding is correct, 2>&1 means everything on stderr will be redirected to stdout, and > dirlist means everything on stdout will be redirected to the file dirlist. The final result is that only stderr is shown on the console.
My question is: does this behavior mean that Bash redirection is not hierarchical? If it were hierarchical, stderr would already have been merged into stdout and would end up in the file dirlist together with stdout.
You need to read ">&n" (i.e. "1>&n") as "fd 1 is redirected to wherever fd n is currently going", not "fd 1 will go wherever fd n goes even if fd n changes afterwards". That is why I dislike the term "copy"; it is ambiguous. n>&m makes fd n point to wherever fd m currently points, and does not link fd n to fd m in any way: they remain different fds that can change independently.
Example 1: in an interactive shell, where stdout and stderr both go to your terminal:
Every command you type in this shell will, by default, send its STDOUT (fd 1) and its STDERR (fd 2) to the terminal as well (unless you tell it to output somewhere else).
So when you redirect a command to output somewhere else:
command 2>&1 >/dev/null
# here, `command`'s STDERR (fd 2) is pointed at the terminal (again), since at this
# point its fd 1 still goes to the terminal; only then is fd 1 (stdout)
# redirected to /dev/null, discarding it
command >/dev/null 2>&1
# here, fd 1 (stdout) is redirected to /dev/null first,
# and then fd 2 (stderr) goes to where fd 1 is *currently* going, i.e. /dev/null too
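You can reproduce both orderings with a brace group that writes one line to each stream (a small sketch):
{ echo out; echo err >&2; } 2>&1 >/dev/null   # prints "err": stderr was pointed at the terminal first
{ echo out; echo err >&2; } >/dev/null 2>&1   # prints nothing: both streams end up in /dev/null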
Most redirections of the form '>&something' or '>something' attached to a command are temporary: they apply to that command only. A special case is 'exec >something' or 'exec >&something', which changes the redirection of the current shell or program, and thus stays in effect until another exec changes it again or until the current shell or program finishes. In other words, 'exec someredirection' affects not a subcommand but the current program/shell.
Example 2: in a program launched with: program >/some/log_stdout 2>/some/log_stderr
# beginning of the program
... # anything here will have its stdout go to /some/log_stdout and stderr go to /some/log_stderr
exec 3>&1 # create a new fd, 3, that goes to where stdout currently goes (/some/log_stdout)
exec 4>&2 # create a new fd, 4, that goes to where stderr currently goes (/some/log_stderr)
exec 1>/dev/null # redirect fd 1 to /dev/null. fd 3 still goes to /some/log_stdout
exec 2>/dev/null # redirect fd 2 to /dev/null. fd 4 still goes to /some/log_stderr
something # here, no output: this command "something" outputs to fd1 and fd2, both going to /dev/null
other_thing # same for that one
exec 5>&1 # this new fd, 5, goes to where fd1 currently goes (/dev/null)
somecommand ... >&3
# here, the STDOUT (only) of somecommand goes to where fd 3 currently points,
# i.e. to /some/log_stdout (and of course just for the duration of somecommand)
another_command
# no output, as it outputs by default its stdout to fd 1 and stderr to fd 2, both going to /dev/null
# etc.
exec 1>&3 # fd 1 now goes to wherever fd3 was going,
# and thus we "restore" the original fd 1
# (as no 'exec 3>...' was done to change fd 3)
exec 2>&4 # same thing for fd 2, going to where fd 4 is going
# (which was where fd 2 was originally going)
It may help to "imagine" the following table attached to your command (or your shell, or your script):
Your interactive shell will usually have this by default:
fd 1(=stdout) _goes_to_ your terminal
fd 2(=stderr) _goes_to_ your terminal
Each command it launches will also have a copy of this table (or rather, the new process inherits the launching command's or shell's current fds), unless told otherwise.
When you launch somecommand >/dev/null 2>&1: before somecommand is launched, fd 1 is first pointed at /dev/null, then fd 2 is pointed at where fd 1 now points (/dev/null), and the process is launched with this table:
fd 1(=stdout) _goes_to_ /dev/null
fd 2(=stderr) _goes_to_ /dev/null
The shell that launched the command still has its own table, unchanged (unless 'somecommand' was in fact 'exec n>somewhere', which changes the current shell's fd n to point to 'somewhere' "permanently", i.e. until the current shell finishes).
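A quick sketch of that inheritance: a child's redirections never reach back into the parent's table:
echo "parent stdout is the terminal"
( exec >/dev/null; echo "invisible" )   # the subshell redirects only its own fd 1
echo "parent stdout is still the terminal"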
ls 2>&1 > dirlist
Here, the destination of stdout is copied to stderr first, but at that moment both still point to your terminal. After that, stdout is redirected to dirlist, so ls's error output will still be printed to the terminal.
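A quick way to see the difference (a sketch; the exact wording of the error message depends on your ls):
$ ls /nonexistent > dirlist 2>&1     # both streams go to dirlist: the terminal stays quiet
$ cat dirlist
ls: cannot access '/nonexistent': No such file or directory
$ ls /nonexistent 2>&1 > dirlist     # the error still hits the terminal
ls: cannot access '/nonexistent': No such file or directory
$ cat dirlist                        # dirlist is empty this time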

by default, does stderr start as a duplicate file descriptor of stdout?

Does stderr start out as a duplicate FD of stdout?
i.e. considering dup(2), is stderr initialized kind of like so?
int stderr = dup(stdout); // stdout = 1
In the BashGuide, there's a code example
$ grep proud file 'not a file' > proud.log 2> proud.log
The author states
We've created two FDs that both point to the same file, independently of each other. The results of this are not well-defined. Depending on how the operating system handles FDs, some information written via one FD may clobber information written through the other FD.
and further says
We need to prevent having two independent FDs working on the same destination or source. We can do this by duplicating FDs
So basically, 2 independent FDs on the same file = broken
Well, I know that stdout & stderr both point to my terminal by default. Since they can both function properly (i.e. I don't see mish-mashed output+error messages), does that mean that they're not independent FDs, and thus that stderr is a duplicate FD of stdout (or vice versa)?
No, stderr is not a duplicate of stdout.
They work in parallel, independently and asynchronously.
Which means that under race conditions you might even see the "mish-mash" you mention.
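The BashGuide's clobbering warning is easy to reproduce (a sketch; /tmp/same.log is just a scratch path):
exec 5>/tmp/same.log 6>/tmp/same.log   # two separate open()s with independent offsets
echo AAAA >&5                          # writes "AAAA" at offset 0 via fd 5
echo BB >&6                            # fd 6 also starts at offset 0 and overwrites
exec 5>&- 6>&-
cat /tmp/same.log                      # "BB" plus a leftover "A" from the first write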
One practical difference is also that stderr output bypasses the pipe when you pipe your output to a subsequent command; only stdout goes through:
Practical example:
$ cat tst.sh
#!/bin/bash
echo "written to stdout"
echo "written to stderr" 1>&2
exit 0
~$ ./tst.sh
written to stdout
written to stderr
~$ ./tst.sh | xargs -n1 -I{} echo "this came through the pipe:{}"
written to stderr
this came through the pipe:written to stdout
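If you do want the error line to travel through the pipe as well, merge it into stdout first (a sketch; |& is a Bash shorthand for 2>&1 |):
~$ ./tst.sh 2>&1 | xargs -n1 -I{} echo "this came through the pipe:{}"
this came through the pipe:written to stdout
this came through the pipe:written to stderr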

How to check if a perforce file is opened using bash script

So I run "p4 opened the_file" and it prints the status, but I cannot capture the output:
$ a=`p4 opened file1`
file1 - file(s) not opened on this client.
$ echo $a
Variable a is empty.
What I want is to get hold of the string "file(s) not opened on this client" and search it for "not opened".
Any way to do that?
Change the first line to:
a=`p4 opened file1 2>&1`
That redirects stderr into stdout, so your variable will capture both normal output and error output.
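Putting it together with the check you describe (a sketch; the "not opened" text is taken from your transcript):
a=$(p4 opened file1 2>&1)
case $a in
  *"not opened"*) echo "file1 is not opened" ;;
  *)              echo "file1 is opened" ;;
esac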
If you do:
p4 -s opened file1
all the server output gets sent to stdout and prefixed with a tag saying whether it's "error" or "info" (be careful, it's not always obvious where the distinction is). For your case that should get you something like:
error: file1 - file(s) not opened on this client.
exit: 0
Another fun global option (i.e. it goes before the command name, same as the "-s") is "-e", which gives you the raw error dict; you can do interesting things with this like look for a specific error code rather than grepping human-readable message strings.
if [[ -z "$(p4 -ztag opened $myfile)" ]]; then echo "Not opened"; fi
p4 -ztag opened won't return anything for a file that isn't open, so you can just test for empty output.

Difference between `exec n<&0 < file` and `exec n<file` commands and some general questions regarding exec command

As I am a newbie in shell scripting, the exec command always confuses me, and exploring this topic together with while loops triggered the following four questions:
What is the difference between syntax 1 and syntax 2 below?
syntax 1:
while read LINE
do
: # manipulate file here
done < file
syntax 2:
exec n<&0 < file
while read LINE
do
: # manipulate file here
done
exec 0<&n n<&-
Kindly explain the operation of exec n<&0 < file.
Is the command exec n<&0 < file equivalent to exec n<file? (If not, what is the difference between the two?)
I read somewhere that in the Bourne shell and older versions of ksh, a problem with the while loop is that it is executed in a subshell. This means that any changes to the script environment, such as exporting variables or changing the current working directory, might not be present after the while loop completes.
As an example, consider the following script:
#!/bin/sh
if [ -f “$1” ] ; then
i=0
while read LINE
do
i=`expr $i + 1`
done < “$1”
echo $i
fi
This script tries to count the number of lines in the file specified to it as an argument.
Executing this script on the file
$ cat dirs.txt
/tmp
/usr/local
/opt/bin
/var
can produce the following incorrect result:
0
Although you are incrementing the value of $i with the command
i=`expr $i + 1`
when the while loop completes, the value of $i is not preserved.
In this case, you need to change a variable’s value inside the while loop and then use that value outside the loop.
One way to solve this problem is to redirect STDIN prior to entering the loop and then restore STDIN after the loop completes.
The basic syntax is
exec n<&0 < file
while read LINE
do
: # manipulate file here
done
exec 0<&n n<&-
My question here is:
In the Bourne shell and older versions of ksh, given that the while loop is executed in a subshell, how does the exec command here help retain the variable's value even after the while loop completes? In other words, how does exec accomplish "change a variable's value inside the while loop and then use that value outside the loop"?
The difference should be nothing in modern shells (both should be POSIX compatible), with some caveats:
There are likely thousands of unique shell binaries in use, some of which are missing common features or simply buggy.
Only the first version will behave as expected in an interactive shell, since the shell will close as soon as standard input gets EOF, which will happen once it finishes reading file.
The while loop reads from FD 0 in both cases, making the exec pointless if the shell supports < redirection to while loops. To read from FD 9 you have to use done <&9 (POSIX) or read -u 9 (in Bash).
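For reference, reading from FD 9 looks like this (a sketch; file is any readable text file):
exec 9< file
while read -r LINE
do
    printf '%s\n' "$LINE"
done <&9            # POSIX form; in Bash you could instead use: read -u 9 -r LINE
exec 9<&-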
exec (in this case, see help exec/man exec) applies the redirections following it to the current shell, and they are applied left-to-right. For example, exec 9<&0 < file points FD 9 to where FD 0 (standard input) is currently pointing, effectively making FD 9 a "copy" of FD 0. After that, file is attached to standard input (FD 0), while FD 9 keeps pointing at the original standard input.
Run a shell within a shell to see the difference between the two (commented to explain):
$ echo foo > file
$ exec "$SHELL"
$ exec 9<&0 < file
$ foo # The contents of the file are executed in the shell
bash: foo: command not found
$ exit # Because of end-of-file, equivalent to pressing Ctrl-d
$ exec "$SHELL"
$ exec 9< file # Nothing happens, simply sends file to FD 9
This is a common misconception about *nix shells: variables set in a subshell (such as the one a while loop runs in when it is part of a pipeline) are not available to the parent shell. This is by design, not a bug. Many other answers here and on USE refer to this.
So many questions... but all of them seem variants of the same one, so I'll go on...
exec without a command is used to do redirection in the current process. That is, it changes the files attached to different file descriptors (FD).
Question #1
I think it should be this way. On my system the {} are mandatory:
exec {n}<&0 < file
This line dups FD 0 (standard input) and stores the new FD number in the variable n. Then it attaches file to the standard input.
while read LINE ; do ... done
This line reads lines into the variable LINE from the standard input, which is now file.
exec 0<&n {n}<&-
And this line dups the FD saved in n back into 0 (the original standard input), which automatically closes file, and then closes n (the dup of the original stdin).
The other syntax:
while read LINE; do ... done < file
does the same, but in a less convoluted way.
Question #2
exec {n}<&0 < file
These are redirections, and they are executed left to right. The first one, {n}<&0, does a dup(0) (see man dup) and stores the resulting new FD in the variable n. Then <file does an open("file", ...) and assigns the result to FD 0.
Question #3
No. exec {n}<file opens the file and assigns the new FD to the variable n, leaving the standard input (FD 0) untouched.
Question #4
I don't know about older versions of ksh, but the usual problem is when doing a pipe.
grep whatever | while read LINE; do ... done
Then the while command is run in a subshell. The same is true if it is to the left of the pipe.
while read LINE ; do ... done | grep whatever
But for simple redirects there is no subshell:
while read LINE ; do ... done < aaa > bbb
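A sketch that makes the difference visible (/etc/hosts is just a convenient input file; the exact behavior of the piped version varies by shell):
i=0
while read -r LINE; do i=$((i + 1)); done < /etc/hosts
echo "$i"   # prints the line count: a plain redirect creates no subshell

i=0
cat /etc/hosts | while read -r LINE; do i=$((i + 1)); done
echo "$i"   # prints 0 in many shells: the piped loop ran in a subshell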
Extra unnumbered question
About your example script, it works for me once I've changed the typographic quotes to normal double quotes ;-):
#!/bin/sh
if [ -f "$1" ] ; then
i=0
while read LINE
do
i=`expr $i + 1`
done < "$1"
echo $i
fi
For example, if the file is test:
$ ./test test
9
And about your last question: the subshell is not created by while but by the pipe |, or maybe, in older versions of ksh, by the redirection <. What the exec trick does is avoid that redirection on the loop itself, so no subshell is created.
Let me answer your questions out-of-order.
Q #2
The command exec n<&0 < file is not valid syntax as written; the n presumably stands for "some arbitrary number". That said, for example
exec 3<&0 < file
executes two redirections, in sequence: it duplicates/copies the standard input file descriptor, which is 0, as file descriptor 3. Next, it "redirects" file descriptor 0 to read from file file.
Later, the command
exec 0<&3 3<&-
first copies the standard input file descriptor back from the saved file descriptor 3, redirecting standard input to its previous source. Then it closes file descriptor 3, which has served its purpose of backing up the initial stdin.
Q #1
Effectively, the two examples do the same: they temporarily redirect stdin within the scope of the while loop.
Q #3
Nope: exec 3<filename opens the file filename on file descriptor 3. exec 3<&0 <filename is what I described in #2.
Q #4
I guess those older shells effectively executed
while ...; do ... ; done < filename
as
cat filename | while ...
thereby executing the while loop in a subshell.
Doing the redirections beforehand with those laborious exec commands avoids redirecting the while block itself, and thereby the implicit subshell.
However, I have never heard of that weird behavior myself, and I guess you won't have to deal with it unless you're working with truly ancient shells.

exec n<&m versus exec n>&m -- based on Sobell's Linux book

In Mark Sobell's A Practical Guide to Linux Commands, Editors, and Shell Programming, Second Edition he writes (p. 432):
The <& token duplicates an input file descriptor; >& duplicates an output file descriptor.
This seems to be inconsistent with another statement on the same page:
Use the following format to open or redirect file descriptor n as a duplicate of file descriptor m:
exec n<&m
and with an example also on the same page:
# File descriptor 3 duplicates standard input
# File descriptor 4 duplicates standard output
exec 3<&0 4<&1
If >& duplicates an output file descriptor then should we not say
exec 4>&1
to duplicate standard output?
The example is right in practice. The book's original explanation is an accurate description of what the POSIX standard says, but the POSIX-like shells I have handy (bash and dash, the only ones I believe are commonly seen on Linux) are not that picky.
The POSIX standard says the same thing as the book about input and output descriptors, and goes on to say this: for n<&word, "if the digits in word do not represent a file descriptor already open for input, a redirection error shall result". So if you want to be careful about POSIX compatibility, you should avoid this usage.
The bash documentation also says the same thing about <& and >&, but without the promise of an error. Which is good, because it doesn't actually give an error. Instead, empirically n<&m and n>&m appear to be interchangeable. The only difference between <& and >& is that if you leave off the fd number on the left, <& defaults to 0 (stdin) and >& to 1 (stdout).
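A quick sketch of those defaults (scratch paths assumed):
exec 3>/tmp/out.txt
echo hello >&3        # same as 1>&3: echo's stdout goes to /tmp/out.txt
exec 3>&-
exec 4</tmp/out.txt
read -r word <&4      # same as 0<&4: read takes its input from fd 4
exec 4<&-
echo "$word"          # prints: hello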
For example, let's start a shell with fd 1 pointing at a file bar, then try out exactly the exec 4<&1 example, try to write to the resulting fd 4, and see if it works:
$ sh -c 'exec 4<&1; echo foo >&4' >bar; cat bar
foo
It does, and this holds using either dash or bash (or bash --posix) for the shell.
Under the hood, this makes sense because <& and >& are almost certainly just calling dup2(), which doesn't care whether the fds are opened for reading or writing or appending or what.
If stdout is a tty, then it can safely be cloned for reading or writing. If stdout is a file, then it may not work. I think the example should be 4>&1. I agree with Greg that you can both read and write the clone descriptor, but requesting a redirection with <& is supposed to be done with source descriptors that are readable, and expecting stdout to be readable doesn't make sense. (Although I admit I don't have a reference for this claim.)
An example may make it clearer. With this script:
#!/bin/bash
exec 3<&0
exec 4<&1
read -p "Reading from fd 3: " <&3
echo From fd 3: $REPLY >&2
REPLY=
read -p "Reading from fd 4: " <&4
echo From fd 4: $REPLY >&2
echo To fd 3 >&3
echo To fd 4 >&4
I get the following output (the stuff after the : on "Reading from" lines is typed at the terminal):
$ ./5878384b.sh
Reading from fd 3: foo
From fd 3: foo
Reading from fd 4: bar
From fd 4: bar
To fd 3
To fd 4
$ ./5878384b.sh < /dev/null
From fd 3:
Reading from fd 4: foo
From fd 4: foo
./5878384b.sh: line 12: echo: write error: Bad file descriptor
To fd 4
$ ./5878384b.sh > /dev/null
Reading from fd 3: foo
From fd 3: foo
./5878384b.sh: line 9: read: read error: 0: Bad file descriptor
From fd 4:
To fd 3
Mind the difference between file descriptors and IO streams such as stderr and stdout.
The redirection operators just redirect IO streams via different file descriptors (IO stream handling mechanisms); they do not copy or duplicate the IO streams themselves (that's what tee(1) is for).
See: File Descriptor 101
Another test to show that n<&m and n>&m are interchangeable would be "to use either style of 'n<&-' or 'n>&-' for closing a file descriptor, even if it doesn't match the read/write mode that the file descriptor was opened with" (http://www.gnu.org/s/hello/manual/autoconf/File-Descriptors.html).
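For instance, this close idiom works in bash and dash even though the open mode and the operator don't match (a small sketch):
exec 3>/tmp/closeme   # fd 3 opened for writing
exec 3<&-             # closed with the "input" style: no error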
