'&&' vs. '&' with the 'test' command in Bash - linux

Consider:
gndlp@ubuntu:~$ test -x examples.desktop && echo $?
gndlp@ubuntu:~$ test -x examples.desktop & echo $?
[1] 2992
0
Why is Bash acting the way it is in this situation?
Is the test command simply not finishing and thus the echo command isn't processed?

The meanings of && and & are intrinsically different.
What is && in Bash? In Bash, as in many other programming languages, && means "AND". In a command-execution context like this one, the command to the right of && runs only if the command to the left of it succeeds (exits with status 0).
What is & in Bash? A single & means that the command to its immediate left should simply be run in the background.
So looking at your example:
gndlp@ubuntu:~$ test -x examples.desktop && echo $?
gndlp@ubuntu:~$ test -x examples.desktop & echo $?
[1] 2992
0
The first command, as it is structured, prints nothing: test returned a non-zero status (examples.desktop is not executable), so the echo after && never ran. The second command prints [1] 2992, in which 2992 is the process ID (PID) of the job now running in the background, and the 0 is the output of echo $?, which is 0 because placing a command in the background always reports a status of 0.
Since the second command just runs test -x examples.desktop in the background, it finishes very quickly, so the background process appears and is gone almost immediately.
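You can see the && behaviour in isolation with a quick test (a minimal sketch; /nonexistent is a hypothetical path):
test -x /nonexistent && echo "status: $?"   # prints nothing: test failed, so echo never runs
test -x /bin/sh && echo "status: $?"        # prints "status: 0": test succeeded, so echo runs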

& executes a command in the background, and will return 0 regardless of its status.
From the man page:
If a command is terminated by the control operator &, the shell executes the command in the background in a subshell. The shell does not wait for the command to finish, and the return status is 0. Commands separated by a ; are executed sequentially; the shell waits for each command to terminate in turn. The return status is the exit status of the last command executed.
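To see that difference directly (a minimal sketch using false, which always exits with a non-zero status):
false ; echo $?    # prints 1: the shell waited for false, and $? is its exit status
false & echo $?    # prints 0 (plus a job notice): backgrounding always reports status 0
wait               # reap the background job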

Look at what your commands are:
test -x examples.desktop && echo $?
This means: check whether examples.desktop is executable, and if it is, then execute echo $?.
test -x examples.desktop & echo $?
means: start checking whether examples.desktop is executable in the "background", then immediately execute echo $?.

Related

Is there a way in the shell script that if [ <script exits with non-0 value> ] then; do <something>

In my shell script, I want to do the following:
if the shell script fails (exits with a non-zero value), then do something before the process exits.
How could I insert such an if block into my shell script?
Is that feasible?
For example,
set -e
echo $password > confidential.txt
rm <file-that-does-not-exist>
rm confidential.txt
I want to make sure that confidential.txt is removed no matter what.
Use the trap command:
trap 'if [ $? -ne 0 ]; then echo failed; fi' EXIT
The EXIT trap is run when the script exits, and $? contains the status of the last command before it exited.
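A quick way to see that in action (a minimal sketch; false stands in for any failing command):
bash -c 'trap "echo exit status was \$?" EXIT; false'
# prints: exit status was 1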
Note that a shell script's exit status is the status of the last command that it executed. So in your script, it will be the status of
rm confidential.txt
not the error from
rm filethatdoesnotexist
That is, unless you use set -e in the script, which makes it exit as soon as any command fails.
Use trap with the EXIT pseudo signal:
remove_secret () {
    rm -f /path/to/confidential.txt
}
trap remove_secret EXIT
You probably don't want the file to remain if the script exits with 0, so EXIT happens regardless of the exit code.
Note that without set -e, rm on a non-existent file doesn't stop the script.
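Putting this together with the script from the question, a minimal sketch might look like this (the password variable and the failing rm are stand-ins from the question):
#!/bin/bash
set -e
remove_secret () {
    rm -f confidential.txt
}
trap remove_secret EXIT             # runs on any exit, clean or not
echo "$password" > confidential.txt
rm file-that-does-not-exist         # fails; with set -e the script exits here
rm confidential.txt                 # never reached, but the trap still cleans up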
Assuming you're on Linux (or another operating system with /proc/*/fd), you have an even better option: Delete confidential.txt before putting the password into it at all.
That can look something like the following:
exec 3<>confidential.txt
rm -f -- confidential.txt
printf '%s\n' "$password" >&3
...and then, to read from that deleted file:
cat "/proc/$$/fd/3" ## where $$ is the PID of the shell that ran the exec command above
Because the file is already deleted, it's guaranteed to be eligible for garbage collection by your filesystem the moment your script (or the last program it started inheriting its file descriptors) exits or is killed, even if it's killed in a way that doesn't permit traps or signal processing to take place.

Bash 'swallowing' sub-shell children process when executing a single command

I bumped into an unexpected bash/sh behavior, and I wonder if someone can explain the rationale behind it and provide a solution to the question below.
In an interactive bash shell session, I execute:
$ bash -c 'sleep 10 && echo'
With ps on Linux it looks like this:
\_ -bash
\_ bash -c sleep 10 && echo
\_ sleep 10
The process tree is what I would expect:
My interactive bash shell process ($)
A child shell process (bash -c ...)
A sleep child process
However, if the command portion of my bash -c is a single command, e.g.:
$ bash -c 'sleep 10'
Then the middle sub-shell is swallowed, and my interactive terminal session executes sleep "directly" as a child process.
The process tree looks like this:
\_ -bash
\_ sleep 10
So from process tree perspective, these two produce the same result:
$ bash -c 'sleep 10'
$ sleep 10
What is going on here?
Now to my question: is there a way to force the intermediate shell to exist, regardless of the complexity of the expression passed to bash -c ...?
(I could append something like ; echo; to my actual command and that "works", but I'd rather not. Is there a more proper way to force the intermediate process into existence?)
There's actually a comment in the bash source that describes much of the rationale for this feature:
/* If this is a simple command, tell execute_disk_command that it
   might be able to get away without forking and simply exec.
   This means things like ( sleep 10 ) will only cause one fork.
   If we're timing the command or inverting its return value, however,
   we cannot do this optimization. */
if ((user_subshell || user_coproc) && (tcom->type == cm_simple || tcom->type == cm_subshell) &&
    ((tcom->flags & CMD_TIME_PIPELINE) == 0) &&
    ((tcom->flags & CMD_INVERT_RETURN) == 0))
  {
    tcom->flags |= CMD_NO_FORK;
    if (tcom->type == cm_simple)
      tcom->value.Simple->flags |= CMD_NO_FORK;
  }
In the bash -c '...' case, the corresponding decision is made by the should_suppress_fork function in builtins/evalstring.c, which sets the CMD_NO_FORK flag when the optimization applies.
It is always to your benefit to let the shell do this. It only happens when:
Input is from a hardcoded string, and the shell is at the last command in that string.
There are no further commands, traps, hooks, etc. to be run after the command is complete.
The exit status does not need to be inverted or otherwise modified.
No redirections need to be backed out.
This saves memory, makes the startup of the process slightly faster (since it doesn't need to be forked), and ensures that signals delivered to your PID go directly to the process you're running, making it possible for the parent of sh -c 'sleep 10' to determine exactly which signal killed sleep, should it in fact be killed by a signal.
However, if for some reason you want to inhibit it, you need only set a trap -- any trap will do:
# run the noop command (:) at exit
bash -c 'trap : EXIT; sleep 10'
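One way to observe the difference (a sketch assuming a Linux procps ps):
bash -c 'sleep 10' &                 # optimized: the child bash execs sleep, so sleep is the direct child
bash -c 'trap : EXIT; sleep 10' &    # the trap forces the intermediate bash to stay in the tree
ps -o pid,ppid,args --ppid $$        # the first job shows up as "sleep 10", the second as "bash -c ..."
wait                                 # clean up the background jobs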

Launching a bash shell from a sudo-ed environment

Apologies for the confusing question title. I am trying to launch an interactive bash shell from a shell script (say shel2.sh) which has been launched by a parent script (shel1.sh) in a sudo-ed environment. (I am creating a guided deployment script for my software, which needs to be installed as super-user, hence the sudo, but which may need the user to access the shell.)
Here's shel1.sh
#!/bin/bash
set -x
sudo bash << EOF
echo $?
./shel2.sh
EOF
echo shel1 done
And here's shel2.sh
#!/bin/bash
set -x
bash --norc --verbose --noprofile -i
echo $?
echo done
I expected this to launch an interactive bash shell which waits for my input before returning to shel1.sh. This is what I see:
+ ./shel1.sh
+ sudo bash
0
+ bash --norc --verbose --noprofile -i
bash-4.3# exit
+ echo 0
0
+ echo done
done
+ echo shel1 done
shel1 done
The bash-4.3# prompt issues an exit automatically and quits. Interestingly, if I invoke the bash shell with -l (or --login), the automatic entry is logout!
Can someone explain what is happening here ?
When you use a here document, you tie the shell's standard input -- and that of the child processes it spawns -- to the here document.
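A minimal illustration of that stdin problem (no sudo needed; the read and echo are just stand-ins):
bash << 'EOF'
read line            # reads the NEXT line of this here document, not the terminal
echo "got: $line"    # never runs: that line was consumed by read, so the shell hits EOF and exits
EOF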
You can avoid using a here document in many situations. For example, replace the here document with a single-quoted string.
#!/bin/bash
set -x
sudo bash -c '
# Aside: How is this actually useful?
echo $?
# Spawned script inherits the stdin of "sudo bash"
./shel2.sh'
echo shel1 done
Without more details, it's hard to see where exactly you want to go with this, but most modern Linux platforms have package managers which allow all kinds of hooks for installation, so that you would typically not need to do this sort of thing. Have you looked into that?

ksh su -c return value

Inside my script, I need to run two scripts as another user, so I used the following lines:
su otherUser -c "firstScript;secondScript;status=$?"
echo "returning $status"
return $status
The problem is that $status always returns 0. I did test with secondScript failing (wrong argument). I'm not sure if it's because I exited otherUser or if $status is actually the result of the su command itself. Any suggestions?
You need to capture status inside your outer shell, not in the inner shell invoked by su; otherwise, the captured value is thrown away as soon as that inner shell exits.
This is made much easier because su passes through the exit status of the command it runs -- if that command exits with a nonzero status, so will su.
su otherUser -c 'firstScript; secondScript'; status=$?
echo "returning $status"
return $status
Note that this only returns the exit status of secondScript (as would your original code have done, were it working correctly). You might think about what you want to do if firstScript fails.
Now, it's a little more interesting if you only want to return the exit code of firstScript; in that case, you need to capture the exit status in both shells:
su otherUser -c 'firstScript; status=$?; secondScript; exit $status'; status=$?
echo "returning $status"
return $status
If you want to run secondScript only if firstScript succeeds, and return a nonzero value if either of them fails, it becomes easy again:
su otherUser -c 'firstScript && secondScript'; status=$?
echo "returning $status"
return $status
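The pass-through behaviour is easy to check with sh -c as a stand-in for su (su exits with the status of the command it ran in just the same way):
sh -c 'false; true'; echo $?     # 0: the status of the last command in the string
sh -c 'true; false'; echo $?     # 1
sh -c 'false && true'; echo $?   # 1: the && chain stops at the first failure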

is .bashrc getting run twice when entering a new bash instance?

I want to display the number of nested sub-shells in my bash prompt.
I often type ":sh" during a vim editing session in order to do something, then exit back to the editor. Sometimes I attempt to exit back to the editor out of habit, forgetting that I am not in any editing session and my terminal closes!
To avoid this, I added a bit of code to my .bashrc that would keep a count of the number of nested sub-shells and display it in the prompt.
Here is the code:
echo "1: SHLVL=$SHLVL"
if [[ -z $SHPID ]] ; then
echo "2: SHLVL=$SHLVL"
SHPID=$$
let "SHLVL = ${SHLVL:0} + 1"
fi
echo "3: SHLVL=$SHLVL"
(For those who may wonder, the test "-z $SHPID" ensures that $SHLVL won't get incremented again if I run ". .bashrc" again in the same shell, perhaps to test something.)
But the output looks like this:
lsiden@morpheus ~ (morpheus) (2) $ bash
1: SHLVL=3
2: SHLVL=3
3: SHLVL=4
lsiden@morpheus ~ (morpheus) (4) $ ps
  PID TTY          TIME CMD
10421 pts/2    00:00:00 bash
11363 pts/2    00:00:00 bash
11388 pts/2    00:00:00 ps
As you can see, there are now two instances of bash on the stack, but the variable $SHLVL has been incremented twice. The output shows that before this snippet of code even executes in my .bashrc, SHLVL has already been incremented by 1!
Is it possible for .bashrc to get run twice somehow without seeing the output of the echo commands?
SHLVL is incremented automatically whenever you fire up a shell:
~$ echo $SHLVL
1
~$ bash -c 'echo $SHLVL'
2
and then you're incrementing it again in the .bashrc.
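Since bash already maintains SHLVL itself, the .bashrc only needs to display it rather than increment it; a minimal sketch (the prompt format is illustrative):
# Show the nesting level directly; bash bumps SHLVL once per new shell
PS1="\u@\h \w ($SHLVL) \$ "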
