Is .bashrc getting run twice when entering a new bash instance?

I want to display the number of nested sub-shells in my bash prompt.
I often type ":sh" during a vim editing session in order to do something, then exit back to the editor. Sometimes I attempt to exit back to the editor out of habit, forgetting that I am not in any editing session and my terminal closes!
To avoid this, I added a bit of code to my .bashrc that would keep a count of the number of nested sub-shells and display it in the prompt.
Here is the code:
echo "1: SHLVL=$SHLVL"
if [[ -z $SHPID ]] ; then
echo "2: SHLVL=$SHLVL"
SHPID=$$
let "SHLVL = ${SHLVL:0} + 1"
fi
echo "3: SHLVL=$SHLVL"
(For those who may wonder, the test "-z $SHPID" ensures that $SHLVL won't get incremented again if I run ". .bashrc" again in the same shell, perhaps to test something.)
But the output looks like this:
lsiden@morpheus ~ (morpheus) (2) $ bash
1: SHLVL=3
2: SHLVL=3
3: SHLVL=4
lsiden@morpheus ~ (morpheus) (4) $ ps
PID TTY TIME CMD
10421 pts/2 00:00:00 bash
11363 pts/2 00:00:00 bash
11388 pts/2 00:00:00 ps
As you can see, there are now two instances of bash on the stack, but $SHLVL has gone from 2 to 4, i.e. it has been incremented twice for one new shell. The output shows that before this snippet of code even executes in my .bashrc, SHLVL has already been incremented by 1!
Is it possible for .bashrc to get run twice somehow without seeing the output of the echo commands?

SHLVL is incremented automatically whenever you fire up a shell:
~$ echo $SHLVL
1
~$ bash -c 'echo $SHLVL'
2
and then you're incrementing it again in the .bashrc.
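For the prompt use case you don't need your own counter at all, since bash already maintains SHLVL for you. A minimal sketch (not from the original answer; the prompt layout here is just an illustration):
# In .bashrc: single quotes defer expansion, so the prompt shows
# the current nesting depth every time it is drawn.
PS1='\u@\h \w ($SHLVL) \$ '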

Related

running sudo -i command in bash script? [duplicate]

I have a script where I need to start a command, then pass some additional commands as commands to that command. I tried
su
echo I should be root now:
who am I
exit
echo done.
... but it doesn't work: The su succeeds, but then the command prompt is just staring at me. If I type exit at the prompt, the echo and who am i etc start executing! And the echo done. doesn't get executed at all.
Similarly, I need for this to work over ssh:
ssh remotehost
# this should run under my account on remotehost
su
## this should run as root on remotehost
whoami
exit
## back
exit
# back
How do I solve this?
I am looking for answers which solve this in a general fashion, and which are not specific to su or ssh in particular. The intent is for this question to become a canonical for this particular pattern.
Adding to tripleee's answer:
It is important to remember that the section of the script formatted as a here-document for another shell is executed in a different shell with its own environment (and maybe even on a different machine).
If that block of your script contains parameter expansion, command substitution, and/or arithmetic expansion, then you must use the here-document facility of the shell slightly differently, depending on where you want those expansions to be performed.
1. All expansions must be performed within the scope of the parent shell.
Then the delimiter of the here document must be unquoted.
command <<DELIMITER
...
DELIMITER
Example:
#!/bin/bash
a=0
mylogin=$(whoami)
sudo sh <<END
a=1
mylogin=$(whoami)
echo a=$a
echo mylogin=$mylogin
END
echo a=$a
echo mylogin=$mylogin
Output:
a=0
mylogin=leon
a=0
mylogin=leon
2. All expansions must be performed within the scope of the child shell.
Then the delimiter of the here document must be quoted.
command <<'DELIMITER'
...
DELIMITER
Example:
#!/bin/bash
a=0
mylogin=$(whoami)
sudo sh <<'END'
a=1
mylogin=$(whoami)
echo a=$a
echo mylogin=$mylogin
END
echo a=$a
echo mylogin=$mylogin
Output:
a=1
mylogin=root
a=0
mylogin=leon
3. Some expansions must be performed in the child shell, and some in the parent.
Then the delimiter of the here document must be unquoted and you must escape those expansion expressions that must be performed in the child shell.
Example:
#!/bin/bash
a=0
mylogin=$(whoami)
sudo sh <<END
a=1
mylogin=\$(whoami)
echo a=$a
echo mylogin=\$mylogin
END
echo a=$a
echo mylogin=$mylogin
Output:
a=0
mylogin=root
a=0
mylogin=leon
A shell script is a sequence of commands. The shell will read the script file, and execute those commands one after the other.
In the usual case, there are no surprises here; but a frequent beginner error is assuming that some commands will take over from the shell, and start executing the following commands in the script file instead of the shell which is currently running this script. But that's not how it works.
Basically, scripts work exactly like interactive commands, but how exactly they work needs to be properly understood. Interactively, the shell reads a command (from standard input), runs that command (with input from standard input), and when it's done, it reads another command (from standard input).
Now, when executing a script, standard input is still the terminal (unless you used a redirection) but the commands are read from the script file, not from standard input. (The opposite would be very cumbersome indeed - any read would consume the next line of the script, cat would slurp all the rest of the script, and there would be no way to interact with it!) The script file only contains commands for the shell instance which executes it (though you can of course still use a here document etc to embed inputs as command arguments).
In other words, these "misunderstood" commands (su, ssh, sh, sudo, bash etc) when run alone (without arguments) will start an interactive shell, and in an interactive session, that's obviously fine; but when run from a script, that's very often not what you want.
All of these commands can accept commands through means other than an interactive terminal session. Typically, each command supports a way to pass it commands as options or arguments:
su root -c 'who am i'
ssh user@remote uname -a
sh -c 'who am i; echo success'
Many of these commands will also accept commands on standard input:
printf 'uname -a; who am i; uptime' | su
printf 'uname -a; who am i; uptime' | ssh user@remote
printf 'uname -a; who am i; uptime' | sh
which also conveniently allows you to use here documents:
ssh user@remote <<'____HERE'
uname -a
who am i
uptime
____HERE
sh <<'____HERE'
uname -a
who am i
uptime
____HERE
For commands which accept a single command argument, that command can be sh or bash with multiple commands:
sudo sh -c 'uname -a; who am i; uptime'
As an aside, you generally don't need an explicit exit because the command will terminate anyway when it has executed the script (sequence of commands) you passed in for execution.
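These forms also compose. A sketch combining them (user and host names are placeholders, and this assumes passwordless sudo on the remote host; otherwise see the TTY discussion below):
# ssh hands the quoted string to the remote shell; sudo sh -c then
# runs the command list as root on the remote machine.
ssh user@remote 'sudo sh -c "uname -a; who am i; uptime"'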
If you want a generic solution which will work for any kind of program, you can use the expect command.
Extract from the manual page:
Expect is a program that "talks" to other interactive programs according to a script. Following the script, Expect knows what can be expected from a program and what the correct response should be. An interpreted language provides branching and high-level control structures to direct the dialogue. In addition, the user can take control and interact directly when desired, afterward returning control to the script.
Here is a working example using expect:
set timeout 60
spawn sudo su -
expect "*?assword" { send "*secretpassword*\r" }
send_user "I should be root now:"
expect "#" { send "whoami\r" }
expect "#" { send "exit\r" }
send_user "Done.\n"
exit
The script can then be launched with a simple command:
$ expect -f custom.script
You can view a full example in the following page: http://www.journaldev.com/1405/expect-script-example-for-ssh-and-su-login-and-running-commands
Note: The answer proposed by @tripleee only works if standard input can be read once at the start of the command, or if a tty has been allocated; it won't work for interactive programs.
Example of errors if you use a pipe:
echo "su whoami" | ssh remotehost
--> su: must be run from a terminal
echo "sudo whoami" | ssh remotehost
--> sudo: no tty present and no askpass program specified
In SSH, you can force a TTY allocation with multiple -t parameters, but when sudo asks for the password it will still fail, because the piped input cannot answer the prompt.
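When the command is given as an argument instead of through a pipe, however, forcing a TTY does let sudo prompt interactively. A sketch (user and host are placeholders):
# -tt forces pseudo-terminal allocation even though ssh's own
# stdin is not a terminal, so sudo can prompt on the remote tty.
ssh -tt user@remotehost 'sudo whoami'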
Without the use of a program like expect, any call to a function/program which reads from stdin will make the next command fail:
ssh user@host <<'____HERE'
echo "Enter your name:"
read name
echo "ok."
____HERE
--> The `echo "ok."` line is consumed by the `read` command as its input
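You can reproduce the same effect locally, without ssh (a minimal sketch): the shell reading the here document and the read builtin share one input stream, so read swallows the next script line as data.
sh <<'____HERE'
read name
echo "this line becomes data, not a command"
echo "read consumed: $name"
____HERE
Output:
read consumed: echo "this line becomes data, not a command"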

Bash 'swallowing' sub-shell child processes when executing a single command

I bumped into an unexpected bash/sh behavior and I wonder if someone can explain the rationale behind it, and provide a solution to the question below.
In an interactive bash shell session, I execute:
$ bash -c 'sleep 10 && echo'
With ps on Linux it looks like this:
\_ -bash
\_ bash -c sleep 10 && echo
\_ sleep 10
The process tree is what I would expect:
My interactive bash shell process ($)
A child shell process (bash -c ...)
a sleep child process
However, if the command portion of my bash -c is a single command, e.g.:
$ bash -c 'sleep 10'
Then the middle sub-shell is swallowed, and my interactive terminal session executes sleep "directly" as a child process.
The process tree looks like this:
\_ -bash
\_ sleep 10
So from process tree perspective, these two produce the same result:
$ bash -c 'sleep 10'
$ sleep 10
What is going on here?
Now to my question: is there a way to force the intermediate shell, regardless of the complexity of the expression passed to bash -c ...?
(I could append something like ; echo; to my actual command and that "works", but I'd rather not. Is there a more proper way to force the intermediate process into existence?)
There's actually a comment in the bash source that describes much of the rationale for this feature:
/* If this is a simple command, tell execute_disk_command that it
   might be able to get away without forking and simply exec.
   This means things like ( sleep 10 ) will only cause one fork.
   If we're timing the command or inverting its return value, however,
   we cannot do this optimization. */
if ((user_subshell || user_coproc) &&
    (tcom->type == cm_simple || tcom->type == cm_subshell) &&
    ((tcom->flags & CMD_TIME_PIPELINE) == 0) &&
    ((tcom->flags & CMD_INVERT_RETURN) == 0))
  {
    tcom->flags |= CMD_NO_FORK;
    if (tcom->type == cm_simple)
      tcom->value.Simple->flags |= CMD_NO_FORK;
  }
In the bash -c '...' case, the corresponding CMD_NO_FORK decision is made by the should_suppress_fork function in builtins/evalstring.c.
It is always to your benefit to let the shell do this. It only happens when:
Input is from a hardcoded string, and the shell is at the last command in that string.
There are no further commands, traps, hooks, etc. to be run after the command is complete.
The exit status does not need to be inverted or otherwise modified.
No redirections need to be backed out.
This saves memory, makes process startup slightly faster (one fork fewer), and ensures that signals delivered to your PID go directly to the process you're running. That makes it possible for the parent of sh -c 'sleep 10' to determine exactly which signal killed sleep, should it in fact be killed by a signal.
However, if for some reason you want to inhibit it, you need only set a trap -- any trap will do:
# run the noop command (:) at exit
bash -c 'trap : EXIT; sleep 10'
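You can observe the optimization and its inhibition side by side (a sketch; ps --forest is GNU ps, and the sleep durations are arbitrary):
# The first child bash execs sleep directly; the second keeps an
# intermediate bash alive because of the trap.
bash -c 'sleep 30' &
bash -c 'trap : EXIT; sleep 30' &
ps --forest -o pid,cmd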

Launching a bash shell from a sudo-ed environment

Apologies for the confusing question title. I am trying to launch an interactive bash shell from a shell script (say shel2.sh) which has been launched by a parent script (shel1.sh) in a sudo-ed environment. (I am creating a guided deployment script for my software which needs to be installed as super-user, hence the sudo, but may need the user to access the shell.)
Here's shel1.sh
#!/bin/bash
set -x
sudo bash << EOF
echo $?
./shel2.sh
EOF
echo shel1 done
And here's shel2.sh
#!/bin/bash
set -x
bash --norc --verbose --noprofile -i
echo $?
echo done
I expected this to launch an interactive bash shell which waits for my input before returning to shel1.sh. This is what I see:
+ ./shel1.sh
+ sudo bash
0
+ bash --norc --verbose --noprofile -i
bash-4.3# exit
+ echo 0
0
+ echo done
done
+ echo shel1 done
shel1 done
The bash-4.3# prompt receives an exit automatically and quits. Interestingly, if I invoke the bash shell with -l (or --login), the automatic entry is logout!
Can someone explain what is happening here?
When you use a here document, you are tying up the shell's -- and its spawned child processes' -- standard input to the here document input.
You can avoid using a here document in many situations. For example, replace the here document with a single-quoted string.
#!/bin/bash
set -x
sudo bash -c '
# Aside: How is this actually useful?
echo $?
# Spawned script inherits the stdin of "sudo bash"
./shel2.sh'
echo shel1 done
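If the here document itself is what steals the input, another approach (a hedged sketch, assuming the script still has a controlling terminal) is to hand the inner shell the terminal explicitly:
sudo bash <<'EOF'
# /dev/tty names the controlling terminal, which is still reachable
# even though stdin is tied to the here document.
./shel2.sh < /dev/tty
EOF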
Without more details, it's hard to see where exactly you want to go with this, but most modern Linux platforms have package managers which allow all kinds of hooks for installation, so that you would typically not need to do this sort of thing. Have you looked into that?

count processes in shell script [duplicate]

Possible Duplicate:
Quick-and-dirty way to ensure only one instance of a shell script is running at a time
I am new to shell scripting.
What I want to do is avoid running multiple instances of a script.
I have this shell script cntps.sh
#!/bin/bash
cnt=`ps -e|grep "cntps"|grep -v "grep"`
echo $cnt >> ~/cntps.log
if [ $cnt < 1 ];
then
#do something.
else
exit 0
fi
if I run it this way $./cntps.sh, it echoes 2
if I run it this way $. ./cntps.sh, it echoes 0
if I run it with crontab, it echoes 3
Could somebody explain to me why this is happening?
And what is the proper way to avoid running multiple instances of a script?
I changed your command slightly to output ps to a log file so we can see what is going on.
cnt=`ps -ef| tee log | grep "cntps"|grep -v "grep" | wc -l`
This is what I saw:
32427 -bash
20430 /bin/bash ./cntps.sh
20431 /bin/bash ./cntps.sh
20432 ps -ef
20433 tee log
20434 grep cntps
20435 grep -v grep
20436 wc -l
As you can see, my terminal's shell (32427) spawns a new shell (20430) to run the script. The script then spawns another child shell (20431) for command substitution (`ps -ef | ...`).
So, the count of two is due to:
20430 /bin/bash ./cntps.sh
20431 /bin/bash ./cntps.sh
In any case, this is not a good way to ensure that only one process is running. See this SO question instead.
Firstly, I would recommend using pgrep rather than this method. Secondly, I presume you're missing a wc -l to count the number of instances in the script.
In answer to your counting problems:
if I run it this way $./cntps.sh, it echoes 2
This is because the backtick call (ps -e ...) triggers a subshell which is also called cntps.sh, and this yields two matches
if I run it this way $. ./cntps.sh, it echoes 0
This is because you're not running it, but are actually sourcing it into the currently running shell. As a result there are no copies of the script running under the name cntps
if I run it with crontab, it echoes 3
Two from the invocation, plus one from the crontab invocation itself, which spawns sh -c 'path/to/cntps.sh'
Please see this question for how to do a single instance shell script.
Use a "lock" file as a mutex.
#!/bin/bash
# Note: a separate "check then create" has a race window between the
# test and the touch; mkdir is atomic, so a lock directory is safer.
lockdir=/tmp/cntps.lock
if mkdir "$lockdir" 2>/dev/null; then
    # ... execute script body here ...
    rmdir "$lockdir"    # release the lock
else
    echo "another instance is running!"
fi
exit 0
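If flock(1) is available (it ships with util-linux on most Linux systems), you also avoid stale locks, since the kernel releases the lock automatically when the script exits for any reason. A sketch, not part of the original answer:
#!/bin/bash
# Open the lock file on a spare file descriptor and try to acquire
# the lock without blocking; bail out if another instance holds it.
exec 200>/tmp/cntps.lock
flock -n 200 || { echo "another instance is running!"; exit 1; }
# ... script body; the lock is released when the script exits ...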
