Linux .profile overrides other bash commands

I load a script (a whiptail menu) when the root user logs into their Linux server, and that works fine. The problem is that now, when I try to run other scripts from the command prompt (or from crontab), the initial script is loaded instead, and the script I actually want to run apparently never executes.
This is what ~/.profile looks like:
if [ "$BASH" ]; then
    if [ -f ~/.bashrc ]; then
        . ~/.bashrc
    fi
fi
mesg n
source /root/menu.sh
So, when I try to run bash -lc 'ruby some/other/script.rb', I'm taken into the script that runs at the end of ~/.profile, which is menu.sh. How can I keep this from happening?
Here's what I need to have happen in the long run:
The server boots up and takes the user to /root/menu.sh
There are background scripts that run via crontab such as a check in script, job script, etc.

Best-practices: Don't use a login shell unless you need one
When you pass the -l argument to bash, you're telling it to behave as a login shell; this includes running the user's .profile.
If you don't want that behavior, don't pass -l. Thus:
bash -c 'ruby some/other/script.rb'
That said, there's no advantage to doing that over just invoking ruby directly, without any enclosing shell:
ruby some/other/script.rb
If you must use a login shell...
If you want other effects of running the user's .profile, you might set an environment variable to indicate that you want to bypass this behavior:
# in the user's login scripts
[ -n "$skip_menu" ] || source /root/menu.sh
...and then...
skip_menu=1 bash -lc '...your command here...'
...or, if being executed without an enclosing shell...
env skip_menu=1 bash -lc '...your command here...'
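To see the guard in action without touching the real ~/.profile, here's a minimal sketch; the /tmp path and the skip_menu name are just illustrative:

```shell
#!/bin/sh
# Write the guarded line from .profile into a throwaway file:
cat > /tmp/demo_profile <<'EOF'
[ -n "$skip_menu" ] || echo "menu.sh would start here"
EOF

# Without the variable set, the menu line runs:
sh -c '. /tmp/demo_profile'               # prints: menu.sh would start here

# With it set, the menu is skipped:
skip_menu=1 sh -c '. /tmp/demo_profile'   # prints nothing
```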

Related

How to change my default shell on a server?

I was assigned an account for log in to a remote server, and I want to change my default shell.
I tried chsh command but it says: chsh: "/public/home/{my_id}/bin/zsh" is not listed in /etc/shells.
If you don't have permission to install zsh system-wide, a quick fix is to append exec ~/bin/zsh -l to ~/.bash_profile (if bash is your current login shell), or to the equivalent rc file for your current login shell.
zsh -l starts zsh as a login shell.
exec COMMAND replaces the current process with COMMAND, so you'll only have to type exit (or press ctrl+d) once.
~/.bash_profile is executed only when bash starts as a login shell, so you can still run bash normally as a non-login shell.
Depending what is in ~/.bash_profile (or equivalent), you may wish to avoid executing its other contents, by putting exec ~/bin/zsh -l at the start of the file (not the end), and copy/port anything important over to the zsh equivalent, $ZDOTDIR/.zprofile.
I might also do export SHELL="$HOME/bin/zsh", although I'm unsure of the full effects of setting SHELL differently from what is specified for your user in /etc/passwd, to a shell not listed in /etc/shells, and to a shell binary under your home directory.
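Putting the pieces together, the top of ~/.bash_profile might look like this (a sketch: the ~/bin/zsh path comes from the question, and the -x existence check is an extra assumption to avoid locking yourself out of logins if the binary goes missing):

```shell
# Very top of ~/.bash_profile:
if [ -x "$HOME/bin/zsh" ]; then
    export SHELL="$HOME/bin/zsh"
    exec "$HOME/bin/zsh" -l
fi
# Anything below this point only runs if ~/bin/zsh is missing.
```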
First, check all the shells available on your Linux system:
cat /etc/shells
Use the chsh command-line utility to change a login shell with the -s or --shell option, like this:
# chsh --shell /bin/sh tecmint

shopt -s extdebug in .bashrc not working in script files

I am writing a bash script (echoo.sh) with the intention of echoing each command before it is executed. I source this script (echoo.sh) inside .bashrc. But it does not execute for commands run in a script file (tmp.sh) with the bash shebang. Below is the code I have so far:
echoo.sh
#!/usr/bin/env bash
shopt -s extdebug

get_hacked () {
    [ -n "$COMP_LINE" ] && return                      # not needed for completion
    [ "$BASH_COMMAND" = "$PROMPT_COMMAND" ] && return  # not needed for prompt
    local this_command=$BASH_COMMAND
    echo "$this_command"
}
trap 'get_hacked' DEBUG
When I open a shell and run any command - It works. But for stuff in a script file it doesn't work.
SOME FURTHER TRIES:
I tried sourcing the .bashrc file within the script file (tmp.sh) - didn't work.
I sourced echoo.sh inside tmp.sh and it worked.
SO, I am trying to understand
Why doesn't it work if I just source my script in .bashrc for stuff that runs in scripts?
Why doesn't further try #1 work when #2 does.
AND finally what can I do such that I don't have to source echoo.sh in all script files for this to work. Can source my script in one place and change some setting such that it works in all scenarios.
I source this script (echoo.sh) inside .bashrc. But it does not execute for commands run in script file(tmp.sh) with the bash shebang
Right, it won't, because the script is run by a non-interactive shell!
The shell can be spawned either interactively or non-interactively. When bash is invoked as an interactive login shell it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable.
When an interactive shell that is not a login shell is started, bash reads and executes commands from ~/.bashrc, if that file exists.
When you run a shell script with an interpreter set, it opens a new sub-shell that is non-interactive and does not have the option -i set in the shell options.
Looking into ~/.bashrc closely you will find a line saying
# If not running interactively, don't do anything
[[ "$-" != *i* ]] && return
which makes the file return early in non-interactive shells, such as the one your script runs in. For example, consider the case below, where a non-interactive shell is spawned explicitly with the -c option (-x just enables trace mode):
bash -cx 'source ~/.bashrc'
+ source /home/foobaruser/.bashrc
++ [[ hxBc != *i* ]]
++ return
which shows that the rest of ~/.bashrc was not executed because of this guard. There is, however, an option to read a start-up file in such non-interactive cases, defined by the BASH_ENV environment variable. The behavior is as if this line were executed:
if [ -n "$BASH_ENV" ]; then . "$BASH_ENV"; fi
You can define a file and pass its path via the environment variable:
echo 'var=10' > non_interactive_startup_file
BASH_ENV=non_interactive_startup_file bash -x script.sh
Alternatively, run your shell script as if an interactive non-login shell were spawned, by passing the -i flag. Re-using the above example, with -i the ~/.bashrc file is now sourced:
bash -icx 'source ~/.bashrc'
You could also set this option in the interpreter shebang: #!/bin/bash -i
So to answer your questions from the above inferences,
Why doesn't it work if I just source my script in .bashrc for stuff that runs in scripts?
It won't, because the interactivity guard in ~/.bashrc returns early when the file is sourced from a non-interactively launched shell. Bypass it by passing -i when running the script, i.e. bash -i <script>.
Why doesn't further try #1 work when #2 does.
Because try #1 still depends on ~/.bashrc being read in full, and the guard prevents that. When you sourced echoo.sh directly inside tmp.sh, its configuration (the extdebug option and the DEBUG trap) took effect in the shell launched by tmp.sh.
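To address the final question (sourcing in one place instead of in every script): point BASH_ENV at echoo.sh, or any single file, and every bash script run non-interactively will pick up the trap. A self-contained sketch, using throwaway stand-in files under /tmp rather than the real echoo.sh and tmp.sh:

```shell
#!/usr/bin/env bash
# Stand-in for echoo.sh: enable extdebug and echo each command via a DEBUG trap.
cat > /tmp/echoo_env.sh <<'EOF'
shopt -s extdebug
trap 'echo "+ $BASH_COMMAND"' DEBUG
EOF

# Stand-in for tmp.sh (":" is a no-op, so only the trap produces output):
cat > /tmp/tmp_demo.sh <<'EOF'
#!/usr/bin/env bash
: demo-command
EOF

# Non-interactive bash sources $BASH_ENV before running the script,
# so the DEBUG trap is already active inside tmp_demo.sh:
BASH_ENV=/tmp/echoo_env.sh bash /tmp/tmp_demo.sh
```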

Wrong BASH-Variable return from a bash script

I'd like to check the value of $HISTFILE (or any similar bash variable) from a bash script. On the command line, echo $HISTFILE is what I normally use, but a bash script containing only:
#!/bin/bash
echo $HISTFILE
gives an empty line instead of showing $HOME/.bash_history (or a similar value). My questions are:
What is the reason for this (since I have never had such trouble with bash scripts before), and
how can I check the value of BASH-Variables like $HISTFILE from inside a bash script?
Many thanks in advance. Cheers, M.
HISTFILE is only set in interactive shells; scripts run in non-interactive shells. Compare
$ bash -c 'echo $HISTFILE' # non-interactive, no output
$ bash -ic 'echo $HISTFILE' # interactive, produces output
/home/me/.bash_history
However, forcing the script to run in an interactive shell will also cause your .bashrc file to be sourced, which may or may not be desirable.
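If you only need the value inside a script, one workaround is to spawn a throwaway interactive shell just to evaluate the variable and capture its output. A sketch; the stderr redirect hides the "no job control" noise an interactive shell may print when there is no terminal:

```shell
#!/bin/bash
# Ask a short-lived interactive bash for the variable's value:
histfile=$(bash -ic 'echo "$HISTFILE"' 2>/dev/null)
echo "HISTFILE is: $histfile"
```

Note this still sources .bashrc in the child shell, but any side effects stay in that child process rather than your script.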

Command NOT found when called from inside bash script

I have an application named puppet installed on my Linux box. It is installed at location /usr/test/bin/puppet
This is how .bash_profile looks
export PATH=/usr/test/bin
If I run puppet apply from the console, it works fine, but when I call the puppet command from inside a bash script, it says command not found:
#!/bin/bash
puppet apply x.pp
Any ideas on what is wrong ?
.bash_profile is loaded only when bash is invoked as a login shell (bash -l, or from a real tty). At least in Debian-based distributions, bash in a virtual tty (for example under xterm, gnome-terminal, etc.) is invoked as an interactive non-login shell.
Interactive shells load their configuration from ~/.bashrc.
bash manpage:
~/.bash_profile
The personal initialization file, executed for login shells
~/.bashrc
The individual per-interactive-shell startup file
Shell scripts don't load any of these.
You can check which files are opened by any program with strace:
strace ./s.sh 2>&1 | grep -e stat -e open
Possible solutions:
You can export the variable at the beginning of every script:
#!/bin/bash
export PATH=$PATH:...
Or you can have another file with the desired variables and source it from any script that need those:
/etc/special_vars.sh:
export PATH=$PATH:...
script:
#!/bin/bash
. /etc/special_vars.sh
puppet ...
Configure the PATH in ~/.bashrc, ~/.bash_profile, and ~/.profile for the user running the script (sub-processes inherit environment variables), to have some guarantee that the user can run the script from different environments and shells (some Bourne-compatible shells other than bash do load ~/.profile).
Maybe the export of PATH is wrong? PATH entries must be directories that contain executables, not the path of the binary itself:
export PATH=$PATH:/usr/test/bin
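A quick self-contained illustration of that point (fakepuppet and /tmp/fakebin are made-up names for the demo):

```shell
#!/bin/sh
# Create a throwaway directory containing a stand-in executable:
mkdir -p /tmp/fakebin
printf '#!/bin/sh\necho fakepuppet-ok\n' > /tmp/fakebin/fakepuppet
chmod +x /tmp/fakebin/fakepuppet

# Appending the *directory* (not the file) makes the command resolvable:
export PATH="$PATH:/tmp/fakebin"
command -v fakepuppet    # prints: /tmp/fakebin/fakepuppet
fakepuppet               # prints: fakepuppet-ok
```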
You could try using an alias, like so
in your .bash_profile:
alias puppet='bash puppet.fileextension'
you can also do
alias puppet='bash path/to/puppet.fileextension'
which will let you run the script from anywhere in the terminal.
EDIT:
OP has stated in the comments that there will be two different systems running, and he asked how to check the file path to the bash file.
If you do
#!/bin/bash
runPuppet(){
    if [ -e path/to/system1/puppet.fileextension ]; then
        bash path/to/system1/puppet.fileextension "$1" "$2"
    elif [ -e path/to/system2/puppet.fileextension ]; then
        bash path/to/system2/puppet.fileextension "$1" "$2"
    fi
}
runPuppet apply x.pp
and change the runPuppet input to whatever you'd like.
To clarify/explain:
-e is to check if the file exists
$1 & $2 are the first two input parameters, respectively.

Pipe to program fails, but runs OK in SSH console

I'm trying to get Rails 4.1 to receive bounceback emails but it's been really difficult to even get to this point. I can run the command below in an SSH console when logged in as root, but when I put it in my /etc/valiases file, I get a bounceback from the script saying "the following addresses failed".
runuser -l useraccount -c "cd /home/useraccount/rails_deployments/dev.www/current/bin && rails runner -e development 'EBlast.receive(STDIN.read)'"
/etc/valiases/dev.mydomain.com
eblast-bounce#dev.mydomain.com: "|runuser -l useraccount -c "cd /home/useraccount/rails_deployments/dev.www/current/bin && rails runner -e development 'EBlast.receive(STDIN.read)'""
I've also tried escaping the double-quotes to no avail.
I need to run as useraccount because the RVM environment variables don't exist for root. Running the 1st command in an SSH console when logged in as root works, but not when exim receives an email.
You can't doublequote inside of doublequotes without doing some escaping. Once you start escaping quotes, it can get complicated knowing when you also need to escape other characters as well. Your example doesn't appear to get too complicated, but I suggest a different method.
IMHO you should create a shell script, for example eblast-bounce-script, with the piped commands you want to run. Then set your alias to:
eblast-bounce#dev.mydomain.com: "|/path/to/eblast-bounce-script"
Make sure to make the script executable, and runnable by the user that exim will be calling it as. If you make the script mode 755, owned by root, that should be sufficient.
There are a few things I had to do to work around the problem:
1) Move the runner script into its own file as Todd suggested; nested quotes were causing the script to fail to run.
2) Make the file executable; the permissions were already set to 755.
3) Even though exim was using my username to execute the script, the environment variables such as PATH and HOME were not set at all! This caused ruby to be an unknown command. This caused many other issues because most of the app relies upon RVM and its gemsets. So I couldn't get ruby to run, much less rails. Even if I were to explicitly call the ruby wrapper, spring would break because $HOME wasn't set. Just a cascade of issues because the user environment wasn't being set. I also couldn't just issue su - username -c 'whatever' because the account that exim was using didn't have authority to use su.
So the working setup looks like this:
/etc/valiases/dev.mydomain.com
eblast-bounce#dev.mydomain.com: "|/bin/bash -l -c '/home/useraccount/rails_deployments/dev.www/current/script/receive_eblast_bounce'"
*: ":fail: No Such User Here"
/home/useraccount/rails_deployments/dev.www/current/script/receive_eblast_bounce
#!/bin/bash
D=$(pwd)
HOME=/home/useraccount
if [ -f /etc/bashrc ]; then
    . /etc/bashrc
fi
if [[ -s "/home/useraccount/.rvm/scripts/rvm" ]]; then
    source "/home/useraccount/.rvm/scripts/rvm"
fi
cd /home/useraccount/rails_deployments/dev.www/current
./bin/rails runner -e development 'EBlast.receive(STDIN.read)'
cd "$D"
I'm now having problems with ActionMailer using SSL when it shouldn't, and I don't know if that's related to something I did here, but at least it executes the rails script.
