User environment is not sourced with chroot - linux

I have a little problem with a chroot environment and I hope you could help me :)
Here's my story:
1 - I created a user demo (with a home like /home/demo) and I chrooted him thanks to a script /bin/chrootshell, which is as follows:
#!/bin/bash
exec -c /usr/sbin/chroot /home/$USER /bin/bash --login -i
2 - Usual login authentication is disabled for this user, so I have to use su - demo to be logged in as him
Everything works well (like all the chrooted system commands or my java configuration). But each time I become user demo, it seems my .bashrc and /etc/profile are not sourced... and I don't know why.
But if I launch a manual bash it works as you can see here:
root@test:~# su - demo
bash-4.1$ env
PWD=/
SHELL=/bin/chrootshell
SHLVL=1
_=/bin/env
bash-4.1$ bash
bash-4.1$ env
PWD=/
SHLVL=2
SHELL=/bin/chrootshell
PLOP=export variable test
_=/bin/env
As you can see, my $PLOP variable (defined in /.bashrc == /home/demo/.bashrc) is set correctly in the second bash, but not in the first, and I don't know why.
Thanks in advance if you have any clue about my issue :)
edit: What I actually don't understand is why SHELL=/bin/chrootshell? In my chroot env I declared my demo user with a /bin/bash shell

As far as I can tell, the behaviour you are experiencing is bash working as designed.
In short: when bash is started as a login shell (which is what happens when you call bash with --login) it will read .profile but not .bashrc. When bash is started as a non-login shell, it will read .bashrc but not .profile.
Read the bash manual chapter about startup files for more information.
My suggestion to work around this design decision is to create a .bash_profile with the following content:
if [ -f ~/.profile ]; then
    source ~/.profile
fi
if [ -f ~/.bashrc ]; then
    source ~/.bashrc
fi
(Note that the tilde must stay unquoted, or use "$HOME/.profile" instead: inside double quotes ~ is not expanded, so a test like [ -f "~/.profile" ] is always false.)
That will make bash read both .profile and .bashrc when started as a login shell, and only .bashrc when started as a non-login shell. Now you can put the stuff which needs to be done once (during login) in .profile, and the stuff which needs to be done every time in .bashrc.
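This split is easy to observe with a throwaway HOME directory (a sketch; the marker variables FROM_PROFILE and FROM_BASHRC are made up for the demonstration):

```shell
#!/bin/bash
# Create a scratch HOME with one marker variable per startup file.
home=$(mktemp -d)
echo 'export FROM_PROFILE=yes' > "$home/.profile"
echo 'export FROM_BASHRC=yes'  > "$home/.bashrc"

# Login shell (--login): reads .profile, not .bashrc
HOME=$home bash --login -c \
    'echo "login:       profile=$FROM_PROFILE bashrc=$FROM_BASHRC"' 2>/dev/null

# Interactive non-login shell (-i): reads .bashrc, not .profile
HOME=$home bash -i -c \
    'echo "interactive: profile=$FROM_PROFILE bashrc=$FROM_BASHRC"' 2>/dev/null

rm -rf "$home"
```

The first line should report profile=yes with bashrc empty, and the second the reverse, matching the manual's description.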

My Bash script ends after entering chroot environment

My question:
After the following lines in my script, the script ends unexpectedly. I am trying to enter a chroot inside of a bash script. How can I make this work?
I am writing a script that installs Gentoo
echo " Entering the new environment"
chroot /mnt/gentoo /bin/bash
source /etc/profile
export PS1="(chroot) ${PS1}"
The chroot command starts a new child bash process, so the rest of your script will not be executed until you quit that child bash process.
So instead of /bin/bash, just run your script in the chroot:
chroot /mnt/gentoo myscript.sh
myscript.sh:
#!/bin/bash
echo " Entering the new environment"
source /etc/profile
export PS1="(chroot) ${PS1}"
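Since chroot needs root and a prepared /mnt/gentoo, here is a minimal sketch of the same two-script structure with plain bash standing in for chroot (the temp directory and file names are made up; in the real case the bash line would be chroot /mnt/gentoo /myscript.sh):

```shell
#!/bin/bash
work=$(mktemp -d)

# Inner script: everything that must happen inside the new environment.
cat > "$work/myscript.sh" <<'EOF'
#!/bin/bash
echo " Entering the new environment"
# source /etc/profile            # would run inside the real chroot
export PS1="(chroot) ${PS1}"
echo "running inside the child shell"
EOF
chmod +x "$work/myscript.sh"

# Real case: chroot /mnt/gentoo /myscript.sh
bash "$work/myscript.sh"
echo "parent script resumes only after the child shell exits"

rm -rf "$work"
```

The point of the structure: the parent script never tries to run commands "after" entering the child shell; everything that belongs inside goes into the inner script.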

Execute shell script within another script prompts: No such file or directory

(I'm new in shell script.)
I've been stuck with this issue for a while. I've tried different methods but without luck.
Description:
When my script attempts to run another script (SiebelMessageCreator.sh, which I don't own) it prompts:
-bash: ./SiebelMessageCreator.sh: No such file or directory
But the file exists and has execute permissions:
-rwxr-xr-x 1 owner ownergrp 322 Jun 11 2015 SiebelMessageCreator.sh
The code that is performing the script execution is:
(cd $ScriptPath; su -c './SiebelMessageCreator.sh' - owner; su -c 'nohup sh SiebelMessageSender.sh &' - owner;)
It's within a subshell because I first thought it was throwing that message because my script was running in my home directory. (When I run the script I'm root, and I've moved to my non-root home directory to run it, because I can't move my script [ policies ] to the directory where the other script resides.)
I've also tried with sh SCRIPT.sh and ./SCRIPT.sh, and with changing the shebang from bash to ksh, because SiebelMessageCreator.sh uses that shell.
The su -c 'sh SCRIPT.sh' - owner is necessary. If the script runs as root and not as owner it breaks something (?) (that's what my partners told me from their experience executing it as root). So I execute it as the owner.
Another thing I've found in my research is that it can throw that message if it's a symbolic link. I'm really not sure if the content of the script involves a symbolic link. Here it is:
#!/bin/ksh
BASEDIRROOT=/path/to/file/cpp-plwsutil-core-runtime.jar (path changed on purpose for this question)
java -classpath $BASEDIRROOT com.hp.cpp.plwsutil.SiebelMessageCreator
exitCode=$?
echo "`date -u '+%Y-%m-%d %H:%M:%S %Z'` - Script execution finished with exit code $exitCode."
exit $exitCode
As you can see it's a very simple script that just calls a .jar. But I also can't add it to my script [ policies ].
If I run ./SiebelMessageCreator.sh manually it works just fine, but not from my script. I suppose that rules out the x64/x32 bits issue that I've also found while googling?
By the way, I'm automating some tasks, the ./SiebelMessageCreator.sh and nohup sh SiebelMessageSender.sh & are just the last steps.
Any ideas :( ?
Did you try
. ./SiebelMessageCreator.sh
You can also run which sh or which ksh, then modify the first line #!/bin/ksh accordingly.
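For what it's worth, one classic cause of exactly this error is not the script itself but the interpreter on its #! line: if /bin/ksh is absent, or the file has DOS line endings (so the kernel looks for ksh followed by a carriage return), the exec fails and the shell reports the error against the script's name (the exact wording varies by bash version). A quick sketch of that failure mode, using a made-up scratch file:

```shell
#!/bin/bash
# A script whose shebang names a nonexistent interpreter reproduces the
# misleading error even though the script file itself exists and is +x.
demo=$(mktemp ./demo.XXXXXX)
printf '#!/no/such/ksh\necho hello\n' > "$demo"
chmod +x "$demo"
"$demo" 2>&1 || echo "failed with status $?"

# Diagnose: inspect the shebang and check the interpreter really exists.
head -1 "$demo"
command -v ksh >/dev/null || echo "ksh not found on PATH"
rm -f "$demo"
```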

docker ubuntu container: shell linked to bash still starts shell

Alright guys, so I'm trying to install rvm in a docker container based on ubuntu:14.04. During the process, I discovered that some people do something like this to ensure docker commands are also run with bash:
RUN ln -fs /bin/bash /bin/sh
Now the weirdness happens, and I hope one of you can explain it to me:
→ docker run -it --rm d81ff50de1ce /bin/bash
root@e93a877ab3dc:/# ls -lah /bin
....
lrwxrwxrwx 1 root root 9 Mar 1 16:15 sh -> /bin/bash
lrwxrwxrwx 1 root root 9 Mar 1 16:15 sh.distrib -> /bin/bash
...
root@e93a877ab3dc:/# /bin/sh
sh-4.3# echo $0
/bin/sh
Can someone explain what's going on here? I know I could just prefix my commands in the dockerfile w/ bash -c, but I would like to understand what is happening here and if possible still ditch the bash -c prefix in the dockerfile.
Thanks a lot,
Robin
It's because bash has a compatibility mode where it tries to emulate sh if it is started via the name sh, as the manpage says:
If bash is invoked with the name sh, it tries to mimic the startup behavior of historical versions of sh as closely as possible, while conforming to the POSIX standard as well. When invoked as an interactive login shell, or a non-interactive shell with the --login option, it first attempts to read and execute commands from /etc/profile and ~/.profile, in that order. The --noprofile option may be used to inhibit this behavior. When invoked as an interactive shell with the name sh, bash looks for the variable ENV, expands its value if it is defined, and uses the expanded value as the name of a file to read and execute. Since a shell invoked as sh does not attempt to read and execute commands from any other startup files, the --rcfile option has no effect. A non-interactive shell invoked with the name sh does not attempt to read any other startup files. When invoked as sh, bash enters posix mode after the startup files are read.
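You can watch this name-based switch using the very same binary under two names (a sketch; the scratch directory is made up):

```shell
#!/bin/bash
# Copy the same bash binary to a file named "sh": bash decides whether to
# emulate sh purely from the name it was invoked under ($0).
dir=$(mktemp -d ./demo.XXXXXX)
cp /bin/bash "$dir/sh"

"$dir/sh" -c 'echo "invoked as: $0"'   # sh-compatibility (posix) mode
/bin/bash -c 'echo "invoked as: $0"'   # normal bash

rm -rf "$dir"
```

This is also why the symlink in the Dockerfile does not give you full bash behaviour: the link changes which binary runs, but the process is still invoked under the name sh.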

bash doesn't load node on remote ssh command

Excuse me if the subject is vague, but I have tried to describe my problem as best I can. I have a raspberry pi which I want to deploy to using codeship. Rsyncing the files works perfectly, but my problem occurs when I try to restart my application using pm2.
I have installed node and pm2 using the node version manager NVM.
ssh pi@server.com 'source /home/pi/.bashrc; cd project; pm2 restart app.js -x -- --prod'
bash: pm2: command not found
I have even added shopt -s expand_aliases at the bottom of my .bashrc, but it doesn't help.
How can I make it restart my application after I have done a deploy? Thanks in advance for your sage advice and better wisdom!
EDIT 1: My .bashrc http://pastie.org/10529200
My $PATH: /home/pi/.nvm/versions/node/v4.2.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/games:/usr/games
EDIT 2: I added /home/pi/.nvm/versions/node/v4.2.0/bin/pm2 which is the full path to pm2 and now I get the following error: /usr/bin/env: node: No such file or directory
It seems that even if I provide the full path, node isn't executed.
I think the problem is the assumption that the shell executing node has a full environment like an interactive ssh session does. Most likely this is not the case.
When an SSH session spawns a shell, it goes through a lot of gyrations to build an environment suitable for interactive work: things like inheriting from the login process, reading /etc/profile, reading ~/.profile. But when you're executing bash directly this isn't always guaranteed. In fact the $PATH might be completely empty.
When /usr/bin/env node executes it looks for node in your $PATH which in a non-interactive shell could be anything or empty.
Most systems have a default PATH=/bin:/usr/bin; typically /usr/local/bin is not included in the default environment.
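You can get a feel for such a stripped-down environment locally with env -i, which clears the environment before starting the shell (a rough stand-in for a bare ssh remote command; the exact fallback PATH varies by system and shell):

```shell
#!/bin/bash
# Full session environment: PATH inherited from your login
echo "inherited PATH: $PATH"

# Cleared environment: only the shell's built-in fallback (if any) remains
env -i /bin/sh -c 'echo "stripped  PATH: $PATH"'
```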
You could attempt to force a login with ssh using ssh … '/bin/bash -l -c "…"'.
You can also write a specialized script on the server that knows how the environment should be when executed outside of an interactive shell:
#!/bin/bash
# Example shell script; filename: /usr/local/bin/my_script.sh
export PATH=$PATH:/usr/local/bin
export NODE_PATH=/usr/local/share/node
export USER=myuser
export HOME=/home/myuser
source $HOME/.nvm/nvm.sh
cd /usr/bin/share/my_script
nvm use 0.12
/usr/bin/env node ./script_name.js
Then call it through ssh: ssh … '/usr/local/bin/my_script.sh'.
Beyond these ideas I don't see how to help further.
Like Sukima said, the likelihood is that this is due to an environment issue: SSH'ing into a server does not set up a full environment. You can, however, get around much of this by simply calling /etc/profile yourself at the start of your command using the . operator (which is the same as the source command):
ssh pi@server.com '. /etc/profile ; cd project; pm2 restart app.js -x -- --prod'
/etc/profile should itself be set up to call the .bashrc of the relevant user, which is why I have removed that part. I used to have to do this quite a lot for quick proof-of-concept scripts at a previous workplace. I don't know if it would be considered a nasty hack for a more permanent script, but it certainly works, and would require minimal modification to your existing script should that be an issue.
For me, I have to load nvm, as I installed node and yarn using nvm.
To load nvm during remote ssh execution, we call:
ssh :user#:host 'source ~/.nvm/nvm.sh; :other_commands_here'
Try:
ssh pi@server.com 'bash -l -c "source /home/pi/.bashrc; cd project; pm2 restart app.js -x -- --prod"'
You should enable some environment values with the source command or the dot command (.). Here is an example:
ssh pi@server.com '. /home/pi/.nvm/nvm.sh; cd project; pm2 restart app.js -x -- --prod'
What worked for me was adding this to my .bash_profile:
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
Source: https://stackoverflow.com/a/820533/1824444

"LANG: Undefined variable" when running a ssh command

I have found a problem in our FC8 linux machines with the LANG variable when running a command through ssh.
When in a terminal, I can see that my LANG variable is "es_ES"
[angelv@italia ~]$ echo $LANG
es_ES
If I connect back to my machine through ssh, there are no problems and $LANG is still "es_ES"
[angelv@italia ~]$ ssh italia
Last login: Mon Jul 26 12:51:12 2010 from XXXXXXXXXXXX
[angelv@italia ~]$ echo $LANG
es_ES
[angelv@italia ~]$
But if I try to run a command with ssh, then that variable is undefined...
[angelv@italia ~]$ ssh italia 'echo $LANG'
LANG: Undefined variable.
[angelv@italia ~]$
Does anybody know where I should look to find the culprit?
Quoth the SSH manual:
If command is specified, it is executed on the remote host instead of a login shell.
Login shells behave quite differently than non-login shells, most notably here in that non-login shells don't usually source the login .profile files. See your shell documentation for more detail.
On linux, your locales are generally listed in /usr/share/locale. You should check on the server machine what locale they use. It may differ from your machine's.
EDIT: sorry, I mistook the question.
In bash you should do
export LANG="es_ES"
In other shells you may have to use setenv instead of export.
You may be able to work around this feature of ssh by invoking your shell and asking it to act like a login shell:
ssh italia "sh -l -c 'echo \$LANG'"
Note the escaped dollar sign: without it, your local shell expands $LANG before ssh ever sends the command, which defeats the test. Depending on the actual shell you're using, the required option might be -l or something else.
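A detail worth checking with that command is which shell expands $LANG, since it sits inside two layers of quoting. Here is a local sketch of the pitfall, with a made-up variable WHERE standing in for LANG and a plain bash -c standing in for the remote side:

```shell
#!/bin/bash
export WHERE=outer

# Unescaped inside double quotes: the *outer* shell expands it first,
# so the inner shell only ever sees the already-substituted value.
bash -c "WHERE=inner; echo unescaped: $WHERE"   # prints: unescaped: outer

# Escaped: the literal text $WHERE reaches the inner shell, which expands
# it after setting its own value, as the remote login shell would for LANG.
bash -c "WHERE=inner; echo escaped: \$WHERE"    # prints: escaped: inner
```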