How to set Environment Variables on EC2 instance via User Data - linux

I am trying to set environment variables with EC2's user data, but nothing I do seems to work.
Here are the user data scripts I tried:
#!/bin/bash
echo "export HOST_URL=checkEmai-LoadBala-ICHJ82KG5C7P-23235232.us-east-1.elb.amazonaws.com" >> /env.sh
source /env.sh
And another:
#!/bin/bash
echo "#!/bin/bash" >> /env.sh
echo "export HOST_URL=checkEmai-LoadBala-ICHJ82KG5C7P-67323523.us-east-1.elb.amazonaws.com" >> /env.sh
chmod +x /env.sh
/env.sh
Neither does anything at all, yet if I log in and run source /env.sh or /env.sh myself, it works. So this must be something forbidden that I am trying to do.
Here is the output from /var/log/cloud-init-output.log when running with set -e -x:
+ echo 'export HOST_URL=checkEmai-LoadBala-ICHJ82KG5C7P-2141709021.us-east-1.elb.amazonaws.com'
+ source /env.sh
++ export HOST_URL=checkEmai-LoadBala-ICHJ82KG5C7P-2141709022.us-east-1.elb.amazonaws.com
++ HOST_URL=checkEmai-LoadBala-ICHJ82KG5C7P-2141709022.us-east-1.elb.amazonaws.com
Still, echo $HOST_URL is empty
As requested, the full user data script:
#!/bin/bash
set -e -x
echo "export HOST_URL=checkEmai-LoadBala-ICHJ82KG5C7P-2141709021.us-east-1.elb.amazonaws.com" >> /env.sh
source /env.sh
/startup.sh staging 2649

One of the more configurable approaches to defining environment variables for EC2 instances is to use Systems Manager Parameter Store. This makes it easier to manage different parameters for a large number of EC2 instances, both encrypted with AWS KMS and in plain text. It also allows you to change parameter values with minimal changes at the EC2 instance level. The steps are as follows.
Define string parameters (Encrypted with KMS or Unencrypted) in EC2 Systems Manager Parameter Store.
In the IAM role that EC2 assumes, give the required permissions to access the Parameter Store.
In the user data section, read the parameters using the AWS CLI commands for Systems Manager (get-parameter or get-parameters) and export them to environment variables, controlling the command output as required.
e.g. using the get-parameter command to retrieve the db_connection parameter (unencrypted):
export DB_CONNECTION=$(aws --region=us-east-2 ssm get-parameter --name 'db_connection' --query 'Parameter.Value' --output text)
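For an encrypted (SecureString) parameter, the same pattern works with --with-decryption added; a minimal sketch, assuming a parameter named db_password exists and the instance role is allowed to decrypt it:
# hypothetical parameter name; adjust region and name to your setup
export DB_PASSWORD=$(aws --region=us-east-2 ssm get-parameter --name 'db_password' --with-decryption --query 'Parameter.Value' --output text)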
Note: For more details on setting up AWS KMS keys, defining encrypted strings, managing IAM policies, etc., refer to the following articles:
Securing Application Secrets with EC2 Parameter Store
Simple Secrets Management via AWS’ EC2 Parameter Store

I find this to be a pretty easy way to set environment variables for all users using User Data. It allows me to configure applications so the same AMI can work with multiple scenarios:
#!/bin/bash
# quote the whole assignment so the values keep their quotes in /etc/profile
echo 'export DB_CONNECTION="some DB connection"' >> /etc/profile
echo 'export DB_USERNAME="my user"' >> /etc/profile
echo 'export DB_PASSWORD="my password"' >> /etc/profile
Now, all users will have DB_CONNECTION, DB_USERNAME and DB_PASSWORD set as environment variables.

The user data script on EC2 executes after boot in its own process. The environment variables get set in that process and disappear when the process exits. You will not see them in other processes, i.e., a login shell or any other program.
You will have to devise a way to get these environment variables into whatever program needs to see them.
Where do you need these variables to be available? In /startup.sh staging 2649?
EDIT
Try this:
#!/bin/bash
set -e -x
export HOST_URL="checkEmai-LoadBala-ICHJ82KG5C7P-2141709021.us-east-1.elb.amazonaws.com"
/startup.sh staging 2649
Then edit /startup.sh, and put the following line at the top:
echo $HOST_URL > /tmp/var
Boot the instance, and then paste /tmp/var here.

You can add a shell script at /etc/profile.d/yourscript.sh containing the environment variables you want to set.
Scripts in /etc/profile.d are sourced at every login shell, so the variables will be available to all users.
#!/bin/sh
echo 'export AWS_DEFAULT_REGION=ap-southeast-2' > ~/myconfiguration.sh
chmod +x ~/myconfiguration.sh
sudo cp ~/myconfiguration.sh /etc/profile.d/myconfiguration.sh
The above code creates a shell script that sets an environment variable for the AWS default region and copies it to /etc/profile.d.
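Since user data already runs as root, a shorter variant is to write the file directly; a minimal sketch of the same idea (adjust the variable to your needs):
#!/bin/bash
cat > /etc/profile.d/myconfiguration.sh << 'EOF'
export AWS_DEFAULT_REGION=ap-southeast-2
EOF
chmod +x /etc/profile.d/myconfiguration.sh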

You can use this script:
#!/bin/bash
echo HOST_URL=\"checkEmai-LoadBala-ICHJ82KG5C7P-23235232.us-east-1.elb.amazonaws.com\" >> /etc/environment
I created an EC2 instance with Amazon Linux AMI 2018.03.0 and added this user data to it and it works fine.
Refer to this answer for more details.
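A quick way to verify (a sketch; /etc/environment is only read at login, so use a fresh session):
cat /etc/environment   # confirm the line was written
# log out, log back in, then:
echo $HOST_URL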

After the user data script finishes its work, the process exits.
So whatever environment variable you export will not be there in the next process. One way is to put the exports in the .bashrc file so that they are available in later sessions too.
echo "export HOST_URL=checkEmai-LoadBala-ICHJ82KG5C7P-23235232.us-east-1.elb.amazonaws.com" >> ~/.bashrc

Adding this to the init script of the node will add environment variables on launch. They won't show up in the node configuration page, but they will be usable in any job.
#!/bin/bash
echo 'JAVA_HOME="/usr/lib/jvm/java-8-oracle/"' | sudo tee -a /etc/profile
This answer is similar to what hamx0r proposed; however, Jenkins doesn't have permission to echo into /etc/profile with or without sudo (the output redirection is performed by the unprivileged shell), which is why sudo tee -a is used here.

This may not be the exact answer to the OP's question, but it is similar. I thought of sharing it, as I wasted enough time searching for the answer before finally figuring it out.
Example assumes an AWS EC2 instance running Ubuntu.
If there is a scenario where you need to define the environment variables and also use them in the same bash session (the same user-data process), you can add the variables to /etc/profile, /etc/environment, or the /home/ubuntu/.zshrc file. I have not tried the /home/ubuntu/.profile file, BTW.
Assuming you add them to the .zshrc file:
sudo su ubuntu -c "$(cat << EOF
echo 'export PATH="/tmp:\$PATH"' >> /home/ubuntu/.zshrc
echo 'export ABC="XYZ"' >> /home/ubuntu/.zshrc
echo 'export PY_VERSION=3.8.1' >> /home/ubuntu/.zshrc
source /home/ubuntu/.zshrc
printenv > /tmp/envvars # To test: dump the environment to a file
EOF
)"
Once the user data has finished running, you can see that the environment variables added in the script have been written to the envvars file. Reloading the shell with source /home/ubuntu/.zshrc made the newly added variables available in the session.
(additional info) How to install zsh and oh-my-zsh?
sudo apt-get install -y zsh
sudo su ubuntu -c "$(cat << EOF
ssh-keyscan -H github.com >> /home/ubuntu/.ssh/known_hosts
git clone https://github.com/robbyrussell/oh-my-zsh.git /home/ubuntu/.oh-my-zsh
cp /home/ubuntu/.oh-my-zsh/templates/zshrc.zsh-template /home/ubuntu/.zshrc
echo DISABLE_AUTO_UPDATE="true" >> /home/ubuntu/.zshrc
cp /home/ubuntu/.oh-my-zsh/themes/robbyrussell.zsh-theme /home/ubuntu/.oh-my-zsh/custom
EOF
)"
sudo chsh -s /bin/zsh ubuntu
Wondering why I didn't add the environment variables to .bashrc? In the scenario I mentioned above (using the environment variables in the same user-data session), adding them to .bashrc won't work: .bashrc is only sourced for interactive Bash shells, and the user-data process is not interactive. So, unlike above,
source /home/ubuntu/.bashrc
won't make the variables available. You can see this written right at the beginning of the .bashrc file:
# If not running interactively, don't do anything
case $- in
*i*) ;;
*) return;;
esac

From this Medium.com article, you can put a script in UserData that writes to a file in /etc/profile.d, which will be sourced automatically whenever a new login shell starts.
Here is an example cloudformation.yaml
Parameters:
  SomeOtherResourceData:
    default: Fn::ImportValue: !Sub "SomeExportName"
Resources:
  WebApi:
    Type: AWS::EC2::Instance
    Properties:
      # ...
      UserData:
        Fn::Base64: !Sub
          - |
            #!/bin/bash
            cat > /etc/profile.d/load_env.sh << 'EOF'
            export ACCOUNT_ID=${AWS::AccountId}
            export REGION=${AWS::Region}
            export SOME_OTHER_RESOURCE_DATA=${SomeOtherResourceData}
            EOF
            chmod a+x /etc/profile.d/load_env.sh
And a YAML that exports something
# ...
Outputs:
  SomeExportName:
    Value: !Sub "${WebDb.Endpoint.Address}"
    Export:
      Name: SomeExportName

Here is what is working for me:
UserData:
  Fn::Base64:
    !Sub |
      #!/bin/bash
      echo "TEST=THISISTEST" >> /etc/environment

The easiest way is definitely to use AWS Elastic Beanstalk. It creates everything you need for you with very little effort, and it has the simplest way in the entire AWS ecosystem to set your environment variables.
Check it out; there are also some exhaustive tutorials for different languages:
https://docs.aws.amazon.com/elastic-beanstalk/index.html
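For example, once an Elastic Beanstalk environment exists, variables can be set from the EB CLI (a sketch; reuses the hostname from the question above):
eb setenv HOST_URL=checkEmai-LoadBala-ICHJ82KG5C7P-23235232.us-east-1.elb.amazonaws.com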

Related

Environment variables not loaded when executing command via `ssh` on remote machine

I want to write a script that executes several commands on a remote server, which includes running some applications. The .bashrc file on the remote machine defines the PATH variable so that these applications are on it.
But when I tried to use ssh <host> <command>, it seems that .bashrc was not loaded. ssh <host> 'echo $PATH' only shows a few paths like /usr/bin.
What confused me even more is that even ssh <host> "source ~/.bashrc; <command>" did not work. ssh <host> 'source ~/.bashrc; echo $PATH' still printed only a few paths. I checked with ssh <host> 'cat ~/.bashrc' and confirmed that this file does contain all the PATH definitions, which were not reflected in the environment where the command executed.
So after two hours of troubleshooting, I finally found the problem. It is caused by the following lines at the beginning of .bashrc:
# If not running interactively, don't do anything
case $- in
*i*) ;;
*) return;;
esac
This prevents anything after it from being executed in the environment of ssh <host> <command>.
If you have a similar issue, try checking the beginning of the .bashrc file.

Running bash script over SSH [duplicate]

I have to run a local shell script (Windows/Linux) on a remote machine.
I have SSH configured on both machine A and machine B. My script is on machine A, and it will run some of my code on the remote machine, machine B.
The local and remote computers can be either Windows or Unix-based systems.
Is there a way to do this using plink/ssh?
If Machine A is a Windows box, you can use Plink (part of PuTTY) with the -m parameter, and it will execute the local script on the remote server.
plink root@MachineB -m local_script.sh
If Machine A is a Unix-based system, you can use:
ssh root@MachineB 'bash -s' < local_script.sh
You shouldn't have to copy the script to the remote server to run it.
This is an old question, and Jason's answer works fine, but I would like to add this:
ssh user@host <<'ENDSSH'
#commands to run on remote host
ENDSSH
This can also be used with su and commands which require user input. (Note the quoted heredoc delimiter: it prevents local variable expansion.)
Since this answer keeps getting bits of traffic, I would add even more info to this wonderful use of heredoc:
You can nest commands with this syntax, and that's the only way nesting seems to work (in a sane way)
ssh user@host <<'ENDSSH'
#commands to run on remote host
ssh user@host2 <<'END2'
# Another bunch of commands on another host
wall <<'ENDWALL'
Error: Out of cheese
ENDWALL
ftp ftp.example.com <<'ENDFTP'
test
test
ls
ENDFTP
END2
ENDSSH
You can actually have a conversation with some services like telnet, ftp, etc. But remember that a heredoc just sends stdin as text; it doesn't wait for a response between lines.
I just found out that you can indent the insides with tabs if you use <<-END!
ssh user@host <<-'ENDSSH'
#commands to run on remote host
ssh user@host2 <<-'END2'
# Another bunch of commands on another host
wall <<-'ENDWALL'
Error: Out of cheese
ENDWALL
ftp ftp.example.com <<-'ENDFTP'
test
test
ls
ENDFTP
END2
ENDSSH
(I think this should work)
Also see
http://tldp.org/LDP/abs/html/here-docs.html
Also, don't forget to escape variables if you want to pick them up from the destination host.
This has caught me out in the past.
For example:
user@host> ssh user2@host2 "echo \$HOME"
prints out /home/user2
while
user@host> ssh user2@host2 "echo $HOME"
prints out /home/user
Another example:
user@host> ssh user2@host2 "echo hello world | awk '{print \$1}'"
prints out "hello" correctly.
This is an extension to YarekT's answer to combine inline remote commands with passing ENV variables from the local machine to the remote host so you can parameterize your scripts on the remote side:
ssh user@host ARG1=$ARG1 ARG2=$ARG2 'bash -s' <<'ENDSSH'
# commands to run on remote host
echo $ARG1 $ARG2
ENDSSH
I found this exceptionally helpful by keeping it all in one script so it's very readable and maintainable.
Why this works. ssh supports the following syntax:
ssh user@host remote_command
In bash, we can define environment variables just prior to running a command, all on a single line, like so:
ENV_VAR_1='value1' ENV_VAR_2='value2' bash -c 'echo $ENV_VAR_1 $ENV_VAR_2'
That makes it easy to define variables prior to running a command. In this case echo is our command we're running. Everything before echo defines environment variables.
So we combine those two features and YarekT's answer to get:
ssh user@host ARG1=$ARG1 ARG2=$ARG2 'bash -s' <<'ENDSSH'...
In this case we are setting ARG1 and ARG2 to local values. Everything after user@host is sent as the remote_command. When the remote machine executes the command, ARG1 and ARG2 are set to the local values thanks to local command-line evaluation, which defines the environment variables on the remote server; it then executes the bash -s command using those variables. Voila.
<hostA_shell_prompt>$ ssh user@hostB "ls -la"
That will prompt you for a password, unless you have copied your hostA user's public key to the authorized_keys file in the ~/.ssh directory of the user's home on hostB. That allows passwordless authentication (if it is accepted as an auth method in the SSH server's configuration).
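A quick way to set that up, assuming password authentication is still enabled for the initial copy:
ssh-copy-id user@hostB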
I've started using Fabric for more sophisticated operations. Fabric requires Python and a couple of other dependencies, but only on the client machine. The server need only be an SSH server. I find this tool to be much more powerful than shell scripts handed off to SSH, and well worth the trouble of getting set up (particularly if you enjoy programming in Python). Fabric handles running scripts on multiple hosts (or hosts of certain roles), helps facilitate idempotent operations (such as adding a line to a config script, but not if it's already there), and allows construction of more complex logic (such as the Python language can provide).
cat ./script.sh | ssh <user>@<host>
chmod +x script.sh
ssh -i key-file root@111.222.3.444 < ./script.sh
Try running ssh user@remote sh ./script.unx.
Assuming you mean you want to do this automatically from a "local" machine, without manually logging into the "remote" machine, you should look into a Tcl extension known as Expect; it is designed precisely for this sort of situation. I've also provided a link to a script for logging in and interacting via SSH.
https://www.nist.gov/services-resources/software/expect
http://bash.cyberciti.biz/security/expect-ssh-login-script/
ssh user@hostname ". ~/.bashrc; cd path-to-file/; . filename.sh"
It is highly recommended to source the environment file (.bashrc/.bash_profile/.profile) before running something on the remote host, because the target and source hosts' environment variables may differ.
I use this one to run a shell script on a remote machine (tested on /bin/bash):
ssh deploy@host . /home/deploy/path/to/script.sh
If you want to execute a command like this:
temp=`ls -a`
echo $temp
the command in backticks will cause errors (inside double quotes, the backticks are expanded locally).
The command below solves this problem:
ssh user@host '''
temp=`ls -a`
echo $temp
'''
If the script is short and meant to be embedded inside your script, and you are running under a bash shell with bash also available on the remote side, you may use declare to transfer the local context to the remote host. Define the variables and functions containing the state that will be transferred to the remote. Define a function that will be executed on the remote side. Then, inside a here document read by bash -s, you can use declare -p to transfer the variable values and declare -f to transfer function definitions to the remote.
Because declare takes care of the quoting and will be parsed by the remote bash, the variables are properly quoted and the functions are properly transferred. You may just write the script locally; usually I do one long function with the work I need to do on the remote side. The context has to be hand-picked, but the following method is "good enough" for any short script and is safe - it should properly handle all corner cases.
somevar="spaces or other special characters"
somevar2="!##$%^"
another_func() {
mkdir -p "$1"
}
work() {
another_func "$somevar"
touch "$somevar"/"$somevar2"
}
ssh user@server 'bash -s' <<EOT
$(declare -p somevar somevar2) # transfer variables values
$(declare -f work another_func) # transfer function definitions
work # call the function
EOT
The answer here (https://stackoverflow.com/a/2732991/4752883) works great if
you're trying to run a script on a remote Linux machine using plink or ssh.
It will work if the script has multiple lines on Linux.
However, if you are trying to run a batch script located on a local
Linux/Windows machine, your remote machine is Windows, and the script consists
of multiple lines, then
plink root@MachineB -m local_script.bat
won't work.
Only the first line of the script will be executed. This is probably a
limitation of plink.
Solution 1:
To run a multiline batch script (especially if it's relatively simple,
consisting of a few lines):
If your original batch script is as follows:
cd C:\Users\ipython_user\Desktop
python filename.py
you can combine the lines using the "&&" separator, as suggested here
(https://stackoverflow.com/a/8055390/4752883), in your local_script.bat file:
cd C:\Users\ipython_user\Desktop && python filename.py
After this change, you can then run the script as pointed out here by
@JasonR.Coombs (https://stackoverflow.com/a/2732991/4752883) with:
plink root@MachineB -m local_script.bat
Solution 2:
If your batch script is relatively complicated, it may be better to use a batch
script that encapsulates the plink command as well, as pointed out here
by @Martin (https://stackoverflow.com/a/32196999/4752883):
rem Open tunnel in the background
start plink.exe -ssh [username]@[hostname] -L 3307:127.0.0.1:3306 -i "[SSH key]" -N
rem Wait a second to let Plink establish the tunnel
timeout /t 1
rem Run the task using the tunnel
"C:\Program Files\R\R-3.2.1\bin\x64\R.exe" CMD BATCH qidash.R
rem Kill the tunnel
taskkill /im plink.exe
This Expect script SSHes into a target remote machine and runs some commands there. Don't forget to install Expect before running it (on macOS: brew install expect).
#!/usr/bin/expect
set username "enterusenamehere"
set password "enterpasswordhere"
set hosts "enteripaddressofhosthere"
spawn ssh $username@$hosts
expect "$username@$hosts's password:"
send -- "$password\n"
expect "$"
send -- "somecommand on target remote machine here\n"
sleep 5
expect "$"
send -- "exit\n"
You can use runoverssh:
sudo apt install runoverssh
runoverssh -s localscript.sh user host1 host2 host3...
-s runs a local script remotely
Useful flags:
-g use a global password for all hosts (single password prompt)
-n use SSH instead of sshpass, useful for public-key authentication
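For example, to run a local script on two hosts using public-key authentication (a sketch, assuming keys are already in place):
runoverssh -n -s localscript.sh user host1 host2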
If it's a single script, it's fine with the above solutions.
I would set up Ansible to do the job. It works in the same way (Ansible uses SSH to execute the scripts on the remote machine, for both Unix and Windows).
It will be more structured and maintainable.
It is unclear if the local script uses locally set variables, functions, or aliases.
If it does, this should work:
myscript.sh:
#!/bin/bash
myalias $myvar
myfunction $myvar
It uses $myvar, myfunction, and myalias. Let us assume they are set locally and not on the remote machine.
Make a bash function that contains the script:
eval "myfun() { `cat myscript.sh`; }"
Set variable, function, and alias:
myvar=works
alias myalias='echo This alias'
myfunction() { echo This function "$#"; }
And "export" myfun, myfunction, myvar, and myalias to server using env_parallel from GNU Parallel:
env_parallel -S server -N0 --nonall myfun ::: dummy
Extending the answer from @cglotr: to write an inline command, use printf. It is useful for simple commands, and it supports multiple lines using the escape character '\n'.
Example:
printf "cd /to/path/your/remote/machine/log \n tail -n 100 Server.log" | ssh <user>@<host> 'bash -s'
Note: don't forget to add bash -s.
There is another approach: you can copy your script to the host with the scp command, then execute it easily.
First, copy the script over to Machine B using scp:
[user@machineA]$ scp /path/to/script user@machineB:/home/user/path
Then, just run the script:
[user@machineA]$ ssh user@machineB "/home/user/path/script"
This will work if you have given executable permission to the script.

JENKINS pass parameters as inputs to remote host script

We have a (3) tier system for Jenkins projects and builds.
We have set up a trust relationship between the worker node and the remote host
via the SSH plugin and keys.
jenkinsrh-01 = main jenkins server (gui for projects/builds, dashboard, etc)
sys-07 = worker node (projects / builds are run from this remote node)
raloda10 = remote host (target of builds)
In the GUI configuration screen on the main Jenkins server (jenkinsrh-01), we have the code below for our project, with hardcoded parameter values et al.
The $SCRIPT is located on the remote host, and we want to pass the parameter values for use by the remote host's script.
Build --> Execute Shell --> Command
#!/bin/bash
export ORACLE_USER="oracle"
export ODA_HOST="raloda10"
export DATABASE="DEV11G05"
export SCHEMA="ASA14101X5"
export COMMENT="good state archive"
export SCRIPT="/u01/app/oracle/databases/dev11g05/bod/jenky_test.sh"
sudo -i -u ${ORACLE_USER} ssh ${ODA_HOST} ${SCRIPT}
On the target remote host (raloda10), the target script
is a simple test that echoes the values of the parameters passed to it via the "export" lines in the build step above. The contents of the remote host script (jenky_test.sh) are:
#!/bin/bash
#
#
echo
echo This is correct target script on remote host
echo
echo 1. Source database: ${DATABASE}
echo 2. Name of schema: ${SCHEMA}
echo 3. Comments: ${COMMENT}
echo
echo ${DATABASE}
echo ${SCHEMA}
echo ${COMMENT}
#
exit
The worker build can access the remote host, find the remote host script, and run the shell script, "jenky_test.sh".
But the issue is that none of the build parameters are passed into the jenky_test.sh script when it runs. It does not echo back the values; they are blank, as evidenced in the "Console Output" below:
Started by user Donald Collins
[EnvInject] - Loading node environment variables.
Building remotely on sys-07 (SYS-07) in workspace /var/lib/jenkins/workspace/fails_Send_Jenkins_Parameters_fromSlave_as_Inputs_for_Script_on_Remote_Host
[fails_Send_Jenkins_Parameters_fromSlave_as_Inputs_for_Script_on_Remote_Host] $ /bin/bash /tmp/hudson7103389345936604753.sh
This is correct target script on remote host
1. Source database:
2. Name of schema:
3. Comments:
The console output should show the values of the exported variables (parameters) on lines 1, 2, and 3 above. Instead they are blank.
I've tried all sorts of combinations of different coding for the "sudo" call in the Command step of Execute Shell. Nothing seems to be able to get the parameter values passed as inputs for jenky_test.sh on the remote target host.
I'm sure I'm missing something obvious here, as what I'm trying to do with Jenkins is "Jenkins 101" stuff ;) ...
Any help or advice is greatly appreciated.
Best Regards,
Donald
In your local script, use a "heredoc": the ssh command reads the commands to be executed on the remote host up to the EOF marker.
sudo -i -u ${ORACLE_USER} ssh ${ODA_HOST} << EOF
export ORACLE_USER="oracle"
export ODA_HOST="raloda10"
export DATABASE="DEV11G05"
export SCHEMA="ASA14101X5"
export COMMENT="good state archive"
/u01/app/oracle/databases/dev11g05/bod/jenky_test.sh
EOF
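An alternative sketch (untested; it reuses the inline-variable technique from the 'ARG1=$ARG1' answer above) is to pass the values as part of the remote command itself:
sudo -i -u ${ORACLE_USER} ssh ${ODA_HOST} "DATABASE='DEV11G05' SCHEMA='ASA14101X5' COMMENT='good state archive' ${SCRIPT}"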

non-shell command running inside shell script

I am trying to run a non-shell-script command inside a .sh script.
The current code looks like this:
#!/bin/bash
echo "Enter name of the folder you want to join!"
read folder
cd ~/domains/name/public_html/$folder/sites/default/
echo "enabling u7seven theme!"
drush en u7seven -y;
echo "disabling overlay!"
drush dis overlay -y;
echo "running all-folder script!"
u7d7up all-folder
The code that is not shell code (it is a local script somewhere on the server):
u7d7up all-folder
However, if I go and manually call this function u7d7up all-folder from the site root, it works.
Since I have more than 10 sites, I'd like to just call the script without going in and doing all these commands manually.
You might be facing an issue due to u7d7up not being present in the $PATH variable.
A robust way to write your script is to use the absolute path of u7d7up; you would also need to check the permissions on it.
#!/bin/bash
echo "Enter name of the folder you want to join!"
read folder
cd ~/domains/name/public_html/$folder/sites/default/
echo "enabling u7seven theme!"
drush en u7seven -y;
echo "disabling overlay!"
drush dis overlay -y;
echo "running all-folder script!"
/absolute-path-to/u7d7up /absolute-path-to/all-folder
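To find the absolute path, run this from a shell where the command already works (a quick check):
command -v u7d7up                 # print the full path
ls -l "$(command -v u7d7up)"      # confirm it is executable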

User environment is not sourced with chroot

I have a little problem with a chroot environment and I hope you can help me :)
Here's my story:
1 - I created a user demo (with a home like /home/demo) and I chrooted him thanks to a script, /bin/chrootshell, which is as follows:
#!/bin/bash
exec -c /usr/sbin/chroot /home/$USER /bin/bash --login -i
2 - Usual login authentication is disabled for this user, so I have to use su - demo to log in as him.
Everything works well (all the chrooted system commands, my Java configuration, etc.). But each time I become user demo, it seems my .bashrc or /etc/profile is not sourced... and I don't know why.
But if I launch bash manually, it works, as you can see here:
root#test:~# su - demo
bash-4.1$ env
PWD=/
SHELL=/bin/chrootshell
SHLVL=1
_=/bin/env
bash-4.1$ bash
bash-4.1$ env
PWD=/
SHLVL=2
SHELL=/bin/chrootshell
PLOP=export variable test
_=/bin/env
As you can see, my $PLOP variable (defined in /.bashrc == /home/demo/.bashrc) is set correctly in the second bash, but I don't know why.
Thanks in advance if you have any clue about my issue :)
Edit: What I actually don't understand is why SHELL=/bin/chrootshell. In my chroot env I declare my demo user with the /bin/bash shell.
As far as I can tell the behaviour that you are experiencing is bash working as designed.
In short: when bash is started as a login shell (which is what happens when you call bash with --login), it reads .profile but not .bashrc. When bash is started as a non-login shell, it reads .bashrc but not .profile.
Read the bash manual chapter about startup files for more information.
My suggestion to work around this design decision is to create a .bash_profile with the following content:
# note: leave ~ unquoted so the shell expands it to $HOME
if [ -f ~/.profile ]; then
  source ~/.profile
fi
if [ -f ~/.bashrc ]; then
  source ~/.bashrc
fi
That will make bash read .profile and .bashrc when started as a login shell, and only .bashrc when started as a non-login shell. Now you can put the things that need to be done once (at login) in .profile, and the things that need to be done every time in .bashrc.
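To verify with the question's setup (a sketch): become the demo user again and check a variable defined in .bashrc:
su - demo
echo $PLOP   # should now print the value set in .bashrc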
