Set environment variables in an AWS instance - linux

I create an EC2 instance with Terraform using this resource:
resource "aws_instance" "devops-demo" {
ami = "jnkdjsndjsnfsdj"
instance_type = "t2.micro"
key_name = "demo-devops"
user_data = "${file("ops_setup.sh")}"
}
The user data runs a shell script that installs the Java server JRE:
sudo yum remove java-1.7.0-openjdk -y
sudo wget -O /opt/server-jre-8u172-linux-x64.tar.gz --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u172-b11/a58eab1ec242421181065cdc37240b08/server-jre-8u172-linux-x64.tar.gz"
sudo tar xzf /opt/server-jre-8u172-linux-x64.tar.gz
export JAVA_HOME=/jdk1.8.0_172
export JRE_HOME=/jdk1.8.0_172/jre
export PATH=$JAVA_HOME/bin:$PATH
But none of the environment variables are set. However, if I connect to the instance over SSH and run the export commands manually, they work fine.
Is there any way to define the environment variables with terraform?

Using the export command only sets those variables for the current shell and all processes that start from that shell. It is not a persistent setting. Anything you wish to make permanent should be set in /etc/environment.
For example in userdata:
echo "JAVA_HOME=/jdk1.8.0_172" >> /etc/environment
This would add the JAVA_HOME=/jdk1.8.0_172 line to that file. Note, you should not use export inside that file.
The PATH variable is likely already defined in the /etc/environment file and you'll need to overwrite that appropriately if you are going to append additional paths to it.
There are more details on setting environment variables in this answer.
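Putting it together, a user_data sketch along these lines should work; the /opt/jdk1.8.0_172 paths are assumptions about where the tarball gets unpacked, so adjust them to your layout (user_data runs as root, so sudo isn't needed):
#!/bin/bash
# Unpack the JRE under /opt (assumed location).
cd /opt
tar xzf server-jre-8u172-linux-x64.tar.gz
# Persist the variables system-wide. /etc/environment is not a shell script:
# no "export" and no variable expansion, so PATH has to be written out literally.
echo "JAVA_HOME=/opt/jdk1.8.0_172" >> /etc/environment
echo "JRE_HOME=/opt/jdk1.8.0_172/jre" >> /etc/environment
echo "PATH=/opt/jdk1.8.0_172/bin:/usr/local/bin:/usr/bin:/bin" >> /etc/environment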

If you are using one of the Amazon Linux 2 AMIs, then /etc/environment will not work for you. However, you can add the environment variables to a new file at /etc/profile.d/ and this will work. Something like this would go in your user_data:
echo "JAVA_HOME=/jdk1.8.0_172" | sudo tee /etc/profile.d/java_setup.sh
echo "JRE_HOME=/jdk1.8.0_171/jre" | sudo tee -a /etc/profile.d/java_setup.sh
echo "PATH=$JAVA_HOME/bin:$PATH" | sudo tee -a /etc/profile.d/java_setup.sh

Related

How do SSH environment variables work with a shell script file?

If I run the command below, and my install.sh has the following section:
export S3_URL=$PRD_URL
export S3_ACCESS_KEY=$PRD_S3_ACCESS_KEY
export S3_SECRET_KEY=$PRD_S3_SECRET_KEY
cat install.sh | ssh $PRD_USER@$PRD_HOST
Will $PRD_S3_ACCESS_KEY be resolved from my host, or from the environment variables on the remote server?
Assuming you have gettext installed (which contains envsubst), you can do
envsubst < install.sh | ssh $PRD_USER@$PRD_HOST
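To see what the remote side will actually receive, you can run envsubst locally first. A quick sketch with placeholder values; by default envsubst replaces every exported variable it finds, so you can pass a format string to restrict it to the PRD_* ones:
# Placeholder values, exported so envsubst can see them.
export PRD_URL="https://example.com"
export PRD_S3_ACCESS_KEY="dummy-access-key"
export PRD_S3_SECRET_KEY="dummy-secret-key"
# Inspect the substituted script before sending it anywhere.
envsubst '$PRD_URL $PRD_S3_ACCESS_KEY $PRD_S3_SECRET_KEY' < install.sh
# The remote shell then receives a script with the local values already baked in.
envsubst '$PRD_URL $PRD_S3_ACCESS_KEY $PRD_S3_SECRET_KEY' < install.sh | ssh "$PRD_USER@$PRD_HOST"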

Set environment variables for a different user in Docker

I am aware that we can pass the -e option to docker run to set environment variables in a container, but this only sets the PATH for the root user. Say I have another user called admin and want to set the environment variables for that user as well; how can I achieve that?
This is the command I tried to set environment variables.
docker run -t -d -v /usr/hdp:/usr/hdp -v /usr/lib/jvm/:/usr/lib/jvm/ -e JAVA_HOME="${java_home}" -e HADOOP_HOME="${hadoop_home}" -e PATH=$PATH:$JAVA_HOME/bin -e PATH=$PATH:$HADOOP_HOME/bin gtimage
This only sets PATH for the root user, but not for my admin user, which was created by software I installed during the Docker build.
I don't have a perfect solution to my question above, but I tried something like the following to log in as that user and set environment variables for it. I don't recommend this approach unless you can't find anything better; please let me know if you do.
docker exec $containervalue bash -c 'env | grep PATH >> temp && chmod 775 temp && mv temp /opt/nagios'
docker exec --user ngadmin $containervalue bash -c 'cat ~/temp >> ~/.bashrc && source ~/.bashrc'
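If the goal is just to make the variables visible to every user's login shell inside the running container, a less fragile sketch is to drop them into the container's /etc/profile.d, mirroring the profile.d approach above; the custom_env.sh file name is made up, and this assumes docker exec runs as root in your image:
# Write the values (already present in the container's environment via -e)
# into /etc/profile.d so any user's login shell picks them up.
docker exec "$containervalue" bash -c '
  {
    echo "export JAVA_HOME=$JAVA_HOME"
    echo "export HADOOP_HOME=$HADOOP_HOME"
    echo "export PATH=\$JAVA_HOME/bin:\$HADOOP_HOME/bin:\$PATH"
  } > /etc/profile.d/custom_env.sh
'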

How to sudo a bash script whose parent dir is in the $PATH?

For example
~/Desktop/scripts is in $PATH
cat ~/Desktop/scripts/hi
#!/bin/bash
echo hi
What I have tried (the current dir is ~):
hi # CLI said "hi"
sudo -E hi # sudo: hi: command not found
se hi # sudo: hi: command not found # alias se="sudo -E "
How to sudo the script?
Try the following:
sudo PATH="${PATH}" bash -c "hi"
For the explanation, please see sudoers(5):
By default, the env_reset option is enabled. This causes commands to be executed with a new, minimal environment. On AIX (and Linux systems without PAM), the environment is initialized with the contents of the /etc/environment file. The new environment contains the TERM, PATH, HOME, MAIL, SHELL, LOGNAME, USER, USERNAME and SUDO_* variables in addition to variables from the invoking process permitted by the env_check and env_keep options. This is effectively a whitelist for environment variables.
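Two related workarounds, assuming hi resolves in your own shell: resolve the full path before sudo sees it, or hand your PATH to env explicitly:
# Resolve the script's full path in the calling shell, so sudo's PATH doesn't matter.
sudo "$(command -v hi)"
# Or pass your PATH through env; env then finds and runs hi with that PATH.
sudo env "PATH=$PATH" hi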

variable in bash script not working as expected

I have this script:
#!/bin/bash
VAR="eric.sql"
sudo mysqldump -c -u username -p1234 dbname > $VAR
But if i run this script I get this error:
: Protocol error 3: mysql-export.sh: cannot create eric.sql
But if I don't use the variable, but just this:
#!/bin/bash
VAR="eric.sql"
sudo mysqldump -c -u username -p1234 dbname > eric.sql
... it works fine. What am I doing wrong?
The problem was that the script had Windows-style line breaks (I wrote it in Notepad). After I rewrote the script with nano, it was solved.
Thanks for the answers!
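For anyone hitting the same symptom, you can confirm and strip the Windows line endings without retyping the script; dos2unix may need to be installed separately:
# "with CRLF line terminators" in the output confirms Windows line endings.
file mysql-export.sh
# Either of these strips the carriage returns in place.
dos2unix mysql-export.sh
sed -i 's/\r$//' mysql-export.sh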
sudo can change the $PATH variable, depending on your security policy.
-E  The -E (preserve environment) option will override the env_reset option in sudoers(5). It is only available when either the matching command has the SETENV tag or the setenv option is set in sudoers(5).
You could add the full path of the file, or remove sudo in that script.
This should also work:
sudo PATH="$PATH" mysqldump -c -u username -p1234 dbname > "$VAR"

bash & s3cmd not working properly

Hi, I have a shell script containing an s3cmd command, on Ubuntu 12.04 LTS.
I configured a cron job for this shell script; it creates the file locally but doesn't push it to S3. When I run the shell script manually, it pushes the file to S3 without any error. I checked the log and found nothing about this. Here is my shell script.
#!/bin/bash
User="abc"
datab="abc_xyz"
pass="abc#123"
Host="abc1db.instance.com"
FILE="abc_rds`date +%d_%b_%Y`.tar.gz"
S3_BKP_PATH="s3://abc/db/"
cd /abc/xyz/scripts/
mysqldump -u $User $datab -h $Host -p$pass | gzip -c > $FILE | tee -a /abc/xyz/logs/app-bkp.log
s3cmd --recursive put /abc/xyz/scripts/$FILE $S3_BKP_PATH | tee -a /abc/xyz/logs/app-bkp.log
mv /abc/xyz/scripts/$FILE /abc/xyz/backup2015/Database/
#END
This is really weird. Any suggestion would be a great help.
Check whether the user configured in crontab has the correct permissions and has the keys available in its environment.
I am guessing the keys are configured in an env file, as they are not in the script.
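A common fix is to stop relying on the cron user's implicit environment and point s3cmd at its config file explicitly; the /home/ubuntu path and the backup.sh name below are assumptions, so adjust them to your setup:
# In the script: pass the config file instead of relying on $HOME being set by cron.
s3cmd -c /home/ubuntu/.s3cfg --recursive put /abc/xyz/scripts/$FILE $S3_BKP_PATH
# Or in the crontab: give the job a HOME so ~/.s3cfg resolves, and capture stderr too.
# HOME=/home/ubuntu
# 30 2 * * * /abc/xyz/scripts/backup.sh >> /abc/xyz/logs/app-bkp.log 2>&1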
