How do SSH environment variables work with a shell script file? - linux

If I run the command below and my install.sh has the following section:
export S3_URL=$PRD_URL
export S3_ACCESS_KEY=$PRD_S3_ACCESS_KEY
export S3_SECRET_KEY=$PRD_S3_SECRET_KEY
cat install.sh | ssh $PRD_USER@$PRD_HOST
Is $PRD_S3_ACCESS_KEY going to be resolved from my host's environment or from the environment variables of the remote server?

With cat install.sh | ssh ... the script text is sent verbatim, so the variables are resolved by the remote shell from the remote server's environment. If you want them resolved from your host instead, and assuming you have gettext installed (which contains envsubst), you can do
envsubst < install.sh | ssh $PRD_USER@$PRD_HOST
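Note that plain envsubst rewrites every $VARIABLE reference in the script from your local environment, including ones you may have wanted the remote shell to resolve. A minimal sketch that limits substitution to the three variables from the question:
# Only the variables named in the format string are substituted locally;
# everything else in install.sh is sent to the remote shell untouched
envsubst '$PRD_URL $PRD_S3_ACCESS_KEY $PRD_S3_SECRET_KEY' < install.sh | ssh $PRD_USER@$PRD_HOST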

Related

How to load environment paths when executing a command via ssh?

I'm trying to run a script (let's call it test.py) via ssh as follows:
ssh ${user}@${ip} "python3 test.py"
and the test.py is as follows
import os
# Do sth
...
os.system("emulator xxx")
...
The Android environment paths are exported in ~/.bashrc, but the above cmd failed due to missing ${ANDROID_SDK_ROOT}. I know it's because the ssh ${user}@${ip} cmd sets up a non-login, non-interactive shell, but I wonder if there is any solution?
PS:
I have tried #!/bin/bash --login and it failed.
Try this:
ssh -t ${user}@${ip} "bash -i -c 'python3 test.py'"
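If forcing an interactive shell is more than you need, another common workaround is to load the environment explicitly before running the script; a minimal sketch, assuming the Android exports are reachable from ~/.profile on the remote machine (on many distributions the default ~/.bashrc returns early for non-interactive shells, so ~/.profile is the safer place for them):
# Load the environment first, then run the script in the same remote shell
ssh ${user}@${ip} '. ~/.profile && python3 test.py'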

GitLab CI: shell script file cannot get environment variables

I have a shell script file deploy.sh:
#!/bin/sh
echo "CI_COMMIT_SHORT_SHA=$CI_COMMIT_SHORT_SHA" >> .env
exit
I create a .gitlab-ci.yml file with this script:
...
script:
  - ...
  - ssh -T -i "xxx.pem" -o "StrictHostKeyChecking=no" $EC2_ADDRESS 'bash -s' < deploy.sh
I connect to the EC2 instance and check the .env file; the result is:
CI_COMMIT_SHORT_SHA=
The deploy.sh file cannot get the value of the variable CI_COMMIT_SHORT_SHA.
I want the result to be:
CI_COMMIT_SHORT_SHA=xxxx
How can I do that? Please help me!
The script is executed on another server over ssh, and because of that the GitLab CI environment variables are not present there.
You may pass the env variables directly using the ssh command:
ssh <machine> CI_COMMIT_SHORT_SHA=$CI_COMMIT_SHORT_SHA <command>
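Applied to the command in the question, a minimal sketch (the variable sits inside double quotes, so it is expanded by the GitLab runner before ssh runs, and bash -s still reads deploy.sh from stdin):
ssh -T -i "xxx.pem" -o "StrictHostKeyChecking=no" $EC2_ADDRESS "CI_COMMIT_SHORT_SHA=$CI_COMMIT_SHORT_SHA bash -s" < deploy.sh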

ssh and sudo to a different user to execute commands on a remote Linux server

We have passwordless authentication between the servers for the root user. I am trying to run the alias on the remote server as below:
#ssh remoteserver runuser -l wasadmin wasstart
But it is not working. Any suggestions, or any other method to achieve it?
Based on your comments, since you need to sudo to wasadmin in order to run wasstart, you can try this:
ssh remoteserver 'echo /path/to/wasadmin wasstart | sudo su - wasadmin'
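A variant that avoids piping the command through su, assuming the remote login is root (so no sudo password is needed) and keeping the same hypothetical /path/to from above:
# -i gives wasadmin a login environment, similar to su - wasadmin
ssh remoteserver "sudo -i -u wasadmin /path/to/wasadmin wasstart"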
To add an alias in Linux you must run
alias yourcommandname='command'
Notice:
This will only work until you close or exit the current shell. To fix this, add the alias to your .bash_profile and run source .bash_profile.
Also, your profile file name depends on which shell you are using (bash, zsh, ...).
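Keep in mind that a plain ssh remoteserver wasstart runs a non-interactive shell, which neither reads .bash_profile nor expands aliases. A minimal sketch that forces an interactive shell so the alias is loaded (assuming the alias is defined in the startup files of the account you ssh in as), the same pattern as the bash -i -c answer above:
# -t allocates a terminal; bash -ic reads the startup files and expands aliases
ssh -t remoteserver "bash -ic 'wasstart'"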

Set environment variables in an AWS instance

I create an EC2 instance with Terraform using this resource:
resource "aws_instance" "devops-demo" {
ami = "jnkdjsndjsnfsdj"
instance_type = "t2.micro"
key_name = "demo-devops"
user_data = "${file("ops_setup.sh")}"
}
The user data executes a shell script that installs the Java JRE:
sudo yum remove java-1.7.0-openjdk -y
sudo wget -O /opt/server-jre-8u172-linux-x64.tar.gz --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u172-b11/a58eab1ec242421181065cdc37240b08/server-jre-8u172-linux-x64.tar.gz"
sudo tar xzf /opt/server-jre-8u172-linux-x64.tar.gz
export JAVA_HOME=/jdk1.8.0_172
export JRE_HOME=/jdk1.8.0_171/jre
export PATH=$JAVA_HOME/bin:$PATH
But none of the environment variables work. However, if I connect by ssh to the instance and I execute the export command, it works fine.
Is there any way to define the environment variables with terraform?
Using the export command only sets those variables for the current shell and all processes that start from that shell. It is not a persistent setting. Anything you wish to make permanent should be set in /etc/environment.
For example in userdata:
echo "JAVA_HOME=/jdk1.8.0_172" >> /etc/environment
This would add the JAVA_HOME=/jdk1.8.0_172 line to that file. Note, you should not use export inside that file.
The PATH variable is likely already defined in the /etc/environment file and you'll need to overwrite that appropriately if you are going to append additional paths to it.
There are really great details on setting environment variables available in this answer.
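For example, a user_data sketch along those lines, using the question's paths (the PATH value below is only illustrative; /etc/environment is plain KEY=VALUE text read by pam_env, so it cannot reference $JAVA_HOME or the previous $PATH):
# Append the Java variables; no shell expansion happens when this file is read
echo "JAVA_HOME=/jdk1.8.0_172" >> /etc/environment
echo "JRE_HOME=/jdk1.8.0_172/jre" >> /etc/environment
# Rewrite PATH in full, keeping the typical default entries plus the JDK bin directory
echo "PATH=/jdk1.8.0_172/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/bin" >> /etc/environment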
If you are using one of the Amazon Linux 2 AMIs, then /etc/environment will not work for you. However, you can add the environment variables to a new file at /etc/profile.d/ and this will work. Something like this would go in your user_data:
echo "JAVA_HOME=/jdk1.8.0_172" | sudo tee /etc/profile.d/java_setup.sh
echo "JRE_HOME=/jdk1.8.0_171/jre" | sudo tee -a /etc/profile.d/java_setup.sh
echo "PATH=$JAVA_HOME/bin:$PATH" | sudo tee -a /etc/profile.d/java_setup.sh

bash & s3cmd not working properly

Hi, I have a shell script which contains an s3cmd command, on Ubuntu 12.04 LTS.
I configured cron for this shell script; it works fine for the local part but doesn't push the file to S3. But when I run the shell script manually, it pushes the file to S3 without any error. I checked the log and found nothing about this. Here is my shell script.
#!/bin/bash
User="abc"
datab="abc_xyz"
pass="abc#123"
Host="abc1db.instance.com"
FILE="abc_rds`date +%d_%b_%Y`.tar.gz"
S3_BKP_PATH="s3://abc/db/"
cd /abc/xyz/scripts/
mysqldump -u $User $datab -h $Host -p$pass | gzip -c > $FILE | tee -a /abc/xyz/logs/app-bkp.log
s3cmd --recursive put /abc/xyz/scripts/$FILE $S3_BKP_PATH | tee -a /abc/xyz/logs/app-bkp.log
mv /abc/xyz/scripts/$FILE /abc/xyz/backup2015/Database/
#END
This is really weird. Any suggestion would be a great help.
Check whether the user configured in crontab has the correct permissions and keys in its environment.
I am guessing the keys are configured in an environment file, as they are not here in the script.
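A minimal sketch of the usual fix, assuming the keys live in the cron user's s3cmd config file (the /home/abc path is hypothetical, based on the question's naming), and capturing stderr so a failure actually shows up in the log:
# Point s3cmd at an explicit config file so it does not depend on $HOME under cron
s3cmd -c /home/abc/.s3cfg --recursive put /abc/xyz/scripts/$FILE $S3_BKP_PATH 2>&1 | tee -a /abc/xyz/logs/app-bkp.log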
