How to load environment paths when executing a command via ssh? - linux

I'm trying to run a script (let's call it test.py) via ssh as follows:
ssh ${user}@${ip} "python3 test.py"
and test.py is as follows:
import os
# Do sth
...
os.system("emulator xxx")
...
The Android environment paths are exported in ~/.bashrc, but the above command fails due to a missing ${ANDROID_SDK_ROOT}. I know it's because the ssh ${user}@${ip} command sets up a non-login, non-interactive shell, but I wonder if there is any solution?
PS:
I have tried #!/bin/bash --login and it failed.

Try this:
ssh -t ${user}@${ip} "bash -i -c 'python3 test.py'"
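The -i flag is what makes this work: a plain ssh command runs a non-interactive shell that never reads ~/.bashrc, while bash -i sources it before running the command. A minimal local sketch of the difference (using a throwaway HOME so the real ~/.bashrc is untouched; the /opt/android-sdk path is just a placeholder):

```shell
# Demonstrate why `bash -i -c` sees variables exported in ~/.bashrc
# while a plain non-interactive shell does not.
tmp_home=$(mktemp -d)
echo 'export ANDROID_SDK_ROOT=/opt/android-sdk' > "$tmp_home/.bashrc"

# Non-interactive shell: ~/.bashrc is skipped, so the variable is empty
env -u ANDROID_SDK_ROOT HOME="$tmp_home" bash -c 'echo "non-interactive: [$ANDROID_SDK_ROOT]"'

# Interactive shell (-i): ~/.bashrc is sourced, so the variable is set
env -u ANDROID_SDK_ROOT HOME="$tmp_home" bash -i -c 'echo "interactive: [$ANDROID_SDK_ROOT]"' 2>/dev/null

rm -rf "$tmp_home"
```

(The 2>/dev/null hides the harmless "cannot set terminal process group" warning an interactive shell prints when there is no tty, which is also the case over ssh without -t.)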


I can't start a virtualenv from a shell script

The script is meant to open a new terminal, change into a directory, activate the virtual environment inside it, and then run my service within that environment:
#!/bin/bash
gnome-terminal -- bash -c "cd mydirectory/project && source ~/myenv/bin/activate && python3 run.py runserver; exec bash"
But when I run it, the environment isn't activated. The other commands work.
You can create a shell file ~/mydirectory/project/runserver.sh as:
cd $HOME/mydirectory/project
source ~/myenv/bin/activate
python3 run.py runserver
Then run:
gnome-terminal -- bash --rcfile ~/mydirectory/project/runserver.sh
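The trick here is that --rcfile replaces ~/.bashrc for an interactive shell: bash sources the given file first (which is where the venv activation happens) and then stays interactive. A local sketch of that behaviour, with a throwaway rcfile standing in for runserver.sh and a plain variable assignment standing in for `source ~/myenv/bin/activate`:

```shell
# Sketch: --rcfile makes bash source the given file before anything else,
# which is how runserver.sh gets a chance to activate the virtualenv.
rc=$(mktemp)
echo 'MY_VENV=/home/me/myenv' > "$rc"   # stand-in for activating the venv

# -i forces an interactive shell so the rcfile is actually read
bash --rcfile "$rc" -i -c 'echo "venv: $MY_VENV"' 2>/dev/null

rm -f "$rc"
```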

ssh sudo to a different user execute commands on remote Linux server

We have passwordless authentication between the servers for the root user. I am trying to run the alias on the remote server as below:
#ssh remoteserver runuser -l wasadmin wasstart
But it is not working. Any suggestions, or any other method to achieve it?
Based on your comments, since you need to sudo to wasadmin in order to run wasstart, you can try this:
ssh remoteserver 'echo /path/to/wasadmin wasstart | sudo su - wasadmin'
To add an alias in Linux, run:
alias yourcommandname='command'
Notice:
This will only last until you close or exit the current shell. To fix this, add the alias to your .bash_profile and run source .bash_profile.
Also, the profile file name depends on which shell you are using: bash, zsh, ...
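As a quick local check of why the profile file matters: an alias is only visible to interactive shells that read the file defining it. A sketch with a throwaway rcfile standing in for .bash_profile (greet is a hypothetical alias name):

```shell
# Sketch: aliases exist only in shells that read the file defining them.
rc=$(mktemp)
echo "alias greet='echo hello'" > "$rc"

# Interactive shell that reads the rcfile: the alias works
bash --rcfile "$rc" -i -c 'greet' 2>/dev/null

# A fresh shell that never read the file has no such alias
bash -c 'greet' 2>/dev/null || echo "alias not found"

rm -f "$rc"
```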

How to do ssh in a bash script with lot of commands?

I tried to run the following command in a bash script, but only the part up to ./install.sh csreplica is executed. The rest of the commands are not called at all.
ssh $node2user@$node2 "cd /tmp; tar -xf $mmCS.tar; cd $mmCS; ./install.sh csreplica; ./install.sh keepalived $vip low;./install.sh haproxy $node1:8080 $node2:8080 $vip:8080; ./install.sh confmongo $dbPath"
You can give ssh a script on standard input:
ssh $node2user@$node2 < my_script.sh
If I have to execute a complex script over SSH, I usually write the script locally, copy it to the target machine with scp, and then execute it there:
scp foo.sh $node2user@$node2:
ssh $node2user@$node2 "bash ./foo.sh"
That way, I can debug the script simply by invoking it with bash -x, and I can use the full power of Bash.
Alternatively, you can use
ssh $node2user@$node2 "set +x; cd /tmp; ..."
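Since `ssh host < my_script.sh` simply feeds the file to the remote shell's standard input, the behaviour can be rehearsed locally by feeding the same script to a local bash before involving the network at all:

```shell
# Sketch: a multi-command script fed via stdin, exactly as
# `ssh $node2user@$node2 < my_script.sh` would feed it to the remote shell.
script=$(mktemp)
printf 'cd /tmp\npwd\necho done\n' > "$script"

bash < "$script"

rm -f "$script"
```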

How to cd on remote server via SSH bash script?

I have a bash script I'm using to connect to a remote server via ssh. That works; however, I want the script to run cd /some/dir immediately after connecting, and that doesn't seem to be working. Here's my code:
#!/bin/bash
echo "SSHing.."
ssh -i ~/.ssh/some-site.pem xxx@yyy.com
cd /some/dir
read
How can I have the cd command be executed right after SSH connection is established?
There are two easy ways to execute commands via SSH from inside the script:
1) ssh user@host 'command'
2)
ssh user@host << EOF
command1
command2
<...>
commandn
EOF
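The heredoc form works because everything between the EOF markers becomes the remote shell's standard input; the same mechanism can be tried against a local bash:

```shell
# Sketch: a heredoc feeds commands line by line, just as the
# `ssh user@host << EOF ... EOF` form feeds them to the remote shell.
bash << 'EOF'
cd /tmp
pwd
EOF
```

Quoting the delimiter ('EOF') stops the local shell from expanding variables before they are sent; leave it unquoted when you want local expansion inside the heredoc.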
Normally you'd just edit your ~/.profile on the remote machine.
If that is not an option, you could do something like this:
ssh -t theserver.com 'cd /some/dir && bash -i'
You can use the following command
ssh user@watevr <the_cmd_to_be_executed>
You can try this:
ssh abc@hostname :/pathto/specific directory

SSH works in Terminal but not in shell script

I am trying to execute a script I uploaded to an AWS instance. If I run the following command in my MacBook Terminal, it succeeds:
ssh -o StrictHostKeyChecking=no -i ~/.ec2/my.pem ec2-user@ec2-<address>.amazonaws.com "chmod u+x ./myScript.sh"
I ported the same command to a simple shell script on my local machine, where I pass in the information:
#!/bin/sh
# myLocalScript.sh
host=$1
pem=$2
fileName=$3
ssh -o StrictHostKeyChecking=no -i $pemkey ec2-user#$host "chmod u+x ./$fileName"
When I run it using this command:
sh myLocalScript.sh ec2-user#ec2-<address>.amazonaws.com ~/.ec2/my.pem myScript.sh
I get the following error:
Warning: Identity file ec2-user#ec2-<address>.amazonaws.com not accessible: No such file or directory.
ssh: Could not resolve hostname chmod u+x ./myScript.sh: nodename nor servname provided, or not known
What am I doing wrong?
You need $pem not $pemkey.
Additionally, you should get into the habit of double-quoting variables, except in very special situations where you really want an empty variable to "disappear".
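The quoting advice is easy to demonstrate locally: an empty unquoted variable disappears from the argument list entirely, which is exactly why ssh ended up treating the hostname as the identity file. A small sketch (show_args is just a hypothetical helper that counts its arguments):

```shell
# Hypothetical helper: report how many arguments it received
show_args() { echo "$#"; }

pem=""   # simulate the unset $pemkey from the question

show_args -i $pem host     # unquoted: empty $pem vanishes -> prints 2
show_args -i "$pem" host   # quoted: empty string survives -> prints 3
```

With the unquoted form, ssh sees `-i host`, so it tries to use the hostname as a key file, matching the "Identity file ... not accessible" error above.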
