SSH: run commands from a local file and also pass local env variables - Linux

I need to run SSH on Linux and execute commands from a local file into the remote machine. This is working fine, but I also need to pass local environment variables to the remote machine, so the commands can use the values.
Here is the command I'm running:
ssh -i ${SSH_PRIV_KEY} ${SSH_USER}@${IP} < setup.sh
I have a bunch of environment variables set and when the remote machine runs the commands in setup.sh file it needs be able to use the env vars from the local machine.
I tried many things, but solutions from other threads like this one don't work correctly:
myVar='4.0.23'
export $myVar
ssh -i ${SSH_PRIV_KEY} ${SSH_USER}@${IP} myVar=myVar < setup.sh
Only thing I can come up with is to append the start of the file and hardcode the values there before executing ssh, but if possible I would like to find a cleaner solution because I want this to be reusable and the only thing that changes for me between runs is the env vars.

I ended up using this code to store the env vars I need in a file, then combining the files into one and passing that to ssh as the command:
envvars="
envvar='$envvar'
envvar2='$envvar2'
"
echo "$envvars" > envfile
cat envfile setup.sh > finalScript
ssh -i ${SSH_PRIV_KEY} ${SSH_USER}#${IP} < finalScript
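For what it's worth, the same idea can be done without the intermediate files by building the combined script on the fly. This is only a sketch of the approach above, assuming bash on both ends and using printf %q to quote the values safely:
# prepend the variable assignments to setup.sh and feed the result to the remote shell
ssh -i "${SSH_PRIV_KEY}" "${SSH_USER}@${IP}" bash -s < <(
printf 'envvar=%q\nenvvar2=%q\n' "$envvar" "$envvar2"
cat setup.sh
)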

Related

use environment variables of remote server in ssh command

I have two servers, X (host) and Y (remote). A script abc.sh is saved on server Y. I am running abc.sh from server X using the ssh command.
The script runs successfully, but the commands which use the environment variables (kept in ~/.bash_profile) of server Y give blank output.
When I run abc.sh from server Y itself, all commands run successfully.
My question is: how do I include the environment variables of the remote server (Y) so that the full script executes successfully?
NOTE: I don't have write access to the /etc directory so I can't change anything in it.
Thanks in advance!
You can include your environment variables like the following:
ssh user@host_y 'source ~/.bash_profile; /path/to/your/script/abc.sh'
Since a command run directly over ssh is not an interactive shell, your variables will not be set there. source runs the file in the current shell and makes the environment variables defined in it visible to your script.
I run some cron jobs that use ssh to connect to a remote machine. The code can't load all the environment variables, although it works if I run it on that machine directly.
I tried several approaches that didn't work for me.
Finally, I commented out the lines below in the ~/.bashrc file. That works for me.
# If not running interactively, don't do anything
case $- in
*i*) ;;
*) return;;
esac
An ssh connection that runs a command opens a non-interactive shell. If you comment out or delete this block, the rest of ~/.bashrc, including your environment variables, is loaded for non-interactive shells as well.
Alternatively, you can just put all of your required environment lines above that block.
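For example, a minimal sketch of what the top of the remote ~/.bashrc could look like (MY_VAR is a placeholder name):
# variables needed by non-interactive ssh commands go before the interactivity check
export MY_VAR=some_value
# If not running interactively, don't do anything
case $- in
*i*) ;;
*) return;;
esac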

Execute shell script in remote machine using ssh command with config file

I want to execute a shell script on a remote machine and I achieved this using the command below:
ssh user@remote_machine "bash -s" < /usr/test.sh
The shell script executed properly on the remote machine. Now I have made some changes in the script to read some values from a config file. The script contains the lines below:
#!/bin/bash
source /usr/property.config
echo "testName"
property.config :
testName=xxx
testPwd=yyy
Now if I run the shell script on the remote machine, I get a "no such file" error since /usr/property.config is not available on the remote machine.
How do I pass the config file along with the shell script to be executed on the remote machine?
The only way you can reference the config file you created and still run your script is to put the config file at the required path. There are two ways to do it:
If the config is almost always fixed and you don't need to change it, create the config on the machine where the script actually runs, put the absolute path to the config file in your script, and make sure the user running the script has permission to access it.
If you need to ship your config file every time you run the script, simply scp the file over before you call the script.
scp property.config user@remote_machine:/usr/property.config
ssh user@remote_machine "bash -s" < /usr/test.sh
Edit
As per the request, if you want to do it in a single line, this is how it can be done:
property.config
testName=xxx
testPwd=yyy
test.sh
#!/bin/bash
# do not use this line here: source /usr/property.config
echo "$testName"
Now you can run your command as John has suggested:
ssh user@remote_machine "bash -s" < <(cat /usr/property.config /usr/test.sh)
Try this:
ssh user@remote_machine "bash -s" < <(cat /usr/property.config /usr/test.sh)
Then your script should not source the config internally.
Second option, if all you need to pass are environment variables:
There are a few techniques described here: https://superuser.com/questions/48783/how-can-i-pass-an-environment-variable-through-an-ssh-command
My favorite one is perhaps the simplest:
ssh user@remote_machine VAR1=val1 VAR2=val2 bash -s < /usr/test.sh
This of course means you'll need to build up the environment variable assignments from your local config file, but hopefully that's straightforward.
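If it helps, here is a hedged sketch of building those assignments from the config file on the fly; it assumes the file contains only simple KEY=value lines without spaces or quotes (comment lines starting with # are dropped):
# turn the config lines into VAR=val arguments for the remote command
vars=$(grep -v '^#' /usr/property.config | xargs)
ssh user@remote_machine "env $vars bash -s" < /usr/test.sh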

Inherit environment variable in ssh session?

I need to deal with a lot of remote machines, and each machine shares a global environment variable (like CONTROLLER_IP). When I ssh to a remote machine, I would like to set CONTROLLER_IP according to the current localhost setting. Is there any way to make this happen?
Example:
On the local host, I set ofc1=192.168.0.1 and ofc2=192.168.1.1,
and I need to ssh to ofs1 and ofs2.
I would like to do something like:
CONTROLLER_IP=$ofc1 ssh root@ofs1; CONTROLLER_IP=$ofc2 ssh root@ofs2
then I will get the CONTROLLER_IP setting in each ssh session.
(the code shown above does not work...)
In sshd_config on the server (usually /etc/ssh/sshd_config) you can define the list of accepted environment variables using the AcceptEnv setting, and then you can send environment variables like this:
CONTROLLER_IP=$ofc1 ssh -o SendEnv=CONTROLLER_IP root@ofs1
But this seems a bit overkill for your purposes.
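For reference, a minimal sketch of the server-side entry this relies on (it assumes you can edit the server configuration and reload sshd, which may not be the case here):
# in the remote host's sshd_config
AcceptEnv CONTROLLER_IP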
The alternative is to pass the variables in the remote command, like this:
ssh root#ofs1 "CONTROLLER_IP=$ofc1 somecmd"
Or if you run multiple remote commands then like this:
ssh root#ofs1 "export CONTROLLER_IP=$ofc1; cmd1; cmd2; cmd3; ..."
If you need to quote the value of the variable, you can do like this:
ssh root#ofs1 "CONTROLLER_IP='$ofc1' somecmd"
Try
ssh root#ofs1 "env CONTROLLER_IP=$ofc1 somescript"
(assuming $ofc1 is evaluated to some IP address like 12.234.56.178 without spaces or naughty characters)
or perhaps
ssh root#ofs1 "env CONTROLLER_IP='$ofc1' somescript"
if $ofc1 could contain spaces or naughty characters
where somescript is a script on the remote machine ofs1; if you want an interactive shell try
ssh root#ofs1 "env CONTROLLER_IP='$ofc1' /bin/bash"
Lastly, ssh usually sets some environment variables on the remote machine, notably SSH_CONNECTION, which you could use there. Its first field is the IP address of the origin host (the one on which you run ssh). So the .bashrc on the remote host might contain:
if [ -n "$SSH_CONNECTION" ]; then
export CONTROLLER_IP=$(echo "$SSH_CONNECTION" | cut -f1 -d' ')
fi
Better yet, replace the CONTROLLER_IP occurrences in your remote scripts with something using SSH_CONNECTION, as in the sketch below.
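A small sketch of what that could look like inside a remote script, using plain parameter expansion (controller_ip is a placeholder name):
# first space-separated field of SSH_CONNECTION is the client (origin) IP
controller_ip=${SSH_CONNECTION%% *}
echo "talking to controller at $controller_ip"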

shell script, for loop, ssh and alias

I'm trying to do something like this: I need to take a backup from 4 blades, and all of them should be stored under /home/backup/esa, which contains 4 directories named after the nodes (sc-1, sc-2, pl-1, pl-2). Each directory should contain the respective node's backup information.
But I see that only the data of the node from which I execute the command is being copied to all 4 directories. Any idea why this happens? My script is like this:
for node in $(grep "^node" /cluster/etc/cluster.conf | awk '{print $4}');
do echo "Creating backup fornode ${node}";
ssh $node source /etc/profile.d/bkUp.sh;
asBackup -b /home/backup/esa/${node};
done
Your problem is this piece of the code:
ssh $node source /etc/profile.d/bkUp.sh;
asBackup -b /home/backup/esa/${node};
It does:
Create a remote shell on $node
Execute the command source /etc/profile.d/bkUp.sh in the remote shell
Close the remote shell and forget about anything done in that shell!!
Run asBackup on the local host.
This is not what you want. Change it to:
ssh "$node" "source /etc/profile.d/bkUp.sh; asBackup -b '/home/backup/esa/${node}'"
This does:
Create a remote shell on $node
Execute the command(s) source /etc/profile.d/bkUp.sh; asBackup -b '/home/backup/esa/${node}' on the remote host
Make sure that /home/backup/esa/${node} is an NFS mount (otherwise, the files will only be backed up in a directory on the remote host).
Note that /etc/profile is a very bad place for backup scripts (or their config). Consider moving the setup/config to /home/backup/esa which is (or should be) shared between all nodes of the cluster, so changing it in one place updates it everywhere at once.
Also note the usage of quotes: The single and double quotes make sure that spaces in the variable node won't cause unexpected problems. Sure, it's very unlikely that there will be spaces in "$node" but if there are, the error message will mislead you.
So always quote properly.
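If the difference is not obvious, here is a tiny illustration (somehost is a placeholder host name):
ssh somehost echo remote-part; hostname     # hostname runs locally, after ssh returns
ssh somehost "echo remote-part; hostname"   # both commands run on somehost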
The formatting of your question is a bit confusing, but it looks as if you have a quoting problem. If you do
ssh $node source /etc/profile.d/bkUp.sh; esaBackup -b /home/backup/esa/${node}
then the command source is executed on $node. After the command finishes, the remote connection is closed and, with it, the shell that contains the result of sourcing /etc/profile.d/bkUp.sh. Now the esaBackup command runs on the local machine. It won't see anything that you set up in bkUp.sh.
What you need to do is put quotes around all the commands you want the remote shell to run -- something like
ssh $node "source /etc/profile.d/bkUp.sh; esaBackup -b /home/backup/esa/${node}"
That will make ssh run the full list of commands on the remote node.

Pass variables to remote script through SSH

I am running scripts on a remote server from a local server via SSH. The script gets copied over using SCP in the first place, then called while being passed some arguments as follows:
scp /path/to/script server.example.org:/another/path/
ssh server.example.org \
MYVAR1=1 \
MYVAR2=2 \
/another/path/script
This works fine and on the remote server, the variables MYVAR1 and MYVAR2 are available with their corresponding value.
The issue is that these scripts are in constant development which requires the SSH command to be changed every-time a variable is renamed, added, or removed.
I'm looking for a way of passing all the local environment variables to the remote script (since MYVAR1 and MYVAR2 are actually local environment variables) which would address the SSH command maintenance issue.
Since MYVAR1=1 and MYVAR2=2 are lines in the same form as the env command output, I tried replacing them with the actual command as follows:
ssh server.example.org \
`env`
/another/path/script
This seems to work for "simple" env output lines (e.g. SHELL=/bin/bash or LOGNAME=sysadmin), however I get errors for more "complex" output lines (e.g. LS_COLORS=rs=0:di=01;34:ln=01;[...], which gives errors such as -bash: 34:ln=01: command not found). I can get rid of these errors by unsetting the variables corresponding to those complex output lines before running the SSH command (e.g. unset LS_COLORS, then ssh [...]), however I don't find this solution very reliable.
Q: Does anybody know how to pass all the local environment variables to a remote script via SSH?
PS: the local environment variables are not environment variables available on the remote machine so I cannot use this solution.
Update with solution
I ended up using sed to format the env command output from VAR=VALUE to VAR="VALUE" (and to concatenate all lines into one), which prevents bash from interpreting some of the output as commands and fixes my problem.
ssh server.example.org \
`env | sed 's/\([^=]*\)=\(.*\)/\1="\2"/' | tr '\n' ' '` \
"/another/path/script"
I happened to be reading the sshd_config man page for unrelated reasons and found the AcceptEnv option:
AcceptEnv
Specifies what environment variables sent by the client will be copied into the session's environ(7). See SendEnv in ssh_config(5) for how to configure the client. Note that environment passing is only supported for protocol 2. Variables are specified by name, which may contain the wildcard characters '*' and '?'. Multiple environment variables may be separated by whitespace or spread across multiple AcceptEnv directives. Be warned that some environment variables could be used to bypass restricted user environments. For this reason, care should be taken in the use of this directive. The default is not to accept any environment variables.
Maybe you could use this with AcceptEnv: *? I haven't got a box with sshd handy, but try it out!
The problem is that ; marks the end of your command. You must escape them.
Try this command:
env | sed 's/;/\\;/g'
Update:
I tested the command with a remote host and it worked for me using this command:
var1='something;using semicolons;'
ssh hostname "`env | sed 's/;/\\\\;/g' | sed 's/.*/set &\;/g'` echo \"$var1\""
I double escape ; with \\\\; and then use another sed substitution to output the variables in the form set name=value;. Doing this ensures every variable gets set correctly on the remote host before executing the command.
You should use set instead of env.
From the bash manual:
Without options, the name and value of each shell variable are displayed in a format that can be reused as input for setting or resetting the currently-set variables.
This will take care of all your semi-colon and backslash issues.
scp /path/to/script server.example.org:/another/path/
set > environment
scp environment server.example.org:/another/path/
ssh server.example.org "source environment; /another/path/script"
If there are any variables you don't want to send over you can filter them out with something like:
set | grep -v "DONT_NEED" > environment
You could also update the ~/.bash_profile on the remote system to source the environment file as you log in, so you wouldn't have to run it explicitly:
ssh server.example.org "/another/path/script"
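A sketch of the line that could be appended to the remote ~/.bash_profile for that (the path is just the one used in this example):
# load the uploaded environment snapshot, if present
[ -f /another/path/environment ] && source /another/path/environment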
How about uploading the environment at the same time?
scp /path/to/script server.example.org:/another/path/
env > environment
scp environment server.example.org:/another/path
ssh server.example.org "source environment; /another/path/script"
Perl to the rescue:
#!/usr/bin/perl
use strict;
use warnings;
use Net::OpenSSH;
use Getopt::Long;
my $usage = "Usage:\n $0 --env=FOO --env=BAR ... [user\@]host command args\n\n";
my @envs;
GetOptions("env=s" => \@envs)
or die $usage;
my $host = shift @ARGV;
die $usage unless defined $host and @ARGV;
my $ssh = Net::OpenSSH->new($host);
$ssh->error and die "Unable to connect to remote host: " . $ssh->error;
my @cmds;
for my $env (@envs) {
next unless defined $ENV{$env};
push @cmds, "export " . $ssh->shell_quote($env) .'='.$ssh->shell_quote($ENV{$env})
}
my $cmd = join('&&', @cmds, '('. join(' ', @ARGV) .')');
warn "remote command: $cmd\n";
$ssh->system($cmd);
And it will not break in case your environment variables contain funny things such as quotes.
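A hypothetical invocation, assuming the script above is saved as ssh-with-env.pl and made executable:
FOO=hello BAR=world ./ssh-with-env.pl --env=FOO --env=BAR user@host 'echo "$FOO $BAR"'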
This solution works well for me.
Suppose you have a script which takes two params, or has two variables:
#!/bin/sh
echo "local"
echo "$1"
echo "$2"
/usr/bin/ssh root@192.168.1.2 "/path/test.sh \"$1\" \"$2\";"
And script test.sh on 192.168.1.2:
#!/bin/bash
echo "remote"
echo "$1"
echo "$2"
Output will be:
local
This is first params
And this is second
remote
This is first params
And this is second
