Inherit environment variable in ssh session? - linux

I need to deal with a lot of remote machines, and each machine shares a global environment variable
(like CONTROLLER_IP). When I ssh to a remote machine, I would like to set CONTROLLER_IP
according to the current localhost setting. Is there any way to make that happen?
Example:
On the local host I set ofc1=192.168.0.1 and ofc2=192.168.1.1,
and I need to ssh to ofs1 and ofs2.
I would like to do something like:
CONTROLLER_IP=$ofc1 ssh root@ofs1; CONTROLLER_IP=$ofc2 ssh root@ofs2
so that I get the CONTROLLER_IP setting in each ssh session.
(The code shown above does not work...)

In /etc/sshd_config on the server you can define the list of accepted environment variables using the AcceptEnv setting, and then you can send environment variables like this:
CONTROLLER_IP=$ofc1 ssh -o SendEnv=CONTROLLER_IP root@ofs1
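On the server side that means something like the following (a sketch; the sshd_config location varies by distribution, often /etc/ssh/sshd_config, and sshd must be reloaded afterwards):
# on each remote machine, in sshd_config
AcceptEnv CONTROLLER_IP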
But this seems a bit overkill for your purposes.
The alternative is to pass the variables in the remote command, like this:
ssh root#ofs1 "CONTROLLER_IP=$ofc1 somecmd"
Or if you run multiple remote commands then like this:
ssh root#ofs1 "export CONTROLLER_IP=$ofc1; cmd1; cmd2; cmd3; ..."
If you need to quote the value of the variable, you can do like this:
ssh root#ofs1 "CONTROLLER_IP='$ofc1' somecmd"

Try
ssh root#ofs1 "env CONTROLLER_IP=$ofc1 somescript"
(assuming $ofc1 evaluates to an IP address like 12.234.56.178, without spaces or naughty characters)
or perhaps
ssh root#ofs1 "env CONTROLLER_IP='$ofc1' somescript"
if $ofc1 could contain spaces or naughty characters
where somescript is a script on the remote machine ofs1; if you want an interactive shell try
ssh root#ofs1 "env CONTROLLER_IP='$ofc1' /bin/bash"
Finally, ssh usually sets some environment variables on the remote machine, notably SSH_CONNECTION, which you could use there. Its first field is the IP address of the origin host (the one on which you run ssh ...). So the .bashrc on the remote host might contain
if [ -n "$SSH_CONNECTION" ]; then
export CONTROLLER_IP=$(echo "$SSH_CONNECTION" | cut -d' ' -f1)
fi
Better yet, replace the CONTROLLER_IP occurrences in your remote scripts with something that uses SSH_CONNECTION directly.
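For reference, SSH_CONNECTION holds four space-separated fields: client IP, client port, server IP, server port. A small sketch that splits them out on the remote host:
# on the remote host, e.g. in .bashrc
if [ -n "$SSH_CONNECTION" ]; then
    read -r client_ip client_port server_ip server_port <<< "$SSH_CONNECTION"
    export CONTROLLER_IP=$client_ip   # IP of the machine you ssh'ed from
fi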

Related

SSH run commands from local file and also pass local env variables

I need to run SSH on Linux and execute commands from a local file into the remote machine. This is working fine, but I also need to pass local environment variables to the remote machine, so the commands can use the values.
Here is the command I'm running:
ssh -i ${SSH_PRIV_KEY} ${SSH_USER}@${IP} < setup.sh
I have a bunch of environment variables set, and when the remote machine runs the commands in the setup.sh file it needs to be able to use the env vars from the local machine.
I tried many things, but solutions from other threads like this one don't work correctly:
myVar='4.0.23'
export $myVar
ssh -i ${SSH_PRIV_KEY} ${SSH_USER}@${IP} myVar=myVar < setup.sh
The only thing I can come up with is to prepend the values to the start of the file, hardcoding them there before executing ssh, but if possible I would like to find a cleaner solution, because I want this to be reusable and the only thing that changes between runs is the env vars.
I ended up using this code: it stores the env vars I need in a file, combines that file with setup.sh, and passes the result to ssh as the command:
envvars="
envvar='$envvar'
envvar2='$envvar2'
"
echo "$envvars" > envfile
cat envfile setup.sh > finalScript
ssh -i ${SSH_PRIV_KEY} ${SSH_USER}@${IP} < finalScript
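A slightly more defensive variant of the same idea, assuming bash on both ends and that only a known set of variables needs to cross over (envvar and envvar2 are the placeholders from above); printf %q quotes the values so spaces and special characters survive:
# build a prelude that re-exports the chosen variables, safely quoted
{
    printf 'export envvar=%q\n'  "$envvar"
    printf 'export envvar2=%q\n' "$envvar2"
} > envfile
cat envfile setup.sh > finalScript
ssh -i "${SSH_PRIV_KEY}" "${SSH_USER}@${IP}" < finalScript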

Is there any way to run a script on any SSH connect/disconnect?

I'd like to change my terminal color depending on ssh connected HOSTNAME.
I know how to modify the terminal, but how can I instrument ssh to add hooks?
I could wrap the ssh command with a shell function, or replace the binary, but it's used as a dependency by other apps, and I would rather not do that.
You can use the LocalCommand feature of OpenSSH when connecting to a remote server:
LocalCommand
Specifies a command to execute on the local machine after successfully connecting to the server. The command string extends to the end of the line, and is executed with the user's shell. The following escape character substitutions will be performed: ‘%d’ (local user's home directory), ‘%h’ (remote host name), ‘%l’ (local host name), ‘%n’ (host name as provided on the command line), ‘%p’ (remote port), ‘%r’ (remote user name) or ‘%u’ (local user name).
The command is run synchronously and does not have access to the session of the ssh(1) that spawned it. It should not be used for interactive commands.
This directive is ignored unless PermitLocalCommand has been enabled.
There is probably no easy way to execute a command when ending a connection with a remote server, though, apart from writing an ssh wrapper.
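A minimal ~/.ssh/config sketch using LocalCommand (prodbox is a placeholder host name, and the xterm escape sequence for changing the background colour is an assumption about your terminal emulator):
# ~/.ssh/config
Host prodbox
    PermitLocalCommand yes
    # runs locally, via your shell, right after the connection is established
    LocalCommand printf '\033]11;#402020\007'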
A wrapper around ssh may be your best bet, even though you have discounted it. However, almost every program lets you specify the 'ssh' command it uses.
In my own case I have an 'r' command that replaces 'ssh'; it performs account lookups (username and host aliases, plus DNS expansion without relying on the 'DNS resolver domain list', amongst other things). These are my notes on getting various programs to call 'r' instead of 'ssh'.
scp
No config or environment variables; you must use the "-Sr" option:
scp -Sr ...
rsync
set environment variable
RSYNC_RSH="r -x"
unison
In ".unison/default.prf"
sshcmd = r
sshargs = -q -x
cssh
In ".clusterssh/config"
ssh=r
ssh_args= -x -o ConnectTimeout=10
multixterm
Use multixterm -xc "r %n" hostname...
vim netrw (file explorer)
Have it use the "rsync" command (set above)...
vim rsync://hostname/
OR for vim scp://hostname/...
In ".vimrc" configuration, redefine scp command to use.
let g:netrw_list_cmd="r USEPORT HOSTNAME ls -Fa1"
let g:netrw_scp_cmd="scp -Sr -q"
My 'r' script decodes all the SSH arguments I have seen used, and even handles very OLD "rsync" commands that placed more options AFTER the hostname! (Yes, I have been using this script for a long time.)
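If a wrapper does turn out to be acceptable after all, here is a minimal sketch as a shell function (the host-name patterns and colour escape codes are assumptions; command ssh bypasses the function so the real binary still runs):
# ~/.bashrc
ssh() {
    case "$*" in
        *prod*) printf '\033]11;#402020\007' ;;   # reddish for production hosts
        *)      printf '\033]11;#203040\007' ;;   # bluish for everything else
    esac
    command ssh "$@"
    local rc=$?
    printf '\033]11;#000000\007'                  # restore the default on disconnect
    return $rc
}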

Do you need a certain shell program for Proxy Forwarding?

I am trying to set up a secure proxy for my work. This article suggests I should be using an SSH tunnel + SOCKS proxy forwarding. Do I need access to a particular shell program on the server, or will any shell do? I have bash, tcsh, and zsh available.
Long version of the question here.
You run ssh -D 9999 username@ip-address-of-ssh-server on your local machine. You don't need anything else on the remote end except a shell to log in to, so bash, tcsh, or zsh is fine.
Once you run your ssh -D command, it will look just like you've ssh'ed to "ip-address-of-ssh-server" without the -D 9999 flag: you'll be in a shell and see a command prompt on the remote machine. You can just leave it alone. You only need to set up your browser to use a SOCKS proxy at localhost:9999.
Use bash because of its popularity/ubiquity. The majority of examples you'll find will use this syntax.
But ssh's feature set is independent of the shell used.
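If you'd rather not keep an interactive shell open at all, the same forwarding works with -N (no remote command):
# opens only the SOCKS listener on local port 9999; Ctrl-C ends it
ssh -D 9999 -N username@ip-address-of-ssh-server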

shell script, for loop, ssh and alias

I'm trying to do something like this: I need to take a backup from 4 blades, and
all of them should be stored under the /home/backup/esa location, which contains 4
directories named after the nodes (sc-1, sc-2, pl-1, pl-2). Each
directory should contain the respective node's backup information.
But I see that only the data from the node on which I execute the command is being
copied to all 4 directories. Any idea why this happens? My script is like this:
for node in $(grep "^node" /cluster/etc/cluster.conf | awk '{print $4}');
do echo "Creating backup fornode ${node}";
ssh $node source /etc/profile.d/bkUp.sh;
asBackup -b /home/backup/esa/${node};
done
Your problem is this piece of the code:
ssh $node source /etc/profile.d/bkUp.sh;
asBackup -b /home/backup/esa/${node};
It does:
Create a remote shell on $node
Execute the command source /etc/profile.d/bkUp.sh in the remote shell
Close the remote shell and forget about anything done in that shell!!
Run asBackup on the local host.
This is not what you want. Change it to:
ssh "$node" "source /etc/profile.d/bkUp.sh; asBackup -b '/home/backup/esa/${node}'"
This does:
Create a remote shell on $node
Execute the command(s) source /etc/profile.d/bkUp.sh; asBackup -b '/home/backup/esa/${node}' on the remote host
Make sure that /home/backup/esa/${node} is an NFS mount (otherwise, the files will only be backed up in a directory on the remote host).
Note that /etc/profile is a very bad place for backup scripts (or their config). Consider moving the setup/config to /home/backup/esa which is (or should be) shared between all nodes of the cluster, so changing it in one place updates it everywhere at once.
Also note the usage of quotes: The single and double quotes make sure that spaces in the variable node won't cause unexpected problems. Sure, it's very unlikely that there will be spaces in "$node" but if there are, the error message will mislead you.
So always quote properly.
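Putting the fix back into the original loop, the whole script looks roughly like this (a sketch; asBackup is whatever the backup command is really called on your nodes, the answer below calls it esaBackup):
#!/bin/bash
for node in $(grep "^node" /cluster/etc/cluster.conf | awk '{print $4}'); do
    echo "Creating backup for node ${node}"
    # both commands run in the same remote shell, so the sourced profile is visible
    ssh "$node" "source /etc/profile.d/bkUp.sh; asBackup -b '/home/backup/esa/${node}'"
done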
The formatting of your question is a bit confusing, but it looks as if you have a quoting problem. If you do
ssh $node source /etc/profile.d/bkUp.sh; esaBackup -b /home/backup/esa/${node}
then the command source is executed on $node. After the command finishes, the remote connection is closed and with it the shell that contains the result of sourcing /etc/profile.d/bkUp.sh. Now the esaBackup command is run on the local machine. It won't see anything that was set up in bkUp.sh.
What you need to do is put quotes around all the commands you want the remote shell to run -- something like
ssh $node "source /etc/profile.d/bkUp.sh; esaBackup -b /home/backup/esa/${node}"
That will make ssh run the full list of commands on the remote node.

Pass variables to remote script through SSH

I am running scripts on a remote server from a local server via SSH. The script gets copied over using SCP in the first place, then called while being passed some arguments, as follows:
scp /path/to/script server.example.org:/another/path/
ssh server.example.org \
MYVAR1=1 \
MYVAR2=2 \
/another/path/script
This works fine and on the remote server, the variables MYVAR1 and MYVAR2 are available with their corresponding value.
The issue is that these scripts are in constant development, which requires the SSH command to be changed every time a variable is renamed, added, or removed.
I'm looking for a way of passing all the local environment variables to the remote script (since MYVAR1 and MYVAR2 are actually local environment variables), which would address the SSH command maintenance issue.
Since MYVAR1=1 and MYVAR2=2 are lines that look like the env command output, I tried replacing them with the actual command, as follows:
ssh server.example.org \
`env` \
/another/path/script
This seems to work for "simple" env output lines (e.g. SHELL=/bin/bash or LOGNAME=sysadmin); however, I get errors for more "complex" output lines (e.g. LS_COLORS=rs=0:di=01;34:ln=01;[...], which gives errors such as -bash: 34:ln=01: command not found). I can get rid of these errors by unsetting the corresponding variables before running the SSH command (e.g. unset LS_COLORS, then ssh [...]), but I don't find this solution very reliable.
Q: Does anybody know how to pass all the local environment variables to a remote script via SSH?
PS: the local environment variables are not environment variables available on the remote machine so I cannot use this solution.
Update with solution
I ended up using sed to reformat the env command output from VAR=VALUE to VAR="VALUE" (and to concatenate all lines into one), which prevents bash from interpreting some of the output as commands and fixes my problem.
ssh server.example.org \
`env | sed 's/\([^=]*\)=\(.*\)/\1="\2"/' | tr '\n' ' '` \
"/another/path/script"
I happened to be reading the sshd_config man page for unrelated reasons and found the AcceptEnv option:
AcceptEnv
Specifies what environment variables sent by the client will be copied into the session's environ(7). See SendEnv in ssh_config(5) for how to configure the client. Note that environment passing is only supported for protocol 2. Variables are specified by name, which may contain the wildcard characters '*' and '?'. Multiple environment variables may be separated by whitespace or spread across multiple AcceptEnv directives. Be warned that some environment variables could be used to bypass restricted user environments. For this reason, care should be taken in the use of this directive. The default is not to accept any environment variables.
Maybe you could use this with AcceptEnv *? I haven't got a box with sshd handy, but try it out!
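A sketch of what that would look like for the MYVAR1/MYVAR2 example (it requires editing sshd_config on the server and reloading sshd):
# server side, in sshd_config
AcceptEnv MYVAR1 MYVAR2
# client side
MYVAR1=1 MYVAR2=2 ssh -o SendEnv=MYVAR1 -o SendEnv=MYVAR2 \
    server.example.org /another/path/script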
The problem is that ; marks the end of your command. You must escape it.
Try this command:
env | sed 's/;/\\;/g'
Update:
I tested it against a remote host and it worked for me using this command:
var1='something;using semicolons;'
ssh hostname "`env | sed 's/;/\\\\;/g' | sed 's/.*/set &\;/g'` echo \"$var1\""
I double-escape ; with \\\\; and then use another sed substitution to output the variables in the form set name=value;. Doing this ensures every variable gets set correctly on the remote host before executing the command.
You should use set instead of env.
From the bash manual:
Without options, the name and value of each shell variable are displayed in a format that can be reused as input for setting or resetting the currently-set variables.
This will take care of all your semi-colon and backslash issues.
scp /path/to/script server.example.org:/another/path/
set > environment
scp environment server.example.org:/another/path/
ssh server.example.org "source environment; /another/path/script"
If there are any variables you don't want to send over you can filter them out with something like:
set | grep -v "DONT_NEED" > environment
You could also update the ~/.bash_profile on the remote system to source the environment file as you log in, so you wouldn't have to run it explicitly:
ssh server.example.org "/another/path/script"
How about uploading the environment at the same time?
scp /path/to/script server.example.org:/another/path/
env > environment
scp environment server.example.org:/another/path
ssh server.example.org "source environment; /another/path/script"
Perl to the rescue:
#!/usr/bin/perl
use strict;
use warnings;
use Net::OpenSSH;
use Getopt::Long;
my $usage = "Usage:\n  $0 --env=FOO --env=BAR ... [user\@]host command args\n\n";
my @envs;
GetOptions("env=s" => \@envs)
    or die $usage;
my $host = shift @ARGV;
die $usage unless defined $host and @ARGV;
my $ssh = Net::OpenSSH->new($host);
$ssh->error and die "Unable to connect to remote host: " . $ssh->error;
my @cmds;
for my $env (@envs) {
    next unless defined $ENV{$env};
    push @cmds, "export " . $ssh->shell_quote($env) . '=' . $ssh->shell_quote($ENV{$env});
}
my $cmd = join('&&', @cmds, '(' . join(' ', @ARGV) . ')');
warn "remote command: $cmd\n";
$ssh->system($cmd);
And it will not break in case your environment variables contain funny things such as quotes.
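Hypothetical usage, assuming the script above is saved as ssh-with-env.pl (the name is arbitrary) and the Net::OpenSSH module is installed:
# pass FOO and BAR from the local environment to the remote command
FOO=hello BAR='has spaces and "quotes"' \
    perl ssh-with-env.pl --env=FOO --env=BAR user@host 'echo "$FOO $BAR"'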
This solution works well for me.
Suppose you have a script which takes two params, or has two variables:
#!/bin/sh
echo "local"
echo "$1"
echo "$2"
/usr/bin/ssh root@192.168.1.2 "/path/test.sh \"$1\" \"$2\";"
And script test.sh on 192.168.1.2:
#!/bin/bash
echo "remote"
echo "$1"
echo "$2"
Output will be:
local
This is first params
And this is second
remote
This is first params
And this is second
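For clarity, the output above assumes the local script (call it local.sh; the name is a placeholder) is invoked with two quoted arguments:
./local.sh "This is first params" "And this is second"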
