Use the same command in a shell script, just with a prefix - linux

I have a command I run to check if a certain db exists.
I want to do it locally and via ssh on a remote server.
The command is as so:
mysqlshow -uroot | grep -o $DB_NAME
My question is if I can use the same command for 2 variables,
the only difference being ssh <remote-server> before one?
Something along the lines of !! variable expansion in the CLI:
LOCAL_DB=mysqlshow -uroot | grep -o $DB_NAME
REMOTE_DB=ssh <remote-host> !!

something like this perhaps?
cmd="whoami"
eval "$cmd"
ssh remote@host "$cmd"
eval will run the command in the string $cmd locally.
also, for checking tables, it's safer to ask for the table name explicitly via a query
SHOW TABLES LIKE 'yourtable';
and for databases:
SHOW DATABASES LIKE 'yourdb';
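A minimal sketch of the command-in-a-variable pattern (db.example.org is a placeholder host, and echo stands in for the real mysqlshow pipeline):

```shell
#!/bin/sh
# Keep the command in one string; eval runs it locally,
# and handing the same string to ssh runs it remotely.
DB_NAME=mydb
cmd="echo $DB_NAME"   # stand-in for: mysqlshow -uroot | grep -o "$DB_NAME"

LOCAL_DB=$(eval "$cmd")
echo "local: $LOCAL_DB"
# Remote form (placeholder host, not run here):
#   REMOTE_DB=$(ssh db.example.org "$cmd")
```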

You can create a function in .bashrc, something like:
function showrdb() {
    ssh remote@host "$1"
}
export -f showrdb
Then source .bashrc and call the function like:
showrdb "command you want to run on remote host"
Alternatively, you can create a shell script containing the same function (or only the ssh line) and call the script as
./scriptname "command to execute on remote host"
But I'm more comfortable with the first approach.

Related

Reading environment variables with ad-hoc ansible command

I am trying to run a simple ad-hoc ansible command on various hosts to find out if a directory exists.
The following command works correctly by returning exists although it does not print the environment variable beforehand:
ansible all -i hosts.list -m shell -a "if test -d /the/dir/; then echo '$HOSTNAME exists'; fi"
Can anyone please tell me why only exists is returned instead of HOSTNAME exists?
Because it's included in double quotes ("), your $HOSTNAME is being interpreted by your local shell. You probably want to write instead:
ansible all -i hosts.list -m shell -a \
'if test -d /the/dir/; then echo "$HOSTNAME exists"; fi'
In most cases you will want to use single quotes (') for your argument to -a when you're using the shell module to prevent your local shell from expanding variables, etc, that are intended to be expanded on the remote host.
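The difference can be seen without ansible at all; a local sketch (HOSTNAME is assigned by hand here just to make the expansion visible):

```shell
#!/bin/sh
HOSTNAME=local-box
# Double quotes: the local shell expands $HOSTNAME immediately.
double="echo '$HOSTNAME exists'"
# Single quotes: the literal $HOSTNAME survives for the remote shell.
single='echo "$HOSTNAME exists"'
echo "double-quoted: $double"
echo "single-quoted: $single"
# prints:
# double-quoted: echo 'local-box exists'
# single-quoted: echo "$HOSTNAME exists"
```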

Having trouble with running a command over ssh

This command isn't working as expected:
ssh root@<machineIP> -- sh -c "echo \$\(cat /tmp/testfile\) > /testfile"
My intent is to copy the contents of /tmp/testfile to /testfile. Real simple. But I find that /testfile has nothing in it. The file is created (in case it doesn't exist).
The echo command works fine (minus the escapes) if run from the command line on the remote server, but it doesn't work when run through ssh. Originally I had a more complex command with sed, but I simplified what wasn't working down to this.
Both remote and local servers are CentOS7.
I found I had to escape the $ and (). Is this causing me problems? Is this not the correct way to run this command?
ssh will take all the command parameters, join them on spaces and then run that in the remote shell.
That means that the command being executed is basically what you'd get if you replaced ssh with echo:
$ echo sh -c "echo \$(cat /tmp/testfile) > /testfile"
sh -c echo $(cat /tmp/testfile) > /testfile
The quoting is lost, so the resulting command is equivalent to sh -c echo > /testfile which makes it empty.
Instead, just take the command you want to run and wrap it in single quotes:
ssh root@host 'echo $(cat /tmp/testfile) > /testfile'
Make sure never to use echo or $() when copying files though. I'm assuming this is a stand-in for something better.
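The joining behaviour can be reproduced locally without ssh (a sketch; the /tmp paths are just for illustration):

```shell
#!/bin/sh
printf 'hello\n' > /tmp/testfile
# ssh joins its arguments on spaces, so the remote shell effectively
# runs: sh -c echo > /testfile. Only "echo" is the -c script; the
# redirection captures its bare newline. Reproduced locally:
sh -c echo > /tmp/out
echo "out contains: '$(cat /tmp/out)'"   # prints: out contains: ''
```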

Is it possible to create a BASH script which will ssh into a remote machine and continue doing things there?

I've tried doing it myself but after the script logs into the remote machine, the script stops, which is understandable as the remote machine is not aware of the script, but can it be done?
Thanks
Try a here-doc
ssh user@remote << 'END_OF_COMMANDS'
echo all this will be executed remotely
user=$(whoami)
echo I am $user
pwd
END_OF_COMMANDS
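The quoted delimiter ('END_OF_COMMANDS') is what keeps $(whoami) and $user from being expanded by the local shell; a local sketch with sh standing in for the ssh hop:

```shell
#!/bin/sh
# sh stands in for "ssh user@remote"; because the here-doc delimiter
# is quoted, the inner shell (not the outer one) runs $(whoami).
sh <<'END_OF_COMMANDS'
user=$(whoami)
echo "running as $user"
END_OF_COMMANDS
```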
When you say "continue doing stuff there", you might mean simply interacting with the remote session; in that case:
expect -c 'spawn ssh user@host; interact'
There are multiple ways:
ssh user@remote < script.txt
scp script user@remote:/tmp/somescript.sh ; ssh user@remote /tmp/somescript.sh
Write an expect script.
For first 2 options, I would recommend using public/private key pair for logging in, for automation sake.
You need to provide the remote command at the end of the ssh invocation:
$ ssh user@remote somecommand
If you need to achieve a series of commands, then it's easier to write a script, copy it to the remote machine (using, e.g. scp) and call it as shown above.
I prefer perl in such cases:
use Net::SSH::Perl;
my $ssh = Net::SSH::Perl->new($host);
$ssh->login($user, $pass);
my($stdout, $stderr, $exit) = $ssh->cmd($cmd);
It is less error-prone and gives me better control while capturing stdout, stderr and exit status of the command.
Something like this in your ~/.profile (or ~/.bash_profile, for instance) should do the trick:
function remote {
    ssh -t -t user@remote_server "$*"
}
and then call
remote somecommandofyours
I solved this problem by passing a whole function over ssh using declare -f to the remote server and then executing it there. This can actually be done quite simply. The only caveat is that you have to make sure that any variables used by the function are either defined inside of it or passed in as arguments. If your function uses any sort of environment variables, aliases, other functions, or any other variables that were defined external to it, it will not work on the remote machine because those definitions will not exist there.
So, here's how I did it:
somefunction() {
host=$1
user=$2
echo "I'm running a function remotely on $(hostname) that was sent from $host by $user"
}
ssh $someserver "$(declare -f somefunction);somefunction $(hostname) $(whoami)"
Note that if your function does use any sort of 'global' variables, these can be substituted in after the declare -f by doing pattern substitution with sed or, as I prefer, perl.
declare -f somefunction | perl -pe "s/(\\$)global_var/$global_var/g"
This will replace any reference to the global_var in the function with the value of the variable.
Cheers!
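The same trick can be tried locally, with bash -c standing in for the ssh hop (greetfunc is a made-up example function):

```shell
#!/bin/bash
greetfunc() {
    # Everything the function needs arrives as arguments.
    echo "greeting for $1 from $2"
}
# Serialize the definition and run it in a fresh shell, just as
# "$(declare -f ...)" would be run on the remote machine.
bash -c "$(declare -f greetfunc); greetfunc remote-host local-user"
# prints: greeting for remote-host from local-user
```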

shell script, for loop, ssh and alias

I'm trying to do something like this, I need to take backup from 4 blades, and
all should be stored under the /home/backup/esa location, which contains 4
directories with the name of the nodes (like sc-1, sc-2, pl-1, pl-2). Each
directory should contain respective node's backup information.
But I see that "from which node I execute the command, only that data is being
copied to all 4 directories". any idea why this happens? My script is like this:
for node in $(grep "^node" /cluster/etc/cluster.conf | awk '{print $4}');
do echo "Creating backup for node ${node}";
ssh $node source /etc/profile.d/bkUp.sh;
asBackup -b /home/backup/esa/${node};
done
Your problem is this piece of the code:
ssh $node source /etc/profile.d/bkUp.sh;
asBackup -b /home/backup/esa/${node};
It does:
Create a remote shell on $node
Execute the command source /etc/profile.d/bkUp.sh in the remote shell
Close the remote shell and forget about anything done in that shell!!
Run asBackup on the local host.
This is not what you want. Change it to:
ssh "$node" "source /etc/profile.d/bkUp.sh; asBackup -b '/home/backup/esa/${node}'"
This does:
Create a remote shell on $node
Execute the command(s) source /etc/profile.d/bkUp.sh; asBackup -b '/home/backup/esa/${node}' on the remote host
Make sure that /home/backup/esa/${node} is a NFS mount (otherwise, the files will only be backed up in a directory on the remote host).
Note that /etc/profile is a very bad place for backup scripts (or their config). Consider moving the setup/config to /home/backup/esa which is (or should be) shared between all nodes of the cluster, so changing it in one place updates it everywhere at once.
Also note the usage of quotes: The single and double quotes make sure that spaces in the variable node won't cause unexpected problems. Sure, it's very unlikely that there will be spaces in "$node" but if there are, the error message will mislead you.
So always quote properly.
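A quick local illustration of what unquoted expansion does to a value containing a space (a hypothetical node name):

```shell
#!/bin/sh
node="sc 1"
set -- $node                # unquoted: word splitting applies
echo "unquoted: $# words"   # prints: unquoted: 2 words
set -- "$node"              # quoted: stays one argument
echo "quoted: $# words"     # prints: quoted: 1 words
```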
The formatting of your question is a bit confusing, but it looks as if you have a quoting problem. If you do
ssh $node source /etc/profile.d/bkUp.sh; esaBackup -b /home/backup/esa/${node}
then the command source is executed on $node. After the command finishes, the remote connection is closed and with it, the shell that contains the result of sourcing /etc/profile.d/bkUp.sh. Now the esaBackup command is run on the local machine. It won't see anything that was set up in bkUp.sh.
What you need to do is put quotes around all the commands you want the remote shell to run -- something like
ssh $node "source /etc/profile.d/bkUp.sh; esaBackup -b /home/backup/esa/${node}"
That will make ssh run the full list of commands on the remote node.

Pass variables to remote script through SSH

I am running scripts on a remote server from a local server via SSH. The script gets copied over using SCP in a first place, then called while being passed some arguments as follows:
scp /path/to/script server.example.org:/another/path/
ssh server.example.org \
MYVAR1=1 \
MYVAR2=2 \
/another/path/script
This works fine and on the remote server, the variables MYVAR1 and MYVAR2 are available with their corresponding value.
The issue is that these scripts are in constant development which requires the SSH command to be changed every-time a variable is renamed, added, or removed.
I'm looking for a way of passing all the local environment variables to the remote script (since MYVAR1 and MYVAR2 are actually local environment variables) which would address the SSH command maintenance issue.
Since MYVAR1=1 \ and MYVAR2=2 \ are lines which follow the format of the env command output, I tried replacing them with the actual command as follows:
ssh server.example.org \
`env`
/another/path/script
This seems to work for "simple" env output lines (e.g. SHELL=/bin/bash or LOGNAME=sysadmin), however I get errors for more "complex" output lines (e.g. LS_COLORS=rs=0:di=01;34:ln=01;[...] which gives errors such as -bash: 34:ln=01: command not found). I can get rid of these errors by unsetting the variables corresponding to those complex output lines before running the SSH command (e.g. unset LS_COLORS, then ssh [...]), however I don't find this solution very reliable.
Q: Does anybody know how to pass all the local environment variables to a remote script via SSH?
PS: the local environment variables are not environment variables available on the remote machine so I cannot use this solution.
Update with solution
I ended using sed to format the env command output from VAR=VALUE to VAR="VALUE" (and concatenating all lines in to 1) which prevents bash from interpreting some of the output as commands and fixes my problem.
ssh server.example.org \
`env | sed 's/\([^=]*\)=\(.*\)/\1="\2"/' | tr '\n' ' '` \
"/another/path/script"
I happened to read the sshd_config man page unrelated to this and found the option AcceptEnv:
AcceptEnv
    Specifies what environment variables sent by the client will be copied into the session's environ(7). See SendEnv in ssh_config(5) for how to configure the client. Note that environment passing is only supported for protocol 2. Variables are specified by name, which may contain the wildcard characters '*' and '?'. Multiple environment variables may be separated by whitespace or spread across multiple AcceptEnv directives. Be warned that some environment variables could be used to bypass restricted user environments. For this reason, care should be taken in the use of this directive. The default is not to accept any environment variables.
Maybe you could use this with AcceptEnv *? I haven't got a box with sshd handy, but try it out!
The problem is that ; marks the end of your command. You must escape them.
Try with this command:
env | sed 's/;/\\;/g'
Update:
I tested the command with a remote host and it worked for me using this command:
var1='something;using semicolons;'
ssh hostname "`env | sed 's/;/\\\\;/g' | sed 's/.*/set &\;/g'` echo \"$var1\""
I double-escape ; with \\\\; and then use another sed substitution to output variables in the form set name=value;. Doing this ensures every variable gets set correctly on the remote host before executing the command.
You should use set instead of env.
From the bash manual:
Without options, the name and value of each shell variable are displayed in a format that can be reused as input for setting or resetting the currently-set variables.
This will take care of all your semi-colon and backslash issues.
scp /path/to/script server.example.org:/another/path/
set > environment
scp environment server.example.org:/another/path/
ssh server.example.org "source environment; /another/path/script"
If there are any variables you don't want to send over you can filter them out with something like:
set | grep -v "DONT_NEED" > environment
You could also update the ~/.bash_profile on the remote system to run the environment script as you log in, so you wouldn't have to run it explicitly:
ssh server.example.org "/another/path/script"
How about uploading the environment at the same time?
scp /path/to/script server.example.org:/another/path/
env > environment
scp environment server.example.org:/another/path
ssh server.example.org "source environment; /another/path/script"
Perl to the rescue:
#!/usr/bin/perl
use strict;
use warnings;
use Net::OpenSSH;
use Getopt::Long;
my $usage = "Usage:\n $0 --env=FOO --env=BAR ... [user\@]host command args\n\n";
my @envs;
GetOptions("env=s" => \@envs)
    or die $usage;
my $host = shift @ARGV;
die $usage unless defined $host and @ARGV;
my $ssh = Net::OpenSSH->new($host);
$ssh->error and die "Unable to connect to remote host: " . $ssh->error;
my @cmds;
for my $env (@envs) {
    next unless defined $ENV{$env};
    push @cmds, "export " . $ssh->shell_quote($env) . '=' . $ssh->shell_quote($ENV{$env});
}
my $cmd = join('&&', @cmds, '(' . join(' ', @ARGV) . ')');
warn "remote command: $cmd\n";
$ssh->system($cmd);
And it will not break in case your environment variables contain funny things as quotes.
This solution works well for me.
Suppose you have script which takes two params or have two variables:
#!/bin/sh
echo "local"
echo "$1"
echo "$2"
/usr/bin/ssh root@192.168.1.2 "/path/test.sh \"$1\" \"$2\";"
And script test.sh on 192.168.1.2:
#!/bin/bash
echo "remote"
echo "$1"
echo "$2"
Output will be:
local
This is first params
And this is second
remote
This is first params
And this is second
