I need to execute the shell command as follows:
ssh <device> "command"
command is invoked as:
$(typeset); <function_name> \"argument_string\"; cd ...; ls ...
How exactly to quote here? Is this correct?
""$(typeset); <function_name> \"arguement_string\""; cd ...; ls ..."
I am confused with this quoting in shell scripts.
Don't try to do the quoting by hand -- ask the shell to do it for you!
command_array=( function_name "first argument" "second argument" )
printf -v command_str '%q ' "${command_array[@]}"
ssh_str="$(typeset); $command_str"
ssh machine "$ssh_str"
You can then build up command_array as you wish -- using logic to conditionally append values, with only the kind of quoting you'd ordinarily use for those values -- and let printf %q add all the additional quoting needed to make the content safe to pass through ssh.
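For example, a minimal sketch of that conditional build-up (the rsync flags and the verbose/src_dir/dest_dir variables are hypothetical, not from the question):

command_array=( rsync -a )
if [ -n "$verbose" ]; then
  command_array+=( -v )                     # append a flag only when asked for
fi
command_array+=( "$src_dir" "$dest_dir" )   # values containing spaces stay intact
printf -v command_str '%q ' "${command_array[@]}"
ssh machine "$command_str"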
If you're trying to incrementally build up a script, you can do that like so:
remote_script="$(typeset)"$'\n'
safe_append_command() {
local command_str
printf -v command_str '%q ' "$@"
remote_script+="$command_str"$'\n'
}
safe_append_command cp "$file" "$destination"
safe_append_command tar -cf /tmp/foo.tar "${destination%/*}"
# ...etc...
ssh machine "$remote_script"
Note that in this case, all expansions take place locally, when the script is being generated, and shell constructs such as redirection operators cannot be used (except by embedding them in a function you then pass to the remote system with typeset). Doing so means that no data passed to safe_append_command can be treated as code -- foreclosing large classes of potential security holes at the cost of flexibility.
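If you do need a redirection on the remote side, one approach consistent with that restriction (an untested sketch; capture_listing and the paths are made up) is to bury the redirection inside a function that typeset then carries across:

capture_listing() { ls -l "$1" > /tmp/listing.txt; }   # redirection lives inside the function body
remote_script="$(typeset)"$'\n'                        # typeset output now includes capture_listing
safe_append_command capture_listing "$destination"
ssh machine "$remote_script"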
I would use a here document:
ssh machine <<'EOF'
hello() {
echo "hello $1!"
}
hello "world"
EOF
Note that I wrapped the starting EOF in single quotes. Doing so prevents bash from interpreting variables or command substitutions in the local shell.
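For contrast, with an unquoted delimiter the expansions happen locally, before ssh runs (a small sketch):

name=world
ssh machine <<EOF
echo "hello $name"   # expanded locally; the remote shell just sees: echo "hello world"
EOF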
I have a test.sh file which takes as a parameter a bash command, it does some logic, i.e. setting and checking some env vars, and then executes that input command.
#!/bin/bash
#Some other logic here
echo "Run command: $#"
eval "$#"
When I run it, here's the output
% ./test.sh echo "ok"
Run command: echo ok
ok
But the issue is, when I pass something like sh -c 'echo "ok"', I don't get the output.
% ./test.sh sh -c 'echo "ok"'
Run command: sh -c echo "ok"
%
So I tried replacing eval with exec, tried executing $@ directly (without eval or exec), and even tried executing the command and saving its output to a variable -- still no use.
Is there any way to run the passed command in this format and get the output?
Use case:
The script is used as an entrypoint for the docker container, it receives the parameters from docker CMD and executes those to run the container.
As a quickfix I can remove the sh -c and pass the command without it, but I want to make the script reusable and not to change the commands.
TL;DR:
This is a typical use case (perform some business logic in a Docker entrypoint script before running a compound command, given at command line) and the recommended last line of the script is:
exec "$#"
Details
To further explain this line, some remarks and hyperlinks:
As per the Bash user manual, exec is a POSIX shell builtin that replaces the shell [with the command supplied] without creating a new process.
As a result, using exec like this in a Docker entrypoint context is important because it ensures that the CMD program that is executed will still have PID 1 and can directly handle signals, including that of docker stop (see also that other SO answer: Speed up docker-compose shutdown).
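Putting those points together, a minimal entrypoint sketch (the file name and the APP_MODE setup are hypothetical stand-ins for your business logic):

#!/usr/bin/env bash
# entrypoint.sh -- run setup, then hand PID 1 over to the CMD
set -e
export APP_MODE="${APP_MODE:-production}"   # example of pre-run business logic
exec "$@"                                   # replace this shell with the CMD program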
The double quotes ("$#") are also important to avoid word splitting (namely, ensure that each positional argument is passed as is, even if it contains spaces). See e.g.:
#!/usr/bin/env bash
printargs () { for arg; do echo "$arg"; done; }
test0 () {
echo "test0:"
printargs $@
}
test1 () {
echo "test1:"
printargs "$#"
}
test0 /bin/sh -c 'echo "ok"'
echo
test1 /bin/sh -c 'echo "ok"'

Output:
test0:
/bin/sh
-c
echo
"ok"
test1:
/bin/sh
-c
echo "ok"
Finally, eval is a powerful bash builtin that is (1) unneeded for your use case, and (2) not advised in general, in particular for security reasons -- e.g., if the string argument of eval relies on some user-provided input. For details on this issue, see e.g. https://mywiki.wooledge.org/BashFAQ/048 (which recaps the few situations where one would want to use this builtin; typically, the command eval "$(ssh-agent -s)").
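Applied to the test.sh from the question, the whole fix is to swap eval for exec (sketch):

#!/bin/bash
# Some other logic here
echo "Run command: $@"
exec "$@"

Now ./test.sh sh -c 'echo "ok"' prints ok, because sh receives -c and echo "ok" as separate, intact arguments.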
I am trying to ssh to another server in a shell script and run some scripts.
Currently my line looks something like:
ssh user@$SERVER '$(typeset -a >> /dev/null); PROFILE_LOCATION=`locate db2profile| grep -i $INST_NAME| grep -v bak`; . $PROFILE_LOCATION; function1; function2;'
I've tried both ' and " , as well as using a combination of those with \; or ';'
How do I use the variables I have in my current shell script in my ssh into another server and running multiple commands? Thanks!!
If you want function declarations, and your shell is bash, use typeset -p rather than typeset -a (which will provide a textual dump of variables but not functions). Also, you need to actually run that in a context where it'll be locally evaluated (and ensure that your remote shell is something that understands it, not /bin/sh).
The following hits all those points:
evaluate_db2profile() {
local db2profile
db2profile=$(locate db2profile | grep -i "$INST_NAME" | grep -v bak | head -n 1)
[ -n "$db2profile" ] && . "$db2profile"
}
ssh "user#$SERVER" bash -s <<EOF
$(typeset -p)
evaluate_db2profile
function1
function2
EOF
Because <<EOF is used rather than <<'EOF', the typeset -p command is run locally and substituted into the heredoc. (You could also accomplish this by using double rather than single quotes in the one-line formulation, but see below).
Defining evaluate_db2profile locally as a function ensures that typeset -p will emit it in a format that the remote shell can evaluate, without need to be concerned about escaping.
Using bash -s on the remote command line ensures that the shell interpreting your functions is bash, not /bin/sh. If your code is written for ksh, run ksh -s to achieve that same effect.
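A quick way to convince yourself that the local variables really do cross over (a sketch; the escaped \$INST_NAME is expanded remotely, not locally):

ssh "user@$SERVER" bash -s <<EOF
$(typeset -p INST_NAME)
echo "remote sees INST_NAME=\$INST_NAME"
EOF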
Suppose you have the following command stored in a variable:
COMMAND='echo hello'
What's the difference between
$ eval "$COMMAND"
hello
$ bash -c "$COMMAND"
hello
$ $COMMAND
hello
? Why is the last version almost never used if it is shorter and (as far as I can see) does exactly the same thing?
The third form is not at all like the other two -- but to understand why, we need to go into the order of operations bash follows when interpreting a command, and look at which of those steps are followed when each method is in use.
Bash Parsing Stages
1. Quote Processing
2. Splitting Into Commands
3. Special Operator Parsing
4. Expansions
5. Word Splitting
6. Globbing
7. Execution
Using eval "$string"
eval "$string" follows all the above steps starting from #1. Thus:
Literal quotes within the string become syntactic quotes
Special operators such as >() are processed
Expansions such as $foo are honored
Results of those expansions are split on whitespace (per IFS) into separate words
Those words are expanded as globs if they contain glob characters and matches exist, and finally the command is executed.
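A quick demonstration that eval honors all of these stages (a sketch):

s='greeting="hello"; echo "$greeting" > /tmp/out.txt'
eval "$s"          # the ;, the quotes, and the redirection are all honored
cat /tmp/out.txt   # prints: hello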
Using sh -c "$string"
...performs the same steps as eval does, but in a new shell launched as a separate process; thus, changes to variable state, the current directory, etc. will expire when this new process exits. (Note, too, that the new shell may be a different interpreter supporting a different language; i.e., sh -c "foo" will not support the same syntax that bash, ksh, zsh, etc. do).
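For instance, state changes made inside the child shell do not survive (sketch):

sh -c 'cd /tmp; myvar=1'   # directory change and variable die with the child
pwd                        # still the original directory
echo "${myvar:-unset}"     # prints: unset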
Using $string
...starts at step 5, "Word Splitting".
What does this mean?
Quotes are not honored.
printf '%s\n' "two words" will thus parse as [printf] ['%s\n'] ["two] [words"], as opposed to the usual/expected behavior of [printf] [%s\n] [two words] (with the quotes being consumed by the shell).
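A quick way to see those literal quote characters surviving as data (sketch):

s='printf %s\n "two words"'
$s
# output:
# "two
# words"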
Splitting into multiple commands (on ;s, &s, or similar) does not take place.
Thus:
s='echo foo && echo bar'
$s
...will emit the following output:
foo && echo bar
...instead of the following, which would otherwise be expected:
foo
bar
Special operators and expansions are not honored.
No $(foo), no $foo, no <(foo), etc.
Redirections are not honored.
>foo or 2>&1 is just another word created by string-splitting, rather than a shell directive.
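For example (sketch):

s='echo hi > /tmp/out.txt'
$s            # prints: hi > /tmp/out.txt -- no file is created
eval "$s"     # by contrast, this creates /tmp/out.txt containing hi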
$ bash -c "$COMMAND"
This version starts up a new bash interpreter, runs the command, and then exits, returning control to the original shell. You don't need to be running bash at all in the first place to do this, you can start a bash interpreter from tcsh, for example. You might also do this from a bash script to start with a fresh environment or to keep from polluting your current environment.
EDIT:
As @CharlesDuffy points out, starting a new bash shell in this way will clear shell variables, but environment variables will be inherited by the spawned shell process.
Using eval causes the shell to parse your command twice. In the example you gave, executing $COMMAND directly or doing an eval are equivalent, but have a look at the answer here to get a more thorough idea of what eval is good (or bad) for.
There are at least times when they are different. Consider the following:
$ cmd="echo \$var"
$ var=hello
$ $cmd
$var
$ eval $cmd
hello
$ bash -c "$cmd"
$ var=world bash -c "$cmd"
world
which shows the different points at which variable expansion is performed. It's even clearer if we do set -x first:
$ set -x
$ $cmd
+ echo '$var'
$var
$ eval $cmd
+ eval echo '$var'
++ echo hello
hello
$ bash -c "$cmd"
+ bash -c 'echo $var'
$ var=world bash -c "$cmd"
+ var=world
+ bash -c 'echo $var'
world
We can see here much of what Charles Duffy talks about in his excellent answer. For example, attempting to execute the variable directly prints $var because parameter expansion and those earlier steps had already been done, and so we don't get the value of var, as we do with eval.
The shell spawned by bash -c only inherits exported variables from the parent, and since I didn't export var, it's not available to the new shell.
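Exporting the variable changes that (sketch, continuing the same session):

export var
bash -c "$cmd"   # now prints: hello, since var is in the child's environment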
In my Bash CGI script, I take a command passed as GET parameter and execute it. This could be:
CMD='ls -al'
$CMD
Which works fine and produces expected output. But if I try to pass two commands with
CMD='ls -al; echo hello'
$CMD
or
CMD='ls -al && echo hello'
$CMD
neither command gets executed.
How can I run multiple commands from the same line/variable in my bash CGI?
You can execute the contents of a variable as bash code with bash -c:
# UNSAFE, DO NOT USE
cmd='ls -al; echo hello'
bash -c "$cmd"
Alternatively, depending on the context you want to run it in, you can use eval "$cmd" to run it as if it was a line in your own script, rather than a separate piece of shell code to execute:
# UNSAFE, DO NOT USE
cmd='ls -al; echo hello'
eval "$cmd"
Both of these methods have serious implications for security and correctness, so I felt I had to add warnings to prevent them from being copied out of context.
If you are deliberately writing a remote shell or rootkit meant to run insecure user input, you can of course ignore the warnings.
I have a bash function which takes an array as an argument and it executes multiple commands.
Based on user input I want to run all the commands in this function either locally or on a remote machine. The commands contain many quotes, and escaping them all by hand would get ugly.
This is how I am invoking the function right now:
run_tool_commands "${ARGS[@]}"
function run_tool_commands {
ARGS=("$#")
.. Loads of commands here
}
if [ case 1 ]; then
# run locally
else
# run remotely
fi
This seems helpful, but it only looks possible if I pipe the function text through a here document.
If
all the commands that are to be executed under run_tool_commands are present on remote system as well,
All commands are executables, & not alias/functions
All these executables are in default paths. (No need to source .bashrc or any other file on the remote.)
Then perhaps this code may work: (not tested):
{ declare -f run_tool_commands; echo run_tool_commands "${ARGS[@]}"; } | ssh -t user@host
OR
{ declare -f run_tool_commands;
  echo -n "run_tool_commands ";
  for arg in "${ARGS[@]}"; do
    echo -ne "\"$arg\" ";
  done; } | ssh -t user@host
The for loop is used to preserve quotes around the arguments (may or may not be required; not tested).
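If the arguments may themselves contain double quotes or other metacharacters, a variant in the spirit of the printf %q approach above (also untested) avoids hand-rolled quoting entirely:

{ declare -f run_tool_commands
  printf '%s ' run_tool_commands
  printf '%q ' "${ARGS[@]}"
  echo
} | ssh -t user@host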