bash variable expansion piped to ssh - linux

I can't get my head around how to declare / refer to these variables in a shell script.
Given the contents of commands_to_execute_on_remote.sh as:
for c in 1 2 3 4 5
do
supervisorctl restart broadcast-server-${ENVIRONMENT_NAME}-${c}
done
Where ENVIRONMENT_NAME is declared as an environment variable on the local machine...
When I'm running this from a local machine as, e.g.:
cat commands_to_execute_on_remote.sh | ssh user@123.456.789
How do I refer to those variables in order that, by the time the script is piped to the remote box, $ENVIRONMENT_NAME is populated with the actual value but $c is - obviously - a loop counter within the script?

Putting the commands in a separate file is an unnecessary complication.
ssh user@123.45.67.89 <<____EOF
for c in 1 2 3 4 5; do
supervisorctl restart broadcast-server-${ENVIRONMENT_NAME}-\$c
done
____EOF
Notice how you want $c to be evaluated in the remote ssh shell (so you need to escape it from your local shell) while $ENVIRONMENT_NAME gets expanded by your local shell before the command line is sent to the remote server.
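A quick way to see exactly what the remote side will receive is to substitute cat for ssh (a local sanity check; ENVIRONMENT_NAME=prod is just an assumed example value):
ENVIRONMENT_NAME=prod
cat <<____EOF
for c in 1 2 3 4 5; do
supervisorctl restart broadcast-server-${ENVIRONMENT_NAME}-\$c
done
____EOF
This prints the loop with supervisorctl restart broadcast-server-prod-$c in the body: the environment name already substituted, $c left alone for the remote shell.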
If you insist on putting the script snippet in a file, something like
sed "s/[$][{]ENVIRONMENT_NAME[}]/$ENVIRONMENT_NAME/" commands_to_execute_on_remote.sh |
ssh user@123.45.67.89
allows for that (and avoids the ugly useless cat). (If you remove the technically unnecessary braces, you need to adjust the regex; if ENVIRONMENT_NAME could contain a slash, use a different separator like "s%...%...%".)
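For instance, a sketch of the %-delimited form, assuming an environment name such as eu/west that contains a slash:
sed "s%[$][{]ENVIRONMENT_NAME[}]%$ENVIRONMENT_NAME%" commands_to_execute_on_remote.sh |
ssh user@123.45.67.89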

Edit remote file with variables

I've written this script but it does not work:
E_OPT=" some_host(ro,insecure) some_host2(ro,insecure)"
echo -n "Insert path to export [ ex: /path/test ]"
read PATH
FINAL=$PATH$E_OPT
ssh SERVER echo "$FINAL" >> file
or
ssh SERVER echo '$FINAL >> file'
or
ssh SERVER 'echo "$FINAL" >> file'
How can I pass text in variable to append in remote files?
There are a couple of problems here. The first is with read PATH. The variable PATH is one of many that have special meaning to the system: it defines where to look for executables (i.e. commands). As soon as you redefine it as something else, the system will be unable to find executables like ssh, so commands will start to fail. Solution: use lowercase or mixed-case variable names to avoid conflicts with any of the special-meaning variables (which are all uppercase).
Second, all of your attempts at quoting are wrong. The command is going to go through two levels of shell parsing: first on the local computer (where you want the variable $FINAL -- or better $final -- to be expanded), and then on the remote server (where you want the parentheses to be in quotes, so they don't cause shell syntax errors). This means you need two levels of quoting: an outer level that gets parsed and removed by the local shell, and an inner level that gets parsed and removed by the remote shell. Variable expansion is only done in double-quotes, not single-quotes, so the outer level has to be double-quotes. The inner level could be either, but single-quotes are going to be easiest:
ssh SERVER "echo '$final' >> file"
Now, it may look like that $final variable is in single-quotes so it won't get expanded; but quotes don't nest! As far as the local shell is concerned, that's a double-quoted string that happens to contain some single-quotes or apostrophes or something that doesn't really matter. When the remote shell receives the command, the variable has been substituted and the outer quotes removed, so it looks like this:
echo '/some/path some_host(ro,insecure) some_host2(ro,insecure)' >> file
...which is what you want.
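If in doubt, you can preview what the remote shell will receive by replacing ssh SERVER with echo (a harmless local test; the value of final is just the example from the question):
final='/some/path some_host(ro,insecure) some_host2(ro,insecure)'
echo "echo '$final' >> file"
The output is exactly the echo '...' >> file line shown above, which is what the remote shell would execute.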
You must export the FINAL variable; besides that, you also need to execute your script with a dot at the beginning, like:
. server-script.sh
This evaluates the variables in the local bash session instead of a sub-shell.

Proxying commands across an SSH connection

I'd like to be able to "spoof" certain commands on my machine, actually invoking them on a remote system. For example, whenever I run:
cmd options
I'd like the actual command to be:
ssh user@host cmd options
Ideally I'd like to have a folder called spoof, add it to my PATH, and have an executable in there called cmd which does the spoofing. If I have a lot of commands, this could get tedious. Anyone have ideas of a good way to go about this? Such that I can add and remove a lot of commands in the future? And, I'd like to be able to pass all the arguments exactly (or as exact as possible) and every single command I want to spoof would just have the ssh user@host in front of it.
The reason for this is I'm running a container (specifically singularity) on my machine, and there are certain commands I don't really want to containerize, but still want to run from within the container. I've found I can get the functionality I want by just appending the ssh in front of it. Examples are sbatch and matlab which are a pain to containerize and I'm fine with just using ssh to call them. Files that these programs use are written to a bind point so the host machine can see them just fine.
The following script can be hardlinked under all the names of commands you wish to transparently proxy:
#!/usr/bin/env bash
printf -v str '%q ' "${0##*/}" "$@"
ssh host "$str"
SSH combines all its arguments into a single string, which is then executed by a remote shell. To ensure that the remote arguments are identical to the local ones, the values need to be escaped; otherwise, somecommand "hello world" and somecommand "hello" "world" would be represented identically over the wire.
In an appropriately extended printf (including both the bash and ksh implementations), %q is replaced with an escaped form of the corresponding value, which evaluates back to the original (literal) text if interpreted by a shell later.
printf -v varname stores the output of printf in a variable named varname without the overhead/inefficiency of a command substitution. (In ksh93, varname=$(printf ...) is optimized to skip the subshell overhead, so this is not necessary there.)
$0 evaluates to argv[0], which is by convention the name of the command currently being run. (This can be overridden, but you trust your users to behave reasonably... right?)
${0##*/} is a parameter expansion which returns only content after the last / in $0 (should it in fact contain any slashes; otherwise, the original value is used unmodified).
"$#" refers to the exact argument vector passed to your script.

bash + Linux + how to ignore the character "!"

I want to send a little script to a remote machine over ssh.
The script is:
#!/bin/bash
sleep 1
reboot
but I get "event not found" - because of the "!":
ssh 183.34.4.9 "echo -e '#!/bin/bash\nsleep 1\reboot>'/tmp/file"
-bash: !/bin/bash\nsleep: event not found
How do I ignore the "!" char so the script is sent successfully over ssh?
Remark: I can't use "\" before the "!" because then I get
more /tmp/file
#\!/bin/bash
sleep 1
Use set +H before your command to disable ! style history substitution:
set +H
ssh 183.34.4.9 "echo -e '#!/bin/bash\nsleep 1\reboot>'/tmp/file"
# enable history expansion again
set -H
I think your command line is not well formatted. You can send this:
ssh 183.34.4.9 'echo -e "#!/bin/bash\nsleep 1\nreboot">/tmp/file'
When I say "not well formatted" I mean you put ">" inside the "echo", and you forgot to add "n" before "reboot": you put "\reboot", which will be interpreted as "CR" (carriage return) followed by an "eboot" command (which I don't think exists).
But what did the trick here was to invert the quotes, changing (') to (") and vice versa.
Bash is running interactively (meaning you are feeding commands to it from standard input rather than exec(2)ing a command from a shell script), so you don't need to include the line #!/bin/bash in that case (what's more, bash will just ignore it as a comment, but not the included bang, as that is part of the active history mechanism).
But why? The first two characters of an executable file (any file capable of being exec(2)ed from secondary storage, which is not your case) have a special meaning for the kernel and for the shell: they are the magic number that identifies the kind of executable file being loaded. This allows the kernel to select the proper loading routine for the binary executable format (and is what allows you, for example, to execute BSD programs on Linux kernels, and vice versa).
A special value for this magic number is composed of the two characters # and ! (in that order); it forces the kernel to read the complete first line of the file and load the executable specified in that line instead, allowing you to execute shell scripts for different interpreters directly from the command line. It works on purpose, as the # character commonly introduces a comment in shell script parlance. This only happens when the shell interpreting the commands is not an interactive shell: when a shell loads a script with those characters, it normally reads the first line to check for the #! mark and loads the proper interpreter, replicating the kernel function that does this. Despite the line being a comment to the shell, it does this so that files not stored on secondary storage (the only kind the exec(2) system call can deal with) but coming from stdin (as yours does) can still be treated as executables.
As your shell is running interactively and you want to execute its commands without a shell change, you don't need that line and can eliminate it completely without having to disable the bang character.
Sorry, but the solution given about executing the shell with the -H option will probably not be viable, as the shell executing the commands is the login shell on the target machine, so you cannot provide specific parameters to it (parameters are selected by the login(8) program and normally don't include arbitrary options like -H).
The best solution is to eliminate the #!/bin/bash line entirely, as you are not going to exec(2) that program on the target. In case you want to select the shell from the input line (say, the user has a different shell installed as their login shell), it is better to invoke the wanted shell explicitly in the command line and pass it the shell commands you want to execute (through stdin, or by making it read the script as a file), but again without the #! line.
NOTE
It's important to ensure the whole thing gets executed, so it's best to transfer the complete script to the destination target first and, once you are sure it has arrived intact, execute it as a whole. Then your #! first line will be properly processed, as the executable will be run by means of an exec(2) made by the kernel.
Example:
DIRECTORY=/bla/bla
FILE=/path/to/file
OUTPUT=/path/to/output
# this is the command we want to pass through the line
cat <<EOF | ssh user@target "cat >>/tmp/shell.sh"
cd $DIRECTORY
foo $FILE >$OUTPUT
exit 0
EOF
# we have copied the script file in a remote /tmp/shell.sh
# and we are sure it has passed correctly, so it's ready
# for local execution there.
# now, execute it.
# the remote shell won't be interactive, and you'll ensure that it is /bin/bash
ssh user@target "/bin/bash /tmp/shell.sh" >remote_shell.out
A more sophisticated system is one that lets you sign the shell script before sending it and verify the signature before executing it, so you are protected against possible trojan horse attacks. But that is out of scope for this explanation.
Another alternative is to use the batch(1) command remotely and pass it all the commands you want executed. You'll get a sessionless execution environment, better suited to the task at hand (although the script output will be emailed to the target user running the script).
Interactively, beware that ! triggers history expansion inside double quotes
from here: https://riptutorial.com/bash/example/2465/quoting-literal-text
My recommended solution is to use single quotes to define the string (use double quotes " freely within it; a literal single quote cannot appear inside a single-quoted string and has to be written as '\''):
ssh 183.34.4.9 'echo -e "#!/bin/bash\nsleep 1\nreboot" > /tmp/file'

How can I write a bash script that sets a variable that's available to the user in the terminal? [duplicate]

I'm trying to write a shell script that, when run, will set some environment variables that will stay set in the caller's shell.
setenv FOO foo
in csh/tcsh, or
export FOO=foo
in sh/bash only set it during the script's execution.
I already know that
source myscript
will run the commands of the script rather than launching a new shell, and that can result in setting the "caller's" environment.
But here's the rub:
I want this script to be callable from either bash or csh. In other words, I want users of either shell to be able to run my script and have their shell's environment changed. So 'source' won't work for me, since a user running csh can't source a bash script, and a user running bash can't source a csh script.
Is there any reasonable solution that doesn't involve having to write and maintain TWO versions of the script?
Use the "dot space script" calling syntax. For example, here's how to do it using the full path to a script:
. /path/to/set_env_vars.sh
And here's how to do it if you're in the same directory as the script:
. set_env_vars.sh
These execute the script under the current shell instead of loading another one (which is what would happen if you did ./set_env_vars.sh). Because it runs in the same shell, the environment variables you set will be available when it exits.
This is the same thing as calling source set_env_vars.sh, but it's shorter to type and might work in some places where source doesn't.
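For illustration, a minimal set_env_vars.sh might contain nothing but export statements (the names and values here are made up):
# set_env_vars.sh
export FOO=foo
export BAR=bar
After . ./set_env_vars.sh, echo "$FOO" in the calling shell prints foo; after ./set_env_vars.sh, it prints nothing.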
Your shell process has a copy of the parent's environment and no access to the parent process's environment whatsoever. When your shell process terminates, any changes you've made to its environment are lost. Sourcing a script file is the most commonly used method for configuring a shell environment; you may just want to bite the bullet and maintain one for each of the two flavors of shell.
You're not going to be able to modify the caller's shell because it's in a different process context. When child processes inherit your shell's variables, they're inheriting copies themselves.
One thing you can do is to write a script that emits the correct commands for tcsh or sh based on how it's invoked. If your script is "setit", then do:
ln -s setit setit-sh
and
ln -s setit setit-csh
Now either directly or in an alias, you do this from sh
eval `setit-sh`
or this from csh
eval `setit-csh`
setit uses $0 to determine its output style.
This is reminiscent of how people used to get the TERM environment variable set.
The advantage here is that setit is just written in whichever shell you like as in:
#!/bin/bash
arg0=$0
arg0=${arg0##*/}
for nv in \
NAME1=VALUE1 \
NAME2=VALUE2
do
if [ x$arg0 = xsetit-sh ]; then
echo 'export '$nv' ;'
elif [ x$arg0 = xsetit-csh ]; then
echo 'setenv '${nv%%=*}' '${nv##*=}' ;'
fi
done
With the symbolic links given above and the eval of the backquoted expression, this has the desired result.
To simplify invocation for csh, tcsh, or similar shells:
alias dosetit 'eval `setit-csh`'
or for sh, bash, and the like:
alias dosetit='eval `setit-sh`'
One nice thing about this is that you only have to maintain the list in one place.
In theory you could even stick the list in a file and put cat nvpairfilename between "in" and "do".
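That file-driven variant might look like this (nvpairfilename is a hypothetical file holding one NAME=VALUE pair per line, with no whitespace inside the values):
for nv in `cat nvpairfilename`
do
# ... same if/elif body as above ...
done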
This is pretty much how login shell terminal settings used to be done: a script would output statements to be executed in the login shell. An alias would generally be used to make invocation simple, as in "tset vt100". As mentioned in another answer, there is also similar functionality in the INN UseNet news server.
In my .bash_profile I have :
# No Proxy
function noproxy
{
/usr/local/sbin/noproxy #turn off proxy server
unset http_proxy HTTP_PROXY https_proxy HTTPS_PROXY
}
# Proxy
function setproxy
{
sh /usr/local/sbin/proxyon #turn on proxy server
http_proxy=http://127.0.0.1:8118/
HTTP_PROXY=$http_proxy
https_proxy=$http_proxy
HTTPS_PROXY=$https_proxy
export http_proxy https_proxy HTTP_PROXY HTTPS_PROXY
}
So when I want to disable the proxy, the functions run in the login shell and set the variables as expected.
It's "kind of" possible through using gdb and setenv(3), although I have a hard time recommending actually doing this. (Additionally, i.e. the most recent ubuntu won't actually let you do this without telling the kernel to be more permissive about ptrace, and the same may go for other distros as well).
$ cat setfoo
#! /bin/bash
gdb /proc/${PPID}/exe ${PPID} <<END >/dev/null
call setenv("foo", "bar", 0)
END
$ echo $foo
$ ./setfoo
$ echo $foo
bar
This works — it isn't what I'd use, but it 'works'. Let's create a script teredo to set the environment variable TEREDO_WORMS:
#!/bin/ksh
export TEREDO_WORMS=ukelele
exec $SHELL -i
It is interpreted by the Korn shell, exports the environment variable, and then replaces itself with a new interactive shell.
Before running this script, we have SHELL set in the environment to the C shell, and the environment variable TEREDO_WORMS is not set:
% env | grep SHELL
SHELL=/bin/csh
% env | grep TEREDO
%
When the script is run, you are in a new shell, another interactive C shell, but the environment variable is set:
% teredo
% env | grep TEREDO
TEREDO_WORMS=ukelele
%
When you exit from this shell, the original shell takes over:
% exit
% env | grep TEREDO
%
The environment variable is not set in the original shell's environment. If you use exec teredo to run the command, then the original interactive shell is replaced by the Korn shell that sets the environment, and then that in turn is replaced by a new interactive C shell:
% exec teredo
% env | grep TEREDO
TEREDO_WORMS=ukelele
%
If you type exit (or Control-D), then your shell exits, probably logging you out of that window, or taking you back to the previous level of shell from where the experiments started.
The same mechanism works for Bash or Korn shell. You may find that the prompt after the exit commands appears in funny places.
Note the discussion in the comments. This is not a solution I would recommend, but it does achieve the stated purpose of a single script to set the environment that works with all shells (that accept the -i option to make an interactive shell). You could also add "$@" after the option to relay any other arguments, which might then make the shell usable as a general 'set environment and execute command' tool. You might want to omit the -i if there are other arguments, leading to:
#!/bin/ksh
export TEREDO_WORMS=ukelele
exec $SHELL "${@-'-i'}"
The "${#-'-i'}" bit means 'if the argument list contains at least one argument, use the original argument list; otherwise, substitute -i for the non-existent arguments'.
You should use modules, see http://modules.sourceforge.net/
EDIT: The modules package has not been updated since 2012 but still works OK for the basics. All the new features, bells and whistles happen in Lmod these days (which I like more): https://www.tacc.utexas.edu/research-development/tacc-projects/lmod
Another workaround that I don't see mentioned is to write the variable value to a file.
I ran into a very similar issue where I wanted to be able to run the last-set test (instead of all my tests). My first plan was to write one command for setting the env variable TESTCASE, and then have another command that would use this to run the test. Needless to say, I had the exact same issue you did.
But then I came up with this simple hack:
First command (testset):
#!/bin/bash
if [ $# -eq 1 ]
then
    echo "$1" > ~/.TESTCASE
    echo "TESTCASE has been set to: $1"
else
    echo "Come again?"
fi
Second command (testrun):
#!/bin/bash
TESTCASE=$(cat ~/.TESTCASE)
drush test-run $TESTCASE
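Usage sketch (the test-case name is made up):
testset MyModuleTestCase   # writes MyModuleTestCase to ~/.TESTCASE
testrun                    # runs: drush test-run MyModuleTestCase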
You can instruct the child process to print its environment variables (by calling "env"), then loop over the printed environment variables in the parent process and call "export" on those variables.
The following code is based on Capturing output of find . -print0 into a bash array
If the parent shell is the bash, you can use
while IFS= read -r -d $'\0' line; do
export "$line"
done < <(bash -s <<< 'export VARNAME=something; env -0')
echo $VARNAME
If the parent shell is dash, then read does not provide the -d flag and the code gets more complicated:
TMPDIR=$(mktemp -d)
mkfifo $TMPDIR/fifo
(bash -s << "EOF"
export VARNAME=something
while IFS= read -r -d $'\0' line; do
echo $(printf '%q' "$line")
done < <(env -0)
EOF
) > $TMPDIR/fifo &
while read -r line; do export "$(eval echo $line)"; done < $TMPDIR/fifo
rm -r $TMPDIR
echo $VARNAME
Under OS X bash you can do the following:
Create the bash script file to unset the variable
#!/bin/bash
unset http_proxy
Make the file executable
sudo chmod 744 unsetvar
Create alias
alias unsetvar='source /your/path/to/the/script/unsetvar'
It should be ready to use as long as the folder containing your script file is appended to your PATH.
It's not what I would call outstanding, but it works if you need to call the script from the shell anyway. It's not a good solution, but for a single static environment variable it works well enough.
1.) Create a script with a condition that exits either 0 (Successful) or 1 (Not successful)
if [[ $foo == "True" ]]; then
    exit 0
else
    exit 1
fi
2.) Create an alias that is dependent on the exit code.
alias setmyvar='myscript.sh && export MyVariable'
You call the alias, which calls the script, which evaluates the condition, which is required to exit zero via the '&&' in order to set the environment variable in the parent shell.
This is flotsam, but it can be useful in a pinch.
You can invoke another Bash with a different bash_profile.
Also, you can create a special bash_profile for use in a multi-bash_profile environment.
Remember that you can use functions inside of bash_profile, and those functions will be available globally.
For example, "function user { export USER_NAME=$1; }" can set a variable at runtime, as in: user olegchir && env | grep olegchir
Another option is to use "Environment Modules" (http://modules.sourceforge.net/). This unfortunately introduces a third language into the mix. You define the environment in Tcl, but there are a few handy commands for typical modifications (prepend vs. append vs. set). You will also need to have Environment Modules installed. You can then use module load XXX to name the environment you want. The module command is basically a fancy alias for the eval mechanism described above by Thomas Kammeyer. The main advantage here is that you can maintain the environment in one language and rely on "Environment Modules" to translate it to sh, ksh, bash, csh, tcsh, zsh, python (?!?!!), etc.
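For a taste of what that looks like, here is a minimal modulefile sketch (the module name and paths are invented):
#%Module1.0
## myapp modulefile
prepend-path PATH /opt/myapp/bin
setenv MYAPP_HOME /opt/myapp
Loading it with module load myapp then modifies the current shell, whichever flavor it is.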
I created a solution using pipes, eval and signal.
parent() {
if [ -z "$G_EVAL_FD" ]; then
    # die is assumed to be a helper that prints a message and exits
    die 1 "Run parent_setup in the parent process first"
fi
# ppid is assumed to print the parent PID of the calling process; inside a
# ( ... ) subshell it differs from $$, which still holds the main shell's PID
if [ $(ppid) = "$$" ]; then
    "$@"
else
    kill -SIGUSR1 $$
    echo "$@" >&$G_EVAL_FD
fi
}
parent_setup() {
G_EVAL_FD=99
tempfile=$(mktemp -u)
mkfifo "$tempfile"
eval "exec $G_EVAL_FD<>'$tempfile'"
rm -f "$tempfile"
trap "read CMD <&$G_EVAL_FD; eval \"\$CMD\"" USR1
}
parent_setup #on parent shell context
( A=1 ); echo $A # prints nothing
( parent A=1 ); echo $A # prints 1
It might work with any command.
I don't see any answer documenting how to work around this problem with cooperating processes. A common pattern with things like ssh-agent is to have the child process print an expression which the parent can eval.
bash$ eval $(ssh-agent)
For example, ssh-agent has options to select Csh or Bourne-compatible output syntax.
bash$ ssh-agent
SSH2_AUTH_SOCK=/tmp/ssh-era/ssh2-10690-agent; export SSH2_AUTH_SOCK;
SSH2_AGENT_PID=10691; export SSH2_AGENT_PID;
echo Agent pid 10691;
(This causes the agent to start running, but doesn't allow you to actually use it, unless you now copy-paste this output to your shell prompt.) Compare:
bash$ ssh-agent -c
setenv SSH2_AUTH_SOCK /tmp/ssh-era/ssh2-10751-agent;
setenv SSH2_AGENT_PID 10752;
echo Agent pid 10752;
(As you can see, csh and tcsh use setenv to set variables.)
Your own program can do this, too.
bash$ foo=$(makefoo)
Your makefoo script would simply calculate and print the value, and let the caller do whatever they want with it -- assigning it to a variable is a common use case, but probably not something you want to hard-code into the tool which produces the value.
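A makefoo sketch along those lines (the computed value is invented; the point is that the script only prints it):
#!/bin/sh
# makefoo: compute the value and print it; the caller decides what to do with it
printf '%s\n' "foo-$(date +%Y%m%d)"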
Technically, that is correct -- only 'eval' doesn't fork another shell. However, from the point of view of the application you're trying to run in the modified environment, the difference is nil: the child inherits the environment of its parent, so the (modified) environment is conveyed to all descending processes.
Ipso facto, the changed environment variable 'sticks' -- as long as you are running under the parent program/shell.
If it is absolutely necessary for the environment variable to remain after the parent (Perl or shell) has exited, it is necessary for the parent shell to do the heavy lifting. One method I've seen in the documentation is for the current script to spawn an executable file with the necessary 'export' language, and then trick the parent shell into executing it -- always being cognizant of the fact that you need to preface the command with 'source' if you're trying to leave a non-volatile version of the modified environment behind. A kludge at best.
The second method is to modify the script that initiates the shell environment (.bashrc or whatever) to contain the modified parameter. This can be dangerous -- if you hose up the initialization script it may make your shell unavailable the next time it tries to launch. There are plenty of tools for modifying the current shell; by affixing the necessary tweaks to the 'launcher' you effectively push those changes forward as well.
Generally not a good idea; if you only need the environment changes for a particular application suite, you'll have to go back and return the shell launch script to its pristine state (using vi or whatever) afterwards.
In short, there are no good (and easy) methods. Presumably this was made difficult to ensure the security of the system was not irrevocably compromised.
The short answer is no, you cannot alter the environment of the parent process, but it seems like what you want is an environment with custom environment variables and the shell that the user has chosen.
So why not simply something like
#!/usr/bin/env bash
FOO=foo $SHELL
Then when you are done with the environment, just exit.
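Usage sketch, assuming the script above is saved as myenv and made executable:
./myenv       # starts your own $SHELL with FOO set
echo $FOO     # prints: foo
exit          # back to the original, unmodified shell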
You could always use aliases
alias your_env='source ~/scripts/your_env.sh'
I did this many years ago. If I remember correctly, I included an alias in each of .bashrc and .cshrc, with parameters, aliasing the respective forms of setting the environment to a common form.
Then the script that you source in either shell contains a command in that common form, which is suitably aliased in each shell.
If I find the concrete aliases, I will post them.
Other than writing conditionals depending on what $SHELL/$TERM is set to, no. What's wrong with using Perl? It's pretty ubiquitous (I can't think of a single UNIX variant that doesn't have it), and it'll spare you the trouble.

How to pass local variable to remote ssh commands?

I need to execute multiple commands on a remote machine, and use ssh to do so,
ssh root@remote_server 'cd /root/dir; ./run.sh'
In the script, I want to pass a local variable $argument when executing run.sh, like
ssh root@remote_server 'cd /root/dir; ./run.sh $argument'
It does not work, since within single quotes $argument is not interpreted the expected way.
Edit: I know double quote may be used, but is there any side effects on that?
You can safely use double quotes here.
ssh root#remote_server "cd /root/dir; ./run.sh $argument"
This will expand the $argument variable. There is nothing else present that poses any risk.
If you have a case where you do need to expand some variables, but not others, you can escape them with backslashes.
$ argument='-V'
$ echo "the variable \$argument is $argument"
would display
the variable $argument is -V
You can always test with double quotes to discover any hidden problems that might catch you by surprise. You can always safely test with echo.
Additionally, another way to run multiple commands is to redirect stdin to ssh. This is especially useful in scripts, or when you have more than 2 or 3 commands (esp. any control statements or loops)
$ ssh user@remoteserver << EOF
> # commands go here
> pwd
> # as many as you want
> # finish with EOF
> EOF
output, if any, of commands will display
$ # returned to your current shell prompt
If you do this on the command line, you'll get a stdin prompt to write your commands. On the command line, the SSH connection won't even be attempted until you indicate completion with EOF. So you won't see results as you go, but you can Ctrl-C to get out and start over. Whether on the command line or in a script, you wrap up the sequence of commands with EOF. You'll be returned to your normal shell at that point.
You could run xargs on the remote side:
$ echo "$argument" | ssh root#remote_server 'cd /root/dir; xargs -0 ./run.sh'
This avoids any quoting issues entirely--unless your argument has null characters in it, I suppose.
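Note that echo also sends a trailing newline, which xargs -0 will keep as part of the argument; printf with an explicit NUL terminator is more precise, and it extends to several arguments (a sketch assuming the remote xargs supports -0, as GNU and BSD xargs do):
printf '%s\0' "$arg1" "$arg2" | ssh root@remote_server 'cd /root/dir; xargs -0 ./run.sh'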
