Custom commands with git-shell - linux

How to create custom commands for git-shell? According to the documentation:
When -c is given, the program executes <command> non-interactively;
<command> can be one of git receive-pack, git upload-pack, git
upload-archive, cvs server, or a command in COMMAND_DIR. The shell is
started in interactive mode when no arguments are given; in this case,
COMMAND_DIR must exist, and any of the executables in it can be
invoked.
However, I'm not sure I'm understanding this correctly. I created a user called gituser, and gave him /usr/bin/git-shell as a shell. I created a directory called git-shell-commands, and put a script called 'testy' in it, but I can't make it run via git-shell.
Here is what I'm trying from another machine:
$ ssh gituser@server.com testy
fatal: unrecognized command 'testy'
Note that git-shell is working and responding; it just can't find my custom command.
And here is the script:
:/home/gituser/git-shell-commands# ls -l -a
total 12
drwxr-xr-x 2 gituser gituser 4096 Jan 22 17:35 .
drwxr-xr-x 4 gituser gituser 4096 Jan 22 13:57 ..
-rwxr-xr-x 1 gituser gituser 26 Jan 22 13:58 testy
:/home/gituser/git-shell-commands# ./testy
hello!
:/home/sodigit/git-shell-commands# cat testy
echo "hello!"
What am I doing wrong? How to run custom commands with git-shell?

As it turned out, this feature was introduced in git 1.7.4. I am using Debian Squeeze, which ships an older version of git, so that is why it did not work.
If you experience this problem, check your git version.
However, as of git 1.7.10, the custom commands only work in interactive mode, and not with -c. I haven't tried the newest git though, so it is possible that this problem is unrelated to the version of the software.
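A quick check on the server (the version shown is just an example of a pre-1.7.4 release):
$ git --version
git version 1.7.2.5
Anything older than 1.7.4 will not look in ~/git-shell-commands at all.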

To allow custom commands for pre-1.7.4 (and in non-interactive mode for 1.7.10), you can use a shell script wrapper for git-shell:
#!/bin/bash
cmdline=($1)
cmd=$(basename "${cmdline[0]}")
if [ -z "$cmd" ] ; then
    exec git-shell
elif [ -n "$cmd" -a -x ~/git-shell-commands/"$cmd" ] ; then
    ~/git-shell-commands/"$cmd" "${cmdline[@]:1}"
else
    exec git-shell -c "$1"
fi
Wherever you would normally use "git-shell", refer to this script instead, though leave out any "-c" argument to this script.
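To sanity-check the wrapper, you can call it by hand the same way a caller would (a sketch; it assumes the script is saved as ~/sshsh, made executable, and that the testy example from the question is in place):
$ ~/sshsh "testy one two"
hello!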
As with git-shell, the above script requires that the entire command line be passed as the first argument. If you'd rather pass the command line as separate arguments:
#!/bin/bash
cmd=$(basename "$1")
if [ -z "$cmd" ] ; then
    exec git-shell
elif [ -n "$cmd" -a -x ~/git-shell-commands/"$cmd" ] ; then
    shift
    ~/git-shell-commands/"$cmd" "$@"
else
    exec git-shell -c "$*"
fi
For example, this lets you invoke the restricted shell in authorized_keys as:
command="sshsh $SSH_ORIGINAL_COMMAND" ...
Note that neither script creates an interactive mode for pre-1.7.4 (attempting to start an interactive session will result in a "fatal: What do you think I am? A shell?" error from git-shell), but shouldn't interfere with interactive mode in 1.7.4 and newer.
Disclaimer: this has not been vetted for security holes. Use at your own risk. In particular, each command in ~/git-shell-commands is a potential security hole (though this is true of git-shell 1.7.4 and later, even without any of the above scripts).

Related

Launching a bash shell from a sudo-ed environment

Apologies for the confusing question title. I am trying to launch an interactive bash shell from a shell script (say shel2.sh) which has been launched by a parent script (shel1.sh) in a sudo-ed environment. (I am creating a guided deployment
script for my software which needs to be installed as super-user, hence the sudo, but may need the user to access the shell.)
Here's shel1.sh
#!/bin/bash
set -x
sudo bash << EOF
echo $?
./shel2.sh
EOF
echo shel1 done
And here's shel2.sh
#!/bin/bash
set -x
bash --norc --verbose --noprofile -i
echo $?
echo done
I expected this to launch an interactive bash shell which waits for my input before returning to shel1.sh. This is what I see:
+ ./shel1.sh
+ sudo bash
0
+ bash --norc --verbose --noprofile -i
bash-4.3# exit
+ echo 0
0
+ echo done
done
+ echo shel1 done
shel1 done
The bash-4.3# prompt receives an exit automatically and quits. Interestingly, if I invoke the bash shell with -l (or --login), the automatic entry is logout!
Can someone explain what is happening here ?
When you use a here document, you are tying the standard input of the shell (and of its spawned child processes) to the here document.
You can avoid using a here document in many situations. For example, replace the here document with a single-quoted string.
#!/bin/bash
set -x
sudo bash -c '
# Aside: How is this actually useful?
echo $?
# Spawned script inherits the stdin of "sudo bash"
./shel2.sh'
echo shel1 done
Without more details, it's hard to see where exactly you want to go with this, but most modern Linux platforms have package managers which allow all kinds of hooks for installation, so that you would typically not need to do this sort of thing. Have you looked into that?
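If you do want to keep the here document in shel1.sh, another sketch (assuming the whole thing is run from a real terminal) is to point the interactive shell in shel2.sh back at the controlling terminal, so it does not consume the here document as its input:
#!/bin/bash
set -x
# reattach the interactive shell's stdin to the controlling terminal
bash --norc --verbose --noprofile -i < /dev/tty
echo $?
echo done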

docker ubuntu container: shell linked to bash still starts shell

Alright guys, so I'm trying to install rvm in a Docker container based on ubuntu:14.04. During the process, I discovered that some people do something like this to ensure Docker RUN commands are also executed with bash:
RUN ln -fs /bin/bash /bin/sh
Now the weirdness happens, and I hope one of you can explain it to me:
→ docker run -it --rm d81ff50de1ce /bin/bash
root@e93a877ab3dc:/# ls -lah /bin
....
lrwxrwxrwx 1 root root 9 Mar 1 16:15 sh -> /bin/bash
lrwxrwxrwx 1 root root 9 Mar 1 16:15 sh.distrib -> /bin/bash
...
root@e93a877ab3dc:/# /bin/sh
sh-4.3# echo $0
/bin/sh
Can someone explain what's going on here? I know I could just prefix my commands in the Dockerfile with bash -c, but I would like to understand what is happening here and, if possible, still ditch the bash -c prefix in the Dockerfile.
Thanks a lot,
Robin
It's because bash has a compatibility mode where it tries to emulate sh if it is started via the name sh, as the manpage says:
If bash is invoked with the name sh, it tries to mimic the startup
behavior of historical versions of sh as closely as possible, while
conforming to the POSIX standard as well. When invoked as an
interactive login shell, or a non-interactive shell with the --login
option, it first attempts to read and execute commands from
/etc/profile and ~/.profile, in that order. The --noprofile option
may be used to inhibit this behavior. When invoked as an interactive
shell with the name sh, bash looks for the variable ENV, expands its
value if it is defined, and uses the expanded value as the name of a
file to read and execute. Since a shell invoked as sh does not
attempt to read and execute commands from any other startup files, the
--rcfile option has no effect. A non-interactive shell invoked with the name sh does not attempt to read any other startup files. When
invoked as sh, bash enters posix mode after the startup files are
read.
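A quick way to see this from inside the container (a sketch, assuming the sh -> bash symlink shown above) is to ask the shell whether POSIX mode is active:
root@e93a877ab3dc:/# /bin/sh -c 'echo "invoked as: $0"; shopt -qo posix && echo "POSIX mode is on"'
invoked as: /bin/sh
POSIX mode is on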

user-data (cloud-init) script not executing on EC2

My user-data script:
#!
set -e -x
echo `whoami`
su root
yum update -y
touch ~/PLEASE_WORK.txt
which is fed in from the command:
ec2-run-instances ami-05355a6c -n 1 -g mongo-group -k mykey -f myscript.sh -t t1.micro -z us-east-1a
but when I check the file /var/log/cloud-init.log, the tail -n 5 is:
[CLOUDINIT] 2013-07-22 16:02:29,566 - cloud-init-cfg[INFO]: cloud-init-cfg ['runcmd']
[CLOUDINIT] 2013-07-22 16:02:29,583 - __init__.py[DEBUG]: restored from cache type DataSourceEc2
[CLOUDINIT] 2013-07-22 16:02:29,686 - cloud-init-cfg[DEBUG]: handling runcmd with freq=None and args=[]
[CLOUDINIT] 2013-07-22 16:02:33,691 - cloud-init-run-module[INFO]: cloud-init-run-module ['once-per-instance', 'user-scripts', 'execute', 'run-parts', '/var/lib/cloud/data/scripts']
[CLOUDINIT] 2013-07-22 16:02:33,699 - __init__.py[DEBUG]: restored from cache type DataSourceEc2
I've also verified that curl http://169.254.169.254/latest/user-data returns my file as intended.
No other errors appear, and my script's output never shows up. How do I get the user-data script to execute correctly on boot?
Actually, cloud-init allows a single shell script as an input (though you may want to use a MIME archive for more complex setups).
The problem with the OP's script is that the first line is incorrect. You should use something like this:
#!/bin/sh
The reason for this is that, while cloud-init uses #! to recognize a user script, the operating system needs a complete shebang line in order to execute the script.
So what's happening in the OP's case is that cloud-init behaves correctly (i.e. it downloads and tries to run the script) but the operating system is unable to actually execute it.
See: Shebang (Unix) on Wikipedia
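Putting that together, a minimal corrected version of the original user-data might look like this (a sketch; cloud-init runs user scripts as root, so the su root line is unnecessary and ~ resolves to /root):
#!/bin/sh
set -e -x
whoami
yum update -y
touch /root/PLEASE_WORK.txt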
Cloud-init does not accept plain bash scripts just like that. It's a beast that eats a YAML file defining your instance (packages, ssh keys and other stuff).
Using MIME you can also send arbitrary shell scripts, but you have to MIME-encode them.
$ cat my-boothook.txt
#!/bin/sh
echo "Hello World!"
echo "This will run as soon as possible in the boot sequence"
$ cat my-user-script.txt
#!/usr/bin/perl
print "This is a user script (rc.local)\n"
$ cat my-include.txt
# these urls will be read and pulled in if they were part of user-data
# comments are allowed. The format is one url per line
http://www.ubuntu.com/robots.txt
http://www.w3schools.com/html/lastpage.htm
$ cat my-upstart-job.txt
description "a test upstart job"
start on stopped rc RUNLEVEL=[2345]
console output
task
script
echo "====BEGIN======="
echo "HELLO From an Upstart Job"
echo "=====END========"
end script
$ cat my-cloudconfig.txt
#cloud-config
ssh_import_id: [smoser]
apt_sources:
- source: "ppa:smoser/ppa"
$ ls
my-boothook.txt my-include.txt my-user-script.txt
my-cloudconfig.txt my-upstart-job.txt
$ write-mime-multipart --output=combined-userdata.txt \
my-boothook.txt:text/cloud-boothook \
my-include.txt:text/x-include-url \
my-upstart-job.txt:text/upstart-job \
my-user-script.txt:text/x-shellscript \
my-cloudconfig.txt
$ ls -l combined-userdata.txt
-rw-r--r-- 1 smoser smoser 1782 2010-07-01 16:08 combined-userdata.txt
The combined-userdata.txt is the file you want to paste there.
More info here:
https://help.ubuntu.com/community/CloudInit
Also note that this depends heavily on the image you are using. But you say it is really a cloud-init based image, so this applies. There are other cloud initializers which are not named cloud-init; then it could be different.
This is a couple of years old now, but for others' benefit: I had the same issue, and it turned out that cloud-init was running twice, from inside /etc/rc3.d. Deleting these files inside the folder allowed the user data to run correctly:
lrwxrwxrwx 1 root root 22 Jun 5 02:49 S-1cloud-config -> ../init.d/cloud-config
lrwxrwxrwx 1 root root 20 Jun 5 02:49 S-1cloud-init -> ../init.d/cloud-init
lrwxrwxrwx 1 root root 26 Jun 5 02:49 S-1cloud-init-local -> ../init.d/cloud-init-local
The problem is with cloud-init not allowing the user script to run on the next start-up.
First remove the cloud-init artifacts by executing:
rm /var/lib/cloud/instances/*/sem/config_scripts_user
And then your userdata must look like this:
#!/bin/bash
echo "hello!"
And then start your instance. It now works (tested).
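On reasonably recent cloud-init releases you can achieve the same reset with the built-in clean command instead of removing the semaphore by hand (a hedged alternative; check that your version supports it):
sudo cloud-init clean --logs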

What does the following line in this bash script do?

This line is inside my /etc/rc.sysinit file on linux:
[ -r /proc/mdstat -a -r /dev/md/md-device-map ] && /sbin/mdadm -IRs
I'm not so much interested in what it actually accomplishes as opposed to how the syntax works.
It tests whether the files /proc/mdstat and /dev/md/md-device-map exist and are readable (-r), and if so executes /sbin/mdadm -IRs.
The square brackets are an alternative name for the program test (or a Bash replacement thereof), which can test for lots of stuff, such as the existence of files. The -a is a logical "and".
For more details, see "CONDITIONAL EXPRESSIONS" in man bash.
The [ is actually a command name itself, that is equivalent to the test command. So, use man test to find out what -r means.
Depending on your system, you may find [ in /usr/bin:
$ ls -l /usr/bin/[
-rwxr-xr-x 1 root root 37000 Oct 5 2011 /usr/bin/[
or it could be a symlink:
$ ls -l /usr/bin/[
-rwxr-xr-x 1 root root 4 Oct 5 2011 /usr/bin/[ -> test
Some shells also have [ as a built-in command (and some even have [[ which provides additional options). As with most built-in commands though, you'll also find an implementation in the filesystem.
This means:
if /proc/mdstat is readable by you and /dev/md/md-device-map is readable by you, then run /sbin/mdadm -IRs
See help test
NOTE
[[ is a bash keyword similar to (but more powerful than) the [ command. See http://mywiki.wooledge.org/BashFAQ/031 and http://mywiki.wooledge.org/BashGuide/TestsAndConditionals
Unless you're writing for POSIX sh, we recommend [[.
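For comparison, the same test can be written as two separate [ commands joined by the shell's && (POSIX-friendly), or with the [[ keyword recommended above:
[ -r /proc/mdstat ] && [ -r /dev/md/md-device-map ] && /sbin/mdadm -IRs
[[ -r /proc/mdstat && -r /dev/md/md-device-map ]] && /sbin/mdadm -IRs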

Pseudo-terminal will not be allocated because stdin is not a terminal

I am trying to write a shell script that creates some directories on a remote server and then uses scp to copy files from my local machine onto the remote. Here's what I have so far:
ssh -t user@server <<EOT
DEP_ROOT='/home/matthewr/releases'
datestamp=$(date +%Y%m%d%H%M%S)
REL_DIR=$DEP_ROOT"/"$datestamp
if [ ! -d "$DEP_ROOT" ]; then
echo "creating the root directory"
mkdir $DEP_ROOT
fi
mkdir $REL_DIR
exit
EOT
scp ./dir1 user@server:$REL_DIR
scp ./dir2 user@server:$REL_DIR
Whenever I run it I get this message:
Pseudo-terminal will not be allocated because stdin is not a terminal.
And the script just hangs forever.
My public key is trusted on the server and I can run all the commands outside of the script just fine. Any ideas?
Try ssh -t -t (or ssh -tt for short) to force pseudo-tty allocation even if stdin isn't a terminal.
See also: Terminating SSH session executed by bash script
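Applied to the script from the question, that would look like this (a sketch, trimmed to the directory-creation part; mkdir -p makes the existence check unnecessary):
ssh -tt user@server <<EOT
mkdir -p "/home/matthewr/releases"
exit
EOT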
From ssh manpage:
-T Disable pseudo-tty allocation.
-t Force pseudo-tty allocation. This can be used to execute arbitrary
screen-based programs on a remote machine, which can be very useful,
e.g. when implementing menu services. Multiple -t options force tty
allocation, even if ssh has no local tty.
There is also option -T; from the manual:
Disable pseudo-tty allocation
Per zanco's answer, you're not providing a remote command to ssh, given how the shell parses the command line. To solve this problem, change the syntax of your ssh command invocation so that the remote command is comprised of a syntactically correct, multi-line string.
There are a variety of syntaxes that can be used. For example, since commands can be piped into bash and sh, and probably other shells too, the simplest solution is to just combine ssh shell invocation with heredocs:
ssh user@server /bin/bash <<'EOT'
echo "These commands will be run on: $( uname -a )"
echo "They are executed by: $( whoami )"
EOT
Note that executing the above without /bin/bash will result in the warning Pseudo-terminal will not be allocated because stdin is not a terminal. Also note that EOT is surrounded by single-quotes, so that bash recognizes the heredoc as a nowdoc, turning off local variable interpolation so that the command text will be passed as-is to ssh.
If you are a fan of pipes, you can rewrite the above as follows:
cat <<'EOT' | ssh user@server /bin/bash
echo "These commands will be run on: $( uname -a )"
echo "They are executed by: $( whoami )"
EOT
The same caveat about /bin/bash applies to the above.
Another valid approach is to pass the multi-line remote command as a single string, using multiple layers of bash variable interpolation as follows:
ssh user#server "$( cat <<'EOT'
echo "These commands will be run on: $( uname -a )"
echo "They are executed by: $( whoami )"
EOT
)"
The solution above fixes this problem in the following manner:
ssh user@server is parsed by bash, and is interpreted to be the ssh command, followed by an argument user@server to be passed to the ssh command
" begins an interpolated string, which when completed, will comprise an argument to be passed to the ssh command, which in this case will be interpreted by ssh to be the remote command to execute as user@server
$( begins a command to be executed, with the output being captured by the surrounding interpolated string
cat is a command to output the contents of whatever file follows. The output of cat will be passed back into the capturing interpolated string
<< begins a bash heredoc
'EOT' specifies that the name of the heredoc is EOT. The single quotes ' surrounding EOT specifies that the heredoc should be parsed as a nowdoc, which is a special form of heredoc in which the contents do not get interpolated by bash, but rather passed on in literal format
Any content that is encountered between <<'EOT' and <newline>EOT<newline> will be appended to the nowdoc output
EOT terminates the nowdoc, resulting in a nowdoc temporary file being created and passed back to the calling cat command. cat outputs the nowdoc and passes the output back to the capturing interpolated string
) concludes the command to be executed
" concludes the capturing interpolated string. The contents of the interpolated string will be passed back to ssh as a single command line argument, which ssh will interpret as the remote command to execute as user#server
If you need to avoid using external tools like cat, and don't mind having two statements instead of one, use the read built-in with a heredoc to generate the SSH command:
IFS='' read -r -d '' SSH_COMMAND <<'EOT'
echo "These commands will be run on: $( uname -a )"
echo "They are executed by: $( whoami )"
EOT
ssh user#server "${SSH_COMMAND}"
I'm adding this answer because it solved a related problem that I was having with the same error message.
Problem: I had installed cygwin under Windows and was getting this error: Pseudo-terminal will not be allocated because stdin is not a terminal
Resolution: It turns out that I had not installed the openssh client program and utilities. Because of that cygwin was using the Windows implementation of ssh, not the cygwin version. The solution was to install the openssh cygwin package.
All relevant information is in the existing answers, but let me attempt a pragmatic summary:
tl;dr:
DO pass the commands to run using a command-line argument:
ssh jdoe@server '...'
'...' strings can span multiple lines, so you can keep your code readable even without the use of a here-document:
ssh jdoe@server ' ... '
Do NOT pass the commands via stdin, as is the case when you use a here-document:
ssh jdoe@server <<'EOF' # Do NOT do this ... EOF
Passing the commands as an argument works as-is, and:
the problem with the pseudo-terminal will not even arise.
you won't need an exit statement at the end of your commands, because the session will automatically exit after the commands have been processed.
In short: passing commands via stdin is a mechanism that is at odds with ssh's design and causes problems that must then be worked around.
Read on, if you want to know more.
Optional background information:
ssh's mechanism for accepting commands to execute on the target server is a command-line argument: the final operand (non-option argument) accepts a string containing one or more shell commands.
By default, these commands run unattended, in a non-interactive shell, without the use of a (pseudo) terminal (option -T is implied), and the session automatically ends when the last command finishes processing.
In the event that your commands require user interaction, such as responding to an interactive prompt, you can explicitly request the creation of a pty (pseudo-tty), a pseudo terminal, that enables interacting with the remote session, using the -t option; e.g.:
ssh -t jdoe@server 'read -p "Enter something: "; echo "Entered: [$REPLY]"'
Note that the interactive read prompt only works correctly with a pty, so the -t option is needed.
Using a pty has a notable side effect: stdout and stderr are combined and both reported via stdout; in other words: you lose the distinction between regular and error output; e.g.:
ssh jdoe@server 'echo out; echo err >&2' # OK - stdout and stderr separate
ssh -t jdoe@server 'echo out; echo err >&2' # !! stdout + stderr -> stdout
In the absence of this argument, ssh creates an interactive shell - including when you send commands via stdin, which is where the trouble begins:
For an interactive shell, ssh normally allocates a pty (pseudo-terminal) by default, except if its stdin is not connected to a (real) terminal.
Sending commands via stdin means that ssh's stdin is no longer connected to a terminal, so no pty is created, and ssh warns you accordingly:
Pseudo-terminal will not be allocated because stdin is not a terminal.
Even the -t option, whose express purpose is to request creation of a pty, is not enough in this case: you'll get the same warning.
Somewhat curiously, you must then double the -t option to force creation of a pty: ssh -t -t ... or ssh -tt ... shows that you really, really mean it.
Perhaps the rationale for requiring this very deliberate step is that things may not work as expected. For instance, on macOS 10.12, the apparent equivalent of the above command, providing the commands via stdin and using -tt, does not work properly; the session gets stuck after responding to the read prompt:
ssh -tt jdoe@server <<<'read -p "Enter something: "; echo "Entered: [$REPLY]"'
In the unlikely event that the commands you want to pass as an argument make the command line too long for your system (if its length approaches getconf ARG_MAX - see this article), consider copying the code to the remote system in the form of a script first (using, e.g., scp), and then send a command to execute that script.
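A minimal sketch of that copy-then-run approach (deploy.sh is a hypothetical script name):
scp deploy.sh user@server:/tmp/deploy.sh
ssh user@server 'bash /tmp/deploy.sh && rm /tmp/deploy.sh'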
In a pinch, use -T, and provide the commands via stdin, with a trailing exit command, but note that if you also need interactive features, using -tt in lieu of -T may not work.
The warning message "Pseudo-terminal will not be allocated because stdin is not a terminal." is due to the fact that no command is specified for ssh while stdin is redirected from a here document.
Due to the lack of a specified command as an argument ssh first expects an interactive login session (which would require the allocation of a pty on the remote host) but then has to realize that its local stdin is no tty/pty. Redirecting ssh's stdin from a here document normally requires a command (such as /bin/sh) to be specified as an argument to ssh - and in such a case no pty will be allocated on the remote host by default.
Since there are no commands to be executed via ssh that require the presence of a tty/pty (such as vim or top) the -t switch to ssh is superfluous.
Just use ssh -T user@server <<EOT ... or ssh user@server /bin/bash <<EOT ... and the warning will go away.
If <<EOT is not escaped or single-quoted (i.e. <<\EOT or <<'EOT'), variables inside the here document will be expanded by the local shell before it executes ssh .... The effect is that the variables inside the here document will remain empty because they are defined only in the remote shell.
So, if $REL_DIR should be both accessible by the local shell and defined in the remote shell, $REL_DIR has to be defined outside the here document before the ssh command (version 1 below); or, if <<\EOT or <<'EOT' is used, the output of the ssh command can be assigned to REL_DIR if the only output of the ssh command to stdout is generated by echo "$REL_DIR" inside the escaped/single-quoted here document (version 2 below).
A third option would be to store the here document in a variable and then pass this variable as a command argument to ssh -t user@server "$heredoc" (version 3 below).
And, last but not least, it would be no bad idea to check if the directories on the remote host were created successfully (see: check if file exists on remote host with ssh).
# version 1
unset DEP_ROOT REL_DIR
DEP_ROOT='/tmp'
datestamp=$(date +%Y%m%d%H%M%S)
REL_DIR="${DEP_ROOT}/${datestamp}"
ssh localhost /bin/bash <<EOF
if [ ! -d "$DEP_ROOT" ] && [ ! -e "$DEP_ROOT" ]; then
echo "creating the root directory" 1>&2
mkdir "$DEP_ROOT"
fi
mkdir "$REL_DIR"
#echo "$REL_DIR"
exit
EOF
scp -r ./dir1 user@server:"$REL_DIR"
scp -r ./dir2 user@server:"$REL_DIR"
# version 2
REL_DIR="$(
ssh localhost /bin/bash <<\EOF
DEP_ROOT='/tmp'
datestamp=$(date +%Y%m%d%H%M%S)
REL_DIR="${DEP_ROOT}/${datestamp}"
if [ ! -d "$DEP_ROOT" ] && [ ! -e "$DEP_ROOT" ]; then
echo "creating the root directory" 1>&2
mkdir "$DEP_ROOT"
fi
mkdir "$REL_DIR"
echo "$REL_DIR"
exit
EOF
)"
scp -r ./dir1 user@server:"$REL_DIR"
scp -r ./dir2 user@server:"$REL_DIR"
# version 3
heredoc="$(cat <<'EOF'
# -onlcr: prevent the terminal from converting bare line feeds to carriage return/line feed pairs
stty -echo -onlcr
DEP_ROOT='/tmp'
datestamp="$(date +%Y%m%d%H%M%S)"
REL_DIR="${DEP_ROOT}/${datestamp}"
if [ ! -d "$DEP_ROOT" ] && [ ! -e "$DEP_ROOT" ]; then
echo "creating the root directory" 1>&2
mkdir "$DEP_ROOT"
fi
mkdir "$REL_DIR"
echo "$REL_DIR"
stty echo onlcr
exit
EOF
)"
REL_DIR="$(ssh -t localhost "$heredoc")"
scp -r ./dir1 user@server:"$REL_DIR"
scp -r ./dir2 user@server:"$REL_DIR"
I don't know where the hang comes from, but redirecting (or piping) commands into an interactive ssh is in general a recipe for problems. It is more robust to use the command-to-run-as-a-last-argument style and pass the script on the ssh command line:
ssh user@server 'DEP_ROOT="/home/matthewr/releases"
datestamp=$(date +%Y%m%d%H%M%S)
REL_DIR=$DEP_ROOT"/"$datestamp
if [ ! -d "$DEP_ROOT" ]; then
echo "creating the root directory"
mkdir $DEP_ROOT
fi
mkdir $REL_DIR'
(All in one giant '-delimited multiline command-line argument).
The pseudo-terminal message is because of your -t which asks ssh to try to make the environment it runs on the remote machine look like an actual terminal to the programs that run there. Your ssh client is refusing to do that because its own standard input is not a terminal, so it has no way to pass the special terminal APIs onwards from the remote machine to your actual terminal at the local end.
What were you trying to achieve with -t anyway?
After reading a lot of these answers I thought I would share my resulting solution. All I added was /bin/bash before the heredoc, and it doesn't give the error anymore.
Use this:
ssh user@machine /bin/bash <<'ENDSSH'
hostname
ENDSSH
Instead of this (gives error):
ssh user@machine <<'ENDSSH'
hostname
ENDSSH
Or use this:
ssh user@machine /bin/bash < run-command.sh
Instead of this (gives error):
ssh user@machine < run-command.sh
EXTRA:
If you still want a remote interactive prompt, e.g. if the script you're running remotely prompts you for a password or other information (the previous solutions won't allow you to type into the prompts), use:
ssh -t user@machine "$(<run-command.sh)"
And if you also want to log the entire session in a file logfile.log:
ssh -t user@machine "$(<run-command.sh)" | tee -a logfile.log
I was having the same error under Windows, using Emacs 24.5.1 to connect to some company servers through /ssh:user@host. What solved my problem was setting the "tramp-default-method" variable to "plink", and whenever I connect to a server I omit the ssh protocol. You need to have PuTTY's plink.exe installed for this to work.
Solution
M-x customize-variable (and then hit Enter)
tramp-default-method (and then hit Enter again)
In the text field, put plink and then Apply and Save the buffer
Whenever I try to access a remote server I now use C-x C-f /user@host: and then input the password. The connection is now correctly made under Emacs on Windows to my remote server.
