run a remote bash script with arguments with ssh - linux

I am unable to run a remote shell script located on "admin" server with arguments.
ssh koliwada@admin "~/bin/addautomaps $groupentry $homeentry $ticket"
"groupentry" and "homeentry" are as follows
user1:*:52940:OWNER-user1
user1 -rw,intr,hard,rsize=32768,wsize=32768 basinas01:/ifs/basinas01/home/&
the script is located at ~/bin/addautomaps in admin server.
I see the error,
tput: No value for $TERM and no -T specified
I also see that the arguments are not passed correctly.
I also tried using "ssh -t ..." but that doesn't work.

Answering your questions in reverse order (or most serious to least serious).
Your problem with the arguments (with spaces) not being passed correctly is that, while you are quoting the command string locally, you aren't quoting the arguments when they are actually run by the remote machine.
That is, you are generating a single string with the variables expanded, but nothing tells the remote system not to split the expanded values on spaces.
The fix for that is that you need to quote the arguments inside the command for the remote shell as well as the entire string for ssh.
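For example, a hedged sketch of that fix: escape the inner double quotes so they survive to the remote shell (this assumes the values themselves don't contain double quotes):
ssh koliwada@admin "~/bin/addautomaps \"$groupentry\" \"$homeentry\" \"$ticket\""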
My answer here might help explain some (it is a similar issue).
The tput "issue" is likely just a warning that you can probably ignore if you don't care about the colorized/stylized/etc. output that tput is likely being used to create. You could also try forcing a value for $TERM on the remote side like ssh ... "export TERM=dumb; ..." or something like that to silence it.

Related

Command not found when running Bash script, but works when running command directly

I've been using letsencrypt to generate SSL certificates for my site, more specifically letsencrypt_webfaction. When I run this command in my project, it works
letsencrypt_webfaction --letsencrypt_account_email <Email I use> --domains <domains I use> --public <public_file> --username <username> --password <password>
However, when I run the same command in a bash script, I get the error
generate_certificate.sh: line 2: letsencrypt_webfaction: command not found
I made sure I had all possible permissions on the bash script using chmod 777 generate_certificate.sh, but still nothing. On top of that I have a bash script that runs right before that, which simply restarts Apache, and that works fine.
I read other S.O articles, such as this one, and tried running dos2unix script.sh, which did run successfully, but when I tried running the bash script again, it didn't work.
Restart Apache Script
#!/bin/bash
../apache2/bin/./restart
#END
Generate SSL Script
#!/bin/bash
letsencrypt_webfaction --letsencrypt_account_email <Email I use> --domains <domains I use> --public <public_file> --username <username> --password <password>
#END
I'm a python developer, and don't have much experience with Ruby, so excuse my ignorance, but the letsencrypt_webfaction command is a function in my bash profile.
~/.bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
function letsencrypt_webfaction {
PATH=$PATH:$GEM_HOME/bin GEM_HOME=$HOME/.letsencrypt_webfaction/gems RUBYLIB=$GEM_HOME/lib ruby2.2 $HOME/.letsencrypt_webfaction/gems/bin/letsencrypt_webfaction $*
}
eval "$(rbenv init -)"
PATH=$PATH:$HOME/bin
export PATH
export PATH="$HOME/.rbenv/bin:$PATH"
export TMPDIR="/home/doc4design/src/tmp"
By default, shell functions are only available in the shell they were defined in; they're not inherited by subprocesses. Your .bash_profile is only run by the login shell, not shells that run as subprocesses (e.g. to run scripts).
Option 1: In bash, you can run export -f letsencrypt_webfaction in the defining shell (i.e. in your .bash_profile), and it'll be inherited by subprocesses (provided they're also running bash).
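A minimal sketch of Option 1 in ~/.bash_profile (placed after the existing function definition):
# at the end of ~/.bash_profile, after the function is defined
export -f letsencrypt_webfaction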
Option 2: You can define the function in your .bashrc instead of .bash_profile, and since you run .bashrc from .bash_profile it'll get defined in all your bash shells.
Option 3: Just use the full command in the script. This would be my preference, since it makes the script more independent. Having a script depend on a shell function that's defined in a completely different place is fragile (as you're experiencing) and just a bit weird.
While I'm at it, here are some general scripting recommendations:
In most contexts, you should put double-quotes around variable references (and strings that contain variable references) to avoid weird effects from word splitting and wildcard expansion. The right side of an assignment is one place it's ok to leave them off (e.g. PATH=$PATH:$HOME/bin and PATH="$PATH:$HOME/bin" are both ok), but I tend to recommend using quotes everywhere as it's hard to keep track of where it's safe to leave them off and where it's dangerous. For the same reason, you should almost always use "$@" instead of $* (as in the letsencrypt_webfaction function).
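As a quick illustration of the difference (a throwaway function, purely for demonstration):
args() { printf '<%s> ' "$@"; echo; }
set -- "hello world" foo
args $*       # prints <hello> <world> <foo>  -- split into three words
args "$@"     # prints <hello world> <foo>    -- both arguments survive intact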
shellcheck.net is really good at spotting errors like this, so I recommend running your shell scripts through it and acting on its suggestions.
Using the function keyword to define a function is nonstandard; the standard syntax is to use () after the function name, like this:
letsencrypt_webfaction() {
PATH="$PATH:$GEM_HOME/bin" GEM_HOME="$HOME/.letsencrypt_webfaction/gems" RUBYLIB="$GEM_HOME/lib" ruby2.2 "$HOME/.letsencrypt_webfaction/gems/bin/letsencrypt_webfaction" "$#"
}
The function I just gave still may not work right, since it (re)defines GEM_HOME after using it. The entire line gets parsed (and pre-existing variable definitions expanded), then the variables defined as prefixes to the command get included in the environment of the command. This means that the ruby script gets the updated value of GEM_HOME, but the updated values of PATH and RUBYLIB are based on whatever value GEM_HOME had when the function was run. I'm pretty sure this is not what you intended.
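One possible rearrangement (a sketch; the local variable name gems is mine) is to compute the gem directory first and derive everything else from it:
letsencrypt_webfaction() {
    local gems="$HOME/.letsencrypt_webfaction/gems"   # computed once, up front
    GEM_HOME="$gems" PATH="$PATH:$gems/bin" RUBYLIB="$gems/lib" \
        ruby2.2 "$gems/bin/letsencrypt_webfaction" "$@"
}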
In the restart apache script, you use a relative path to the restart command. This will be evaluated relative to the working directory of the process that runs the script, not relative to the script's location. This could be anywhere.
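If the path was meant to be relative to the script itself, a hedged sketch of that restart script (assuming the same ../apache2/bin/restart layout relative to the script's directory) might be:
#!/bin/bash
# resolve the restart command relative to this script's own directory,
# not the caller's working directory
script_dir="$(cd "$(dirname "$0")" && pwd)"
"$script_dir/../apache2/bin/restart"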

Proxying commands across a SSH connection

I'd like to be able to "spoof" certain commands on my machine, actually invoking them on a remote system. For example. Whenever I run:
cmd options
I'd like the actual command to be:
ssh user@host cmd options
Ideally I'd like to have a folder called spoof, add it to my PATH, and have an executable in there called cmd which does the spoofing. If I have a lot of commands, this could get tedious. Anyone have ideas of a good way to go about this? Such that I can add and remove a lot of commands in the future? And, I'd like to be able to pass all the arguments exactly (or as exact as possible) and every single command I want to spoof would just have the ssh user@host in front of it.
The reason for this is I'm running a container (specifically singularity) on my machine, and there are certain commands I don't really want to containerize, but still want to run from within the container. I've found I can get the functionality I want by just appending the ssh in front of it. Examples are sbatch and matlab which are a pain to containerize and I'm fine with just using ssh to call them. Files that these programs use are written to a bind point so the host machine can see them just fine.
The following script can be hardlinked under all the names of commands you wish to transparently proxy:
#!/usr/bin/env bash
printf -v str '%q ' "${0##*/}" "$@"
ssh host "$str"
SSH combines all its arguments into a single string, which is then executed by a remote shell. To ensure that the remote arguments are identical to the local ones, the values need to be escaped; otherwise, somecommand "hello world" and somecommand "hello" "world" would be represented identically over the wire.
In an appropriately extended printf (including both the bash and ksh implementations), %q is replaced with an escaped form of the corresponding value, which will be expanded back to the original (literal) text if eval'd or interpreted by a shell later.
printf -v varname stores the output of printf in a variable named varname without the overhead/inefficiency of a command substitution. (In ksh93, varname=$(printf ...) is optimized to skip the subshell overhead, so this is not necessary there.)
$0 evaluates to argv[0], which is by convention the name of the command currently being run. (This can be overridden, but you trust your users to behave reasonably... right?)
${0##*/} is a parameter expansion which returns only content after the last / in $0 (should it in fact contain any slashes; otherwise, the original value is used unmodified).
"$#" refers to the exact argument vector passed to your script.

bash + Linux + how to ignore the character "!"

I want to send a little script to a remote machine by ssh
the script is
#!/bin/bash
sleep 1
reboot
but I get "event not found" - because of the "!"
ssh 183.34.4.9 "echo -e '#!/bin/bash\nsleep 1\reboot>'/tmp/file"
-bash: !/bin/bash\nsleep: event not found
How do I ignore the "!" char so the script will be sent successfully by ssh?
Remark: I can't use "\" before the "!" because then I get
more /tmp/file
#\!/bin/bash
sleep 1
Use set +H before your command to disable ! style history substitution:
set +H
ssh 183.34.4.9 "echo -e '#!/bin/bash\nsleep 1\reboot>'/tmp/file"
# enable history expansion again
set -H
I think your command line is not well formatted. You can send this:
ssh 183.34.4.9 'echo -e "#!/bin/bash\nsleep 1\nreboot">/tmp/file'
When I say "not well formated" I mean you put ">" inside the "echo" and you forgot to add "n" before "reboot", and you put "\reboot", wich will be interpreted as "CR" (carriage return) followed by "eboot" command (which I don't think that exists).
But what did the trick here is to invert the comas changing (') with (") and viceversa.
Bash is running interactively (which means that you are feeding commands to it from the standard input and not exec(2)ing a command from a shell script), so you don't need to include the line #!/bin/bash in that case (even more, bash should just ignore it, but not the included bang, as it is part of the active history mechanism).
But why? The first two characters of an executable file (any file capable of being exec(2)ed from secondary storage, which is not your case) have a special meaning for the kernel and for the shell: they are the magic number that identifies the kind of executable file being loaded. This allows the kernel to select the proper loading routine depending on the binary executable format (and is what allows you, for example, to execute BSD programs on Linux kernels, and vice versa).
A special value for this magic number is composed of the two characters # and ! (in that order); it forces the kernel to read the complete first line of the file and load the interpreter specified there instead, allowing you to execute shell scripts for different interpreters directly from the command line. This is done on purpose, as # is normally the comment character in shell scripts. It only matters when the shell interpreting the commands is not interactive: when the shell loads a script file with those characters, it reads the first line to check for the #! mark and loads the proper interpreter, replicating what the kernel does. Despite the line being a comment to the shell, this lets it treat as executables files that are not stored on secondary storage (the only ones the exec(2) system call can deal with) but come from stdin, as happens with yours.
As your shell is running interactively and you want to execute its commands without changing shells, you don't need that line at all and can eliminate it completely, without having to disable the bang character.
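A minimal sketch without the shebang line (so nothing triggers history expansion), writing the same two commands to the remote file:
ssh 183.34.4.9 'printf "sleep 1\nreboot\n" > /tmp/file'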
Sorry, but the solution given above about disabling history expansion with the -H/+H option will probably not be viable, as the shell executing the commands is the login shell on the target machine, so you cannot provide specific parameters to it (parameters are selected by the login(8) program and normally don't include arbitrary options like -H).
The best solution is to eliminate the #!/bin/bash line entirely, as you are not going to exec(2) that program on the target. If you want to select the shell from the command line (in case the user has a different login shell installed), it is better to invoke the desired shell explicitly and pass it (through stdin, or by making it read the script as a file) the shell commands you want to execute (but again, without the #! line).
NOTE
It's important to ensure you execute the whole thing, so it's best to transfer the complete script to the destination target first and, once you are sure it arrived intact, execute it there as a whole. Then your #! first line will be properly processed, as the executable will be run by means of an exec(2) made by the kernel.
Example:
DIRECTORY=/bla/bla
FILE=/path/to/file
OUTPUT=/path/to/output
# this is the command we want to pass through the line
cat <<EOF | ssh user@target "cat >>/tmp/shell.sh"
cd $DIRECTORY
foo $FILE >$OUTPUT
exit 0
EOF
# we have copied the script file in a remote /tmp/shell.sh
# and we are sure it has passed correctly, so it's ready
# for local execution there.
# now, execute it.
# the remote shell won't be interactive, and you'll ensure that it is /bin/bash
ssh user@target "/bin/bash /tmp/shell.sh" >remote_shell.out
A more sophisticated scheme is one that lets you sign the shell script before sending it and verify the signature before executing it, so you are protected against possible trojan-horse attacks. But that is out of scope for this explanation.
Another alternative is to use the batch(1) command remotely and pass it all the commands you want executed. You'll get a sessionless execution environment, more suitable for the task you are describing (despite the fact that the script output will be mailed to the target user running the script).
Interactively, beware that ! triggers history expansion inside double quotes
from here: https://riptutorial.com/bash/example/2465/quoting-literal-text
My recommended solution is to use single quotes to define the string (and, if you need a literal single quote inside, either close and reopen the string with '\'' or use double quotes for that part):
ssh 183.34.4.9 'echo -e "#!/bin/bash\nsleep 1\nreboot" >/tmp/file'

Alias definition of multiple commands after ssh

I'm logging in and out of a remote machine many times a day (through ssh) and I'd like to shorten a bit the whole procedure. I've added an alias in my .bashrc and .profile that looks like:
alias connect='ssh -XC username#remotemachine && cd /far/away/location/that/takes/time/to/get/to/;'
My problem is that when I type connect, I first get to the location in question (on my local machine) and only then does the ssh connection take place. How can this be? I thought that by using "&&" the second command would be run only after the first one succeeded. After the ssh command succeeds, are .profile/.bashrc loaded anew before the second part of the alias is executed?
For the ssh specifically, you're looking for the following:
ssh -t username#remotemachine "cd /path/you/want ; bash"
Using "&&" or even ";" normally will execute the commands in the shell that you're currently in. It's like if you're programming and make a function call and then have another line that you want to effect what happens in the function-- it doesn't work because it's essentially in a different scope.
For a sequence of commands:
Try this (using ;):
alias cmd='command1;command2;command3;'
Use of '&&' instead of ';':
The && makes it only execute subsequent commands if the previous returns successful.

ssh environment variable bash command not found

I'm trying to start a command over a non-interactive ssh connection. I use ant-sshexec for that.
In order to set everything up I used this article:
http://www.raphink.info/2008/09/forcing-environment-in-ssh.html
I use ~/.ssh/environment.
In order to do that, I set PermitUserEnvironment to "yes" in sshd_config and restarted sshd.
In my .ssh/environment I have this content:
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/home/ubuntu/java/jdk1.6.0_27/bin
JAVA_HOME=/home/ubuntu/java/jdk1.6.0_27
#PATH=/home/ubuntu/java/jdk1.6.0_27/bin:$PATH
#PLAY_HOME=/home/ubuntu/play
and I get this error when I try to connect using a non-interactive connection:
[sshexec] Could not execute the java executable, please make sure the JAVA_HOME environment variable is set properly (the java executable should reside at JAVA_HOME/bin/java).
But I added java to the path...
The man page for sshd(8) says this about ~/.ssh/environment:
It can only contain empty lines, comment lines (that start with ‘#’), and assignment lines of the form name=value.
That is, it is not a shell script at all. You have double quotes, variable expansion and an alias definition. None of that will work. Try this:
PATH=/home/ubuntu/java/jdk1.6.0_27/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games
JAVA_HOME=/home/ubuntu/java/jdk1.6.0_27
PLAY_HOME=/home/ubuntu/play
Also ensure that the permissions on the ~/.ssh/environment are as described in the man page — no group or other write permissions on the file.
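For instance (a sketch; sshd's StrictModes check is typically strict about these modes):
chmod 700 ~/.ssh
chmod 600 ~/.ssh/environment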
If you are concerned about locking yourself out of the account with a broken environment, test by logging onto the host first and then running test commands like this:
ssh localhost 'echo $JAVA_HOME'
You can check that the environment variables are set as you expect, and if something goes wrong you are still logged onto the host, allowing you to reverse your changes.
You used multiple environment variable assignments for PATH, but from what I can see you never export them.
You should do it this way:
export PATH="A"
export PATH="$PATH:B"
export PATH="$PATH:C"
You can also get help with this type of question on Unix & Linux Stack Exchange, so consider posting there. For example:
https://unix.stackexchange.com/questions/12391/how-to-run-my-c-program-from-anywhere-within-the-system-ubuntu-10-10
Hope it helps.
