"LANG: Undefined variable" when running a ssh command - linux

I have found a problem on our FC8 Linux machines with the LANG variable when running a command through ssh.
When in a terminal, I can see that my LANG variable is "es_ES":
[angelv@italia ~]$ echo $LANG
es_ES
If I connect back to my machine through ssh, there are no problems and $LANG is still "es_ES":
[angelv@italia ~]$ ssh italia
Last login: Mon Jul 26 12:51:12 2010 from XXXXXXXXXXXX
[angelv@italia ~]$ echo $LANG
es_ES
[angelv@italia ~]$
But if I try to run a command with ssh, then that variable is undefined...
[angelv@italia ~]$ ssh italia 'echo $LANG'
LANG: Undefined variable.
[angelv@italia ~]$
Does anybody know where I should look to find the culprit?

Quoth the SSH manual:
If command is specified, it is executed on the remote host instead of a login shell.
Login shells behave quite differently from non-login shells, most notably here in that non-login shells don't usually source the login .profile files. See your shell's documentation for more detail.
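A quick way to see which kind of shell a remote command gets (a minimal check, assuming the remote account's shell is bash; the "Undefined variable" wording in the question suggests tcsh, where the same distinction applies to ~/.login versus ~/.tcshrc):
ssh italia 'shopt -q login_shell && echo "login shell" || echo "not a login shell"'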

On Linux, the available locales are generally listed under /usr/share/locale. You should check which locale the server machine uses; it may differ from your machine's.
EDIT: sorry, I misread the question.
In bash you should do
export LANG="es_ES"
In other shells you may have to use setenv instead of export.
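For csh or tcsh (which the "LANG: Undefined variable" wording in the question points to), the equivalent of export is setenv, with no equals sign:
setenv LANG es_ES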

You may be able to work around this feature of ssh by invoking your shell and asking it to act like a login shell:
ssh italia "sh -l -c 'echo \$LANG'"
(The backslash keeps your local shell from expanding $LANG before the command is sent, so you really see the remote value.) Depending on the actual shell you're using, the required option might be -l or something else.

Related

Why does executing a command over SSH without a visual terminal use a different PATH location?

When executing an SSH session that simply launches a command instead of actually connecting you, it appears as though my PATH environment variable differs from when I connect to the SSH session normally, and it's missing the location of my binaries for bash commands. Why would this be, and how can I avoid it?
Normal connection: ssh root@host
Yields a PATH env of
PATH='/sbin;/usr/sbin;/proc/boot'
An ssh that executes a command but does not connect to the terminal directly (ssh root@host ls) yields "ls: command not found". Upon further inspection, the PATH environment variable is missing /proc/boot, and thus missing the location of the ls binary.
The PATH env of this 'non-terminal' session yields:
PATH='/usr/sbin;/sbin'
but NOT /proc/boot, so it can't call standard commands like ls, mkdir, etc.
Why is this? How can I get my proper PATH when simply executing a command over SSH, but not connecting directly to a displayed terminal?
Source the .profile of the remote server before running commands (note the leading dot, so it is sourced into the same shell rather than run as a separate process):
ssh user@host ". ~/.bash_profile; $command"
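For the PATH question above, that might look like the following (assuming the remote account actually has a ~/.bash_profile that sets PATH; the command is just an example):
ssh root@host ". ~/.bash_profile; ls"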
#!/bin/bash
dets () {
    sleep 1
    echo "$1"
    sleep 1
}
dets "$1" | ssh -T username@ipaddress
Try using the above script, passing the command you want to execute to it, e.g. ./sshscr "ls". This will disable pseudo-tty allocation (-T) and then execute the command through the function dets with the command passed as an argument.
This is actually a feature. When you use a terminal ssh session, you get an interactive login session, so the sshd daemon starts your login shell (the one declared in /etc/passwd) as a login shell. The profile files are read, various environment parameters are initialized, and you can then start entering commands - for old dinosaurs it is the rlogin mode, for younger guys it is just login mode.
When you pass a remote command directly on the ssh line, none of the above occurs. The sshd daemon just sets up a default environment and launches the command - it is the rsh mode for dinosaurs, or command mode for younger ones.
How to fix:
The best way is to not rely on the PATH when you pass commands directly in the ssh line:
ssh root@host /bin/ls
Alternatively, you can pass commands to an interactive shell (assuming bash on linux):
echo 'ls' | ssh root@host "bash -i"
But beware it is just an interactive shell, not a login shell: the ~/.bashrc will be read, but not ~/.profile nor ~/.bash_profile
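If you do need the login-shell initialization as well (so that ~/.profile or ~/.bash_profile is read), you can ask bash for a login shell instead; a minimal variant of the command above:
echo 'ls' | ssh root@host "bash -l"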

different ssh behavior from crond

I've been pulling my hair out on this one for several hours now. I welcome any new ideas on where to look next.
The objective is to login to a custom application CLI over SSH and then drop down a debug shell on the far-end device using one of the custom CLI commands. On the client side I'm using CentOS minimal and running ssh as follows:
Working case:
[user@ashleys-xpvm ws]$ ssh -p8222 admin@192.168.56.20
admin@192.168.56.20's password:
Welcome to CLI
admin connected from 172.29.33.108 using ssh on scm2
TRAN39# debug-utils shell
device@scm2:~$
The ssh client session accesses the custom CLI using the application-specific port 8222. Once inside the CLI, we drop down to the bash shell using the 'debug-utils shell' command.
This sequence was scripted with Python/pexpect and that worked fine when the script was launched from the user's command line. The problem arose when the script was moved to the crontab to be run automatically by crond. In the latter case, the script fails in a peculiar way.
Following the recommendation from this post (How to simulate the environment cron executes a script with?), I launched a new shell on the client machine with the same environment variables as the cron job uses, and I was able to manually reproduce the same problem the automatic cron job was running into.
With the cron environment set, the far-end device now throws the following error at the point where we issue the command to drop into the device's bash shell:
sh-4.2$ ssh -p8222 admin@192.168.56.20
admin@192.168.56.20's password:
Welcome to CLI
admin connected from 172.29.33.108 using ssh on scm2
TRAN39# debug-utils shell
error: failed to decode arguments
TRAN39#
Once I had the problem reproduced, I set up two terminals, one with the working environment variables and the other with the failing environment variables. I ran ssh from both terminals with the '-vvv' flag and compared the debug output between the two.
The two outputs were identical except where they step through the environment variables to determine what to send to the SSH server (obviously), and the 'bits set' lines were slightly different. I looked at the environment variable lines and I could see that ssh is ignoring all of them except LANG, which is identical in both the working case and the failing case.
I'm at a loss now for why the ssh server at the far-end device is behaving differently between these two client-side environment settings.
Here is the working environment:
[user@centos_vm ws]$ env
XDG_SESSION_ID=294
HOSTNAME=centos_vm
SELINUX_ROLE_REQUESTED=
TERM=xterm-256color
SHELL=/bin/bash
HISTSIZE=1000
SSH_CLIENT=192.168.56.20 52795 22
SELINUX_USE_CURRENT_RANGE=
OLDPWD=/home/user
SSH_TTY=/dev/pts/4
USER=user
LS_COLORS=rs=0:di=38;5;27:ln=38;5;51:mh=44;38;5;15:pi=40;38;5;11:so=38;5;13:do=38;5;5:bd=48;5;232;38;5;11:cd=48;5;232;38;5;3:or=48;5;232;38;5;9:mi=05;48;5;232;38;5;15:su=48;5;196;38;5;15:sg=48;5;11;38;5;16:ca=48;5;196;38;5;226:tw=48;5;10;38;5;16:ow=48;5;10;38;5;21:st=48;5;21;38;5;15:ex=38;5;34:*.tar=38;5;9:*.tgz=38;5;9:*.arc=38;5;9:*.arj=38;5;9:*.taz=38;5;9:*.lha=38;5;9:*.lz4=38;5;9:*.lzh=38;5;9:*.lzma=38;5;9:*.tlz=38;5;9:*.txz=38;5;9:*.tzo=38;5;9:*.t7z=38;5;9:*.zip=38;5;9:*.z=38;5;9:*.Z=38;5;9:*.dz=38;5;9:*.gz=38;5;9:*.lrz=38;5;9:*.lz=38;5;9:*.lzo=38;5;9:*.xz=38;5;9:*.bz2=38;5;9:*.bz=38;5;9:*.tbz=38;5;9:*.tbz2=38;5;9:*.tz=38;5;9:*.deb=38;5;9:*.rpm=38;5;9:*.jar=38;5;9:*.war=38;5;9:*.ear=38;5;9:*.sar=38;5;9:*.rar=38;5;9:*.alz=38;5;9:*.ace=38;5;9:*.zoo=38;5;9:*.cpio=38;5;9:*.7z=38;5;9:*.rz=38;5;9:*.cab=38;5;9:*.jpg=38;5;13:*.jpeg=38;5;13:*.gif=38;5;13:*.bmp=38;5;13:*.pbm=38;5;13:*.pgm=38;5;13:*.ppm=38;5;13:*.tga=38;5;13:*.xbm=38;5;13:*.xpm=38;5;13:*.tif=38;5;13:*.tiff=38;5;13:*.png=38;5;13:*.svg=38;5;13:*.svgz=38;5;13:*.mng=38;5;13:*.pcx=38;5;13:*.mov=38;5;13:*.mpg=38;5;13:*.mpeg=38;5;13:*.m2v=38;5;13:*.mkv=38;5;13:*.webm=38;5;13:*.ogm=38;5;13:*.mp4=38;5;13:*.m4v=38;5;13:*.mp4v=38;5;13:*.vob=38;5;13:*.qt=38;5;13:*.nuv=38;5;13:*.wmv=38;5;13:*.asf=38;5;13:*.rm=38;5;13:*.rmvb=38;5;13:*.flc=38;5;13:*.avi=38;5;13:*.fli=38;5;13:*.flv=38;5;13:*.gl=38;5;13:*.dl=38;5;13:*.xcf=38;5;13:*.xwd=38;5;13:*.yuv=38;5;13:*.cgm=38;5;13:*.emf=38;5;13:*.axv=38;5;13:*.anx=38;5;13:*.ogv=38;5;13:*.ogx=38;5;13:*.aac=38;5;45:*.au=38;5;45:*.flac=38;5;45:*.mid=38;5;45:*.midi=38;5;45:*.mka=38;5;45:*.mp3=38;5;45:*.mpc=38;5;45:*.ogg=38;5;45:*.ra=38;5;45:*.wav=38;5;45:*.axa=38;5;45:*.oga=38;5;45:*.spx=38;5;45:*.xspf=38;5;45:
MAIL=/var/spool/mail/user
PATH=/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/user/.local/bin:/home/user/bin
PWD=/home/user/ws
LANG=en_US.UTF-8
SELINUX_LEVEL_REQUESTED=
HISTCONTROL=ignoredups
SHLVL=1
HOME=/home/user
LOGNAME=user
SSH_CONNECTION=192.168.56.20 52795 192.168.56.101 22
LESSOPEN=||/usr/bin/lesspipe.sh %s
XDG_RUNTIME_DIR=/run/user/1000
_=/usr/bin/env
[user@centos_vm ws]$
...and here is the failing (i.e. cron) environment:
sh-4.2$ env
XDG_SESSION_ID=321
SHELL=/bin/sh
USER=user
PATH=/usr/bin:/bin
PWD=/home/user/ws
LANG=en_US.UTF-8
HOME=/home/user
SHLVL=2
LOGNAME=user
XDG_RUNTIME_DIR=/run/user/1000
_=/usr/bin/env
OLDPWD=/home/user
sh-4.2$
I'm running out of my depth on ssh debugging at this point so any guidance on where to look next is greatly appreciated.
Usually ssh without specifying a command (ssh user@host) will pass the value of TERM on the local host to the remote server. For example:
# TERM=foo ssh 127.0.0.1
bash-4.4# echo $TERM
foo
bash-4.4#
In crontab, crond by default will not set the TERM variable, so after the ssh login TERM will be set to dumb (which is not fully functional). See example:
# (unset TERM; ssh 127.0.0.1)
bash-4.4# echo $TERM
dumb
bash-4.4# clear
TERM environment variable not set.
bash-4.4#
In your case it sounds like the remote application requires a more functional TERM, so explicitly setting it to TERM=xterm (which will be passed to the remote server) in the crontab would fix it.
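A minimal sketch of that crontab change (the schedule and script path are made up for illustration):
# at the top of the crontab: give every job a usable TERM
TERM=xterm
# the cron job itself
*/5 * * * * /home/user/ws/debug_shell_check.py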
Note that ssh with a command (ssh user@host command...) will not allocate a pty on the remote server, so the local TERM will not be passed. To force creating a pty and passing the variable, we must use ssh -t. See example:
# echo $TERM
dtterm
# ssh 127.0.0.1 'tty; echo $TERM'
not a tty
dumb
# ssh -t 127.0.0.1 'tty; echo $TERM'
/dev/pts/8
dtterm
#
From the Wikipedia article on dumb terminals:
Dumb terminals are those that can interpret a limited number of control codes (CR, LF, etc.) but do not have the ability to process special escape sequences that perform functions such as clearing a line, clearing the screen, or controlling cursor position. In this context dumb terminals are sometimes dubbed glass Teletypes, for they essentially have the same limited functionality as does a mechanical Teletype. This type of dumb terminal is still supported on modern Unix-like systems by setting the environment variable TERM to dumb. Smart or intelligent terminals are those that also have the ability to process escape sequences, in particular the VT52, VT100 or ANSI escape sequences.

node.js unavailable via ssh

I am trying to call an installation of node.js on a remote server running Ubuntu via SSH. Node has been installed via nvm.
SSHing in and calling node works just fine:
user@localmachine:~$ ssh user@remoteserver
(Server welcome text)
user@remoteserver:~$ which node
/home/user/.nvm/v0.10.00/bin/node
However if I combine it into one line:
user#localmachine:~$ ssh user#remoteserver "which ls"
/bin/ls
user#localmachine:~$ ssh user#remoteserver "which node"
No sign of node, so I tried sourcing .bashrc and waiting 10 seconds:
user#localmachine:~$ ssh user#remoteserver "source ~/.bashrc; sleep 10; which node"
Only node seems affected by this. One thing I did notice was that if I ssh in and then check which shell I'm in it says -bash whilst if I ssh direct it gives me /bin/bash. I tried running the commands inside a bash login shell:
user#localmachine:~$ ssh user#remoteserver 'bash --login -c "which node"'
Still nothing.
Basically my question is: Why isn't bash finding my node.js installation when I call it non-interactively from SSH?
Another approach is to run bash in interactive mode with the -i flag:
user#localmachine:~$ ssh user#remoteserver "bash -i -c 'which node'"
/home/user/.nvm/v0.10.00/bin/node
$ ssh user@remoteserver "which node"
When you run ssh and specify a command to be run on the remote system, ssh by default doesn't allocate a PTY (pseudo-TTY) for the session. Not having a TTY causes your remote shell process (i.e. bash) to initialize as a non-interactive session instead of an interactive session. This can alter how it interprets your initialization files (.bashrc, .bash_profile, and so on).
The actual problem is probably that the line which adds /home/user/.nvm/v0.10.00/bin to your command PATH isn't executing for non-interactive sessions. There are two ways to resolve this:
Find the command in your initialization file(s) which adds /home/user/.nvm/v0.10.00/bin to your command path, figure out why it's not running for non-interactive sessions, and correct it (see the sketch after this list).
Run ssh with the -t option. This tells it to allocate a PTY for the remote session. Or add the line RequestTTY yes to your .ssh/config file on the local host.
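For the first option, a common culprit (a guess, since the remote ~/.bashrc isn't shown) is that the nvm loader lines sit below the interactivity guard that many distributions ship at the top of ~/.bashrc, so they are skipped for non-interactive shells:
# near the top of a stock ~/.bashrc on Debian/Ubuntu:
case $- in
    *i*) ;;       # interactive: keep reading
      *) return;; # non-interactive: stop here
esac

# nvm's usual loader lines; if they appear below the guard above,
# they never run for 'ssh host command', so 'which node' fails.
# Moving them above the guard (or into ~/.bash_profile) fixes it.
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"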

Suppress 'Warning: no access to tty' in ssh

I have a short, simple script that compiles a .c file and runs it on a remote server running tcsh, then just gives control back to my machine (this is for school; I need my programs to work properly on the lab computers but want to edit them etc. on my machine). It runs commands this way:
ssh -T user@server << EOF
cd cs4400/$dest
gcc -o $efile $file
./$efile
EOF
So far it works fine, but it gives this warning every time I do this:
Warning: no access to tty (Bad file descriptor).
Thus no job control in this shell.
I know this technically isn't a problem, but it's SUPER annoying. I'm trying to do school work, checking the output of my program etc., and this clutters everything, and I HATE it.
I'm running this version of ssh on my machine:
OpenSSH_6.1p1 Debian-4, OpenSSL 1.0.1c 10 May 2012
This version of tcsh on the server:
tcsh 6.17.00 (Astron) 2009-07-10 (x86_64-unknown-linux)
And this version of ssh on the server:
OpenSSH_5.3p1, OpenSSL 1.0.0-fips 29 Mar 2010
The message is actually printed by the shell, in this case tcsh.
You can use
strings /usr/bin/tcsh | grep 'no access to tty'
to ensure that it belongs to tcsh itself.
It is related to ssh only very loosely, i.e. ssh in this case is just the trigger, not the cause.
You could either change your approach and not use a here-document: instead, place an executable custom_script at /path/custom_script and run it via ssh.
# this will work
ssh user@dest '/path/custom_script'
Or just run the complex command as a one-liner.
# this will work as well
ssh user@dest "cd cs4400/$dest; gcc -o $efile $file; ./$efile"
On OS X, I solved a similar problem (for script provisioning on Vagrant) with ssh -t -t (note that -t comes twice).
Advice based on the ssh BSD man page:
-T      Disable pseudo-terminal allocation.
-t      Force pseudo-terminal allocation. This can be used to execute arbitrary screen-based programs on a remote machine, which can be very useful, e.g. when implementing menu services. Multiple -t options force tty allocation, even if ssh has no local tty.
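Applied to the original here-document, the double -t approach would look like this (a sketch; forcing a pty silences the tcsh warning but may echo the commands and prompts back at you):
ssh -tt user@server << EOF
cd cs4400/$dest
gcc -o $efile $file
./$efile
EOF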
If running tcsh is not important for you, specify a different shell and it will work:
ssh -T user@server bash << EOF
cd cs4400/$dest
gcc -o $efile $file
./$efile
EOF

User environment is not sourced with chroot

I have a little problem with a chroot environment and I hope you could help me :)
Here's my story:
1 - I created a user demo (with a home like /home/demo) and chrooted him via a script, /bin/chrootshell, which is as follows:
#!/bin/bash
exec -c /usr/sbin/chroot /home/$USER /bin/bash --login -i
2 - Usual login authentication is disabled for this user, so I have to use su - demo to log in as him.
Everything works well (all the chrooted system commands, my Java configuration, etc.). But each time I become user demo, it seems my .bashrc or /etc/profile is not sourced... and I don't know why.
But if I launch a manual bash it works as you can see here:
root@test:~# su - demo
bash-4.1$ env
PWD=/
SHELL=/bin/chrootshell
SHLVL=1
_=/bin/env
bash-4.1$ bash
bash-4.1$ env
PWD=/
SHLVL=2
SHELL=/bin/chrootshell
PLOP=export variable test
_=/bin/env
As you can see, my $PLOP variable (defined in /.bashrc, i.e. /home/demo/.bashrc) is correctly set in the second bash, but I don't know why.
Thanks in advance if you have any clue about my issue :)
edit: What I actually don't understand is why SHELL=/bin/chrootshell; in my chroot environment I declared my demo user with a /bin/bash shell.
As far as I can tell the behaviour that you are experiencing is bash working as designed.
In short: when bash is started as a login shell (that is what happens when you call bash with --login), it will read .profile but not .bashrc. When bash is started as a non-login shell, it will read .bashrc but not .profile.
Read the bash manual chapter about startup files for more information.
My suggestion to work around this design decision is to create a .bash_profile with the following content:
if [ -f "~/.profile" ]; then
source "~/.profile"
fi
if [ -f "~/.bashrc" ]; then
source "~/.bashrc"
fi
That will make bash read .profile and .bashrc when started as a login shell, and only .bashrc when started as a non-login shell. Now you can put the stuff which needs to be done once (at login) in .profile, and the stuff which needs to be done every time in .bashrc.
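Applied to the chroot question above, a quick check (assuming the file is created as /home/demo/.bash_profile, i.e. /.bash_profile inside the chroot) would be:
su - demo
echo $PLOP    # should now print the value set in .bashrc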
