node.js unavailable via ssh

I am trying to call an installation of node.js on a remote server running Ubuntu via SSH. Node has been installed via nvm.
SSHing in and calling node works just fine:
user@localmachine:~$ ssh user@remoteserver
(Server welcome text)
user@remoteserver:~$ which node
/home/user/.nvm/v0.10.00/bin/node
However if I combine it into one line:
user@localmachine:~$ ssh user@remoteserver "which ls"
/bin/ls
user@localmachine:~$ ssh user@remoteserver "which node"
No sign of node, so I tried sourcing .bashrc and waiting 10 seconds:
user@localmachine:~$ ssh user@remoteserver "source ~/.bashrc; sleep 10; which node"
Only node seems affected by this. One thing I did notice: if I SSH in interactively and check which shell I'm in (echo $0), it reports -bash, whereas running the same check as a one-line ssh command reports /bin/bash. I tried running the commands inside a bash login shell:
user@localmachine:~$ ssh user@remoteserver 'bash --login -c "which node"'
Still nothing.
Basically my question is: Why isn't bash finding my node.js installation when I call it non-interactively from SSH?

Another approach is to run bash in interactive mode with the -i flag:
user@localmachine:~$ ssh user@remoteserver "bash -i -c 'which node'"
/home/user/.nvm/v0.10.00/bin/node

$ ssh user@remoteserver "which node"
When you run ssh and specify a command to be run on the remote system, ssh by default doesn't allocate a PTY (pseudo-TTY) for the session. Not having a TTY causes your remote shell process (i.e., bash) to initialize as a non-interactive session instead of an interactive session. This can alter how it interprets your initialization files: .bashrc, .bash_profile, and so on.
The actual problem is probably that the line which adds /home/user/.nvm/v0.10.00/bin to your command PATH isn't executing for non-interactive sessions. There are two ways to resolve this (see the sketch after this list):
1. Find the command in your initialization file(s) which adds /home/user/.nvm/v0.10.00/bin to your command path, figure out why it's not running for non-interactive sessions, and correct it.
2. Run ssh with the -t option, which tells it to allocate a PTY for the remote session, or add the line RequestTTY yes to your .ssh/config file on the local host.
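A sketch of what that usually looks like, assuming the stock Ubuntu ~/.bashrc, which starts with a guard that returns early for non-interactive shells, so nvm lines appended at the bottom by nvm's installer never run:

# Near the top of Ubuntu's default ~/.bashrc:
case $- in
    *i*) ;;
      *) return;;
esac

# Fix 1: move the nvm setup above that guard so it also runs non-interactively:
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"

# Fix 2: request a TTY so bash initializes as an interactive session:
ssh -t user@remoteserver "which node"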

Related

establish ssh connection and execute command remotely

I wish to run a script on the remote system and then stay there.
Running the following script:
ssh user@remote logs.sh
does run the script, but after that I am back on my host system; I need to stay on the remote one. I tried:
ssh user@remote logs.sh;bash -l
which somewhat solves the problem, but it still doesn't behave exactly like a fresh login with:
ssh user@remote
Better still, is there something I could include in my script that would open a bash shell in the same directory where the script was running? Please suggest.
Try this:
ssh -t user@remote 'logs.sh; bash -l'
The quotes are needed to pass both commands to ssh. The -t option forces a pseudo-tty allocation.
Discussion
Consider:
ssh user@remote logs.sh;bash -l
When the shell parses this line, it splits it into two commands. The first is:
ssh user@remote logs.sh
This runs logs.sh on the remote machine. The second command is:
bash -l
This opens a login shell on the local machine.
The quotes were added above to prevent the shell from splitting up the commands this way.
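To also land in the directory where the script ran, as the asker requested, one hedged variation (with /home/user/logs standing in for wherever logs.sh actually lives):

ssh -t user@remote 'cd /home/user/logs && ./logs.sh; exec bash -l'

Here exec replaces the remote command shell with a login shell, so when logs.sh finishes you stay on the remote machine, already in that directory.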

How to remotely run a (shebang prefixed) node script using ssh?

I want to remotely run a node.js script containing a shebang line through ssh, just as when running it locally.
myscript file:
#!/usr/bin/env node
var param = process.argv[2] || 'help';
//... other js code
When running locally on each host (e.g. myscript arg1) it runs successfully. When running remotely on a "sister" node in a cluster (containing the same file and directory structure, including node and myscript):
ssh -o "PasswordAuthentication no" bob#123.1.2.3 /path/to/myscript arg1
I get a /usr/bin/env: ‘node’: No such file or directory error.
Am I missing an ssh param/option?
More details: if I run
ssh -o "PasswordAuthentication no" bob@123.1.2.3 echo "hello"
it also works fine. Forgive me if this is obvious to you; I'm not an advanced Linux user, the ssh manual seemed a little overwhelming, and I tried a couple of answers found here with no success:
What exactly does "/usr/bin/env node" do at the beginning of node files?
Run scripts remotely via SSH
how to run a script file remotely using ssh
If the node executable isn't already in your PATH environment variable at login, you could provide the full path to it in the shebang line of your script:
#!/usr/bin/env /full/path/to/node
As others have commented, you would have to update your script if the path to node ever changes. This is not ideal. Alternatively, you could force ssh to create a pseudo-terminal session by specifying the -t flag and run your script in an interactive bash shell:
ssh -t -o "PasswordAuthentication no" bob@123.1.2.3 'bash -ic "/path/to/myscript arg1"'
Sebastian's answer inspired me to find a solution that doesn't hardcode the full path to node on the script. Basically, I make sure the remote PATH is available before running the command:
ssh -o "PasswordAuthentication no" bob#123.1.2.3 "export PATH=$PATH;/path/to/myscript arg1"
But this only worked for me because both local and remote servers have the same PATH value, since the local PATH is being set onto the remote session.
If your case is not like mine, these questions may help you explore other solutions:
How do I set $PATH such that `ssh user#host command` works?
How to set PATH when running a ssh command?
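If the two PATHs differ, a variant is to prepend the remote node directory explicitly instead of exporting the local PATH. A sketch, assuming node lives under the nvm directory shown in the first question (adjust to your layout):

ssh -o "PasswordAuthentication no" bob@123.1.2.3 'export PATH="$HOME/.nvm/v0.10.00/bin:$PATH"; /path/to/myscript arg1'

The single quotes keep $HOME and $PATH from expanding locally, so the remote shell resolves them with its own values.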

Executing SSH with the Linux/Unix at command

I placed this ssh call in the following shell script on our Linux box, named "tstz", and then call it with the Linux "at" command in order to schedule it for later execution.
tstz script:
#! /bin/ksh
/usr/bin/ssh -tt <remote windows server> pmcmds ${fl} ${wf} < /dev/null >/tmp/test1.log 2>&1
at command syntax:
at -f tstz now + 1 minute
The ssh call executes remote command as expected, but the ssh connection closes immediately before the remote command has completed. I need the connection to stay open until the remote command has completed and then return control to the tstz script with an exit status.
This is the error I get in the /tmp/test1.log:
tcgetattr: Inappropriate ioctl for device
^[[2JConnection to dc01nj2dwifdv02.nj.core.him closed.^M
NOTE: When using the "at" command to schedule tstz, if I don't use -tt, the ssh command will not execute the remote command "pmcmds ${fl} ${wf}". I believe this is because a terminal is required. I can, however, run tstz from the Linux command prompt in the foreground without the -tt on the ssh command line, and it runs as expected.
Any help would be greatly appreciated. Thanks!
As I understand it, you need to specify a command to execute on the REMOTE machine after successfully connecting to the server, not on the LOCAL machine.
I use following command
ssh -i "key.pem" ec2-user#ec2-XX-XX-XX-XX.us-west-2.compute.amazonaws.com -t 'command; bash -l -c "sudo su"'
where you should replace "sudo su" with your own command, I guess with "pmcmds DFD_ETIME wf_TESTa"
So, try issuing, maybe:
/usr/bin/ssh -tt <remote windows server> 'command; bash -l -c "pmcmds DFD_ETIME wf_TESTa"'
P.S. I discovered an interesting service on Google called "explainshell", which helped me understand that the "command;" part is crucial inside the quotes.
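Since the goal is for tstz to regain control with the remote command's exit status, a hedged refinement of the script; ssh exits with the remote command's status (or 255 if ssh itself failed):

#! /bin/ksh
# -tt forces TTY allocation even though "at" runs this script without a terminal.
/usr/bin/ssh -tt <remote windows server> pmcmds ${fl} ${wf} < /dev/null > /tmp/test1.log 2>&1
rc=$?
exit $rc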

Why does executing a command over SSH without a visual terminal use a different PATH location?

When executing an SSH session that simply launches a command instead of actually connecting you, it appears that my PATH environment variable differs from when I connect to the SSH session normally, and it's missing the location of my binaries for bash commands. Why would this be, and how can I avoid it?
Normal connection of: ssh root@host
Yields a PATH env of
PATH='/sbin;/usr/sbin;/proc/boot'
An ssh to execute a command but not connect to the terminal directly (ssh root@host ls) yields "ls: command not found". Upon further inspection, the PATH environment variable is missing /proc/boot, and thus missing the location of the ls binary file.
The PATH env of this 'non terminal' session yields:
PATH='/usr/sbin;/sbin'
but NOT /proc/boot, so it can't call standard actions like ls, mkdir, etc.
Why is this? How can I get my proper PATH when simply executing a command over SSH, but not connecting directly to a displayed terminal?
Source the .profile of the remote server before running commands:
ssh user@host ". ~/.bash_profile; $command"
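A hedged usage sketch: $command is expanded by your local shell before ssh runs, so set it locally first (mvn -version is just a stand-in):

command='mvn -version'
ssh user@host ". ~/.bash_profile; $command"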
#!/bin/bash
# Emit the given command after a short pause, then hand it to ssh on stdin.
dets () {
    sleep 1
    echo "$1"
    sleep 1
}
dets "$1" | ssh -T username@ipaddress
Try using the above script, passing the command you want to execute to the script, i.e. ./sshscr "ls". This disables pseudo-tty allocation (-T) and then executes the commands through the function dets with the commands passed.
This is actually a feature. When you use a terminal ssh session, you get an interactive login session, so the sshd daemon starts your login shell (the one declared in /etc/passwd) as a login shell. The profile files are read and initialize various environment parameters, and you can then start entering commands. For old dinosaurs it is the rlogin mode; for younger folks it is just login mode.
When you pass a remote command directly on the ssh line, none of the above occurs. The sshd daemon just sets up a default environment and launches the command - it is the rsh mode for dinosaurs or command mode for younger ones.
How to fix:
The best way is to not rely on the PATH when you pass commands directly in the ssh line:
ssh root@host /bin/ls
Alternatively, you can pass commands to an interactive shell (assuming bash on linux):
echo 'ls' | ssh root@host "bash -i"
But beware it is just an interactive shell, not a login shell: the ~/.bashrc will be read, but not ~/.profile nor ~/.bash_profile
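If you need the login-time environment as well, a hedged variant is to request a login shell instead: bash invoked with -l reads /etc/profile and ~/.bash_profile even when non-interactive:

echo 'ls' | ssh root@host "bash -l"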

SSH - Remote executing program from host failed?

I need to run this command from an Ubuntu machine: ssh user@hostip "which mvn" to show the executable path of Maven, but it shows nothing:
root@~: ssh user@hostip "which mvn"
root@~:
I checked the command which mvn on the remote host and it shows me:
root@~: which mvn
/usr/share/mvn
root@~:
I tried sourcing .bashrc when executing the ssh command, but no luck:
root@~: ssh user@hostip ". ~/.bashrc;which mvn"
root@~:
Also, .bashrc contains nothing that configures the Maven PATH.
So, what do I have to do?
root@~: ssh user@hostip "which mvn"
When you run ssh and specify a command to invoke on the remote system, ssh by default doesn't allocate a PTY (pseudo-TTY) for the remote session. Shells like bash will detect that they're running without a TTY, and this will alter how the shell process initializes itself.
In your case, whatever command adds "/usr/share" to your remote PATH is probably not running for non-interactive sessions (sessions without a TTY).
You can probably solve this by telling ssh to request a TTY for the remote session:
ssh -tt user@hostip "which mvn"
The -t option tells ssh to request a TTY for the remote session (doubled, as -tt, it forces the request even when ssh itself has no local terminal). This causes your remote shell instance to initialize itself as an interactive session. Refer to the ssh manual for details on the -t option.
If this doesn't solve the problem, you will need to find the point in your shell configuration files on the remote server (.bash_profile, .bashrc, etc.) where /usr/share is being added to your PATH, and make sure that step is performed for non-interactive sessions.
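To locate that point, a hedged starting place is to grep the usual startup files on the remote host (the exact set of files depends on the shell and distribution):

ssh user@hostip 'grep -n PATH ~/.bash_profile ~/.bashrc ~/.profile /etc/profile 2>/dev/null'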
