establish ssh connection and execute command remotely [duplicate] - linux

I wish to run a script on the remote system and then wish to stay there.
Running the following command:
ssh user@remote logs.sh
This does run the script, but afterwards I am back on my host system; I need to stay on the remote one. I tried:
ssh user@remote logs.sh;bash -l
This somewhat solves the problem, but it still does not behave exactly like a fresh login via:
ssh user@remote
Better still, I would like to include something in my script that opens a bash shell in the same directory where the script was running. Please suggest.

Try this:
ssh -t user@remote 'logs.sh; bash -l'
The quotes are needed to pass both commands to ssh. The -t option forces a pseudo-tty allocation.
Discussion
Consider:
ssh user@remote logs.sh;bash -l
When the shell parses this line, it splits it into two commands. The first is:
ssh user@remote logs.sh
This runs logs.sh on the remote machine. The second command is:
bash -l
This opens a login shell on the local machine.
The quotes were added above to prevent the shell from splitting up the commands this way.
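To also address the asker's follow-up about landing in the script's directory, here is a minimal sketch (the path is a placeholder for wherever logs.sh actually lives) that changes directory first, runs the script, and then replaces it with a login shell, which inherits that working directory:
ssh -t user@remote 'cd /path/to/logs && ./logs.sh; exec bash -l'
Using exec means the remote side ends up running just the shell, so exiting it closes the connection cleanly.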


Running bash script over SSH [duplicate]

I have to run a local shell script (Windows/Linux) on a remote machine.
I have SSH configured on both machine A and B. My script is on machine A, and it will run some of my code on the remote machine, machine B.
The local and remote computers can be either Windows or Unix based systems.
Is there a way to do this using plink/ssh?
If Machine A is a Windows box, you can use Plink (part of PuTTY) with the -m parameter, and it will execute the local script on the remote server.
plink root@MachineB -m local_script.sh
If Machine A is a Unix-based system, you can use:
ssh root@MachineB 'bash -s' < local_script.sh
You shouldn't have to copy the script to the remote server to run it.
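If the script takes arguments, a sketch that should work (bash passes anything after -s to the stdin script as positional parameters; the -- guards against arguments that begin with a dash):
ssh root@MachineB 'bash -s -- arg1 arg2' < local_script.sh
Inside local_script.sh, $1 and $2 then expand to arg1 and arg2 on the remote side.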
This is an old question, and Jason's answer works fine, but I would like to add this:
ssh user@host <<'ENDSSH'
#commands to run on remote host
ENDSSH
This can also be used with su and commands which require user input. (Note the quoted heredoc delimiter, 'ENDSSH', which prevents local expansion of variables inside the heredoc.)
Since this answer keeps getting traffic, I will add even more info about this use of heredoc:
You can nest commands with this syntax, and it is the only way nesting seems to work (in a sane way):
ssh user@host <<'ENDSSH'
#commands to run on remote host
ssh user@host2 <<'END2'
# Another bunch of commands on another host
wall <<'ENDWALL'
Error: Out of cheese
ENDWALL
ftp ftp.example.com <<'ENDFTP'
test
test
ls
ENDFTP
END2
ENDSSH
You can actually have a conversation with some services like telnet, ftp, etc., but remember that a heredoc just sends text to stdin; it doesn't wait for a response between lines.
I just found out that you can indent the insides with tabs if you use <<-END!
ssh user@host <<-'ENDSSH'
#commands to run on remote host
ssh user@host2 <<-'END2'
# Another bunch of commands on another host
wall <<-'ENDWALL'
Error: Out of cheese
ENDWALL
ftp ftp.example.com <<-'ENDFTP'
test
test
ls
ENDFTP
END2
ENDSSH
(I think this should work)
Also see
http://tldp.org/LDP/abs/html/here-docs.html
Also, don't forget to escape variables if you want to pick them up from the destination host.
This has caught me out in the past.
For example:
user@host> ssh user2@host2 "echo \$HOME"
prints out /home/user2
while
user@host> ssh user2@host2 "echo $HOME"
prints out /home/user
Another example:
user@host> ssh user2@host2 "echo hello world | awk '{print \$1}'"
prints out "hello" correctly.
This is an extension to YarekT's answer to combine inline remote commands with passing ENV variables from the local machine to the remote host so you can parameterize your scripts on the remote side:
ssh user@host ARG1=$ARG1 ARG2=$ARG2 'bash -s' <<'ENDSSH'
# commands to run on remote host
echo $ARG1 $ARG2
ENDSSH
I found this exceptionally helpful because it keeps everything in one script, so it is very readable and maintainable.
Why does this work? ssh supports the following syntax:
ssh user@host remote_command
In bash we can specify environment variables to define prior to running a command on a single line like so:
ENV_VAR_1='value1' ENV_VAR_2='value2' bash -c 'echo $ENV_VAR_1 $ENV_VAR_2'
That makes it easy to define variables prior to running a command. In this case, echo is the command we're running, and everything before it defines environment variables.
So we combine those two features and YarekT's answer to get:
ssh user@host ARG1=$ARG1 ARG2=$ARG2 'bash -s' <<'ENDSSH'...
In this case we set ARG1 and ARG2 to local values and send everything after user@host as the remote_command. When the remote machine executes the command, ARG1 and ARG2 are set to the local values (thanks to local command-line evaluation), which defines those environment variables for the remote session; bash -s then runs using those variables. Voila.
<hostA_shell_prompt>$ ssh user@hostB "ls -la"
That will prompt you for a password, unless you have copied your hostA user's public key into the authorized_keys file in the user's ~/.ssh directory on hostB. That allows passwordless authentication (if it is accepted as an auth method in the SSH server's configuration).
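To set that up, a typical sketch (assuming ssh-copy-id is available locally and the server permits public-key authentication):
ssh-keygen -t ed25519
ssh-copy-id user@hostB
ssh user@hostB "ls -la"
The first command can be skipped if you already have a key pair; ssh-copy-id appends your public key to ~/.ssh/authorized_keys on hostB, after which the final command should no longer prompt for a password.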
I've started using Fabric for more sophisticated operations. Fabric requires Python and a couple of other dependencies, but only on the client machine. The server need only be an SSH server. I find this tool to be much more powerful than shell scripts handed off to SSH, and well worth the trouble of getting set up (particularly if you enjoy programming in Python). Fabric handles running scripts on multiple hosts (or hosts of certain roles), helps facilitate idempotent operations (such as adding a line to a config file, but not if it's already there), and allows construction of more complex logic (such as the Python language can provide).
# Pipe a local script to the remote shell's stdin:
cat ./script.sh | ssh <user>@<host>
# Or redirect it via stdin, authenticating with a private key file:
chmod +x script.sh
ssh -i key-file root@111.222.3.444 < ./script.sh
Try running ssh user@remote sh ./script.unx.
Assuming you mean you want to do this automatically from a "local" machine without manually logging into the "remote" machine, you should look into a Tcl extension known as Expect; it is designed precisely for this sort of situation. I've also provided a link to a script for logging in and interacting via SSH.
https://www.nist.gov/services-resources/software/expect
http://bash.cyberciti.biz/security/expect-ssh-login-script/
ssh user@hostname ". ~/.bashrc; cd path-to-file; . filename.sh"
It is highly recommended to source the environment file (.bashrc/.bash_profile/.profile) before running anything on the remote host, because the target and source hosts' environment variables may differ.
I use this one to run a shell script on a remote machine (tested on /bin/bash):
ssh deploy@host . /home/deploy/path/to/script.sh
If you want to execute a command like this:
temp=`ls -a`
echo $temp
the backticks will cause errors, because the local shell expands them before ssh runs.
The command below solves this problem:
ssh user@host '''
temp=`ls -a`
echo $temp
'''
If the script is short and is meant to be embedded inside your script, and you are running under a bash shell with bash also available on the remote side, you may use declare to transfer the local context to the remote. Define the variables and functions containing the state that will be transferred to the remote. Define a function that will be executed on the remote side. Then, inside a here document read by bash -s, you can use declare -p to transfer the variable values and declare -f to transfer the function definitions to the remote.
Because declare takes care of the quoting and will be parsed by the remote bash, the variables are properly quoted and the functions are properly transferred. You may just write the script locally; usually I do one long function with the work I need to do on the remote side. The context has to be hand-picked, but the following method is "good enough" for any short script and is safe - it should properly handle all corner cases.
somevar="spaces or other special characters"
somevar2="!##$%^"
another_func() {
mkdir -p "$1"
}
work() {
another_func "$somevar"
touch "$somevar"/"$somevar2"
}
ssh user@server 'bash -s' <<EOT
$(declare -p somevar somevar2) # transfer variables values
$(declare -f work another_func) # transfer function definitions
work # call the function
EOT
The answer here (https://stackoverflow.com/a/2732991/4752883) works great if
you're trying to run a script on a remote Linux machine using plink or ssh.
It will work if the script has multiple lines on Linux.
However, if you are trying to run a batch script located on a local
Linux/Windows machine, your remote machine is Windows, and the script consists
of multiple lines, then
plink root@MachineB -m local_script.bat
won't work.
Only the first line of the script will be executed. This is probably a
limitation of plink.
Solution 1:
To run a multiline batch script (especially if it's relatively simple,
consisting of a few lines):
If your original batch script is as follows
cd C:\Users\ipython_user\Desktop
python filename.py
you can combine the lines together using the "&&" separator as follows in your
local_script.bat file:
(see https://stackoverflow.com/a/8055390/4752883):
cd C:\Users\ipython_user\Desktop && python filename.py
After this change, you can then run the script as pointed out here by
@JasonR.Coombs (https://stackoverflow.com/a/2732991/4752883) with:
plink root@MachineB -m local_script.bat
Solution 2:
If your batch script is relatively complicated, it may be better to use a batch
script that encapsulates the plink command, as pointed out here by
@Martin: https://stackoverflow.com/a/32196999/4752883:
rem Open tunnel in the background
start plink.exe -ssh [username]@[hostname] -L 3307:127.0.0.1:3306 -i "[SSH key]" -N
rem Wait a second to let Plink establish the tunnel
timeout /t 1
rem Run the task using the tunnel
"C:\Program Files\R\R-3.2.1\bin\x64\R.exe" CMD BATCH qidash.R
rem Kill the tunnel
taskkill /im plink.exe
This Expect script SSHes into a target remote machine and runs some commands there. Don't forget to install expect before running it (on macOS: brew install expect).
#!/usr/bin/expect
set username "enterusernamehere"
set password "enterpasswordhere"
set hosts "enteripaddressofhosthere"
spawn ssh $username@$hosts
expect "$username@$hosts's password:"
send -- "$password\n"
expect "$"
send -- "somecommand on target remote machine here\n"
sleep 5
expect "$"
send -- "exit\n"
You can use runoverssh:
sudo apt install runoverssh
runoverssh -s localscript.sh user host1 host2 host3...
-s runs a local script remotely
Useful flags:
-g use a global password for all hosts (single password prompt)
-n use SSH instead of sshpass, useful for public-key authentication
If it's just one script, the above solutions are fine.
Beyond that, I would set up Ansible to do the job. It works the same way (Ansible uses ssh to execute the scripts on the remote machine, for both Unix and Windows).
It is more structured and maintainable.
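As a taste of what that looks like, a sketch using Ansible's script module ad hoc (the webservers group and inventory.ini path are hypothetical) to run a local script on every host in the group:
ansible webservers -i inventory.ini -m script -a "./localscript.sh"
The script module transfers the local script to each remote host and runs it there.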
It is unclear if the local script uses locally set variables, functions, or aliases.
If it does this should work:
myscript.sh:
#!/bin/bash
myalias $myvar
myfunction $myvar
It uses $myvar, myfunction, and myalias. Let us assume they are set locally and not on the remote machine.
Make a bash function that contains the script:
eval "myfun() { `cat myscript.sh`; }"
Set variable, function, and alias:
myvar=works
alias myalias='echo This alias'
myfunction() { echo This function "$#"; }
And "export" myfun, myfunction, myvar, and myalias to server using env_parallel from GNU Parallel:
env_parallel -S server -N0 --nonall myfun ::: dummy
Extending the answer from @cglotr: to write an inline command, use printf. It is useful for simple commands, and it supports multiple lines via the '\n' escape.
Example:
printf "cd /to/path/your/remote/machine/log \n tail -n 100 Server.log" | ssh <user>@<host> 'bash -s'
Note: don't forget to add bash -s.
There is another approach: you can copy your script to the remote host with the scp command and then execute it easily.
First, copy the script over to Machine B using scp
[user@machineA]$ scp /path/to/script user@machineB:/home/user/path
Then, just run the script
[user@machineA]$ ssh user@machineB "/home/user/path/script"
This will work if you have given the script executable permission.

Executing SSH with the Linux/Unix at command

I place this ssh call in a shell script named "tstz" on our Linux box and then call it with the Linux "at" command in order to schedule it for later execution.
tstz script:
#! /bin/ksh
/usr/bin/ssh -tt <remote windows server> pmcmds ${fl} ${wf} < /dev/null >/tmp/test1.log 2>&1
at command syntax:
at -f tstz now + 1 minute
The ssh call executes remote command as expected, but the ssh connection closes immediately before the remote command has completed. I need the connection to stay open until the remote command has completed and then return control to the tstz script with an exit status.
This is the error I get in the /tmp/test1.log:
tcgetattr: Inappropriate ioctl for device
^[[2JConnection to dc01nj2dwifdv02.nj.core.him closed.^M
NOTE: When using the "at" command to schedule tstz, if I don't use -tt, the ssh command will not execute the remote command "pmcmds ${fl} ${wf}". I believe this is because a terminal is required. I can, however, run tstz from the Linux command prompt in the foreground without the -tt on the ssh command line, and it runs as expected.
Any help would be greatly appreciated. Thanks!
As I understand it, you need to specify a command to execute on the REMOTE machine after successfully connecting to the server, not on the LOCAL machine.
I use the following command:
ssh -i "key.pem" ec2-user#ec2-XX-XX-XX-XX.us-west-2.compute.amazonaws.com -t 'command; bash -l -c "sudo su"'
where you should replace "sudo su" with your own command, I guess with "pmcmds DFD_ETIME wf_TESTa"
So, try issuing, maybe:
/usr/bin/ssh -tt <remote windows server> 'command; bash -l -c "pmcmds DFD_ETIME wf_TESTa"'
P.S. I discovered an interesting service on Google called "explainshell",
which helped me understand that the "command;" part is crucial inside the quotes.

Why does executing a command over SSH without a visual terminal use a different PATH location?

When executing an SSH session that simply launches a command instead of actually connecting you, it appears as though my PATH environment variable differs from when I connect to the SSH session normally, and it's missing the location of the binaries for my bash commands. Why would this be, and how can I avoid it?
Normal connection: ssh root@host
Yields a PATH env of
PATH='/sbin;/usr/sbin;/proc/boot'
An ssh call that executes a command without connecting to a terminal directly (ssh root@host ls) yields "ls: command not found". Upon further inspection, the PATH environment variable is missing /proc/boot, and thus the location of the ls binary.
The PATH env of this 'non terminal' session yields:
PATH='/usr/sbin;/sbin'
but NOT /proc/boot, so it can't call standard commands like ls, mkdir, etc.
Why is this? How can I get my proper PATH when simply executing a command over SSH, but not connecting directly to a displayed terminal?
Run the remote server's .bash_profile before running commands:
ssh user@host ". ~/.bash_profile; $command"
#!/bin/bash
dets () {
    sleep 1
    echo "$1"
    sleep 1
}
dets "$1" | ssh -T username@ipaddress
Try using the above script, passing the command you want to execute as an argument, i.e. ./sshscr "ls". This disables pseudo-tty allocation (-T) and then executes the command by piping the output of the dets function into ssh.
This is actually a feature. When you use a terminal ssh session, you get an interactive login session, so the sshd daemon starts your login shell (the one declared in /etc/passwd) as a login shell. The profile files are read and initialize various environment parameters, and you can then start entering commands - for old dinosaurs it is the rlogin mode; for younger folks it is just login mode.
When you pass a remote command directly on the ssh line, none of the above occurs. The sshd daemon just sets up a default environment and launches the command - the rsh mode for dinosaurs, or command mode for younger ones.
How to fix:
The best way is to not rely on the PATH when you pass commands directly in the ssh line:
ssh root@host /bin/ls
Alternatively, you can pass commands to an interactive shell (assuming bash on linux):
echo 'ls' | ssh root@host "bash -i"
But beware: it is just an interactive shell, not a login shell; ~/.bashrc will be read, but not ~/.profile nor ~/.bash_profile.
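If you need the login environment instead, a sketch that parallels the line above but requests a login shell, so ~/.profile or ~/.bash_profile is read:
echo 'ls' | ssh root@host "bash -l"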

Is it possible to run multiple command with remote command option in putty?

I want to run multiple commands automatically at server login, like sudo bash, ssh server01, ls, cd /tmp, etc.
I am using Remote command option under SSH in putty.
I tried multiple commands with the && delimiter, but it is not working.
There is some information lacking in your question.
You say you want to run sudo bash, then ssh server01.
Will sudo prompt for a password on your remote server?
Assuming sudo requires no password: running bash will open another shell that waits for user input, and the command ssh server01 will not run until that bash shell is exited.
If you want to run 2 commands, try first simpler ones like:
ls -l /tmp ; echo "hi there"
or if you prefer:
ls -l /tmp && echo "hi there"
Does this work?
If what you want is to run ssh after running bash, you can try:
sudo bash -c "ssh server01"
That is probably because the command is expected to be a program name followed by parameters, which will be passed directly to the program. In order to get && and other functionality that is provided by a command line interpreter such as bash, try this:
/bin/bash -c "command1 && command2"
I tried what I suggested in my previous answer.
It is possible to run two simple commands in PuTTY separated by a semicolon. As in my example, I tried ls and echo. The remote server runs them and then the session closes.
I also tried to ssh to a remote server that is configured not to ask for a password. In that case it also works: I get connected to the second server and can run commands on it. Upon exit, both connections are closed.
So please, let us know what you actually need / want.
You can execute two consecutive commands in PuTTY using a regular shell syntax. E.g. using ; or &&.
But you want to execute ssh server01 in a sudo bash shell, right?
These are not two consecutive commands; it's the ssh server01 command executed within sudo bash.
So you have to use sudo's command-line syntax to execute ssh server01, like:
sudo bash -c "ssh server01"

node.js unavailable via ssh

I am trying to call an installation of node.js on a remote server running Ubuntu via SSH. Node has been installed via nvm.
SSHing in and calling node works just fine:
user@localmachine:~$ ssh user@remoteserver
(Server welcome text)
user@remoteserver:~$ which node
/home/user/.nvm/v0.10.00/bin/node
However if I combine it into one line:
user@localmachine:~$ ssh user@remoteserver "which ls"
/bin/ls
user@localmachine:~$ ssh user@remoteserver "which node"
No sign of node, so I tried sourcing .bashrc and waiting 10 seconds:
user@localmachine:~$ ssh user@remoteserver "source ~/.bashrc; sleep 10; which node"
Only node seems affected by this. One thing I did notice was that if I SSH in and then check which shell I'm in, it says -bash, whilst if I ssh directly it gives me /bin/bash. I tried running the commands inside a bash login shell:
user@localmachine:~$ ssh user@remoteserver 'bash --login -c "which node"'
Still nothing.
Basically my question is: Why isn't bash finding my node.js installation when I call it non-interactively from SSH?
Another approach is to run bash in interactive mode with the -i flag:
user@localmachine:~$ ssh user@remoteserver "bash -i -c 'which node'"
/home/user/.nvm/v0.10.00/bin/node
$ ssh user@remoteserver "which node"
When you run ssh and specify a command to be run on the remote system, ssh by default doesn't allocate a PTY (pseudo-TTY) for the session. Not having a TTY causes your remote shell process (i.e., bash) to initialize as a non-interactive session instead of an interactive session. This can alter how it interprets your initialization files: .bashrc, .bash_profile, and so on.
The actual problem is probably that the line which adds /home/user/.nvm/v0.10.00/bin to your command PATH isn't executing for non-interactive sessions. There are two ways to resolve this:
Find the command in your initialization file(s) which adds /home/user/.nvm/v0.10.00/bin to your command path, figure out why it's not running for non-interactive sessions, and correct it.
Run ssh with the -t option. This tells it to allocate a PTY for the remote session. Or add the line RequestTTY yes to your .ssh/config file on the local host.
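For the first approach, a common culprit (an assumption here, since the actual .bashrc isn't shown) is the interactivity guard that stock Ubuntu places near the top of ~/.bashrc: everything sourced below it, including nvm, is skipped for non-interactive shells. A sketch of a fix is to move the nvm lines above that guard:
# ~/.bashrc (hypothetical layout)
# nvm initialization moved ABOVE the guard so non-interactive shells also get it
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
# stock Ubuntu guard: stop here unless the shell is interactive
case $- in
    *i*) ;;
    *) return;;
esac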
