Run a command on local machine while on ssh in bash [closed] - linux

I want to run a command on the local system while I have ssh'd into a remote system in bash. Is there a way to do this? This is what I want:
#!/bin/bash
ssh mysystem@ip <<'SSH'
#Do something
#Run a command here on local machine and not on machine I have sshed to
#Do Something
exit
SSH
Edit: I want to echo a message, and since the echo output from the remote machine won't be shown, I want to run it locally.

When you are using SSH, the key sequence <enter>~ is an escape prefix that allows you to pause SSH and send key sequences to the ssh client on the local side.
The sequence <enter>~<ctrl + z> will pause (stop) the ssh-client job and drop you to a prompt on the calling system. Typing fg (if you are on a Unix shell) will resume your ssh session afterwards.
You can see the other ssh escape sequences available by typing <enter>~?.
The sequence <enter>~. will terminate the connection and is very handy when your session is locked on the remote machine.
(Users with non-US keyboard layouts that use ~ as a dead key to compose accents and digraphs obviously have to type ~ twice in all of these sequences.)
These sequences are useful when you are operating the SSH session and typing commands yourself, not for scripting.
Since you seem to want a way to do that in scripts, the straightforward solution is to include an ssh command back to the originating host.
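A minimal sketch of that idea, assuming the remote machine can reach the local one directly as localuser@local-host (hypothetical names) and key-based login back to the local machine is already set up:
#!/bin/bash
ssh mysystem@ip <<'SSH'
echo "this runs on the remote machine"
ssh localuser@local-host 'echo "this runs back on the local machine"'
echo "back on the remote machine"
SSH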

I have an approach which is pretty hacky, but it works.
Overview and security caveats
In brief, you use reverse SSH tunnelling to SSH back to your local machine and run a single command, and you connect back using your SSH keys so that no password is required.
NB This approach involves agent forwarding, which comes with a risk:
anyone with root access on the remote host can discreetly access your local SSH agent through the socket. They can use your keys to impersonate you on other machines on the network.
The risk is lessened in your case because the SSH session is only open for the duration of the command. But I'm not a security expert so can't comment further.
An alternative would be to generate a specific keypair just for this connection and use that, but I'm not sure how scriptable this would be.
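If you do go the dedicated-keypair route, a rough, untested sketch (the file name is just a placeholder) could be:
ssh-keygen -t ed25519 -f ~/.ssh/reverse_tunnel_key -N ''
cat ~/.ssh/reverse_tunnel_key.pub >> ~/.ssh/authorized_keys
# The private key would then have to be made available on the remote host and
# the connection back made with ssh -i; you can also prefix the new
# authorized_keys entry with options such as command="..." to restrict it.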
The second security caveat is that this approach involves running an SSH server on your local machine. See my notes at the end of this answer for more on that.
Details
First of all, your SSH command needs some extra parameters:
ssh mysystem@ip -A -R 2900:localhost:22
-A forwards your credentials (detailed article on agent forwarding). You'll use them when connecting back to your local machine.
-R 2900:localhost:22 sets up the reverse tunnel. This means that on the remote machine you can run ssh -p2900 yourlocaluser@localhost and it'll SSH back to your local machine. Replace yourlocaluser with the user from your host machine (not the machine you're SSHing into). I picked 2900 as an arbitrary port. It needs to be higher than 1024, I think.
To avoid typing these every time, you can set them in your SSH config (~/.ssh/config) on your local machine. These are the relevant properties:
ForwardAgent yes
RemoteForward 2900 localhost:22
Also, you need to tell your local machine that SSH connections are allowed to connect to it using its own key pair(!) To do this, add the contents of your public key file (e.g. ~/.ssh/id_rsa.pub) to ~/.ssh/authorized_keys.
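For example, assuming an RSA key (adjust the file name for other key types):
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys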
You can now connect to your remote machine and run a command like this to connect back to your local one:
ssh -t -p2900 yourlocaluser@localhost <command here>
Note, however, that the first time you connect back from the remote machine to your local one using the key, you'll get a warning that the host you're connecting to is unknown. Once you say that you want to continue connecting, it'll save the relevant details to ~/.ssh/known_hosts on the remote machine and not ask again.
You could log in and manually do an SSH to get the details saved. Alternatively, you can update the SSH command that you run on the remote machine, but it comes with an additional security caveat.
Here's the updated command:
ssh -o StrictHostKeyChecking=accept-new -t -p2900 yourlocaluser@localhost <command here>
The security risk is that you're accepting the key without reviewing it and making sure that it's what you're expecting, so you're vulnerable to man-in-the-middle attacks. Again, I'm no security expert, but given that you're connecting using an SSH tunnel rather than a regular SSH connection, I believe that this reduces the risk. If the known hosts file on the remote machine only contains the entry for your local machine, you could update your SSH config to replace the contents of that file with your local machine's key fingerprint from your local machine on login, and then remove -o StrictHostKeyChecking=accept-new from the above.
Note: If you're prompted for your password when trying to SSH back, that suggests that agent forwarding hasn't worked. You probably need to run ssh-add on your local machine or update your local SSH config for the host in question to include AddKeysToAgent yes.
Note about running sshd on your local machine
The above assumes that you're running sshd on your local machine, and thus accepting SSH connections to that machine. That's a security risk in itself. One way of reducing that risk is to specify that SSH is only allowed from localhost, which will work in this case because you're tunnelling back. You can find instructions on how to configure your local SSH server for this here: https://askubuntu.com/questions/179325/accepting-ssh-connections-only-from-localhost
You could also adapt the answer here and use netcat rather than SSH: https://superuser.com/a/1274810/126533
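Putting it together with the script from the question, a rough, untested sketch (host names and do_something_remote are placeholders; -t is omitted because a here-document script has no interactive terminal):
#!/bin/bash
ssh -A -R 2900:localhost:22 mysystem@ip <<'SSH'
do_something_remote                      # placeholder: runs on the remote machine
ssh -o StrictHostKeyChecking=accept-new -p2900 yourlocaluser@localhost \
    'echo "hello from the local machine"'
do_something_remote                      # placeholder: more remote work
SSH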

If you can change the script, you can use an expect script for that (see expect_example_and_tips).
This allows you to start an ssh process to which you can send commands for the remote machine, while the script itself still runs on the local machine.
Much easier in python though in my opinion - example:
#!/usr/bin/env python
import sys
import pexpect

PROMPT = r"\$|\%|\>"
ssh_cmd = "ssh user@192.168.1.1"
try:
    ssh = pexpect.spawn(ssh_cmd)
    ssh.expect(PROMPT)                    # wait for the remote prompt
    ssh.sendline("echo hello on remote")  # runs on the remote machine
    ssh.expect(PROMPT)
    print("hello on local machine")       # runs locally
    ssh.close()
except Exception as e:
    print(e)
    sys.exit(2)

If you want to (for argument's sake) run date locally, just don't quote the here document, and any command substitution will be executed locally.
ssh mysystem@ip <<SSH # notice absence of quotes
echo I am logged in from $(uname -n) since $(date)
SSH
Here, the uname and date commands will be executed locally, before the ssh command runs, whereas the echo in the here document will then execute remotely.
(As an aside, there is no need to explicitly exit at the end; the shell will exit when it reaches the end of input. It's hard to imagine a scenario where anything else would make any sense whatsoever.)
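If you need to mix the two, keep the here document unquoted and escape the substitutions that should run remotely; a small sketch:
ssh mysystem@ip <<SSH
echo "local host: $(uname -n)"     # expands locally, before ssh runs
echo "remote host: \$(uname -n)"   # escaped, so it expands on the remote machine
SSH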

Related

ssh without key and collect the output using bash script

I want to create a bash script that will log in to all the Linux servers in my network using ssh and collect the output of the 'uptime' command into a local file. There is no keypair installed between this local server and the remote servers, so I need to give the password (the username and password are the same for all the remote servers) in the script itself. I know this is not a secure way to do it, but it is just for learning purposes. I see the 'expect' command can be used for the ssh login with a password, but I am confused about how to use it together with the 'uptime' command that provides the server status. So my requirements are:
1. I have local server test1 which contains a text file 'server_status.txt'
2. I need a script in test1 that will try to login to all the remote servers (say 192.168.0.1 to 192.168.0.50) using the same username and password. It will execute the command 'uptime' once logged in to the remote servers and store the output to the local file 'server_status.txt'
REVOKED (the OP wants to connect without a key): paste your public key into the server's /path2userthatshouldlogon/.ssh/authorized_keys and run your commands remotely using ssh user@host commandtoexecute.
UPDATE: have a look at sshpass if you really do need passwords, which is NOT RECOMMENDED.
Note: Doing this is poor practice. If you are testing around with this then you are learning a bad habit. Don't do this in production on servers you care about.
You'll want to capture the expect output with command substitution, and be sure to set the $USER, $SERVER and $PASSWORD variables first (or just replace them):
# assumes USER, SERVER and PASSWORD are set earlier in the script
uptime=$(expect -c "spawn ssh $USER@$SERVER uptime; expect \"assword:\"; send \"$PASSWORD\r\"; expect eof")
echo "$uptime"
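To loop over 192.168.0.1 to 192.168.0.50 and collect uptime into the local file, a lab-only sketch using sshpass (variable values are placeholders; hard-coding a password is insecure):
#!/bin/bash
USER=testuser
PASSWORD=testpass
> server_status.txt
for i in $(seq 1 50); do
    sshpass -p "$PASSWORD" ssh -o StrictHostKeyChecking=no "$USER@192.168.0.$i" uptime >> server_status.txt
done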

How to scp back to local when I've already sshed into remote machine?

Often I face this situation: I sshed into a remote server and ran some programs, and I want to copy their output files back to my local machine. What I do is remember the file path on remote machine, exit the connection, then scp user#remote:filepath .
Obviously this is not optimal. What I'm looking for is a way to let me scp file back to local machine without exiting the connection. I did some searching, almost all results are telling me how to do scp from my local machine, which I already know.
Is this possible? Better still, is it possible without needing to know the IP address of my local machine?
Given that you have an sshd running on your local machine, it's possible and you don't need to know your outgoing IP address. If SSH port forwarding is enabled, you can open a secure tunnel even when you already have an ssh connection opened, and without terminating it.
Assume you have an ssh connection to some server:
local $ ssh user@example.com
Password:
remote $ echo abc > abc.txt # now we have a file here
OK now we need to copy that file back to our local server, and for some reason we don't want to open a new connection. OK, let's get the ssh command line by pressing Enter ~C (Enter, then tilde, then capital C):
ssh> help
Commands:
-L[bind_address:]port:host:hostport Request local forward
-R[bind_address:]port:host:hostport Request remote forward
-D[bind_address:]port Request dynamic forward
-KR[bind_address:]port Cancel remote forward
That's just like the regular -L/R/D options. We'll need -R, so we hit Enter ~C again and type:
ssh> -R 127.0.0.1:2222:127.0.0.1:22
Forwarding port.
Here we forward remote server's port 2222 to local machine's port 22 (and here is where you need the local SSH server to be started on port 22; if it's listening on some other port, use it instead of 22).
Now just run scp on a remote server and copy our file to remote server's port 2222 which is mapped to our local machine's port 22 (where our local sshd is running).
remote $ scp -P2222 abc.txt user@127.0.0.1:
user@127.0.0.1's password:
abc.txt 100% 4 0.0KB/s 00:00
We are done!
remote $ exit
logout
Connection to example.com closed.
local $ cat abc.txt
abc
Tricky, but if you really cannot just run scp from another terminal, it could help.
I found this one-liner solution on SU to be a lot more straightforward than the accepted answer. Since it uses an environment variable for the local IP address, I think it also satisfies the OP's request to not know it in advance.
Based on that, here's a bash function to "DownLoad" a file (i.e. push it from the SSH session to a set location on the local machine):
function dl(){
    scp "$1" ${SSH_CLIENT%% *}:/home/<USER>/Downloads
}
Now I can just call dl somefile.txt while SSH'd into the remote and somefile.txt appears in my local Downloads folder.
extras:
I use rsa keys (ssh-copy-id) to get around password prompt
I found this trick to prevent the local bashrc from being sourced on the scp call
Note: this requires SSH access to local machine from remote (is this often the case for anyone?)
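For reference, a quick illustration of what that parameter expansion does (the address is a made-up example):
echo "$SSH_CLIENT"         # e.g. 192.168.1.10 50678 22  (client IP, client port, server port)
echo "${SSH_CLIENT%% *}"   # 192.168.1.10 -- everything after the first space is stripped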
The other answers are pretty good and most users should be able to work with them. However, I found the accepted answer a tad cumbersome and others not flexible enough. A VPN server in between was also causing trouble for me with figuring out IP addresses.
So, the workaround I use is to generate the required scp command on the remote system using the following function in my .bashrc file:
function getCopyCommand {
    echo "scp user@remote:$(pwd)/$1 ."
}
I find rsync to be more useful if the local system is almost a mirror of the remote server (including the username) and I also need to copy the directory structure.
function getCopyCommand {
    echo "rsync -rvPR user@remote:$(pwd)/$1 /"
}
The generated scp or rsync command is then simply pasted on my local terminal to retrieve the file.
You would need a local ssh server running on your machine, then you can just:
scp [-r] local_content your_local_user@your_local_machine_ip:
Anyway, you don't need to close your remote connection to make a remote copy, just open another terminal and run scp there.
On your local computer:
scp root@remotemachine_name_or_IP:/complete_path_to_file /local_path

Shell script for remote SSH

I'm new to shell scripts, and I have CentOS running. I want to write a shell script that SSHes into a remote machine and executes a bunch of commands. The problem I'm facing is how to provide the username, the password, the remote machine address, and the private access key to a command that will connect to the remote machine.
I've Google'd and found some scripts, but all of them need a utility called expect, and I don't want to install any utility, only to run my script. Is there a way to do this?
You can pass all you need in a ssh call, doing the following:
ssh -i private_key_path user_name@remote_machine "command"
If you're going to use this connection many times and want to keep it configured, add the following lines to your .ssh/config file:
Host host_alias
    User user_name
    HostName remote_machine
    IdentityFile private_key_path
and then access the remote machine, and execute the command you want, by doing:
ssh host_alias "command"
Notice that the command, AFAIK, must be enclosed in quotes, as it must be passed to ssh as a single argument.
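For example, using the host alias defined above, several commands can be passed as one quoted argument:
ssh host_alias "uptime; df -h; who"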

Script to automate two consecutive ssh connections

I know it is possible to write a shell script which passes your hard-coded password to a ssh connection authentication (using expect). However what I need is slightly more complicated.
At my university I have a desktop computer appointed to me. I can connect remotely to this computer by first making a ssh connection with some server, then making another ssh connection from that server to my appointed desktop computer. This goes like:
localuser@localcomputer:~$ ssh -X username@serveraddress
username@serveraddress password:
server$ ssh -X username@remotecomputeraddress
username@remotecomputeraddress password:
username@remotecomputer:~>
Is there a way to write a script which could automate the above (i.e. performing two consecutive ssh connections)?
Thanks in advance!
ps: Both the local and the remote computers are running on Linux.
You can do this interactively with:
ssh -t -X username@serveraddress ssh -t -X username@remotecomputeraddress
Note that this is not a pipe - the second ssh is the command to run on the connection created by the first ssh. The -t options are necessary to allocate the pseudo-ttys needed for interaction (password gathering as well as the ultimate goal - an interactive session on the remote system). Wrapping it up with expect is left as an exercise for the reader.... ;-)
Bonus points for setting up proper private/public key pairs and ssh-agent so that the passwords aren't necessary (unless, of course, that is disallowed for security reasons).
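On OpenSSH 7.3 and later the same two-hop pattern can also be expressed with a jump host; a minimal sketch, reusing the host and user names from the question (uni-desktop is just a made-up alias):
ssh -X -J username@serveraddress username@remotecomputeraddress
or, in ~/.ssh/config:
Host uni-desktop
    HostName remotecomputeraddress
    User username
    ProxyJump username@serveraddress
    ForwardX11 yes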
Yes, you can do this.
Presuming you have your expect script in the file expect_script:
cat expect_script | ssh -X username@serveraddress sh -s
In this expect_script you must run ssh -X username@remotecomputeraddress.
And of course you can install public keys on both hosts and use passwordless authentication.
I wrote something to do this with bang paths a while back:
http://stromberg.dnsalias.org/~strombrg/deep-ssh.html
So you'd set up passwordless, passphraseless authentication (or use an agent for the passphrase), like:
http://stromberg.dnsalias.org/~strombrg/ssh-keys.html
And then:
deep-ssh username@serveraddress!username@remotecomputeraddress command
If bash complains about the !, you can just escape it with a backslash.
The old timers will recognize that this is how UUCP paths were specified.

linux execute command remotely

How do I execute a command/script on a remote Linux box?
say I want to do service tomcat start on box b from box a.
I guess ssh is the most secure way to do this, for example:
ssh -OPTIONS -p SSH_PORT user@remote_server "remote_command1; remote_command2; remote_script.sh"
where the OPTIONS are set according to your specific needs (for example, binding to IPv4 only) and your remote command could be starting your tomcat daemon.
Note:
If you do not want to be prompted at every ssh run, please also have a look at ssh-agent, and optionally at keychain if your system allows it. The key is to understand the SSH key exchange process. Please take a careful look at ssh_config (i.e. the ssh client config file) and sshd_config (i.e. the ssh server config file). Configuration file names depend on your system; anyway you'll find them somewhere like /etc/sshd_config. Ideally, do not run ssh as root but as a specific user on both sides, server and client.
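A minimal key-based setup sketch with standard OpenSSH commands (adjust user and host to your environment):
ssh-keygen -t ed25519                      # generate a key pair; accept defaults or set a passphrase
ssh-copy-id user@remote_server             # install the public key on the server
eval "$(ssh-agent)" && ssh-add             # load the key into an agent so the passphrase is asked once
ssh user@remote_server "service tomcat start"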
Some extra docs beyond the source project's main pages:
ssh and ssh-agent
man ssh
http://www.snailbook.com/index.html
https://help.ubuntu.com/community/SSH/OpenSSH/Configuring
keychain
http://www.gentoo.org/doc/en/keychain-guide.xml
an older tutorial in French (by myself :-) but it might be useful too:
http://hornetbzz.developpez.com/tutoriels/debian/ssh/keychain/
ssh user@machine 'bash -s' < local_script.sh
or you can just
ssh user@machine "remote command to run"
If you don't want to deal with security and want to make it as exposed (aka "convenient") as possible for the short term, and/or don't have ssh/telnet or key generation on all your hosts, you can hack a one-liner together with netcat. Write a command to your target computer's port over the network and it will run it. Then you can block access to that port to a few "trusted" users or wrap it in a script that only allows certain commands to run. And use a low-privilege user.
on the server
mkfifo /tmp/netfifo; nc -lk 4201 0</tmp/netfifo | bash -e &>/tmp/netfifo
This one-liner reads whatever string you send to that port and pipes it into bash to be executed. stderr and stdout are dumped back into netfifo and sent back to the connecting host via nc.
on the client
To run a command remotely:
echo "ls" | nc HOST 4201
