How to script multiple ssh and scp commands to various systems - linux

I would like to script a sequence of commands involving multiple ssh and scp calls. On a daily basis, I find myself manually performing this task:
From LOCAL system, ssh to SYSTEM1
mkdir /tmp/data on SYSTEM1
from SYSTEM1, ssh to SYSTEM2
mkdir /tmp/data on SYSTEM2
from SYSTEM2, SSH to SYSTEM3
scp files from SYSTEM3:/data to SYSTEM2:/tmp/data
exit to SYSTEM2
scp files from SYSTEM2:/data and SYSTEM2:/tmp/data to SYSTEM1:/tmp/data
rm -fr SYSTEM2:/tmp/data
exit to SYSTEM1
scp files from SYSTEM1:/data and SYSTEM1:/tmp/data to LOCAL:/data
rm -fr SYSTEM1:/tmp/data
I do this process at LEAST once a day and it takes approximately 5-10 minutes going between the different systems and then cleaning up afterwards. I would really like to automate this in a bash script, but my amateur attempts so far have been unsuccessful. As you might suspect, communication between the systems is constrained: LOCAL can only see SYSTEM1, SYSTEM2 can only see SYSTEM1 and SYSTEM3, SYSTEM3 can only see SYSTEM2, etc. You get the idea. What is the best way to do this? Additionally, SYSTEM1 is a hub for many other systems, so SYSTEM2 must be indicated by the user (SYSTEM3 will always have the same relative IP/hostname compared to any SYSTEM2).
I tried just putting the commands in the proper order in a shell script and then manually typing in the passwords when prompted (which would already be a huge gain in efficiency) but either the method doesn't work or my execution of the script is wrong. Additionally, I would want to have a command line argument for the script that would take a pattern for which 'system2' to connect to, a pattern for the data to copy, and a target location for the data on the local system.
Such as
./grab_data system2 *05-14* ~/grabbed-data
I did some searching and I think my next step would be to have scripts on each system that perform the local tasks, and then execute those scripts via ssh commands from the respective remote system. Is there a better way? What commands should I look at using, and what would be the general approach to automating this sort of nested ssh and scp problem?
I realize my description may be a bit convoluted so please ask for clarification on any area that I did not properly describe.
Thanks.

You can simplify this process a lot by tunneling ssh connections over other ssh connections (see this previous answer). The way I'd do it is to create an .ssh/config file on the LOCAL system with the following entries:
Host SYSTEM3
    ProxyCommand ssh -e none SYSTEM2 exec /usr/bin/nc %h %p 2>/dev/null
    HostName SYSTEM3.full.domain
    User system3user

Host SYSTEM2
    ProxyCommand ssh -e none SYSTEM1 exec /usr/bin/nc %h %p 2>/dev/null
    HostName SYSTEM2.full.domain
    User system2user

Host SYSTEM1
    HostName SYSTEM1.full.domain
    User system1user
(That's assuming both intermediate hosts have netcat installed as /usr/bin/nc -- if not, you may have to find or install some equivalent way of gatewaying stdin and stdout into a TCP session.)
With this set up, you can use scp SYSTEM3:/data /data on LOCAL, and it'll automatically tunnel through SYSTEM1 and SYSTEM2 (and ask for the passwords for the three SYSTEMn's in order -- this can be a little confusing, especially if you mistype one).
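With that config in place, the whole daily routine from the question can collapse into one short script run on LOCAL. Here is a rough sketch, not a drop-in solution: it assumes the fixed host aliases from the config above (a user-selectable SYSTEM2 would need its own Host entry or a templated config), and the script name and arguments simply mirror the question's ./grab_data example:

#!/bin/bash
# usage: ./grab_data SYSTEM2_ALIAS PATTERN LOCAL_DEST
# e.g.:  ./grab_data SYSTEM2 '*05-14*' ~/grabbed-data
sys2="$1"
pattern="$2"
dest="$3"

mkdir -p "$dest"

# Quoting the pattern keeps the local shell from expanding it,
# so the remote shell on each SYSTEM does the globbing instead.
scp "SYSTEM3:/data/$pattern" "$dest"/
scp "$sys2:/data/$pattern" "$dest"/
scp "SYSTEM1:/data/$pattern" "$dest"/

Because every copy is driven from LOCAL through the tunnels, the /tmp/data staging directories and the rm -fr cleanup steps are no longer needed.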

If you're connecting to multiple systems, and especially if you have to forward connections through intermediate hosts, you will want to use public key authentication with ssh-agent forwarding enabled. That way, you only have to authenticate once.
Scripted SSH with agent forwarding may suffice if all you need to do is check the exit status from your remote commands, but if you're going to do anything complex you might be better off using expect or expect-lite to drive the SSH/SCP sessions in a more flexible way. Expect in particular is designed to be a robust replacement for interactive sessions.
If you stick with shell scripting, and your filenames change a lot, you can always create a wrapper around SSH or SCP like so:
# usage: my_ssh [remote_host] [command_line]
# returns: exit status of remote command, or 255 on SSH error
my_ssh () {
    local host="$1"
    shift
    ssh -A "$host" "$@"
}
Between ssh-agent and the wrapper function, you should have a reasonable starting point for your own efforts.
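A matching wrapper for scp can keep the call sites just as short; this is purely illustrative (the my_scp name is made up here), with -o ForwardAgent=yes mirroring the -A used above:

# usage: my_scp [scp arguments...]
# returns: exit status of scp, or 255 on SSH error
my_scp () {
    scp -o ForwardAgent=yes "$@"
}

# example call sites
my_ssh SYSTEM2 'mkdir -p /tmp/data'
my_scp 'SYSTEM2:/data/*05-14*' ~/grabbed-data/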

Another way could be to use rsync, which automatically creates any needed directories and, if you want, removes the copied source files.
In your case, you could work with the commands
home:~$ ssh system1
system1:~$ ssh system2
system2:~$ rsync -aPSHiv system3:/data /tmp/data
system2:~$ exit
system1:~$ rsync -aPSHiv --remove-source-files system2:/tmp/data /tmp/data
system1:~$ rsync -aPSHiv system2:/data /tmp/data
system1:~$ exit
home:~$ rsync -aPSHiv --remove-source-files system1:/tmp/data /tmp/data
home:~$ rsync -aPSHiv system1:/data /data
If you combine this with Gordon's approach, you can even reduce that to
home:~$ rsync -aPSHiv system1:/data/ system2:/data/ system3:/data/ /data/
Note that rsync makes a difference between ...data and ...data/ - the former means the directory and its contents, the latter just the contents. If you mix them up, you might end up with a directory data in another directory data.
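A quick illustration of that difference (host and destination directory are placeholders):

rsync -a system1:/data  /tmp/incoming    # arrives as /tmp/incoming/data/...
rsync -a system1:/data/ /tmp/incoming    # contents land directly in /tmp/incoming/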
Besides, you simplify things if you work with public SSH keys instead of passwords.

Related

SCP command Clarification

I'm using the scp commands to pull some files from the remote server and one variation of the command is not working.
I have 2 files names one.xml and two.xml in a remote server and I'm pulling these two files into the current dir using the following command:
scp stuadmin@10.44.220.112:/student/class/Intermediate/one.xml .
scp stuadmin@10.44.220.112:/student/class/Intermediate/two.xml .
The above commands work fine, but if I use wildcards to pull all the xml files in a single shot as shown below, it returns scp: No match.
scp stuadmin@10.44.220.112:/student/class/Intermediate/*.xml .
Why is it working if I pull the files individually and not working if I try to pull using wildcards.
Yeah, this is one for Super User. The answer: the asterisk gets wildcard-expanded FIRST, before the command even runs, and it is expanded by your local SHELL. echo *.xml really runs as echo file1.xml file2.xml, so that is what echo sees, while only the shell ever sees the *.xml.
If you end up passing multiple files and paths to scp this way, it gets confused because the first (or second) argument is not a host:/path.
Put echo in front of your command to see what is really being executed. You can't rely on wildcards being expanded on the remote host unless you escape them first.
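In other words, quote or escape the pattern so it reaches the remote shell intact; with the host and path from the question, either of these should work:

scp 'stuadmin@10.44.220.112:/student/class/Intermediate/*.xml' .
scp stuadmin@10.44.220.112:/student/class/Intermediate/\*.xml .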

write a shell script to ssh to a remote machine and execute commands

I have two questions:
There are multiple remote linux machines, and I need to write a shell script which will execute the same set of commands in each machine. (Including some sudo operations). How can this be done using shell scripting?
When ssh'ing to the remote machine, how to handle when it prompts for RSA fingerprint authentication.
The remote machines are VMs created on the fly and I just have their IPs. So I can't place a script file beforehand on those machines and execute it from my machine.
There are multiple remote linux machines, and I need to write a shell script which will execute the same set of commands in each machine. (Including some sudo operations). How can this be done using shell scripting?
You can do this with ssh, for example:
#!/bin/bash
USERNAME=someUser
HOSTS="host1 host2 host3"
SCRIPT="pwd; ls"
for HOSTNAME in ${HOSTS} ; do
    ssh -l ${USERNAME} ${HOSTNAME} "${SCRIPT}"
done
When ssh'ing to the remote machine, how to handle when it prompts for RSA fingerprint authentication.
You can add the StrictHostKeyChecking=no option to ssh:
ssh -o StrictHostKeyChecking=no -l username hostname "pwd; ls"
This will disable the host key check and automatically add the host key to the list of known hosts. If you do not want to have the host added to the known hosts file, add the option -o UserKnownHostsFile=/dev/null.
Note that this disables certain security checks, for example protection against man-in-the-middle attack. It should therefore not be applied in a security sensitive environment.
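To avoid password prompts in a loop like the one above, distribute a public key beforehand. A minimal sketch, assuming ssh-copy-id is available and password authentication is still enabled for this one-time step:

# generate a key pair once (add a passphrase in real use)
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519

# push the public key to every host (the last time you type each password)
for HOSTNAME in host1 host2 host3 ; do
    ssh-copy-id -i ~/.ssh/id_ed25519.pub someUser@${HOSTNAME}
done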
Install sshpass with apt-get install sshpass, then edit the script below and put your Linux machines' IPs, usernames and passwords in the corresponding order. After that, run the script. That's it! This script will install VLC on all systems.
#!/bin/bash
SCRIPT="cd Desktop; pwd; echo -e 'PASSWORD' | sudo -S apt-get install vlc"
HOSTS=("192.168.1.121" "192.168.1.122" "192.168.1.123")
USERNAMES=("username1" "username2" "username3")
PASSWORDS=("password1" "password2" "password3")
for i in ${!HOSTS[*]} ; do
    echo ${HOSTS[i]}
    SCR=${SCRIPT/PASSWORD/${PASSWORDS[i]}}
    sshpass -p ${PASSWORDS[i]} ssh -l ${USERNAMES[i]} ${HOSTS[i]} "${SCR}"
done
This works for me.
Syntax: ssh -i pemfile.pem user_name@ip_address 'command_1; command_2; command_3'
#!/bin/bash
echo "########### connecting to server and run commands in sequence ###########"
ssh -i ~/.ssh/ec2_instance.pem ubuntu@ip_address 'touch a.txt; touch b.txt; sudo systemctl status tomcat.service'
There are a number of ways to handle this.
My favorite way is to install http://pamsshagentauth.sourceforge.net/ on the remote systems and also your own public key. (Figure out a way to get these installed on the VM, somehow you got an entire Unix system installed, what's a couple more files?)
With your ssh agent forwarded, you can now log in to every system without a password.
And even better, that pam module will authenticate for sudo with your ssh key pair so you can run with root (or any other user's) rights as needed.
You don't need to worry about the host key interaction. If the input is not a terminal then ssh will just limit your ability to forward agents and authenticate with passwords.
You should also look into packages like Capistrano. Definitely look around that site; it has an introduction to remote scripting.
Individual script lines might look something like this:
ssh remote-system-name command arguments ...   # so, for example,
ssh target.mycorp.net sudo puppet apply
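For reference, the pam_ssh_agent_auth setup usually comes down to two small pieces of configuration on each remote system; exact paths and policy vary by distribution, so treat this only as a sketch:

# /etc/pam.d/sudo -- let sudo authenticate against the forwarded agent
auth sufficient pam_ssh_agent_auth.so file=~/.ssh/authorized_keys

# /etc/sudoers (edit with visudo) -- keep the agent socket in sudo's environment
Defaults env_keep += "SSH_AUTH_SOCK"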
The accepted answer sshes to machines sequentially. In case you want to ssh to multiple machines and run some long-running commands like scp concurrently on them, run the ssh command as a background process.
#!/bin/bash
username="user"
servers=("srv-001" "srv-002" "srv-002" "srv-003");
script="pwd;"
for s in "${servers[@]}"; do
    echo "sshing ${username}@${s} to run ${script}"
    (ssh ${username}@${s} ${script}) &   # Run in background
done
wait   # If removed, you can run some other script here
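If the interleaved output from the background jobs gets hard to read, one small variation (not part of the original answer) is to give each host its own log file:

for s in "${servers[@]}"; do
    ssh ${username}@${s} ${script} > "out.${s}.log" 2>&1 &   # one log per host
done
wait
cat out.*.log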
If you are able to write Perl code, then you should consider using Net::OpenSSH::Parallel.
You would be able to describe the actions that have to be run in every host in a declarative manner and the module will take care of all the scary details. Running commands through sudo is also supported.
For this kind of task, I repeatedly use Ansible, which lets you run the same bash scripts coherently across several containers or VMs. Ansible (more precisely Red Hat) now has an additional web interface, AWX, which is the open-source edition of their commercial Tower.
Ansible: https://www.ansible.com/
AWX: https://github.com/ansible/awx
Ansible Tower: a commercial product; you will probably first explore the free open-source AWX rather than the 15-day free trial of Tower.
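To give a feel for it, here is a minimal ad-hoc example; the inventory file name and group name are invented, and it assumes Ansible is installed on your own machine only (the targets just need SSH and Python):

# inventory.ini (hypothetical)
[myvms]
192.168.1.121 ansible_user=username1
192.168.1.122 ansible_user=username2

# run the same command on every host in the group, escalating with sudo
ansible myvms -i inventory.ini -m shell -a 'uptime' --become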
There are multiple ways to execute commands or scripts on multiple remote Linux machines.
One of the simplest ways is via pssh (parallel ssh program).
pssh: is a program for executing ssh in parallel on a number of hosts. It provides features such as sending input to all of the processes, passing a password to ssh, saving the output to files, and timing out.
Example & Usage:
Connect to host1 and host2, and print "hello, world" from each:
pssh -i -H "host1 host2" echo "hello, world"
Run commands via a script on multiple servers:
pssh -h hosts.txt -P -I<./commands.sh
Run a command without checking or saving host keys:
pssh -h hostname_ip.txt -x '-q -o StrictHostKeyChecking=no -o PreferredAuthentications=publickey -o PubkeyAuthentication=yes' -i 'uptime; hostname -f'
If the file hosts.txt has a large number of entries, say 100, then the parallelism option may also be set to 100 to ensure that the commands are run concurrently:
pssh -i -h hosts.txt -p 100 -t 0 sleep 10000
Options:
-I: Read input and send it to each ssh process.
-P: Tells pssh to display output as it arrives.
-h: Reads the hosts file.
-H: [user@]host[:port] for a single host.
-i: Display standard output and standard error as each host completes.
-x args: Passes extra SSH command-line arguments.
-o option: Can be used to give options in the format used in the configuration files (/etc/ssh/ssh_config, ~/.ssh/config).
-p parallelism: Use the given number as the maximum number of concurrent connections.
-q: Quiet mode; causes most warning and diagnostic messages to be suppressed.
-t: Make connections time out after the given number of seconds. 0 means pssh will not time out any connections.
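The file passed to -h is plain text with one [user@]host[:port] entry per line; for example, a hosts.txt might look like this (user names and addresses invented here):

deploy@192.168.1.121
deploy@192.168.1.122:2222
192.168.1.123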
When ssh'ing to the remote machine, how to handle when it prompts for
RSA fingerprint authentication.
Disable the StrictHostKeyChecking to handle the RSA authentication prompt.
-o StrictHostKeyChecking=no
Source: man pssh
This worked for me. I made a function. Put this in your shell script:
sshcmd(){
    ssh $1@$2 $3
}
sshcmd USER HOST COMMAND
If you have multiple machines that you want to run the same command on, repeat that line, separated by a semicolon. For example, if you have two machines you would do this:
sshcmd USER HOST COMMAND ; sshcmd USER HOST COMMAND
Replace USER with the user of the computer. Replace HOST with the name of the computer. Replace COMMAND with the command you want to do on the computer.
Hope this helps!
You can follow this approach :
Connect to the remote machine using an Expect script. If your machine doesn't have Expect installed, you can install it. Writing an Expect script is easy (search the web for help on this).
Put all the actions that need to be performed on the remote server in a shell script.
Invoke the remote shell script from the Expect script once login is successful.

How can I scp a file and run an ssh command asking for password only once?

Here's the context of the question:
In order for me to be able to print documents at work, I have to copy the file over to a different computer and then print from that computer. (Don't ask. It's complicated and there is not another viable solution.) Both of the computers are Linux and I work in bash. The way I currently do this is I scp the file over to the print computer and then ssh in and print from command line.
Here's what I would like to do:
In order to make my life a bit easier, I'd like to combine these two steps into one. I could easily write a function that did both these steps, but I would have to provide my password twice. Is there any way to combine the steps so that I only provide my password once?
Before somebody suggests it, key-based ssh-logins are not an option. It has been specifically disabled by the Administrators for security reasons.
Solution:
What I ended up doing was a modification of the second solution Wrikken provided. Simply wrapping up his first suggestion in a function would have gotten the job done, but I liked the idea of being able to print multiple documents without having to type my password once per document. I have a rather long password and I'm a lazy typist :)
So, what I did was take a sequence of commands and wrap them up in a python script. I used python because I wanted to parameterize the script, and I find it easiest to do in python. I cheated and just ran bash commands from python through os.system. Python just handled parameterization and flow control. The logic was as follows:
if socket does not exist:
    run bash command to create socket with timeout
copy file using the created socket
ssh command to print using socket
In addition to using a timeout, I also have an option in my python script to manually close the socket should I wish to do so.
If anyone wants the code, just let me know and I'll either paste-bin it or put it on my git repo.
ssh user@host 'cat - > /tmp/file.ext; do_something_with /tmp/file.ext; rm /tmp/file.ext' < file.ext
Another option would be to just leave an ssh tunnel open:
In ~/.ssh/config:
Host *
    ControlMaster auto
    ControlPath ~/.ssh/sockets/ssh-socket-%r-%h-%p

$ ssh -f -N -l user host
(socket is now open)
Subsequent ssh/scp requests will reuse the already existing tunnel.
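With the master connection open, the copy-then-print workflow from the question becomes two commands that both reuse the socket, so no further password is asked for. A sketch only: lpr and the /tmp path are assumptions, so substitute whatever print command you actually use on the print host:

scp report.pdf user@host:/tmp/ && \
    ssh user@host 'lpr /tmp/report.pdf && rm /tmp/report.pdf'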
Here is a bash script template, which follows @Wrikken's second method but can be used as is - no need to edit the user's SSH config file:
#!/bin/bash
TARGET_ADDRESS=$1 # the first script argument
HOST_PATH=$2 # the second script argument
TARGET_USER=root
TMP_DIR=$(mktemp -d)
SSH_CFG=$TMP_DIR/ssh-cfg
SSH_SOCKET=$TMP_DIR/ssh-socket
TARGET_PATH=/tmp/file
# Create a temporary SSH config file:
cat > "$SSH_CFG" <<ENDCFG
Host *
    ControlMaster auto
    ControlPath $SSH_SOCKET
ENDCFG
# Open a SSH tunnel:
ssh -F "$SSH_CFG" -f -N -l $TARGET_USER $TARGET_ADDRESS
# Upload the file:
scp -F "$SSH_CFG" "$HOST_PATH" $TARGET_USER@$TARGET_ADDRESS:"$TARGET_PATH"
# Run SSH commands:
ssh -F "$SSH_CFG" $TARGET_USER@$TARGET_ADDRESS -T <<ENDSSH
# Do something with $TARGET_PATH here
ENDSSH
# Close the SSH tunnel:
ssh -F "$SSH_CFG" -S "$SSH_SOCKET" -O exit "$TARGET_ADDRESS"

linux execute command remotely

how do I execute command/script on a remote linux box?
say I want to do service tomcat start on box b from box a.
I guess ssh is the most secure way to do this, for example:
ssh -OPTIONS -p SSH_PORT user@remote_server "remote_command1; remote_command2; remote_script.sh"
where the OPTIONS have to be chosen according to your specific needs (for example, binding to IPv4 only) and your remote command could be starting your tomcat daemon.
Note:
If you do not want to be prompted at every ssh run, also have a look at ssh-agent, and optionally at keychain if your system allows it. The key is to understand the ssh key exchange process. Take a careful look at ssh_config (i.e. the ssh client config file) and sshd_config (i.e. the ssh server config file). Configuration filenames depend on your system; you'll find them somewhere like /etc/ssh/sshd_config. Ideally, do not run ssh as root but as a specific user on both sides, server and client.
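The usual key setup sequence is short; a sketch (key type and filenames are up to you):

ssh-keygen -t ed25519                                  # create a key pair, ideally with a passphrase
ssh-copy-id user@remote_server                         # install the public key on the server
eval "$(ssh-agent -s)" && ssh-add                      # load the key once per session
ssh -p SSH_PORT user@remote_server "remote_command1"   # no more password prompts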
Some extra docs beyond the main project pages:
ssh and ssh-agent
man ssh
http://www.snailbook.com/index.html
https://help.ubuntu.com/community/SSH/OpenSSH/Configuring
keychain
http://www.gentoo.org/doc/en/keychain-guide.xml
an older tutorial in French (by myself :-) but it might be useful too:
http://hornetbzz.developpez.com/tutoriels/debian/ssh/keychain/
ssh user@machine 'bash -s' < local_script.sh
or you can just
ssh user@machine "remote command to run"
If you don't want to deal with security and want to make it as exposed (aka "convenient") as possible for the short term, and/or don't have ssh/telnet or key generation on all your hosts, you can hack a one-liner together with netcat. Write a command to your target computer's port over the network and it will run it. Then you can block access to that port to a few "trusted" users or wrap it in a script that only allows certain commands to run. And use a low-privilege user.
on the server
mkfifo /tmp/netfifo; nc -lk 4201 0</tmp/netfifo | bash -e &>/tmp/netfifo
This one-liner reads whatever string you send into that port and pipes it into bash to be executed. stderr and stdout are dumped back into netfifo and sent back to the connecting host via nc.
on the client
To run a command remotely:
echo "ls" | nc HOST 4201

How to send data to local clipboard from a remote SSH session

Borderline ServerFault question, but I'm programming some shell scripts, so I'm trying here first :)
Most *nixes have a command that will let you pipe/redirect output to the local clipboard/pasteboard, and retrieve from same. On OS X these commands are
pbcopy, pbpaste
Is there anyway to replicate this functionality while SSHed into another server? That is,
I'm using Computer A.
I open a terminal window
I SSH to Computer B
I run a command on Computer B
The output of Computer B is redirected or automatically copied to Computer A's clipboard.
And yes, I know I could just (shudder) use my mouse to select the text from the command, but I've gotten so used to the workflow of piping output directly to the clipboard that I want the same for my remote sessions.
Code is useful, but general approaches are appreciated as well.
My favorite way is ssh [remote-machine] "cat log.txt" | xclip -selection c. This is most useful when you don't want to (or can't) ssh from remote to local.
Edit: on Cygwin ssh [remote-machine] "cat log.txt" > /dev/clipboard.
Edit: A helpful comment from nbren12:
It is almost always possible to setup a reverse ssh connection using SSH port forwarding. Just add RemoteForward 127.0.0.1:2222 127.0.0.1:22 to the server's entry in your local .ssh/config, and then execute ssh -p 2222 127.0.0.1 on the remote machine, which will then redirect the connection to the local machine. – nbren12
I'm resurrecting this thread because I've been looking for the same kind of solution, and I've found one that works for me. It's a minor modification to a suggestion from OSX Daily.
In my case, I use Terminal on my local OSX machine to connect to a linux server via SSH. Like the OP, I wanted to be able to transfer small bits of text from terminal to my local clipboard, using only the keyboard.
The essence of the solution:
commandThatMakesOutput | ssh desktop pbcopy
When run in an ssh session to a remote computer, this command takes the output of commandThatMakesOutput (e.g. ls, pwd) and pipes the output to the clipboard of the local computer (the name or IP of "desktop"). In other words, it uses nested ssh: you're connected to the remote computer via one ssh session, you execute the command there, and the remote computer connects to your desktop via a different ssh session and puts the text to your clipboard.
It requires your desktop to be configured as an ssh server (which I leave to you and google). It's much easier if you've set up ssh keys to facilitate fast ssh usage, preferably using a per-session passphrase, or whatever your security needs require.
Other examples:
ls | ssh desktopIpAddress pbcopy
pwd | ssh desktopIpAddress pbcopy
For convenience, I've created a bash file to shorten the text required after the pipe:
#!/bin/bash
ssh desktop pbcopy
In my case, I'm using a specially named key.
I saved the script with the file name cb (my mnemonic for ClipBoard). Put it somewhere in your path, make it executable, and voila:
ls | cb
Found a great solution that doesn't require a reverse ssh connection!
You can use xclip on the remote host, along with ssh X11 forwarding & XQuartz on the OSX system.
To set this up:
Install XQuartz (I did this with soloist + pivotal_workstation::xquartz recipe, but you don't have to)
Run XQuartz.app
Open XQuartz Preferences (Cmd+,)
Make sure "Enable Syncing" and "Update Pasteboard when CLIPBOARD changes" are checked
ssh -X remote-host "echo 'hello from remote-host' | xclip -selection clipboard"
Reverse tunnel port on ssh server
All the existing solutions either need:
X11 on the client (if you have it, xclip on the server works great) or
the client and server to be in the same network (which is not the case if you're at work trying to access your home computer).
Here's another way to do it, though you'll need to modify how you ssh into your computer.
I've started using this and it's nowhere near as intimidating as it looks so give it a try.
Client (ssh session startup)
ssh username@server.com -R 2000:localhost:2000
(hint: make this a keybinding so you don't have to type it)
Client (another tab)
nc -l 2000 | pbcopy
Note: if you don't have pbcopy then just tee it to a file.
Server (inside SSH session)
cat some_useful_content.txt | nc localhost 2000
Other notes
Actually, even if you're in the middle of an ssh session, there's a way to start a tunnel, but I don't want to scare people away from what really isn't as bad as it looks. I'll add the details later if I see any interest.
There are various tools to access X11 selections, including xclip and XSel. Note that X11 traditionally has multiple selections, and most programs have some understanding of both the clipboard and primary selection (which are not the same). Emacs can work with the secondary selection too, but that's rare, and nobody really knows what to do with cut buffers...
$ xclip -help
Usage: xclip [OPTION] [FILE]...
Access an X server selection for reading or writing.
-i, -in read text into X selection from standard input or files
(default)
-o, -out prints the selection to standard out (generally for
piping to a file or program)
-l, -loops number of selection requests to wait for before exiting
-d, -display X display to connect to (eg "localhost:0")
-h, -help usage information
-selection selection to access ("primary", "secondary", "clipboard" or "buffer-cut")
-noutf8 don't treat text as utf-8, use old unicode
-version version information
-silent errors only, run in background (default)
-quiet run in foreground, show what's happening
-verbose running commentary
Report bugs to <astrand@lysator.liu.se>
$ xsel -help
Usage: xsel [options]
Manipulate the X selection.
By default the current selection is output and not modified if both
standard input and standard output are terminals (ttys). Otherwise,
the current selection is output if standard output is not a terminal
(tty), and the selection is set from standard input if standard input
is not a terminal (tty). If any input or output options are given then
the program behaves only in the requested mode.
If both input and output is required then the previous selection is
output before being replaced by the contents of standard input.
Input options
-a, --append Append standard input to the selection
-f, --follow Append to selection as standard input grows
-i, --input Read standard input into the selection
Output options
-o, --output Write the selection to standard output
Action options
-c, --clear Clear the selection
-d, --delete Request that the selection be cleared and that
the application owning it delete its contents
Selection options
-p, --primary Operate on the PRIMARY selection (default)
-s, --secondary Operate on the SECONDARY selection
-b, --clipboard Operate on the CLIPBOARD selection
-k, --keep Do not modify the selections, but make the PRIMARY
and SECONDARY selections persist even after the
programs they were selected in exit.
-x, --exchange Exchange the PRIMARY and SECONDARY selections
X options
--display displayname
Specify the connection to the X server
-t ms, --selectionTimeout ms
Specify the timeout in milliseconds within which the
selection must be retrieved. A value of 0 (zero)
specifies no timeout (default)
Miscellaneous options
-l, --logfile Specify file to log errors to when detached.
-n, --nodetach Do not detach from the controlling terminal. Without
this option, xsel will fork to become a background
process in input, exchange and keep modes.
-h, --help Display this help and exit
-v, --verbose Print informative messages
--version Output version information and exit
Please report bugs to <conrad@vergenet.net>.
In short, you should try xclip -i/xclip -o or xclip -i -sel clip/xclip -o -sel clip or xsel -i/xsel -o or xsel -i -b/xsel -o -b, depending on what you want.
If you use iTerm2 on the Mac, there is an easier way. This functionality is built into iTerm2's Shell Integration capabilities via the it2copy command:
Usage: it2copy
Copies to clipboard from standard input
it2copy filename
Copies to clipboard from file
To make it work, choose iTerm2-->Install Shell Integration menu item while logged into the remote host, to install it to your own account. Once that is done, you'll have access to it2copy, as well as a bunch of other aliased commands that provide cool functionality.
The other solutions here are good workarounds but this one is so painless in comparison.
This is my solution based on SSH reverse tunnel, netcat and xclip.
First create script (eg. clipboard-daemon.sh) on your workstation:
#!/bin/bash
HOST=127.0.0.1
PORT=3333
NUM=`netstat -tlpn 2>/dev/null | grep -c " ${HOST}:${PORT} "`
if [ $NUM -gt 0 ]; then
    exit
fi
while [ true ]; do
    nc -l ${HOST} ${PORT} | xclip -selection clipboard
done
and start it in background.
./clipboard-daemon.sh&
It will start nc, pipe its output to xclip, and respawn the process after each portion of data is received.
Then start ssh connection to remote host:
ssh user@host -R127.0.0.1:3333:127.0.0.1:3333
While logged in on remote box, try this:
echo "this is test" >/dev/tcp/127.0.0.1/3333
then try pasting on your workstation.
You can of course write wrapper script that starts clipboard-daemon.sh first and then ssh session. This is how it works for me. Enjoy.
Allow me to add a solution that if I'm not mistaken was not suggested before.
It does not require the client to be exposed to the internet (no reverse connections), nor does it use any xlibs on the server and is implemented completely using ssh's own capabilities (no 3rd party bins)
It involves:
Opening a connection to the remote host, then creating a fifo file on it and waiting on that fifo in parallel (same actual TCP connection for everything).
Anything you echo to that fifo file ends up in your local clipboard.
When the session is done, remove the fifo file on the server and cleanly terminate the connections together.
The solution utilizes ssh's ControlMaster functionality to use just one TCP connection for everything so it will even support hosts that require a password to login and prompt you for it just once.
Edit: as requested, the code itself:
Paste the following into your bashrc and use sshx host to connect.
On the remote machine echo SOMETHING > ~/clip and hopefully, SOMETHING will end up in the local host's clipboard.
You will need the xclip utility on your local host.
_dt_term_socket_ssh() {
    ssh -oControlPath=$1 -O exit DUMMY_HOST
}

function sshx {
    local t=$(mktemp -u --tmpdir ssh.sock.XXXXXXXXXX)
    local f="~/clip"
    ssh -f -oControlMaster=yes -oControlPath=$t "$@" tail\ -f\ /dev/null || return 1
    ssh -S$t DUMMY_HOST "bash -c 'if ! [ -p $f ]; then mkfifo $f; fi'" \
        || { _dt_term_socket_ssh $t; return 1; }
    (
        set -e
        set -o pipefail
        while [ 1 ]; do
            ssh -S$t -tt DUMMY_HOST "cat $f" 2>/dev/null | xclip -selection clipboard
        done &
    )
    ssh -S$t DUMMY_HOST \
        || { _dt_term_socket_ssh $t; return 1; }
    ssh -S$t DUMMY_HOST "rm $f"
    _dt_term_socket_ssh $t
}
More detailed explanation is on my website:
https://xicod.com/2021/02/09/clipboard-over-ssh.html
The simplest solution of all, if you're on OS X using Terminal and you've been ssh'ing around in a remote server and wish to grab the results of a text file or a log or a csv, simply:
1) Cmd-K to clear the output of the terminal
2) cat <filename> to display the contents of the file
3) Cmd-S to save the Terminal Output
You'll have to manually remove the first and last lines of the file, but this method is a bit simpler than relying on other packages being installed, "reverse tunnels", trying to have a static IP, etc.
This answer builds upon the chosen answer by adding more security.
That answer discussed the general form
<command that makes output> | \
ssh <user A>@<host A> <command that maps stdin to clipboard>
Where security may be lacking is in the ssh permissions allowing <user B> on <host B> to ssh into host A and execute any command.
Of course B-to-A access may already be gated by an ssh key, and it may even have a password. But another layer of security can restrict the scope of allowable commands that B can execute on A, e.g. so that rm -rf / cannot be called. (This is especially important when the ssh key doesn't have a password.)
Fortunately, ssh has a built-in feature called command restriction or forced command. See ssh.com, or
this serverfault.com question.
The solution below shows the general form solution along with ssh command restriction enforced.
Example Solution with command restriction added
This security enhanced solution follows the general form - the call from the ssh session on host-B is simply:
cat <file> | ssh <user-A>@<host A> to-clipboard
The rest of this shows the setup to get that to work.
Setup of ssh command restriction
Suppose the user account on B is user-B, and B has an ssh key id-clip, that has been created in the usual way (ssh-keygen).
Then in user-A's ssh directory there is a file
/home/user-A/.ssh/authorized_keys
that recognizes the key id-clip and allows ssh connection.
Usually the content of each line of authorized_keys is exactly the public key being authorized, e.g. the contents of id-clip.pub.
However, to enforce command restriction, that public key content is prefixed (on the same line) with the command to be executed.
In our case:
command="/home/user-A/.ssh/allowed-commands.sh id-clip",no-agent-forwarding,no-port-forwarding,no-user-rc,no-x11-forwarding,no-pty <content of file id-clip.pub>
The designated command "/home/user-A/.ssh/allowed-commands.sh id-clip", and only that designated command, is executed whenever the key id-clip is used to initiate an ssh connection to host-A, no matter what command is written on the ssh command line.
The command points to a script file, allowed-commands.sh, and the contents of that script file are:
#!/bin/bash
#
# You can have only one forced command in ~/.ssh/authorized_keys. Use this
# wrapper to allow several commands.
Id=${1}
case "$SSH_ORIGINAL_COMMAND" in
    "to-clipboard")
        notify-send "ssh to-clipboard, from ${Id}"
        cat | xsel --display :0 -i -b
        ;;
    *)
        echo "Access denied"
        exit 1
        ;;
esac
The original call to ssh on machine B was
... | ssh <user-A>@<host A> to-clipboard
The string to-clipboard is passed to allowed-commands.sh in the environment variable SSH_ORIGINAL_COMMAND.
In addition, we have passed the name of the key, id-clip, from the line in authorized_keys, which is only reached when id-clip is used.
The line
notify-send "ssh to-clipboard, from ${Id}"
is just a popup messagebox to let you know the clipboard is being written - that's probably a good security feature too. (notify-send works on Ubuntu 18.04, maybe not others).
In the line
cat | xsel --display :0 -i -b
the parameter --display :0 is necessary because the process doesn't have its own X display with a clipboard,
so it must be specified explicitly. This value :0 happens to work on Ubuntu 18.04 with the Wayland window server. On other setups it might not work. For a standard X server this answer might help.
host-A /etc/ssh/sshd_config parameters
Finally a few parameters in /etc/ssh/sshd_config on host A that should be set to ensure permission to connect, and permission to use ssh-key only without password:
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
AllowUsers user-A
To make the sshd server re-read the config
sudo systemctl restart sshd.service
or
sudo service sshd restart
conclusion
It's some effort to set up, but other functions besides to-clipboard can be constructed in parallel within the same framework.
Not a one-liner, but requires no extra ssh.
install netcat if necessary
use termbin: cat ~/some_file.txt | nc termbin.com 9999. This will copy the output to the termbin website and print the URL of your output.
visit that URL from your computer and you get your output
Of course, do not use it for sensitive content.
@rhileighalmgren's solution is good, but pbcopy will annoyingly copy the last "\n" character; I use "head" to strip out the last character to prevent this:
#!/bin/bash
head -c -1 | ssh desktop pbcopy
My full solution is here : http://taylor.woodstitch.com/linux/copy-local-clipboard-remote-ssh-server/
Far Manager Linux port supports synchronizing clipboard between local and remote host. You just open local far2l, do "ssh somehost" inside, run remote far2l in that ssh session and get remote far2l working with your local clipboard.
It supports Linux, *BSD and OS X; I made a special PuTTY build to utilize this functionality from Windows as well.
For anyone googling their way to this:
The best solution in this day and age seems to be lemonade.
Various solutions are also mentioned in the neovim help text for clipboard-tool.
If you're working over e.g. a pod in a Kubernetes cluster and not direct SSH, so that there is no way for you to do a file transfer, you could use cat and then save the terminal output as text. For example, in macOS you can do Shell -> Export as text.
