Does Xen support multiple VNC clients connecting? - vnc

I open one VNC viewer to connect to a Xen virtual machine, and that works fine. At the same time, I open another VNC viewer to connect to the same virtual machine, but it can't connect; the status stays at "Connecting....". As far as I know, the reason is that Xen cannot support multiple VNC clients connecting to the same virtual machine at the same time. Has anyone had the same problem? How can I fix this? Looking forward to your answers.

The vncserver has options to allow different kinds of sharing, notably:
-nevershared
Never allow shared desktops.
-alwaysshared
Always allow shared desktops.
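If you do control the vncserver invocation yourself, a minimal sketch of allowing shared connections (assuming a vncserver wrapper that passes these flags through to Xvnc; the display number and geometry are only examples) would be:
vncserver :1 -alwaysshared -geometry 1280x800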
Normally I'd say to hunt down the vncserver command being used and modify it, but Xen appears to work differently, exposing only some options through configuration flags. With Xen, I think support for multiple VNC connections was added to vanilla QEMU around 2010 and was being considered for Xen around the same time, but I'm not seeing the expected benefit of that in the Xen docs I can find on the 'net, so maybe it never happened.
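For reference, the VNC knobs Xen does expose are per-domain config options rather than vncserver flags; a hedged sketch of the relevant lines in a domain config (option names as documented for xl domain configuration, values only examples):
vnc = 1
vnclisten = "0.0.0.0"
vncdisplay = 0
vncpasswd = ""
None of these controls client sharing, which matches the behaviour described in the question.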
If Xen still doesn't support it directly, and you're only trying to make additional connections to an existing X server, you could probably run x11vnc within your first session to add support for extra connections (look for the -shared and probably -forever options). Note that your x11vnc should only allow connections from localhost; additional connections can then be added by using SSH to tunnel to the VM with the following commands:
On the VM:
# :0 -> port 5900; increase if needed. If a vncserver is already there, definitely increase it.
# x11vnc requires root access to reach an Ubuntu login screen, or matching user access
# if a user is already logged in.
# An -auth option is usually needed - this one is for lightdm, but it can vary wildly;
# the output of x11vnc has a lot of info on this. The running X server will often
# have the exact thing you need in its own command line.
#
x11vnc -shared -forever -auth /var/run/lightdm/root/:0 -display :0 -rfbwait 600
On your local computer:
# adjust the target 5900 to be 5900 plus the number after the ":" in the x11vnc command;
# the local 5901 tells which :(something) to use with vncviewer - just subtract 5900.
#
ssh -XC -L5901:localhost:5900 yourvmhostname
# (5901 = local port for :1, 5900 = target port on the VM for :0)
Then locally run (here the :1 is why we have 5901 above, adjust to taste):
vncviewer :1 # connects to the 5901 part on your *own* host, provided by the ssh.
One quirk: these commands rely on the .Xauthority file having magic cookies for the :(n) display you want to use. Note that the vncserver script normally makes these for you, and once they exist, you don't need to recreate them. If you're not using vncserver itself, you might need to make them yourself with:
remdpy=7 # assuming you need cookies for display :7 for some reason.
host=$(uname -n)
cookie=1fedaff375011821b5e0b4cf514d574a # or something, see vncserver's example
for key in $host:$remdpy $host/unix:$remdpy ; do
  if xauth list $key | grep -sq . ; then
    echo $key already present
  else
    xauth -f ~/.Xauthority add $key . $cookie
  fi
done
Don't use the same cookie this shows.
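If you need a fresh cookie value rather than reusing one, mcookie (from util-linux) or openssl can generate a suitable 128-bit hex string; a small sketch:
cookie=$(mcookie 2>/dev/null || openssl rand -hex 16)
echo $cookie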
Note that to punch through a gateway host that hides some internal subnet of VMs (or real hosts), the first step is to set up a tunnel through it. Something like this (root is just an assumption, use lesser IDs if you can):
gateway=192.168.0.1
hidehost=172.16.0.1
# make a tunnel from localhost:2222 to $hidehost:22
ssh -CNn -L2222:$hidehost:22 root@$gateway &
# connect through it to $hidehost and run x11vnc there
ssh -C -p 2222 -L5901:127.0.0.1:5900 root@127.0.0.1 \
'x11vnc -shared -forever -auth /var/run/lightdm/root/:0 -display :0 -rfbwait 600'
# connect to VNC on localhost:5901 - which uses the 2nd tunnel we just made
vncviewer :1 # run in a different window.
This worked in my test. The two SSHes can be combined using -oProxyCommand (so that one doesn't have to clean up the backgrounded SSH job), but that's more involved. Example:
gateway=192.168.0.1
hidehost=172.16.0.1
ssh -C -oProxyCommand="ssh root@$gateway -n -W $hidehost:22" \
-L5901:127.0.0.1:5900 root@127.0.0.1 \
'x11vnc -shared -forever -auth /var/run/lightdm/root/:0 -display :0 -rfbwait 600'
vncviewer :1 # run in a different window.
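On reasonably recent OpenSSH you can also push the gateway hop into ~/.ssh/config with ProxyJump, which keeps the command line short; a sketch using the addresses above:
Host hidehost
    HostName 172.16.0.1
    ProxyJump root@192.168.0.1
After that, ssh -C -L5901:127.0.0.1:5900 root@hidehost '...' behaves much like the combined command above.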

Related

How to find a window ID on a REMOTE Linux machine

I connect to a remote Linux machine using ssh and I need to export only one window with x11vnc, i.e., I need to execute:
x11vnc -id <window-id> -display :0
Every command I try in the ssh session (xprop, wmctrl, etc.) returns info about the local X Window System, not about the remote one, so I don't know how to get information about the windows running on the remote machine.
I can't get the whole desktop with x11vnc because it is locked and I get only a black screen. I would try the '-id pick' option if I had access to the desktop.
Every command I try in the ssh session (xprop, wmctrl, etc.) returns info about the local X Window System, not about the remote one
I assume this is because you connect using ssh -X or something similar. That way ssh sets DISPLAY to point to a tunnel it created to your local X server so that remote commands can display output on your screen. Try to override this variable, examples: DISPLAY=:0 xwininfo -tree -root or DISPLAY=:0 xprop -root|grep ^_NET_CLIENT_LIST.
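Building on that, a hedged sketch of grabbing a window ID by title on the remote display and handing it to x11vnc (the "Firefox" match is only an illustration):
wid=$(DISPLAY=:0 xwininfo -tree -root | awk '/Firefox/ {print $1; exit}')
DISPLAY=:0 x11vnc -id "$wid"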

How to set up working X11 forwarding on WSL2 [closed]

When moving from WSL1 to WSL2 many things change; apparently this applies to X11 forwarding as well.
What steps do I need to make in order to use X11 forwarding with WSL2 on Windows 10 as I did with WSL1?
TL;DR:
Add the following to your ~/.bashrc:
export DISPLAY=$(ip route list default | awk '{print $3}'):0
export LIBGL_ALWAYS_INDIRECT=1
Enable Public Access on your X11 server for Windows.*
Add a separate inbound rule for TCP port 6000 to the Windows firewall in order to allow WSL access to the X server, as described by the wsl-windows-toolbar-launcher people.
As pointed out by WSL_subreddit_mod on reddit and as you can read in Microsoft's documentation on WSL2, the WSL2 architecture uses virtualized network components. This means that WSL2 has a different IP address than the host machine.
This explains why the X11 forwarding settings of WSL1 cannot simply be transferred to WSL2.
On the Ubuntu Wiki page about WSL you can already find a configuration adapted for WSL2 under Running Graphical Applications. A similar configuration is also suggested by the above mentioned Reddit User, who also contributes another part of the solution: Enable Public Access on the X11 server under Windows.
This means add the following to your ~/.bashrc:
export DISPLAY=$(ip route list default | awk '{print $3}'):0
export LIBGL_ALWAYS_INDIRECT=1
And Enable Public Access on your X11 server for Windows.*
The most important part to enable X11 forwarding for WSL2 on Windows 10 is still missing: the Windows firewall blocks connections via the network interface configured for WSL by default.
A separate inbound rule for TCP port 6000 is required to allow WSL access to the X server. After the rule has been created, as described by the wsl-windows-toolbar-launcher people, the IP address range can be restricted to the WSL subnet in the settings of the newly created rule, under Scope: 172.16.0.0/12.
*: If you use VcXsrv you can enable public access for your X server by disabling Access Control under Extra Settings.
Or by calling vcxsrv.exe directly with the -ac flag: vcxsrv.exe -ac, as pointed out by ameeno in the GitHub issue.
Alternatively this SO answer shows how to share keys via .Xauthority files, leaving you with intact access control.
For some people who, like me, allowed access only for private networks:
although both boxes should have been ticked,
the rules may show stop signs in Windows Defender Firewall.
Double-click each rule and allow the connection for both private and public,
so that all four items are ticked green.
Then the above answer from @NicolasBrauer worked for me.
That is, disabling access control when you XLaunch, and:
export DISPLAY=$(awk '/nameserver / {print $2; exit}' /etc/resolv.conf 2>/dev/null):0
export LIBGL_ALWAYS_INDIRECT=1
I came up with a solution using VcXsrv on Windows 10, as others pointed out. It also works on Windows 11.
XServer Windows - WSL1 & WSL2:
Install X-Server Windows
https://sourceforge.net/projects/vcxsrv/
Set Display forward in WSL Distro
Configure Display:
If you're running WSL1:
export LIBGL_ALWAYS_INDIRECT=1
export DISPLAY=localhost:0
If you're running WSL2:
export LIBGL_ALWAYS_INDIRECT=1
export DISPLAY=$(awk '/nameserver / {print $2; exit}' /etc/resolv.conf 2>/dev/null):0
(If you have disabled resolv.conf use this definition: https://stackoverflow.com/a/63092879/11473934)
and then (install x11-apps):
sudo apt update
sudo apt install x11-apps
Start XLaunch on Windows
Multiple Windows
Start no client
disable Native opengl
enable Disable access control
Test it
In WSL, run xcalc - a calculator should open on Windows 10
If everything worked
and you want to persist the settings in your WSL distro, store them in your ~/.bashrc:
nano ~/.bashrc
Copy the two lines (from Set Display forward in WSL Distro - Configure Display) to the end and save.
Add it to autostart
Run the XLaunch dialog (see Start XLaunch on Windows)
Save the configuration
Press Windows + R
Enter: shell:startup
Copy the saved configuration *.launch (generated in step 2) to this folder (step 4)
Now the X server will be started at Windows startup.
I’m using it for ROS. Works for me.
My X server isn’t available over the internet, so it's okay to disable access control.
Using /etc/resolv.conf nameserver won't work for me since I disabled resolv.conf generation in /etc/wsl.conf (I have a custom resolv.conf).
Ultimately you want the WSL2 host IP address, which should also be your default route. Here's my ~/.bashrc entry for my Debian WSL2 distro:
export DISPLAY=$(ip route | awk '/^default/{print $3; exit}'):0
How to Setup X11 forwarding in WSL2
This answer assumes that you already have a working XServer and PulseAudio configuration running on your Windows host because you already were using WSL1. (You also may have to add the -ac parameter to the command line to get your XServer of choice to work with WSL2.)
To do this, and to ensure that I get X11 forwarding no matter whether I am using a static IP address or DHCP on the Windows host, or even whether my hostname or network location changes, I add the following to my ~/.bashrc file:
# Get the IP Address of the Windows 10 Host and use it in Environment.
HOST_IP=$(host `hostname` | grep -oP '(\s)\d+(\.\d+){3}' | tail -1 | awk '{ print $NF }' | tr -d '\r')
export LIBGL_ALWAYS_INDIRECT=1
export DISPLAY=$HOST_IP:0.0
export NO_AT_BRIDGE=1
export PULSE_SERVER=tcp:$HOST_IP
After doing the above, no matter what my Hostname or IP address of the Host is, it will be placed in the environment each time a BASH session is started in WSL2. Test it by running firefox from the command line and watch a YouTube video. You should be able to hear the sound as well as see the app itself to watch the video. Test by launching other GUI apps from the command line in addition.
What it does: it uses the host command to pull the IPv4 addresses associated with the hostname, greps out the address strings, takes the last one (the line containing your Windows host IPv4 address), prints it with awk, and trims the trailing carriage return with tr, leaving the address in the variable. This is then used to provide the necessary IP address as a string for the environment variables that allow forwarding of X11 and sound output.
Hopefully it works for you if the other methods don't (as they didn't for me).
Most CLI apps can be run either from the BASH Prompt or from Windows Terminal. If you want to make a shortcut, most CLI apps can be set up like either of the following examples (no need for X11 forwarding in such cases except apps like Links2):
C:\Windows\System32\wsl.exe -e htop
C:\Windows\System32\wsl.exe lynx
If you want to create desktop shortcuts for Linux GUI apps, unless you can get the environment variables from your ~/.bashrc file to be used before launching the programs, you will have to create shortcuts using the following template, and put the program name in place of {yourprogram}:
C:\Windows\System32\wsl.exe LIBGL_ALWAYS_INDIRECT=Yes IP=$(host `hostname` | grep -oP '(\s)\d+(\.\d+){3}' | tail -1 | awk '{ print $NF }' | tr -d '\r') DISPLAY=$IP:0.0 PULSE_SERVER=tcp:$IP {yourprogram}
You do not have to place the full command line for many programs. For Perl-based or Python-based programs, you sometimes will have to add the path for Perl and Python, as well as your program's full path, to run such GUI programs in Linux using WSL2. For one of my Perl programs, I have to do it this way:
C:\Windows\System32\wsl.exe IP=$(host `hostname` | grep -oP '(\s)\d+(\.\d+){3}' | tail -1 | awk '{ print $NF }' | tr -d '\r') ; export LIBGL_ALWAYS_INDIRECT=Yes ; export DISPLAY=$IP:0.0 ; cd /mnt/c/Users/{yourusername}/Desktop ; /usr/bin/perl ~/wget-gui.pl
You may have to experiment a bit to get some apps working properly. For example, you might need to dbus-launch an app, and will need to add that command to the shortcut just before the program name.
C:\Windows\System32\wsl.exe LIBGL_ALWAYS_INDIRECT=Yes IP=$(host `hostname` | grep -oP '(\s)\d+(\.\d+){3}' | tail -1 | awk '{ print $NF }' | tr -d '\r') DISPLAY=$IP:0.0 PULSE_SERVER=tcp:$IP dbus-launch --exit-with-session gedit
And you might have to use a shorter variable name in some circumstances. Some apps just won't work well, if at all, but this situation is improving over time. Also, don't try to run the above from a Windows Command Prompt or from PowerShell. It will throw errors about 'grep' not being recognized as an internal or external command, etc.
Following is a screenshot of a few Linux GUI apps running on my Windows 10 system, with working X11 forwarding on WSL2.
Copied my answer from this github issue.
The idea is to use the ability to communicate over stdio.
Prerequisite
Just so we can use socat on the Windows host, you need a distribution running WSL1. I am sure you could do this in PowerShell, but I didn't have time to research it. Maybe someone can write a stdio->TCP redirector in PowerShell; then we wouldn't need to have two WSL distros.
How to forward X-server connection
Have your favorite X server running on Windows. By default it will listen on port 6000.
In the WSL2 distro, run the following command in the background (ubuntu is the name of the WSL1 distro with socat installed):
mkdir -p /tmp/.X11-unix/
socat UNIX-LISTEN:/tmp/.X11-unix/X0,fork EXEC:"/mnt/c/Windows/System32/wsl.exe -d Ubuntu socat - TCP\:localhost\:6000"
Basically this sets up a tunnel from the normal X unix domain socket into the host's port 6000.
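A quick way to check that the tunnel is alive from the WSL2 side (assuming xdpyinfo from the x11-utils package is installed):
DISPLAY=:0 xdpyinfo | head -n 3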
How to forward any TCP connection back to host
Let's assume there is a tcp service running at port 5555 on Windows. In the WSL2 distro, run the following command in the background (ubuntu is the name of the WSL1 distro with socat installed):
socat TCP-LISTEN:5555,fork EXEC:"/mnt/c/Windows/System32/wsl.exe -d ubuntu socat - TCP\:localhost\:5555"
How to forward any TCP connection from host into WSL2
This is simply doing the same thing, but in the opposite direction. You can run the following in your WSL1 distro:
socat TCP-LISTEN:5555,fork EXEC:"/mnt/c/Windows/System32/wsl.exe -d ubuntuwsl2 socat - TCP\:localhost\:5555"
Performance
On my PC, it can handle up to 150MB/s of data so it's not the fastest but fast enough for most applications.
For those who may work with simulation engines such as ROS/Gazebo, Unity and so on, another configuration is needed.
Add these to ~/.bashrc:
export DISPLAY=$(awk '/nameserver / {print $2; exit}' /etc/resolv.conf 2>/dev/null):0
export LIBGL_ALWAYS_INDIRECT=0
Be sure to enable both Public Access and Private Access for your X11 server in Windows. Also disable any access control your X11 server supports.
If you use VcXsrv, uncheck Native opengl in the final configuration.
Alternative good X11 servers with fewer difficulties are X410 and MobaXterm. For some details about this configuration refer here and here.
I don't know if that's specific to my configuration but these solutions don't work on my computer. They return the address 192.168.0.254 which is my gateway and not my host computer.
To make it work I had to use the following on my Ubuntu/WSL2 :
export DISPLAY="`ip -4 address | grep -A1 eth0 | grep inet | cut -d' ' -f6 | cut -d/ -f1`:0"
You can connect to the X server without disabling access control on the server. You use xauth on the server to generate a cookie, then load it into Linux with xauth on the Linux side. You can get the server IP from /etc/resolv.conf. The following is in my .bashrc:
k=$('/mnt/c/Program Files/VcXsrv/xauth.exe' -f 'C:\Users\xxx\Documents\scratch.xauth' -i -n -q 2>/dev/null <<EOF
generate localhost:0 . trusted timeout 604800
list
quit
EOF
)
if [ -n "$k" ]
then
export DISPLAY=$(sed '/^nameserver/ {s/^nameserver\s\s*\([0-9][0-9.]*\)[^0-9.]*$/\1/;p;};d' /etc/resolv.conf):0
xauth -q add $DISPLAY . ${k##* }
export LIBGL_ALWAYS_INDIRECT=true
fi
unset k
Windows 11, and Windows 10 22H2 (build 2311) and later, include WSLg. It just works™ 🎉
Drivers for vGPU (Intel AMD Nvidia) are recommended.
The "System Information" App will tell you your current build number.
Note: WSL1 is not compatible with WSLg. New WSL2 instances will just work™.
Existing WSL2 systems will need to be "updated":
In administrative PowerShell: wsl --update
wsl --shutdown to force a restart of the WSL
Don't forget to remove any other modifications to DISPLAY that you may have made.
I used the following bash to set display:
export DISPLAY=$(powershell.exe -c ipconfig | grep -A4 WSL | tail -1 | awk '{ print $NF }' | tr -d '\r'):0
The solution from https://github.com/microsoft/WSL/issues/4793#issuecomment-588321333 uses VcXsrv as the X-server, and it is where I'm getting this answer (slightly edited for readability). Note that the original is being updated by its author, so don't forget to re-check.
To make it work:
On Windows, with the following, change E:\VcXsrv to where your installation is, and save it as xxx.bat in your Windows startup folder, e.g., C:\Users\Me\AppData\Roaming\Microsoft\Windows\Start Menu\Programs\Startup, and you can make it run at boot if you like:
@ECHO OFF
REM Start WSL once to create WSL network interface
wsl exit
REM Find IP for WSL network interface
SET WSL_IF_IP=
CALL :GetIp "vEthernet (WSL)" WSL_IF_IP
ECHO WSL_IF_IP=%WSL_IF_IP%
setx "WSL_IF_IP" "%WSL_IF_IP%"
setx "WSLENV" "WSL_IF_IP/u"
REM Change E:\VcXsrv to your VcXsrv installation folder
START /D "E:\VcXsrv" /B vcxsrv.exe -multiwindow -clipboard -nowgl -ac -displayfd 720
GOTO :EOF
:GetIp ( aInterface , aIp )
(
SETLOCAL EnableExtensions EnableDelayedExpansion
FOR /f "tokens=3 delims=: " %%i IN ('netsh interface ip show address "%~1" ^| findstr IP') DO (
SET RET=%%i
)
)
(
ENDLOCAL
SET "%~2=%RET%"
EXIT /B
)
In WSL, edit ~/.bashrc file to add following lines:
export DISPLAY=$WSL_IF_IP:0
unset LIBGL_ALWAYS_INDIRECT
That's all it takes to make WSL2 work automatically. The idea is to get the private LAN IP of the WSL interface on Windows and use an environment variable to pass it to WSL. WSL then uses this LAN IP as DISPLAY for the X server connection.
The clipboard works well, too, with this setup. I tested this with a WSL2 install of Ubuntu 20.04 LTS.
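A quick sanity check from inside the WSL2 shell, assuming xset (from x11-xserver-utils) is installed; it should print the forwarded IP and confirm the X server answers:
echo "WSL_IF_IP=$WSL_IF_IP  DISPLAY=$DISPLAY"
xset q >/dev/null && echo "X server reachable"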
I do not want to mess with public access to the X server and the Windows firewall. My solution is using ssh with X forwarding (works for VirtualBox as well). Additionally, WSL auto-forwards connections from the host to listening sockets in the guest, so I don't care which IP is actually assigned to the guest.
So the steps are these:
Install VcXSrv. Run it with all defaults but set Display number to 0 (-1 will choose 0 if no X instances are already running). Do not start any client in it (this gives a benefit that you can start more apps on the same X server instance).
Open WSL and configure ssh server. For me it's as simple as sudo service ssh start. Create a Windows shortcut with command line: wsl sudo service ssh start.
Install Git for Windows. I actually use it only because its version of ssh is capable of going into background with ssh -f. Windows version of ssh is buggy on this feature, otherwise it's suitable without going to background or with ssh -n.
Configure passwordless login from Git-Bash to the guest. ssh user@127.0.0.1 should work at this point, because the host port is forwarded to the guest.
Verify X forwarding works from Git-Bash: DISPLAY=127.0.0.1:0 ssh -Y user@127.0.0.1 xeyes. I think xeyes is installed with each X distribution.
Install file manager or terminal of your choice in WSL. For example, pcmanfm. Create a Windows shortcut: "C:\Program Files\Git\git-bash.exe" -c "DISPLAY=127.0.0.1:0 ssh -Y -f user@127.0.0.1 'bash -l -c pcmanfm >/dev/null 2>&1'". Here bash -l flag helps setting up environment which may or may not be important depending on apps you run.
Of course, I can do the same without git-bash by using VcXSrv built-in ssh client but it requires converting ssh keys to PuTTY format and I had git-bash already installed. Also, with built-in client display reuse did not work for me.
I would rather set up an ssh server in the guest,
install an X11 server like Xming on the host
and connect to localhost via putty with X11 forwarding.
No fiddling with firewall rules, host IP is not required.
I'm not sure why, but none of the above answers worked for me. I'm running on a ROG Zephyrus with AMD and Nvidia graphics, which I'm sure caused issues.
The firewall settings described by whme are important, but the Linux environment variables did not work for me. I had several entries in the config file labeled as nameserver, none of which allowed connections.
I ended up setting them to:
export DISPLAY=$HOSTNAME:0.0
export LIBGL_ALWAYS_INDIRECT=
I'm using VcXsrv as the X server. I also had to set the -nowgl parameter.
2021 answer for Windows 10
Check this answer if getting IP from resolv.conf doesn't work.
Find your Windows IP address using following command in your WSL2 (yes, .exe file inside linux):
ipconfig.exe
Use command below to set display (fill YOUR_IP_ADDRESS with your IP):
export DISPLAY=YOUR_IP_ADDRESS:0
Check if your GUI app works correctly.
Automation can be a little different for each case, but I'll give an example:
ipconfig.exe | grep 'IPv4 Address' | grep '10\.' | cut -d ":" -f 2 | cut -d " " -f 2
Explanation: I found all IPv4 addresses (3 IPs in my case). I know that my IP starts with '10.', so I selected that line using the second grep. Then I processed the whole line to get only the IP.
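Putting it together as a single line for ~/.bashrc (a sketch; the '10\.' filter is specific to my subnet, and tr strips the trailing carriage return from ipconfig.exe output):
export DISPLAY=$(ipconfig.exe | grep 'IPv4 Address' | grep '10\.' | cut -d ":" -f 2 | cut -d " " -f 2 | tr -d '\r'):0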
I found a solution that worked for me:
Set Graphics on WSL2
1. Start ssh service
1.1. Open WSL
1.2. Type: sudo service ssh start
2. Get Windows (WSL net) IP
2.1. Open Powershell
2.2. Type: (ipconfig | Select-String -Pattern 'WSL' -Context 1, 5).Context.PostContext | Select-String -Pattern 'IPv4'
2.3. Get the received IP
3. Set environment variable
3.1. In WSL2 terminal type: export DISPLAY=172.23.64.1:0.0 with the IP of the windows entity (2.3) instead of the place holder
4. Launch Xming
4.1. Open XLaunch and go with the defaults. In "Specify parameter settings", check "No Access Control"
5. Good luck!
Following link:
https://docs.google.com/document/d/1ao3vjbC3lCDc9kvybOT5PbuGhC4_k4g8LCjxX23VX7E
Here are two articles I wrote that walk through setting up X11 for different types of use cases:
Install a Program With a Graphical User Interface in WSL2: This article walks through installing vcxsrv, adding the environment variables to the bashrc configuration file, and programmatically scheduling vcxsrv to launch with command-line parameters at startup. It also covers installing and launching Firefox as a standalone program in WSL2.
Install Ubuntu Desktop with a Graphical User Interface in WSL2 This article walks through installing vcxsrv, dotNet, genie, and the Ubuntu desktop. It covers creating the scripts that exports the environment variables, launches vcxsrv, starts the gnome desktop environment, and creates the shortcut that ties them all together. It also covers running the Ubuntu desktop, preventing a screen lock bug, and installing the Snap Store.
I also experienced hardships in opening X11 GUIs from WSL.
I had a problem detecting the correct IP, and sometimes the X11 server took weird display offsets which appeared to be random in the range 0-17.
I coded the following script to automate this, but it has a few dependencies:
It was tested and run under a CentOS 7 image.
Install x11-apps on your Linux distribution so that `xset` is available.
Install the "timeout" app.
Execute the script with source ./find_display_ip.sh - note the source! You will want to have the DISPLAY environment variable set in your running shell.
Run the script only through "Windows Terminal" or something that incorporates the Windows PATH inside the WSL shell. This didn't used to be the default in the Windows `cmd` prompt, for example.
Obviously make sure your X11 server allows full access ("xhost +" or "X11 remote access" set to full).
Without further ado, this is the script source code:
#!/bin/bash
start_index=$1
start=${start_index:-0}
# check current settings
declare -i stop=0
if [ ! -z "$DISPLAY" ]; then
timeout 1s xset -display $DISPLAY q &> /dev/null;
[[ "$?" -eq 0 ]] && echo "Already Set to $DISPLAY" && stop=1;
fi
# scan displays 0-17
for port in $(seq $start 17);
do
[[ 1 -eq $stop ]] && break;
grp="ipconfig.exe | grep IPv4 | tr -d '\015' | sed 's#.*: \(.*\)\$#\1:${port}.0#;'"
for ipd in $(eval $grp)
do
echo Trying $ipd;
timeout 1s xset -display $ipd q &> /dev/null;
# command was sucessful
[[ "$?" -eq 0 ]] && export DISPLAY=$ipd && echo $ipd was set && stop=1;
##echo "Trying next IP...";
done
done
I found that there is an official document from Ubuntu which is comprehensive, for your reference. As we know, this tip will work on Debian/WSL2 as well.
https://wiki.ubuntu.com/WSL
Thanks to Kennyhyun and the others for their shared answers. All of them work in one way or another on my computer to enable an X11 server for WSL2 hosted on Windows 10. Since WSL2 runs as a VM, it no longer shares the same infrastructure as WSL1, and it did take me some time to work through it.
Let me briefly add how to make an app on WSL2 show up.
Run 'ip route' in the WSL2 terminal.
ip route
default via a.b.c.1 dev eth0
a.b.c.0/20 dev eth0 proto kernel scope link src x.x.x.x
Add the IP address from the "default via" line on dev eth0 (the Windows host side of the link) to DISPLAY:
export DISPLAY=a.b.c.1:0.0
Run the Xming server.
Then you can run the app which is running on the WSL2 Linux side. But for the X11 details, you may need to follow the document from Ubuntu.
I've managed to work with the out-of-the-box VcXsrv firewall configuration (i.e., no need to override/disable any firewall rules) by using the LAN adapter IP of the Windows host. I added the following to my ~/.bash_aliases:
export DISPLAY=$(pwsh.exe -c ipconfig | grep -A 3 lan | grep IPv4 | head -1 | awk '{ print $NF }'):0
where lan is my Connection-specific DNS Suffix (yours may differ, in which case you should replace it in the command line above).
The following workaround works for me:
Set-NetFirewallProfile -Name $(Get-NetConnectionProfile).NetworkCategory -DisabledInterfaceAliases $(Get-NetAdapter | Where-Object Name -like 'WSL').Name
My mistake was that I took the nameserver of my Linux WSL2 instance while my X server runs on Windows. So the DISPLAY variable had to be set to my Windows IPv4 address.
Just type ipconfig in PowerShell or cmd and use the IPv4 Ethernet address.

Scripts launched from udev do not have DISPLAY access anymore?

I have a script that runs from udev when I plug in my external drive. It always worked. But after upgrading from Linux 3.8/Xorg 1.12/Mint 14 (Ubuntu 12.10 compatible) to Linux 3.11/Xorg 1.14/Mint 16 (Ubuntu 13.10 compatible), it doesn't work anymore.
The script still runs, but none of the commands that require the display work. I figured that out by quitting the udev daemon and manually running udevd --debug for verbose output (more below).
This script used to work in Mint 14/12.10:
export DISPLAY=:0
UUID=$1
DEV=$2
notify-send -t 700 "mounting $DEV ($UUID)"
gnome-terminal -t "Backing up home..." -x rsync long line of data
zenity --warning --text="Done."
But not anymore in Mint 16/13.10. In case you are wondering about possible solutions, I gradually added stuff and now it looks like this:
export DISPLAY=:0.0
xhost +local:
xhost +si:localuser:root
xhost +
DISPLAY=:0.0
export DISPLAY=:0.0
UUID=$1
DEV=$2
notify-send -t 700 "mounting $DEV ($UUID)"
gnome-terminal -t "Backing up home..." -x rsync long line of data
zenity --warning --text="Done." --display=:0.0
But it still doesn't work. udevd --debug still shows this:
'(err) 'No protocol specified'
'(err) ''
'(err) '** (gnome-terminal:24171): WARNING **: Could not open X display'
'(err) 'No protocol specified'
'(err) 'Failed to parse arguments: Cannot open display: '
'(err) 'No protocol specified'
'(err) ''
'(err) '** (zenity:24173): WARNING **: Could not open X display'
'(err) 'No protocol specified'
'(err) ''
'(err) '(zenity:24173): Gtk-WARNING **: cannot open display: :0.0'
'(err) 'No protocol specified'
Note that any bash logic works. Echoing test vars to >>/tmp/test.log works. It's just accessing the display that does not work anymore.
This is driving me crazy. What is the correct way to achieve this now?
Update 2013-12-20
So, in the previous Ubuntu, X commands would automatically find their way to the current X-using user.
Now, I seem to need these two things every time:
On the X-using user:
xhost +si:localuser:root
On the root/udev side:
Copy the X-using user's ~/.Xauthority file to /root
This 'feels' like a step back in time. This only works scripted when I log in as the same user every time, so I can copy the .Xauthority file from that user's home when the script executes.
What 'trick' did the old Ubuntu use to have this done auto'magic'ally?
Ok, I'm writing this answer to try and clarify the security model of the X server, as I understand it. I'm not an expert on the subject, so I may have got some (many?) things wrong. Also, many things are different in different distributions, or even different versions of the same distribution, as the OP noted.
There are two main ways to get authorized to connect to the X server:
The xhost way (Host Access): The server maintains a list of hosts, local users, groups, etc. that are allowed to connect to the server.
The xauth way (Cookie based): The server has a list of randomly generated cookies, and anybody showing one of these cookies will be granted access.
Now, the distribution specific stuff...
When the X server is launched by the start-up system, it is usually passed a command line of the form -auth <filename>. This file contains a list of initial cookies to be used for authorization. It is created before the X server is run, using the xauth tool. Then, just after the X server, the login manager is launched, and it is instructed to read the cookie from this same file, so it can connect.
Now, when user rodrigo logs in, it has to be authorized to connect to the server. That is done by the login manager, and it has two options:
It does the equivalent to: xhost +si:localuser:rodrigo.
It generates another cookie, adds it to the server and passes it to the user. This passing can be done in two ways:
It is written in the file $HOME/.Xauthority (home of the new user).
It is written somewhere else (/var/run/gdm/auth-for-rodrigo-xxxx) and the environment variable XAUTHORITY is set to the name of that file.
Also, it can do both things. Some login managers even add the root user to the list of authorized users by default (as if xhost +si:localuser:root).
But note that if you are not authorized to connect to the X server, you cannot add yourself to the list (by running xhost + for example). The reason is the same as why you cannot open a house door from the outside without a key... That's true even if you are root!
Does it mean that the root user cannot connect to the server? Absolutely not! But to get there, first you have to know how the logged-in user is configured to connect to the server. To find out, run as the logged-in user:
$ xhost
It will show a message and the list of authorized users, hosts or groups, if any:
access control enabled, only authorized clients can connect
SI:localuser:rodrigo
Then run:
$ echo $XAUTHORITY
To see where the authorization file is saved. If it is empty, then it will be ~/.Xauthority. Then:
$ xauth list :0
To see the list of your authorized cookies.
Now, if there are any cookies in the server, the root user should be able to connect by making the XAUTHORITY environment variable point to the right cookie file. Note that in many setups, the cookie of the login manager is also kept around. Just look for it!
Another possibility for root access is to modify the Xsession files to add the command xhost +si:localuser:root and get permanent access. The details vary with the particular program used, but for gdm you would simply add an executable script in /etc/gdm/Init/ with the xhost command and it will be run automatically in the next boot.
PS: You can check your root access to the X server with sudo -i, but note that some sudo configurations may keep the DISPLAY, XAUTHORITY or HOME variables and modify the results of the tests.
EXAMPLE: This script should be able to connect you to the X server as root
export DISPLAY=:0
export XAUTHORITY=`ls /var/run/gdm/auth-for-gdm-*/database`
xrandr #just for show
Naturally, the path for the XAUTHORITY variable will depend on what login manager you are using (greeter). You can use the user file (you say it is in /home/redsandro/.Xauthority but I'm not so sure). Or you can use the greeter cookie. To get the greeter cookie you can use the following command:
$ pgrep -a Xorg
Which in my system gives:
408 /usr/bin/Xorg :0 -background none -verbose -auth /var/run/gdm/auth-for-gdm-gDg3Ij/database -seat seat0 -nolisten tcp vt1
So my file is /var/run/gdm/auth-for-gdm-gDg3Ij/database. The gDg3Ij is random and changes every time the server is restarted, that's why the ls ... trick.
The nice thing of using the GDM cookie instead of the user is that it does not depend on the user logged in. It will even work with no user at all!
UPDATE: From your latest comment I see that your X server command is:
/usr/bin/X :0 -audit 0 -auth /var/lib/mdm/:0.Xauth -nolisten tcp vt8
So there is the name of the cookie used to start the login manager. If I'm correct, that should be available all the time, if you are able to read the file. And you are root, so, the following lines should be enough to get you access to the display as root:
export DISPLAY=:0
export XAUTHORITY=/var/lib/mdm/:0.Xauth
zenity --info --text 'Happy New Year'
A quick search turned up the following:
X authentication is based on cookies -- secret little pieces of random
data that only you and the X server know... So, you need to let the
other user in on what your cookie is. One way to do this is as
follows: Before you issue the su or sudo (but after having ssh'ed into
the remote system if you are using ssh), request the cookie for the
current DISPLAY that's connecting to your X server:
$ xauth list $DISPLAY You'll get something like
somehost.somedomain:10 mit-magic-cookie-1
4d22408a71a55b41ccd1657d377923ae
Then, after having done su, tell the new user what the cookie is:
$ xauth add somehost.somedomain:10 MIT-MAGIC-COOKIE-1
4d22408a71a55b41ccd1657d377923ae
(just copy and paste the output of the above 'xauth list' onto 'xauth
add') That's it. Now, you should be able to start any X application.
For reference, here is the origin http://www.linuxquestions.org/questions/linux-newbie-8/xlib-connection-to-0-0-refused-by-server-xlib-no-protocol-specified-152556/
This is not pretty, but I have not seen any other solutions yet. So it's the best one so far.
On the X-using user:
xhost +si:localuser:root
On the root/udev side:
Copy the X-using user's ~/.Xauthority file to /root (* see note below)
Now it works. Try zenity --warning --text=Hooray
This only works when you know which user is going to be logged into X. So it's only acceptable when the computer is being used by a single user with a single user account.
*) Note
This is notable, because I tried the documented ways of xauth merge /home/redsandro/.Xauthority and XAUTHORITY=/home/redsandro/.Xauthority. These documented methods just plain do nothing these days, even if root has permission to read the file. You need to literally copy the whole .Xauthority file instead of just pointing to it.
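In script form, the workaround boils down to something like the following at the top of the udev-launched script (a sketch; redsandro is the example user from above, replace with whoever is logged into X):
export DISPLAY=:0
cp /home/redsandro/.Xauthority /root/.Xauthority
export XAUTHORITY=/root/.Xauthority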
Newer versions of Ubuntu use different display managers, so you have to know which one you are using.
In Rodrigo's post, there is a hint showing how to discover it, using this command:
ls /var/run/gdm/auth-for-gdm-*/database
To check this, list the /var/run directory and use the "pgrep -a Xorg" command.
In Ubuntu 16* it's using sddm, so you can use
ls /var/run/sddm* to export the XAUTHORITY variable.
The script would be like this:
#!/bin/bash
export DISPLAY=:0
export XAUTHORITY=`ls /var/run/sddm*`
HDMI_STATUS="$(cat /sys/class/drm/card0-HDMI-A-1/status)"
USER="your username"
export XAUTHORITY=/home/$USER/.Xauthority
export DISPLAY=:0
if [ "$HDMI_STATUS" = connected ];
then
sudo -u $USER pactl set-card-profile 0 output:hdmi-stereo+input:analog-stereo
else
sudo -u $USER pactl set-card-profile 0 output:analog-stereo+input:analog-stereo
fi
exit 0
then run:
sudo chmod 755 /usr/local/bin/toggle-sound
echo 'ACTION=="change", SUBSYSTEM=="drm", RUN+="/usr/local/bin/toggle-sound"' | sudo tee /etc/udev/rules.d/99-hdmi-sound.rules
sudo udevadm control --reload-rules
I had to use this in Kali Linux 2016 to get it to work:
#!/bin/bash
set -x
xhost local:root
export DISPLAY=:0.0
su root -c 'zenity --notification --text="I am a notification!"'
If calling the script directly from udev doesn't work, why not start a systemd service which calls that script?
Here's my solution:
First is the udev rule that runs media-storage-unplugged.service when a device (or partition) that has ID_PART_ENTRY_UUID is unplugged
/etc/udev/rules.d/storage-unplugged.rules:
ACTION=="remove", KERNEL=="sd[a-z][0-9]", ENV{ID_PART_ENTRY_UUID}=="replace-with-your-uuid", SYMLINK+="storage", RUN+="/usr/bin/systemctl --no-block start media-storage-unplugged.service"
/etc/systemd/system/media-storage-unplugged.service: (service file)
[Unit]
Description=Triggered when storage is unplugged
[Service]
Type=oneshot
ExecStart=/usr/local/bin/storage_unplugged
[Install]
WantedBy=multi-user.target
/usr/local/bin/storage_unplugged (get creative here)
#!/bin/bash
notify-send-to-user "storage unplugged"
exit 0
/usr/local/bin/notify-send-to-user
#!/bin/bash
function ns() {
#Detect the name of the display in use
local display=":$(ls /tmp/.X11-unix/* | sed 's#/tmp/.X11-unix/X##' | head -n 1)"
#Detect the user using such display (NOTE: Didn't work on Arch linux since the "who" command doesn't show which display the user is using)
#local user=$(who | grep '('$display')' | awk '{print $1}' | head -n 1)
#Statically assign user:
local user="user" # Replace with your user
#Detect the id of the user
local uid=$(id -u $user)
sudo -u $user DISPLAY=$display DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/$uid/bus notify-send "$@"
}
ns "$@"
Adapt this method to your needs :)

write a shell script to ssh to a remote machine and execute commands

I have two questions:
There are multiple remote linux machines, and I need to write a shell script which will execute the same set of commands in each machine. (Including some sudo operations). How can this be done using shell scripting?
When ssh'ing to the remote machine, how to handle when it prompts for RSA fingerprint authentication.
The remote machines are VMs created on the fly and I just have their IPs. So, I can't place a script file beforehand in those machines and execute it from my machine.
There are multiple remote linux machines, and I need to write a shell script which will execute the same set of commands in each machine. (Including some sudo operations). How can this be done using shell scripting?
You can do this with ssh, for example:
#!/bin/bash
USERNAME=someUser
HOSTS="host1 host2 host3"
SCRIPT="pwd; ls"
for HOSTNAME in ${HOSTS} ; do
ssh -l ${USERNAME} ${HOSTNAME} "${SCRIPT}"
done
When ssh'ing to the remote machine, how to handle when it prompts for RSA fingerprint authentication.
You can add the StrictHostKeyChecking=no option to ssh:
ssh -o StrictHostKeyChecking=no -l username hostname "pwd; ls"
This will disable the host key check and automatically add the host key to the list of known hosts. If you do not want to have the host added to the known hosts file, add the option -o UserKnownHostsFile=/dev/null.
Note that this disables certain security checks, for example protection against man-in-the-middle attack. It should therefore not be applied in a security sensitive environment.
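If you would rather keep host key verification, a hedged alternative is to collect the keys once with ssh-keyscan before running the loop, so subsequent connections verify against them without prompting (note that ssh-keyscan itself trusts whatever it sees on first contact, so run it from a network you trust):
ssh-keyscan -H host1 host2 host3 >> ~/.ssh/known_hosts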
Install sshpass using apt-get install sshpass, then edit the script and put your Linux machines' IPs, usernames, and passwords in the respective order. After that, run the script. That's it! This script will install VLC on all the systems.
#!/bin/bash
SCRIPT="cd Desktop; pwd; echo -e 'PASSWORD' | sudo -S apt-get install vlc"
HOSTS=("192.168.1.121" "192.168.1.122" "192.168.1.123")
USERNAMES=("username1" "username2" "username3")
PASSWORDS=("password1" "password2" "password3")
for i in ${!HOSTS[*]} ; do
echo ${HOSTS[i]}
SCR=${SCRIPT/PASSWORD/${PASSWORDS[i]}}
sshpass -p ${PASSWORDS[i]} ssh -l ${USERNAMES[i]} ${HOSTS[i]} "${SCR}"
done
This works for me.
Syntax: ssh -i pemfile.pem user_name@ip_address 'command_1; command_2; command_3'
#! /bin/bash
echo "########### connecting to server and run commands in sequence ###########"
ssh -i ~/.ssh/ec2_instance.pem ubuntu@ip_address 'touch a.txt; touch b.txt; sudo systemctl status tomcat.service'
There are a number of ways to handle this.
My favorite way is to install http://pamsshagentauth.sourceforge.net/ on the remote systems and also your own public key. (Figure out a way to get these installed on the VM, somehow you got an entire Unix system installed, what's a couple more files?)
With your ssh agent forwarded, you can now log in to every system without a password.
And even better, that pam module will authenticate for sudo with your ssh key pair so you can run with root (or any other user's) rights as needed.
You don't need to worry about the host key interaction. If the input is not a terminal then ssh will just limit your ability to forward agents and authenticate with passwords.
You should also look into packages like Capistrano. Definitely look around that site; it has an introduction to remote scripting.
Individual script lines might look something like this:
ssh remote-system-name command arguments ... # so, for example,
ssh target.mycorp.net sudo puppet apply
The accepted answer sshes to machines sequentially. In case you want to ssh to multiple machines and run some long-running commands like scp concurrently on them, run the ssh command as a background process.
#!/bin/bash
username="user"
servers=("srv-001" "srv-002" "srv-002" "srv-003");
script="pwd;"
for s in "${servers[#]}"; do
echo "sshing ${username}#${s} to run ${script}"
(ssh ${username}#${s} ${script})& # Run in background
done
wait # If removed, you can run some other script here
If you are able to write Perl code, then you should consider using Net::OpenSSH::Parallel.
You would be able to describe the actions that have to be run in every host in a declarative manner and the module will take care of all the scary details. Running commands through sudo is also supported.
For this kind of task, I repeatedly use Ansible, which allows you to run the same bash scripts coherently across several containers or VMs. Ansible (more precisely, Red Hat) now has an additional web interface, AWX, which is the open-source edition of their commercial Tower.
Ansible: https://www.ansible.com/
AWX:https://github.com/ansible/awx
Ansible Tower: a commercial product; you will probably first explore the free open-source AWX rather than the 15-day free trial of Tower
There are multiple ways to execute commands or scripts on multiple remote Linux machines.
One simple and easy way is via pssh (parallel ssh program).
pssh: is a program for executing ssh in parallel on a number of hosts. It provides features such as sending input to all of the processes, passing a password to ssh, saving the output to files, and timing out.
Example & Usage:
Connect to host1 and host2, and print "hello, world" from each:
pssh -i -H "host1 host2" echo "hello, world"
Run commands via a script on multiple servers:
pssh -h hosts.txt -P -I<./commands.sh
Usage & run a command without checking or saving host keys:
pssh -h hostname_ip.txt -x '-q -o StrictHostKeyChecking=no -o PreferredAuthentications=publickey -o PubkeyAuthentication=yes' -i 'uptime; hostname -f'
If the file hosts.txt has a large number of entries, say 100, then the parallelism option may also be set to 100 to ensure that the commands are run concurrently:
pssh -i -h hosts.txt -p 100 -t 0 sleep 10000
Options:
-I: Read input and sends to each ssh process.
-P: Tells pssh to display output as it arrives.
-h: Reads the hosts file.
-H : [user#]host[:port] for single-host.
-i: Display standard output and standard error as each host completes
-x args: Passes extra SSH command-line arguments
-o option: Can be used to give options in the format used in the configuration file.(/etc/ssh/ssh_config) (~/.ssh/config)
-p parallelism: Use the given number as the maximum number of concurrent connections
-q Quiet mode: Causes most warning and diagnostic messages to be suppressed.
-t: Make connections time out after the given number of seconds. 0 means pssh will not timeout any connections
When ssh'ing to the remote machine, how to handle when it prompts for
RSA fingerprint authentication.
Disable the StrictHostKeyChecking to handle the RSA authentication prompt.
-o StrictHostKeyChecking=no
Source: man pssh
This worked for me. I made a function. Put this in your shell script:
sshcmd(){
ssh $1@$2 $3
}
sshcmd USER HOST COMMAND
If you have multiple machines that you want to run the same command on, you would repeat that line with a semicolon. For example, if you have two machines you would do this:
sshcmd USER HOST COMMAND ; sshcmd USER HOST COMMAND
Replace USER with the user of the computer. Replace HOST with the name of the computer. Replace COMMAND with the command you want to do on the computer.
Hope this helps!
You can follow this approach:
Connect to the remote machine using an Expect script. If your machine doesn't support expect, you can download it. Writing an Expect script is very easy (google to get help on this).
Put all the actions which need to be performed on the remote server in a shell script.
Invoke the remote shell script from the Expect script once the login is successful.

How to send data to local clipboard from a remote SSH session

Borderline ServerFault question, but I'm programming some shell scripts, so I'm trying here first :)
Most *nixes have a command that will let you pipe/redirect output to the local clipboard/pasteboard, and retrieve from same. On OS X these commands are
pbcopy, pbpaste
Is there any way to replicate this functionality while SSHed into another server? That is,
I'm using Computer A.
I open a terminal window
I SSH to Computer B
I run a command on Computer B
The output of Computer B is redirected or automatically copied to Computer A's clipboard.
And yes, I know I could just (shudder) use my mouse to select the text from the command, but I've gotten so used to the workflow of piping output directly to the clipboard that I want the same for my remote sessions.
Code is useful, but general approaches are appreciated as well.
My favorite way is ssh [remote-machine] "cat log.txt" | xclip -selection c. This is most useful when you don't want to (or can't) ssh from remote to local.
Edit: on Cygwin ssh [remote-machine] "cat log.txt" > /dev/clipboard.
Edit: A helpful comment from nbren12:
It is almost always possible to setup a reverse ssh connection using SSH port forwarding. Just add RemoteForward 127.0.0.1:2222 127.0.0.1:22 to the server's entry in your local .ssh/config, and then execute ssh -p 2222 127.0.0.1 on the remote machine, which will then redirect the connection to the local machine. – nbren12
I'm resurrecting this thread because I've been looking for the same kind of solution, and I've found one that works for me. It's a minor modification to a suggestion from OSX Daily.
In my case, I use Terminal on my local OSX machine to connect to a linux server via SSH. Like the OP, I wanted to be able to transfer small bits of text from terminal to my local clipboard, using only the keyboard.
The essence of the solution:
commandThatMakesOutput | ssh desktop pbcopy
When run in an ssh session to a remote computer, this command takes the output of commandThatMakesOutput (e.g. ls, pwd) and pipes the output to the clipboard of the local computer (the name or IP of "desktop"). In other words, it uses nested ssh: you're connected to the remote computer via one ssh session, you execute the command there, and the remote computer connects to your desktop via a different ssh session and puts the text to your clipboard.
It requires your desktop to be configured as an ssh server (which I leave to you and google). It's much easier if you've set up ssh keys to facilitate fast ssh usage, preferably using a per-session passphrase, or whatever your security needs require.
Other examples:
ls | ssh desktopIpAddress pbcopy
pwd | ssh desktopIpAddress pbcopy
For convenience, I've created a bash file to shorten the text required after the pipe:
#!/bin/bash
ssh desktop pbcopy
In my case, I'm using a specially named key.
I saved the script with the file name cb (my mnemonic for ClipBoard). Put the script somewhere in your path, make it executable and voila:
ls | cb
Found a great solution that doesn't require a reverse ssh connection!
You can use xclip on the remote host, along with ssh X11 forwarding & XQuartz on the OSX system.
To set this up:
Install XQuartz (I did this with soloist + pivotal_workstation::xquartz recipe, but you don't have to)
Run XQuartz.app
Open XQuartz Preferences (cmd + ,)
Make sure "Enable Syncing" and "Update Pasteboard when CLIPBOARD changes" are checked
ssh -X remote-host "echo 'hello from remote-host' | xclip -selection clipboard"
Reverse tunnel port on ssh server
All the existing solutions either need:
X11 on the client (if you have it, xclip on the server works great) or
the client and server to be in the same network (which is not the case if you're at work trying to access your home computer).
Here's another way to do it, though you'll need to modify how you ssh into your computer.
I've started using this and it's nowhere near as intimidating as it looks so give it a try.
Client (ssh session startup)
ssh username@server.com -R 2000:localhost:2000
(hint: make this a keybinding so you don't have to type it)
Client (another tab)
nc -l 2000 | pbcopy
Note: if you don't have pbcopy then just tee it to a file.
Server (inside SSH session)
cat some_useful_content.txt | nc localhost 2000
Other notes
Actually, even if you're in the middle of an ssh session, there's a way to start a tunnel, but I don't want to scare people away from what really isn't as bad as it looks. I'll add the details later if I see any interest.
There are various tools to access X11 selections, including xclip and XSel. Note that X11 traditionally has multiple selections, and most programs have some understanding of both the clipboard and primary selection (which are not the same). Emacs can work with the secondary selection too, but that's rare, and nobody really knows what to do with cut buffers...
$ xclip -help
Usage: xclip [OPTION] [FILE]...
Access an X server selection for reading or writing.
-i, -in read text into X selection from standard input or files
(default)
-o, -out prints the selection to standard out (generally for
piping to a file or program)
-l, -loops number of selection requests to wait for before exiting
-d, -display X display to connect to (eg "localhost:0")
-h, -help usage information
-selection selection to access ("primary", "secondary", "clipboard" or "buffer-cut")
-noutf8 don't treat text as utf-8, use old unicode
-version version information
-silent errors only, run in background (default)
-quiet run in foreground, show what's happening
-verbose running commentary
Report bugs to <astrand@lysator.liu.se>
$ xsel -help
Usage: xsel [options]
Manipulate the X selection.
By default the current selection is output and not modified if both
standard input and standard output are terminals (ttys). Otherwise,
the current selection is output if standard output is not a terminal
(tty), and the selection is set from standard input if standard input
is not a terminal (tty). If any input or output options are given then
the program behaves only in the requested mode.
If both input and output is required then the previous selection is
output before being replaced by the contents of standard input.
Input options
-a, --append Append standard input to the selection
-f, --follow Append to selection as standard input grows
-i, --input Read standard input into the selection
Output options
-o, --output Write the selection to standard output
Action options
-c, --clear Clear the selection
-d, --delete Request that the selection be cleared and that
the application owning it delete its contents
Selection options
-p, --primary Operate on the PRIMARY selection (default)
-s, --secondary Operate on the SECONDARY selection
-b, --clipboard Operate on the CLIPBOARD selection
-k, --keep Do not modify the selections, but make the PRIMARY
and SECONDARY selections persist even after the
programs they were selected in exit.
-x, --exchange Exchange the PRIMARY and SECONDARY selections
X options
--display displayname
Specify the connection to the X server
-t ms, --selectionTimeout ms
Specify the timeout in milliseconds within which the
selection must be retrieved. A value of 0 (zero)
specifies no timeout (default)
Miscellaneous options
-l, --logfile Specify file to log errors to when detached.
-n, --nodetach Do not detach from the controlling terminal. Without
this option, xsel will fork to become a background
process in input, exchange and keep modes.
-h, --help Display this help and exit
-v, --verbose Print informative messages
--version Output version information and exit
Please report bugs to <conrad@vergenet.net>.
In short, you should try xclip -i/xclip -o or xclip -i -sel clip/xclip -o -sel clip or xsel -i/xsel -o or xsel -i -b/xsel -o -b, depending on what you want.
If you use iTerm2 on the Mac, there is an easier way. This functionality is built into iTerm2's Shell Integration capabilities via the it2copy command:
Usage: it2copy
Copies to clipboard from standard input
it2copy filename
Copies to clipboard from file
To make it work, choose iTerm2-->Install Shell Integration menu item while logged into the remote host, to install it to your own account. Once that is done, you'll have access to it2copy, as well as a bunch of other aliased commands that provide cool functionality.
The other solutions here are good workarounds but this one is so painless in comparison.
This is my solution based on SSH reverse tunnel, netcat and xclip.
First create script (eg. clipboard-daemon.sh) on your workstation:
#!/bin/bash
HOST=127.0.0.1
PORT=3333
NUM=`netstat -tlpn 2>/dev/null | grep -c " ${HOST}:${PORT} "`
if [ $NUM -gt 0 ]; then
exit
fi
while [ true ]; do
nc -l ${HOST} ${PORT} | xclip -selection clipboard
done
and start it in background.
./clipboard-daemon.sh&
It will start nc, piping its output to xclip, and respawn the process after receiving a portion of data.
Then start ssh connection to remote host:
ssh user@host -R127.0.0.1:3333:127.0.0.1:3333
While logged in on remote box, try this:
echo "this is test" >/dev/tcp/127.0.0.1/3333
then try pasting on your workstation.
You can of course write wrapper script that starts clipboard-daemon.sh first and then ssh session. This is how it works for me. Enjoy.
Allow me to add a solution that if I'm not mistaken was not suggested before.
It does not require the client to be exposed to the internet (no reverse connections), nor does it use any xlibs on the server and is implemented completely using ssh's own capabilities (no 3rd party bins)
It involves:
Opening a connection to the remote host, then creating a fifo file on it and waiting on that fifo in parallel (same actual TCP connection for everything).
Anything you echo to that fifo file ends up in your local clipboard.
When the session is done, remove the fifo file on the server and cleanly terminate the connections together.
The solution utilizes ssh's ControlMaster functionality to use just one TCP connection for everything so it will even support hosts that require a password to login and prompt you for it just once.
Edit: as requested, the code itself:
Paste the following into your bashrc and use sshx host to connect.
On the remote machine echo SOMETHING > ~/clip and hopefully, SOMETHING will end up in the local host's clipboard.
You will need the xclip utility on your local host.
_dt_term_socket_ssh() {
    # Ask the master connection on this control socket to exit.
    ssh -oControlPath="$1" -O exit DUMMY_HOST
}
function sshx {
    local t=$(mktemp -u --tmpdir ssh.sock.XXXXXXXXXX)
    local f="~/clip"
    # Open the master connection in the background ("$@" carries the host and any ssh options).
    ssh -f -oControlMaster=yes -oControlPath="$t" "$@" tail\ -f\ /dev/null || return 1
    # Create the fifo on the remote host if it does not already exist.
    ssh -S"$t" DUMMY_HOST "bash -c 'if ! [ -p $f ]; then mkfifo $f; fi'" \
        || { _dt_term_socket_ssh "$t"; return 1; }
    # In the background, keep reading the fifo and feeding it into the local clipboard.
    (
        set -e
        set -o pipefail
        while true; do
            ssh -S"$t" -tt DUMMY_HOST "cat $f" 2>/dev/null | xclip -selection clipboard
        done &
    )
    # The interactive session itself.
    ssh -S"$t" DUMMY_HOST \
        || { _dt_term_socket_ssh "$t"; return 1; }
    ssh -S"$t" DUMMY_HOST "rm $f"
    _dt_term_socket_ssh "$t"
}
More detailed explanation is on my website:
https://xicod.com/2021/02/09/clipboard-over-ssh.html
The simplest solution of all, if you're on OS X using Terminal and you've been ssh'ing around on a remote server and want to grab the contents of a text file, a log or a CSV, simply:
1) Cmd-K to clear the output of the terminal
2) cat <filename> to display the contents of the file
3) Cmd-S to save the Terminal Output
You'll have to manually remove the first and last lines of the file, but this method is a bit simpler than relying on extra packages, "reverse tunnels", or trying to have a static IP, etc.
This answer builds upon the chosen answer by adding more security.
That answer discussed the general form
<command that makes output> | \
ssh <user A>@<host A> <command that maps stdin to clipboard>
Where security may be lacking is in the ssh permissions allowing <user B> on <host B> to ssh into host A and execute any command.
Of course B to A access may already be gated by an ssh key, and it may even have a password. But another layer of security can restrict the scope of allowable commands that B can execute on A, e.g. so that rm -rf / cannot be called. (This is especially important when the ssh key doesn't have a password.)
Fortunately, ssh has a built-in feature called command restriction or forced command. See ssh.com, or
this serverfault.com question.
The solution below shows the general form solution along with ssh command restriction enforced.
Example Solution with command restriction added
This security-enhanced solution follows the general form - the call from the ssh session on host-B is simply:
cat <file> | ssh <user-A>@<host-A> to-clipboard
The rest of this shows the setup to get that to work.
Setup of ssh command restriction
Suppose the user account on B is user-B, and B has an ssh key id-clip, that has been created in the usual way (ssh-keygen).
Then in user-A's ssh directory there is a file
/home/user-A/.ssh/authorized_keys
that recognizes the key id-clip and allows ssh connection.
Usually the contents of each line of authorized_keys are exactly the public key being authorized, e.g., the contents of id-clip.pub.
However, to enforce command restriction, that public key content is prefixed (on the same line) with the command to be executed.
In our case:
command="/home/user-A/.ssh/allowed-commands.sh id-clip",no-agent-forwarding,no-port-forwarding,no-user-rc,no-x11-forwarding,no-pty <content of file id-clip.pub>
The designated command "/home/user-A/.ssh/allowed-commands.sh id-clip", and only that designated command, is executed whenever the key id-clip is used to initiate an ssh connection to host-A - no matter what command is written on the ssh command line.
The command names a script file, allowed-commands.sh, and the contents of that script file are
#!/bin/bash
#
# You can have only one forced command in ~/.ssh/authorized_keys. Use this
# wrapper to allow several commands.
Id=${1}
case "$SSH_ORIGINAL_COMMAND" in
    "to-clipboard")
        notify-send "ssh to-clipboard, from ${Id}"
        cat | xsel --display :0 -i -b
        ;;
    *)
        echo "Access denied"
        exit 1
        ;;
esac
The original call to ssh on machine B was
... | ssh <user-A>@<host-A> to-clipboard
The string to-clipboard is passed to allowed-commands.sh by the environment variable SSH_ORIGINAL_COMMAND.
In addition, we have passed the name of the key, id-clip, as an argument from the authorized_keys line, which is only matched when id-clip is used.
The line
notify-send "ssh to-clipboard, from ${Id}"
is just a popup messagebox to let you know the clipboard is being written - that's probably a good security feature too. (notify-send works on Ubuntu 18.04, maybe not others).
In the line
cat | xsel --display :0 -i -b
the parameter --display :0 is necessary because the process doesn't have its own X display with a clipboard,
so it must be specified explicitly. The value :0 happens to work on Ubuntu 18.04 with the Wayland display server. On other setups it might not work. For a standard X server this answer might help.
host-A /etc/ssh/sshd_config parameters
Finally a few parameters in /etc/ssh/sshd_config on host A that should be set to ensure permission to connect, and permission to use ssh-key only without password:
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
AllowUsers user-A
To make the sshd server re-read the config:
sudo systemctl restart sshd.service
or
sudo service sshd restart
Conclusion
It's some effort to set up, but other functions besides to-clipboard can be constructed in parallel within the same framework; a hypothetical example follows below.
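For example, a hypothetical from-clipboard branch (not part of the setup above) could be added to the case statement in allowed-commands.sh to let the remote session read host-A's clipboard:
    "from-clipboard")
        notify-send "ssh from-clipboard, from ${Id}"
        xsel --display :0 -o -b
        ;;
On host-B you would then run ssh user-A@host-A from-clipboard to print host-A's clipboard contents.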
Not a one-liner, but requires no extra ssh.
install netcat if necessary
use termbin: cat ~/some_file.txt | nc termbin.com 9999. This will upload the output to the termbin website and print the resulting URL.
visit that URL from your computer to get your output
Of course, do not use it for sensitive content.
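If you prefer to fetch the paste from the shell rather than a browser, something like this should work (the URL is just a placeholder for whatever termbin printed):
curl https://termbin.com/xxxx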
@rhileighalmgren's solution is good, but pbcopy will annoyingly copy the last "\n" character; I use "head" to strip the last character to prevent this:
#!/bin/bash
# strip the trailing newline, then send the rest to pbcopy on the Mac ("desktop" is an ssh host alias)
head -c -1 | ssh desktop pbcopy
My full solution is here : http://taylor.woodstitch.com/linux/copy-local-clipboard-remote-ssh-server/
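Assuming the two-line script above is saved on the remote host as, say, ~/bin/rpbcopy and made executable (the name is arbitrary), usage looks like:
cat some_file.txt | ~/bin/rpbcopy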
The Far Manager Linux port (far2l) supports synchronizing the clipboard between the local and remote host. You just open local far2l, run "ssh somehost" inside it, run remote far2l in that ssh session, and the remote far2l works with your local clipboard.
It supports Linux, *BSD and OS X; I made a special PuTTY build to utilize this functionality from Windows as well.
For anyone googling their way to this:
The best solution in this day and age seems to be lemonade.
Various solutions are also mentioned in the Neovim help text for clipboard-tool.
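If I remember its CLI correctly (check the lemonade README before relying on this sketch), the flow is roughly: run the server on your local machine, forward its port back over ssh, then pipe into lemonade copy on the remote side:
# on the local machine
lemonade server &
# 2489 is, as far as I recall, lemonade's default port
ssh -R 2489:127.0.0.1:2489 user@remotehost
# on the remote host
echo "hello from remote" | lemonade copy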
If you're working with e.g. a pod in a Kubernetes cluster rather than direct SSH, so that there is no way for you to do a file transfer, you could use cat and then save the terminal output as text. For example, in macOS you can do Shell -> Export as text.
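A minimal sketch, assuming kubectl access and a macOS workstation (pod name and file path are placeholders):
kubectl exec my-pod -- cat /var/log/app.log | pbcopy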
