Sandboxing to allow multiple processes to open the same port - Linux

Background
I have a command-line application that I use to connect to a remote device on port 1234. I cannot change the port number, and I do not have access to the source to rebuild this tool. I'm currently working in a lab where all ports except SSH are blocked. To get around this, I create a tunnel, i.e.:
ssh -L 1234:remotehost:1234 sshuser@remotehost
Now I can just point my CLI tool at localhost:1234 to reach the desired host.
Problem
This CLI tool needs to run for about an hour straight, and I have about 200 remote hosts to test with it. I would like to parallelize this task. Unfortunately, I can only create a single tunnel on my local machine using port 1234.
Question
Is there a (trivial/simple/automated) way to jail/sandbox my CLI tool so that I can launch 100 instances in parallel (i.e. via a shell script) so that each instance "thinks" it's talking to port 1234? For example, does Docker or KVM provide some sort of anonymous/on-demand compute node feature that I could set up rapidly? I'd rather not have to resort to manually deploying and managing a slew of VirtualBox hosts via vagrant.

The simple answer is that you can use multiple IP addresses locally. Each local IP address on the client will allow you to create another tunnel. Currently, you are using localhost. But your client also has an IP address. You can prove my point by trying this syntax:
ssh -f -N -L 127.0.0.1:1234:remotehost1:1234 sshuser@remotehost1 # this is default
ssh -f -N -L <local-IP1>:1234:remotehost2:1234 sshuser@remotehost2 # specifying non-default value <local-IP1>
Now, you just need to figure out how to give your client multiple IP addresses (secondary addresses). Then you can expand this beyond 2 parallel sessions.
I've also added -f and -N to your ssh syntax, to put ssh into the background (-f) and to not execute a remote command (-N).
When using -R tunnels in the past, I've found that I need to enable GatewayPorts on the server (/etc/ssh/sshd_config). In the case of -L, I don't see the need. However, the ssh man-page explicitly mentions GatewayPorts in connection with the -L option. You may need to play around a bit. I just tried this out on my Mac and I was able to get it going without any GatewayPorts considerations.
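For what it's worth, here is a rough sketch of how this could be scripted on a Linux client. It relies on the fact that the whole 127.0.0.0/8 range is already bound to the loopback interface on Linux, so each tunnel can use its own 127.0.0.x address; hosts.txt is an assumed file with one remote host per line, and key-based auth is assumed so the tunnels can background without prompting.
i=2
while read -r remotehost; do
  # each tunnel binds its own loopback address, all on local port 1234
  ssh -f -N -L "127.0.0.$i:1234:$remotehost:1234" "sshuser@$remotehost"
  i=$((i + 1))
done < hosts.txt
Each CLI instance can then be pointed at its own 127.0.0.x:1234.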

Related

WSL2 use "localhost" to access Windows service

I'm using WSL2 on Windows 10.
My dev stack is using a local webserver (localwp or wamp) on the host OS.
I use WSL2 as the main terminal (SSH, Git, SASS, automation tools, ...).
What I need is a way to connect to my host services (MySql) from the WSL2 system using a server name instead of a random IP address.
It is already possible for the Windows host to connect to WSL2 services with "localhost". Is there a solution to do it the other way?
You should use hostname.local to access Windows from WSL2 because that will use the correct IP. Note that hostname should be replaced with the result of the hostname command run in WSL2.
You can check the IP by running ping $(hostname).local from WSL2.
You also need to add a firewall rule to allow traffic from WSL2 to Windows. In an elevated PowerShell prompt run this:
New-NetFirewallRule -DisplayName "WSL" -Direction Inbound -InterfaceAlias "vEthernet (WSL)" -Action Allow
The command above should allow you to access anything exposed by Windows from WSL, no matter what port. However, bear in mind that any app you launch gets an automatic inbound rule created for it the first time you run it, blocking access from public networks (this is when Windows Firewall prompts you, asking whether the app should be allowed to accept connections from public networks).
If you don't explicitly allow, they will be blocked by default, which also blocks connections from WSL. So you might need to find that inbound rule, and change it from block to allow (or just delete it).
See info here:
https://github.com/microsoft/WSL/issues/4585#issuecomment-610061194
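Once the name resolves and the firewall rule is in place, reaching a host service from WSL2 is just a matter of using that name. A minimal sketch, assuming the mysql client is installed in WSL2, MySQL on Windows listens on its default port 3306, and it is configured to accept connections from addresses other than localhost:
WIN_HOST="$(hostname).local"
ping -c 1 "$WIN_HOST"              # verify the name resolves to the Windows host
mysql -h "$WIN_HOST" -P 3306 -u root -p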
Well, your title and your question body don't seem quite aligned.
The question title says "use localhost", but then in the body you say "using a server name."
Accessing the Windows 10 service via the name "localhost" from WSL2? Let's just go with "no". I can think of a possibility of how to make it work, but it would be complicated.
But I think the second is really what you are looking for, so a couple of options that I can think of for accessing the Windows host services by hostname in WSL2:
First, and hopefully the easiest: WSL2 supports mDNS (WSL1 did not), so you should be able to access the Windows host as {hostname}.local, where {hostname} is the name of the Windows host (literally, in bash, ping $(hostname).local, since WSL2 is assigned the same hostname as the host Windows 10 computer). That works for me. While I don't recall having to do anything special to enable this, this Super User answer seems to indicate that you may have to turn it on manually.
The second option would be to add your Windows host IP to /etc/hosts. If your Windows IP is static, then you could just add it manually to /etc/hosts and be done. If it's dynamic, then you might want to script it. You can retrieve it from inside WSL2 via:
powershell.exe "(Test-Connection -ComputerName (hostname) -Count 1).IPV4Address.IPAddressToString" (and other methods) and then use something like sed to change /etc/hosts.
Add the following code to ~/.bashrc or ~/.zshrc, and then use winhost to access the host IP.
# refresh the winhost entry (writing /etc/hosts requires root, e.g. via sudo)
sed -i -e '/winhost/d' /etc/hosts
win_ip=$(cat /etc/resolv.conf | grep nameserver | awk '{ print $2 }')
win_host="$win_ip winhost"
echo "$win_host" >> /etc/hosts
The last time I was facing this issue,
I downgraded to WSL1, and all the connections started working perfectly.
You can use:
wsl --set-version Ubuntu 1
This is the easiest approach to fix all connection-related issues in WSL2.
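If you want to check which WSL version each distribution is running before or after the switch, you can do so from PowerShell or cmd:
wsl --list --verbose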

How to disable ssh-agent forwarding

ssh-agent forwarding can be accomplished with ssh -A ....
Most references I have found state that the local machine must configure ~/.ssh/config to enable AgentForwarding with the following code:
Host <trusted_ip>
    ForwardAgent yes
Host *
    ForwardAgent no
However, with this configuration, I am still able to see my local machine's keys when tunneling into a remote machine with ssh -A user@remote_not_trusted_ip and running ssh-add -l.
From the configuration presented above, I would expect that the ssh-agent forwarding would fail and the keys of the local machine would not be listed by ssh-add -l.
Why is the machine @remote_not_trusted_ip able to access the ssh-agent forwarded keys even though the ~/.ssh/config file states the following?
Host *
ForwardAgent no
How can I prevent ssh-agent from forwarding keys to machines not explicitly defined in the ~/.ssh/config?
How can I prevent ssh-agent from forwarding keys to machines not explicitly defined in the ~/.ssh/config?
It is the default behavior. If you do not allow it in ~/.ssh/config, it will not be forwarded. But command-line arguments have higher priority, so they override what is defined in the configuration, as explained in the manual page for ssh_config:
ssh(1) obtains configuration data from the following sources in the following order:
command-line options
user's configuration file (~/.ssh/config)
system-wide configuration file (/etc/ssh/ssh_config)
So as already said, you just need to provide correct arguments to ssh.
So back to the questions:
Why is the machine @remote_not_trusted_ip able to access the ssh-agent forwarded keys even though the ~/.ssh/config file states the following?
Host *
ForwardAgent no
Because the command-line argument -A has higher priority than the configuration files.
How can I prevent ssh-agent from forwarding keys to machines not explicitly defined in the ~/.ssh/config?
Do not use the -A command-line option if you do not want to forward your ssh-agent. Use the -a command-line option instead.
You are using the -A option to connect. man ssh says:
-A Enables forwarding of the authentication agent connection.
You should connect without -A, just using:
ssh user@remote_not_trusted_ip
CLI arguments take priority over the ssh config file.
By the way, if you want to connect to your trusted IP without forwarding, you can also use:
ssh -a user@trusted_ip
-a Disables forwarding of the authentication agent connection.
This is over a year old, but I encountered the same issue and landed on a config option that works.
I had a problem where, after connecting from my home computer to my work computer, Git commands no longer worked. I figured out that it was because the connecting home computer's key was being forwarded, and that key was not configured for that GitHub account.
The -a command-line option fixed the problem by not forwarding the authentication agent connection. I also thought that the equivalent ~/.ssh/config option would be this:
ForwardAgent no
When that didn't work, I looked for other configuration variables and finally found one that worked:
IdentityAgent none
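For reference, a minimal sketch of how that option might be scoped to a single host in ~/.ssh/config (work.example.com is just a placeholder name):
Host work.example.com
    IdentityAgent none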
This part of the man-page is crucial:
Since the first obtained value for each parameter is used, more
host-specific declarations should be given near the beginning of the
file, and general defaults at the end.
Put your Host * entry with ForwardAgent yes at the end, and the specific Host entries with ForwardAgent no at the start of your .ssh/config.
Not an answer to the question, and maybe just semantics:
Why is the machine #remote_not_trusted_ip able to access the ssh-agent forwarded keys even though the ~/.ssh/config file states the following?
My understanding is that authentication keys are never "forwarded" to a remote computer. Rather, ssh-agent forwarding passes authentication challenges from the remote server back to the computer that holds the private key, through whatever chain of remote computers the ssh connection runs through.

How to run multiple ssh sessions from a remote server

I have deployed a topology of VMs in a VNET on Azure. There is one jumpbox that is part of the VNET and has access to all of these machines, about 25 in total.
I want to be able to simultaneously run commands and scripts on all the VMs through this jumpbox.
I installed cssh, and it shows the following error:
Can't find DISPLAY -- guessing `unix:0' at /usr/share/perl5/App/ClusterSSH.pm line 1981.
Can't connect to display `unix:0': No such file or directory at /usr/share/perl5/X11/Protocol.pm line 2264.
See this answer here: https://unix.stackexchange.com/a/76777
In essence:
Set up public-key authentication between the jumpbox and your servers.
for host in $(cat hosts.txt); do ssh "$host" "$command" > "output.$host" ; done
pssh is probably the better tool for this job:
https://www.linux.com/news/parallel-ssh-execution-and-single-shell-control-them-all
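A minimal invocation might look like this (the binary is called pssh or parallel-ssh depending on the distribution; azureuser and hosts.txt are placeholders, and key-based auth from the jumpbox to the VMs is assumed):
parallel-ssh -h hosts.txt -l azureuser -t 0 -i 'uname -a'    # -i prints each host's output inline, -t 0 disables the timeout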
cssh should also work; just don't do X11 stuff with it, or make sure you have X11 forwarding enabled. Actually, I'm lying, I have no idea if it works without xterm.

docker: SSH access directly into container

Up to now we use several Linux users:
system_foo@server
system_bar@server
...
We want to put the system users into docker container.
linux user system_foo --> container system_foo
The changes inside the servers are not problem, but remote systems use these users to send us data.
We need to make ssh system_foo@server work. The remote systems can't be changed.
It would be very easy if there were just one system user per Linux host (then we could simply forward port 22 to the container), but there are several.
How can we change from the old scheme to docker containers and keep the service ssh system_foo@server available without changes at the remote site?
Please leave a comment if you don't understand the question. Thank you.
Let's remember, however, that having ssh support in a container is typically an anti-pattern (unless it's your container's only 'concern', but then what would be the point of being able to ssh in?). Refer to http://techblog.constantcontact.com/devops/a-tale-of-three-docker-anti-patterns/ for more information about that anti-pattern.
nsenter could work for you. First ssh to the host and then nsenter to the container.
PID=$(docker inspect --format '{{.State.Pid}}' <container_name_or_ID>)
nsenter --target $PID --mount --uts --ipc --net --pid
source http://jpetazzo.github.io/2014/06/23/docker-ssh-considered-evil/
Judging by the comments, you might be looking for a solution like dockersh. dockersh is used as a login shell, and lets you place every user that logs in to your instance into an isolated container.
This probably won't let you use sftp though.
Note that dockersh includes security warnings in their README, which you'll certainly want to review:
WARNING: Whilst this project tries to make users inside containers
have lowered privileges and drops capabilities to limit users ability
to escalate their privilege level, it is not certain to be completely
secure. Notably when Docker adds user namespace support, this can be
used to further lock down privileges.
Some months ago, I solved a similar problem like this. It's not nice, but it works. However, pub-key auth needs to be used.
Script which gets called via command in .ssh/authorized_keys
#!/usr/bin/python
import os
import sys
import subprocess

# Forward the incoming connection to the sshd inside the container, which listens on port 2222.
cmd = ['ssh', '-p', '2222', 'user@localhost']
if 'SSH_ORIGINAL_COMMAND' in os.environ:
    # pass through the command the remote system originally asked for
    cmd.append(os.environ['SSH_ORIGINAL_COMMAND'])
else:
    cmd.extend(sys.argv[1:])
sys.exit(subprocess.call(cmd))
file system_foo@server: .ssh/authorized_keys
command="/home/modwork/bin/ssh-wrapper.py" ssh-rsa AAAAB3NzaC1yc2EAAAAB...
If the remote system does ssh system_foo@server, the SSH daemon on server executes the command given in .ssh/authorized_keys. That command in turn opens an ssh connection to a different SSH daemon.
In the docker container, an SSH daemon needs to be running and listening on port 2222.
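For completeness, a hedged sketch of how the container's SSH daemon could be exposed on port 2222 of the host (the image name system_foo-image is just a placeholder; binding to 127.0.0.1 keeps the port reachable only by the wrapper on the host):
docker run -d --name system_foo -p 127.0.0.1:2222:22 system_foo-image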

Forwarding X11 without SSH? How do I run local apps on another Pc running X Server?

I am using Cygwin X and Debian. I can forward my X session via SSH, but what happens is that I seem to lose the display forwarding in the X session once in a while (from Cygwin to Linux). So I am guessing that it is an implementation thing with Cygwin, because I never lose the X11 display in the same ssh session when I use Linux to Linux.
This also happens when an X11-forwarded app tries to fork another process. Let's say I run Thunderbird and I click on a URL inside an email. Naturally Thunderbird will try to start the default web browser, but it does not do so with the Cygwin X server, and here is the message I get when the SSH session gives up the display for reasons I am not able to determine:
"Error: cannot open display: localhost:10.0"
The other issue is that since ssh gives up the display variable, I have to restart my ssh session to get it working again, which also kills the other apps that I might be running during the ssh session.
Anyway, after struggling with this for a while, I am thinking that I want to be able to open my apps on another display without using ssh forwarding. I am using it internally and it is almost a closed LAN, so I am not worried about security for now. I just want to be able to run the app on the Linux machine and then see it on the PC that is running Cygwin.
I tried basic DISPLAY variable thing like "export DISPLAY=MY_CYGWIN_PC_IP:0.0" (on Linux Pc) but it does not work.
So I am wondering how I can achieve this. What are the proper settings to achieve what I need?
Your direction was OK. export DISPLAY is what you want. But it is not enough.
On the target, you need to type
xhost +from.where.the.windows.are.coming.com
It gives the X server the permission to allow remote windows from this computer.
Beware, it is not really secure! A possible attacker could not only see the windows shown by you, but even control your mouse/keyboard. But for simple setups, or if you can trust the remote machine and the network between you, it may be OK.
If not, there is a more advanced authorization mechanism based on preshared keys. It is named xauth. Google for xauth.
The Xorg server has an option to disable remote (TCP) connections, and there are distributions (f.e. Ubuntu!) which turn this option on by default. You can test it: if you can telnet to TCP port 6000, remote connections are allowed.
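In case xhost is too permissive, here is a rough sketch of the xauth-based route (linuxhost is a placeholder for the Linux box running the apps; the cookie value is whatever the first command prints):
# On the Cygwin PC, where the X server runs: show the cookie for the display
xauth list "$DISPLAY"
# On the Linux box: register the same cookie for the Cygwin display, then point DISPLAY at it
xauth add MY_CYGWIN_PC_IP:0 MIT-MAGIC-COOKIE-1 <cookie_from_above>
export DISPLAY=MY_CYGWIN_PC_IP:0.0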
If you are using ssh -X, don't. Use ssh -Y
Cygwin XWin server randomly loses connection
Basically, to make this work as in the old times, we need to enable XDMCP on the display manager and use X11; Xwayland, it seems to me, doesn't work either.
sddm doesn't support XDMCP, but gdm does. You need to edit /etc/gdm/custom.conf and add:
[security]
DisallowTCP=false
[xdmcp]
Enable=true
xhost + ip_of_remote_computer
echo $DISPLAY (the number of the display usually :0 or :1)
Afterwards you can verify:
netstat -l | grep xdmcp
udp 0 0 0.0.0.0:xdmcp 0.0.0.0:*
lsof -i :xdmcp
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
gdm 862335 root 12u IPv4 71774686 0t0 UDP *:xdmcp
On the remote host:
export DISPLAY="ip_of_server:0" (use 0 or whatever number echo $DISPLAY showed on the server, as mentioned above)
xclock &
References:
http://www.softpanorama.org/Xwindows/Troubleshooting/can_not_open_display.shtml
https://tldp.org/HOWTO/html_single/XDMCP-HOWTO/
https://wiki.archlinux.org/title/XDMCP
