SVN With Two-Factor Authentication

I work at an organization with stringent security requirements, sometimes excessively so. My project team is trying to create an SVN repository, and we are having difficulties setting one up to comply with both our needs and our security requirements.
Our IT department requires us to authenticate ourselves with two-factor authentication. Each developer has an RSA token that must be used to log in to the repository host machines via SSH. The value displayed on the token changes once per minute and each value can be used only once.
The developers need to be able to store passwords, which rules out using svn+ssh to log in to the repository: since the RSA token changes once a minute, we can't store the SSH passwords. Worse, the RSA token would reduce us to one SVN operation per minute. This is flatly unacceptable, especially since we have scripts that chain multiple SVN operations together.
We attempted to compromise by opening an SSH tunnel with port forwarding. We would open a tunnel using ssh user@hostmachine -L 3690:localhost:3690 to forward all SVN requests on our local machine to the secure machine, where an svnserve process was running. This meant we could log in with two-factor authentication and then use a separate SVN username and password (which could be stored) with our utilities.
Unfortunately, we noticed that we didn't need the tunnel; port 3690 was open to any computer that could see the host. This is unacceptable to IT, and our sysadmin thinks that svnserve is the problem, so she is wondering if we have to go back to svn+ssh.
Is there any solution that works? Is our sysadmin correct? Is there an option on svnserve that will force it to listen only to traffic from localhost?

Use:
svnserve -dr /my/repo --listen-host 127.0.0.1
This way the service will only listen on the loopback interface. When you connect with SSH, use:
ssh -L3690:127.0.0.1:3690 user@svnserver.mycompany.com
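Once the tunnel is up, the developers point their working copies at the forwarded port on localhost. A sketch, with a hypothetical repository path:
svn checkout svn://localhost/myproject
svn commit -m "this goes through the SSH tunnel to svnserve on the secure host"
The username and password used here are the svnserve credentials, which Subversion can cache, so scripts that chain multiple operations keep working.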
Also see:
vince@f12 ~ > svnserve --help
usage: svnserve [-d | -i | -t | -X] [options]
Valid options:
-d [--daemon] : daemon mode
-i [--inetd] : inetd mode
-t [--tunnel] : tunnel mode
-X [--listen-once] : listen-once mode (useful for debugging)
-r [--root] ARG : root of directory to serve
-R [--read-only] : force read only, overriding repository config file
--config-file ARG : read configuration from file ARG
--listen-port ARG : listen port
[mode: daemon, listen-once]
--listen-host ARG : listen hostname or IP address
[mode: daemon, listen-once]
-T [--threads] : use threads instead of fork [mode: daemon]
--foreground : run in foreground (useful for debugging)
[mode: daemon]
--log-file ARG : svnserve log file
--pid-file ARG : write server process ID to file ARG
[mode: daemon, listen-once]
--tunnel-user ARG : tunnel username (default is current uid's name)
[mode: tunnel]
-h [--help] : display this help
--version : show program version information

svnserve might have options to only listen on localhost, but this sounds like a firewall configuration issue.
If port 3690 isn't meant to be accessible externally, it should be blocked by the firewall. It shouldn't matter whether svnserve or anything else is listening on that port. svnserve can then continue to listen on 3690 but will only receive connections from localhost, because everything else is blocked by the firewall.
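For example, on a Linux host one way to express that block is a single iptables rule (a sketch; adapt it to whatever firewall the host actually uses):
iptables -A INPUT ! -i lo -p tcp --dport 3690 -j DROP
Traffic arriving over the SSH tunnel is unaffected, because the forwarded connections reach svnserve via the loopback interface.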

Related

TigerVNC creating session on loopback IP address

I installed TigerVNC on CentOS 8.3 and tried to run it with the vncserver command, but it gives me this message: "vncserver has been replaced by a systemd unit."
I have also followed the instructions from the file /usr/share/doc/tigervnc/HOWTO.md and created a VNC session. The session is accessible only on the loopback IP of the machine.
Result of the netstat -tulpn command:
tcp 0 0 127.0.0.1:5905 0.0.0.0:* LISTEN 2645/Xvnc
tcp6 0 0 ::1:5905 :::* LISTEN 2645/Xvnc
How can I change the loopback IP of the VNC session to the machine IP?
Minhaj:
I ran into this today. TigerVNC changed its behavior in the 8.x releases. I dug a bit and found it is related to "an upstream decision." In plain English, the project team made a design decision. I personally agree with the change, since it brings greater control and security to VNC than previous versions. This is not to suggest the actual VNC protocol is SSL-enabled; you should still employ best practices like using firewalld to prevent access to the VNC ports and using SSH tunneling to get to the console, etc.
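As a sketch of that firewalld hygiene (5901 is just the example console used in the tasks below), confirm nothing VNC-related is exposed and remove it if it is:
firewall-cmd --list-ports
firewall-cmd --permanent --remove-port=5901/tcp
firewall-cmd --reload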
To get started, you'll need to do a bit of simple configuration work as described in /usr/share/doc/tigervnc/HOWTO.md. Start by reading the instructions in the file.
All tasks must be run with root privileges, so use sudo for all of them.
TASK 1: At the simplest level, begin by opening the file /etc/tigervnc/vncserver.users
Create an entry for each user that will use the service. For example:
:1=hwojteczko
:2=esong
Note the digit preceding each user name. This is the VNC console number that will be assigned to each user. Save the file.
TASK 2: Inspect the /usr/share/xsessions directory to confirm the type of desktop installed on the system. The default desktop is GNOME, but there are others, so be mindful of this.
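For example (the exact .desktop files will vary by install):
ls /usr/share/xsessions/
gnome.desktop  gnome-xorg.desktop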
TASK 3: Next, you'll need to modify the Xvnc options file. Fortunately, there are some commented entries already in place whose comment markers can simply be removed. Open the file /etc/tigervnc/vncserver-config-defaults, uncomment the entries as shown below, and also add the desktop session to the config within the stanza. This will likely not be there already, so it is easy to miss this step. See the example below:
securitytypes=vncauth,tlsvnc
desktop=sandbox
geometry=2000x1200
localhost
alwaysshared
session=gnome
TASK 4: As the user, set a VNC password using vncpasswd. This will be similar to what you are accustomed to with previous versions of TigerVNC, but it WILL NOT start TigerVNC.
IMPORTANT: For the next task, you must make sure that you, or the user, is not logged into a desktop session. For those like me who develop code on Linux, this is an easy way to get tripped up. This is not a concern if you are accessing a remote server.
TASK 5: Start the VNC Service for the correct user session. See below:
systemctl start vncserver@:1
You'll see there is no output to speak of. Use systemctl to check the status. It is best to wait about 10-15 seconds before doing so, to ensure the startup does not fail.
systemctl status vncserver@:1
TASK 6: Now, you can check to see that port 5901 is open with nmap, as in:
nmap -PN localhost
Which should report something like:
PORT STATE SERVICE
22/tcp open ssh
80/tcp open http
111/tcp open rpcbind
631/tcp open ipp
5901/tcp open vnc-1
Now you can SSH to the host and tunnel VNC traffic securely, such as:
ssh hwojteczko@172.16.129.5 -N -L localhost:5901:localhost:5901
TASK 7: When you are done, don't forget to shut down TigerVNC using systemctl, as in:
systemctl stop vncserver@:1
Happy coding......

Sandboxing to allow multiple processes to open the same port

Background
I have a command-line application that I use to connect to a remote device on port 1234. I cannot change the port number, and I do not have access to the source to rebuild this tool. I'm currently working in a lab where all ports except SSH are blocked. To get around this, I create a tunnel, i.e.:
ssh -L 1234:remotehost:1234 sshuser@remotehost
Now, I can just point my CLI tool at localhost:1234 to connect to the desired host.
Problem
This CLI tool needs to run for about an hour straight, and I have about 200 remote hosts to test with it. I would like to parallelize this task. Unfortunately, I can only create a single tunnel on my local machine using port 1234.
Question
Is there a (trivial/simple/automated) way to jail/sandbox my CLI tool so that I can launch 100 instances in parallel (i.e. via a shell script) so that each instance "thinks" it's talking to port 1234? For example, does Docker or KVM provide some sort of anonymous/on-demand compute node feature that I could set up rapidly? I'd rather not have to resort to manually deploying and managing a slew of VirtualBox hosts via Vagrant.
The simple answer is that you can use multiple IP addresses locally. Each local IP address on the client will allow you to create another tunnel. Currently, you are using localhost. But your client also has an IP address. You can prove my point by trying this syntax:
ssh -f -N -L 127.0.0.1:1234:remotehost1:1234 sshuser@remotehost1 # this is the default
ssh -f -N -L <local-IP1>:1234:remotehost2:1234 sshuser@remotehost2 # specifying non-default value <local-IP1>
Now, you just need to figure out how to give your client multiple IP addresses (secondary addresses). Then you can expand this beyond 2 parallel sessions.
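How you add those secondary addresses depends on the OS; one common trick is to use extra loopback addresses (the 127.0.0.2 value below is arbitrary):
sudo ip addr add 127.0.0.2/8 dev lo        # Linux; 127.0.0.0/8 is usually already local
sudo ifconfig lo0 alias 127.0.0.2          # macOS; aliases must be added one at a time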
I've also added -f and -N to your ssh syntax to put ssh into the background (-f) and to not issue any commands.
When using -R tunnels in the past, I've found that I need to enable GatewayPorts on the server (/etc/ssh/sshd_config). In the case of -L, I don't see the need; however, the ssh man page does mention GatewayPorts in connection with -L, so you may need to play around a bit. I just tried this out on my Mac and was able to get it going without any GatewayPorts considerations.
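Putting it together, a driver script for many hosts might look roughly like this; hosts.txt, the port and the mytool invocation are all hypothetical placeholders:
#!/bin/sh
# one loopback alias + one tunnel + one tool instance per remote host
i=2
while read -r host; do
  addr="127.0.0.$i"
  sudo ip addr add "$addr/8" dev lo 2>/dev/null        # ignore "already exists" errors
  ssh -f -N -L "$addr:1234:$host:1234" "sshuser@$host" # background tunnel to this host
  mytool --host "$addr" --port 1234 &                  # hypothetical CLI tool under test
  i=$((i+1))
done < hosts.txt
wait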

How to disable ssh-agent forwarding

ssh-agent forwarding can be accomplished with ssh -A ....
Most references I have found state that the local machine must configure ~/.ssh/config to enable AgentForwarding with the following code:
Host <trusted_ip>
ForwardAgent yes
Host *
ForwardAgent no
However, with this configuration, I am still able to see my local machine's keys when tunneling into a remote machine with ssh -A user@remote_not_trusted_ip and running ssh-add -l.
From the configuration presented above, I would expect that the ssh-agent forwarding would fail and the keys of the local machine would not be listed by ssh-add -l.
Why is the machine @remote_not_trusted_ip able to access the ssh-agent forwarded keys even though the ~/.ssh/config file states the following?
Host *
ForwardAgent no
How can I prevent ssh-agent from forwarding keys to machines not explicitly defined in the ~/.ssh/config?
How can I prevent ssh-agent from forwarding keys to machines not explicitly defined in the ~/.ssh/config?
Not forwarding is the default behavior: if you do not enable it in ~/.ssh/config, the agent will not be forwarded. But command-line arguments have higher priority, so they override what is defined in the configuration, as explained in the manual page for ssh_config:
ssh(1) obtains configuration data from the following sources in the following order:
command-line options
user's configuration file (~/.ssh/config)
system-wide configuration file (/etc/ssh/ssh_config)
So as already said, you just need to provide correct arguments to ssh.
So back to the questions:
Why is the machine @remote_not_trusted_ip able to access the ssh-agent forwarded keys even though the ~/.ssh/config file states the following?
Host *
ForwardAgent no
Because the command-line argument -A has higher priority than the configuration files.
How can I prevent ssh-agent from forwarding keys to machines not explicitly defined in the ~/.ssh/config?
Do not use the -A command-line option if you do not want to forward your ssh-agent. Use the -a command-line option instead.
You are using the -A option to connect. man ssh says:
-A Enables forwarding of the authentication agent connection.
You should connect without -A, just using:
ssh user@remote_not_trusted_ip
Command-line arguments take priority over the ssh config file.
By the way, if you want to connect to your trusted IP without forwarding, you can also use:
ssh -a user@trusted_ip
-a Disables forwarding of the authentication agent connection.
This is over a year old, but I encountered the same issue and landed on a config option that works.
I had a problem where, when I connected from my home computer to my work computer, Git commands no longer worked. I figured out that it was because the forwarded agent was offering the home computer's key, which was not configured for that GitHub account.
The -a command-line option fixed the problem by not forwarding the authentication agent connection. I also thought that the equivalent ~/.ssh/config option would be this:
ForwardAgent no
When that didn't work, I looked for other configuration variables and finally found the one that worked:
IdentityAgent none
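In ~/.ssh/config that would look like a catch-all stanza such as the following (note that IdentityAgent none stops ssh from consulting the agent at all for matching hosts, so hosts you do trust need their own stanza above it):
Host *
    IdentityAgent none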
This part of the man-page is crucial:
Since the first obtained value for each parameter is used, more
host-specific declarations should be given near the beginning of the
file, and general defaults at the end.
Put the specific Host entries with ForwardAgent yes at the start of .ssh/config and your general Host * with ForwardAgent no at the end.
Not an answer to the question, and maybe just semantics:
Why is the machine @remote_not_trusted_ip able to access the ssh-agent forwarded keys even though the ~/.ssh/config file states the following?
My understanding is that authentication keys are never "forwarded" to a remote computer. Rather, agent forwarding relays authentication challenges from the remote server back to the computer that holds the private key, through whatever chain of remote computers the SSH connection is running through.

SSH public key authentication without changing system files

I am changing parameters like RSAAuthentication, PubkeyAuthentication and PasswordAuthentication (via sudo vim /etc/ssh/sshd_config) to disable SSH password authentication and force SSH login via public key only.
The experiments are adversely affecting many users, who suddenly get "Connection refused" while trying to SSH to the server. I want to avoid disrupting them with these experiments. Is there any workaround to enable public key authentication without touching system files like /etc/ssh/sshd_config?
Sure. Set up an alternative configuration file, and run sshd on another port while you are experimenting:
cp sshd_config sshd_config_working
/usr/sbin/sshd -p 2222 -f sshd_config_working
Now you can connect with:
ssh -p 2222 user@localhost
And you can make as many changes as you want until you get it working as desired. At that point, copy your _working config back to the main config file and restart sshd.
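For reference, the key-only setup being tested typically boils down to lines like these in sshd_config_working (standard sshd_config options):
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no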
Alternatively, stop mucking about on a production server and set up a virtual machine or container for testing, where you can modify the sshd configuration as much as you want without affecting anybody.

gitolite: PTY allocation request failed on channel 0

Both Jenkins (the CI server) and my Git repository are hosted on the same server. The Git repo is controlled by gitolite. If I access the repository from outside, for instance from my workstation, I get:
ssh git@arrakis
PTY allocation request failed on channel 0
hello simou, this is git@arrakis running gitolite3 v3.0-12-ge0ed141 on git 1.7.3.4
R W testing
Connection to arrakis closed.
Which is fine, I guess (besides the PTY... warning).
Now back to the server, I'd like jenkins to be able to connect to my git repository as well.
jenkins@arrakis:~> ssh git@arrakis
gitolite: PTY allocation request failed on channel 0
Logging onto arrakis as user git (the gitolite user):
git@arrakis:~> cat ~git/.ssh/authorized_keys
command="/home/git/gitServer/gitolite/src/gitolite-shell jenkins",no-port-forwarding,no-x11-forwarding,no-agent-forwarding,no-pty ssh-rsa <PUBLIC-KEY> jenkins@arrakis
The "no-pty" entry made me suspicious, so I removed it from authorized_keys and tried again:
jenkins@arrakis:~> ssh git@arrakis
hello jenkins, this is git@arrakis running gitolite3 v3.0-12-ge0ed141 on git 1.7.3.4
R W testing
Connection to arrakis closed.
This solves my issue at this point, but I'm not sure about the consequences of removing "no-pty".
And why does it only affect the local access, since the remote access doesn't seem to be affected at all?
openSUSE 11.4 (x86_64)
VERSION = 11.4
CODENAME = Celadon
The difference in behavior between your workstation and your server is likely due to using different versions of the OpenSSH client (ssh) on each system (not remote versus local). The client will request a pty from the server unless -T is given, or the RequestTTY configuration option is set to no (the latter was first available in OpenSSH 5.9). The difference in behavior arises in how the client deals with having this request denied by the server (e.g. because no-pty is given in the applicable authorized_keys entry):
Before OpenSSH 5.6:
the client will display the “PTY allocation request failed” message, and
continue in “no pty” mode
In OpenSSH 5.6-5.8:
the client will display the “PTY allocation request failed” message, and
abort the connection
In OpenSSH 5.9 (and later):
the client will display the “PTY allocation request failed” message, and
if -t was not given, and RequestTTY is auto (the default), then
continue in “no pty” mode
else (-t given, or the RequestTTY configuration option is yes or force)
abort the connection
Since your server’s ssh appears to abort when its pty allocation request is rejected, it is probably running OpenSSH 5.6-5.8 (at least for the client binary). Likewise, since your workstation’s ssh shows the warning, but continues, it is probably running an OpenSSH from before 5.6, or one that is 5.9-or-later. You can check your versions with ssh -V.
You can prevent the difference in behavior by always giving the -T option, so that the client (any version) never requests a pty from the server:
ssh -T git@YourServer
During actual Git access, the client never tries to allocate a pty because Git will give the client a specific command to run (e.g. ssh server git-upload-pack path/to/repository) instead of requesting an “interactive” session (e.g. ssh server). In other words, no-pty should not have been causing problems for actual Git access; it only affected your authentication testing (depending on which version of the OpenSSH client you were running) because the lack of a command argument causes an implicit pty allocation request (for an “interactive” session).
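If you would rather not remember -T, clients from OpenSSH 5.9 onward (per the version notes above) accept the same thing as a per-host option in ~/.ssh/config; the host alias here is just the one from the question:
Host arrakis
    RequestTTY no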
From the OpenSSH 5.6 release announcement:
Kill channel when pty allocation requests fail. Fixed stuck client
if the server refuses pty allocation (bz#1698)
bz#1698 seems to be a reference to a report logged in the “Portable OpenSSH” Bugzilla.
From the check-in message of OpenSSH clientloop.c rev 1.234:
improve our behaviour when TTY allocation fails: if we are in
RequestTTY=auto mode (the default), then do not treat at TTY
allocation error as fatal but rather just restore the local TTY
to cooked mode and continue. This is more graceful on devices that
never allocate TTYs.
If RequestTTY is set to "yes" or "force", then failure to allocate
a TTY is fatal.
To know why it affects only the local access, you would need to debug it, as in this article:
ssh -vvv git@arrakis
If your /etc/ssh/sshd_config SSH daemon config file contains the (un-commented) line SyslogFacility AUTHPRIV, you can have a look at your SSH logs in /var/log/secure.
That being said, check out GitoliteV3: I don't think it uses no-pty in the current setup.
Besides Chris Johnsen's very complete answer, note that explicitly giving the info command will not show the PTY warning:
ssh git@arrakis info
In that case SSH considers that this is not an interactive session and will not request a TTY.
