I am changing parameters such as RSAAuthentication, PubkeyAuthentication and PasswordAuthentication (via sudo vim /etc/ssh/sshd_config) to disable SSH password authentication and force login via public key only.
The experiments are adversely affecting many users, who suddenly get "Connection refused" when trying to ssh to the server. I want to avoid this disruption. Is there any workaround to enable public key authentication without touching system files like /etc/ssh/sshd_config?
Sure. Set up an alternative configuration file, and run sshd on another port while you are experimenting:
cp /etc/ssh/sshd_config ~/sshd_config_working
sudo /usr/sbin/sshd -p 2222 -f ~/sshd_config_working
Now you can connect with:
ssh -p 2222 user@localhost
And you can make as many changes as you want until you get it working as desired. At that point, copy your _working config back to the main config file and restart sshd.
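To reduce the risk of locking yourself out, you can also validate the edited file before restarting the main daemon; sshd's -t flag only checks the configuration (and the sanity of the keys) and exits without serving connections:
sudo /usr/sbin/sshd -t -f ~/sshd_config_working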
Alternatively, stop mucking about on a production server and set up a virtual machine or container for testing, where you can modify the sshd configuration as much as you want without affecting anybody.
To work remotely I need to SSH into the main server and then again into the departmental server.
I would like to set up a tunnel using the Sublime Text 3 wbond SFTP package to view and edit files remotely, but I can't seem to find any information about setting up a tunnel. Is this even possible?
The reason I'm interested in this particular package is that I am unable to install any packages locally on the server, so using something like rsub is not possible.
Any other suggestions besides Sublime SFTP are welcome.
I'm not sure the SFTP plugin would let you do this directly.
What I would suggest is using ssh -L to create a tunnel:
ssh -L localhost:random_unused_port:target_server:22 username_for_middle_server@middle_server -nNT
Use the password/identity_file for the middle server
The -nNT options avoid opening an interactive shell on the middle server (-n redirects stdin from /dev/null, -N runs no remote command, -T disables pseudo-terminal allocation).
IMPORTANT: You need to keep the ssh -L command running so keep that shell open.
You can then connect to target_server like this:
ssh username_for_target_server@localhost -p random_port_you_allocated
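For example, with hypothetical values (local port 2200, user alice on the middle server, user bob on the target), in one terminal:
ssh -nNT -L localhost:2200:target_server:22 alice@middle_server
and in a second terminal:
ssh -p 2200 bob@localhost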
Similarly, you can set up the SFTP plugin config file like this:
{
    ...
    "host": "localhost",
    "user": "username_for_target_server",
    "ssh_key_file": "path_to_target_server_key",
    "port": "random_port_you_allocated",
    ...
}
As a side note, always use the same local port when tunneling to the same server; otherwise, with the default ssh configuration, you will be warned about a possible man-in-the-middle attack, because the signature saved in .ssh/known_hosts will not match the previous one. This check can be disabled, but I wouldn't recommend it.
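One way to make the port choice stick is an entry in your local ~/.ssh/config (a sketch; the alias target-via-tunnel and port 2200 are placeholders). HostKeyAlias stores the host key under the real server's name, so the known_hosts entry doesn't collide with other things running on localhost:
Host target-via-tunnel
    HostName localhost
    Port 2200
    User username_for_target_server
    HostKeyAlias target_server
With that in place, ssh target-via-tunnel connects through the tunnel while it is up.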
ssh-agent forwarding can be accomplished with ssh -A ....
Most references I have found state that the local machine must configure ~/.ssh/config to enable AgentForwarding with the following code:
Host <trusted_ip>
    ForwardAgent yes
Host *
    ForwardAgent no
However, with this configuration, I am still able to see my local machine's keys when tunneling into a remote machine with ssh -A user@remote_not_trusted_ip and running ssh-add -l.
From the configuration presented above, I would expect agent forwarding to fail and the local machine's keys not to be listed by ssh-add -l.
Why is the machine @remote_not_trusted_ip able to access the ssh-agent forwarded keys even though the ~/.ssh/config file states the following?
Host *
    ForwardAgent no
How can I prevent ssh-agent from forwarding keys to machines not explicitly defined in ~/.ssh/config?
How can I prevent ssh-agent from forwarding keys to machines not explicitly defined in ~/.ssh/config?
It is the default behavior. If you do not enable it in ~/.ssh/config, it will not be forwarded. But command-line arguments have higher priority and override what is defined in the configuration, as explained in the manual page for ssh_config:
ssh(1) obtains configuration data from the following sources in the following order:
1. command-line options
2. user's configuration file (~/.ssh/config)
3. system-wide configuration file (/etc/ssh/ssh_config)
So, as already said, you just need to pass the correct arguments to ssh.
So back to the questions:
Why is the machine @remote_not_trusted_ip able to access the ssh-agent forwarded keys even though the ~/.ssh/config file states the following?
Host *
    ForwardAgent no
Because the command-line argument -A has higher priority than the configuration files.
How can I prevent ssh-agent from forwarding keys to machines not explicitly defined in the ~/.ssh/config?
Do not use the -A command-line option if you do not want to forward your ssh-agent. Use the -a command-line option instead.
You are using the -A option to connect. man ssh says:
-A Enables forwarding of the authentication agent connection.
You should connect without -A, just using:
ssh user@remote_not_trusted_ip
CLI args will have priority on ssh config file.
By the way, if you want to connect to your trusted IP without forwarding, you can also use:
ssh -a user@trusted_ip
-a Disables forwarding of the authentication agent connection.
This is over a year old, but I encountered the same issue and landed on a config option that works.
I had a problem where Git commands no longer worked after I connected from my home computer to my work computer. I figured out that it was because the home computer's key was being forwarded, and that key was not configured for that GitHub account.
The -a command-line option fixed the problem by not forwarding the authentication agent connection. I also thought that the equivalent ~/.ssh/config option would be this:
ForwardAgent no
When that didn't work I looked for other configuration variables, and finally found that this one worked.
IdentityAgent none
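In ~/.ssh/config form that looks like the following (a sketch; scope it to specific hosts instead of * if you still want an agent elsewhere):
Host *
    IdentityAgent none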
This part of the man-page is crucial:
Since the first obtained value for each parameter is used, more host-specific declarations should be given near the beginning of the file, and general defaults at the end.
Put your Host * block with ForwardAgent yes at the end, and the specific Host entries with ForwardAgent no at the start of the .ssh/config.
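For example (hypothetical host name), this ordering makes the specific rule win over the general default:
Host untrusted.example.com
    ForwardAgent no
Host *
    ForwardAgent yes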
Not an answer to the question, and maybe just semantics:
Why is the machine @remote_not_trusted_ip able to access the ssh-agent forwarded keys even though the ~/.ssh/config file states the following?
My understanding is that authentication keys are never "forwarded" to a remote computer. Rather, ssh-agent forwards authentication challenges from the remote server back to the computer that holds the private key, through whatever chain of remote computers the SSH connection passes.
[SSH] "Could not open a connection to your authentication agent" error
I am trying to add SSH keys to my ssh-agent. I start by launching the agent:
exec ssh-agent bash
Then I check that ssh-agent is running with
ps axu | grep [s]sh
and get the following
root 1562 ... ssh-agent bash
The environment variables are set correctly:
SSH_AGENT_PID=1562
SSH_AUTH_SOCK=/tmp/ssh-699iHAxuK4xX/agent.1561
However, when I try to add the private key using
sudo ssh-add ~/.ssh/peter-key
I get the ssh error
Could not open a connection to your authentication agent.
I have tried the suggestions on Stack Overflow and Server Fault, but nothing has worked.
Note: I am running Linux on one of the free-tier AWS machines with Ubuntu. My instance's security group (temporarily) allows all incoming and outgoing SSH connections from any IP address. Does anyone know what the error could be?
Just use
ssh-add ~/.ssh/peter-key
...not...
sudo ssh-add ~/.ssh/peter-key
Using sudo (optionally/configurably, but typically) clears a number of environment variables, including the ones you just verified were set. (Compare the output of sudo env with plain env to see this effect.)
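A quick way to watch the agent variables disappear (assuming they are set in your current shell):
env | grep SSH_
sudo env | grep SSH_
The second command typically prints nothing, which is exactly why ssh-add cannot find the agent socket under sudo.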
If you must use sudo to read the key, then you can ensure that the necessary environment variable is set on the other side by doing so explicitly yourself:
sudo env "SSH_AUTH_SOCK=$SSH_AUTH_SOCK" ssh-add ~/.ssh/peter-key
However, security-sensitive programs working with UNIX domain sockets can check the ownership and permissions of the peer at the other end of the socket, and refuse to communicate with anything running under a different user account than expected, so this approach may not be future-proof against security features added to ssh-agent.
I have two Amazon EC2 Ubuntu instances. When I connect to one of them, I can do
ssh ubuntu@54.123.4.56
and the shell uses the correct keyfile from my ~/.ssh directory.
I just set up a new instance, and I'm trying to figure out how to replicate that behavior for this new one. It's a minor thing, just driving me nuts. When I log in with:
ssh -i ~/.ssh/mykey.pem ubuntu@54.987.6.54
it works fine, but with just
ssh ubuntu@54.987.6.54
I get:
Permission denied (publickey).
I have no idea how I managed to get it to work this way for the first server, but I'd like to be able to ssh into the second server without the "-i abc.pem" argument. Permissions are 600:
-r-------- 1 mdexter mdexter 1692 Nov 11 20:40 abc.pem
What I have tried: I copied the public key from authorized_keys on the remote server and pasted it into authorized_keys on the local server, with mdexter@172.12.34.56 (private key), because I thought that might be what created the association in the shell between that key and that server.
The only difference I can recall between how I set up the two servers is that for the first, I created a .ppk key in PuTTY so that I could connect through FileZilla for SFTP. But I think SSH is still using the .pem given by Amazon.
How can I tell the shell to always use my .pem key for that server when SSHing into that particular IP? It's trivial, but I'm trying to strengthen my (rudimentary) understanding of public/private keys, and I'm wondering how this plays into that.
You could solve this in three ways:
1. By placing the contents of your ~/.ssh/mykey.pem into ~/.ssh/id_rsa on the machine from which you are ssh'ing into the 2nd instance (ssh tries ~/.ssh/id_rsa by default). Make sure you also change the permissions of ~/.ssh/id_rsa to 600.
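For example (a sketch; back up any existing ~/.ssh/id_rsa first, and note this assumes the .pem file contains an RSA key):
cp ~/.ssh/mykey.pem ~/.ssh/id_rsa
chmod 600 ~/.ssh/id_rsa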
2. Using ssh-agent (ssh-agent will manage the keys for you):
Start ssh-agent
eval `ssh-agent -s`
Add the key to ssh-agent using ssh-add
ssh-add mykey.pem
3. Using the ssh config file:
From the machine you are ssh'ing from, put the following contents in the ~/.ssh/config file (make sure to give this file 600 permissions):
Host host2
    HostName 54.987.6.54
    Port 22
    User ubuntu
    IdentityFile ~/.ssh/mykey.pem
Once you do that, you can ssh like this:
ssh host2
After performing any of the above steps you should be able to ssh into your second instance without specifying the key path.
Note: The second option requires you to add the key with ssh-add every time you log out and log back in; to make that permanent, see this SO question.
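One way to avoid re-running ssh-add after every login (a sketch; AddKeysToAgent requires OpenSSH 7.2 or newer) is to combine options 2 and 3, letting ssh load the key into a running agent the first time it is used:
Host host2
    HostName 54.987.6.54
    User ubuntu
    IdentityFile ~/.ssh/mykey.pem
    AddKeysToAgent yes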
I work at an organization with stringent security requirements, sometimes excessively so. My project team is trying to create an SVN repository, and we are having difficulties setting one up to comply with both our needs and our security requirements.
Our IT department requires us to authenticate ourselves with two-factor authentication. Each developer has an RSA token that must be used to log in to the repository host machines via SSH. The value displayed on the token changes once per minute and each value can be used only once.
The developers require the capability to store passwords. This prevents us from using svn+ssh to log in to the repository. Since the RSA token changes once a minute, we can't store the SSH passwords. Worse, the RSA token would reduce us to one SVN operation each minute. This is flatly unacceptable, especially since we have scripts that chain multiple SVN operations together.
We attempted to compromise by opening an SSH tunnel with port forwarding. We would open a tunnel using ssh user@hostmachine -L 3690:localhost:3690 to forward all SVN requests on our local machine to the secure machine, where an svnserve process was running. This meant we could log in with two-factor authentication and then use a separate SVN username and password (which could be stored) with our utilities.
Unfortunately, we noticed that we didn't need the tunnel: port 3690 was reachable from any computer that could see the host. This is unacceptable to IT, and our sysadmin thinks that svnserve is the problem, so she is wondering if we have to go back to svn+ssh.
Is there any solution that works? Is our sysadmin correct? Is there an option on svnserve that will force it to listen only to traffic from localhost?
use:
svnserve -dr /my/repo --listen-host 127.0.0.1
This way the service will only listen on the loopback interface. When you connect with ssh use:
ssh -L3690:127.0.0.1:3690 user@svnserver.mycompany.com
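With the tunnel up, clients then point their working copies at the forwarded local port; the path below is a placeholder for whatever lives under /my/repo:
svn checkout svn://127.0.0.1/trunk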
also see:
vince@f12 ~ > svnserve --help
usage: svnserve [-d | -i | -t | -X] [options]
Valid options:
-d [--daemon] : daemon mode
-i [--inetd] : inetd mode
-t [--tunnel] : tunnel mode
-X [--listen-once] : listen-once mode (useful for debugging)
-r [--root] ARG : root of directory to serve
-R [--read-only] : force read only, overriding repository config file
--config-file ARG : read configuration from file ARG
--listen-port ARG : listen port
[mode: daemon, listen-once]
--listen-host ARG : listen hostname or IP address
[mode: daemon, listen-once]
-T [--threads] : use threads instead of fork [mode: daemon]
--foreground : run in foreground (useful for debugging)
[mode: daemon]
--log-file ARG : svnserve log file
--pid-file ARG : write server process ID to file ARG
[mode: daemon, listen-once]
--tunnel-user ARG : tunnel username (default is current uid's name)
[mode: tunnel]
-h [--help] : display this help
--version : show program version information
svnserve might have options to only listen on localhost, but this sounds like a firewall configuration issue.
If port 3690 isn't meant to be accessible externally, it should be blocked by the firewall. It shouldn't matter whether svnserve or anything else is listening on that port. svnserve can then continue to listen on 3690, but will only receive connections from localhost, because others are blocked by the firewall.
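For example, a minimal iptables sketch (your distribution's firewall front end, such as ufw or firewalld, may differ) that rejects TCP connections to port 3690 unless they arrive on the loopback interface:
sudo iptables -A INPUT -p tcp --dport 3690 ! -i lo -j REJECT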