I use these commands to configure SSH on a router (via Java), but I can't seem to get it working (I don't know the error):
router# aaa new-model
router# username cisco password 0 cisco
router#ip domain-name rtp.cisco.com
router#crypto key generate rsa
router#ip ssh time-out 60
router#ip ssh authentication-retries 2
router#line vty 0 4
router#transport input ssh
By default, the RSA key name is composed of the host name and the domain name, and since SSH needs an RSA key in order to work, we need to specify a domain name before generating the RSA key and then enabling SSH. An RSA key name would be something like router.rtp.cisco.com.
You need to set the hostname first with hostname <name> in config mode.
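Putting it together, a minimal sketch of the whole sequence in order, assuming a placeholder hostname R1; the modulus keyword avoids the interactive key-size prompt, which matters when the commands are sent programmatically from Java (exact syntax may vary slightly by IOS version):
router# configure terminal
router(config)# hostname R1
R1(config)# ip domain-name rtp.cisco.com
R1(config)# aaa new-model
R1(config)# username cisco password 0 cisco
R1(config)# crypto key generate rsa modulus 2048
R1(config)# ip ssh time-out 60
R1(config)# ip ssh authentication-retries 2
R1(config)# line vty 0 4
R1(config-line)# transport input ssh
R1(config-line)# end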
I'm using a service which connects to a remote host via SSH. I don't want to store or write SSH keys on that service; I want to pass the keys to the service and have it use them to open the SSH connection to another host.
To connect to the host I used: ssh user@host -i /path/to/key.
How can I pass the key as text rather than as a file?
I tried ssh user@host -i "key-text-example", but it doesn't work like that.
This isn't a literal answer to your question, but the best way to meet your actual need (connecting via SSH to a remote machine through a system you don't trust to store your private key) is to use SSH agent forwarding.
When you pass your private key to a remote system, even transiently, it can be captured; if an attacker is recording everything that goes on on the system with Sysdig, for example, the writes over the FIFO from the process substitution (or the reads done by the SSH client process) will show up plain as day.
Instead of passing the private key to the remote system, agent forwarding sends the request for a signature back from the remote system to your origin machine. (There are even SSH agents for Android, so you can have the request forwarded to your phone -- presumably a device you trust -- such that the private key never leaves it.) Similarly, a hardware device such as a YubiKey can store your private key and perform signature operations on behalf of an SSH client -- including one on a remote machine when agent forwarding is requested.
For the simple case:
local$ [[ $SSH_AUTH_SOCK ]] || eval "$(ssh-agent -s)"
local$ ssh-add # load the key into your local agent
local$ ssh -A host1 # connects to host1 with agent forwarding enabled
host1$ ssh host2 # asks the ssh agent on "local" to authenticate to host2
host2$
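If you do this routinely, you can enable forwarding per host in ~/.ssh/config instead of remembering -A every time (host1 here is just the example hop from above):
Host host1
ForwardAgent yes
Only enable this for hops you trust: anyone with root on the intermediate host can use your forwarded agent while you are connected, although they still cannot read the key itself.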
I am writing a relatively small bash script that is supposed to update DNS records for a server behind a NAT which might change its external IP address. Essentially a free DynDNS using my DNS provider's API.
I am retrieving the server's IP address using a simple query to an external service. But for the sake of security, before pointing my DNS A record at a new arbitrary IP address given to me by an external service, I first need to verify that this is indeed the server's IP address. This check needs to involve a cryptography step, since an active MITM attack could be taking place that simply forwards traffic to the server's real IP address.
So what would be the simplest way (if possible through bash) to verify that this is indeed the server's IP address?
I presume you mean that the bash script is running somewhere other than the server whose IP you need to determine?
The obvious solution would be to connect using ssh with strict host key checking (and a remembered server key) or via SSL with certificate verification (you could use a self-signed certificate). The former is a bit easier to do out of the box.
Assuming that $IP is the server's new external IP address, this works by first acquiring the server's SSH host keys by running ssh-keyscan against localhost and generating a temporary known_hosts file. It then substitutes 127.0.0.1 with the given $IP and initiates an ssh session to the remote IP address using the temporary known_hosts file. If the session is established and key verification succeeds, the command exits cleanly; otherwise it outputs a "Host key verification failed." message. This works even if authentication with the server fails, since host key verification happens before authentication. The script finally checks whether the ssh output includes that error message and prints valid or invalid accordingly.
TMP_KNOWN_HOSTS=$(mktemp)
# Grab the server's own host keys locally and build a temporary known_hosts file
ssh-keyscan 127.0.0.1 > "$TMP_KNOWN_HOSTS"
# Rewrite the entries so they apply to the candidate external IP
sed -i "s/127\.0\.0\.1/$IP/" "$TMP_KNOWN_HOSTS"
# Connect with strict host key checking against the temporary file only
RESPONSE=$(ssh -n -o "UserKnownHostsFile $TMP_KNOWN_HOSTS" -o "StrictHostKeyChecking yes" "$IP" true 2>&1)
if ! [[ $RESPONSE = *"Host key verification failed."* ]]; then
    echo "valid"
else
    echo "invalid"
fi
rm -f "$TMP_KNOWN_HOSTS"
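A sketch of how the check might be wired into the update script, assuming the block above is wrapped in a shell function called check_ip that prints valid or invalid; api.ipify.org and update_dns_record are stand-ins for whatever external IP service and provider API call you actually use:
IP=$(curl -s https://api.ipify.org)   # candidate external IP from a placeholder service
if [[ $(check_ip "$IP") == "valid" ]]; then
    update_dns_record "$IP"   # hypothetical call to your DNS provider's API
fi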
I have to configure a Hadoop cluster. For that, all systems in the cluster must be able to ssh to each other without passwords. For security, I have allowed only key-based ssh (no password authentication). There are 5 systems in the cluster. I have generated a single key pair. How do I configure all the systems to use this key pair so that they can ssh to each other passwordlessly?
I'm assuming you mean Linux machines.
There must be a ~/.ssh directory chmod 700 on each machine under the account that will originate or receive the connections.
The (private) key must be generated without a password.
Don't forget that recent versions of ssh do not accept weak (< 2048-bit) keys by default.
The following must be done to originate a connection.
Your private key must be placed in ~/.ssh/id_rsa or ~/.ssh/id_dsa as appropriate. You may use another name, but then you must pass it with the -i option on the machine originating the request to explicitly indicate the private key.
Your private key must be chmod 600.
Now for allowing a machine to receive a request:
Your public key must be placed in a file called ~/.ssh/authorized_keys under the account that will receive the connections. You may also place other keys that are allowed to connect to this account in here. A particularly tricky thing, if you are in vi and pasting the key into the file from the paste buffer in PuTTY, is that the key starts with "ssh-". If you are not in insert mode, the first "s" will put vi into insert mode and the rest of the key will look just fine, but you'll be missing the "s" at the beginning of the key. It took me days to find that.
I like to chmod 600 ~/.ssh/authorized_keys, but it's usually not required.
Now, you must have the host fingerprint added to the cache. Go to machine A and ssh to machine B. The first time, you will get a query like "Do you want to add . . . to the host key cache?". This will stop your automated process very effectively. You have a few choices, depending on your situation:
a. Manually ssh from each of the 5 machines to the other 4 (20 connections in total) and say "yes".
b. Take the file called "known_hosts" (this is what ssh calls the "cache") and combine entries so that the same host keys can be copied to all machines.
c. Put the host fingerprints in /etc/ssh/ssh_known_hosts.
d. Put the fingerprints in DNS (see man ssh).
e. Just turn it off (NOT RECOMMENDED) by setting StrictHostKeyChecking no in your ssh configuration.
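A minimal sketch of the whole setup with one shared key pair, assuming the account is hadoop and the hosts are named node1 through node5 (both placeholders), and that you can still reach the other nodes somehow for the initial copy:
# On node1, as the hadoop account: generate a key pair with no passphrase
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
# Authorize the key for this account locally
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# Pre-populate known_hosts with every node's fingerprint (option b above)
ssh-keyscan node1 node2 node3 node4 node5 > ~/.ssh/known_hosts
# Push the key pair, authorized_keys and known_hosts to the other nodes
for host in node2 node3 node4 node5; do
    scp ~/.ssh/id_rsa ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys ~/.ssh/known_hosts "$host:.ssh/"
done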
For some history: I have a local VM (VirtualBox) running Debian, and in this VM I have been developing a web application. I log in over SSH.
Today I'm facing a strange issue. I tried to connect to my local VM over SSH and got the following message:
###########################################################
# WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! #
###########################################################
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
SHA256:_______________________________________.
Please contact your system administrator.
Add correct host key in /Users/_____/.ssh/known_hosts to get rid of this message.
Offending RSA key in /Users/______/.ssh/known_hosts:5
RSA host key for 192.168.1.6 has changed and you have requested strict checking.
Host key verification failed.
I understand that the fingerprint of my local VM has changed, and I wonder whether it is possible for the host key fingerprint to change by itself.
I'm trying to understand whether there is a man in the middle.
Thank you for your time :)
Maybe this can help you: https://superuser.com/questions/421997/what-is-a-ssh-key-fingerprint-and-how-is-it-generated
Check whether another machine exists with the same IP (perhaps one with a static IP); you can use "arping" for that.
(I'm posting this as an answer because I can't comment.)
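To rule out a man in the middle, compare the fingerprint the VM actually serves with the one in the warning before trusting it again. A sketch (the host key path may differ if your server uses a different key type):
# On the VM's own console (not over SSH): print its RSA host key fingerprint
ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub
# From your machine: check whether something else answers for that IP on the LAN
arping -c 3 192.168.1.6
# Only if the fingerprints match: remove the stale entry and reconnect
ssh-keygen -R 192.168.1.6
A rebuilt or re-imported VM, or a reinstalled openssh-server, regenerates the host keys, which is the most common benign cause of this warning.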
I have a public/private key pair for SSH connections to a server S, but now, even if I ssh to another device that doesn't need any key authentication, I always get the message:
> ssh user@192.168.0.10
Enter passphrase for key '/home/user/.ssh/id_dsa':
user#192.168.0.10's password:
Usually I hit Enter at the first prompt (leaving it blank) and type the user's password at the second prompt.
But as I want to write some scripts to automate things, the "Enter passphrase for key '/home/user/.ssh/id_dsa': " message bothers me.
Why does it appear for every connection request? Can I do something so it won't ask me that for every connection, and only use the key with server S?
Thanks
Assuming you're using Linux: use ssh-agent to store your keys so you don't have to keep typing the passphrase.
Using ssh-agent to manage your keys
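A minimal sketch, assuming the id_dsa key from your prompt:
eval "$(ssh-agent -s)"   # start an agent for this shell session
ssh-add ~/.ssh/id_dsa    # type the passphrase once; the agent caches the unlocked key
ssh user@192.168.0.10    # no more passphrase prompts while the agent is running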
Based on this ServerFault answer:
ssh -o PubkeyAuthentication=no host.example.org
To avoid typing it every single time, you can add something like this to ~/.ssh/config
Host host.example.org
PubkeyAuthentication no
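To address the "just with the server S" part, you can also limit public key authentication to that one host and switch it off everywhere else. A sketch, with server-s.example.org standing in for server S's real address:
Host server-s.example.org
PubkeyAuthentication yes
IdentityFile ~/.ssh/id_dsa
IdentitiesOnly yes

Host *
PubkeyAuthentication no
ssh uses the first value it finds for each option, so the specific Host block must come before the Host * catch-all.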