Passwordless SSH connection from Windows to Linux

How can I create an ssh key from Windows and install it on a Linux host using OpenSSH to log in without a password for each connection?

CREATE AND INSTALL SSH KEY
First of all, we need to create a new key on the Windows PC (where we start the connection) using:
ssh-keygen -t rsa
Either keep the default path or note where you saved the key; it will be used in the next command.
Press Enter two more times to skip the passphrase (if you don't want one).
After that, if you haven't changed the default path, the key will be created at {USERPROFILE}\.ssh\id_rsa.pub.
Normally you would use the command ssh-copy-id to install the key on the remote host, but unfortunately this command is not available on Windows, so we have to install the key manually with this command:
type $env:USERPROFILE\.ssh\id_rsa.pub | ssh {REMOTE_HOST} "cat >> .ssh/authorized_keys"
or if your key is not in the default path:
type {RSA_KEY_PATH} | ssh {REMOTE_HOST} "cat >> .ssh/authorized_keys"
and replace the {RSA_KEY_PATH} with your RSA path.
Replace {REMOTE_HOST} with the remote host IP/name (like pi@192.168.0.1), run the command, enter the password if prompted, and the work is done!
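To check that the key was installed correctly, you can attempt a connection with password prompts disabled; if the key works, the command runs without asking for anything (host name as in the example above):
ssh -o BatchMode=yes pi@192.168.0.1 "echo key-based login works"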
IMPORTANT!
SETTING UP .ssh FOLDER
If the ~/.ssh folder does not exist on your remote host, you need to set it up yourself. This is usually handled by ssh-copy-id, but we can't use that from Windows!
Connect to the remote host over ssh and create the .ssh directory and the authorized_keys file for the first time:
ssh {REMOTE_HOST}
Create the .ssh directory:
mkdir ~/.ssh
Set the right permissions:
chmod 700 ~/.ssh
Create the authorized_keys file:
touch ~/.ssh/authorized_keys
Set the right permissions:
chmod 600 ~/.ssh/authorized_keys
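Alternatively, the same setup can be done in a single remote command from the Windows PC, using the same {REMOTE_HOST} placeholder as above:
ssh {REMOTE_HOST} "mkdir -p ~/.ssh && chmod 700 ~/.ssh && touch ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys"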
NOTE
authorized_keys is a file, not a folder. If you create it with mkdir, passwordless SSH will not work, and if you debug sshd on the host you will see an error similar to:
~/.ssh/authorized_keys is not a key file.
ADD YOUR SSH KEY ON YOUR AGENT
Run these two commands on your Windows PC to add the newly created key in your cmd/PowerShell session:
ssh-agent $SHELL
ssh-add
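Note that ssh-agent $SHELL is the Unix-style invocation. If it doesn't work in your PowerShell, a common alternative (assuming the built-in Windows OpenSSH ssh-agent service is installed; enabling it may require an elevated prompt) is:
Set-Service ssh-agent -StartupType Manual
Start-Service ssh-agent
ssh-add $env:USERPROFILE\.ssh\id_rsa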

Related

Unable to connect via ssh with public key authentication method

On my Windows 10 machine, I am running into the problem of not being able to connect to my Vagrant virtual machine via ssh with the public key authentication method from Git Bash, using a command such as
$ ssh -v lauser@127.0.0.1 -p 2222 -i ~/.ssh/id_rsa
I would be prompted for a password, as if the public key I copied into the ~/.ssh/authorized_keys file inside the VM were not seen. Meanwhile, the password authentication method works, as does 'vagrant ssh'.
I have made sure to
create the key pair locally, create a .ssh directory on the remote, and add the public key string to the remote's .ssh/authorized_keys file; both the .ssh directory and the .ssh/authorized_keys file are owned by the user (lauser), and set to 700 and 644 respectively
edit the /etc/ssh/sshd_config file on the VM to use
RSAAuthentication yes
PubkeyAuthentication yes
and restarted the sshd server (with 'sudo service ssh restart').
verify that the firewall has been temporarily disabled to eliminate any complication.
verify that there is only one VM running; all others are either in 'suspend' or 'halt' mode.
confirm the file type with 'file ~/.ssh/authorized_keys', which returns '~/.ssh/authorized_keys: OpenSSH RSA public key'
verify that the keys match by comparing the output of 'sudo cat ~/.ssh/authorized_keys' in the VM with the output of 'cat ~/.ssh/id_rsa.pub' locally.
but still I get Permission denied (publickey) when trying to connect through public key authentication.
It sounds like you've done everything correctly so far. When I run in to this problem, it's usually due to directory permissions on the target user's home directory (~), ~/.ssh or ~/.ssh/authorized_keys.
See this answer on SuperUser.
I faced the same problem when the home directory on the remote machine did not have the correct permissions. Changing them from 777 to 744 helped me.
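For reference, a quick way to inspect and tighten those permissions on the VM (a sketch; 600 on authorized_keys is stricter than the 644 mentioned above, and both are accepted by sshd):
ls -ld ~ ~/.ssh ~/.ssh/authorized_keys
chmod go-w ~
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys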

SSH automatic login invalidation

Let's say I have two Unix machines, shell1 and shell2, and I want to connect automatically, without a password, from user1@shell1 to user2@shell2.
So I execute ssh-copy-id -i /home/user1/.ssh/id_rsa.pub user2@shell2, confirm adding the host, enter user2's password on shell2, and I have automatic ssh login. Good!
But my question is: what happens if user2@shell2 changes their password? Will the automatic login behave as before, or will I have to register user1@shell1 against user2@shell2 again?
SSH public/private key authentication is independent of passwords you set.
The public key stored (in authorized_keys) on the machine you want to connect to is matched against the private key of the user trying to connect.
For example:
#!/bin/bash
#here the user is ubuntu
# create the .ssh directory for the ubuntu user
mkdir -p /home/ubuntu/.ssh
# append the public key to authorized_keys
echo 'ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDBR1l4eRUrSK4YPruFtV0Z5rVYCeZN/aTv69fWScP1PHTRHc0hlK2NL97RmDQq6oCgkUibbBWdKx+jfjlu2UxNhWOTIeW3SIiVxLyRZTWBcwyaUfn2LOQO6DVuUfc+D2crBCRCI61xUHHx8ObamhW8FjWWugbBa2bdP8JcMu4H/jr+nOVfRE99n/FLUdDoiClDQpJOh1YzNwbHNZdkxrEaTuLbPF+81fGcR3OtSvacJBtldCjjtwnuB/eZ1vMzaa0IiW629amKnEhuhM3wCl8OEX8v++c8ifmxEPmuoVqbg2i1ePPVMJ/zbWerhkAFz4xvYhXCJ0DgLx52MtBw3C2f niks@ubuntu' >> /home/ubuntu/.ssh/authorized_keys
# make the user the owner and remove group/other access
chown ubuntu:ubuntu /home/ubuntu/.ssh
chown ubuntu:ubuntu /home/ubuntu/.ssh/authorized_keys
chmod go-rwx /home/ubuntu/.ssh
chmod go-rwx /home/ubuntu/.ssh/authorized_keys
Run this script (with your own public key in place of the example one) and the machine will be ready for ssh connections.
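To convince yourself that the login really is key-based (and will therefore survive a password change), you can force a single connection to use only public key authentication, for example:
ssh -o PreferredAuthentications=publickey -o PasswordAuthentication=no user2@shell2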

How to make a key-based ssh user?

I am new to Ubuntu/Linux. I have to create an ssh user on a remote system, generate its key, and access the system with that key file through the command:
ssh -i key_file user@host
Can anybody tell me how I can do this?
On the system you are trying to connect to, the public key (usually id_rsa.pub or something similar) needs to be added to the authorized_keys file.
If the user is brand new and the authorized_keys file doesn't exist yet, this command will create it for you.
cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
Next just make sure sshd is running on the host and you should be able to connect with the command you posted.
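Note that if authorized_keys already exists, cp will overwrite any keys already stored in it; to preserve existing entries, append instead:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys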
On the remote server:
ssh-keygen
ssh-copy-id user@host
cd .ssh
Make a copy of the file id_rsa and give it to anybody who wants to access this server/system.
On the other system:
ssh -i id_rsa user@host
If you want to connect to another host as user "user", what you need is the public key of the user that is going to open the connection, i.e. the user you are logged in as on your desktop computer or the server you are coming from, not the user you are logging in as on the remote host.
You can check whether the keys for your current user already exist in $HOME/.ssh; there you should find something like "id_rsa" and "id_rsa.pub" (for RSA keys). If they don't exist, create them by calling
ssh-keygen -t rsa
The public key generated that way, id_rsa.pub in this example, has to be put into the file ${HOME of user on remote host}/.ssh/authorized_keys on the target host.
If this file does not exist on the remote host, or if even .ssh does not exist, you have to create them with the following permissions:
.ssh 700
.ssh/authorized_keys 600
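As a sketch, one way to create both with those permissions in two commands (using GNU coreutils' install, assuming it is available; the second command creates an empty file, so don't run it if authorized_keys already holds keys):
install -d -m 700 ~/.ssh
install -m 600 /dev/null ~/.ssh/authorized_keys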
See http://www.openssh.com/faq.html#3.14 for details.
A detailed description of the process can be found here:
https://help.github.com/articles/generating-ssh-keys/

Changing user to root when connected to a linux server and copying files

My script is written in a way that doesn't allow connecting to the server directly as root. It basically copies files from a server to my computer, and it works, but I don't have access to many files because only root can access them. How can I connect to the server as a user and then copy its files by switching to root?
Code I want to change:
sshpass -p "password" scp -q -r username@74.11.11.11:some_directory copy_it/here/
In other words, I want to be able to remotely copy files which are only accessible to root on a remote server, but don't wish to access the remote server via ssh/scp directly as root.
Is it possible through only ssh and not sshpass?
If I understand your question correctly, you want to be able to remotely copy files which are only accessible to root on the remote machine, but you don't wish to (or can't) access the remote machine via ssh/scp directly as root. And a separate question is whether it could be done without sshpass.
(Please understand that the solutions I suggest below have various security implications and you should weigh up the benefits versus potential consequences before deploying them. I can't know your specific usage scenario to tell you if these are a good idea or not.)
When you ssh/scp as a user, you don't have access to the files which are only accessible to root, so you can't copy all of them. So you need to instead "switch to root" once connected in order to copy the files.
"Switching to root" for a command is accomplished by prefixing it with sudo, so the approach would be to remotely execute commands which copy the files via sudo to /tmp on the remote machine, changes their owner to the connected user, and then remotely copy them from /tmp:
ssh username@74.11.11.11 "sudo cp -R some_directory /tmp"
ssh username@74.11.11.11 "sudo chown -R username:username /tmp/some_directory"
scp -q -r username@74.11.11.11:/tmp/some_directory copy_it/here/
ssh username@74.11.11.11 "rm -r /tmp/some_directory"
However, sudo prompts for the user's password, so you'll get a "sudo: no tty present and no askpass program specified" error if you try this. So you need to edit /etc/sudoers on the remote machine to authorize the user to use sudo for the needed commands without a password. Add these lines:
username ALL=NOPASSWD: /bin/cp
username ALL=NOPASSWD: /bin/chown
(Or, if you're cool with the user being able to execute any command via sudo without being prompted for password, you could instead use:)
username ALL=NOPASSWD: ALL
Now the above commands will work and you'll be able to copy your files.
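If you need this regularly, the four steps above can be wrapped into a small local script; the script name below is just illustrative, and it assumes the same user, host and paths as this example:
#!/bin/bash
# copy_as_root.sh (illustrative name): fetch a root-only directory via an unprivileged user
REMOTE=username@74.11.11.11
SRC=some_directory
DEST=copy_it/here/
# copy to /tmp as root and hand ownership to the connecting user (relies on the sudoers entries above)
ssh "$REMOTE" "sudo cp -R $SRC /tmp && sudo chown -R username:username /tmp/$SRC"
scp -q -r "$REMOTE:/tmp/$SRC" "$DEST"
# clean up the temporary copy on the remote machine
ssh "$REMOTE" "rm -r /tmp/$SRC"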
As for avoiding using sshpass, you could instead use a public/private key pair, in which a private key on the local machine unlocks a public key on the remote machine in order to authenticate the user, rather than a password.
To set this up, on your local machine, type ssh-keygen. Accept the default file (/home/username/.ssh/id_rsa). Use an empty passphrase. Then append the file /home/username/.ssh/id_rsa.pub on the local machine to /home/username/.ssh/authorized_keys on the remote machine:
cat /home/username/.ssh/id_rsa.pub | ssh username@74.11.11.11 \
"mkdir -m 0700 -p .ssh && cat - >> .ssh/authorized_keys && \
chmod 0600 .ssh/authorized_keys"
Once you've done this, you'll be able to use ssh or scp from the local machine without password authorization.

authorized_keys not present for new user

I want to setup an ssh key in a machine of Linux running under AWS in EC2 cloud.
For that, I first installed Cygwin, and then I followed these steps:
ssh-keygen -t dsa -f ~/.ssh/<key name> -C "<username of remote server>@<ip>"
cat ~/.ssh/<key name>.pub | ssh <username of remote server>@<ip> "cat >> ~/.ssh/authorized_keys"
Now the 1st statement executes successfully but the 2nd statement shows
bash: /home/<username of server>/.ssh/authorized_keys: No such file exists
Prior to this, I connected to the remote machine as root and created the user that I am specifying in commands 1 and 2 (username).
And I saw that the file is not present on the remote server for the user I created explicitly, but it is present for the root user.
When you create a new user, the ~/.ssh directory is not created by default. You will have to create the ~/.ssh/ directory and ~/.ssh/authorized_keys file yourself.
On your server, check whether ~/.ssh or ~/.ssh/authorized_keys exists. Looking at the error you have, it seems that it does not.
When you create a new Linux instance, you specify a key pair that you want to use. You have the choice of having AWS create a key pair for you (you download its private key), or uploading your own public key.
In your steps, you never reference the key pair you specified when you created the instance. So the 2nd command should be something like:
cat ~/.ssh/<key name>.pub | ssh -i ~/.ssh/<key specified when launching instance> ec2-user@<public ip> ...
ec2-user may be different depending on what AMI you used to create your instance - ubuntu is the default user for ubuntu instances, for example.
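Putting the two points together, a sketch of a corrected second command that both authenticates with the instance key and creates the missing directory and file (placeholders kept from the question; the default user depends on your AMI):
cat ~/.ssh/<key name>.pub | ssh -i ~/.ssh/<key specified when launching instance> ec2-user@<public ip> "mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys"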
