I am trying to create some users on Amazon Linux 2 using the adduser command, but I am not able to set up the SSH files those users need to log in.
If someone could help me with the steps it would be really helpful.
This is standard Linux. It is actually unrelated to the fact that you are using an Amazon EC2 instance.
In a typical organisation, users should generate their own keypairs and provide the public half to the SysAdmins, who put it in the user's ~/.ssh/authorized_keys file. That way, not even the SysAdmins have the private half of the keypair.
See: How to use ssh-keygen to generate a new SSH key
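On the instance side, installing a user's public key can be sketched like this. Everything here is illustrative: the scratch directory stands in for the new user's home, and the key line is a placeholder for the real public key the user provides.

```shell
# Scratch directory standing in for /home/newuser; on a real instance,
# use the actual home directory and chown everything to the new user.
home=$(mktemp -d)
mkdir -p "$home/.ssh"
chmod 700 "$home/.ssh"
# Placeholder public key supplied by the user (normally their real id_*.pub)
echo "ssh-ed25519 AAAA...placeholder user@laptop" >> "$home/.ssh/authorized_keys"
chmod 600 "$home/.ssh/authorized_keys"
ls -l "$home/.ssh/authorized_keys"
```

The 700/600 permissions matter: sshd refuses keys whose files are readable by other users.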
Alternatively, users can generate a keypair in the Amazon EC2 console. The public key will be retained by AWS for launching future instances, and the private key will be provided as a downloaded .pem file.
They can extract the public key from a private key with:
ssh-keygen -y -f key.pem > key.pub
See: Use RSA private key to generate public key?
Related
I understand that one can connect to a virtual server either with SSH public/private key authentication or merely with a username and password.
If SSH public/private key authentication is used and I decide to give another user access to the server, must I share my private key with him, or can he create his own private key that would still work with the public key on the virtual machine?
For SSH keys, the public key and the private key correspond one to one, so another user's own private key will not work with your public key. That said, first: you can use the same private key to access the VM from different on-premises machines via the same user.
And second: if you want to use the same public key for different users, you need to share the public key with those users on the VM by setting up their authorized_keys files. That means adding an authorized_keys file under each user's /home/user/.ssh path.
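The second case (the same public key for several users) can be scripted; here is a sketch using a scratch directory in place of /home, with placeholder user names and key content. On the real VM you would also chown each .ssh directory to its owner.

```shell
root=$(mktemp -d)                       # stands in for /home on the VM
# Placeholder public key to be shared across users
echo "ssh-rsa AAAA...placeholder shared@key" > "$root/shared.pub"
for u in alice bob; do                  # placeholder user names
  mkdir -p "$root/$u/.ssh"
  chmod 700 "$root/$u/.ssh"
  cat "$root/shared.pub" >> "$root/$u/.ssh/authorized_keys"
  chmod 600 "$root/$u/.ssh/authorized_keys"
done
ls "$root"/*/.ssh/authorized_keys
```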
Update:
Here is a screenshot showing how to add or update a user's public SSH key on an existing VM:
I have to provide a Docker image that will be used in CI, and it should have Git SSH authorization built in.
To achieve this I would need to put the private key of the Git service user inside the image, but I am out of ideas for how to do that without exposing the key content itself.
I understand that if the end user is able to authorize with this key from within the container, it is basically the same as giving him the key itself. I am fine with the user performing any SSH operations from within the image, but I do not want the key to be extractable and usable somewhere else.
So far I have thought about copying an encrypted key into the image and providing the passphrase to the key manager with ssh-add during the build, but of course that won't work, because the passphrase is only cached for the current shell session. Are there any best practices for doing such a thing?
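For reference, the build-time ssh-add approach described above would look roughly like the Dockerfile sketch below (image, paths, and key names are all hypothetical). As the question already notes, it does not achieve the goal: the agent started in one RUN step dies when that step ends, and the key file itself still sits in an image layer, extractable by anyone with the image.

```dockerfile
# Hypothetical sketch of the approach described above -- this does NOT work.
FROM alpine:3.19
RUN apk add --no-cache openssh-client git
# The encrypted private key is baked into a layer, so it can be extracted
COPY encrypted_key /root/.ssh/id_rsa
# The agent and its cached passphrase are gone once this RUN step finishes
RUN eval "$(ssh-agent)" && ssh-add /root/.ssh/id_rsa
```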
Using the vmware-vmx builder, I don't see a way to specify the SSH private key to use. The only options I see are:
ssh_username
ssh_password
SSH access using passwords is disabled and I only allow public-key SSH. Thanks.
You should use ssh_private_key_file.
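A minimal sketch of where that option goes in a Packer template; the builder values other than the SSH-related fields are illustrative placeholders.

```json
{
  "builders": [
    {
      "type": "vmware-vmx",
      "source_path": "/path/to/source.vmx",
      "ssh_username": "packer",
      "ssh_private_key_file": "~/.ssh/packer_key"
    }
  ]
}
```

When ssh_private_key_file is set, ssh_password can be omitted.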
I want to deploy a repository to a server using the SFTP protocol on every push. I have done it successfully for the FTP protocol,
but the problem is that I have a .ppk file, not a password. I don't have any clue how to deploy using a Bitbucket Pipeline with only a hostname and a .ppk file.
Is there anyone who can help me with this?
For those who find this via Google:
There is an area under Settings > SSH keys in your Bitbucket admin that allows you to enter an SSH private/public key pair. Any keypair you enter here will automatically be used to authenticate requests you make via Pipelines. You can only enter one keypair, so you will need to make sure the public key is added to every machine in your deployment pipeline.
Your steps are:
Generate a new SSH keypair on your computer, or use your existing PPK
If you use the PPK you already have, you will need to use PuTTYgen to generate OpenSSH public and private keys from it.
Add the keypair to Bitbucket via the setting menu
Ensure the public key is correctly installed on every machine in your pipeline
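Step 1 above can be done with ssh-keygen (the key file name here is arbitrary). If you are converting an existing PPK instead, PuTTYgen's command-line version can export OpenSSH-format keys; those commands are shown as comments since puttygen may not be installed.

```shell
# Generate a fresh OpenSSH keypair with no passphrase (file name is arbitrary)
ssh-keygen -t ed25519 -N "" -f bitbucket_deploy
ls bitbucket_deploy bitbucket_deploy.pub

# Converting an existing PuTTY key instead (requires the putty-tools package):
# puttygen key.ppk -O private-openssh -o bitbucket_deploy
# puttygen key.ppk -O public-openssh  -o bitbucket_deploy.pub
```

The private file goes into the Bitbucket settings screen; the .pub file goes onto each deployment target.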
You should probably also add the host fingerprint of any services you are pushing to, to prevent new-fingerprint errors. This is below the area where you add SSH keypairs in Bitbucket.
Add a password for your user
$ sudo passwd USERNAME
Enter new UNIX password:
Retype new UNIX password:
Enable password authentication by editing /etc/ssh/sshd_config: change PasswordAuthentication no to PasswordAuthentication yes, then restart the SSH daemon:
sudo /etc/init.d/ssh restart
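The sshd_config edit can be scripted with sed. This sketch demonstrates the substitution on a scratch copy; on the real system you would run the same sed -i against /etc/ssh/sshd_config as root and then restart sshd.

```shell
# Demonstrate the edit on a scratch copy of sshd_config
cfg=$(mktemp)
echo "PasswordAuthentication no" > "$cfg"
sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' "$cfg"
cat "$cfg"
```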
I am due to go on application support shortly, and one of the steps required for that is to verify that I can log in to all of our application nodes.
To log in to an application node, you first need to log in to a jumpbox, and from there log in to the application node.
All login is done via ssh.
ssh user@jumpbox
ssh user@applicationnode
uname -a   -> verified login
This is going to be a neanderthal task that will occur on roughly 1000 nodes.
Consequently I am trying to automate the process.
I have tried using the Fabric library in Python, setting up a gateway to point at the jumpbox, but I still get prompted for a password, which defeats the automation.
Is this automated in the industry by any devops tools?
This has two answers: one for the future, and one for now.
Doing automated remote login correctly
In the future you should generate a public/private key pair and list the public key in the remote user's ~/.ssh/authorized_keys. That will make access password-less, as long as your local SSH uses the right private key to authenticate:
ssh-keygen -t rsa -f deploykey
cp deploykey ~/.ssh/id_rsa
#and append the content of deploykey.pub to the remote user's ~/.ssh/authorized_keys
Doing mass access using password
Now, I don't know which OS you're using, but assuming you're on Linux or *BSD,
http://sourceforge.net/projects/sshpass/ will allow you to non-interactively supply your password.
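Once keys are in place, the per-node check can be scripted with ssh's -J (ProxyJump) option, which tunnels through the jumpbox in one command. This sketch only builds and prints the commands it would run; the hostnames are placeholders, and you would drop the echo-style indirection to actually execute the checks across your ~1000 nodes.

```shell
# Hypothetical host names; in practice, read the node list from a file.
jumpbox="user@jumpbox"
cmds=$(for node in app01 app02 app03; do
  echo ssh -J "$jumpbox" "user@$node" uname -a
done)
echo "$cmds"
```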