EC2 ssh-add identity doesn't "stick" - linux

I'm trying to connect my Atlassian BitBucket account with an AWS EC2 instance.
I followed all the right steps and it's working. The one thing that got me into trouble was launching the ssh-agent with eval ssh-agent -s and then ssh-add mybitbucket.pub to add the identity.
However, the issue is that the identity does not persist: if I log back in, I have to run eval ssh-agent -s and ssh-add mybitbucket.pub again before I can do any git operations.
[root@ip-10-0-1-112 themes]# ssh-add -l
The agent has no identities.
Any recommended workarounds?
Steps taken so far:
Log in to the EC2 instance
sudo su -
ssh-keygen -t rsa
eval ssh-agent -s
ssh-add mybitbucket.pub
Copy the public key into BitBucket's web interface.
Thanks!

If you only need the key while you are SSH'd into the instance, you can set up ssh-agent forwarding. This means that when you connect to a specified host, the remote server is allowed to use the keys from your local ssh-agent to connect to other services, such as your BitBucket account.
So, what you could do is add your public key to your BitBucket account, which would then allow you to access BitBucket via SSH because your local machine has your private key. Then, by enabling ssh-agent forwarding, when you SSH to the EC2 instance, you allow that instance to use your private key to access BitBucket without ever storing the private key on the instance.
Here's an article on how to set this up:
https://developer.github.com/guides/using-ssh-agent-forwarding/
In short, add the following to your ~/.ssh/config:
Host example.com
  ForwardAgent yes
Where example.com is the public IP of your AWS instance, or the EIP assigned to it, etc.
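A quick way to confirm that forwarding works (this is a sketch; it assumes your BitBucket private key is ~/.ssh/mybitbucket locally, and "user" stands in for whatever login user your instance uses) is to ask the forwarded agent for its identities from the remote side:
# On your local machine: load the private key (not the .pub) into your local agent
ssh-add ~/.ssh/mybitbucket
# Connect with forwarding enabled and list the identities the instance can see
ssh -A user@example.com 'ssh-add -l'
If the second command lists your key, git operations on the instance will authenticate to BitBucket without any per-login ssh-agent/ssh-add dance.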

Related

GitLab SSH into Ubuntu server

On my local dev Windows machine I generated an SSH key using PuTTYgen. I also pasted the public key into GitLab's SSH keys section, so they are now linked.
I can now use SSH correctly from my Windows machine, but I also want to use it on my production server, which runs Ubuntu.
For example, I want to clone a repository over SSH onto my Ubuntu machine. Where and how should I add the SSH keys to my Ubuntu server so I can link it with GitLab?
I used this tutorial to generate SSH keys on Windows with PuTTY.
https://ourcodeworld.com/articles/read/1421/how-to-create-a-ssh-key-to-work-with-github-and-gitlab-using-puttygen-in-windows-10
How should I add the ssh keys to my Ubuntu server so I can link it with GitLab?
Ideally, you would create a dedicated key pair on your Ubuntu server, in order to be able to clone GitLab repositories.
On that Ubuntu server, go to the $HOME folder of your account 'user' (replace user with the actual user name you log in with on that server).
cd
# assuming you do not have a default key yet:
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
# copy ~/.ssh/id_rsa.pub to your GitLab account
# Check the key is working
ssh -Tv git@gitlab.com
# Use your key to clone repositories
git clone git@gitlab.com:me/myRepository
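If you would rather keep the key under a non-default name instead of ~/.ssh/id_rsa, a small ~/.ssh/config entry on the Ubuntu server does the trick (a sketch; ~/.ssh/gitlab_rsa is a hypothetical file name):
Host gitlab.com
  User git
  IdentityFile ~/.ssh/gitlab_rsa
  IdentitiesOnly yes
With that in place, ssh -T git@gitlab.com and git clone pick up the dedicated key without any extra flags.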

How do I access a remote (local gitlab instance on remote server) repository over SSH?

The setup is as follows:
remote private server far far away
remote private server has private gitlab instance on port XXXX
remote private server is configured to allow SSH sign-on via SSH key
gitlab instance on port XXXX of remote private server requires SSH key authentication using a different SSH key
How can I clone that repository onto my local machine, and push/pull data remotely given that setup?
This is how I access it locally when I am not far, far away from remote private server:
git clone git@XXX.XXX.XX.X:REPODIR/repo_name.git
In this case, XXX.XXX.XX.X is the IP of the GitLab instance on the remote network.
Is there any way to tunnel into the remote network and access the GitLab instance by proxy (forgive me if I'm using the word incorrectly)?
Thank you.
OK, mostly thanks to @o11c for this; here are the findings that let me clone my repo remotely.
Disclaimer: ProxyJump (-J; see the ssh man page) is the shorthand, more modern version of this, but I couldn't get it working -- if anyone wants to update this with their ProxyJump implementation, that would be useful!
SSH to your account on the main server and open a dynamic SOCKS proxy on a local port (3131 here), using your main identity (this can be in ~/.ssh, or you can manually reference it with -i):
ssh -ND 3131 nkunes@XXX.XXX.1.146 -i ../../keys/XXX-ssh &
I then source this bash script in the shell where I intend to run git commands (notice the ProxyCommand usage instead of ProxyJump; this is the older way of doing it, but it works well for me. Also note that 127.0.0.1:3131 should use the same port you passed to -D above):
alias ssh="ssh -o ProxyCommand='/usr/bin/nc -X 4 -x 127.0.0.1:3131 %h %p'"
export GIT_SSH=~/Desktop/XXX-eng/ssh-access/ssh-proxy.sh
export PRE_SSH_ALIAS_PROMPT="$PS1"
export PS1="<< SSH ALIAS >>$PS1"
Where ssh-proxy.sh is defined as follows (again, keep the port consistent, and possibly use ProxyJump if you want the more modern implementation):
ssh -o ProxyCommand='/usr/bin/nc -X 4 -x 127.0.0.1:3131 %h %p' "$@"
Then, you can clone normally using:
git clone git@XXX.XXX.XX.X:REPODIR/repo_name.git
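For the ProxyJump variant mentioned in the disclaimer above, an untested sketch of the equivalent ~/.ssh/config (reusing the bastion account and key from the first step; 'far-bastion' and 'gitlab-internal' are made-up aliases) would be:
Host far-bastion
  HostName XXX.XXX.1.146
  User nkunes
  IdentityFile ../../keys/XXX-ssh

Host gitlab-internal
  HostName XXX.XXX.XX.X
  User git
  ProxyJump far-bastion
Then git clone gitlab-internal:REPODIR/repo_name.git should tunnel through the bastion with no ssh alias or GIT_SSH wrapper needed.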

How do I use SSH in a Jenkins pipeline?

I have some Jenkins jobs defined using a Jenkins Pipeline Model Definition, which build NPM projects. I use Docker containers to build these projects (using a common image with just Node.js + npm + yarn).
The results of the builds are contained in the dist/ folder that I zipped using a zip pipeline command.
I want to copy this ZIP file to another server using SSH/SCP (with private key authentication). My private key is added to the Jenkins environment (credentials manager), but when I use Docker containers, an SSH connection cannot be established.
I tried to add agent { label 'master' } to use the master Jenkins node for the file transfer, but it seems to create a clean workspace with a new Git fetch, and without my built files.
After I tried the SSH Agent Plugin, I have this output:
Identity added: /srv/jenkins3/workspace/myjob-TFD@tmp/private_key_370451445598243031.key (rsa w/o comment)
[ssh-agent] Started.
[myjob-TFD] Running shell script
+ scp -r dist test@myremotehost:/var/www/xxx
$ docker exec bfda17664965b14281eef8670b34f83e0ff60218b04cfa56ba3c0ab23d94d035 env SSH_AGENT_PID=1424 SSH_AUTH_SOCK=/tmp/ssh-k658r0O76Yqb/agent.1419 ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 1424 killed;
[ssh-agent] Stopped.
Host key verification failed.
lost connection
How do I add a remote host as authorized?
I had a similar issue. I did not use the label 'master', and I found that the file transfer works across slaves when I do it like this:
Step 1 - Create SSH keys on the remote host server and add the public key to its authorized_keys (see the sketch after the pipeline snippet below)
Step 2 - Create a credential in Jenkins using those SSH keys, with the private key from the remote host
Use the SSH Agent plugin:
stage ('Deploy') {
    steps {
        sshagent(credentials : ['use-the-id-from-credential-generated-by-jenkins']) {
            sh 'ssh -o StrictHostKeyChecking=no user@hostname.com uptime'
            sh 'ssh -v user@hostname.com'
            sh 'scp ./source/filename user@hostname.com:/remotehost/target'
        }
    }
}
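A rough sketch of steps 1 and 2 (the user name, host name, and key file name are placeholders, not anything Jenkins mandates):
# On the remote host, as the deployment user
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/jenkins_deploy
cat ~/.ssh/jenkins_deploy.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
# Paste the contents of ~/.ssh/jenkins_deploy (the private key) into a Jenkins
# "SSH Username with private key" credential, then use that credential's ID
# in the sshagent([...]) step above.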
Use the SSH Agent Plugin.
When using this plugin you can use the global credentials.
To add a remote host to known_hosts and hopefully resolve your error, try manually SSHing from the Jenkins host to the target host as the Jenkins user.
Get on the host where Jenkins is installed. Type
sudo su jenkins
Now use ssh or scp like
ssh username@server
You should be prompted like this:
The authenticity of host 'server (ip)' can't be established.
ECDSA key fingerprint is SHA256:some-weird-string.
Are you sure you want to continue connecting (yes/no)?
Type yes. The server will be permanently added as a known host. Don't even bother entering a password; just press Ctrl + C and try running a Jenkins job.
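If you would rather not accept the key interactively, a non-interactive alternative (a sketch; hostname.com is a placeholder) is to pre-populate known_hosts with ssh-keyscan:
# Run as the jenkins user on the Jenkins host
ssh-keyscan -H hostname.com >> ~/.ssh/known_hosts
This records the host key without opening a login session, so it can also be scripted into node provisioning.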
Like @haschibaschi recommends, I also use the ssh-agent plugin. I need to use my personal UID credentials on the remote machine, because it doesn't have a Jenkins account. The code looks like this (using, for example, my personal UID="myuid" and remote server hostname="non_jenkins_svr"):
sshagent(['e4fbd939-914a-41ed-92d9-8eededfb9243']) {
    // 'myuid@' required for scp (this copies from UID jenkins to UID myuid)
    sh "scp $WORKSPACE/example.txt myuid@non_jenkins_svr:${dest_dir}"
}
The ID e4fbd939-914a-41ed-92d9-8eededfb9243 was generated by the Jenkins credentials manager after I created a global domain credentials entry.
After creating the credentials entry, the ID is found under the "ID" column on the credentials page. When creating the entry, I selected the 'SSH Username with private key' kind and pasted in the RSA private key I had created for this purpose under the myuid account on host non_jenkins_svr, without a passphrase.

Adding ssh keys to ssh-agent fails w/ running agent, environment variables set

[SSH] "Could not open a connection to your authentication agent". error
I am trying to add SSH keys to my ssh-agent. I start by launching the ssh-agent:
exec ssh-agent bash
Then I verify that ssh-agent is running:
ps axu | grep [s]sh
and get the following
root 1562 ... ssh-agent bash
The env variables are set correctly.
SSH_AGENT_PID=1562
SSH_AUTH_SOCK=/tmp/ssh-699iHAxuK4xX/agent.1561
However when I try to add the private key using
sudo ssh-add ~/.ssh/peter-key
I get the ssh error
Could not open a connection to your authentication agent.
I have tried the suggestions on Stack Overflow and Server Fault, but nothing has worked.
Note: I am running Ubuntu on one of the free-tier AWS machines. My instance's security group (temporarily) allows all incoming and outgoing SSH connections from any IP address. Does anyone know what the error could be?
Just use
ssh-add ~/.ssh/peter-key
...not...
sudo ssh-add ~/.ssh/peter-key
Using sudo (optionally/configurably, but typically) clears a number of environment variables, including the ones you just verified were set. (Compare output of sudo env and plain env to see this effect).
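For example (the exact output varies by system, but the pattern is typical):
env | grep SSH_        # shows SSH_AUTH_SOCK and SSH_AGENT_PID in your shell
sudo env | grep SSH_   # usually prints nothing: sudo's env_reset strips them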
If you must use sudo to read the key, then you can ensure that the necessary environment variable is set on the other side by doing so explicitly yourself:
sudo env "SSH_AUTH_SOCK=$SSH_AUTH_SOCK" ssh-add ~/.ssh/peter-key
However, security-sensitive programs working with UNIX domain sockets can check the ownership and permissions of the peer on the other end of the socket and refuse to communicate with anything running under a different user account than they expect. So this approach may not be future-proof against security features added to ssh-agent.

Define a set keyfile for Ubuntu to use when SSHing into a server

I have two Amazon EC2 Ubuntu instances. When I connect to one of them, I can do
ssh ubuntu#54.123.4.56
and the shell uses the correct keyfile from my ~/.ssh directory.
I just set up a new instance, and I'm trying to figure out how to replicate that behavior for this new one. It's a minor thing, just driving me nuts. When I log in with:
ssh -i ~/.ssh/mykey.pem ubuntu#54.987.6.54
it works fine, but with just
ssh ubuntu#54.987.6.54
I get:
Permission denied (publickey).
I have no idea how I managed to get it to work this way for the first server, but I'd like to be able to run ssh into the second server without the "-i abc.pem" argument. Permissions are 600:
-r-------- 1 mdexter mdexter 1692 Nov 11 20:40 abc.pem
What I have tried: I copied the public key from authorized_keys on the remote server and pasted it into authorized_keys on the local server, as mdexter@172.12.34.56 (private key), because I thought that might be what creates the association in the shell between that key and that server.
The only difference I can recall between how I set up the two servers is that with the first, I created a .ppk key in PuTTy so that I could connect through FileZilla for SFTP. But I think SSH is still utilizing the .pem given by Amazon.
How can I tell the shell to always use my .pem key when SSHing into that particular IP? It's trivial, but I'm trying to strengthen my (rudimentary) understanding of public/private keys, and I'm wondering if this plays into that.
You could solve this in 3 ways:
By placing the contents of your ~/.ssh/mykey.pem into ~/.ssh/id_rsa on the machine from which you are SSHing into the 2nd instance. Make sure you also change the permissions of ~/.ssh/id_rsa to 600.
Using ssh-agent (ssh-agent will manage the keys for you)
Start ssh-agent
eval `ssh-agent -s`
Add the key to ssh-agent using ssh-add
ssh-add mykey.pem
Using the ssh config file:
From the machine you are SSHing from, put the following contents in the ~/.ssh/config file (make sure to give this file 600 permissions):
Host host2
  HostName 54.987.6.54
  Port 22
  User ubuntu
  IdentityFile ~/.ssh/mykey.pem
Once you do that, you can SSH like this:
ssh host2
After performing any of the above steps you should be able to SSH into your second instance without specifying the key path.
Note: The second option requires you to add the key using ssh-add every time you log out and log back in; to make that permanent, see this SO question.
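One way to avoid re-running ssh-add after every login (assuming a reasonably recent OpenSSH, 7.2 or later, and an agent already running in your session) is to combine options 2 and 3 and let ssh load the key into the agent on first use:
Host host2
  HostName 54.987.6.54
  User ubuntu
  IdentityFile ~/.ssh/mykey.pem
  AddKeysToAgent yes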
