What else can I try in order to get to my AWS EC2 instance [closed] - linux

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 6 years ago.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
I have an AWS EC2 instance of Ubuntu 14.04. It's been about 6 months since I've logged into it, and now I can't get logged in.
I get Permission denied (Public Key)
The thing is, I backed up my .pem file in 3 places, and none of them work. I'm pretty experienced with AWS, and I've never had this happen before.
The command I'm using is ssh -v -i mykey.pem ubuntu@192.168.0.1
The output I'm getting from the command is this:
OpenSSH_7.1p2, OpenSSL 1.0.1f 6 Jan 2014
debug1: Reading configuration data /usr/local/etc/ssh_config
debug1: Connecting to ec2-192-168-0-1.compute-1.amazonaws.com [192.168.0.1] port 22.
debug1: Connection established.
debug1: permanently_set_uid: 0/0
debug1: key_load_public: No such file or directory
debug1: identity file mykey.pem type -1
debug1: key_load_public: No such file or directory
debug1: identity file mykey.pem-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_7.1
debug1: Remote protocol version 2.0, remote software version PaloAltoNetworks_0.2
debug1: no match: PaloAltoNetworks_0.2
debug1: Authenticating to ec2-192-168-0-1.compute-1.amazonaws.com:22 as 'ubuntu'
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-sha1 none
debug1: kex: client->server aes128-ctr hmac-sha1 none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<7680<8192) sent
debug1: got SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: got SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Server host key: ssh-rsa SHA256:Mt8dMlt7QdgQ9kiju3OATK43jnN9oV2pZ4oGZdd46PA
debug1: Host 'ec2-192-168-0-1.compute-1.amazonaws.com' is known and matches the RSA host key.
debug1: Found key in /root/.ssh/known_hosts:34
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey
debug1: Next authentication method: publickey
debug1: Trying private key: mykey.pem
debug1: Authentications that can continue: publickey
debug1: No more authentication methods to try.
Permission denied (publickey).
I have tried rebooting the machine several times.
I've tried this from 3 different locations, one with no firewall at all, and I get the same thing (except of course the lines about firewalls).
I finally gave up on trying to SSH in and decided to use the AWS Management Console's browser-based connect option (a Java SSH client that runs in the browser, Java required).
This also failed. It doesn't support Chrome, and when I try from Firefox it just freezes and never does anything, with no error in the browser console (that I can find; I'm a bit of a noob at browser troubleshooting).
It does the same thing in Edge and IE. I have tried this on all 3 computers, on Ubuntu Desktop and Windows 10, with no luck.
After that failed, I read that I could save the instance to a snapshot, start a new instance from that snapshot, and use a different .pem file, just in case all 3 of my copies were somehow corrupt. I tried that, and the clone I made never started correctly (it only passed 1 of 2 status checks).
Is there anything I haven't tried?
EDIT 1
I have also tried changing the permissions of the .pem file to 400 and 600, as well as deleting the known_hosts file. Neither of these proved to be a solution.

Do you have any monitoring on the instance? A full disk might explain some of the problems, but CloudWatch won't be able to tell you how much space is in use. It might also explain why the AMI won't boot correctly. You should be able to get to the boot log from the AWS console, which might have some information in it. If the problem is disk space, you can launch another instance from your AMI but specify a larger disk.
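If you'd rather pull the boot log from the CLI, something like this should work (the instance ID is a placeholder, and it assumes the AWS CLI is already configured):
aws ec2 get-console-output --instance-id i-0123456789abcdef0 --output text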
Is it possible that the instance was hacked somehow? If someone took it over, they may have changed/removed the key, or even changed the port sshd is listening to.
If your instance is truly hosed, and you want to get the data off it, you should be able to take a snapshot, create a new volume from that snapshot, and mount the resulting volume on a new instance.
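A rough sketch of that rescue path with the AWS CLI (all IDs and the availability zone are placeholders):
# snapshot the stuck instance's root volume
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "rescue copy of stuck root volume"
# create a new volume from the snapshot, in the same AZ as your rescue instance
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1a
# attach it to a healthy instance as a secondary device, then mount it there and copy the data off
aws ec2 attach-volume --volume-id vol-0fedcba9876543210 --instance-id i-0fedcba9876543210 --device /dev/sdf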

Run
chmod 400 mykey.pem
And then try ssh again. This could be the cause of the Permission denied (Public Key) error.
I've had problems when the permissions on my key were too open.
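To check what ssh sees, something like this (the sample output line is only illustrative):
ls -l mykey.pem
# -r-------- 1 me me 1692 Jan  1 12:00 mykey.pem   <- 400; if the key is group/world-readable, ssh refuses to use it
chmod 400 mykey.pem
ssh -i mykey.pem ubuntu@<instance-public-dns>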

Related

ssh (and git) authentication issues on external port/ip (local ip works fine)

I am trying to set up a git repository on a server machine that is remote-accessible over the internet.
I have succeeded in getting git working over local/internal IP addresses. Within the local LAN, I have private-key-based authentication working for SSH (password logins disabled), and I can clone, push, and pull successfully using Git and SSH, e.g.:
ssh USER@192.168.1.xxx
[-> accepts public key, gives me a remote console prompt as "USER", etc]
git clone git+ssh://USER@192.168.1.xxx//gitdir/project.git
[-> creates a local clone as desired, commits and push work, no problems seemingly]
However, I am now trying to access this machine via an external/internet IP in the same way, and I don't understand the behavior it's giving me.
I have enabled port forwarding on my router for port 22 to the server machine.
I have opened port 22 in software on "UFW" on the server machine.
As far as I can tell, I have no settings on my router, SSH configs, or UFW that would block any specific web address or otherwise cause problems on either my local machine or the server. The server should accept a connection from any external address accessing via port 22, and both my local machine and the server allow outgoing connections in general.
I am using Ubuntu 17.04 on the local machine, if that matters.
Both the server and the local machine are currently on the same LAN/connected to the same router.
I have DISABLED "ChallengeResponseAuthentication" and "PasswordAuthentication", and have ENABLED "PubkeyAuthentication" in my "sshd_config".
I have quadruple-checked that I was using the correct IP, and used copy-paste rather than manual typing. Unless I am truly missing something, I am attempting to connect to the correct machine.
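For completeness, the sshd_config settings above boil down to something like this (a sketch of just the relevant lines; on my server the file is /etc/ssh/sshd_config, and sshd needs a reload after edits):
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
sudo systemctl reload ssh   # service name on Ubuntu; may be sshd on other distros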
For a direct SSH login, I get this (using the -v flag):
LOCALUSER@LOCALMACHINE:~$ ssh -v -i ~/.ssh/[PRIVATE_KEY] USER@[IP6_EXTERNAL_IP]
OpenSSH_7.4p1 Ubuntu-10, OpenSSL 1.0.2g 1 Mar 2016
debug1: Reading configuration data /home/[HOME]/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug1: Connecting to [IP6_EXTERNAL_IP] [[IP6_EXTERNAL_IP]] port 22.
debug1: Connection established.
debug1: identity file /home/[HOME]/.ssh/[PRIVATE_KEY] type 4
debug1: key_load_public: No such file or directory
debug1: identity file /home/[HOME]/.ssh/[PRIVATE_KEY]-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_7.4p1 Ubuntu-10
debug1: Remote protocol version 2.0, remote software version OpenSSH_7.4p1 Ubuntu-10
debug1: match: OpenSSH_7.4p1 Ubuntu-10 pat OpenSSH* compat 0x04000000
debug1: Authenticating to [IP6_EXTERNAL_IP] as 'USER'
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: algorithm: curve25519-sha256
debug1: kex: host key algorithm: ecdsa-sha2-nistp256
debug1: kex: server->client cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug1: kex: client->server cipher: chacha20-poly1305@openssh.com MAC: <implicit> compression: none
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ecdsa-sha2-nistp256 SHA256:[SHA_HASH]
debug1: Host '[IP6_EXTERNAL_IP]' is known and matches the ECDSA host key.
debug1: Found key in /home/[HOME]/.ssh/known_hosts:4
debug1: rekey after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: rekey after 134217728 blocks
debug1: SSH2_MSG_EXT_INFO received
debug1: kex_input_ext_info: server-sig-algs=<ssh-ed25519,ssh-rsa,ssh-dss,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521>
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey,password
debug1: Next authentication method: publickey
debug1: Offering ED25519 public key: /home/[HOME]/.ssh/[PRIVATE_KEY]
debug1: Authentications that can continue: publickey,password
debug1: Next authentication method: password
USER@[IP6_EXTERNAL_IP]'s password:
...and no password I supply works. I don't know why the server is even asking for a password; I disabled password logins globally in "sshd_config", and I have triple-checked that there is no exception to that for this user, either.
Git similarly asks for my remote user's password when I try to clone from the server via the external IP (because it works over SSH, of course), if I do the following to switch the remote from the original local IP to the external one:
git remote set-url origin ssh+git://USER@[REMOTE_IP]//gitdir/project.git
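For reference, the same remote in scp-style syntax (pointing at the same absolute /gitdir path) would look roughly like this, with git remote -v to confirm it took effect:
git remote set-url origin USER@[REMOTE_IP]:/gitdir/project.git
git remote -v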
Any help understanding what I'm missing here would be much appreciated; thanks.
Figured it out:
I am unclear as to why, but my system does not like it when I try to access my server via its external public IP from inside my own LAN. I assume this is because the local machine and the server share the same public IP behind the router, and the router doesn't handle that hairpin/loopback case well (confirmation on this would be appreciated).
When I tried to connect from another network, things worked correctly and as expected. I am asked for my key and immediately rejected with no password prompt if I don't supply the right one, and both SSH and git via SSH work as desired over the internet after I change my repo source to use the public IP. Any further details were covered by editing ~/.ssh/config to set key identities and hosts.
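For reference, the ~/.ssh/config entry I mean looks roughly like this (the host alias and key path are placeholders):
Host myserver
    HostName [EXTERNAL_IP]
    User USER
    IdentityFile ~/.ssh/[PRIVATE_KEY]
    IdentitiesOnly yes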
Marking as solved. Thank you.

Git clone via SSH issue

I want to clone a git repository to my ubuntu through ssh, but I'm getting the following error:
Permission denied (publickey). fatal: Could not read from remote
repository.
My public key is added to the agent, and I have already used it on Windows, but when I tried it on Linux it didn't work.
Every remote git repo is associated with some login that will be performed on the remote system in order to gain access to the repo directory. This login attempt is failing, because (a) your SSH key is not being recognized (or is not being correctly served by an ssh-agent on your computer), and (b) password login is not an alternative.
To help diagnose the problem, remove git from the picture. Use git remote -v to find the user/host that is being attempted, and try a direct ssh login to that account. (It will fail.) Diagnose the problem as you would for any similar ssh-only issue. Once you are able to log in, you will be able to clone.
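Concretely, that looks something like this (the remote URL shown is only illustrative):
git remote -v
# origin  git@example.com:team/project.git (fetch)
# origin  git@example.com:team/project.git (push)
ssh -v git@example.com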
Git does its thing over ssh (in your case) or https. It's generally better to debug connection problems using the underlying command and not through Git; you'll get better diagnostics and can use normal ssh debugging techniques.
Try connecting to the same remote just using ssh -v (ssh in verbose mode). If it's git clone git@github.com:schwern/dotfiles.git then try ssh -v git@github.com. Just the user and host. And yes, the user should be git; GitHub identifies you by your ssh key.
You should get something like this...
$ ssh -v git@github.com
OpenSSH_7.2p1, OpenSSL 1.0.2h 3 May 2016
debug1: Reading configuration data /Users/schwern/.ssh/config
debug1: Reading configuration data /opt/local/etc/ssh/ssh_config
debug1: Connecting to github.com [192.30.253.113] port 22.
debug1: Connection established.
debug1: key_load_public: No such file or directory
debug1: identity file /Users/schwern/.ssh/id_rsa type -1
...a whole lot of ssh looking for your ssh keys...
debug1: Authentications that can continue: publickey
debug1: Next authentication method: publickey
debug1: Offering RSA public key: /Users/schwern/.ssh/github
debug1: Authentications that can continue: publickey
debug1: Trying private key: /Users/schwern/.ssh/id_rsa
debug1: Trying private key: /Users/schwern/.ssh/id_dsa
...a whole lot of trying ssh keys...
debug1: No more authentication methods to try.
Permission denied (publickey).
The important parts are where it looks for and offers keys. If you don't see your GitHub key in there, you need to figure out why. If you do see your GitHub key in there, check that it matches the key GitHub has on file for you.
What you want to see is this.
$ ssh -v git@github.com
OpenSSH_7.2p1, OpenSSL 1.0.2h 3 May 2016
debug1: Reading configuration data /Users/schwern/.ssh/config
debug1: Reading configuration data /opt/local/etc/ssh/ssh_config
debug1: Connecting to github.com [192.30.253.113] port 22.
debug1: Connection established.
debug1: key_load_public: No such file or directory
debug1: identity file /Users/schwern/.ssh/id_rsa type -1
...ssh finding your keys...
debug1: Authentications that can continue: publickey
debug1: Next authentication method: publickey
debug1: Offering RSA public key: /Users/schwern/.ssh/github
debug1: Server accepts key: pkalg ssh-rsa blen 279
debug1: Authentication succeeded (publickey).
Authenticated to github.com ([192.30.253.113]:22).
...Yay! You're in!...
debug1: channel 0: new [client-session]
debug1: Entering interactive session.
debug1: pledge: network
debug1: Requesting authentication agent forwarding.
PTY allocation request failed on channel 0
Hi schwern! You've successfully authenticated, but GitHub does not provide shell access.
debug1: client_input_channel_req: channel 0 rtype exit-status reply 0
debug1: channel 0: free: client-session, nchannels 1
Connection to github.com closed.
Transferred: sent 2936, received 1796 bytes, in 0.2 seconds
Bytes per second: sent 13380.7, received 8185.2
debug1: Exit status 1

Why does scp sporadically fail, when doing multiple scps in parallel?

I have a small application that's trying to do a dozen parallel "scp" runs, pulling files from a remote system. Usually, it runs fine.
Sometimes, one or two of the scp runs quietly die.
("Quietly" if pulling from Linux; if pulling from HP-UX, I get a message like Connection reset by peer.)
If I add "-v" to my scp commands, then when a failure occurs I see that I'm getting "ssh_exchange_identification: read: Connection reset by peer" (on Linux; haven't tried the -v on HP-UX).
Here's the "scp -v" output for a typical run, with the point where a 'bad' run and a 'good' run diverge indicated:
Executing: program /usr/bin/ssh host wilbur, user (unspecified), command scp -v -p -f /home/sieler/source/misc/[p-q]*.[ch]
OpenSSH_6.9p1, LibreSSL 2.1.8
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 51: Applying options for *
debug1: Connecting to wilbur [10.84.3.61] port 22.
debug1: Connection established.
debug1: identity file /Users/sieler/.ssh/id_rsa type 1
debug1: key_load_public: No such file or directory
debug1: identity file /Users/sieler/.ssh/id_rsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /Users/sieler/.ssh/id_dsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /Users/sieler/.ssh/id_dsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /Users/sieler/.ssh/id_ecdsa type -1
debug1: key_load_public: No such file or directory
debug1: identity file /Users/sieler/.ssh/id_ecdsa-cert type -1
debug1: key_load_public: No such file or directory
debug1: identity file /Users/sieler/.ssh/id_ed25519 type -1
debug1: key_load_public: No such file or directory
debug1: identity file /Users/sieler/.ssh/id_ed25519-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.9
'bad' and 'good' runs match up to this point, then...
Bad:
ssh_exchange_identification: read: Connection reset by peer
Good:
debug1: Remote protocol version 2.0, remote software version OpenSSH_5.3
debug1: match: OpenSSH_5.3 pat OpenSSH_5* compat 0x0c000000
debug1: Authenticating to wilbur:22 as 'sieler'
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr umac-64@openssh.com none
debug1: kex: client->server aes128-ctr umac-64@openssh.com none
...
Although the usual host machine for the script and scp runs is a Mac running OS X 10.11.4, the problem has been reproduced to/from several combinations of Mac/Linux/HP-UX (enough to rule out it being a Mac- or HP-UX-specific problem).
IIRC, pulling from Linux to Mac has had the problem, as well as pulling from HP-UX to Mac and from Linux to HP-UX. I haven't tried pulling from Mac or HP-UX to Linux.
Is there something about scp/ssh/openssh that parallel usage sometimes fails?
If I run sshd on the Linux system with -ddd, then the daemon stops after the first scp accesses it (that scp has no problem), and the other eleven scp runs fail.
Thanks
This is probably caused by the limit on parallel unauthenticated sessions in sshd_config. By default, the server is configured to do "random early drop", which means it refuses new connections if the number of active unauthenticated connections exceeds a limit. The responsible option is MaxStartups (from man sshd_config):
MaxStartups
Specifies the maximum number of concurrent unauthenticated connections to the SSH daemon. Additional connections will be dropped until authentication succeeds or the LoginGraceTime expires for a connection. The default is 10:30:100.
Alternatively, random early drop can be enabled by specifying the three colon separated values “start:rate:full” (e.g. "10:30:60"). sshd(8) will refuse connection attempts with a probability of “rate/100” (30%) if there are currently “start” (10) unauthenticated connections. The probability increases linearly and all connection attempts are refused if the number of unauthenticated connections reaches “full” (60).
Bumping the value to something bigger than the number of connections you expect should solve your problem. Otherwise, you can set LogLevel DEBUG3 in sshd_config to see more detail in the system log.
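For example (the numbers are only an illustration; pick values that fit your workload and reload sshd after editing):
# /etc/ssh/sshd_config on the server
MaxStartups 30:30:100
LogLevel DEBUG3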
But when you are connecting to the same server repeatedly, it is better to use connection multiplexing. It will be faster and you will not hit this limit. Check out the ControlMaster option in ssh_config, or see my similar answer for a quick tour of this "magic".
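A minimal client-side sketch (the host name matches the example above; the socket path is only a convention):
# ~/.ssh/config on the machine running the parallel scps
Host wilbur
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h:%p
    ControlPersist 10m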

Azure acs ssh login keeping failing with "permission denied (public key)"

I have successfully deployed a Mesos cluster on Azure Container Service following the "deploy a container service cluster" article. I used the Azure CLI on OS X to create the cluster. As part of the process I created a new ssh key pair:
ssh-keygen -t rsa -b 2048
After the deployment went through successfully, I'm trying to ssh into the endpoint but am receiving "Permission denied (publickey)":
ssh -L 80:localhost:80 -N azureuser@xyz.eastus2.cloudapp.azure.com -p 2200 -v
The verbose output (not all of it, just the last few lines):
debug1: Host '[xyz.eastus2.cloudapp.azure.com]:2200' is known and matches the RSA host key.
debug1: Found key in /var/root/.ssh/known_hosts:2
debug1: ssh_rsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: Roaming not allowed by server
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey
debug1: Next authentication method: publickey
debug1: Trying private key: /var/root/.ssh/id_rsa
debug1: Trying private key: /var/root/.ssh/id_dsa
debug1: No more authentication methods to try.
Permission denied (publickey).
I don't recall any issues while creating the ssh keys, but maybe there's something I've missed; I'm just not sure what it could be.
I am not using local port forwarding; the following worked for me: ssh -i /<path>/id_rsa username@masteralias.westus.cloudapp.azure.com -p 2200 -v
Also, if you try creating the cluster using https://github.com/Azure/azure-quickstart-templates/tree/master/101-acs-mesos, the parameter screen tells you the following about the key (in a tooltip): "Configure all linux machines with the SSH RSA public key string. Your key should include three parts, for example 'ssh-rsa AAAAB...snip...UcyupgH azureuser@linuxvm'". So please make sure that your key has all 3 parts as mentioned.
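To double-check the key string you pasted into the template (the path assumes the default location produced by the ssh-keygen command above):
cat ~/.ssh/id_rsa.pub
# expected shape: <key-type> <base64 blob> <comment>
# e.g. ssh-rsa AAAAB3Nza...snip...UcyupgH azureuser@linuxvm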

EC2 keypair works in one instance but fails on other - Permission denied (publickey)

I have read many posts on this subject but none helped me solve my issue.
I have a machine amazon ec2 which I connect using this SSH command:
ssh -i /Library/AWS/glrpopulis.pem ec2-user@54.225.154.23
I've never had problems with this command until now. It just stopped working out of nowhere; the following message is displayed: Permission denied (publickey).
I really can't understand why suddenly the same command I use almost everyday is failing to work. Probably I've changed something I wasn't supposed to, but I'm having a really hard time figuring out what.
I was creating a service for a web application (atlassian bamboo) when that happened the first time, but I'm not sure if this relates to the error.
I have rebooted the machine a couple of times and tried over and over again, with no success.
The complete output with the -v option is displayed below:
mac-pipo:~ felipereis$ ssh -v -i /Library/AWS/glrpopulis.pem ec2-user@54.225.154.23
OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
debug1: Reading configuration data /etc/ssh_config
debug1: /etc/ssh_config line 20: Applying options for *
debug1: Connecting to 54.225.154.23 [54.225.154.23] port 22.
debug1: Connection established.
debug1: identity file /Users/felipereis/.ssh/id_rsa type 1
debug1: identity file /Users/felipereis/.ssh/id_rsa-cert type -1
debug1: identity file /Users/felipereis/.ssh/id_dsa type -1
debug1: identity file /Users/felipereis/.ssh/id_dsa-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_6.2
debug1: Remote protocol version 2.0, remote software version OpenSSH_6.2
debug1: match: OpenSSH_6.2 pat OpenSSH*
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: server->client aes128-ctr hmac-md5-etm@openssh.com none
debug1: kex: client->server aes128-ctr hmac-md5-etm@openssh.com none
debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
debug1: Server host key: RSA 19:ef:f1:2b:56:dd:86:ec:42:65:ff:1d:6b:64:0f:f3
debug1: Host '54.225.154.23' is known and matches the RSA host key.
debug1: Found key in /Users/felipereis/.ssh/known_hosts:12
debug1: ssh_rsa_verify: signature correct
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: Roaming not allowed by server
debug1: SSH2_MSG_SERVICE_REQUEST sent
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey
debug1: Next authentication method: publickey
debug1: Offering RSA public key: /Users/felipereis/.ssh/id_rsa
debug1: Authentications that can continue: publickey
debug1: Offering RSA public key: /Library/AWS/glrpopulis.pem
debug1: Authentications that can continue: publickey
debug1: Trying private key: /Users/felipereis/.ssh/id_dsa
debug1: No more authentication methods to try.
Permission denied (publickey).
UPDATE:
* I have just tested and I'm able to use the same key (glrpopulis.pem) to connect to a different EC2 instance, so maybe something is going on with the first machine.
Sounds like the keys under ~/.ssh/authorized_keys got messed up or the file got deleted.
Try the following:
Stop your EC2 instance
Detach your root Volume (/dev/sda1) -- Assuming this is Volume A
Spin up a new EC2 instance of the same type and same credentials.
Attach Volume A to that new instance as /dev/sdf
ssh into this new instance.
mkdir -p /mnt/xvdf
mount /dev/xvdf /mnt/xvdf
cp ~/.ssh/authorized_keys /mnt/xvdf/home/ec2-user/.ssh/
chmod 700 /mnt/xvdf/home/ec2-user/.ssh
chmod 600 /mnt/xvdf/home/ec2-user/.ssh/authorized_keys
Shutdown the new instance
Detach Volume A from the new instance
Reattach Volume A as /dev/sda1 on the original instance.
Start the original instance.
You should be able to login now.
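One caveat worth adding (assuming the ec2-user uid/gid match on both instances, which is typical for stock Amazon Linux AMIs):
# make sure the copied file ends up owned by ec2-user, or sshd will still reject it
chown ec2-user:ec2-user /mnt/xvdf/home/ec2-user/.ssh/authorized_keys
# and unmount cleanly before detaching the volume
umount /mnt/xvdf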
Depending on your AMI, the public key might be being added to the authorized_keys file of a different user than ec2-user.
To find out, you can view the boot log for the instance in the EC2 console, and it should output the username that cloud-init is using as the "default user". Mine has a line like this:
ci-info: +++++++++++++++++++++Authorized keys from /home/ec2-user/.ssh/authorized_keys for user ec2-user++++++++++++++++++++++
You can also try logging in as root as that will sometimes give an error like 'Please login as the user "ec2-user" rather than the user "root".'
This happened to me, and it was because I had updated my version of cloud-init, which is what adds the public key to authorized_keys. The default config file (/etc/cloud/cloud.cfg) was replaced, causing the default user to change from "ec2-user" to "cloud-user".
I fixed this issue by changing the system_info section of the new /etc/cloud/cloud.cfg to this:
...
system_info:
  ...
  default_user:
    name: ec2-user
    sudo: ALL=(ALL) NOPASSWD:ALL
...
You can then create a new AMI from that instance, and it should set up ec2-user correctly again.
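If you prefer the CLI for that last step, it looks roughly like this (the instance ID and image name are placeholders):
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "cloud-init-fixed" --description "ec2-user restored as the default user"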
