I am facing a very minor issue and am not able to figure it out :(
I have two AWS accounts (preprod and prod).
The first AWS account is the preprod account, where I created a jumphost (preprod_Server1) and log in from it to another instance (preprod_Server2):
preprod_Server1>> ssh vin@preprod_Server2 (this works fine, passwordless SSH)
Now in the second AWS account, i.e. prod:
I have created server_prod1 (a jumphost), and I have taken an AMI of preprod_Server2.
I copied the id_rsa public key into the authorized_keys file of preprod_Server2 and then took an AMI image of it.
Now I launch an instance from the AMI of preprod_Server2,
but when I log in from server_prod1 it does not allow me:
prod_Server1>> ssh vin@preprod_Server2 (I get Permission denied)
Note: only the .pem key pair differs between the preprod and prod accounts; is that an issue?
I do not know the root credentials of preprod_Server2.
In the prod account I am able to log in to other instances; only the AMI of preprod_Server2 is giving a problem.
The key is excluded from the AMI when you build it.
Another key is injected into the machine when you launch it (every time you launch an EC2 instance you have to specify a key pair).
This works by copying the id_rsa public key of prod_Server1 into the authorized_keys file on preprod_Server2, and then taking the AMI of preprod_Server2.
prod_Server1>> ssh vin@preprod_Server2 (this works)
Note: initially, when I copied the key, there was an extra space, which caused the problem.
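The step that bit me can be sketched as follows; the key value and file path here are placeholders standing in for prod_Server1's ~/.ssh/id_rsa.pub and preprod_Server2's ~/.ssh/authorized_keys:

```shell
# Append the jumphost's public key as one clean line; a stray space or line
# break inside the key is enough to make sshd reject it.
KEY='ssh-rsa AAAAB3NzaC1yc2EFAKEKEY vin@prod_Server1'   # placeholder key
AUTH='./authorized_keys.demo'    # stands in for ~/.ssh/authorized_keys
printf '%s\n' "$KEY" >> "$AUTH"
chmod 600 "$AUTH"   # sshd ignores the file if it is group/world writable
```

In practice, `ssh-copy-id -i ~/.ssh/id_rsa.pub vin@preprod_Server2` does the append for you and avoids copy/paste whitespace errors entirely.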
I'm trying to figure out a way to run Terraform from inside an AWS WorkSpace. Is that even possible? Has anyone made this work?
AWS WorkSpaces doesn't support the same concept of an instance profile with an associated IAM role attached to it.
I'm pretty confident, however, that you can run Terraform in an AWS WorkSpace just as you can do it from your personal computer. With Internet access (or VPC endpoints), it can reach AWS APIs and just requires a set of credentials, which in this case (without instance profiles) would be an AWS Access Key and Secret Access Key - just like on your local computer.
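A minimal sketch of supplying those credentials to Terraform inside a WorkSpace, assuming you export them as environment variables (the values below are fake placeholders):

```shell
# Terraform's AWS provider reads the standard AWS environment variables,
# exactly as it would on a personal computer (fake placeholder values):
export AWS_ACCESS_KEY_ID='AKIAFAKEACCESSKEY'
export AWS_SECRET_ACCESS_KEY='fakeSecretAccessKey'
export AWS_DEFAULT_REGION='us-west-2'
# then, in the directory containing your .tf files:
#   terraform init && terraform plan
```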
I am trying to set up the aws-cli locally using an IAM role, without using an access key/secret access key, but I am unable to get information from the metadata URL [http://169.256.169.256/latest/meta-data].
I am running an EC2 instance with Ubuntu Server 16.04 LTS (HVM), SSD Volume Type - ami-f3e5aa9c. I have tried to configure the aws-cli on that instance, but I am not sure what kind of role/policy/user is needed to get the aws-cli configured on my EC2 instance.
Please provide a step-by-step guide; I just need direction, so useful links are also appreciated.
To read instance metadata, you don't need to configure the AWS CLI. The problem in your case is that you are using the wrong URL to read the instance metadata. The correct URL is http://169.254.169.254/. For example, to read the AMI ID of the instance, you can use the following command:
curl http://169.254.169.254/latest/meta-data/ami-id
However, if you would like to configure the AWS CLI without using access/secret keys, follow the steps below.
Create an IAM instance profile and attach it to the EC2 instance:
Open the IAM console at https://console.aws.amazon.com/iam/.
In the navigation pane, choose Roles, Create role.
On the Select role type page, choose EC2 and the EC2 use case. Choose Next: Permissions.
On the Attach permissions policy page, select an AWS managed policy that grants your instances access to the resources that they need.
On the Review page, type a name for the role and choose Create role.
Install the AWS CLI (Ubuntu).
Install pip if it is not installed already.
`sudo apt-get install python-pip`
Install AWS CLI.
`pip install awscli --upgrade --user`
Configure the AWS CLI. Leave AWS Access Key ID and AWS Secret Access Key blank, since we want to use a role.
$ aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]: us-west-2
Default output format [None]: json
Modify the Region and Output Format values if required.
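The prompts above end up in ~/.aws/config; there is no credentials file because the instance role supplies temporary keys automatically. A sketch of the resulting file:

```ini
# ~/.aws/config -- written by `aws configure` when both keys are left blank
[default]
region = us-west-2
output = json
```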
I hope this helps you!
AWS Documentation on how to setup an IAM role for EC2
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
I share a batch account (a headless user account) on a remote machine. To ssh passwordless to the batch account, I appended my public key to the batch account's .ssh/authorized_keys. SSHing to the remote machine as the batch account user works fine.
Now, I have the need to copy certain files from this headless user account's directories to my machine. So, whenever I do
scp batch_user_account@remote_machine:file_address local_machine_address
it asks for batch_user_account's password, which I do not know.
I also tried offering my private key as the identity file:
scp -i ~/.ssh/id_rsa batch_user_account@remote_machine:file_address local_machine_address
But this also gives me a permission denied error for the batch user account's folder.
Am I doing something incorrect here?
Can anyone guide me here?
Thank you.
I tried the same task (copying files from the batch user account on the remote machine to machine A) from a different machine, B, to see whether the error reproduced. To ssh passwordless to the batch account on the remote machine from this new machine, I appended my public key to the batch account's .ssh/authorized_keys. On this new machine, the command
scp batch_user_account@remote_machine:file_address local_machine_address
worked fine. So I realized there were permission issues to solve: when I changed the permissions on the destination machine, it worked.
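The destination-side fix reduces to making sure the target directory exists and is writable before retrying the scp; a minimal local check (the path here is illustrative, not the real destination):

```shell
# scp can only create the file if the target directory exists and is
# writable by your user (illustrative path):
DEST='./incoming'
mkdir -p "$DEST"
chmod u+rwx "$DEST"
[ -w "$DEST" ] && echo "scp target is writable"
```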
I'm trying to set up continuous deployment for an Azure website using Bitbucket.
The problem is I'm using a submodule (which I own) that Azure doesn't have permission to, because it doesn't add that by default.
I'm trying to figure out how to add an SSH key so that Azure can connect and get the submodule.
Steps I've taken.
Created a new public/private key pair with PuTTYgen, and added the public key to my Bitbucket account under the name Azure.
FTPed into Azure and added both the public and private key files to the .ssh directory (I didn't know which one I was supposed to add). They are named azurePrivateKey.ppk and azurePublicKey.
Updated my config file to look like this
HOST *
StrictHostKeyChecking no
Host bitbucket.org
HostName bitbucket.org
PreferredAuthentications publickey
IdentityFile ~/.ssh/azurePrivateKey.ppk
(no clue if that's right)
Updated my known_hosts file to look like this
bitbucket.org,131.103.20.168, <!--some key here...it was here when i opened the file, assuming it's the public key for the repo i tried to add-->
bitbucket.org,131.103.20.168, <!--the new public key i tried to add-->
And I still get the same error: no permission to get the submodule. So I'm having trouble figuring out which step I did incorrectly, as I've never done this before.
Better late than never, and it could be useful for others:
A Web App already has an SSH key; to get it:
https://[web-site-name].scm.azurewebsites.net/api/sshkey?ensurePublicKey=1
You can then add this key to your git repo's deploy keys.
I've never set that up in Azure but some general rules of thumb for handling SSH keys:
The private key in $HOME/.ssh/ must have file mode 600 (RW only for the owner)
You need both the public and the private key in this folder, usually named id_rsa and id_rsa.pub, but you can change the filenames to whatever you like
You have to convert the private key generated by PuTTYgen to an OpenSSH2-compatible format; see How to convert SSH keypairs generated using PuttyGen
known_hosts stores the public keys of the servers you've already connected to. That's useful to make sure that you are really connecting to the same server again. more detailed information on this topic
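The permission rules above can be applied like this; a scratch directory stands in for the real ~/.ssh, and the puttygen conversion (from the putty-tools package) is shown as a comment:

```shell
SSHDIR='./ssh-demo'             # stands in for ~/.ssh
mkdir -p "$SSHDIR"
touch "$SSHDIR/id_rsa" "$SSHDIR/id_rsa.pub"
chmod 700 "$SSHDIR"             # ssh skips the directory if it is too open
chmod 600 "$SSHDIR/id_rsa"      # private key: owner read/write only
chmod 644 "$SSHDIR/id_rsa.pub"  # public key may be world-readable
# Convert a PuTTY .ppk key to OpenSSH format (requires putty-tools):
#   puttygen azurePrivateKey.ppk -O private-openssh -o id_rsa
```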
HTH
So if you, like me, have multiple private submodules on the same GitHub account that the App Service deploys from, you can give your service access to all of them by moving the deployment key.
Go to the repo where your service is hosted.
In settings go to deploy keys.
Remove the deployment key.
Get the public key from https://[your-web-app].scm.azurewebsites.net/api/sshkey?ensurePublicKey=1
Add the key to your SSH keys in the account settings for github.
If you have modules on several accounts you can add the same key to each account.
After this the service can access private repos on all accounts with the key.
Scenario:
I have a running ec2 instance but don't have the key pair for the instance.
I have a ftp-user account set up but don't have root access.
I want to duplicate the running instance to a new instance to gain root access.
Problem:
When I try to create a new instance, from a snapshot of the old one, putty says "Server refused our key" when trying to ssh into it...
This is what I did:
Created a snapshot of the old instance's ebs volume
From the snapshot I created an image
Made sure the architecture and kernel-id matched the old instance
I launched a new instance from the image
Created a new key pair
Created a new security group and made sure port 22 was open
Assigned an elastic ip to the instance
I downloaded and converted the key pair .pem file with puTTYgen
Loaded .pem file into puTTYgen
Used SSH-2 RSA 1024
Saved private key
Tried to ssh into the instance with putty (BUT FAILING)
Used elastic ip address
Tried with usernames: "ec2-user", "root", "ubuntu", "bitnami"
What could be wrong?
The image and your new instance still use the original key pair. Unless you prepare the instance to accept a new key at launch, it will not accept yours.
What you need to do is attach the volume to an entirely new instance (created from a public AMI). Mount the volume and edit the user's authorized_keys file on that volume. Put in your new key, and then move the volume back to the original instance.
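A sketch of that recovery, simulated against a local directory standing in for the mounted volume (on the real rescue instance you would attach the old volume and run something like `sudo mount /dev/xvdf1 /mnt/rescue` first; device names vary by instance type):

```shell
MNT='./rescue-mnt'              # stands in for /mnt/rescue
mkdir -p "$MNT/home/ec2-user/.ssh"
# Your NEW public key (placeholder value):
NEWKEY='ssh-rsa AAAAB3NzaC1yc2ENEWFAKEKEY rescue'
printf '%s\n' "$NEWKEY" >> "$MNT/home/ec2-user/.ssh/authorized_keys"
chmod 600 "$MNT/home/ec2-user/.ssh/authorized_keys"
# Then unmount, detach the volume, and re-attach it to the original
# instance as its root device; log in with the NEW key.
```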