I use Amazon EC2 to host some web sites and databases.
I have a new developer joining me tomorrow.
If I create an IAM user and attach the AmazonEC2FullAccess policy (arn:aws:iam::aws:policy/AmazonEC2FullAccess, "Provides full access to Amazon EC2 via the AWS Management Console") to him,
will he be able to access secrets stored inside the Linux EC2 instances created in the past? Basically, does this policy somehow allow access to pre-created Linux instances?
EDIT: What if he/she attempts a disk recovery procedure? For example, mounting the disk of a VM in a new EC2 instance.
When you give a user AmazonEC2FullAccess, he will be able to see all the EC2 instances in the AWS account. Even if you don't provide him the keys to pre-created EC2 instances, he will be able to take an AMI of a pre-created instance, launch it with a new key, and get access to that instance.
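To make the risk concrete, here is a minimal boto3 sketch of that AMI route; the region, instance ID, and key pair name are all hypothetical placeholders:

```python
import boto3

# Hypothetical region and IDs, for illustration only.
ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Create an AMI from a pre-existing instance (no SSH key needed).
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",   # the pre-created instance
    Name="copy-of-prod",
)
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

# 2. Launch a new instance from that AMI with the user's own key pair.
ec2.run_instances(
    ImageId=image["ImageId"],
    InstanceType="t3.micro",
    KeyName="developers-own-key",       # a key the developer controls
    MinCount=1,
    MaxCount=1,
)
# The new instance carries a copy of the original disk, secrets included.
```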
He can also perform the disk recovery procedure you mentioned in your use case. So you have some of the options below.
Do not provide AmazonEC2FullAccess. Ask him what specification he needs for the server, launch the EC2 instance to that specification, and provide him SSH-jailed user access to that instance (a sketch of attaching a more limited policy instead follows these options).
Set up CloudTrail so that you can monitor the resources created by that user for any suspicious activity: https://aws.amazon.com/cloudtrail/
The third option, since as you mentioned he is a developer: just provide him deployment and Git access to the application running on the EC2 instance.
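If he still needs some visibility in the console, one middle ground (my suggestion, not mentioned above) is attaching the AWS managed AmazonEC2ReadOnlyAccess policy instead of full access. A minimal boto3 sketch, with a hypothetical user name:

```python
import boto3

iam = boto3.client("iam")

# Attach read-only EC2 access instead of AmazonEC2FullAccess.
# "new-developer" is a hypothetical user name.
iam.attach_user_policy(
    UserName="new-developer",
    PolicyArn="arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess",
)
```

Read-only access still exposes instance metadata in the console, but it removes the AMI-and-relaunch and volume-remount paths described above.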
The IAM policy only gives someone access to the AWS EC2 API, where they can do things like create new instances, shut down existing instances, etc. This does not give someone access to log in to any EC2 servers. For that you would need to give them the SSH key (for Linux) or password (for Windows) that was set up when the server was created.
Related
I have developed a SaaS app using the MEAN stack that works perfectly on my local machine and server. Now I have deployed my app on an AWS EC2 instance.
The problem is that whenever I make a request with a big data query, my EC2 instance/server stops responding and I cannot access it from PuTTY or FileZilla.
Should I use another hosting service, or is there a problem with my app's infrastructure?
(Sorry for the bad English.)
It seems like your EC2 instance is out of resources, hence not responding to PuTTY/FileZilla.
You may check the CPU utilization on the Monitoring tab in the EC2 console, or via CloudWatch.
You may also install and configure the CloudWatch agent on your instance to get improved logging of RAM usage as well as application logs.
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html
If the problem is resources (CPU, RAM, disk), you can change your instance type to a more appropriate one.
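For the CloudWatch check mentioned above, a minimal boto3 sketch that pulls the last hour of CPUUtilization; the region and instance ID are assumptions:

```python
import boto3
from datetime import datetime, timedelta, timezone

# Hypothetical region and instance ID.
cw = boto3.client("cloudwatch", region_name="us-east-1")

now = datetime.now(timezone.utc)
stats = cw.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,                      # 5-minute buckets
    Statistics=["Average", "Maximum"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"], point["Maximum"])
```

A sustained Maximum near 100% around the times your big queries run would support the out-of-resources theory.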
BTW, instead of using PuTTY/FileZilla, you can connect to your instance via the Connect tab or Session Manager: right-click the instance name and choose "Connect".
I am using Amazon EC2 with the Elastic Beanstalk deployment process through Visual Studio. All is working well, except that when the application is deployed it does not have write permission by default, so I had to manually Remote Desktop into the individual machine and grant write permission through the IIS site's permissions.
How can I automate this process, since Amazon adds servers to the load balancer via auto-scaling, etc.?
Or, if I change one, will the servers that follow copy exactly what I did manually?
I am a little confused; this is my first time deploying. Please help!
Thanks
Yes, you can use an ebextensions config to set permissions on the directory after the instance spins up. Here is an example of someone creating a directory and setting the permissions on the new directory; you should be able to adapt it to your circumstances:
AWS Beanstalk ebextensions on windows
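As a starting point, a minimal .ebextensions sketch for the Windows/IIS case; the target directory is an assumption, so replace it with the folder your app actually writes to:

```yaml
# .ebextensions/01-permissions.config
# Grants the IIS worker group modify rights during deployment.
# "C:\inetpub\wwwroot\App_Data" is a hypothetical path.
container_commands:
  01_grant_write:
    command: 'icacls "C:\inetpub\wwwroot\App_Data" /grant "IIS_IUSRS:(OI)(CI)M"'
```

Because the config ships with the application bundle, every instance that auto-scaling launches runs the same command, which removes the need to Remote Desktop into each machine.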
Is there a way to use IAM to manage developer access to an EC2 instance? (SSH, not the EC2 API.)
Not the EC2 REST API or the online console, but managing individual SSH or FTP access to a server?
What you are looking for is a Linux Pluggable Authentication Module (PAM) that talks to the AWS IAM service.
This is not available out of the box in an image, but have a look here:
https://github.com/denismo/aws-iam-ldap-bridge
This project allows you to sync an LDAP server with IAM, and then you can configure your sshd to use the LDAP server.
That might work for you.
You can use either of these two projects: https://github.com/widdix/aws-ec2-ssh or https://github.com/kislyuk/keymaker
They amount to synchronising IAM accounts to local user accounts and can pull in the SSH keys. They both rely on a cron job to keep everything up to date.
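Both projects boil down to something like the following boto3 sketch, run periodically from cron. This is a simplified assumption of how they work: a real sync would also create the local accounts and write each user's ~/.ssh/authorized_keys, which is omitted here:

```python
import boto3

iam = boto3.client("iam")

# For each IAM user, collect their active public SSH keys and print
# an authorized_keys line per key. A real sync would write these to
# ~/.ssh/authorized_keys for a matching local account.
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        name = user["UserName"]
        keys = iam.list_ssh_public_keys(UserName=name)["SSHPublicKeys"]
        for meta in keys:
            if meta["Status"] != "Active":
                continue
            body = iam.get_ssh_public_key(
                UserName=name,
                SSHPublicKeyId=meta["SSHPublicKeyId"],
                Encoding="SSH",
            )["SSHPublicKey"]["SSHPublicKeyBody"]
            print(f"{name}: {body}")
```

Revoking access then becomes an IAM operation (deactivate the key or delete the user), and the next cron run propagates it to every instance.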
No, IAM is meant to control access to EC2 resources via the API. Logging into an instance via SSH does not qualify as such. Anyone who has the .pem key can log into the instance (presuming SSH access is allowed in the security groups).
I have just built a cloud using AWS.
It includes a few servers, including an RDS instance.
The RDS server is running an accounting application. I have limited access to this server to a certain security group, and I've also set up some group policies.
What I wanted to know is: what security can I put in place to protect the server, i.e., before users even get logged in?
Maybe something like a dial-in VPN; something users have to authenticate with before they even have the option of accessing the RDS server.
It sounds like this might be a fit for Amazon Virtual Private Cloud (VPC). The pricing is the same as for public instances; what you pay for is the VPN access on a per-connection, per-hour basis.
We are in the process of setting up our IT infrastructure on Amazon EC2.
Assume a setup along the lines of:
X production servers
Y staging servers
Log collation and Monitoring Server
Build Server
Obviously we need various servers to talk to each other. A new build needs to be scp'd over to a staging server. The log collator needs to pull logs from the production servers. We are quickly realizing we are running into trouble managing access keys. Each server has its own key pair and possibly its own security group. We end up copying *.pem files from server to server, making a mockery of security. The build server has the access keys of the staging servers in order to connect via SSH and push a new build. The staging servers similarly have the access keys of the production instances (gulp!).
I did some extensive searching on the net but couldn't really find anyone talking about a sensible way to manage this issue. How are people with a setup similar to ours handling it? We know our current way of working is wrong. The question is: what is the right way?
Appreciate your help!
Thanks
[Update]
Our situation is complicated by the fact that at least the build server needs to be accessible from an external server (specifically, GitHub). We are using Jenkins, and the post-commit hook needs a publicly accessible URL. The bastion approach suggested by #rook fails in this situation.
A very good method of handling access to a collection of EC2 instances is using a Bastion Host.
All machines you use on EC2 should disallow SSH access from the open internet, except for the bastion host. Create a new security group called "Bastion Host", and allow port 22 incoming to all other EC2 instances only from the bastion. All keys used by your EC2 fleet are housed on the bastion host. Each user has their own account on the bastion host, and these users should authenticate to the bastion using a password-protected key file. Once they log in, they have access to whatever keys they need to do their job. When someone is fired, you remove their user account from the bastion. If a user copies keys from the bastion, it won't matter, because they can't log in to the other instances unless they are first logged into the bastion.
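A minimal boto3 sketch of that security-group arrangement; both group IDs are hypothetical. The app servers' group accepts port 22 only from members of the bastion's group:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

BASTION_SG = "sg-0bastion1111111"   # hypothetical bastion security group
SERVERS_SG = "sg-0servers2222222"   # hypothetical group for all other instances

# Allow SSH into the app servers only from members of the bastion's group.
ec2.authorize_security_group_ingress(
    GroupId=SERVERS_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "UserIdGroupPairs": [{"GroupId": BASTION_SG}],
    }],
)
# The bastion's own group is then the only one that allows port 22
# from the internet (ideally restricted to your office IP range).
```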
Create two sets of key pairs, one for your staging servers and one for your production servers. You can give your developers the staging keys and keep the production keys private.
I would put the new builds onto S3 and have a Perl script running on the boxes to pull the latest code from your S3 buckets and install it on the respective servers. This way, you don't have to manually scp all the builds every time. You can also automate this process using some sort of continuous-build automation tool that builds and dumps the artifacts onto your S3 buckets. Hope this helps.
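A sketch of that pull script, in Python rather than Perl for illustration; the bucket name, key, and paths are all assumptions:

```python
import subprocess
import boto3

# Hypothetical bucket, key, and paths; adjust to your own layout.
BUCKET = "my-build-artifacts"
KEY = "myapp/latest/build.tar.gz"
LOCAL = "/tmp/build.tar.gz"
DEPLOY_DIR = "/opt/myapp"

s3 = boto3.client("s3")
s3.download_file(BUCKET, KEY, LOCAL)                   # pull the latest build
subprocess.run(["tar", "xzf", LOCAL, "-C", DEPLOY_DIR], check=True)
```

A cron entry or systemd timer on each box keeps this running periodically, so no server ever needs another server's SSH key; each instance only needs IAM permission to read the artifact bucket.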