Managing inter-instance access on EC2 - security

We are in the process of setting up our IT infrastructure on Amazon EC2.
Assume a setup along the lines of:
X production servers
Y staging servers
Log collation and Monitoring Server
Build Server
Obviously we need the various servers to talk to each other. A new build needs to be scp'd over to a staging server; the log collator needs to pull logs from the production servers. We are quickly realizing we are running into trouble managing access keys. Each server has its own key pair and possibly its own security group, and we have ended up copying *.pem files from server to server, making a mockery of security. The build server holds the access keys of the staging servers in order to connect via ssh and push a new build, and the staging servers similarly hold the access keys of the production instances (gulp!)
I did some extensive searching on the net but couldn't really find anyone discussing a sensible way to manage this issue. How are people with a setup similar to ours handling it? We know our current way of working is wrong. The question is: what is the right way?
Appreciate your help!
Thanks
[Update]
Our situation is complicated by the fact that at least the build server needs to be accessible from an external server (specifically, GitHub). We are using Jenkins, and the post-commit hook needs a publicly accessible URL. The bastion approach suggested by #rook fails in this situation.

A very good method of handling access to a collection of EC2 instances is using a Bastion Host.
All machines you use on EC2 should disallow SSH access from the open internet, except for the bastion host. Create a new security group called "Bastion Host", and allow port 22 incoming only from the bastion to all other EC2 instances. All keys used by your EC2 collection are housed on the bastion host, and each user has their own account on it. These users should authenticate to the bastion using a password-protected key file, and once they log in they have access to whatever keys they need to do their job.

When someone is fired, you remove their user account from the bastion. If a user copies keys off the bastion it won't matter, because those keys can't be used to log in to the instances except from the bastion itself.
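For illustration, a minimal sketch of that security group setup with the AWS CLI (the group names and IDs below are hypothetical):

    # Security group for the bastion itself: SSH allowed from the internet
    aws ec2 create-security-group --group-name bastion-host \
        --description "Bastion host for SSH access"
    aws ec2 authorize-security-group-ingress --group-name bastion-host \
        --protocol tcp --port 22 --cidr 0.0.0.0/0

    # On the group shared by all other instances, allow port 22 only
    # from members of the bastion's security group (IDs are made up)
    aws ec2 authorize-security-group-ingress --group-id sg-22222222 \
        --protocol tcp --port 22 --source-group sg-11111111

With that in place, a key stolen off the bastion is useless from the open internet, because nothing but the bastion can reach port 22 on the other instances.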

Create two sets of key pairs, one for your staging servers and one for your production servers. You can give your developers the staging keys and keep the production keys private.
I would put the new builds on S3 and have a perl script running on the boxes to pull the latest build from your S3 buckets and install it on the respective servers. This way, you don't have to manually scp every build over. You can also automate the producing side using some sort of continuous-build tool that builds and drops the artifact into your S3 buckets. Hope this helps.
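As a minimal sketch of the pull side (the bucket name and paths here are invented), a shell equivalent of that script might look like:

    #!/bin/sh
    # Fetch and unpack the latest build artifact from S3.
    # Assumes the AWS CLI is installed and the instance can read the
    # bucket, ideally via an IAM instance role rather than copied keys.
    BUCKET=s3://my-build-artifacts          # hypothetical bucket
    DEST=/opt/myapp/releases                # hypothetical install path

    aws s3 cp "$BUCKET/latest/myapp.tar.gz" /tmp/myapp.tar.gz
    mkdir -p "$DEST"
    tar -xzf /tmp/myapp.tar.gz -C "$DEST"

If the instances run under an IAM instance role with read access to the bucket, no long-lived credentials have to be copied onto the boxes at all, which also helps with the key-distribution problem in the original question.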

Related

Azure ElasticSearch config file and how to add security

I installed the Azure plugin for Elasticsearch according to this tutorial — the Azure Elasticsearch quickstart, which uses the template from here:
github.com/Azure/azure-quickstart-templates/tree/master/elasticsearch
After it was deployed, I was able to connect to Kibana from the tutorial link above. If I'd like to add security to Elasticsearch on Azure, how would that be possible?
Furthermore, how do I access elasticsearch.yml to customise the config further?
I tried to access the VMs, but there are only two IPs I can reach from the Azure portal: the jumpbox and the Kibana public IP.
I tried searching the /etc/ folder but didn't see an elasticsearch folder after I remoted into the server.
Please see this photo for the IP in Azure Portal.
I am also very new to ARM (Azure Resource Manager), and the deployment now consists of multiple server nodes connected together. It would be best if someone could explain how Elasticsearch is installed across them. As far as I know, the master node assigns tasks to the data nodes after a request is handled by the client node.
The Elasticsearch version is 2.3.1.
Please help.
Once you use the quickstart to install your cluster (a single node, by the sound of it), you are in complete control.
In the case of the template, the jumpbox exists as an access point to pivot into the rest of the cluster. This way you can avoid ever giving your Elasticsearch instances a public IP address, thereby reducing the chance of a drive-by attack on your cluster -- because it's never exposed! For what it's worth, this is a pretty common strategy for operational isolation.
So, to get started, you should be able to SSH into the jumpbox, and from there use the private address of the Elasticsearch VM to SSH into it:
SSH into jump box
SSH into the rest of the private VMs
Once you have done that, you should be able to access the elasticsearch.yml file.
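As a sketch (the user name and private IP below are made up):

    # From your workstation, hop through the jumpbox to a private node.
    # -J (ProxyJump) needs a recent OpenSSH client; on older clients,
    # SSH into the jumpbox first and then ssh onward from there.
    ssh -J azureuser@jumpbox-public-ip azureuser@10.0.0.4

    # On the Elasticsearch VM, the config file for a package install
    # is typically /etc/elasticsearch/elasticsearch.yml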
How do you add security? The only official way to install security in Elasticsearch is to use the Shield plugin. This allows you to encrypt communication to/from Elasticsearch, as well as provide authentication.
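For Elasticsearch 2.x (the version in the question), installing Shield looks roughly like this on each node; the paths assume a standard package install:

    cd /usr/share/elasticsearch

    # Shield requires the license plugin first on 2.x
    bin/plugin install license
    bin/plugin install shield

    # Restart the node, then create an admin user with the esusers tool
    sudo service elasticsearch restart
    bin/shield/esusers useradd es_admin -r admin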
Elastic, the company behind Elasticsearch and Kibana, has its own Azure Quick Start for Elasticsearch that does most of what the template you used does, but it also adds security. It may prove easier to delete the old cluster and start fresh from there.

Azure VM - why do FTP transfers lead to a complete disconnect?

I have a virtual machine with the FTP server configured.
I'm transferring files in ACTIVE mode and at a random file I get disconnected.
I cannot reconnect to the FTP server nor connect remotely to the machine.
I have to restart the machine and wait a while to regain access.
What can I do in this situation to prevent the complete disconnect?
I ended up using passive mode, even though it does not suit me, because active mode kept failing.
You need more than just those two ports open. By design, FTP (whether passive or active) transfers data over a randomised range of ports (see: http://slacksite.com/other/ftp.html), which presents a problem behind a stateless service like Azure's load balancing, where endpoints must be explicitly opened. This setup guide shows how to achieve what you want on an Azure VM: http://itq.nl/walkthrough-hosting-ftp-on-iis-7-5-a-windows-azure-vm-2/ (it is also linked from the SO post referenced by Grady).
You most likely need to open the FTP endpoint on the VM. This answer will give you some background on how to add endpoints: How to Setup FTP on Azure VM
You can also use PowerShell to add an endpoint: Add Azure Endpoint
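For what it's worth, with the classic (service-management) Azure CLI, opening a passive port range as endpoints might look roughly like the loop below; the VM name and range are made up, and the FTP server must be configured to use the same passive range:

    # Open TCP 7000-7014 as endpoints for passive FTP data connections
    # (hypothetical VM name and range; old ASM-mode CLI syntax)
    for port in $(seq 7000 7014); do
        azure vm endpoint create myftpvm "$port" "$port"
    done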

Is there a way to use IAM to manage developer access to an EC2 instance? (ssh not ec2 api)

Not the EC2 REST API or the online console, but a way to manage individual SSH or FTP access to a server?
What you are looking for is a Linux Pluggable Authentication Module (PAM) that talks to the AWS IAM service.
This is not available out of the box in an image, but have a look here:
https://github.com/denismo/aws-iam-ldap-bridge
This project allows you to sync an LDAP server with IAM, and then you can configure your sshd to use the LDAP server.
That might work for you.
You can use either of these two projects: https://github.com/widdix/aws-ec2-ssh or https://github.com/kislyuk/keymaker
They synchronise IAM users to local user accounts and can pull in each user's SSH public keys. Both rely on a cron job to keep things up to date.
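Both projects lean on the same building blocks: IAM can store SSH public keys per user (the feature CodeCommit uses), and sshd can look keys up at login time via AuthorizedKeysCommand. A stripped-down sketch of that mechanism (not either project's actual script; the path and role permissions are assumptions):

    #!/bin/sh
    # /opt/iam-authorized-keys.sh -- sshd passes the login name as $1
    # and matches the offered key against whatever this prints.
    # Assumes the instance role can call iam:ListSSHPublicKeys and
    # iam:GetSSHPublicKey, and that the AWS CLI is installed.
    user="$1"
    for key_id in $(aws iam list-ssh-public-keys --user-name "$user" \
            --query 'SSHPublicKeys[?Status==`Active`].SSHPublicKeyId' \
            --output text); do
        aws iam get-ssh-public-key --user-name "$user" \
            --ssh-public-key-id "$key_id" --encoding SSH \
            --query 'SSHPublicKey.SSHPublicKeyBody' --output text
    done

Wired up in /etc/ssh/sshd_config with something like:

    AuthorizedKeysCommand /opt/iam-authorized-keys.sh
    AuthorizedKeysCommandUser nobody

(The script must be owned by root and not group/world-writable, or sshd will refuse to run it.)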
No, IAM is meant to control access to EC2 resources, and logging into an instance via SSH does not qualify as such. Anyone who has the .pem key can log into the instance (presuming SSH access is allowed in the security groups).

Multiple Web Sites/Roles on Azure, Impact of staging server

I'm looking to set up two web roles or websites on my Azure Cloud Service.
The websites need to share the same database schema. I use NHibernate ORM, so I have to make sure that both projects are always using the same data model, or else it will cause major problems.
I've researched setting up multiple websites on a single web role (which seems odd to me: can't I just run multiple web roles, each with a single site?).
http://msdn.microsoft.com/en-us/library/windowsazure/gg433110.aspx
Like any good developer, I use a staging server. If I have to manually set the domain name in configuration files, how will Azure know not to send people who visit that domain to the staging server?! I.e., if they visit blah.foo.com and I have two deployments (staging and production), is IIS going to know to send people only to the production environment?
Please advise on the best way to go about doing this.
First, you can certainly have multiple web roles, each with a single site; however, each role instance will be deployed to a different virtual machine. For example, if you set up two web roles and deploy with one instance each, there are two virtual machines you'll be paying for. If you want the SLA to apply to your deployment you'd need to set the instance count to 2 for each web role, which now means you have four virtual machines running. By combining websites onto the same web role you cut down on the number of instances you need to run and still get the SLA; however, that option is not without some considerations. The link you provided shows how to set up multiple websites to run on the same virtual machine when deployed. Note that there are some gotchas with that method; I'd suggest reading Michael Collier's Tips for Publishing Multiple Sites in a Web Role.
Second, if you do NOT need a lot of control over the virtual machine (such as registering special components, etc.) you might want to look at Windows Azure Web Sites as an option. You can elect to take one of the paid tiers of Web Sites and still have dedicated machines, but deploy the websites separately. I will say, though, that your requirement of keeping both sites in lock step because they share the underlying database schema makes it less likely you will want to deploy separate changes, but it is still possible.
Finally, regarding the staging server: if you are testing locally, you'll want to modify your hosts file so the host names point to your local address. Wade Wegner has a post on Running Multiple Websites in a Windows Azure Web Role. Once you deploy to Windows Azure you'd want to change your hosts file back, or comment the entries out. If you are using the actual Staging deployment slot, you can use the same hosts-file trick to point at the IP address of the staging deployment when testing.
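For example, while testing you might temporarily point the production host name at your local machine (on Windows the file is C:\Windows\System32\drivers\etc\hosts):

    # Send blah.foo.com to the local machine while testing; remove or
    # comment this line out again before relying on real DNS
    echo "127.0.0.1 blah.foo.com" | sudo tee -a /etc/hosts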

How to restrict user access to only deployments via Capistrano? (deployment workflow issue)

I'm looking for good practices for deploying with Capistrano.
I would like to start out with a short description of how I used to do deployments.
Capistrano is installed locally on each developer's computer. I deploy through a gateway using the Capistrano option :gateway. At first I thought that with the :gateway option I would only need an SSH connection to the gateway host, but it turns out I need an SSH connection (public key) to all hosts I want to deploy to.
I would like to find a convenient and secure way to deploy application.
For example, when a new developer starts working, it is much more convenient to put his public key only on the gateway server and not on all the application servers. On the other hand, I don't want him to have general SSH access to the servers, in particular a shell on the gateway; since he is a developer, he only needs to do deployments.
If you are aware of good practices for deploying with capistrano, please, let me know.
Create dedicated user accounts for every developer on the gateway machine as well as on the rest of the servers; this you will have to do using the facilities your OS and SSH give you. Make sure the developer accounts don't have the ability to log in to the gateway with an interactive shell, etc.
I can't provide you with all the details, but I think I have pointed you in the right direction. You can ask on Server Fault for the details of how to allow a user to log in and perform only certain tasks on the server.
Digression/opinion: it's better to have developers whom you trust doing the deployments. If you do not trust a dev, better not to let him do crucial things like deploying to a production server.
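One way to approximate this on the gateway, sketched under the assumption of a reasonably recent OpenSSH (the account name and key are placeholders): give each developer an account whose key can only be used for the TCP forwarding that Capistrano's :gateway option needs, never for an interactive shell.

    # On the gateway: a deploy-only account for one developer
    sudo adduser --disabled-password alice
    sudo mkdir -p /home/alice/.ssh

    # 'restrict' switches everything off; 'port-forwarding' re-enables
    # just the tunnelling that the :gateway option relies on
    echo 'restrict,port-forwarding ssh-rsa AAAA... alice@laptop' \
        | sudo tee -a /home/alice/.ssh/authorized_keys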
