I am deploying to Amazon EC2 through the Elastic Beanstalk deployment process in Visual Studio. All is working well, except that the deployed application does not have write permission by default, so I had to Remote Desktop into each machine and grant write permission manually through the IIS site's permissions.
How can I automate this step, given that Amazon adds servers to the load balancer through auto-scaling?
Or, if I change one instance, will the instances that follow copy exactly what I did manually?
I am a little confused; this is my first time deploying. Please help.
Thanks
Yes, you can use an ebextensions config to set permissions on the directory after the instance spins up. Here is an example of someone creating a directory and setting permissions on it; you should be able to adapt it to your circumstances:
AWS Beanstalk ebextensions on windows
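For reference, a minimal sketch of such a config. The file name and the folder path are assumptions; point it at whatever folder your application actually writes to:

    # .ebextensions\01-permissions.config  (hypothetical file name)
    # Runs after the app bundle is extracted, on every deploy and every
    # new auto-scaled instance, so the fix no longer has to be manual.
    container_commands:
      01_grant_write:
        # Grant the IIS worker identity modify rights on the target folder.
        command: icacls "C:\inetpub\wwwroot\App_Data" /grant "IIS_IUSRS:(OI)(CI)M"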
We are pretty new to AWS and looking to deploy multiple services into one EC2 instance.
Each micro-service is developed in its own repository.
Each service will have its own endpoint URL
Services may talk to each other
Services can be updated/deployed separately
Do we need a beanstalk for each? I hope not.
Thank you in advance
The way we tackled a similar issue at our workplace was to leverage the multi-container Docker platform supported by Elastic Beanstalk in most AWS regions.
In brief: we had dedicated repositories for each of our services in ECR (Elastic Container Registry), to which the different "versioned" images were pushed by a deploy script.
Once that is configured and set up, all you need to deploy is a Dockerrun.aws.json file, which lists all the apps you want to run as part of the Docker cluster on a single EC2 instance (make sure it is big enough to handle multiple applications). This file is also where you declare the links between applications (so they can talk to one another), port mappings, logging drivers and groups (yes, we used AWS CloudWatch for logging), and many other fields. The JSON is very similar to the docker-compose.yml you would use to bring up your stack for local development and testing. A trimmed-down sketch follows below.
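Here is that sketch of a version-2 Dockerrun.aws.json. The service names, ECR registry URL, ports, memory limits, and log group are all placeholders; substitute your own:

    {
      "AWSEBDockerrunVersion": 2,
      "containerDefinitions": [
        {
          "name": "users-service",
          "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/users-service:1.0.0",
          "essential": true,
          "memory": 256,
          "portMappings": [
            { "hostPort": 3000, "containerPort": 3000 }
          ],
          "logConfiguration": {
            "logDriver": "awslogs",
            "options": {
              "awslogs-group": "my-app-logs",
              "awslogs-region": "us-east-1"
            }
          }
        },
        {
          "name": "orders-service",
          "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/orders-service:1.0.0",
          "essential": true,
          "memory": 256,
          "portMappings": [
            { "hostPort": 3001, "containerPort": 3001 }
          ],
          "links": ["users-service"]
        }
      ]
    }

The "links" entry is what lets orders-service resolve users-service by name, much like links in docker-compose.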
I would suggest checking out the sample configuration that Amazon provides for more information. I also found the Docker documentation to be pretty helpful in this regard.
Hope this helps!!
It is not clear whether you have a particular tool in mind, but whatever tool you use to deploy a single micro-service should work the same way for several.
How does one deploy multiple micro-services in Node on a single AWS EC2 instance?
Each micro-service is developed in its own repository.
Services can be updated/deployed separately
This should be the same as deploying a single micro-service. As long as each service runs on its own path and port, it should be fine.
Each service will have its own endpoint URL
You can use nginx as a reverse proxy that redirects requests from port 80 to the required port of each micro-service (see the sketch after these points).
Services may talk to each other
This again should not be an issue. You can either call the services directly by port number, or address them by fully qualified name and come back through nginx.
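A minimal sketch of that nginx setup, assuming two hypothetical services listening on ports 3000 and 3001; the paths and ports are placeholders:

    # /etc/nginx/conf.d/services.conf  (hypothetical file)
    server {
        listen 80;
        server_name example.com;

        # each micro-service gets its own URL path, proxied to its port
        location /users/ {
            proxy_pass http://127.0.0.1:3000/;
        }

        location /orders/ {
            proxy_pass http://127.0.0.1:3001/;
        }
    }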
I have deployed Node.js code on AWS Elastic Beanstalk, creating a new environment. The app is successfully deployed. I want to access the files. I used SSH to reach the remote machine, but I can't find the code.
Elastic Beanstalk places the deployed code in /var/app/current.
Note that you shouldn't be making changes on the Elastic Beanstalk server directly.
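For example (the environment name is a placeholder; eb ssh requires the EB CLI):

    # connect using the EB CLI, or plain ssh with the instance key pair
    eb ssh my-env

    # once on the instance, the deployed code lives here
    cd /var/app/current
    ls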
Adding to the previous answer: remember that you need to enable SSH access to your instances when launching the application. Otherwise, you won't be able to SSH into any AWS Elastic Beanstalk instance.
If you found this question but you're not using SSH, you can download the ZIP by clicking on a version in the console.
I use Amazon EC2 to host some web sites and databases.
I have a new developer joining me tomorrow.
If I create an IAM user and attach the AmazonEC2FullAccess policy (arn:aws:iam::aws:policy/AmazonEC2FullAccess, "Provides full access to Amazon EC2 via the AWS Management Console") to him,
will he be able to access secrets stored inside the Linux EC2 instances created in the past? Basically, does this policy somehow allow access to pre-created Linux instances?
EDIT: What if he/she attempts a disk-recovery procedure? For example, mounting the disk of a VM in a new EC2 instance.
When you give a user AmazonEC2FullAccess, he will be able to see all the EC2 instances in the AWS account. Even if you don't provide him the keys to the pre-created EC2 instances, he can take an AMI of a pre-created instance, launch it with a new key, and gain access to that instance.
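For example, with nothing more than that policy, a user could run something like the following (the instance, AMI, and key names are placeholders):

    # snapshot the target instance into a private AMI
    aws ec2 create-image --instance-id i-0123456789abcdef0 --name leaked-copy

    # launch that AMI with a key pair the user controls
    aws ec2 run-instances --image-id ami-0123456789abcdef0 \
        --instance-type t2.micro --key-name my-own-key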
He can also carry out the disk-recovery procedure you mention in your use case. So you have some of the options below:
Do not provide AmazonEC2FullAccess; instead, ask him what specification he needs for the server, launch the EC2 instance to that specification, and provide him jailed SSH user access to it.
Set up CloudTrail so that you can monitor the resources created by that user for any suspicious activity: https://aws.amazon.com/cloudtrail/
The third option: as you mentioned, he is a developer, so just provide him deployment and git access to the application running on the EC2 instance.
The IAM policy only gives someone access to the AWS EC2 API, where they can do things like create new instances, shut down existing instances, and so on. It does not give them access to log in to any EC2 server. For that you would need to give them the SSH key (for Linux) or password (for Windows) that was set up when the server was created.
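If you do want to grant API access without the ability to launch or snapshot instances, AWS ships a managed AmazonEC2ReadOnlyAccess policy; a hand-rolled minimal sketch of the same idea looks like this:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["ec2:Describe*"],
          "Resource": "*"
        }
      ]
    }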
I created a Node.js virtual machine instance on Google Cloud (Compute Engine). I also created 3 MongoDB instances on Compute Engine. Now I have pushed my local application to the Google Cloud repository. How do I link the app.js file to this server so it starts running the script and serving the files? I already changed the A record of the domain I registered with GoDaddy so its external IP is the same as that of the Node.js server I am running, but all I am getting is this page.
Bitnami developer here.
You need to copy your files to the remote repository, create the configuration files, and then include that configuration in Apache's configuration so it serves the application. It looks like you have done most of the work, but there may be an error in one of the steps. This guide will help you configure your application so you can access it properly:
https://wiki.bitnami.com/Applications/Bitnami_Custom_Node.js_Application
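If it helps, the usual pattern is an Apache proxy block along these lines (the app path and port are assumptions, and mod_proxy must be enabled):

    # hypothetical snippet to include in Apache's configuration
    # forwards requests under /myapp to the Node.js app on port 3000
    ProxyPass /myapp http://127.0.0.1:3000/
    ProxyPassReverse /myapp http://127.0.0.1:3000/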
I hope it helps.
Jota
We are in the process of setting up our IT infrastructure on Amazon EC2.
Assume a setup along the lines of:
X production servers
Y staging servers
Log collation and Monitoring Server
Build Server
Obviously we need the various servers to talk to each other. A new build needs to be scp'd over to a staging server. The log collator needs to pull logs from the production servers. We are quickly realizing we are running into trouble managing access keys. Each server has its own key pair and possibly its own security group. We are ending up copying *.pem files from server to server, which makes a mockery of security. The build server holds the access keys of the staging servers in order to connect via ssh and push a new build. The staging servers similarly hold the access keys of the production instances (gulp!).
I did some extensive searching on the net but couldn't really find anyone talking about a sensible way to manage this issue. How are people with a setup similar to ours handling it? We know our current way of working is wrong. The question is: what is the right way?
Appreciate your help!
Thanks
[Update]
Our situation is complicated by the fact that at least the build server needs to be accessible from an external server (specifically, GitHub). We are using Jenkins, and the post-commit hook needs a publicly accessible URL. The bastion approach suggested by #rook fails in this situation.
A very good method of handling access to a collection of EC2 instances is using a Bastion Host.
All machines you use on EC2 should disallow SSH access from the open internet, except for the bastion host. Create a new security group called "Bastion Host", and allow port 22 incoming to all other EC2 instances only from the bastion. All keys used by your EC2 collection are housed on the bastion host, and each user has their own account on it. These users should authenticate to the bastion using a password-protected key file. Once they log in, they should have access to whatever keys they need to do their job. When someone is fired, you remove their user account from the bastion. If a user copies keys off the bastion, it won't matter, because they can't log in to the other machines unless they are first logged in to the bastion.
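As an illustration, OpenSSH can route connections through the bastion automatically (the host names, user, key file, and internal subnet below are all placeholders):

    # ~/.ssh/config  (hypothetical)
    Host bastion
        HostName bastion.example.com
        User alice
        IdentityFile ~/.ssh/bastion-key.pem

    # internal instances are only reachable via the bastion
    Host 10.0.*.*
        ProxyJump bastion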
Create two sets of key pairs, one for your staging servers and one for your production servers. You can give your developers the staging keys and keep the production keys private.
I would put the new builds onto S3 and have a Perl script running on the boxes that pulls the latest code from your S3 buckets and installs it on the respective servers. This way, you don't have to scp every build over manually. You can also automate this process with some sort of continuous-build tool that builds and drops the build into your S3 buckets. Hope this helps.
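A rough shell equivalent of that pull-and-install script (the bucket name, paths, and service name are all placeholders):

    #!/bin/bash
    # pull the latest build artifact from S3 and install it
    aws s3 cp s3://my-build-bucket/myapp/latest.tar.gz /tmp/latest.tar.gz

    # unpack over the application directory and restart the service
    tar -xzf /tmp/latest.tar.gz -C /var/www/myapp
    sudo service myapp restart

Run it from cron, or trigger it from your build tool after each upload.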