Amazon Elastic Beanstalk: how to change the RDS endpoint

I have 2 different Elastic Beanstalk environments and want them both to connect to the same RDS instance, but I can't find any way to change an environment's RDS endpoint.
I can do it at the PHP level with an if-statement, but I want to change the $_SERVER['RDS_HOSTNAME'] variable itself.
Does anyone know how to do this?


Setting up ELB on AWS for Node.js

I have a very basic question but I have been searching the internet for days without finding what I am looking for.
I currently run one instance on AWS.
That instance has my node server and my database on it.
I would like to make use of ELB by separating the one machine that hosts both the server and the database:
One machine that is never terminated, which hosts the database
One machine that runs the basic node server, which is also never terminated
A policy to deploy (and subsequently terminate) additional EC2 instances that run the server when traffic demands it.
First of all I would like to know if this setup makes sense.
Secondly, I am very confused about the way this should work in practice:
Do all deployed instances run using the same volume, or is a snapshot of the volume used?
In general, how do I set up such a system? Again, I searched the web, and all of the tutorials and documentation are so generalized that I cannot figure out exactly what to do in my case.
Any tips? Links? Articles? Videos?
Thank you!
You would have an AutoScaling Group with a minimum size of 1 that is configured to use an AMI based on your NodeJS server. The AutoScaling Group would add/remove instances to the ELB as instances are created and deleted.
EBS volumes cannot be attached to more than one instance at a time. If you need a shared disk volume, you would need to look into the EFS service.
Yes, you need to move your database onto a separate server that is not a member of the AutoScaling Group.
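For illustration, here is a minimal sketch of that setup using the AWS SDK for JavaScript (v2). The region, AMI ID, instance type, ELB name, and availability zones are all placeholders you would replace with your own values:

```javascript
// Sketch: a launch configuration built from an AMI of the Node server,
// plus an AutoScaling Group (min size 1) attached to an existing ELB.
// All names and IDs below are placeholders.
const AWS = require('aws-sdk');
const autoscaling = new AWS.AutoScaling({ region: 'us-east-1' });

async function createGroup() {
  // Launch configuration based on the AMI baked from your Node server
  await autoscaling.createLaunchConfiguration({
    LaunchConfigurationName: 'node-server-lc',
    ImageId: 'ami-0123456789abcdef0', // placeholder AMI ID
    InstanceType: 't2.micro',
  }).promise();

  // The group keeps at least one instance running and registers
  // instances with the ELB; scaling policies then add/remove
  // instances as traffic demands.
  await autoscaling.createAutoScalingGroup({
    AutoScalingGroupName: 'node-server-asg',
    LaunchConfigurationName: 'node-server-lc',
    MinSize: 1,
    MaxSize: 4,
    LoadBalancerNames: ['my-node-elb'], // placeholder classic ELB name
    AvailabilityZones: ['us-east-1a', 'us-east-1b'],
  }).promise();
}

createGroup().catch(console.error);
```

Scaling policies (e.g. on average CPU) would then handle the "deploy and subsequently terminate additional EC2 instances" part of your question.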

How to configure the security group on AWS to run node app

I am new to AWS. I want to configure three VMs on AWS to run one node.js app.
I want to set up three VMs to run MongoDB, Memcached and node separately.
The question description says that you "should also carefully configure the security groups inside of AWS, so that only your node instance can access your mongo and mcd instances, and so that your node instance is only reachable on port 8080."
When I try to set up the security groups, I get really confused. Can somebody tell me how to configure this?
PS: I wanted to comment on the OP's question, but I can't as I don't have enough reputation.
You need to go through some of the AWS docs to understand this. If you are building an enterprise-level app, you will want to look into the AWS documentation on security groups and on how to set up your architecture on AWS securely.
Security groups are rules applied at the instance level; think of them as a firewall for your instances. In your case, open the MongoDB port (27017/18 TCP) and the Memcached port (11211 TCP) only to your node instance, since only node needs to connect to mongodb and memcached, and expose the node instance itself only on port 8080. You can also set up a NAT if you want to keep your backend instances in a private subnet.
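To make that concrete, here is a hedged sketch using the AWS SDK for JavaScript (v2). The security group IDs are placeholders; the key idea is to reference the node instance's security group as the traffic source instead of an IP range:

```javascript
// Sketch: lock MongoDB and Memcached down to the node instance's
// security group, and expose the node instance only on port 8080.
// All sg-... IDs are placeholders.
const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-east-1' });

const NODE_SG = 'sg-11111111';  // node instance's group (placeholder)
const MONGO_SG = 'sg-22222222'; // mongo instance's group (placeholder)
const MCD_SG = 'sg-33333333';   // memcached instance's group (placeholder)

// Helper: allow one TCP port into `groupId`, either from another
// security group or from a CIDR range.
function allow(groupId, port, source) {
  return ec2.authorizeSecurityGroupIngress({
    GroupId: groupId,
    IpPermissions: [{
      IpProtocol: 'tcp',
      FromPort: port,
      ToPort: port,
      ...(source.cidr
        ? { IpRanges: [{ CidrIp: source.cidr }] }
        : { UserIdGroupPairs: [{ GroupId: source.sg }] }),
    }],
  }).promise();
}

async function configure() {
  await allow(MONGO_SG, 27017, { sg: NODE_SG });     // mongo <- node only
  await allow(MCD_SG, 11211, { sg: NODE_SG });       // memcached <- node only
  await allow(NODE_SG, 8080, { cidr: '0.0.0.0/0' }); // node <- world, 8080 only
}

configure().catch(console.error);
```

The same three rules can be created by hand in the EC2 console; the important part is that the mongo and memcached rules name the node security group as their source.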

Using MongoDB with an AWS Elastic Beanstalk application

I have an Elastic Beanstalk application running (set up with NodeJS) and I wondered what the best way to integrate MongoDB would be. As of now, all I've done is ssh into my eb instance (with the eb cli) and install mongodb. My main worry comes from the fact that the mongo db exists on my instance. As I understand it, that means that my data will almost certainly be lost as soon as I terminate my instance. Is that correct? If so, what is the best way to go about hooking an EB app up to MongoDB? Does AWS support that natively, without having to go rent a DB on a dedicated server?
You definitely do NOT want to install MongoDB on an Elastic Beanstalk instance.
You have a couple of options for running MongoDB on AWS. You can install it yourself on some EC2 servers (NOT Elastic Beanstalk servers) and handle all of the management yourself. The other option is to use mLab (previously MongoLab), a managed MongoDB-as-a-Service provider that works on AWS as well as other cloud services. Using mLab you can easily provision a MongoDB database in the same AWS region as your Elastic Beanstalk servers.
Given the steps involved in setting up a highly-available MongoDB cluster on AWS, I generally recommend using mLab instead of trying to handle it yourself. Just know that the free tier is not very performant, and you will want to upgrade to one of the paid plans for your production database.
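For the application side, a minimal sketch assuming the 3.x `mongodb` driver and an Elastic Beanstalk environment property holding the mLab connection string (the property name `MONGODB_URI` is just a convention you would choose yourself):

```javascript
// Sketch: read the mLab connection string from an environment property
// set on the Elastic Beanstalk environment, so no credentials live in
// the code. MONGODB_URI is an assumed name; mLab gives you the actual
// mongodb:// URI when you provision a database.
const { MongoClient } = require('mongodb');

const uri = process.env.MONGODB_URI; // e.g. mongodb://user:pass@xxxx.mlab.com:port/dbname

MongoClient.connect(uri, (err, client) => {
  if (err) throw err;
  const db = client.db(); // uses the database named in the URI
  console.log('Connected to', db.databaseName);
  client.close();
});
```

Keeping the URI in an environment property also means each Elastic Beanstalk environment (staging, production) can point at a different database without code changes.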
Been there, done that. As @MarkB suggested, it'd be a lot easier to use a SaaS instead.
AWS by itself doesn't have native MongoDB support, but depending on your requirements you could find a solution with little or no extra cost (besides the EC2 price) on the AWS Marketplace. These images are vendors' pre-configured, production-ready AMIs of popular tools like MongoDB.

Best way to avoid a single point of failure with an elasticsearch cluster and a web server cluster

We have a web application running on AWS with the following architecture:
1 elasticsearch cluster with 2 data nodes
1 auto-scaling load-balanced cluster of web servers
As elasticsearch does some clever internal load balancing we could just point all the web servers at one of the data nodes. But this would create a single point of failure - if that node goes down then I'm not going to get any query results.
My solution thus far has been to run elasticsearch on each web server as a non-data node. Each web server queries its local elasticsearch node, which in turn farms the request out to one of the data nodes. This seems to be the suggested approach on the elasticsearch website.
This is great in that if one of the data nodes fails in some way, we don't lose the ability to serve search queries. However, it does mean elasticsearch is using resources on each web server, and if we migrate to using Elastic Beanstalk (which I'm keen to do) then we'll need to somehow get elasticsearch installed on our web instances. EDIT: I've succeeded with this now, but have yet to figure out how to specify a different config for each environment.
Is there another way to avoid a single point of failure without having elasticsearch running on each web server?
I thought about putting a load balancer in front of the data nodes to serve queries from the web servers, but that would mean opening the cluster up to public access unless we also set up a VPC to restrict access.
Is there a simpler solution I'm missing?
I don't think this directly answers your question, but if you are still ok with running ES on your web server nodes, you can customize the software that is installed using the .ebextensions mechanism, which allows you to run scripts and/or install packages when new Elastic Beanstalk instances are started up. If this isn't sufficient you can start your Elastic Beanstalk instances using a custom AMI.
Also, you may not be aware that you can run Elastic Beanstalk in a VPC.
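On the querying side, for reference, here is a minimal sketch assuming the legacy `elasticsearch` npm client (host names are placeholders). Option A mirrors the setup described in the question; option B is an alternative that avoids running elasticsearch on the web servers by giving the client every data node and letting it rotate between them:

```javascript
const elasticsearch = require('elasticsearch');

// Option A: each web server queries its local non-data node, which
// farms the request out to the data nodes.
const localClient = new elasticsearch.Client({
  host: 'localhost:9200',
});

// Option B: no local node; list every data node and let the client
// round-robin between them and route around a dead node.
const multiClient = new elasticsearch.Client({
  hosts: ['es-data-1:9200', 'es-data-2:9200'], // placeholder hostnames
  sniffOnConnectionFault: true, // re-discover live nodes on failure
});

// Example query (works against either client)
multiClient.search({
  index: 'myindex',
  body: { query: { match_all: {} } },
}).then((resp) => console.log(resp.hits.total));
```

Note that option B still requires the web servers to reach the data nodes directly, so it does not remove the need for sensible security groups or a VPC.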

Keeping Elasticsearch alive on an Amazon EC2 Linux instance

I have Elasticsearch running on a Linux instance on Amazon EC2. I use Tunnelier to connect to the instance. I'm new to EC2 and Tunnelier (I'm more familiar with Windows Server and Remote Desktop). The problem is that when I disconnect the Tunnelier console, my Elasticsearch server is no longer available to clients connecting to it. I would like to know how to keep the Elasticsearch server alive, serving client requests, without my having to keep a Tunnelier session active.
I guess I didn't ask this properly. Anyway, I found the answer here: http://www.elasticsearch.org/tutorials/2011/08/22/elasticsearch-on-ec2.html. Really, really helpful. Thanks a million to the author. It helped me set up Elasticsearch as a service on EC2.
