I have a bare-bones setup on Amazon and want to know which approach is better coming out of the gate on a new site, where we anticipate occasional spikes of traffic (from tech press) before we gradually build up 'real' membership traffic to a reasonable level.
I'm currently toying with two starter options:
1) Do I have one Node app (micro EC2) pointing to a separate EC2 server running both redis-server and mongod (mounting one combined 10 GB EBS volume)?
Or
2) Do I have one Node app (micro EC2) running redis-server and mongod locally (but with two 10 GB EBS mounts, one for Redis and one for Mongo)?
If traffic went crazy (tech press etc.), which is easiest/fastest to scale to handle the spike? I anticipate roughly equal reads and writes for Mongo and Redis, by the way, and I have no caching (other than CloudFront for assets like images and some CSS).
I can't speak to Redis, but for MongoDB you'll want to be sure that you run on an instance with sufficient RAM to hold your "working set" of data in memory. "Working set" means, roughly, the full set of data that your application accesses frequently -- for instance, consider Twitter -- the working set of Twitter data is the most recent set of status updates across all users, as this is what is shown on web pages and what Twitter provides via its APIs. For your application, the definition of working set may differ.
Mongo uses memory-mapped files for data access, which means that its performance is great when there is enough memory to hold the data you are accessing frequently, and can degrade when there is not. If you expect your data set to grow beyond about 2.5 gigabytes, you will also want to ensure that you are on a 64-bit instance -- on 32-bit instances, Mongo is limited to around 2.5 gigabytes of data, due to the limited memory address space available on such a platform. For more on MongoDB on EC2, see the Mongo docs on EC2 deployment on the wiki.
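As a rough sanity check, you can compare your total data and index size against the instance's RAM. Here is a minimal sketch using the Node.js MongoDB driver; the connection string and database name are placeholders, and note that data plus indexes is only an upper bound on the true working set:

    // Rough upper bound on the working set: total data size plus index size.
    // The true working set (frequently accessed data) is usually smaller.
    const { MongoClient } = require('mongodb');

    async function checkDataSize() {
      const client = await MongoClient.connect('mongodb://localhost:27017');
      const stats = await client.db('mydb').command({ dbStats: 1 });
      const totalMB = (stats.dataSize + stats.indexSize) / (1024 * 1024);
      console.log('data + indexes: ' + totalMB.toFixed(1) + ' MB (compare to instance RAM)');
      await client.close();
    }

    checkDataSize().catch(console.error);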
I would also caution against using EC2 Micro instances in your production environment. The nature of Micros is that they have "burstable" but very limited CPU resources. If you get a spike of traffic due to tech press, it's likely that your application would be limited by EC2 to a very low amount of available CPU, which will cause performance to suffer. You can mitigate this to a certain extent with load balancing and many Micro instances, but it may be more cost-effective and less complex to simply use Large instances for both Mongo/Redis and your application servers.
You may want to have a look at this question since, IMO, the answer also applies to your situation:
Benefits of deploying multiple instances for serving/data/cache
I would never put mongod and redis-server on the same box. MongoDB is designed to tolerate swapping due to its use of memory-mapped files, and will generate swapping activity if the data cannot fit in RAM. Redis does not use data structures that are compatible with swapping (as MongoDB does with its btrees), and will become unresponsive if its memory is swapped out. Currently, there is no easy way to lock Redis in memory.
So I would put Redis and the app server on the same box, and isolate MongoDB on its own box.
Depending on the size of the data you want to store in Redis, I would pick a large or a small EC2 instance. Redis works well in 32-bit mode, but memory is limited. For MongoDB, a 64-bit box is almost mandatory. In any case, I would avoid micro instances like the plague.
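If you do end up running Redis near its memory limit anyway, one mitigation (not a substitute for a separate box) is to cap its memory so it evicts keys instead of growing into swap. A sketch using the node-redis v4 client; the 2gb cap and the eviction policy are assumptions to adjust for your data:

    // Cap Redis memory so it evicts LRU keys at the limit rather than
    // letting the OS swap it out (values are illustrative).
    const { createClient } = require('redis');

    (async () => {
      const client = createClient({ url: 'redis://localhost:6379' });
      await client.connect();
      await client.configSet('maxmemory', '2gb');
      await client.configSet('maxmemory-policy', 'allkeys-lru');
      await client.quit();
    })();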
Related
Consider this scenario:
I have multiple devclouds (remote workspaces for developers); they are all virtual machines running on the same bare-metal server.
In the past, each developer used their own MongoDB container running on Docker, so the number of MongoDB containers adds up to over 50 instances across the devclouds.
The problem is that while 50 instances are running at the same time, only 5 people are actually performing read/write operations against their own instances, so the other 45 running instances waste the server's resources.
Should I combine these into one MongoDB cluster for everyone, so that they all connect to a single endpoint (via the internal network), to avoid wasting resources?
I am considering a sharding strategy, but there is a chance that one node gets taken down (one VM shuts down); is that OK for availability (redundancy)?
I am pretty new to sharding and replication and look forward to your solutions. Thank you.
If each developer expects to have full control over their database deployment, you can't combine the deployments. Otherwise one developer can delete all data in the deployment, etc.
If each developer expects to have access to one database, you can deploy a single replica set serving all developers and assign one database per developer (via authentication).
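For example, a minimal sketch in the mongo shell of the one-database-per-developer setup, run as an administrator with auth enabled; the user, password, and database names are placeholders:

    // Run against the admin database as an administrator.
    // "alice" gets read/write access to her own database and nothing else.
    db.getSiblingDB('dev_alice').createUser({
      user: 'alice',
      pwd: 'change-me',
      roles: [{ role: 'readWrite', db: 'dev_alice' }]
    });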
Sharding in MongoDB sense (a sharded cluster) is not really going to help in this scenario since an application generally uses all of the shards. You can of course "shard manually" by setting up multiple replica sets.
Trying to deploy a project on a t3.large server with auto-scaling.
I have my Elasticsearch service deployed on the same system as the Node and React projects (not using AWS Elasticsearch).
Will it face issues in the future, and do I need to segregate the Elasticsearch service to some other server?
It's always nice to have a separate, dedicated server for running Elasticsearch, but as you are using AWS, here are some things you can do to minimize the issues:
Elasticsearch is a stateful application, in contrast to your Node and React apps (unless you are storing state there as well, which is not a good idea). Due to the stateless nature of those applications, autoscaling is very useful, as you can scale instances up or down on demand based on CPU, memory, or other metrics.
But in the case of Elasticsearch or other stateful applications it becomes tricky: when you scale instances up or down, shards get relocated if their node is not reachable within a threshold, which can lead to an unbalanced Elasticsearch cluster.
Now in order to minimize these issues:
Make sure you are storing Elasticsearch indices on a network-attached disk, so that there is no data loss when autoscaling brings up a new instance; the new instance should again use the earlier network-attached EBS volume (where your data is stored).
Make sure you don't create a new Elasticsearch process when you scale the instances up or down according to your autoscaling policy; the set of Elasticsearch processes should stay fixed and be scaled up/down with some manual intervention.
If you have to scale up the Elasticsearch cluster, make sure you disable shard allocation first to avoid the issues mentioned earlier, as sketched below.
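For reference, a hedged sketch of toggling allocation with the official Elasticsearch JavaScript client (7.x-style API; the host is a placeholder):

    // Temporarily restrict shard allocation before scaling, then restore it.
    const { Client } = require('@elastic/elasticsearch');
    const client = new Client({ node: 'http://localhost:9200' });

    async function disableAllocation() {
      await client.cluster.putSettings({
        body: {
          transient: { 'cluster.routing.allocation.enable': 'primaries' }
        }
      });
    }

    async function enableAllocation() {
      await client.cluster.putSettings({
        body: {
          transient: { 'cluster.routing.allocation.enable': 'all' }
        }
      });
    }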
These are some known issues you might face, and there could be even more depending on your configuration. While writing this answer I felt it would be so much easier to just have a dedicated instance for Elasticsearch and avoid these weird issues.
I would add the following to the other answers:
Elasticsearch performs best when it has enough RAM to keep its indexes entirely in memory. If Elasticsearch is competing with the Node application for RAM, that will affect its performance.
From a maintenance/performance perspective you should consider at least a 3-node cluster, even if that means smaller machines. If AWS is upgrading infrastructure and you have one machine, when that 0.05% unavailability hits, your search is down. If you need to do maintenance or upgrades on a node, having multiple machines will help with availability.
Depending on your use of Elasticsearch and how often you update/delete items in the indexes, and how fast your indexes will grow, adding more machines/nodes to the cluster will help with growth.
There are probably many more things to consider, but that totally depends on your application, budget, SLAs etc.
I am running a node app on a Digital Ocean cloud server, and the app merely services API requests. All client-side assets are served by a CDN, and the DB is accessed remotely, rather than stored on the server instance itself.
I have the choice of a greater number of vCPUs or RAM. I have no idea what that means in any way, so any feedback is a great help.
A single node.js server runs your Javascript on only one CPU, so having more CPUs won't make your Javascript run any faster unless you cluster your app and run multiple node.js processes sharing the load, or unless there are other processes on the same server that your server makes use of.
Having more RAM (memory) will only improve things if you actually need more RAM. That depends entirely upon what the memory usage profile is of your app and how much RAM you already have available. Probably, you would already know if you were running out of RAM because you either get drastic slow-down when the OS starts page swapping or your process crashes when out of memory.
So, in order to know which would benefit you more, you really need more data on how your existing app is performing (whether it ever gets bogged down with CPU-intensive operations, and how much RAM it uses compared to how much you have available). It is quite possible that neither will actually matter to you; it totally depends upon the usage profile of your server process.
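To gather that data, here is a minimal sketch that periodically logs the process's memory and CPU usage; the one-minute interval is arbitrary:

    // Log memory and CPU usage once a minute; compare rss against droplet RAM.
    setInterval(() => {
      const mem = process.memoryUsage();
      const cpu = process.cpuUsage(); // microseconds since process start
      console.log(
        'rss: ' + (mem.rss / 1048576).toFixed(1) + ' MB, ' +
        'heapUsed: ' + (mem.heapUsed / 1048576).toFixed(1) + ' MB, ' +
        'cpu: ' + ((cpu.user + cpu.system) / 1e6).toFixed(1) + ' s'
      );
    }, 60000).unref(); // don't keep the process alive just for this timer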
If you have no more data than this and have to make a choice, choose the vCPUs because there are some circumstances where it might help you (and gives you the option to go to clustering in the future if needed) whereas adding more RAM when you aren't even using what you already have won't help you at all.
Here's the setup:
ec2 micro instance
MySQL 5.6
Redis server
Node.js (express based app)
Nginx as a front-end reverse proxy.
It's slow. Very slow. I know it's a micro instance and you get what you pay for (considering it's free).
I even ended up using a swap file for MySQL, and it's slow to the point of being unusable. Should I spin up 2 medium instances (1 for the db/redis and one for the app server)? Or keep everything on one and upgrade it to a large instance?
Also, what should I be looking for? More RAM for MySQL and more CPU for the app server? Any input would be extremely helpful (especially those that have used a similar setup in the past).
Keep in mind that EC2 micro instances throttle the CPU. They can surge a bit, but if you place consistent CPU load on a micro instance it will throttle down. They are really designed for development. I've used micro instances as web servers before and have paid the price when they throttled down just as the load went up; they basically ground to a halt.
As to what you should use you'll really need to assess your own needs based on a combination of benchmarking and analysis on database size, working set size, number of users etc.
That said, if you intend to scale your app at all trying to keep everything on one virtualized server tends not to work well. EC2 currently has many different instance types optimized for different usage scenarios, variously emphasizing cpu, memory, local disk or network capacity. Scaling the node.js/nginx side of your app is very different than MySQL and Redis.
Personally (and it's just my opinion) I'd start with two smalls: MySQL and Redis on one, node and nginx on the other, and monitor memory, CPU and disk usage carefully. The great thing about EC2 (or any of the major cloud-based virtual instance providers) is the ease with which you can experiment and move to another instance type. To facilitate that, I'd definitely use an EBS volume for your database, as it makes it very easy to move later (not to mention backups via volume snapshots).
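As an illustration of those snapshot backups, a hedged sketch using the AWS SDK for JavaScript (v2); the volume ID and region are placeholders, and for a consistent snapshot you'd quiesce the database first:

    // Kick off an EBS snapshot of the database volume.
    // For MySQL, flush/lock tables (or stop writes) before snapshotting.
    const AWS = require('aws-sdk');
    const ec2 = new AWS.EC2({ region: 'us-east-1' });

    ec2.createSnapshot({
      VolumeId: 'vol-0123456789abcdef0', // your database EBS volume (placeholder)
      Description: 'nightly MySQL/Redis volume snapshot'
    }).promise()
      .then((snap) => console.log('started snapshot ' + snap.SnapshotId))
      .catch(console.error);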
OK, so I have an idea I want to pursue, but before I do I need to understand a few things fully.
Firstly, the way I think I'm going to go ahead with this system is to have three servers, described below:
The first server will be my web front end; this is the server that will be listening for connections and responding to clients. It will have 8 cores and 16 GB of RAM.
The second server will be the database server; pretty self-explanatory really: connect to the host and set/get data.
The third server will be my storage server; this is where downloadable files will be stored.
My first question is:
On my front end server, I have 8 cores, what's the best way to scale node so that the load is distributed across the cores?
My second question is:
Is there a system out there I can drop into my application framework that will allow me to talk to the other cores and pass messages around to save I/O?
And my final question:
Is there any system I can use to help move content from my storage server to the requester on the front-end server with as little overhead as possible? Speed is a concern here, as we would have 500+ clients downloading and uploading concurrently at peak times.
I have finally convinced my employer that node.js is extremely fast and is the latest in programming technology, and that we should invest in a platform for our intranet system, but he has requested detailed documentation on how this could be scaled across the hardware we currently have available.
On my front end server, I have 8 cores, what's the best way to scale node so that the load is distributed across the cores?
Take a look at the node.js cluster module, which is a multi-core server manager.
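A minimal sketch of cluster, forking one worker per core (the port is arbitrary):

    // Master forks one worker per CPU core; workers share the listening socket.
    const cluster = require('cluster');
    const http = require('http');
    const os = require('os');

    if (cluster.isMaster) {
      os.cpus().forEach(() => cluster.fork());
      cluster.on('exit', (worker) => {
        console.log('worker ' + worker.process.pid + ' died, restarting');
        cluster.fork();
      });
    } else {
      http.createServer((req, res) => {
        res.end('handled by worker ' + process.pid + '\n');
      }).listen(8000);
    }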
Firstly, I wouldn't describe the setup you propose as 'scaling', it's more like 'spreading'. You only have one app server serving the requests. If you add more app servers in the future, then you will have a scaling problem then.
I understand that node.js is single-threaded, which implies that it can only use a single core. Not my area of expertise on how to/if you can scale it, will leave that part to someone else.
I would suggest NFS mounting a directory on the storage server to the app server. NFS has relatively low overhead. Then you can access the files as if they were local.
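For instance, a sketch of streaming a file from an NFS-mounted directory to the client without buffering it in memory; the mount point is an assumption, and real code must sanitize the requested path:

    const http = require('http');
    const fs = require('fs');
    const path = require('path');

    const STORAGE_ROOT = '/mnt/storage'; // NFS mount of the storage server (assumed)

    http.createServer((req, res) => {
      // WARNING: illustrative only; guard against path traversal in real code.
      const filePath = path.join(STORAGE_ROOT, path.normalize(req.url));
      const stream = fs.createReadStream(filePath);
      stream.on('error', () => { res.statusCode = 404; res.end('Not found'); });
      stream.pipe(res); // backpressure-aware, low memory overhead
    }).listen(8080);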
Concerning your first question: use cluster (we already use it in a production system, works like a charm).
When it comes to worker messaging, I cannot really help you out, but your best bet is cluster too. Maybe there will be some functionality that provides "inter-core" messaging across all cluster workers in the future (I don't know the roadmap of cluster, but it seems like an idea).
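In the meantime, one way to approximate it is to relay messages through the master using the send/message channel cluster already provides. A minimal sketch (the worker count and payload are arbitrary):

    const cluster = require('cluster');

    if (cluster.isMaster) {
      // Relay any message a worker sends to all the other workers.
      const broadcast = (msg, fromId) => {
        Object.keys(cluster.workers).forEach((id) => {
          if (id !== String(fromId)) cluster.workers[id].send(msg);
        });
      };
      for (let i = 0; i < 2; i++) {
        const worker = cluster.fork();
        worker.on('message', (msg) => broadcast(msg, worker.id));
      }
    } else {
      process.on('message', (msg) => console.log(process.pid, 'got', msg));
      process.send({ hello: 'from ' + process.pid });
    }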
For your third requirement, I'd use a low-overhead protocol like NFS or (if you can go really crazy when it comes to infrastructure) a high-speed SAN backend.
One more piece of advice: use MongoDB as your database backend. You can start with low-end hardware and scale up your database with ease using MongoDB's sharding/replica set features (if that becomes a requirement).
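For completeness, a sketch of turning three boxes into a replica set from the mongo shell; the hostnames are placeholders:

    // Run once on one member; the others join automatically.
    rs.initiate({
      _id: 'rs0',
      members: [
        { _id: 0, host: 'db1.internal:27017' },
        { _id: 1, host: 'db2.internal:27017' },
        { _id: 2, host: 'db3.internal:27017' }
      ]
    });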