Choosing the right EC2 instance for three Node.js applications

I'm running three MEAN stack applications. Each application receives over 10,000 monthly users. Could you please help me find the right EC2 instance for my apps?
I've been using a "t3.large" instance with two vCPUs and 8 GB of RAM, but it costs $62 to $64 per month.
I need help deciding which EC2 instance to use for three Node.js applications.

First, check the CloudWatch metrics for your current instance. Are CPU and memory usage consistent over time? Analysing those metrics will help you decide whether to move to a smaller or larger instance.
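For example, here is a minimal sketch using the AWS SDK v3 for JavaScript (the region and instance ID are placeholders) that pulls a week of hourly CPUUtilization datapoints. Note that CPU is reported out of the box, while memory metrics require the CloudWatch agent:

```typescript
import {
  CloudWatchClient,
  GetMetricStatisticsCommand,
} from "@aws-sdk/client-cloudwatch";

const cw = new CloudWatchClient({ region: "us-east-1" }); // assumption: your region

async function cpuLastWeek(instanceId: string): Promise<void> {
  const now = new Date();
  const res = await cw.send(
    new GetMetricStatisticsCommand({
      Namespace: "AWS/EC2",
      MetricName: "CPUUtilization",
      Dimensions: [{ Name: "InstanceId", Value: instanceId }],
      StartTime: new Date(now.getTime() - 7 * 24 * 60 * 60 * 1000), // 7 days back
      EndTime: now,
      Period: 3600, // one datapoint per hour
      Statistics: ["Average", "Maximum"],
    })
  );
  for (const dp of res.Datapoints ?? []) {
    console.log(dp.Timestamp, "avg:", dp.Average, "max:", dp.Maximum);
  }
}

cpuLastWeek("i-0123456789abcdef0").catch(console.error); // hypothetical instance ID
```

If average utilisation sits well below the instance's capacity week after week, that is good evidence a smaller (and cheaper) instance will do.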
One way to avoid unnecessary costs is to use Auto Scaling groups and load balancers. With properly tuned settings, you always have the right amount of compute for your applications (see the sketch after the links below).
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/working_with_metrics.html
https://docs.aws.amazon.com/autoscaling/ec2/userguide/auto-scaling-groups.html
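As a sketch of the Auto Scaling side (assuming an Auto Scaling group already exists; the group and policy names here are hypothetical), a target-tracking policy adds and removes instances to keep average CPU near a chosen target:

```typescript
import {
  AutoScalingClient,
  PutScalingPolicyCommand,
} from "@aws-sdk/client-auto-scaling";

const asg = new AutoScalingClient({ region: "us-east-1" });

async function attachCpuPolicy(): Promise<void> {
  // Target tracking: the group scales out when average CPU exceeds 50%
  // and scales back in when it drops, within the group's min/max sizes.
  await asg.send(
    new PutScalingPolicyCommand({
      AutoScalingGroupName: "mean-apps-asg", // hypothetical group name
      PolicyName: "keep-cpu-near-50",
      PolicyType: "TargetTrackingScaling",
      TargetTrackingConfiguration: {
        PredefinedMetricSpecification: {
          PredefinedMetricType: "ASGAverageCPUUtilization",
        },
        TargetValue: 50.0,
      },
    })
  );
}

attachCpuPolicy().catch(console.error);
```

Pair the group with a load balancer so new instances start receiving traffic automatically.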

It depends on your applications. Do your apps need more compute, more memory, or more storage? Choosing a server is similar to installing an app on a system: check its basic requirements first, then choose a server that meets them.
If you have 10k+ monthly users, think about using an Application Load Balancer (ALB) so that traffic gets distributed evenly. Try caching to serve some content if possible (one sketch below). Use the unlimited burst mode of t3 instances if CPU keeps hitting 100%. Also, try to optimize your code so that fewer resources are consumed. Once you are comfortable with your EC2 choice, look at Savings Plans or Reserved Instances to lower the cost.
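To illustrate the caching point in MEAN terms, here is a minimal in-memory cache for an Express route; fetchPopularContent is a hypothetical stand-in for your own expensive query:

```typescript
import express from "express";

const app = express();

const TTL_MS = 60_000; // cache for one minute; tune to how fresh the data must be
let cache: { body: unknown; expires: number } | null = null;

async function fetchPopularContent(): Promise<unknown> {
  // ...expensive MongoDB query would go here...
  return { items: [] };
}

app.get("/api/popular", async (_req, res) => {
  if (!cache || Date.now() > cache.expires) {
    cache = { body: await fetchPopularContent(), expires: Date.now() + TTL_MS };
  }
  res.set("Cache-Control", "public, max-age=60"); // let browsers/CDNs cache too
  res.json(cache.body);
});

app.listen(3000);
```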
Also, monitor the servers and traffic using features such as the CloudWatch agent and Internet Monitor.

Related

Azure web site questions

I currently have a web application deployed to "Web Sites". It is configured in standard mode and performs really well from what I have seen so far.
I have a few questions:
1) My instance size is currently Small, but I can scale out to 10 instances. If I change my instance size to Medium or Large, can I still have 10 instances?
2) What is the maximum number of instances I can have for an Azure web site?
3) Is there any SLA for a single Azure instance?
4) Is it possible to change the instance size programmatically, or is it better to just change the instance count?
1) Yes
2) 10 for standard.
3) Yes, for Websites Basic and Standard, Microsoft guarantees 99.9% monthly availability.
4) It depends on a lot of factors. The real question is "Is it better for your app to scale up or scale out?"
Yes, the default limit is 10 instances regardless of the size.
The default limit is 10 instances, but you can contact Azure Support to have the limit increased. Default and "real" limits for Azure services are documented here.
According to the Websites pricing page Free and Shared sites have no SLA and Basic and Standard sites have 99.9% uptime SLA. Having a single instance means that during the 0.1% outage time (43.8 minutes per month) your site will be down. If you have multiple instances then most likely at least one will be up at any given time.
Typically instance auto-scaling is used to handle variation in demand while instance size would be used for application performance. If you only get 100 requests per day but each request is slow because it's maxing out CPU then adding more instances won't help you. Likewise if you're getting millions of requests that are being processed quickly but the volume is maxing out your resources then adding more instances is probably the better solution.

Reduce costs of Azure availability set

I am planning to run SharePoint Foundation on one VM of size A3 and SQL Server on another of size A6. As far as I understand, this is not enough to achieve the SLA, and I should use two more instances - one for SharePoint and one for SQL Server - configured in two separate availability sets.
Can I use scaling (by CPU usage) to shut down one instance and leave only one running at a time in an availability set? This would reduce costs, but I wonder whether this solution is good enough to achieve Azure's SLA. The way I see it, one instance runs at a time while the other is shut down, so I am billed for one instance. When there is an update or a failure, the instance that has been running is shut down and the other one comes online. Is this the way it works? Can I cut the costs of availability sets like this?
No, the SLA requires two running instances. However, if you want to control your costs, the approach you describe will work. Just keep in mind that the duration/window of a disruption will depend on how quickly you detect that the primary VM has failed and how fast you can start the secondary VM. And depending on the nature of the service disruption, it may not be possible for you to start the secondary at all. So it's a risk.

deploying CPU intensive web service on cloud

I have an application which I want to expose as a web service (SaaS). The application is CPU-intensive and multithreaded, and takes a good amount of time to execute (15-20 seconds on average). Since I want to expose it as SaaS, I want to use existing cloud services available in the market, like Amazon or Google App Engine, so that the cost and the work involved in scaling my service stay low. I have a couple of questions in mind:
1.) The application is multithreaded, and the number of threads invoked depends on the number of results returned by the service (so the thread count is dynamic). Right now I have a 6-core processor, so I have set the thread pool size to 6, but since I am moving to the cloud, how can I make optimal use of the cloud infrastructure?
2.) Do the cloud service providers (which?) give the option to select the number of CPU cores required for each request (or something similar that would serve my purpose)?
3.) What changes are needed in the code (related to the threads)?
4.) Any other specific area I should look at when moving to the cloud?
In Amazon EC2 you are basically paying for different instance types - you are free to pick one with a single core or one with sixteen. You get what you pay for.
how can I optimally use the cloud infrastructure?
Your approach is fine: if your task is CPU-intensive, use a thread pool with the same number of threads as CPU cores.
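As a sketch of that advice in Node.js terms (using the piscina pool library as one option; the worker filename is hypothetical), derive the pool size from the machine at runtime instead of hard-coding 6:

```typescript
import os from "node:os";
import path from "node:path";
import Piscina from "piscina";

const pool = new Piscina({
  filename: path.resolve(__dirname, "cpu-task.js"), // hypothetical worker module
  // Node 18.14+; on older Node fall back to os.cpus().length.
  maxThreads: os.availableParallelism(),
});

async function main(): Promise<void> {
  // Each run() hands one task to a free worker thread.
  const result = await pool.run({ input: 42 });
  console.log(result);
}

main().catch(console.error);
```

On a 16-core instance the same code uses 16 threads; nothing changes when you switch instance types.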
select number of CPU cores required for each request
No, at least not on Amazon. You run your application on a given instance and that's all you get. You have to pick an instance type in advance, but of course you are free to switch between types, add new instances, etc. at any time. The cloud!
In Google App Engine you can't create threads, so it's not an option for you. See also: Why does Google App Engine support a single thread of execution only?
3.) What changes are needed in the code (related to the threads)?
None. It's a standard PC, after all.
4.) Any other specific area which I should give a sight for moving to the cloud?
Well, see above: some services are completely useless for you, like GAE. Do some research before you actually pay for something.

Azure compute instances

On Azure I can get three Extra Small instances for the price of one Small. I'm not worried about my site not scaling.
Are there any other reasons I should not go for three Extra Small instances instead of one Small?
See: Azure pricing calculator.
An Extra Small instance is limited to approx. 5Mbps bandwidth on the NIC (vs. approx. 100Mbps per core with Small, Medium, Large, and XL), and has less than 1GB of RAM. So, let's say you're running something that's very storage-intensive. You could run into bottlenecks accessing SQL Azure or Windows Azure storage.
As for RAM: if you're running third-party apps, such as MongoDB, you'll likely run into memory issues.
From a scalability standpoint, you're right that you can spread the load across 2 or 3 Extra Small instances, and you'll have a good SLA. Just need to make sure your memory and bandwidth are good enough for your performance targets.
For more details on exact specs for each instance size, including NIC bandwidth, see this MSDN article.
Look at the fine print - the I/O performance is supposed to be much better with the small instance compared to the x-small instance. I am not sure if this is due to a technology related bottleneck or a business decision, but that's the way it is.
Also, the OS takes up a bit of RAM in each instance, so with three Extra Small instances you pay that overhead three times instead of once with a single Small instance. That reduces the resources actually available for your application's needs.
While three Extra Small instances theoretically may equal or even beat one Small instance "on paper", remember that Extra Small instances do not have dedicated cores; their raw computing resources are shared with other tenants. I've tried Extra Small instances in an attempt to save money on a tiny-load website, and there were outright outages and periods of horrible performance that I found unacceptable.
In short: I would not use Extra Small instances for any sort of production environment.

Number of instances needed for windows azure application

I'm fairly new to Windows Azure and want to host a survey application that will be filled out by approx. 30,000 users simultaneously.
The application consists of one .aspx page that is sent to the client once, asks 25 questions, and gives a wrap-up of the given answers at the end. When the user has answered and hits the 'next question' button, the answer is sent via an .ashx handler to the server. The response is the next question and its answer options. The wrap-up is sent to the client after a full postback.
Answers are saved in an Azure Table that is partitioned so that each partition holds a maximum of 450 users.
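(As a purely illustrative sketch of the partitioning scheme just described, a numeric user ID maps to its partition like this:)

```typescript
// Illustration only: bucket users into partitions of at most 450 each.
const USERS_PER_PARTITION = 450;

function partitionKeyFor(userId: number): string {
  return `p${Math.floor(userId / USERS_PER_PARTITION)}`;
}

// userIds 0-449 land in "p0", 450-899 in "p1", and so on.
console.log(partitionKeyFor(451)); // "p1"
```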
I would like to ask if someone can give an educated guess about how many web-role instances we need to start in order to keep this application running. (If that is too hard to say, is it more likely to be 5, 50, or 500 instances?)
What is a better way to go: 20 small instances or 5 large instances?
Thanks for your help!
The most obvious answer: you would be best served by testing this yourself and see how your application holds up. You can easily get performance counters and other diagnostics out of Windows Azure; for instance, you can connect Microsoft SCOM (System Center Operations Manager) to monitor your environment during test. Site Hammer is a simple load testing tool for Windows Azure (on MSDN code gallery).
Apart from this very obvious answer, I will share some guesstimates: given the type of load, you are probably better off with more small instances rather than a smaller number of large ones, especially since you already have your storage partitioned. If you really will have 30K visitors simultaneously, with a ~15-second interval between reading a question and posting the answer, you are looking at 30,000 / 15 = 2,000 requests per second. 10 nodes should be more than enough to handle that load. Remember that this is just a rough estimate, lacking any insight into your architecture, etc. For these types of loads, caching is a very good idea; it will dramatically increase the load each node can handle.
However, the best advice I can give you is to make sure that you are actively monitoring. It takes less than 30 minutes to spin up additional instances, so if you monitor your environment and/or make sure that you are notified whenever it starts to choke, you can easily upgrade your setup. Keep in mind that you do need to contact customer support to be able to go over 20 instances (this is a default limit, in place to protect you from over-spending).
Aside from the sage advice tijmenvdk gave you, let me add my opinion on instance size. In general, go with the smallest size that will support your app, and then scale out to handle increased traffic. This way, when you scale back down, your minimum compute cost is kept low. If you ran, say, a pair of extra-large instances as your baseline (since you always want minimum two instances to get the uptime SLA), your cost footprint starts at 0.12 x 8 x 2 = $1.92 per hour, even during low-traffic times. If you go with small instances, you'd be at 0.12 x 1 x 2 = $0.24 per hour.
Each VM size has associated CPU, memory, and local (non-durable) disk storage, so pick the smallest size that your app runs efficiently in.
For load/performance-testing, you might also want to consider a hosted solution such as Loadstorm.
How simultaneous are the requests in reality?
Will they all type the address in at exactly the same time?
That said, profile your app locally; this will enable you to estimate CPU, network, and memory usage on Azure. Then, rather than looking at how many instances you need, look at how you can reduce the requirement! Apply these tips, then profile locally again.
Most performance tips involve a trade-off between CPU, memory, and bandwidth usage; the idea is to ensure they scale equally. If your application runs out of memory while you have plenty of spare CPU and network, don't add instances - reduce the memory usage first.
For a single-page survey, ensure your HTML, CSS, and JS are minified and cacheable.
Combine them if possible, and to get really scalable, push static files (CSS, JS, and images) to a CDN (see the sketch below). This all reduces the number of requests the web server has to deal with, and therefore reduces the number of web roles you will need = less network.
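A sketch of those two tips, in Express terms for consistency with the Node.js examples earlier on this page (the public/ folder name is an assumption): serve the minified assets with far-future cache headers so browsers, and any CDN in front, can cache them:

```typescript
import express from "express";

const app = express();

// Long-lived caching works best with hashed filenames (e.g. app.3f9c2b.js),
// so a new deploy naturally busts the cache.
app.use(
  express.static("public", {
    maxAge: "365d",  // far-future expiry for static assets
    immutable: true, // tells browsers the file at this URL never changes
  })
);

app.listen(3000);
```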
How does the .ashx handler return the response? I.e., is it sending HTML, XML, or JSON?
Personally, I'd have it return JSON, as this requires less network bandwidth and most likely less server-side processing = less memory and network.
Use asynchronous APIs to access Azure storage (these use I/O completion ports to free up the IIS thread to handle more requests until Azure storage comes back = enabling the CPU to scale).
tijmenvdk has already mentioned using queues for writes. Does the list of questions change? If not, cache it so the app only has to read from Table storage once at start-up and once per client for the final wrap-up = saves network and CPU at the expense of memory (see the sketch below).
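Sketching those last two suggestions together (again in Node/Express terms for consistency with the earlier examples; loadQuestionsFromTableStorage is a hypothetical stand-in for your one-time storage read): load the question list once at start-up, keep it in memory, and return it as JSON:

```typescript
import express from "express";

interface Question {
  id: number;
  text: string;
  answers: string[];
}

async function loadQuestionsFromTableStorage(): Promise<Question[]> {
  // ...one-time read from Table storage at start-up goes here...
  return [];
}

const app = express();

loadQuestionsFromTableStorage().then((questions) => {
  app.get("/api/questions/:id", (req, res) => {
    const q = questions.find((x) => x.id === Number(req.params.id));
    if (!q) {
      res.status(404).end();
      return;
    }
    res.json(q); // JSON costs less bandwidth than an HTML or XML payload
  });
  app.listen(3000);
});
```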
All of these tips are equally applicable to a normal web application, on a single server or web-farm environment.
The point I'm trying to make is that what you can't measure, you can't improve; measurement, improvement, and cost all go hand in hand. Dynamic scaling will reduce costs, but fundamentally, if your application hasn't been measured and its resource usage optimised, asking how many instances you need is pointless.