Can Azure handle 5000 web requests per second?

I want to know how many users can request a web page at the same time. Can the server go down if 5,000 customers come online on the website at the same time?

This depends on a lot of factors. For example, a simple static HTML page can serve 5,000 clients at the same time on a basic server or VPS, but a dynamic website with a lot of operations (database queries, for example) can go down because of hardware resource limits.
This question can't be answered with "yes" or "no"; you should test your website with stress-testing tools and monitor your server load per client to get an approximate number of simultaneous clients it can support.
If you plan to launch a website with a lot of simultaneous clients, spend some time on a good caching system for your dynamic content (a minimal sketch follows below).
If your server can't handle the current traffic, you should plan to upgrade it or use more than one server behind an IP load balancer.
There are a lot of things that determine whether or not you can run a website with 5,000 clients at the same time: infrastructure, application design, code quality, server configuration...
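To illustrate the caching advice above, here is a minimal sketch of caching dynamic responses in an Express app. The route, TTL, and helper names are illustrative assumptions, not part of the original answer; a production setup would more likely use a shared cache such as Redis or a CDN.

```js
// Minimal in-memory response cache for Express (per-process, illustrative only).
const express = require('express');
const app = express();

const cache = new Map(); // url -> { body, expires }

function cacheFor(ttlMs) {
  return (req, res, next) => {
    const hit = cache.get(req.originalUrl);
    if (hit && hit.expires > Date.now()) {
      return res.json(hit.body); // serve the cached copy, skip the database entirely
    }
    const originalJson = res.json.bind(res);
    res.json = (body) => {
      // store the fresh response before sending it
      cache.set(req.originalUrl, { body, expires: Date.now() + ttlMs });
      return originalJson(body);
    };
    next();
  };
}

async function loadProductsFromDb() {
  // stand-in for a slow database query
  return [{ id: 1, name: 'demo' }];
}

app.get('/products', cacheFor(5000), async (req, res) => {
  res.json(await loadProductsFromDb());
});

app.listen(3000);
```

Even a short TTL like this can absorb most of the duplicate database work when thousands of clients request the same dynamic page at once.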

Capacity depends on your app, its architecture, and the Azure resources you'll be using. The CBC/Radio-Canada 2015 election night app achieved peaks of over 800K requests per second with 1,300 compute cores.
Canadian Broadcasting Corporation/Radio-Canada leverage Azure for smooth election coverage
Azure App Service Best Practices for Large Scale Applications

Related

Express-rate-limit vs NGINX in a node server

I'm currently using the express-rate-limit module to block multiple requests from the same IP or logged-in user account on my Node server, and this is working pretty well against DoS attacks. This server is for a small local business that requires only one instance, as it doesn't have too many users and its computing requirements aren't too intensive.
I've been reading a lot about NGINX lately, and many people recommend using it in front of Node servers, but I can't see the major advantages of using it in this kind of application.
How would NGINX be better for my application? What can it do that other npm modules can't in terms of security for a single-server application?
Well, I am not an NGINX expert, but I currently use NGINX in production on my EC2 instance. When it comes to rate limiting, there are a couple of options available on the Express side:
You can use Redis as a store, get the IP address of each incoming request, and check how many hits it currently has before deciding to serve it. This could be a middleware that runs on all routes.
You could use a library like express-rate-limit or rate-limiter-flexible, which will handle the Redis part for you (a minimal sketch is shown below).
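Here is a minimal sketch of the second option using express-rate-limit; the window and limit values are illustrative assumptions, and a production setup would plug in a Redis-backed store so limits are shared across processes.

```js
// Per-IP rate limiting inside the app with express-rate-limit (values are illustrative).
const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();

const limiter = rateLimit({
  windowMs: 60 * 1000, // 1-minute window
  max: 100,            // each IP gets at most 100 requests per window
  standardHeaders: true,
});

app.use(limiter); // applies to every route; mount it on a router to scope it

app.get('/', (req, res) => res.send('ok'));
app.listen(3000);
```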
Now when you take NGINX, it is a web server whose strongest point is not rate limiting, to be precise. It still supports rate limiting if you modify the configuration. HERE is an insight into NGINX rate limiting.
Another option you haven't considered is HAProxy, a load balancer that is considered superior for tasks such as rate limiting. You can read about it HERE.
Let's talk about the second part of your question.
Rate limiting inside an application is a bad idea. It does not belong in the application as such; it is not part of the business logic. It also does not work well in clustered mode (more than one core running Express at the same time) unless you tweak it to support clustering.
Rate limiting via NGINX configuration needs just 2 extra lines, as shown in the earlier link I posted. If you suddenly want to add an extra route or exempt some route from rate limiting, NGINX can do that easily.
If you want to exempt your CloudFront addresses or CDN server addresses from being rate limited, you can add a whitelist of IPs to the NGINX conf so that it exempts them. Doing this in the application would be a real pain, as you would have to git commit, redeploy, etc. THIS answer covers how to exempt addresses.

Sails.js (Node.js) server architecture, scaling and performance

I want to create Sails.js (Node.js) server app, which will provide API for single-page-app. This server will consist of multiple modules:
user management
forum
chat
admin GUI
content management
payment gateway
...
All these modules will share one database. The server must be able to handle as many requests and web sockets as possible. Clean architecture and performance are my primary goals.
My questions:
Should I create multiple servers running on multiple ports? I mean, one server for the content management module, another server for the forum module.
Or is it better to create only one big universal server, which consists of multiple separate modules (hooks in Sails.js) and runs on one port? Will the performance of the server decrease in this case?
I was thinking about vertically scaling one big universal server, running on a single port with pm2. Or is it better to scale Node.js horizontally and split the server into multiple smaller servers?
I'm new to Node.js, so I appreciate any advice.
I think it really boils down to the scale of the project.
For very simple things there's no real reason to scale past a single but reliable server, is there?
However, for broader projects with a resource-intensive back end and a lot of users and traffic, you may want to split the back-end and front-end aspects depending on the requirements.
In that case you might have a single server (or more) dealing with the specific administrative requests or routines, then have the client/user API running through a load balancer and spread across multiple servers in multiple regions, or break it down further into an auto-scaling group to accommodate fluctuations in traffic.
It is also worth noting that this approach really suits higher volumes of traffic or resource usage, since you're dedicating the server infrastructure to that purpose. For smaller applications with infrequent usage, breaking things down into microservices from the start and being billed for runtime rather than dedicated infrastructure might make more sense to me. You could take a look at the AWS API Gateway and Lambda services for more information on that (I am not affiliated with AWS in any way; I just appreciate what they have managed to put together there).
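For the "one universal server, scaled vertically with pm2" option from the question, here is a minimal sketch of a pm2 cluster configuration. The file name, app name, and entry point are assumptions, not from the question or answer.

```js
// ecosystem.config.js - run one process per CPU core behind pm2's built-in balancer.
module.exports = {
  apps: [
    {
      name: 'sails-api',
      script: 'app.js',     // Sails entry point (assumed)
      exec_mode: 'cluster',  // cluster mode: multiple workers share the same port
      instances: 'max',      // or a fixed number, e.g. 4
      env: { NODE_ENV: 'production' },
    },
  ],
};
// start with: pm2 start ecosystem.config.js
```

Note that with multiple workers (or multiple machines), WebSocket-heavy features such as the chat module typically need a shared session/socket store (e.g. Redis) so connections and rooms work regardless of which worker handles a request.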

How to test a web service with up to 1 million users?

I'd like to know how to test a web service with up to 1 million active users, all accessing the site at the same time.
This is in theory - I don't have a web service like this, but was recently reading this article on how to build a scalable app for > 500K users, and it got me wondering how people would test this?
For the sake of discussion, let's assume that I'm in full control of the service and have 1 million test accounts already created, with the usernames test1 -> test1000000 available. I'd prefer that the accounts were accessing my service from places all over the world, but am open to any suggestions!
EDIT: I'm familiar with JMeter and Selenium, but I was concerned that if all the client activity were run from a single location, it would be bottlenecked by the local network and thus not be a great test. So instead of having, say, 10 JMeter clients at different locations running 100K clients each, I was thinking it might be better to have 1000 JMeter clients testing 1000 users each, all from different locations... but maybe this isn't much of a concern?
I think at a high level, there could be test nodes distributed around the world. Each would contain the logic to authenticate and execute a certain type of transaction. Blocks of test accounts could be distributed to each node and each node would launch the tests in parallel.
At a practical level, I would start by looking at a framework like Locust, which claims to do exactly this in its tagline :)
http://locust.io/
You can use Apache JMeter or, my personal preference, siege.
In the case of siege, I would generate a urls.txt file with a million URLs, each representing a call from a user, and run them concurrently (a rough sketch follows below).
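A rough sketch of generating that urls.txt file with Node, using the test1 -> test1000000 accounts from the question; the host and path are made-up placeholders.

```js
// generate-urls.js - write one URL per test account for siege to replay.
const fs = require('fs');

const lines = [];
for (let i = 1; i <= 1000000; i++) {
  lines.push(`https://example.com/api/profile?user=test${i}`); // placeholder endpoint
}
fs.writeFileSync('urls.txt', lines.join('\n') + '\n');

// then, for example: siege -c 255 -i -f urls.txt
```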
As for your concern about the locations:
Blazemeter offers geo-distributed stress testing as well.
You can take a look at the Tsung tool: http://tsung.erlang-projects.org/
It is really lightweight and allows you to run hundreds of thousands of virtual users from an average machine (depending on the complexity of your script).
While you can't do multi-step automation on these sites, the following services will let you hit a URL from different client locations (e.g. Asia, North America, Australia) and throttle the bandwidth if you like for testing purposes:
WebPageTest - https://www.webpagetest.org/
- from their about page: "WebPagetest is an open source project that is primarily being developed and supported by Google as part of our efforts to make the web faster." This site also has an API, is open source, and allows you to automate it via a Node API and CLI (a rough sketch follows after this list).
Pingdom
https://tools.pingdom.com/
More info in this blogpost from KeyCDN
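A rough sketch of driving WebPageTest from Node via its npm client, mentioned above; the API key, target URL, and test location are placeholders, and the exact option names may differ between package versions.

```js
// Submit a WebPageTest run from Node (npm i webpagetest); values are placeholders.
const WebPageTest = require('webpagetest');

const wpt = new WebPageTest('www.webpagetest.org', process.env.WPT_API_KEY);

wpt.runTest('https://example.com', { location: 'Dulles:Chrome', runs: 3 }, (err, result) => {
  if (err) return console.error(err);
  console.log('Test submitted:', result.data && result.data.testId);
});
```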

Node.js scalability in typical web applications

As a Node.js beginner coming from enterprise IT, I am unable to comprehend one aspect of Node.js usage. I am framing my question in two parts.
Question-1) Strictly from a scalability standpoint, how can an I/O-heavy web application scale using Node.js unless we scale the back-end I/O resources that it consumes?
A database server can serve only "X" concurrent users. Even if a Node-based HTTP server is able to handle more incoming requests, the overall throughput is going to be dictated by the number of concurrent connections the DB can handle.
The same applies to other enterprise resources like content retrieval from file servers or invocation of legacy APIs, etc. I understand that we would be less worried about cloud resources, which can scale elastically and are not under our direct purview.
Question-2) If the answer to the above question is "Node is not a one-size-fits-all solution", how are companies like PayPal, Walmart, LinkedIn et al. able to gain scale using Node? They too would integrate with their existing system landscape, and are not totally network-based applications (or are they?).
Node.js is typically used as an orchestration layer in an SOA. It is mainly used as a front end for the back-end services. It is true that the throughput is going to be dictated by the number of concurrent connections the DB can handle, but there is also the time involved for the presentation layer to present the content.
Web technologies like JSP and Ruby on Rails are designed to get the content on the server and serve it as a single page to the client, and are not suited for an orchestration layer. Today we need services that handle mobile clients (where there are lots of API calls retrieving small amounts of data). Thus Node.js reduces the response time and improves the user experience.
Look at the http://nodejs.org/video/ video by Eran Hammer to understand how Node.js is being used at Walmart.
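To make the orchestration-layer idea above concrete, here is a minimal sketch of a Node endpoint fanning out to back-end services in parallel. The service URLs and route are hypothetical, and it assumes Node 18+ for the built-in fetch.

```js
// One client-facing endpoint aggregates several (hypothetical) back-end calls.
// Node waits on the back-end I/O without blocking, so many such requests can be in flight;
// the DB/back-end connection limits still cap overall throughput, as the question notes.
const express = require('express');
const app = express();

app.get('/api/dashboard/:userId', async (req, res) => {
  try {
    const [profile, orders] = await Promise.all([
      fetch(`http://user-service.internal/users/${req.params.userId}`).then(r => r.json()),
      fetch(`http://order-service.internal/orders?user=${req.params.userId}`).then(r => r.json()),
    ]);
    res.json({ profile, orders }); // one small response shaped for the mobile client
  } catch (err) {
    res.status(502).json({ error: 'upstream failure' });
  }
});

app.listen(3000);
```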

Testing a Windows Azure web app for maximum user load

I am conducting some research on emerging web technologies and have created a very simple Azure website which makes use of web sockets and MongoDB as the database. I have managed to get all the components working together and now must perform load testing on the application.
The main criterion is the maximum user load that the app can support. At the moment there is 1 web role instance, so I would probably need to test the max user load for that instance, then try with 2 instances, and so on.
I found some solutions online such as LoadStorm; however, I cannot afford to pay for these services, so I need to be able to do this from my own development machine OR from another cloud service.
I have come across Visual Studio Load Tests and they seem quite useful; however, they appear to require VS Ultimate and an active MSDN subscription - the prerequisites are listed here. Also, from this video which shows the basics of load tests, it seems these load tests are created completely separately from the actual web project, so does that mean I can only see metrics related to the user? i.e. I cannot see the amount of RAM being used, processor usage, etc.
Any suggestions?
You might create a Linux virtual machine in Azure itself or another hosting provider and use ApacheBench (ab) or JMeter to do simple load testing on your application. Be aware that in such a setup your benchmark servers may be a bottleneck themselves.
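If you would rather stay in Node for a quick check (instead of ab or JMeter, which the answer above suggests), here is a rough sketch with the autocannon load generator; the target URL and numbers are placeholders, not a tuned benchmark.

```js
// Quick load run with autocannon (npm i autocannon); values are illustrative.
const autocannon = require('autocannon');

autocannon(
  {
    url: 'https://my-app.azurewebsites.net/', // placeholder target
    connections: 100, // concurrent connections
    duration: 30,     // seconds
  },
  (err, result) => {
    if (err) return console.error(err);
    console.log(`avg latency: ${result.latency.average} ms, avg req/s: ${result.requests.average}`);
  }
);
```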
Another approach is to use online load-testing services which allow some free usage, such as:
loader.io, by SendGrid Labs
LoadStorm
Blazemeter
Blitz
Neotys
Loadimpact
For load testing, LoadStorm is very reasonably priced, especially compared to on-premises software (and it has a free tier with up to 25 virtual clients). You can install software such as JMeter, but you'll still need machines (or VMs) to host and run it, and you need to make sure that the load-generator machines aren't the bottleneck in your tests.
When you run your tests, you may want to consider separating your web tier from MongoDB. MongoDB will consume as much memory as possible (as that's what gives MongoDB its speed). In a real-world scenario, you'll likely have MongoDB in its own environment. So for your tests, I'd consider offloading MongoDB to its own instance(s), and 10gen has a Worker Role setup that's fairly straightforward to install.
Also remember that NIC bandwidth is 100Mbps per core, which could be a limiting factor on your tests, depending on how much load you're driving.
One alternative to self-hosting MongoDB: offload it to a hoster such as MongoLab. This will let you test the capacity of your web app without worrying about the details of MongoDB setup, configuration, optimization, etc. Currently MongoLab offers its free tier hosted in Azure's US West and US East data centers.
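If you go that route, the only application change is typically the connection string; here is a minimal sketch with the official Node MongoDB driver, where the environment variable name is an assumption.

```js
// Point the web app at an externally hosted MongoDB by swapping the connection string.
const { MongoClient } = require('mongodb');

async function connect() {
  const uri = process.env.MONGODB_URI; // e.g. the URI supplied by the hosting provider
  const client = new MongoClient(uri);
  await client.connect();
  return client.db(); // default database from the URI
}

connect()
  .then(() => console.log('connected'))
  .catch((err) => console.error('connection failed', err));
```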
Editing my response; I didn't read the question carefully.
Check out this thread for various tools and links:
Open source Tool for Stress, Load and Performance testing
If you are interested in the performance counters of the application under test, you can look at some of the latest features added to Visual Studio's cloud-based load testing.
http://blogs.msdn.com/b/visualstudioalm/archive/2014/04/07/get-application-performance-data-during-load-runs-with-visual-studio-online.aspx
To get more info on Visual Studio Cloud Load Testing solution - https://www.visualstudio.com/features/vso-cloud-load-testing-vs
