IIS and concurrent requests

I am going to host a C# web service in IIS 7. The traffic to this application could be fairly heavy: up to about 100 requests per minute.
The question is: how many requests can IIS serve simultaneously?
Thank you

It all depends on your server; IIS can handle up to 100 threads per processor.
Here's a reference for you:
https://learn.microsoft.com/en-US/biztalk/technical-guides/optimizing-iis-performance?redirectedfrom=MSDN
I would also say that what appears to be a single request to a website can actually generate hundreds of requests for stylesheets, images, etc., so it is almost impossible to predict what server you'd need to service this. However, 100 requests per minute is less than two requests per second, which is a tiny load for IIS, so there is no need to worry about it. In fact, my laptop could service that many page requests, resulting in thousands of requests, alongside SQL Server and all the other stuff I have running on it.

Related

Simple message passing Nodejs server accepting only 4 requests at a time

We have a simple Express node server deployed on Windows Server 2012 that receives GET requests with just 3 parameters. It does some minor processing on these parameters and has a very simple in-memory node-cache for caching some of these parameter combinations. It interfaces with an external license server to fetch a license for the requesting user and sets it in a cookie. After that, it interfaces with some workers via a load balancer (running with zmq) to download some large files (in chunks, unzipping and extracting them, and writing them to some directories) and displays them to the user. On deploying these files, some other calls to the workers are initiated as well.
The node server does not talk to any database or disk. It simply waits for responses from the load balancer running on other machines (these are long operations, typically taking 2-3 minutes to send a response). So, essentially, the computation and database interactions happen on other machines. The node server is only a simple message-passing/handshaking server that waits for responses in event handlers, initiates other requests, and renders the response.
We are not using the 'cluster' module or nginx at the moment. With a bare-bones node server, is it possible to accept and process at least 16 requests simultaneously? Pages such as http://adrianmejia.com/blog/2016/03/23/how-to-scale-a-nodejs-app-based-on-number-of-users/ mention that a simple node server can handle only 2-9 requests at a time. But even with our bare-bones implementation, no more than 4 requests are accepted at a time.
Is using the cluster module or nginx necessary even in this case? And how should we scale this application for a few hundred users to begin with?
An Express server can handle many more than 9 requests at a time, especially if it isn't talking to a database.
The article you're referring to assumes some database access on each request and serving static assets via node itself rather than a CDN, all taking place on a single CPU with 1GB of RAM. That's a database and web server running together on a single core with minimal RAM.
There really are no hard numbers on this sort of thing; you build it and see how it performs. If it doesn't perform well enough, put a reverse proxy in front of it, like nginx or HAProxy, to do load balancing.
However, based on your problem, if you really are running into a bottleneck where only 4 connections are possible at a time, it sounds like you're keeping those connections open far too long and blocking the others. It's better to have node kick off those long-running processes, close the connections, and then have those servers call back somehow when they're done.
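A minimal sketch of that pattern, with hypothetical route names and a stubbed hand-off to the zmq load balancer: the server acknowledges each request immediately with a job id, and the workers call back when the files are ready, so no connection is held open for the 2-3 minutes the work takes.

```js
// Sketch of the "accept, acknowledge, call back" pattern. Route names and
// the startWorkerJob() hand-off are hypothetical placeholders.
const express = require('express');
const crypto = require('crypto');

const app = express();
app.use(express.json());

// In-memory job store; a real deployment might keep this in the node-cache
// you already have, or in Redis.
const jobs = {};

// Hypothetical: forwards the job to the workers via the zmq load balancer.
function startWorkerJob(jobId, params) {
  /* send { jobId, params } to the load balancer here */
}

// Accept the request and respond immediately, instead of holding the
// connection open for the 2-3 minutes the workers need.
app.get('/process', (req, res) => {
  const jobId = crypto.randomBytes(8).toString('hex');
  jobs[jobId] = { status: 'pending' };
  startWorkerJob(jobId, req.query);
  res.status(202).json({ jobId }); // 202 Accepted: "working on it"
});

// The workers (or the load balancer) call back here when the files are ready.
app.post('/callback/:jobId', (req, res) => {
  jobs[req.params.jobId] = { status: 'done', result: req.body };
  res.sendStatus(200);
});

// Clients poll this (or listen on a websocket) to learn when the job is done.
app.get('/status/:jobId', (req, res) => {
  res.json(jobs[req.params.jobId] || { status: 'unknown' });
});

app.listen(3000);
```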

Load test a Backbone App

I've got an NGinx/Node/Express3/Socket.io/Redis/Backbone/Backbone.Marionette app that proxies requests to a PHP/MySQL REST API. I need to load test the entire stack as a whole.
My app takes advantage of static asset caching with NGinx and clustering with node/express, and Socket.io is multi-core enabled using Redis. All that's to say, I've gone to a lot of trouble to try to make sure it can stand up to the load.
I hit it with 50,000 users in 10 seconds using blitz.io and it didn't even blink... which concerned me, because I wanted to see it crash, or at least breathe a little heavily. But 50k was the max you could throw at it with that tool, which suggests they expect you not to reasonably be able to, or need to, handle more than that. That's when I realized it wasn't actually incurring the load I was expecting, because the real load is only initiated after the page loads: the Backbone app starts up, kicks off the socket connection, and requests the data from the REST API endpoint (on a different server).
So, here's my question:
How can I load test the entire app as a whole? I need the load test to tax the server in the same way that the clients actually will, which means:
Request the single page Backbone app from my NGinx/Node/Express server
Kick off requests for the static assets from NGinx (simulating what the browser would do)
Kick off requests to the REST API (PHP/MySQL running on a different server)
Create the connection to the Socket.io service (running on NGinx/Node/Express, utilizing Redis to handle multi-core junk)
If the testing tool uses a browser-like environment to load the page, parsing the JS and running it, everything will be copacetic (the NGinx/Node/Express server will get hit and so will the PHP/MySQL server). Otherwise, the testing tool will need to simulate this by firing off at least a dozen different kinds of requests nearly simultaneously; anything less is like stress testing a door by looking at it 10,000 times (that is to say, pointless).
I need to ensure my app can handle 1,000 users hitting it in under a minute all loading the same page.
You should learn to use Apache JMeter: http://jmeter.apache.org/
You can perform stress tests with it; see this tutorial: https://www.youtube.com/watch?v=8NLeq-QxkSw
As you said, "I need the load test to tax the server in the same way that the clients actually will." That means the test is agnostic to the technology you are using.
I highly recommend JMeter; it is widely used, and you can integrate it with Jenkins and do a lot of cool stuff with it.
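One caveat: JMeter's plain HTTP samplers won't execute the page's JavaScript, so to exercise the whole stack you either record every request the browser makes into the test plan, or script a browser-like virtual user yourself. A rough sketch of the latter in Node, with placeholder URLs standing in for the real endpoints:

```js
// One "browser-like" virtual user: page load, static assets, REST call, and a
// Socket.io handshake, fired roughly the way a real client would fire them.
// The URLs are placeholders. npm install request socket.io-client
const request = require('request');
const io = require('socket.io-client');

function get(url) {
  return new Promise((resolve, reject) =>
    request(url, (err, res) => (err ? reject(err) : resolve(res.statusCode))));
}

async function virtualUser() {
  await get('http://myapp.example.com/');             // the single-page shell
  await Promise.all([                                 // assets, like a browser
    get('http://myapp.example.com/js/app.js'),
    get('http://myapp.example.com/css/app.css'),
  ]);
  await get('http://api.example.com/some/endpoint');  // the PHP/MySQL REST API
  const socket = io('http://myapp.example.com');      // the Socket.io handshake
  await new Promise(resolve => socket.on('connect', resolve));
  socket.disconnect();
}

// 1,000 users spread across 60 seconds.
for (let i = 0; i < 1000; i++) {
  setTimeout(() => virtualUser().catch(console.error), i * 60);
}
```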

Optimizing Node.js for a large number of outbound HTTP requests?

My node.js server periodically becomes slow or unresponsive, occasionally even resulting in 503 gateway timeouts when attempting to connect to the server.
I am 99% sure (based on tests that I have run) that this lag is coming specifically from the large number of outbound requests I am making with the node-oauth module to contact external APIs (Facebook, Twitter, and many others). Admittedly, the number of outbound requests being made is relatively large (on the order of 30 or so per minute). Even worse, this frequently means that the corresponding inbound requests to my server can take ~5-10 seconds to complete. However, a previous version of my API, which I had written in PHP, was able to handle this volume of outbound requests without any problem at all. In fact, the CPU usage for the same number (or even fewer) requests with my Node.js API is about 5x that of my PHP API.
So, I'm trying to isolate where I can improve upon this, and most importantly to make sure that 503 timeouts do not occur. Here's some stuff I've read about or experimented with:
This article (by LinkedIn) recommends turning off socket pooling. However, when I contacted the author of the popular nodejs-request module, his response was that this was a very poor idea.
I have heard it said that setting "http.globalAgent.maxSockets" to a large number can help, and indeed it did seem to reduce bottlenecking for me.
I could go on, but in short, I have been able to find very little definitive information about how to optimize performance so these outbound connections do not lag my inbound requests from clients.
Thanks in advance for any thoughts or contributions.
FWIW, I'm using express and mongoose as well, and my servers are hosted on the Amazon Cloud (2x M1.Large for the node servers, 2x load balancers, and 3x M1.Small MongoDB instances).
It sounds to me like the Agent is capping your requests at the default level of 5 per host. Your tests show that cranking up the agent's maxSockets helped, so you should do that.
You can prove this is the issue by firing up a packet sniffer, or adding more debugging code to your application, to show that this is the limiting factor.
This is the LinkedIn post referenced above: http://engineering.linkedin.com/nodejs/blazing-fast-nodejs-10-performance-tips-linkedin-mobile
Alternatively, disable the agent altogether.
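A minimal sketch of both options against Node's standard http module, which node-oauth uses under the hood; the socket counts and host names are illustrative, not recommendations:

```js
const http = require('http');
const https = require('https');

// Option 1: raise the per-host socket cap (older Node versions defaulted to 5).
http.globalAgent.maxSockets = 100;
https.globalAgent.maxSockets = 100;

// Option 2: opt a request out of pooling entirely, as the LinkedIn post suggests.
http.get({ host: 'api.example.com', path: '/v1/resource', agent: false }, res => {
  res.resume(); // drain the response so the socket is released
});

// To confirm the agent really is the bottleneck, log its queued requests:
setInterval(() => {
  for (const host of Object.keys(http.globalAgent.requests)) {
    console.log(host, 'queued:', http.globalAgent.requests[host].length);
  }
}, 5000);
```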

Multiple Tabs in the same browser and IIS Concurrent Connection

I understand that multiple tabs in a single browser share the same session. But do they use the same connection?
More specifically, will each tab in the browser pointed at the same website create its own connection, or will the tabs share a common connection?
I will be using IIS as my webserver.
Thanks.
There are many dynamics to this, and it depends on how you configure your website and the app pool. For a standard website created with IIS, with little to no change in the configuration, a single user (browser) will issue a single connection over which multiple requests will take place. Of course, a single request will block until "completion".
That being said, browsers more or less have a concurrent-connection limit. It used to be set at 2, but has since changed based on which browser you use; I think Chrome is currently at 4.
Thirdly, browsers are a little smarter these days: upon grabbing a page, they will issue multiple requests (through a single connection, if HTTP Keep-Alive is set on IIS, which is the default) to fetch the images (resources) and the HTML concurrently.
HTH

Scale web scraping site with node.js

I'm developing a web scraping website to find available delivery restaurants. The website searches the most popular delivery portals and shows the results aggregated in a single page.
The site is hosted on Heroku with 4 dynos.
http://deliveria.net/#05409-002
When a user makes a request on the website, it makes around 30 HTTP requests to retrieve the result.
The problem is performance: the requests aren't fast, and each search can fire 30 of them, locking the app while the search is performed for a single user.
I tried to increase Heroku dynos:
heroku scale web=10
but I didn't notice any perceptible gain.
What is the best approach to scale this kind of application?
(I can't use caching, as the searches need to be in real time)
Current stack:
Heroku
Node.js
express
request module
EJS
Pusher
Redis
The important thing here is to have workers, because you must avoid blocking the event loop in your main app.
Try to distribute the 30 HTTP requests among the available workers. Kue can help with this aspect: you push new jobs to the queue and they get executed one by one by the workers. For example, if you have 10 dynos on Heroku, use 9 of them as workers to make those 30 HTTP searches.
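A rough sketch of that split using Kue, backed by the Redis instance already in your stack (the job shape, route, and portal list are illustrative):

```js
// web.js -- runs on the web dyno: enqueue a search and return immediately.
const kue = require('kue');
const express = require('express');

const queue = kue.createQueue({ redis: process.env.REDIS_URL });
const app = express();

app.get('/search', (req, res) => {
  const job = queue.create('search', { term: req.query.term }).save(err => {
    if (err) return res.status(500).end();
    res.json({ jobId: job.id }); // the browser tracks results by this id
  });
});

app.listen(process.env.PORT || 3000);
```

```js
// worker.js -- runs on each worker dyno: performs the ~30 portal requests.
const kue = require('kue');
const request = require('request');

const queue = kue.createQueue({ redis: process.env.REDIS_URL });
const portals = [/* the ~30 delivery portal URLs */];

// Process up to 3 searches at once on this dyno.
queue.process('search', 3, (job, done) => {
  const results = [];
  let finished = 0;
  portals.forEach(url => {
    request({ url, qs: { q: job.data.term } }, (err, res, body) => {
      if (!err) results.push(body);
      job.progress(++finished, portals.length); // report partial progress
      if (finished === portals.length) done(null, results);
    });
  });
});
```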
From the user's point of view, it's important that the application reacts quickly to the search (and doesn't give the impression of being frozen), so you may want to update the user as soon as you have preliminary results (for example, when 10 portals out of 30 have been searched). You could do that via WebSockets (Socket.IO) and even show a nice graphical progress bar or something similar.
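Kue relays job-level events back to the process that created the job, so the web dyno can forward that progress to the browser over Socket.IO. A fragmentary sketch, assuming `server` is the HTTP server Express listens on and that the browser sent its socket id along with the search (both assumptions):

```js
// web.js (continued) -- push progress to the browser as portals complete.
const socketio = require('socket.io');
const io = socketio(server); // `server` is the existing http server (assumed)

// Call this right after queue.create(...).save(...) in the /search handler.
function relayProgress(job, socketId) {
  job.on('progress', progress =>
    io.to(socketId).emit('search:progress', { jobId: job.id, progress }));
  job.on('complete', results =>
    io.to(socketId).emit('search:done', { jobId: job.id, results }));
}
```

Pusher, which is already in your stack, would work just as well as Socket.IO for this.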
