Maximum parallel requests on SPDY connection - browser

Do browsers impose a limit on max parallel requests on a SPDY connection? If yes, could you please share link(s) explaining this setting and the max value used by different browsers?

Related

Dynamic IP Restriction with HTTP/2

We are considering using Dynamic IP Restriction to deny excessive access from a single IP in Azure App Service.
Our question: in the case of HTTP/2, we doubt whether denying excessive access with this feature is effective.
With HTTP/1.1, the number of simultaneous connections is limited to 6 by client-side (browser) restrictions, and we are aware that we can limit the number of concurrent requests with maxConcurrentRequests.
With HTTP/2, however, parallel requests can be made on the same socket, and there is no upper limit.
Could you tell us whether there is a best practice for rejecting excessive access with Dynamic IP Restriction when using HTTP/2?
As per this Azure blog article, HTTP/2 is an evolution of HTTP/1.1 with a handful of key changes: a single TCP connection per origin, full multiplexing of requests on that connection, a binary framing model, and reduced overhead thanks to header compression.
As per my research, I did not find an alternative to the Dynamic IP Restriction feature for protecting an App Service from excessive access (e.g., DDoS). The feature is configured the same way as described here after enabling HTTP/2, and it continues to work as expected.

Scaling websockets on Heroku and Node.js

Let's assume I have a large application handling a significant number of websocket connections on Heroku. To cope with such a large demand, the number of dynos is incremented to N.
How is Heroku's router going to distribute new incoming websocket connections among the running dynos?
In other words, if one of the dynos is maxed out on websocket connections, is Heroku's router going to divert all the new incoming requests to the other (supposedly) less busy dynos? Or is it still going to use the random assignment stated in the documentation for incoming HTTP connections?
That would make sense, as websocket connections are HTTP connections in the first instance. However, it would be rather complex to scale a large number of persistent connections evenly among N running dynos.
Can anybody confirm?
The page you link to says "Routers use a random selection algorithm for HTTP request load balancing across web dynos." That seems unambiguous. Note that Heroku got in trouble with Rap Genius over exactly this issue, although it was with their older Bamboo stack.
In general, by adding dynos, you should be able to avoid any one dyno ever being overloaded, since each additional websocket connection consumes very little additional RAM. At worst, busier dynos should just suffer from higher latency.

Require.js and SPDY

I know that making many requests for different scripts is bad for performance; e.g., my script requires 30 dependencies, so require.js would make 31 requests for that. I could make use of the require.js optimizer and download bundles of scripts instead.
What if I use SPDY and still make those 31 requests? Would the ability to multiplex the requests help me out, so that bundling makes no difference performance-wise?
Thank you
When you use SPDY, the ability to multiplex requests will likely improve your performance.
The reason is that with plain HTTP, browsers can usually make only 6 requests concurrently (assuming all your 30 scripts will be downloaded from the same domain).
Then the browsers have to wait at least one network roundtrip before being able to perform the 7th request.
Depending on where your clients are with respect to your server, a network roundtrip can be in the 50ms to 500ms range, sometimes even more.
With SPDY, all 30 requests can be made at once thanks to multiplexing.
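To put rough numbers on it: 30 scripts over 6 connections download in about 5 waves, each costing at least one roundtrip, so at a 100 ms roundtrip you pay on the order of 500 ms in queuing alone; multiplexed over a single SPDY connection, all 30 requests are on the wire after the first roundtrip.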
Furthermore, SPDY servers that implement SPDY Push may be able to push the 30 secondary resources down to the client along with the response to the initial request for the primary resource (usually the HTML page).
Jetty implements SPDY Push, and you can watch this demo to see the difference SPDY Push makes when you're requesting 20+ secondary resources associated with a primary resource.
You can find the Jetty SPDY documentation here, along with a blog post that shows how to configure SPDY Push in Jetty.
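If you do go the optimizer route mentioned in the question, the r.js build configuration is small. A minimal sketch, assuming an entry module main living under a js/ directory (both names are made up):

// build.js -- run with: node r.js -o build.js
({
    baseUrl: "js",          // where the 30 dependencies live
    name: "main",           // entry module that require()s them
    out: "main-built.js"    // single bundled output file
})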

Optimizing Node.js for a large number of outbound HTTP requests?

My node.js server is experiencing times when it becomes slow or unresponsive, even occasionally resulting in 503 gateway timeouts when attempting to connect to the server.
I am 99% sure (based upon tests that I have run) that this lag is coming specifically from the large number of outbound requests I am making with the node-oauth module to contact external APIs (Facebook, Twitter, and many others). Admittedly, the number of outbound requests being made is relatively large (in the order of 30 or so per minute). Even worse, this frequently means that the corresponding inbound requests to my server can take ~5-10 seconds to complete. However, I had a previous version of my API which I had written in PHP which was able to handle this amount of outbound requests without any problem at all. Actually, the CPU usage for the same number (or even fewer) requests with my Node.js API is about 5x that of my PHP API.
So, I'm trying to isolate where I can improve upon this, and most importantly to make sure that 503 timeouts do not occur. Here's some stuff I've read about or experimented with:
This article (by LinkedIn) recommends turning off socket pooling. However, when I contacted the author of the popular nodejs-request module, his response was that this was a very poor idea.
I have heard it said that setting "http.globalAgent.maxSockets" to a large number can help, and indeed it did seem to reduce bottlenecking for me.
I could go on, but in short, I have been able to find very little definitive information about how to optimize performance so these outbound connections do not lag my inbound requests from clients.
Thanks in advance for any thoughts or contributions.
FWIW, I'm using express and mongoose as well, and my servers are hosted on the Amazon Cloud (2x M1.Large for the node servers, 2x load balancers, and 3x M1.Small MongoDB instances).
It sounds to me like the Agent is capping your requests at the default level of 5 per host. Your tests show that cranking up the agent's maxSockets helped; you should do that.
You can prove this is the issue by firing up a packet sniffer, or adding more debugging code to your application, to show that this is the limiting factor.
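One cheap form of that debugging code: the global agent exposes its pool state, so you can log open and queued requests per host (a sketch; the one-second interval is arbitrary):

var http = require('http');

// Log the global agent's pool state once per second. If 'queued' stays
// non-zero while 'open' sits at maxSockets, the pool is the bottleneck.
setInterval(function () {
    Object.keys(http.globalAgent.sockets).forEach(function (host) {
        var open = http.globalAgent.sockets[host].length;
        var queued = (http.globalAgent.requests[host] || []).length;
        console.log(host, 'open:', open, 'queued:', queued);
    });
}, 1000);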
http://engineering.linkedin.com/nodejs/blazing-fast-nodejs-10-performance-tips-linkedin-mobile
Disable the agent altogether.
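Concretely, either remedy is a couple of lines in Node 0.10-era code. A rough sketch (the cap of 50 and the example host are placeholders to tune):

var http = require('http');
var https = require('https');

// Option 1: raise the per-host cap on the shared agents
// (Node 0.10 and earlier default to 5 sockets per host).
http.globalAgent.maxSockets = 50;
https.globalAgent.maxSockets = 50;

// Option 2: opt a request out of pooling entirely.
http.get({host:'api.example.com', path:'/v1/data', agent:false}, function (res) {
    res.resume(); // drain the response so the socket is released
});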

Why is node.js only processing six requests at a time?

We have a node.js server which implements a REST API as a proxy to a central server which has a slightly different, and unfortunately asymmetric REST API.
Our client, which runs in various browsers, asks the node server to get the tasks from the central server. The node server gets a list of all the task ids from the central one and returns them to the client. The client then makes two REST API calls per id through the proxy.
As far as I can tell, this stuff is all done asynchronously. In the console log, it looks like this when I start the client:
Requested GET URL under /api/v1/tasks/*: /api/v1/tasks/
This takes a couple seconds to get the list from the central server. As soon as it gets the response, the server barfs this out very quickly:
Requested GET URL under /api/v1/tasks/id/:id :/api/v1/tasks/id/438
Requested GET URL under /api/v1/workflow/id/:id :/api/v1/workflow/id/438
Requested GET URL under /api/v1/tasks/id/:id :/api/v1/tasks/id/439
Requested GET URL under /api/v1/workflow/id/:id :/api/v1/workflow/id/439
Requested GET URL under /api/v1/tasks/id/:id :/api/v1/tasks/id/441
Requested GET URL under /api/v1/workflow/id/:id :/api/v1/workflow/id/441
Then, each time a pair of these requests gets a result from the central server, another two lines are barfed out very quickly.
So it seems our node.js server is only willing to have six requests out at a time.
There are no TCP connection limits imposed by Node itself. (The whole point is that it's highly concurrent and can handle thousands of simultaneous connections.) Your OS may limit TCP connections.
It's more likely that you're either hitting some kind of limitation of your backend server, or you're hitting the built-in HTTP library's connection limit, but it's hard to say without more details about that server or your Node implementation.
Node's built-in HTTP library (and obviously any libraries built on top of it, which is most of them) maintains a connection pool (via the Agent class) so that it can utilize HTTP keep-alives. This helps increase performance when you're running many requests against the same server: rather than opening a TCP connection, making an HTTP request, getting a response, closing the TCP connection, and repeating, new requests can be issued on reused TCP connections.
In node 0.10 and earlier, the HTTP Agent will only open 5 simultaneous connections to a single host by default. You can change this easily: (assuming you've required the HTTP module as http)
http.globalAgent.maxSockets = 20; // or whatever
node 0.12 sets the default maxSockets to Infinity.
You may want to keep some kind of connection limit in place. You don't want to completely overwhelm your backend server with hundreds of HTTP requests in under a second – performance will most likely be worse than if you just let the Agent's connection pool do its thing, throttling requests so as not to overload your server. Your best bet will be to run some experiments to see what the optimal number of concurrent requests is in your situation.
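One way to keep such a cap, while still raising it from the default, is to give the backend its own agent rather than touching the global one (a sketch; the host name and the cap of 20 are placeholders):

var http = require('http');

// A dedicated pool for the central server, capped at a deliberate size.
var backendAgent = new http.Agent();
backendAgent.maxSockets = 20;

http.get({host:'central.example.com', path:'/api/v1/tasks/', agent:backendAgent}, function (res) {
    res.resume();
});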
However, if you really don't want connection pooling, you can simply bypass the pool entirely by setting agent to false in the request options:
http.get({host:'localhost', port:80, path:'/', agent:false}, callback);
In this case, there will be absolutely no limit on concurrent HTTP requests.
It's the limit on number of concurrent connections in the browser:
How many concurrent AJAX (XmlHttpRequest) requests are allowed in popular browsers?
I have upvoted the other answers, as they helped me diagnose the problem. The clue was that node's socket limit was 5, and I was getting 6 at a time. 6 is the limit in Chrome, which is what I was using to test the server.
How are you getting data from the central server? "Node does not limit connections" is not entirely accurate when making HTTP requests with the http module. Client requests made in this way use the http.globalAgent instance of http.Agent, and each http.Agent has a setting called maxSockets which determines how many sockets the agent can have open to any given host; this defaults to 5.
So, if you're using http.request or http.get (or a library that relies on those methods) to get data from your central server, you might try changing the value of http.globalAgent.maxSockets (or modify that setting on whatever instance of http.Agent you're using).
See:
http.Agent documentation
agent.maxSockets documentation
http.globalAgent documentation
Options you can pass to http.request, including an agent parameter to specify your own agent
Node.js can handle thousands of incoming requests, yes!
But when it comes to outgoing requests, every request has to do a DNS lookup first, and DNS lookups (like disk reads) are handled by libuv, which is written in C++. The default thread pool size for each Node process is 4 threads.
If all 4 threads are busy with the DNS lookups for outgoing HTTPS requests, further requests are queued. That is why, no matter how brilliant your code might be, you sometimes see only 6 or fewer concurrent outgoing requests completed per second.
Look into DNS caching to reduce the number of DNS lookups, and increase the libuv thread pool size. If you use PM2 to manage your Node processes, they have good documentation on their site about environment variables and how to inject them. What you are looking for is the environment variable UV_THREADPOOL_SIZE (default: 4).
You can set the value anywhere between 1 and the maximum of 1024, but keep in mind that libuv's limit of 1024 applies across all event loops.
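For example (a sketch; the pool size of 16 is arbitrary, and setting the variable from inside the script reportedly does not take effect on Windows, so the shell form is the safer option):

// Safest: set it in the shell before the process starts:
//   UV_THREADPOOL_SIZE=16 node server.js
// Setting it at the very top of the entry script, before anything
// touches the thread pool (dns.lookup, fs, crypto), also works on
// most platforms:
process.env.UV_THREADPOOL_SIZE = 16;

var dns = require('dns');
dns.lookup('example.com', function (err, address) {
    if (!err) console.log('resolved to', address);
});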
I have seen the same problem on my server. It was only processing 4 requests.
As explained already, from 0.12 maxSockets defaults to Infinity. That easily overwhelms the server. Limiting the requests, for example with
http.globalAgent.maxSockets = 20;
solved my problem.
Are you sure it just returns the results to the client? Node processes everything in one thread. So if you do some fancy response parsing, or anything else that doesn't yield, it will block all your requests.
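A contrived sketch of that failure mode with the bare http module (the busy loop stands in for any parsing that doesn't yield):

var http = require('http');

http.createServer(function (req, res) {
    // Synchronous, CPU-bound work runs on the single JavaScript thread,
    // so every other request queues behind it until it finishes.
    var sum = 0;
    for (var i = 0; i < 1e9; i++) sum += i;
    res.end('done: ' + sum);
}).listen(3000);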
