Mimicking with Jmeter - increase keepalive on static resources - browser

I'm running JMeter 2.13 and was wondering how to mimic a browser.
I've done everything I can to ensure the headers are the same (keep-alive, "Retrieve All Embedded Resources", "Use concurrent pool"), but when I monitor my Apache server-status I can see the main page request being kept alive while the static resources are open/download/close. If I compare this to accessing the site with IE, I see a longer keep-alive on the static resources.
Does anyone have any suggestions for extending the keep-alive on the static resources?

By default, the thread(s) which download embedded resources inherit all the settings from the parent sampler, including:
connect timeout
response timeout
follow redirects
keep alive or close the connection
You can look into the HTTPHC4Impl.java source yourself; the relevant method is:
protected void setupRequest(URL url, HttpRequestBase httpRequest, HTTPSampleResult res)
The difference between JMeter and browser behaviour may be caused by a missing HTTP Cache Manager. Real-life browsers download embedded resources only once; on subsequent requests the resource is returned from the browser's cache. Well-behaved browsers also send a "Connection: close" header in order to release server and client resources.
So double-check the settings in your HTTP Request Defaults test element (remember that local HTTP Request sampler settings override the defaults).

Related

How many times does a browser connect to a web server when retrieving static content?

In HTTP 1.0 I know that a new socket connection is made as soon as the browser sends a new GET request. I was wondering if the browser sends the GET request for each individual file in the website. For example, let's say we have a static website with 3 image files and the index.html file. When we connect to the server, does the browser send 4 separate requests (aka 4 different connections), or does it only connect to the website once and retrieve all the content (aka only 1 connection is enough)?
As explained in this answer (regarding HTTP 1.0 vs 1.1), in v1.0 every request is sent in a separate connection, so that would be 4, however, due to caching mechanisms (which are available in v1.0), the browser might not send any request at all, and hence not open any connection.
If you open the developer console in a browser and look at Network (in Chrome), it shows you all of the requests that are made. The browser makes an individual request for each resource. Also, if an image is used 20 times, it is requested once and displayed 20 times. Although all of these requests are made separately, they could still all be done through the same connection, as a request and a connection are not the same thing. Hope this gives you a bit of direction. These two links may give you a bit more information on connections to the server.
https://en.wikipedia.org/wiki/HTTP_persistent_connection
https://en.wikipedia.org/wiki/HTTP_pipelining

Response body missing characters

I've seen this issue happen on multiple machines, using different languages and server-side environments. It seems to always be IIS, but it may be more widespread.
On slower connections, characters are occasionally missing from the response body. It happens somewhere between 25% and 50% of the time but only on certain pages, and only on a slow connection such as VPN. A refresh usually fixes the issue.
The current application in question is .NET 4 with SQL Server.
Example:
<script>
document.write('Something');
</script>
is being received by the client as
<scrit>
document.write('Something');
</script>
This causes the JavaScript inside the tag to instead be printed to the page, rather than executing.
Does anyone know why this occurs? Is it specific to IIS?
Speaking generally, the problem you describe would require corruption at the HTTP layer or above, since TCP/IP has checksums, packet lengths, sequence numbers, and re-transmissions to avoid this sort of issue.
That leaves:
The application generating the data
Any intermediate filters between the application and the server
The HTTP server returning the data
Any intermediary HTTP proxies, transparent or otherwise
The HTTP client requesting the data
The user-agent interpreting the data
You can diagnose further based on a network capture performed at the server edge and at the client edge.
Examine the request made by the client at the client edge to verify that the client is making a request for the entire document, and is not relying upon cache (no Range or If-* headers).
If the data is correct when it leaves the server (pay particular attention to the Content-Length header and verify it is a 200 response), neither the server nor the application are at fault.
If the data is correct as received by the client, you can rule out intermediary proxies.
If there is still an issue, it is a user-agent issue.
If I had to psychically debug such a problem, I would look first at the application to ensure it is generating the correct document, then assume some interloper is modifying the data in transit. (Some HTTP proxy for WAN acceleration, aggressive caching, virus scanning, etc...) I might also assume some browser plugin or ad blocker is modifying the response after it is received.
I would not, however, assume it is the HTTP server without very strong evidence. If corruption is detected on the client, but not the server, I might disable TCP Offload and look for an updated NIC Driver.
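As a small client-edge sanity check, here is a sketch (the helper name is mine) that compares the declared Content-Length against the bytes actually received — the first thing a truncated response would fail:

```javascript
// Sketch (helper name is ours): flag truncated responses by comparing
// the declared Content-Length against the bytes actually received.
// Chunked responses carry no Content-Length and are skipped.
function checkContentLength(headers, body) {
  const declared = Number(headers['content-length']);
  if (!Number.isFinite(declared)) {
    return { ok: true, note: 'no Content-Length (chunked transfer?)' };
  }
  const received = Buffer.byteLength(body);
  return { ok: received === declared, declared, received };
}

// e.g. buffer an http.IncomingMessage's body into a string or Buffer,
// then: checkContentLength(res.headers, bodyBuffer)
```

Note that a match does not prove the body is uncorrupted (bytes could be altered without changing the length, as in the `<scrit>` example above), but a mismatch is strong evidence of truncation in transit.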

Node.js, chunked encoding, and persistent/reusable sockets

I'm building an application that talks to a vendor API. The API expects chunked-encoding POST connections over HTTPS (and talks XML, but that's irrelevant). They also recommend multiplexing requests over one socket connection (not doing so has security limitations, as described below).
Node's core https works to the extent that I can establish one connection with the remote web service. I just set 'Transfer-Encoding': 'chunked' header. The problems start when I make a second request to cancel the first request at the remote web service API. The second request comes in on a separate connection and as such is not authorized to affect the conditions set in the first request – only requests that would be on the same socket would be authorized to do so.
What are my options to make this happen with Node.js? I've been looking at Mikeal Rogers' request library, but have not been having luck with it thus far.
Any ideas what route I could go? Many thanks for any insights!

Why is node.js only processing six requests at a time?

We have a node.js server which implements a REST API as a proxy to a central server which has a slightly different, and unfortunately asymmetric REST API.
Our client, which runs in various browsers, asks the node server to get the tasks from the central server. The node server gets a list of all the task ids from the central one and returns them to the client. The client then makes two REST API calls per id through the proxy.
As far as I can tell, this stuff is all done asynchronously. In the console log, it looks like this when I start the client:
Requested GET URL under /api/v1/tasks/*: /api/v1/tasks/
This takes a couple seconds to get the list from the central server. As soon as it gets the response, the server barfs this out very quickly:
Requested GET URL under /api/v1/tasks/id/:id :/api/v1/tasks/id/438
Requested GET URL under /api/v1/workflow/id/:id :/api/v1/workflow/id/438
Requested GET URL under /api/v1/tasks/id/:id :/api/v1/tasks/id/439
Requested GET URL under /api/v1/workflow/id/:id :/api/v1/workflow/id/439
Requested GET URL under /api/v1/tasks/id/:id :/api/v1/tasks/id/441
Requested GET URL under /api/v1/workflow/id/:id :/api/v1/workflow/id/441
Then, each time a pair of these requests gets a result from the central server, another two lines are barfed out very quickly.
So it seems our node.js server is only willing to have six requests out at a time.
There are no TCP connection limits imposed by Node itself. (The whole point is that it's highly concurrent and can handle thousands of simultaneous connections.) Your OS may limit TCP connections.
It's more likely that you're either hitting some kind of limitation of your backend server, or you're hitting the builtin HTTP library's connection limit, but it's hard to say without more details about that server or your Node implementation.
Node's built-in HTTP library (and obviously any libraries built on top of it, which is most of them) maintains a connection pool (via the Agent class) so that it can utilize HTTP keep-alives. This helps increase performance when you're running many requests to the same server: rather than opening a TCP connection, making an HTTP request, getting a response, closing the TCP connection, and repeating, new requests can be issued on reused TCP connections.
In Node 0.10 and earlier, the HTTP Agent will only open 5 simultaneous connections to a single host by default. You can change this easily (assuming you've required the HTTP module as http):
http.globalAgent.maxSockets = 20; // or whatever
Node 0.12 sets the default maxSockets to Infinity.
You may want to keep some kind of connection limit in place, though. You don't want to completely overwhelm your backend server with hundreds of HTTP requests in under a second; performance will most likely be worse than if you just let the Agent's connection pool do its thing, throttling requests so as not to overload your server. Your best bet is to run some experiments to see what the optimal number of concurrent requests is in your situation.
However, if you really don't want connection pooling, you can bypass the pool entirely by setting agent to false in the request options:
http.get({host:'localhost', port:80, path:'/', agent:false}, callback);
In this case, there will be absolutely no limit on concurrent HTTP requests.
It's the limit on number of concurrent connections in the browser:
How many concurrent AJAX (XmlHttpRequest) requests are allowed in popular browsers?
I have upvoted the other answers, as they helped me diagnose the problem. The clue was that node's socket limit was 5, and I was getting 6 at a time. 6 is the limit in Chrome, which is what I was using to test the server.
How are you getting data from the central server? "Node does not limit connections" is not entirely accurate when making HTTP requests with the http module. Client requests made in this way use the http.globalAgent instance of http.Agent, and each http.Agent has a setting called maxSockets which determines how many sockets the agent can have open to any given host; this defaults to 5.
So, if you're using http.request or http.get (or a library that relies on those methods) to get data from your central server, you might try changing the value of http.globalAgent.maxSockets (or modify that setting on whatever instance of http.Agent you're using).
See:
http.Agent documentation
agent.maxSockets documentation
http.globalAgent documentation
Options you can pass to http.request, including an agent parameter to specify your own agent
Node.js can handle thousands of incoming requests, yes!
But when it comes to outgoing requests, every request has to deal with a DNS lookup, and DNS lookups, disk reads, etc. are handled by libuv, which is written in C++. The default thread pool size for each Node process is 4 threads.
If all 4 threads are busy with HTTPS requests (DNS lookups), other requests will be queued. That is why, no matter how brilliant your code might be, you sometimes get 6 or fewer concurrent outgoing requests completed per second.
Learn about DNS caching to reduce the number of DNS lookups, and increase the libuv thread pool size. If you use PM2 to manage your Node processes, they have good documentation on their site about environment variables and how to inject them. What you are looking for is the environment variable UV_THREADPOOL_SIZE=4.
You can set the value anywhere between 1 and the maximum of 1024. But keep in mind that the libuv limit of 1024 applies across all event loops.
I have seen the same problem in my server. It was only processing 4 requests.
As explained already, from 0.12 onward maxSockets defaults to Infinity. That can easily overwhelm the server. Limiting the requests to, say, 20 with
http.globalAgent.maxSockets = 20;
solved my problem.
Are you sure it just returns the results to the client? Node processes everything in one thread. So if you do some fancy response parsing or anything else that doesn't yield, it will block all your requests.

Multiple Tabs in the same browser and IIS Concurrent Connection

I understand that multiple tabs in a single browser share the same session. But do they use the same concurrent connection?
More specifically, will multiple tabs in the same browser pointed at the same website create multiple concurrent connections, or share a common connection?
I will be using IIS as my webserver.
Thanks.
There are many dynamics to this, and it depends on how you configure your website and the application pool. For a standard website created with IIS with little to no configuration changes, a single user (browser) will issue a single connection over which multiple requests will take place. Of course, a single request will block until completion.
That being said, browsers more or less have a concurrent-connection limit. It used to be 2, but it has changed depending on which browser you use; I think Chrome is currently at 6.
Thirdly, browsers are a little smarter these days: upon grabbing a page, they will open multiple requests (through a single connection, if HTTP Keep-Alive is set on IIS, which is the default) which fetch the images (resources) and the HTML concurrently.
HTH
