While loading a report page manually the browser crashed and failed to load the data, but the VuGen script runs fine and gets a 200 OK response - performance-testing

While manually loading a report containing ~50K records, the web browser crashed and failed to respond, but the VuGen script passed with the same 50K records, returning 200 OK for the same request. So does this mean that from a VuGen script (Web HTTP protocol) we cannot find performance issues like this, where the browser fails to load the web page?

If you are using Web HTTP, you are only simulating the transport layer, so you will not be able to find browser issues.
For browser issues you will have to run a functional test with a functional testing tool (e.g. UFT).
You can "cheat", though, and use 1 TruClient virtual user in conjunction with the other Web HTTP virtual users; then you will most likely be able to detect the problem.

Related

REST API In Node Deployed as Azure App Service 500 Internal Server Errors

I have looked at the request trace for several requests that resulted in the same outcome.
What will happen is I'll get an HttpModule="iisnode", Notification="EXECUTE_REQUEST_HANDLER", HttpStatus=500, HttpReason="Internal Server Error", HttpSubstatus=1013, ErrorCode="The pipe has been ended. (0x6d)"
This is a production API. Fewer than 1% of requests get this result but it's not the requests themselves - I can reissue the same request and it'll work.
I log telemetry for every API request - basics on the way in, things like http status and execution time as the response is on its way out.
None of the requests that get this error are in telemetry which makes me think something is happening somewhere between IIS and iisnode.
If anyone has resolved this or has solid thoughts on how to pin down what the root issue is I'd appreciate it.
Well, for me, what's described here covered the bulk of the issue: github.com/Azure/iisnode/issues/57. Setting keepAliveTimeout to 0 on the Express server reduced the 500s significantly.
Once the majority of the "noise" was eliminated, it was much easier to correlate the remaining 500s with things I could see in my logs. For example, I'm using a third-party node package to resize images, and a couple of the "images" that were loaded into the system weren't images. Instead of gracefully throwing an exception, the package seems to exit the running node process. True story. So on Azure it would get restarted, but while that was happening, requests would get a 500 Internal Server Error.

Why is an Azure VM not receiving an HTTP GET response?

I've encountered an interesting problem when trying to make an HTTP request from an Azure VM. It appears that when the request is run from this VM, the response never arrives. I tried using custom C# code that makes an HTTP request, and Postman. In both cases we can see in the logs on the target API side that the response has been sent, but no data is received on the origin VM. The exact same C# request and Postman request work outside of this VM on multiple networks and machines. The only tool that actually works for this request on the VM side is curl in a Bash terminal, but that is not an option given the current requirements.
Tried on multiple Azure VM sizes, on Windows 10 and Windows Server 2019.
The target API is on-premise hosted and it requires around 5 minutes for the data to be sent back. Payload is very small but due to the computing performed on the API side it takes a while to generate. Modifying this API is not an option.
So, to be clear: the requests are perpetually stuck until the timeout on the client side is reached (if one was configured). Does anybody know what could be the reason for this?
If these transfers take longer than 4 minutes without keep-alives, Azure will typically close the connection.
You should be able to see this by monitoring the connection with Wireshark.
TCP timeouts can be configured when using a Load Balancer, but you can also try adding keep-alives in your API server if possible.

Getting a 500 server error. My site works fine and only throws these errors occasionally. How can I better diagnose the problem?

My site works fine locally. It even works fine with my backend using Azure web services and my front end using Netlify, but occasionally, after several API calls (I'm not overloading the server; these API calls are made one by one), I get LOTS of errors that are all the same: 500 Internal Server Error. I look at the logs and they give me some numbers: 500 1013 109 329 2144 391
Possible reasons for this are:
- A network issue on your server
- A server request timeout
- The web app takes too long to respond to a request when connecting to a resource (database, different server, etc.)
To resolve that, I would suggest you increase the idle timeout of your app:
in the app settings of your web app, add SCM_COMMAND_IDLE_TIMEOUT = 3600
By default, Web Apps are unloaded if they are idle for some period of time. This lets the system conserve resources. In Basic or Standard mode, you can enable 'Always On' to keep the app loaded all the time.
You may also check the diagnostic log stream to get more details on this issue, and the blog post Troubleshooting Azure App Service Apps Using Web Server Logs.
Hope it helps.

How many times does a browser connect to a web server when retrieving static content?

In HTTP 1.0, I know that a new socket connection is made as soon as the browser sends a new GET request. I was wondering if the browser sends a GET request for each individual file on the website. For example, let's say we have a static website with 3 image files and the index.html file. When we connect to the server, does the browser send 4 separate requests (i.e. 4 different connections), or does it connect to the website only once and retrieve all the content (i.e. 1 connection is enough)?
As explained in this answer (regarding HTTP 1.0 vs 1.1), in v1.0 every request is sent over a separate connection, so that would be 4. However, due to caching mechanisms (which are available in v1.0), the browser might not send any request at all, and hence not open any connection.
If you open the developer console in a browser and look at Network (in Chrome), it shows you all of the requests that are made. The browser makes an individual request for each resource. Also, if an image is used 20 times, it will be requested once and displayed 20 times. Although all of these requests are made separately, they could still all be sent over the same connection, as a request and a connection are not the same thing. Hope this gives you a bit of direction. These two links may give you a bit more information on connections to the server.
https://en.wikipedia.org/wiki/HTTP_persistent_connection
https://en.wikipedia.org/wiki/HTTP_pipelining

Load test a Backbone App

I've got an NGinx/Node/Express3/Socket.io/Redis/Backbone/Backbone.Marionette app that proxies requests to a PHP/MySQL REST API. I need to load test the entire stack as a whole.
My app takes advantage of static asset caching with NGinx and clustering with node/express, and Socket.io is multi-core enabled using Redis. All that's to say, I've gone through a lot of trouble to try to make sure it can stand up to the load.
I hit it with 50,000 users in 10 seconds using blitz.io and it didn't even blink... which concerned me, because I wanted to see it crash, or at least breathe a little heavy. But 50k was the max you could throw at it with that tool, indicating to me that they expect you to not reasonably be able to, or need to, handle more than that. That's when I realized it wasn't actually incurring the load I was expecting, because the real load is initiated only after the page loads, when the Backbone app starts up, kicks off the socket connection, and requests the data from the correct REST API endpoint (on a different server).
So, here's my question:
How can I load test the entire app as a whole? I need the load test to tax the server in the same way that the clients actually will, which means:
Request the single page Backbone app from my NGinx/Node/Express server
Kick off requests for the static assets from NGinx (simulating what the browser would do)
Kick off requests to the REST API (PHP/MySQL running on a different server)
Create the connection to the Socket.io service (running on NGinx/Node/Express, utilizing Redis to handle multi-core junk)
If the testing tool uses a browser-like environment to load the page, parsing the JS and running it, everything will be copacetic (the NGinx/Node/Express server will get hit, and so will the PHP/MySQL server). Otherwise, the testing tool will need to simulate this by firing off at least a dozen different kinds of requests nearly simultaneously. Otherwise it's like stress-testing a door by looking at it 10,000 times (that is to say, it's pointless).
I need to ensure my app can handle 1,000 users hitting it in under a minute all loading the same page.
You should learn to use Apache JMeter: http://jmeter.apache.org/
You can perform stress tests with it;
see this tutorial: https://www.youtube.com/watch?v=8NLeq-QxkSw
As you said, "I need the load test to tax the server in the same way that the clients actually will."
That means the test is agnostic to the technology you are using.
I highly recommend JMeter; it is widely used, and you can integrate it with Jenkins and do a lot of cool stuff with it.
