How to stress test a Node.js API behind NGINX?

I have an API on a remote server that I need to stress test. It is sitting behind an NGINX reverse proxy that issues a 301 redirect to the API app running behind it.
Normal requests and Postman calls all work fine and I get 200s back. As soon as I use something like AutoCannon I get 3xx responses instead of 200s, and the requests never hit the actual Node.js app.
Is there some special configuration I need to do on NGINX to allow stress tests to occur?

Given that you're sending the same request, you should get the same response, so double-check the URL, request body, request headers and so on. Compare the requests originating from Postman and AutoCannon using an external sniffer tool like Fiddler or Wireshark, identify the differences, and amend the AutoCannon configuration so it sends exactly the same request as Postman does.
You might want to switch to another load testing tool like Apache JMeter, which comes with an HTTP(S) Test Script Recorder, so you can record the request from your browser (or another application like Postman) instead of guessing what's wrong with your AutoCannon setup. There is also the JMeter Chrome Extension, so if you can access your API via a browser you can record a JMeter script without having to worry about proxies and certificates.
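If the goal is to make AutoCannon send exactly what Postman sends, its programmatic API lets you set the method, headers and body explicitly. A minimal sketch (the URL, token and payload below are placeholders, not values from the question); it is also worth checking whether each tool handles the proxy's 301 the same way, since Postman follows redirects by default:

// Minimal AutoCannon sketch. URL, headers and body are placeholders;
// copy the exact values you see in Postman or the sniffer.
const autocannon = require('autocannon');

autocannon({
  url: 'https://example.com/api/endpoint',  // hypothetical endpoint
  method: 'POST',
  headers: {
    'content-type': 'application/json',
    authorization: 'Bearer <token>'         // whatever Postman sends
  },
  body: JSON.stringify({ hello: 'world' }), // same payload as Postman
  connections: 10,
  duration: 20
}, (err, result) => {
  if (err) throw err;
  console.log('non-2xx responses:', result.non2xx); // counts the 3xx you are seeing
});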

Related

Keep on getting Unauthorized from Web API

I have a project: it's a web application that requires Windows Authentication.
I've set up an Active Directory at home using my NAS virtualization. Then I created a VMware server for IIS, which is a member of that domain, on the desktop I also use for development. I created the Web API and installed it on that VMware server. When I call a routine directly, it works and returns results, but when I use the Web API routine from my JavaScript web application I keep getting a 401 error. When I put the code on the IIS server, the web application works.
I've seen a lot of suggested solutions, like changing the order of the providers in IIS Authentication, adding Everyone read/write permission on the folders, and adding entries to web.config. But none of them work.
***** Update as requested in the comments *****
Below is when I run directly from Web API
Calling the Web API from Javascript
Here's the error I'm getting
Just FYI, I tried running the Web API from Visual Studio on the same machine, but I also get a 401 error.
Is there anything I could add in AD to make my development machine trusted?
***** A new issue after the code change *****
***** Another update *****
This is definitely weird, so I installed Fiddler 4 to see what's going on. But still no luck.
Then I made changes to the IIS HTTP Response Headers.
The weird thing is that when I run Fiddler the error is gone, but when I close it, it comes back.
There are two things going on here:
A 401 response is a normal first step in Windows Authentication. The client is then expected to resend the request with credentials. AJAX requests don't do this automatically unless you tell them to.
To tell it to send credentials in a cross-domain request (more on that later), you need to set the withCredentials option when you make the request in JavaScript.
With jQuery, that looks like this:
$.ajax({
    url: url,
    xhrFields: {
        withCredentials: true
    }
}).then(callback);
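If you are not using jQuery, the same thing with the Fetch API looks like this (a sketch; url and callback are the same placeholders as above):

// Fetch API equivalent: credentials must be requested explicitly
// for a cross-origin call.
fetch(url, { credentials: 'include' })
  .then(response => response.json())
  .then(callback);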
These problems pop up when the URL in the address bar of the browser is different than the URL of the API you are trying to connect to in the JavaScript. Browsers are very picky about when this is allowed. These are called "cross-domain requests", or "Cross-Origin Resource Sharing" (CORS).
The browser looks at the protocol, domain name and port. So if the website is http://localhost:8000 and it makes an AJAX request to http://localhost:8001, that is still considered a cross-domain request.
When a cross-domain AJAX request is made, the browser may first send a preflight OPTIONS request to the URL, which contains the origin of the website making the request (e.g. http://localhost:8000). The API is expected to return a response with an Access-Control-Allow-Origin header that says whether the website making the request is allowed to do so.
If you are not planning on sending credentials, then the Access-Control-Allow-Origin header can be *, meaning that the API allows anyone to call it.
However, if you need to send credentials, as you do, you cannot use *. The Access-Control-Allow-Origin header must specifically contain the domain (and port) of your webpage, and the Access-Control-Allow-Credentials header must be set to true. For example:
Access-Control-Allow-Origin: http://localhost:8000
Access-Control-Allow-Credentials: true
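The question is about IIS / ASP.NET Web API, where these headers would be configured on the server or in web.config, but as an illustration of the handshake, here is a minimal Node.js sketch that returns exactly those headers (the origin is the example value from above, and the OPTIONS branch is only needed for preflighted requests):

// Illustration only: a plain Node.js handler returning the CORS headers
// described above. In the real setup these would come from IIS / Web API.
const http = require('http');

http.createServer((req, res) => {
  res.setHeader('Access-Control-Allow-Origin', 'http://localhost:8000');
  res.setHeader('Access-Control-Allow-Credentials', 'true');

  if (req.method === 'OPTIONS') {
    // Preflight: allow the methods/headers the real request will use.
    res.setHeader('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
    res.setHeader('Access-Control-Allow-Headers', 'Content-Type');
    res.writeHead(204);
    return res.end();
  }

  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ ok: true }));
}).listen(8001);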
It's a bit of a pain in the butt, yes. But it's necessary for security.
You can read more about CORS here: Cross-Origin Resource Sharing (CORS)

Is it possible to distinguish a request URL as one typed in the address bar, in order to log it in a node proxy?

I just could not get the http-proxy module to work properly as a forward proxy. It works great as a reverse proxy. Therefore, I have implemented a node-based forward proxy using node's http and net modules. It works fine, both with http and https. I will deal with websockets later. Among other things, I want to log the URLs visited or requested through a browser. In the request object, I do get the URL, but as expected, when a page loads, a zillion other requests are triggered, including AJAX, third-party ads, etc. I do not want to log these.
I know that I can distinguish an AJAX request from the x-requested-with header. I can distinguish requests coming from a browser by examining the user-agent header (though these can be spoofed through cURL). I want to minimize the log entries.
How do commercial proxies log such info? Or do they just log every request? One way would be to not log any requests within a certain time after the main request, presuming that they are all associated with the main request. That would not be technically accurate, though.
I have researched this area but did not find any solution. I am not looking for any specific code, just some direction...
No one can know that with precision, but you can look for clues such as the HTTP Referer header, x-requested-with, or custom headers you add to each AJAX request (Squid, for example, sends an X-Forwarded-For header by default, which gives away that it is a proxy). But anybody can figure out which headers you are sending, or copy all the headers a common browser sends by default, and you will believe the request came from a person in a browser when it could be a cURL call sent by a bot.
So, really, you can't know for certain whether a request is, for example, an AJAX request, because these headers aren't mandatory; your browser or framework adds x-requested-with and other useful information by default, but it only helps you guess who is performing the request.
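As a starting point, the clues above can be turned into a logging filter in the proxy's request handler. A rough sketch (heuristic only; every one of these headers can be spoofed):

// Rough heuristic for deciding whether to log a proxied request.
// None of these checks is reliable on its own; they are just the clues
// discussed above.
function looksLikeTopLevelPageRequest(req) {
  const headers = req.headers || {};
  const accept = headers['accept'] || '';
  // XMLHttpRequest-based AJAX often (not always) sets this header.
  if (headers['x-requested-with'] === 'XMLHttpRequest') return false;
  // Sub-resources (images, scripts, ads) usually don't ask for HTML.
  if (!accept.includes('text/html')) return false;
  return true;
}

// In the proxy's request handler:
// if (looksLikeTopLevelPageRequest(req)) console.log(req.method, req.url);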

Apigee API endpoint gives 503 in the browser, but a 200 in Apigee trace and curl

We use an Apigee proxy to invoke our API. All works well when we test it within Apigee trace. It also works fine with curl. But in a browser it gives a 503. This is not consistent, though; sometimes it gives a 200 in the browser too. We tried Chrome and Firefox, same behavior.
Our API still executes well, though. We do not return any response, merely set the status. Any ideas on what we could try to get a 200 in the browser?
A couple of things to check:
Check whether your browser is caching DNS entries. Sometimes services like ELB change the actual IPs, so cached DNS entries may result in a 503.
Another thing to check is the HTTP verb used. Browsers send a GET request, but curl commands can send anything, so if your service specifically does not serve GET calls you may get server-side errors. Also, curl sends certain headers even if you do not explicitly set them, e.g. an Accept: */* header and a User-Agent header. Check whether the server behaves differently based on those headers.
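One quick way to test the header theory is to replay the browser's GET with roughly the headers a browser sends and compare the status with curl's. A sketch using Node 18+'s built-in fetch (the URL and header values are placeholders):

// Replay the browser's GET with browser-like headers to see whether
// the 503 depends on them. URL and values are placeholders.
const url = 'https://your-apigee-proxy.example.com/your-api'; // hypothetical

fetch(url, {
  method: 'GET',
  headers: {
    accept: 'text/html,application/xhtml+xml,*/*',
    'user-agent': 'Mozilla/5.0 (test)'
  }
})
  .then(res => {
    console.log(res.status);                      // compare with curl's result
    console.log(Object.fromEntries(res.headers)); // response headers
  })
  .catch(console.error);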
You should look into using Chrome or Firefox extensions for this. There are two in particular which support a wide range of additional features for API developers.
For Chrome, try Postman.
For Firefox, try RESTClient.
Thanks.

Using http protocol between servers

I have a configuration of two servers working on an intranet.
The first is a web server that produces HTML pages for the browser; this HTML sends requests to the second server, which produces and returns reports (also HTML) according to some GET parameter's value.
Since this solution is unsecured (the passed parameter is exposed), I thought about having the HTML (produced by the first server) send the report requests back to the first server instead. There, a security check would be made, and the request for the report would be sent to the reports server using HTTP between the servers, instead of from browser to server.
The report's markup would be returned to the first server (as a string?), added to the response object and presented in the browser.
Is this a common practice of http?
Yes, it's a common practice. In fact, it works the same way when your web server needs to fetch some data from a database (which is not publicly exposed, i.e. not in the web server's DMZ, for example).
But you need to be able to use dynamic page generation (not static HTML); let's suppose your web server allows PHP or Java, for example. Then:
your page does the equivalent of an HTTP GET (or POST, or whatever you like) to your second server, sending any required parameters. You can use cURL libraries, fopen('http://...'), etc.
it receives the result, checks the return code, and can also do optional content manipulation (like replacing some text or URLs)
it sends the result back to the user's browser.
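For comparison, here is a minimal sketch of those three steps in Node.js (18+ for built-in fetch) rather than PHP or Java, just to illustrate the flow; the report-server URL and the security check are placeholders:

// The three steps above, sketched in Node.js. The report-server URL and
// the security check are placeholders, not part of the original question.
const http = require('http');

http.createServer(async (req, res) => {
  // 1. Security check on the incoming browser request (placeholder logic).
  if (!req.headers['x-user']) {                  // hypothetical header
    res.writeHead(403);
    return res.end('Forbidden');
  }

  // 2. Server-to-server request to the report server; the browser never sees this URL.
  const reportRes = await fetch('http://reports.internal:8080/report?id=42'); // hypothetical
  if (!reportRes.ok) {
    res.writeHead(502);
    return res.end('Report server error');
  }
  const markup = await reportRes.text();

  // 3. Return the report markup to the user's browser.
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end(markup);
}).listen(8000);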
If you can't (or won't) use dynamic page generation, you can configure your webserver to proxy some requests to the second server (for example with Apache's mod_proxy).
For example, when a request comes to server 1 for URL "http://server1/reports", the webserver proxies a request to "http://server2/internal/reports?param1=value1&param2=value2&etc".
The user will get the result of "http://server2/internal/reports?param1=value1&param2=value2&etc", but will never see where it came from (from his point of view, he only knows http://server1/reports).
You can do more complex manipulations by combining proxying with URL rewriting (so you can use some parameters of the request to server1 in the request to server2).
If it's not clear enough, don't hesitate to give more details (OS, web server technology, URLs, etc.) so I can give you more hints.
Two other options:
Configure the Internet-facing HTTP server with a proxy (e.g. mod_proxy in Apache)
Leave the server as it is and add an Application Firewall

Debugging 403s?

What's the best way to investigate why a server is returning a 403 for an HTTP web request?
Can an IIS server be configured to provide a more detailed internal log for 403s?
A nice way to start is to analyze the returned headers (using curl -vvv, or inspecting the request with your favorite browser's developer tools).
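If curl isn't handy, the same information can be pulled from Node (a sketch; the URL is a placeholder). The status line, the WWW-Authenticate / Server response headers and the body often say which rule rejected the request:

// Print the status and response headers of the failing request.
// The URL is a placeholder.
fetch('https://example.com/protected/resource')
  .then(async res => {
    console.log(res.status, res.statusText);
    for (const [name, value] of res.headers) console.log(`${name}: ${value}`);
    console.log(await res.text());
  })
  .catch(console.error);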
