Using the HTTP protocol between servers - security

I have a configuration of two servers working in an intranet.
The first is a web server that produces HTML pages for the browser; this HTML sends requests to the second server, which produces and returns reports (also HTML) according to the value of a GET parameter.
Since this solution is insecure (the passed parameter is exposed), I thought about having the HTML (produced by the first server) send the report requests back to the first server. There, a security check would be made, and the request for the report would be sent to the reports server over HTTP between the servers, instead of from the browser to the server.
The report's markup would be returned to the first server (as a string?), added to the response object and presented in the browser.
Is this a common practice with HTTP?

Yes, it's a common practice. In fact, it works the same way when your web server needs to fetch some data from a database (not publicly exposed - i.e. not in the web server's DMZ, for example).
But you need to be able to use dynamic page generation (not static HTML; let's suppose your web server allows PHP or Java, for example).
Your page does the equivalent of an HTTP GET (or POST, or whatever you like) to your second server, sending any required parameters. You can use cURL libraries, fopen("http://..."), etc.
It receives the result, checks the return code, and can also do optional content manipulation (like replacing some text or URLs).
It sends the result back to the user's browser.
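As a concrete illustration (the answer assumes PHP or Java; any server-side stack works), here is a minimal Node.js/Express sketch of that flow. The host name, routes and the reportId parameter are placeholders, not anything prescribed by the answer:

const express = require('express');
const http = require('http');

const app = express();

app.get('/reports', (req, res) => {
  // 1. Security check goes here (session, permissions, parameter whitelist...).
  const reportId = encodeURIComponent(req.query.reportId || '');

  // 2. Server-to-server request over plain HTTP; the browser never sees this URL.
  http.get(`http://reports-server.intranet.local/internal/reports?reportId=${reportId}`, (upstream) => {
    if (upstream.statusCode !== 200) {
      upstream.resume(); // discard the body
      return res.status(502).send('Report server error');
    }
    res.set('Content-Type', 'text/html');
    upstream.pipe(res); // 3. Relay the report markup back to the browser
  }).on('error', () => res.status(502).send('Report server unreachable'));
});

app.listen(3000);

Streaming the upstream response (rather than buffering it as a string) keeps memory usage flat even for large reports, but either approach works.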
If you can't (or won't) use dynamic page generation, you can configure your web server to proxy some requests to the second server (for example with Apache's mod_proxy).
For example, when a request comes to server 1 for the URL "http://server1/reports", the web server proxies a request to "http://server2/internal/reports?param1=value1&param2=value2&etc".
The user will get the result of "http://server2/internal/reports?param1=value1&param2=value2&etc", but will never see where it comes from (from his point of view, he only knows http://server1/reports).
You can do more complex manipulations by combining proxying with URL rewriting (so you can use some parameters of the request to server1 in the request to server2).
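Purely as an illustration of the same idea in Node terms (the answer names Apache's mod_proxy; this sketch instead uses the http-proxy module that comes up further down this page, with the answer's placeholder host names), the proxy-plus-rewrite on server 1 could look like:

const http = require('http');
const httpProxy = require('http-proxy');

const proxy = httpProxy.createProxyServer({ target: 'http://server2' });
proxy.on('error', (err, req, res) => {
  res.writeHead(502);
  res.end('Report server unreachable');
});

http.createServer((req, res) => {
  if (req.url.startsWith('/reports')) {
    // Rewrite /reports?... to the internal path before proxying,
    // so the browser only ever sees http://server1/reports.
    req.url = req.url.replace(/^\/reports/, '/internal/reports');
    proxy.web(req, res);
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(80);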
If it's not clear enough, don't hesitate to give more details (OS, web server technology, URLs, etc.) so I can give you more hints.

Two other options:
Configure the Internet-facing HTTP server with a proxy (e.g. mod_proxy in Apache)
Leave the server as it is and add an Application Firewall

Related

How to stress test NodeJS API behind NGINX?

I have an API on a remote server that I need to stress test. It is sitting behind an NGINX reverse proxy that issues a 301 to the API app running behind it.
Normal requests and Postman requests all work fine and I get 200s back. As soon as I use something like AutoCannon I get 3xx instead of 200s and the requests never hit the actual Node.js app.
Is there some special configuration I need to do on NGINX to allow stress tests to occur?
Given you're sending the same request, you should get the same response, so double-check the URL, request body, request headers and so on. Compare the requests originating from Postman and AutoCannon using an external sniffer tool like Fiddler or Wireshark, identify the differences and amend the AutoCannon configuration so it sends exactly the same request that Postman does.
You might want to switch to another load testing tool like Apache JMeter, which comes with an HTTP(S) Test Script Recorder, so you can record the request from your browser (or another application like Postman) and won't have to guess what's wrong with your AutoCannon setup. There is also the JMeter Chrome Extension, so if you can access your API via a browser you will be able to record a JMeter script without having to worry about proxies and certificates.
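To make that concrete, a hedged sketch of "sending exactly the same request" through AutoCannon's programmatic API might look like the following; the URL, token and headers are placeholders, and the key points are to target the post-redirect URL directly and to copy Postman's headers verbatim:

const autocannon = require('autocannon');

autocannon({
  // Hit the URL the 301 points at (exact scheme, host, path and trailing slash),
  // not the one that triggers the redirect.
  url: 'https://api.example.com/v1/endpoint',
  connections: 10,
  duration: 20,
  headers: {
    // Copy whatever Postman sends and the proxy keys on (Host, Authorization, ...).
    authorization: 'Bearer <token>',
    accept: 'application/json',
  },
}, (err, result) => {
  if (err) throw err;
  console.log(result); // 2xx should dominate; a pile of non-2xx means the request still differs
});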

How to identify https clients through proxy connection

We have developed a corporate NodeJS application served over the http/2 protocol, and we need to identify clients by their IP address because the server needs to send events to clients based on their IP (basically some data about phone calls).
I can successfully get the client IP through req.connection.remoteAddress, but some of the clients can only reach the server through our proxy server.
I know about the x-forwarded-for header, but this doesn't work for us because proxies can't modify HTTP headers in SSL connections.
So I thought I could get the IP on the client side and send it back to the server, for example, during the login process.
But, if I'm not wrong, browsers don't provide that information to JavaScript, so we need a way to obtain it first.
Researching it, the only option I found is to obtain it from a server which can tell me the IP from which I'm reaching it.
Of course, through https I can't because of the proxy. But I can easily enable an http service just to serve the client IP.
But then I found out that browsers block HTTP connections from HTTPS-served pages because of the "mixed active content" issue.
Reading about it, I found that "mixed passive content" is allowed, and I succeeded in downloading garbage data as an image file through an <img> tag, but when I try to do the same thing using an <object> element I get a "mixed active content" block again, even though the MDN documentation says it's considered passive.
Is there any way to read that data via that (broken) <img> tag, or am I missing something to make the <object> element really passive?
Any other idea to achieve our goal will also be welcome.
Finally I found a solution:
As I said, I was able to perform an HTTP request by inserting an <img> tag.
What I was unable to do was read the downloaded data, regardless of whether it was an actual image or not.
...but the fact is that the request was made, and the URL it goes to is something I can decide beforehand.
So all I need to do is generate a random key for each served login screen and:
Remember it in association with the session data.
Insert a (possibly hidden) <img> tag pointing to some HTTP URL containing that key.
As soon as the HTTP server receives the request to download that image, it can read the real IP from the x-forwarded-for header (trusting your proxy, of course) and resolve which active session it belongs to.
Of course, you must also take care to clear keys, whether used or not, after a short time, to avoid a memory leak or the keys being reused with malicious intent.
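A minimal Node/Express sketch of that scheme, with illustrative host names, routes and timeouts (the actual IP-to-session association is left as a comment):

const express = require('express');
const crypto = require('crypto');

const pendingKeys = new Map(); // key -> { sessionId, expires }

// 1. When rendering the (HTTPS) login page, embed the probe tag.
function probeTag(sessionId) {
  const key = crypto.randomBytes(16).toString('hex');
  pendingKeys.set(key, { sessionId, expires: Date.now() + 60000 });
  return `<img src="http://probe.example.com/probe/${key}" width="1" height="1" alt="">`;
}

// 2. A separate plain-HTTP app whose only job is to serve the probe.
const probeApp = express();
probeApp.get('/probe/:key', (req, res) => {
  const entry = pendingKeys.get(req.params.key);
  if (entry && entry.expires > Date.now()) {
    // x-forwarded-for is trusted only because our own proxy sets it on plain HTTP.
    const realIp = req.headers['x-forwarded-for'] || req.socket.remoteAddress;
    // ...associate realIp with entry.sessionId here...
    pendingKeys.delete(req.params.key); // single use
  }
  res.status(204).end(); // nothing needs to be displayed
});
probeApp.listen(8080);

A periodic sweep over pendingKeys that drops expired entries covers the cleanup mentioned above.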
FINAL NOTE: The only drawback of this approach is the risk that, some day, browsers could start blocking mixed passive content too by default.
For this reason I in fact opted for a double-strategy approach. That is: in addition to the technique explained above, I also implemented an HTTP redirector which does almost the same thing: it redirects all requests to the root route ("/") to our HTTPS app. But it does so via a POST request containing a key which has previously been associated with the client's real IP.
This way, if some day the first approach stops working, users would still be able to access the site first over HTTP, which is in fact what we are going to do. But the first approach, as long as it keeps working, avoids problems if users decide to bookmark the page from within the app (which would result in a bookmark to its HTTPS URL).

Sending an AJAX request: do I need to open the firewall from the customer IP to the application server?

I open a JSP page which loads from an IHS web server. When I click a button on the JSP page, the page sends an AJAX request to an Application server.
I am confused: is the request sent from the client or from IHS?
Do I need to open the firewall?
An AJAX request is typically performed by your browser. Most of the time, you're just loading a page that performs some additional requests to the same server using JavaScript. If the web page provided by your first server is making requests to a different server, you may encounter problems relating to CORS. The admin of the Application server will need to configure headers that allow requests initiated by pages served from your first server.
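Purely as an illustration of the kind of headers meant here (the real application server is presumably not Node, and the origin below is a placeholder for the IHS host), a CORS setup can be sketched like this:

const express = require('express');
const app = express();

app.use((req, res, next) => {
  res.set('Access-Control-Allow-Origin', 'https://ihs.example.com'); // the page's origin
  res.set('Access-Control-Allow-Methods', 'GET, POST, OPTIONS');
  res.set('Access-Control-Allow-Headers', 'Content-Type, X-Requested-With');
  if (req.method === 'OPTIONS') return res.sendStatus(204); // answer the preflight
  next();
});

app.post('/api/report', (req, res) => res.json({ ok: true }));
app.listen(9080);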
Unless your firewall is very tightly configured to only allow whitelisted traffic, it's unlikely to require any changes.

Is it possible to distinguish a request URL as one typed in the address bar, for logging in a node proxy?

I just could not get the http-proxy module to work properly as a forward proxy. It works great as a reverse proxy. Therefore, I have implemented a node-based forward proxy using node's http and net modules. It works fine, both with http and https. I will deal with websockets later. Among other things, I want to log the URLs visited or requested through a browser. In the request object, I do get the URL, but as expected, when a page loads, a zillion other requests are triggered, including AJAX, third-party ads, etc. I do not want to log these.
I know that I can distinguish an AJAX request from the x-requested-with header. I can distinguish requests coming from a browser by examining the user-agent header (though these can be spoofed through cURL). I want to minimize the log entries.
How do commercial proxies log such info? Or do they just log every request? One way would be to not log any requests within a certain time after the main request presuming that they are all associated with the main request. That would not be technically accurate.
I have researched in this area but did not find any solution. I am not looking for any specific code, just some direction...
No one can know that with precision, but you can find clues such as the "Referer" or "x-requested-with" headers, or add your own custom headers to each AJAX request (the Squid proxy, for example, sends an "X-Forwarded-For" header by default, which reveals it is a proxy). However, anybody can figure out which headers you are checking, or copy all the headers a common browser sends by default, and you will believe it is a person using a browser when it could be a cURL call sent by a bot.
So, really, you can't know for certain whether, for example, a request is an AJAX request, because those headers aren't mandatory; by default your browser or your framework adds x-requested-with or other useful information that only helps you "guess" who is performing the request.
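If you still want to trim the log, a rough heuristic along these lines is about the best you can do (a sketch only; as noted above, every one of these checks can be spoofed):

// Heuristic: treat a request as a top-level page load worth logging.
function looksLikeTopLevelPageLoad(req) {
  const accept = req.headers['accept'] || '';
  const xRequestedWith = req.headers['x-requested-with'];
  const userAgent = req.headers['user-agent'] || '';

  return (
    !xRequestedWith &&              // typical AJAX libraries set this header
    accept.includes('text/html') && // navigations ask for HTML first
    /mozilla/i.test(userAgent)      // crude "came from a browser" check
  );
}

// Inside the proxy's request handler:
// if (looksLikeTopLevelPageLoad(req)) logger.info(req.url);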

In Node.js, finding the original client URL when app is behind a reverse proxy

I'm working on a Node.js/Express application that, when deployed, sits behind a reverse proxy.
For example: http://myserver:3000/ is where the application actually sits, but users access it at https://proxy.mycompany.com/myapp.
I can get the original user agent request's host from a header passed through easily enough (if the reverse proxy is configured properly), but how do I get that extra bit of path and protocol information from the original URL accessed by the browser?
When my application has to generate redirects, it needs to know that the end user's browser expects the request to go not only to proxy.mycompany.com over HTTPS, but also under the /myapp sub-path.
So far all I can get access to is proxy.mycompany.com, which isn't enough to create a valid redirect.
For dev purposes I'm using a reverse proxy setup in nginx, but my company is using Microsoft's ISA as well as HAProxy.
Generally this is done with x-forwarded-* headers which are inserted by the reverse proxy itself. For example:
x-forwarded-host: foo.com
x-forwarded-proto: https
Take a look here:
http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/x-forwarded-headers.html
You can probably configure nginx to insert whatever x- header you want, but the convention (standard?) seems to be the above.
If you're reverse proxying into a sub-path such as /myapp, that definitely complicates matters. Presumably that sub-path should be a configuration option available to both nginx and your app.
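On the Express side, a sketch of putting those pieces back together could look like this. X-Forwarded-Prefix is an assumption: the reverse proxy has to be configured to send some header carrying the /myapp prefix, whatever name you give it:

const express = require('express');
const app = express();

app.set('trust proxy', true); // make req.protocol honour x-forwarded-proto

app.use((req, res, next) => {
  const proto = req.protocol;                                   // https (from x-forwarded-proto)
  const host = req.get('x-forwarded-host') || req.get('host');  // proxy.mycompany.com
  const prefix = req.get('x-forwarded-prefix') || '';           // /myapp, if the proxy sends it
  req.externalBase = `${proto}://${host}${prefix}`;
  next();
});

app.get('/needs-login', (req, res) => {
  // Redirect relative to the URL the user actually typed,
  // e.g. https://proxy.mycompany.com/myapp/login
  res.redirect(`${req.externalBase}/login`);
});

app.listen(3000);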
