Nginx reverse proxy - passthrough basic authentication - IIS

I am trying to set up nginx as a reverse proxy server in front of several IIS web servers that authenticate using Basic authentication.
(Note: this is not the same as nginx providing the auth using a password file - it should just be marshalling everything between the browser and the server.)
It's kind of working, but I'm being repeatedly prompted for auth by every single resource (image/CSS etc.) on a page.
upstream my_iis_server {
    server 192.168.1.10;
}
server {
    listen 1.1.1.1:80;
    server_name www.example.com;
    ## send request back to my iis server ##
    location / {
        proxy_pass http://my_iis_server;
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass_header Authorization;
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

This exact situation took me forever to figure out, but OSS is like that I guess. This post is a year old so maybe the original poster figured it out, or gave up?
Anyway, the problem for me at least was caused by a few things:
IIS expects the realm string to be the same as the one it sent through Nginx, but if your Nginx server_name is listening on a different address than the upstream, the server-side WWW-Authenticate realm will not be what IIS was expecting, and IIS will ignore it.
The built-in headers module doesn't clear the other WWW-Authenticate headers, particularly the problematic WWW-Authenticate: Negotiate. Using the headers-more module clears the old headers and adds whatever you tell it to.
After this, I was finally able to push SharePoint 2010 through Nginx.
Thanks, Stack Overflow.
server {
    listen 80;
    server_name your.site.com;
    location / {
        proxy_http_version 1.1;
        proxy_pass_request_headers on;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        #proxy_pass_header Authorization;  # this didn't work for me
        more_set_input_headers 'Authorization: $http_authorization';
        proxy_set_header Accept-Encoding "";
        proxy_pass https://sharepoint/;
        proxy_redirect default;
        # This is what worked for me, but you need the headers-more module
        more_set_headers -s 401 'WWW-Authenticate: Basic realm="intranet.example.com"';
    }
}
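For what it's worth, more_set_input_headers and more_set_headers come from the third-party headers-more module, which is not part of a stock nginx build. If it was installed as a dynamic module (an assumption; e.g. via the nginx-extras / libnginx-mod packages or built with --add-dynamic-module), it would also need to be loaded at the top of nginx.conf, roughly like this:
# Top of nginx.conf, outside the http block. The exact module path is an
# assumption and depends on how headers-more was built or packaged.
load_module modules/ngx_http_headers_more_filter_module.so;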

I had these same symptoms with nginx/1.10.3. I have a service secured with basic authentication, and nginx as a reverse proxy between the clients and the server. The requirement was that nginx would pass the authorization through.
The first request to the server did pass the Authorization header through. The second request simply dropped this header, which meant the client was only able to make one request per session.
This was somehow related to cookies. If I cleared the browser cookies, the cycle repeated: the client was able to authenticate, but only for the first request. Closing the browser had the same effect.
The solution for me was to change the upstream server from https to http, using:
proxy_pass http://$upstream;
instead of:
proxy_pass https://$upstream;
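Put together, the relevant location block would look roughly like this (a minimal sketch; $upstream stands for however the upstream host is defined elsewhere in the real config, and the proxy_set_header lines are the usual boilerplate from the question above):
location / {
    # Plain HTTP to the backend; with https:// here, the Authorization
    # header was only forwarded on the first request in my setup.
    proxy_pass http://$upstream;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}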

Related

How to get the TLS Session Id from an NGINX proxy?

In Node.js, headers are available in req.headers, and I do get them in the HTTP server.
I want to get the TLS session id set by the Nginx proxy. The relevant section of the config, which sets this header among others, is:
location / {
    ...
    proxy_set_header X-tlsSessionId $ssl_session_id;
    ...
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_buffering off;
    expires off;
}
I am able to access other custom headers, but not X-tlsSessionId, when I try to access it as req.headers['x-tlssessionid'].
I think lowercase is what I'm supposed to use to access custom headers, but this one does not work.

Server Side Rendering with Nginx outputs HTML strings

I have a vue-ssr project (not using nuxt.js), and the server is express.
The project runs on port 3000.
When not using nginx and visiting ip:3000, the page worked fine.
When using nginx and visiting my domain, I could still get the HTML string, but it was not rendered, like this:
(I don't have enough reputation to post images.)
html strings.png
And the request and response headers look like this:
request and response headers.png
Here is my nginx config:
server {
    server_name mydomain;
    root /my/path;
    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://localhost:3000;
        proxy_redirect off;
    }
}
I've tried the config from the nuxt.js docs, but that didn't work either.
If removing the nosniff header fixes the problem, that would be an indication that for some reason express is not sending the right MIME types with your responses.
You could check that by hitting express directly and examining the headers it returns, e.g.:
curl -D - -o /dev/null http://localhost:3000
If it's NOT returning text/html, it could be a sign that you're accidentally breaking the default MIME type somewhere in your application code.
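If the nosniff header turns out to be injected by nginx itself (an assumption; it is just as often added by the app, e.g. by helmet in express), it would be a directive like the following somewhere in the server or location block, which you could comment out while testing:
# X-Content-Type-Options: nosniff tells the browser not to guess (sniff) the
# content type, so a wrong Content-Type from the app keeps the HTML from
# being rendered and it is shown as text instead.
add_header X-Content-Type-Options nosniff;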

nginx + node + ssl + websockets, on one server

I've been able to find guides pertaining to various combinations of nginx, node, ssl, and websockets, but never all together and never on a single server.
Ultimately, this is what I'm after and I'm not sure if it's even possible:
single server (Ubuntu 14.04)
forced HTTPS (browsing to http://site forwards to https://)
node app is hosted on localhost:3000
node app uses web sockets
it's a single-page React app with no routing at all, so I don't need routes. I repeat, I'm only hosting one page with no navigation whatsoever.
With the below config, I have everything working except websockets - the client throws an error which doesn't happen if I browse straight to the node server and don't use nginx (browse to http://my.domain:3000):
bundle.js:26 WebSocket connection to 'wss://<my domain>/socket.io/?EIO=3&transport=websocket&sid=x1uQtRzF3gYYEvfIAAAi' failed: Error during WebSocket handshake: Unexpected response code: 400
server {
    listen 80;
    return 301 https://my.domain$request_uri;
}
server {
    listen 443 ssl;
    listen [::]:443;
    ssl_certificate /path/cert.crt;
    ssl_certificate_key /path/key.key;
    ssl_session_cache shared:SSL:10m;
    server_name blaze.chat;
    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Host $host;
        proxy_redirect http://localhost:3000 https://my.domain;
    }
}
Right, got it working... Found a lot of articles all showing similar things but missing these key lines:
proxy_set_header Connection "upgrade";
proxy_read_timeout 86400;
In my case, websockets won't work without those lines, contrary to many other posts where similar questions were asked. Not sure what the difference is. Ironically, it is listed as a requirement in the Nginx WebSocket proxying docs... Should have started there, my bad.
http://nginx.org/en/docs/http/websocket.html
Side note: I am just using this on the root path (/), which works fine.
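For reference, the nginx WebSocket proxying page linked above uses a map so that the Connection header is only set to upgrade when the client actually asks for an upgrade; a sketch of that variant, combined with the timeout line from above, looks like this:
# In the http {} context:
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# Inside the proxied location:
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_read_timeout 86400;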

How to set up nginx for multiple nodejs apps on a server (using socket.io with namespaces)

We have a nodejs app that currently uses socket.io (with namespaces). This app is used as a dashboard for a specific financial market. Each instance of the app subscribes to a specific market's data and provides a dashboard for it. Initially we were running 3 separate instances of this app, configured for 3 separate markets, on the server, each binding to a separate port for serving requests.
Since we plan to add more markets, it makes sense to have a reverse proxy server where a single port (along with a separate URI for each market) can be used. However, setting up nginx has been a nightmare for various reasons.
(a) Each market's instance of the app can be at a different development stage and hence can have different static files. Managing all the static files via nginx seems painful; what can be done to leave handling of the static files to the app itself?
(b) socket.io communication fails. We looked into the network traffic and it keeps getting a 404 Not Found error when trying to connect to the socket.io server. We're not sure why it is connecting via http://localhost/server.io/ instead of ws://localhost/server.io/. Can somebody point us to a similar example? Is there anything else that needs to be taken care of?
In our case we have been trying the following inside nginx sites-available/default:
location /app/ {
    proxy_pass http://localhost:3000/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_cache_bypass $http_upgrade;
    #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # kill cache
    add_header Last-Modified $date_gmt;
    add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
    if_modified_since off;
    expires off;
    etag off;
}
Using nginx as a reverse proxy should not give you a hard time. The great thing about nginx is that you can have multiple projects on the same server with different domains.
Here is an example of nginx with multiple projects:
server {
    listen 80;
    server_name yourdomain.com;
    location / {
        proxy_pass http://localhost:3000;
        # Remember to set the headers like this, otherwise the socket might not work.
        proxy_set_header X-Real-IP $remote_addr;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
server {
    listen 80;
    server_name subdomain.yourdomain.com;
    location / {
        proxy_pass http://localhost:3001;
    }
}
I'm not sure why your socket should fail. Perhaps the mistake is that you try to define the route on the client side. Try having the JavaScript like this:
var socket = io();
or if your socket runs on one of your other applications:
var socket = io('http://yourdomain.com');
And remember that your config needs to be enabled under sites-enabled (usually by symlinking it from sites-available); a file that only sits in sites-available is never loaded.

Nginx "upstream prematurely closed connection while reading response header from upstream" from a Node.js rocky proxy

I have nginx and one Node server that works as a proxy in front of the Java backend servers.
My nginx config:
server {
    listen 80;
    server_name peoplehum.dev www.peoplehum.dev;
    #rewrite ^/(.*)/$ /$1 permanent;
    charset utf-8;
    keepalive_requests 100;
    keepalive_timeout 100s;
    gzip on;
    gzip_http_version 1.1;
    gzip_vary on;
    gzip_comp_level 6;
    gzip_proxied any;
    gzip_buffers 16 8k;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";
    client_max_body_size 16M;
    #include /etc/nginx/proxy_header.conf;
    #include /etc/nginx/proxy_buffer.conf;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_http_version 1.1;
    proxy_set_header HOST $host;
    # X-Forwarded-Proto gives the proxied server information about the scheme of the original client request (whether it was an http or an https request).
    proxy_set_header X-Forwarded-Proto $scheme;
    # X-Real-IP is set to the IP address of the client so that the proxied server can correctly make decisions or log based on this information.
    proxy_set_header X-Real-IP $remote_addr;
    # X-Forwarded-For is a list containing the IP addresses of every server the client has been proxied through up to this point.
    # Here we set it to the $proxy_add_x_forwarded_for variable, which takes the value of the original X-Forwarded-For header
    # received from the client and appends the Nginx server's IP address to the end.
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
Here, when I send a request with a big body through nginx to my Node proxy server, it gives "upstream prematurely closed connection while reading response header from upstream".
I am using the rocky proxy (https://github.com/h2non/rocky) on Node.js for proxying.
I have searched a lot and tried most of the answers to related questions, but nothing worked.
After a lot of digging into this problem, I finally found the root cause. I was decorating the request with some data using a function that made an async call to Redis, and I wanted that call to be synchronous, so I used the deasync package.
It worked for small files or small amounts of data, but with more data or a large file it started failing; I don't know exactly why.
As a lesson learned, my suggestion is to use native promises, use another package that is actually built on promises, or use async/await.
