I have IIS6 services with NTLM auth.
For testing purposes I need to configure a load balancer for these services.
I've configured simple upstreams for a few services, and now I have a problem with NTLM authentication.
How do I configure Nginx to support NTLM in reverse proxy mode?
The goal is to enable keepalive and set HTTP version to 1.1:
server {
    ...
    location / {
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_pass_request_headers on;
        proxy_pass http://myupstream;
    }
}

upstream myupstream {
    server localhost:12345;
    keepalive 32;
}
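If keepalive alone does not solve it, note that NTLM authenticates the TCP connection rather than individual requests, so each client connection has to stay mapped to a single upstream connection. The open-source build has no directive for that; the commercial NGINX Plus adds an ntlm directive in the upstream block. A minimal sketch, assuming an NGINX Plus subscription is available:

upstream myupstream {
    server localhost:12345;
    # NGINX Plus only: pin each authenticated client connection
    # to its own upstream connection for the NTLM exchange
    ntlm;
    keepalive 32;
}

The proxy_http_version 1.1 and cleared Connection header shown in the location block above are still required for this to work.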
I have the following nginx configuration to run node.js with websockets behind an nginx reverse proxy:
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    gzip on;

    upstream nodejsserver {
        server 127.0.0.1:3456;
    }

    server {
        listen 443 ssl;
        server_name myserver.com;
        error_log /var/log/nginx/myserver.com-error.log;

        ssl_certificate /etc/ssl/myserver.com.crt;
        ssl_certificate_key /etc/ssl/myserver.com.key;

        location / {
            proxy_pass https://nodejsserver;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_read_timeout 36000s;
        }
    }
}
The node.js server uses the same certificates as specified in the nginx configuration.
My issue is that in my browser (Firefox, though this happens in other browsers too), the websocket connection resets every few minutes with a 1006 close code. I have researched the reason for this error in this particular (or a similar) setup, and most of the answers here, as well as on other resources, point to the proxy_read_timeout nginx configuration variable not being set or being set too low. But that is not the case in my configuration.
It is also worth noting that when I run node.js and access it directly, both locally and on the server, I do not experience these disconnects.
In addition, I've tried running nginx and node.js insecurely (port 80) and accessing ws:// instead of wss:// in my client. The issue remains the same.
There are a few things you need to do to keep a connection alive.
You should establish a keepalive connection count per worker process, and the documentation states you need to be explicit about the protocol as well. Other than that, you may be running into other kinds of timeouts, so edit your upstream and server blocks:
upstream nodejsserver {
    server 127.0.0.1:3456;
    keepalive 32;
}

server {
    # Stuff...
    location / {
        # Stuff...
        # Each timeout can be different; tune them to your application's needs
        proxy_read_timeout 36000s;
        proxy_connect_timeout 36000s;
        proxy_send_timeout 36000s;
        send_timeout 36000s; # This is stupid, try a smaller number
    }
}
There are a number of other discussions on SO about the same subject; check this answer out.
I have an Nginx server handling HTTP requests and proxying to some node servers upstream. If the domain name matches one of the enabled sites, all traffic is passed to one node server, but only over SSL; otherwise a 301 to the https version is returned:
server {
    listen 80;
    server_name something.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443;
    server_name something.com;

    ssl_certificate /etc/nginx/cert.crt;
    ssl_certificate_key /etc/nginx/cert.key;
    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;
        proxy_pass http://127.0.0.1:3000/;
        proxy_redirect off;
    }
}
All of that works, but certificate management, the SSL handshake, and so on are handled by Nginx. I would like each node server upstream to manage its own SSL configuration so I don't depend on Nginx for this. My node servers already support https requests, but I don't understand whether it is possible to tell Nginx:
Listen on 80; if a request comes in, send a 301 to the https version of it.
Listen on 443, don't worry about SSL, just proxy pass everything to localhost:3000
And have the node server listening on port 3000 handle SSL
Listen on 443, don't worry about SSL, just proxy pass everything to localhost:3000
No, not with nginx; you will have to use port forwarding for that.
nginx would either have to use its own SSL key, possibly proxying the traffic to the Node app over SSL, which means both Node and nginx would have to manage their own SSL keys (nginx for the client-nginx connection and the Node app for the nginx-NodeApp connection).
Or nginx could accept plain HTTP from the client and proxy the request to a Node app that uses SSL, which means the client-nginx connection would be insecure and only the nginx-NodeApp connection would be secure. It would also mean that https://www.example.com/ would not work, though http://www.example.com:443/ would.
If you want Node to handle the SSL keys and not the reverse proxy (as is usually done), then you would have to use port forwarding at the TCP/IP level to pass the traffic to the Node app, without using a reverse proxy (nginx) at all.
Usually a reverse proxy is used so that the apps wouldn't have to handle the SSL keys used for client connections (among other things). If you want the Node apps to use the SSL keys and not the reverse proxy then you should reconsider using a reverse proxy in the first place.
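If staying behind nginx is still desirable, one way to get the TCP/IP-level forwarding described above is the stream module (available since nginx 1.9.0 and not always compiled in): it forwards the raw TCP bytes, TLS handshake included, without terminating SSL, so the Node app on port 3000 keeps its own certificates. A minimal sketch, assuming the stream module is present in your nginx build (this would replace the https server block above, since only one listener can own port 443):

# Top level of nginx.conf, outside the http {} block
stream {
    server {
        listen 443;
        # Pass the raw TCP stream (TLS included) straight to the Node app,
        # which performs the SSL handshake itself
        proxy_pass 127.0.0.1:3000;
    }
}

Note that in this mode nginx cannot inspect the HTTP traffic at all, so the port-80 redirect to https still has to live in a separate http {} server block.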
I followed the instructions and installed Ogar on my CentOS server successfully. But every time my friends want to play on my server, they have to use Google Chrome, open the console, and type 'connect("ws://agar.davidchen.com:443")'. That's not cool, because they expect things to work the way they do with 'agar.io': you type a domain name (like 'agar.davidchen.com') and you can play the game. Is there any solution to this issue? Thanks!
You need to proxy the requests from HTTP to your socket connection via a web server like Nginx, so you can use http://agar.davidchen.com to access your web socket.
Install Nginx (version >= 1.3), then configure your virtual host with something like this:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

upstream websocket {
    # This is where your web socket runs
    server 127.0.0.1:443;
}

server {
    listen 80;
    server_name agar.davidchen.com;

    location / {
        proxy_pass http://websocket;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
Reference: https://www.nginx.com/blog/websocket-nginx/
I'm writing a web socket project and everything is working as expected (locally). I'm using:
NGINX as a WebSockets Proxy
NODEJS as a backend server
WS as websocket module: ws
NGINX configuration:
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

upstream backend_cluster {
    server 127.0.0.1:5050;
}

# Only retry if there was a communication error, not a timeout.
proxy_next_upstream error;

server {
    access_log /code/logs/access.log;
    error_log /code/logs/error.log info;

    listen 80;
    listen 443 ssl;
    server_name mydomain;
    root html;

    ssl_certificate /code/certs/sslCert.crt;
    ssl_certificate_key /code/certs/sslKey.key;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; # basically same as apache [all -SSLv2]
    ssl_ciphers HIGH:MEDIUM:!aNULL:!MD5;

    location /websocket/ws {
        proxy_pass http://backend_cluster;
        proxy_http_version 1.1;
        proxy_redirect off;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
As I mentioned, this is working just fine locally, on one machine, in the development environment. The issue I'm worried about is production: the production environment will have more than one nodejs server.
In production, the nginx configuration will be something like:
upstream backend_cluster {
    server domain1:5050;
    server domain2:5050;
}
So I don't know how NGINX solves the stickiness issue; that is, once the 'HANDSHAKE/upgrade' is done on one server, how will it know to keep working with that same server? Is there a way to tell NGINX to stick to the same server?
I hope I've made myself clear.
Thanks in advance.
Use this configuration:
upstream backend_cluster {
    ip_hash;
    server domain1:5050;
    server domain2:5050;
}
clody69's answer is pretty standard. However, I prefer the following configuration, for two reasons:
Users connecting from the same public IP should be able to reach two different servers if needed; ip_hash enforces one server per public IP.
If user 1 is maxing out server 1's performance, I want them to be able to use the application smoothly if they open another tab; ip_hash doesn't allow that.
upstream backend_cluster {
    hash $content_type;
    server domain1:5050;
    server domain2:5050;
}
How can we configure a server to serve http://domain1.com using Meteor.js and http://domain2.com using nginx/apache?
You could use a node-http-proxy script to do this or nginx.
A sample node-http-proxy script. Be sure to use the caronte branch, which allows websockets to work with meteor without falling back to long polling:
Sample node.js script
var httpProxy = require('http-proxy');

httpProxy.createServer({
    router: {
        'domain1.com': 'localhost:3000', // Meteor port & host
        'domain2.com': 'localhost:8000'  // Apache port & host
    }
}).listen(80);
So the above would run on port 80. You would run meteor on port 3000 and apache/nginx on port 8000.
The proxy checks the domain hostname and, if it is domain1.com, acts as a transparent proxy to localhost:3000.
The other way to do this is to let nginx handle the proxying and use virtual hosts to separate the traffic.
You'll need nginx 1.4.3 or newer to proxy websockets, and the following config will do it:
/etc/nginx/conf.d/upgrade.conf
map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}
/etc/nginx/sites-enabled/meteor
server {
    server_name domain1.com;
    # add_header X-Powered-By Meteor;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
and your nginx config for the Apache site would be the same as usual, but with server_name domain2.com; or whatever you want to name it.
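For completeness, a minimal sketch of what that second vhost could look like, assuming Apache is listening on port 8000 as in the node-http-proxy example above (the port is illustrative):

server {
    server_name domain2.com;

    location / {
        # Hypothetical backend: the Apache vhost from the example above,
        # assumed to be listening on 127.0.0.1:8000
        proxy_pass http://127.0.0.1:8000;
    }
}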