HAProxy/WebSockets: why are new sockets constantly being created? - node.js

I'm trying to set up WebSockets with HAProxy, with the following configuration:
http traffic -> haproxy -> varnish -> nginx -> node
ws traffic -> haproxy -> node
One subdomain has forced SSL, so HAProxy redirects any HTTP traffic to HTTPS (and ws to wss).
Everything is working as expected except for one issue: new sockets are constantly being created instead of just one (I can see them being created every few seconds in Chrome's debugging console).
I didn't have this problem when I used Varnish to do the WebSocket pipe.
How can I fix this?
global
daemon
defaults
mode http
frontend insecure
# HTTP
bind :80
timeout client 5000
# acl
acl is_console hdr_end(host) -i console.mydomain.com
acl is_client hdr_end(host) -i www.mydomain.com
acl is_websocket hdr(Upgrade) -i WebSocket
acl is_websocket hdr_beg(Host) -i ws
# Redirect all HTTP traffic to HTTPS
redirect location https://console.mydomain.com if is_console
use_backend node_console if is_console is_websocket
use_backend node_client if is_client is_websocket
default_backend varnish
frontend secure
# HTTPS
bind :443 ssl crt /etc/ssl/console.mydomain.com.pem
timeout client 5000
# acl
acl is_console hdr_end(host) -i console.mydomain.com
acl is_client hdr_end(host) -i www.mydomain.com
acl is_websocket hdr(Upgrade) -i WebSocket
acl is_websocket hdr_beg(Host) -i ws
use_backend node_console if is_console is_websocket
use_backend node_client if is_client is_websocket
default_backend varnish
backend varnish
balance leastconn
option forwardfor
timeout server 5000
timeout connect 4000
server varnish1 127.0.0.1:6081
backend node_client
balance leastconn
option forwardfor
timeout queue 5000
timeout server 5000
timeout connect 5000
server client_node1 127.0.0.1:3000
backend node_console
balance leastconn
option forwardfor
timeout queue 5000
timeout server 5000
timeout connect 5000
server console_node1 127.0.0.1:3001

I managed to fix this by setting the tunnel timeout ('timeout tunnel') to one day on the backends.
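As a rough sketch, that means adding one 'timeout tunnel' line to each WebSocket backend (86400s is one day, adjust to your needs; the directive needs a reasonably recent HAProxy, 1.5 or newer), for example:
backend node_client
balance leastconn
option forwardfor
timeout queue 5000
timeout server 5000
timeout connect 5000
timeout tunnel 86400s
server client_node1 127.0.0.1:3000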

Related

pfSense + HAProxy – Reverse Proxy with multiple Services on one internal IP

Currently I am using pfSense on my server with the HAProxy package, because I can easily configure it via the GUI.
I configured HAProxy to act as a reverse proxy following this guide: https://blog.devita.co/pfsense-to-proxy-traffic-for-websites-using-pfsense/
SSL offloading works like a charm. The problem is that when I have more than one service (open port) on the same internal IP, it does not seem to work.
Example:
I configure service1.domain.com for Service1 with port 8000 (10.100.10.101:8000) and it works flawlessly.
Now I need another port on the same machine (e.g. 10.100.10.101:8082) with another service. If I configure another backend pointing to the same IP but with a different port, I can only reach the second service (service2.domain.com), even if I access service1.domain.com.
My use case is that I am trying to set up Seafile, which uses port 8000 for the web GUI and port 8082 for the fileserver. Right now I am able to access the web GUI, but I am not able to upload, download or share files.
My configuration:
# Automaticaly generated, dont edit manually.
# Generated on: 2018-09-29 19:24
global
maxconn 1000
stats socket /tmp/haproxy.socket level admin
gid 80
nbproc 1
hard-stop-after 15m
chroot /tmp/haproxy_chroot
daemon
tune.ssl.default-dh-param 8192
server-state-file /tmp/haproxy_server_state
ssl-default-bind-ciphers TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256:TLS13-CHACHA20-POLY1305-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
ssl-default-server-ciphers TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256:TLS13-CHACHA20-POLY1305-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
listen HAProxyLocalStats
bind 127.0.0.1:2200 name localstats
mode http
stats enable
stats admin if TRUE
stats show-legends
stats uri /haproxy/haproxy_stats.php?haproxystats=1
timeout client 5000
timeout connect 5000
timeout server 5000
frontend shared-frontend-merged
bind X.X.X.X:443 name X.X.X.X:443 ssl crt-list /var/etc/haproxy/shared-frontend.crt_list
mode http
log global
option http-keep-alive
option forwardfor
acl https ssl_fc
http-request set-header X-Forwarded-Proto http if !https
http-request set-header X-Forwarded-Proto https if https
timeout client 30000
http-response set-header Strict-Transport-Security max-age=15768000
acl aclcrt_shared-frontend var(txn.txnhost) -m reg -i ^([^\.]*)\.domain\.com(:([0-9]){1,5})?$
acl ACL1 var(txn.txnhost) -m str -i test.domain.com
acl ACL2 var(txn.txnhost) -m str -i service1.domain.com
acl ACL3 var(txn.txnhost) -m str -i service2.domain.com
http-request set-var(txn.txnhost) hdr(host)
default_backend test.domain.com_ipv4
default_backend service1.domain.com_ipvANY
default_backend service2.domain.com_ipvANY
frontend http-to-https
bind X.X.X.X:80 name X.X.X.X:80
mode http
log global
option http-keep-alive
timeout client 30000
http-request redirect scheme https
backend test.domain.com_ipv4
mode http
id 10100
log global
timeout connect 30000
timeout server 30000
retries 3
source ipv4# usesrc clientip
option httpchk GET /
server testvm-server01 10.100.10.101:54080 id 10101 check inter 1000
backend service1.domain.com_ipvANY
mode http
id 102
log global
timeout connect 30000
timeout server 30000
retries 3
option httpchk GET /
server seafile-vm-01 10.100.10.103:8000 id 101 check inter 1000
backend service2.domain.com_ipvANY
mode http
id 104
log global
timeout connect 30000
timeout server 30000
retries 3
option httpchk GET /
server seafile-vm-02 10.100.10.103:8082 id 103 check inter 1000
I would really be glad if anyone could point me in the right direction. Thank you in advance, and if you need further information, please tell me.
Best regards,
Bioneye
I was able to solve my problem with the help of one awesome user over on reddit.
The first problem was that I had misconfigured my frontend and thus had 3 default_backend lines. That was the reason why every service pointed to the same virtual machine. To solve it, I just had to add the if condition corresponding to each ACL name.
The second problem was that my Service2 was shown as DOWN on the HAProxy stats page. I had to change the health check method from HTTP to Basic and that finally resolved everything.
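Pulled out of the configs above and below, the two relevant changes look like this in isolation (the frontend action table gains the if conditions, and the seafhttp backend simply loses its option httpchk GET / line, so the check falls back to a plain TCP connect check, which pfSense calls Basic):
# before: three unconditional default_backend lines, so only one of them could ever take effect
default_backend test.domain.com_ipv4
default_backend service1.domain.com_ipvANY
default_backend service2.domain.com_ipvANY
# after: one conditional rule per host ACL
use_backend test.domain.com_ipv4 if ACL1
use_backend service1.domain.com_ipvANY if ACL2
use_backend service2.domain.com-seafhttp_ipvANY if ACL3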
This is the working configuration:
# Automaticaly generated, dont edit manually.
# Generated on: 2018-10-02 16:59
global
maxconn 1000
stats socket /tmp/haproxy.socket level admin
gid 80
nbproc 1
hard-stop-after 15m
chroot /tmp/haproxy_chroot
daemon
tune.ssl.default-dh-param 8192
server-state-file /tmp/haproxy_server_state
ssl-default-bind-ciphers TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256:TLS13-CHACHA20-POLY1305-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
ssl-default-server-ciphers TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256:TLS13-CHACHA20-POLY1305-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
listen HAProxyLocalStats
bind 127.0.0.1:2200 name localstats
mode http
stats enable
stats admin if TRUE
stats show-legends
stats uri /haproxy/haproxy_stats.php?haproxystats=1
timeout client 5000
timeout connect 5000
timeout server 5000
frontend shared-frontend-merged
bind X.X.X.X:443 name X.X.X.X:443 ssl crt-list /var/etc/haproxy/shared-frontend.crt_list
mode http
log global
option http-keep-alive
option forwardfor
acl https ssl_fc
http-request set-header X-Forwarded-Proto http if !https
http-request set-header X-Forwarded-Proto https if https
timeout client 30000
http-response set-header Strict-Transport-Security max-age=15768000
acl aclcrt_shared-frontend var(txn.txnhost) -m reg -i ^([^\.]*)\.domain\.com(:([0-9]){1,5})?$
acl ACL1 var(txn.txnhost) -m beg -i test.domain.com
acl ACL2 var(txn.txnhost) -m beg -i service1.domain.com
acl ACL3 var(txn.txnhost) -m beg -i service2.domain.com
http-request set-var(txn.txnhost) hdr(host)
use_backend test.domain.com_ipv4 if ACL1
use_backend service1.domain.com_ipvANY if ACL2
use_backend service2.domain.com-seafhttp_ipvANY if ACL3
frontend http-to-https
bind X.X.X.X:80 name X.X.X.X:80
mode http
log global
option http-keep-alive
timeout client 30000
http-request redirect scheme https
backend test.domain.com_ipv4
mode http
id 10100
log global
timeout connect 30000
timeout server 30000
retries 3
source ipv4# usesrc clientip
option httpchk GET /
server testvm-server01 10.100.10.101:54080 id 10101 check inter 1000
backend service1.domain.com_ipvANY
mode http
id 102
log global
timeout connect 30000
timeout server 30000
retries 3
option httpchk GET /
server seafile-vm-01 10.100.10.103:8000 id 101 check inter 1000
backend service2.domain.com-seafhttp_ipvANY
mode http
id 104
log global
timeout connect 30000
timeout server 30000
retries 3
server seafile-vm-02 10.100.10.103:8082 id 103 check inter 1000
For further details: https://www.reddit.com/r/PFSENSE/comments/9kezl3/pfsense_haproxy_reverse_proxy_with_multiple/?st=jmruoa9r&sh=26d24791
TLDR: I misconfigured my Action Table and had the wrong health check in place.
Greetings,
Bioneye

How to install mod_websocket with Lighttpd?

I want to make Lighttpd work with WebSockets (Socket.IO), and it appears the only way to do so is to install an additional module: mod_websocket. I followed these steps, but I don't think I have the right /path/to/lighttpd_top_srcdir. I used /usr/lib/lighttpd, as I saw all the modules there when I ls-ed it.
Apparently, I need to reinstall Lighttpd, am I right?
So far, I got
copy mod_websocket files into /usr/lib/lighttpd
cp src/mod_websocket*.{h,c} /usr/lib/lighttpd/src
cp: target '/usr/lib/lighttpd/src' is not a directory
I need to do this because the error I get when trying to make websockets work is the following: WebSocket connection to 'ws://<myURL>/socket.io/1/websocket/agXkznae1gmlDTutzJyk' failed: Unrecognized frame opcode: 5 (I use Google Chrome v33.0.1750.154).
Is there another way to make WebSockets work with Lighttpd, or do I need to change web servers?
Many thanks!
I resolved my problem!
I used HAProxy instead of Lighttpd mod_proxy, as specified in this question.
Here is my conf file (amend <...> per your configuration):
# this config needs haproxy-1.1.28 or haproxy-1.2.1
global
log 127.0.0.1 local0
log 127.0.0.1 local1 notice
maxconn 4096
uid 99
gid 99
daemon
defaults
log global
mode http
option httplog
option dontlognull
retries 3
option http-use-proxy-header
option redispatch
option http-server-close
maxconn 2000
contimeout 5000
clitimeout 50000
srvtimeout 50000
frontend public
bind *:80
acl is_example hdr_end(host) -i <URL.toyourwebsite.com>
acl is_websocket hdr(Upgrade) -i WebSocket
acl is_websocket path_beg -i /websockets
use_backend ws if is_websocket is_example
default_backend www
backend ws
balance roundrobin
option forwardfor # This sets X-Forwarded-For
timeout queue 5000
timeout server 86400000
timeout connect 86400000
server apiserver localhost:<PORT> weight 1 maxconn 1024 check
And I made Lighttpd listen on port 8080 (otherwise HAProxy wouldn't start).
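For reference, moving Lighttpd to another port is a one-line change in its config file (assuming the usual /etc/lighttpd/lighttpd.conf location; the path may differ on your distribution):
server.port = 8080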

HAProxy + Nodejs + SockJS + Express + SSL

I've got a server setup in NodeJS which looks like the picture below:
Now there are two things I want to do, which seem to be possible with HAProxy:
Use only one port no matter which server a client wants to access. I want to use the external port 8080 for all non-SSL
traffic. (All SSL traffic should use port 443.)
Enable SSL on the SockJS Server and the Express Server.
Please note that all my servers are running on the same Amazon EC2 instance, so I want to route the traffic internally.
This is my haproxy.cfg so far:
mode http
# Set timeouts to your needs
timeout client 10s
timeout connect 10s
timeout server 10s
frontend all 0.0.0.0:8080
mode http
timeout client 120s
option forwardfor
# Fake connection:close, required in this setup.
option http-server-close
option http-pretend-keepalive
acl is_sockjs path_beg /echo /broadcast /close
acl is_stats path_beg /stats
use_backend sockjs if is_sockjs
use_backend stats if is_stats
default_backend express
backend sockjs
# Load-balance according to hash created from first two
# directories in url path. For example requests going to /1/
# should be handled by single server (assuming resource prefix is
# one-level deep, like "/echo").
balance uri depth 2
timeout server 120s
server srv_sockjs1 127.0.0.1:8081
backend express
balance roundrobin
server srv_static 127.0.0.1:8008
backend stats
stats uri /stats
stats enable
I can't figure out how to route the SSL traffic and the traffic to the TCP server (internal port 8080).
Any ideas?
Your setup is kind of hard to understand (for me). If I understand your goals correctly, you want to serve your web service through SSL, hence port 443, and from 443 connect to port 8080 internally. If that is the case, then the following configuration might be what you are looking for. It does not really use port 8080; instead, it connects directly to your Express backend. You don't really need to have port 8080 exposed (unless you have special reasons for doing so), because you can just use the backend servers directly inside the frontend section.
Note that this only works with HAProxy 1.5+. If you are using an older version of HAProxy, you should put something in front of it to handle the SSL connection before it reaches HAProxy (but I strongly suggest 1.5, because it makes your setup less complex).
frontend ssl
bind *:443 ssl crt /path/to/cert.pem ca-file /path/to/cert.pem
timeout client 120s
option forwardfor
# Fake connection:close, required in this setup.
option http-server-close
option http-pretend-keepalive
acl is_sockjs path_beg /echo /broadcast /close
acl is_stats path_beg /stats
use_backend sockjs if is_sockjs
use_backend stats if is_stats
default_backend express

502 Bad Gateway HAProxy

I am running Ubuntu 12.04 LTS. My web server is Tomcat 7.0.42 and I use HAProxy as a proxy server. My application is a servlet application which uses WebSockets.
Sometimes when I request my page I get a "502 Bad Gateway" error on some resources (not on all of them, but on some). I think this has something to do with my HAProxy configuration, which is the following:
global
maxconn 4096 # Total Max Connections. This is dependent on ulimit
nbproc 1
defaults
mode http
option http-server-close
option httpclose
# option redispatch
no option checkcache # test against 502 error
frontend all 0.0.0.0:80
timeout client 86400000
default_backend www_backend
acl is_websocket hdr(Upgrade) -i WebSocket
acl is_websocket hdr_beg(Host) -i ws
use_backend socket_backend if is_websocket
backend www_backend
balance roundrobin
option forwardfor # This sets X-Forwarded-For
timeout server 30000
timeout connect 4000
server apiserver localhost:8080 weight 1 maxconn 1024 check
backend socket_backend
balance roundrobin
option forwardfor # This sets X-Forwarded-For
timeout queue 5000
timeout server 86400000
timeout connect 86400000
server apiserver localhost:8080 weight 1 maxconn 1024 check
What do I have to change to prevent the 502 error?
First, enable HAProxy logging. It will simply tell you why it is returning the 502s. My guess is that the backend "localhost:8080" is simply not able to keep up, or is not able to accept a connection within the 4000 ms allowed by "timeout connect 4000".
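A minimal sketch of what enabling logging looks like, assuming a local syslog daemon is listening on UDP 127.0.0.1:514 (rsyslog usually needs its UDP input enabled for this):
global
# send logs to the local syslog daemon, facility local0
log 127.0.0.1 local0
defaults
log global
mode http
# log full HTTP requests, including the termination flags that explain 502s
option httplog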
You may have exceeded some of the default limits in HAProxy. Try adding the following to global section:
tune.maxrewrite 4096
tune.http.maxhdr 202
You should replace your defaults section with this one:
# Set balance mode
balance random
# Set http mode
mode http
# Set http keep alive mode (https://cbonte.github.io/haproxy-dconv/2.3/configuration.html#4)
option http-keep-alive
# Set http log format
option httplog
# Dont log empty line
option dontlognull
# Dissociate client from dead server
option redispatch
# Insert X-Forwarded-For header
option forwardfor
Don't use http-server-close; it is likely the cause of your problems.
Keep-alive maintains the connection on both the client side and the server side,
and it works fine with WebSockets as well.
And if you enable the check on the server, you also need to configure it with something like this:
# Enable http check
option httpchk
# Use server configuration
http-check connect default
# Use HEAD on / with HTTP/1.1 protocol for Host example.com
http-check send meth HEAD uri / ver HTTP/1.1 hdr Host example.com
# Expect status 200 to 399
http-check expect status 200-399

nginx, node.js and socket.io - is there a working marriage?

nginx is a killer static file server.
It can serve node.js, as in this example, but in a limited fashion.
But nginx is apparently unable to proxy WebSockets.
The only thing I found that might work is using an HAProxy front end, as per this article - but it's from October 6, 2011.
This has to be a common problem, but I'm not finding a very common solution.
Solution
(see https://github.com/bangkok-maco/barebone-node for complete solution and details)
IP testing scheme:
127.0.0.12 - www.chat.nit - public, in /etc/hosts and haproxy
127.0.1.12 - internal nginx web server
127.0.2.12 - internal chat serving node.js socket.io
/etc/haproxy/haproxy.cfg:
global
maxconn 4096
nbproc 2
daemon
# user nobody
log 127.0.0.1 local1 notice
defaults
mode http
# listen on 127.0.0.12:80
frontend app
bind 127.0.0.12:80
mode tcp
timeout client 86400000
default_backend www_backend
acl is_chat hdr_dom(Host) chat
acl is_websocket path_beg /socket.io
use_backend chat_socket_backend if is_websocket is_chat
tcp-request inspect-delay 500ms
tcp-request content accept if HTTP
# ngnix on 127.0.1.12:80
backend www_backend
balance roundrobin
option forwardfor
mode http
option httplog
option httpclose
timeout server 30000
timeout connect 4000
server w1 127.0.1.12:80 weight 1 maxconn 1024 check
# node (socket.io) on 127.0.2.12:80
backend chat_socket_backend
balance roundrobin
mode http
option httplog
option forwardfor
timeout queue 5000
timeout server 86400000
timeout connect 86400000
timeout check 1s
no option httpclose
option http-server-close
option forceclose
server s14 127.0.2.12:8000 weight 1 maxconn 1024 check
/etc/nginx/sites-enabled/www.chat.nit
server {
listen 127.0.1.12:80;
root /data/node/chat;
index client.html;
server_name www.chat.nit;
# favicon.ico is in /images
location = /favicon.ico$ { rewrite /(.*) /images/$1 last; }
# standard includes
location ^~ /(css|images|scripts)/ {
try_files $uri =404;
}
# html page (only in root dir)
location ~ ^/([-_a-z]+).html$ {
try_files $uri =404;
}
error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/www;
}
}
chat (node.js): server.js
var app = require('http').createServer()
, io = require('socket.io').listen(app);
app.listen(8000,'127.0.2.12');
io.sockets.on('connection', function(socket) {
...
});
chat: client.html
<head>
<script src="/scripts/socket.io/socket.io.js"></script>
<script>
var socket = io.connect('http://www.chat.nit:80');
...
</script>
</head>
notes:
link socket.io client js into scripts/ directory
/.../scripts$ ln -s ../node_modules/socket.io/node_modules/socket.io-client/dist/ socket.io
/etc/default/haproxy (contrary to the text, this must be set for haproxy to start at all)
ENABLED=1
this version of haproxy is not logging; I found kvz's write-up on how to use rsyslogd via 127.0.0.1, but could not make it fly (a rough sketch of the rsyslog side follows after these notes)
this solution is working - not sysadmin quality to be sure. (enhancements more than welcome.)
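For completeness, a rough, untested sketch of the rsyslog side (the drop-in file name is just an example): create something like /etc/rsyslog.d/49-haproxy.conf with a UDP input on 127.0.0.1 and a rule for the local1 facility that the haproxy.cfg above logs to, then restart rsyslog and haproxy:
# /etc/rsyslog.d/49-haproxy.conf (example file name)
$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
# the haproxy.cfg above logs to facility local1
local1.* /var/log/haproxy.log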
It looks like you can proxy WebSockets through nginx since v1.3.13.
See http://nginx.org/en/docs/http/websocket.html for more details.
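On the nginx side it essentially comes down to passing the Upgrade handshake through; a minimal sketch, with the upstream address and location prefix as placeholders to adapt:
location /socket.io/ {
proxy_pass http://127.0.0.1:8000;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
}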
Here's my (old and for testing purposes) HAProxy config for proxying WebSockets and normal HTTP requests.
global
maxconn 4096
nbproc 2
daemon
user nobody
defaults
mode http
frontend app
bind 0.0.0.0:8000
mode tcp
timeout client 86400000
default_backend www_backend
acl is_websocket path_beg /sockets
use_backend socket_backend if is_websocket
tcp-request inspect-delay 500ms
tcp-request content accept if HTTP
backend www_backend
balance roundrobin
option forwardfor
mode http
option httplog
option httpclose
timeout server 30000
timeout connect 4000
server w1 localhost:8080 weight 1 maxconn 1024 check
backend socket_backend
balance roundrobin
mode http
option httplog
option forwardfor
timeout queue 5000
timeout server 86400000
timeout connect 86400000
timeout check 1s
no option httpclose
option http-server-close
option forceclose
server s1 localhost:8081 weight 1 maxconn 1024 check
Note that I am recognizing whether a request is WS or not by looking at the path (the acl is_websocket path_beg /sockets line). This can be replaced with, for example, this:
acl is_websocket hdr(Upgrade) -i WebSocket
or this:
acl is_websocket hdr_beg(Host) -i ws
or both. Proxying to nginx with this config should work out of the box.
