nginx is a killer static file server.
it can sit in front of node.js, as in this example, but only in a limited fashion.
but nginx is apparently unable to proxy websockets.
the only thing I found that might work is using an HAProxy front end, as per this article - but it's from October 6, 2011.
this has to be a common problem, but I'm not finding a very common solution.
Solution
(see https://github.com/bangkok-maco/barebone-node for complete solution and details)
ip testing schema:
127.0.0.12 - www.chat.nit - public, in /etc/hosts and haproxy
127.0.1.12 - internal nginx web server
127.0.2.12 - internal chat serving node.js socket.io
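For local testing, only the public name needs a hosts entry (the internal addresses are referenced by IP only); a minimal /etc/hosts sketch:
127.0.0.12   www.chat.nit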
/etc/haproxy/haproxy.cfg:
global
maxconn 4096
nbproc 2
daemon
# user nobody
log 127.0.0.1 local1 notice
defaults
mode http
# listen on 127.0.0.12:80
frontend app
bind 127.0.0.12:80
mode tcp
timeout client 86400000
default_backend www_backend
acl is_chat hdr_dom(Host) chat
acl is_websocket path_beg /socket.io
use_backend chat_socket_backend if is_websocket is_chat
tcp-request inspect-delay 500ms
tcp-request content accept if HTTP
# nginx on 127.0.1.12:80
backend www_backend
balance roundrobin
option forwardfor
mode http
option httplog
option httpclose
timeout server 30000
timeout connect 4000
server w1 127.0.1.12:80 weight 1 maxconn 1024 check
# node (socket.io) on 127.0.2.12:8000
backend chat_socket_backend
balance roundrobin
mode http
option httplog
option forwardfor
timeout queue 5000
timeout server 86400000
timeout connect 86400000
timeout check 1s
no option httpclose
option http-server-close
option forceclose
server s14 127.0.2.12:8000 weight 1 maxconn 1024 check
/etc/nginx/sites-enabled/www.chat.nit
server {
listen 127.0.1.12:80;
root /data/node/chat;
index client.html;
server_name www.chat.nit;
# favicon.ico is in /images
location = /favicon.ico { rewrite /(.*) /images/$1 last; }
# standard includes
location ~ ^/(css|images|scripts)/ {
try_files $uri =404;
}
# html page (only in root dir)
location ~ ^/([-_a-z]+)\.html$ {
try_files $uri =404;
}
error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /usr/share/nginx/www;
}
}
chat (node.js): server.js
var app = require('http').createServer()
, io = require('socket.io').listen(app);
app.listen(8000,'127.0.2.12');
io.sockets.on('connection', function(socket) {
...
});
chat: client.html
<head>
<script src="/scripts/socket.io/socket.io.js"></script>
<script>
var socket = io.connect('http://www.chat.nit:80');
...
</script>
</head>
notes:
link the socket.io client js into the scripts/ directory:
/.../scripts$ ln -s ../node_modules/socket.io/node_modules/socket.io-client/dist/ socket.io
/etc/default/haproxy (contrary to the article, this must be set or haproxy will not start at all):
ENABLED=1
this version of haproxy is not logging. I found kvz's write-up on how to use rsyslogd via 127.0.0.1, but could not make it fly.
this solution is working - not sysadmin quality, to be sure. (enhancements are more than welcome.)
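As a quick end-to-end sanity check, something like this should exercise both paths (a sketch; it assumes the /etc/hosts entry above and relies on socket.io 0.9 serving its client script at /socket.io/socket.io.js by default):
# static page: haproxy -> nginx
curl -i http://www.chat.nit/client.html
# websocket path: haproxy -> node/socket.io
curl -i http://www.chat.nit/socket.io/socket.io.js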
It looks like you can proxy WebSockets through nginx since v1.3.13.
See http://nginx.org/en/docs/http/websocket.html for more details.
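The key pieces are speaking HTTP/1.1 to the upstream and passing the Upgrade/Connection headers through; a minimal sketch, assuming a WebSocket (socket.io) backend on 127.0.0.1:8000:
location /socket.io/ {
    proxy_pass http://127.0.0.1:8000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}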
Here's my (old and for testing purposes) HAProxy config for proxying WebSockets and normal HTTP requests.
global
maxconn 4096
nbproc 2
daemon
user nobody
defaults
mode http
frontend app
bind 0.0.0.0:8000
mode tcp
timeout client 86400000
default_backend www_backend
acl is_websocket path_beg /sockets
use_backend socket_backend if is_websocket
tcp-request inspect-delay 500ms
tcp-request content accept if HTTP
backend www_backend
balance roundrobin
option forwardfor
mode http
option httplog
option httpclose
timeout server 30000
timeout connect 4000
server w1 localhost:8080 weight 1 maxconn 1024 check
backend socket_backend
balance roundrobin
mode http
option httplog
option forwardfor
timeout queue 5000
timeout server 86400000
timeout connect 86400000
timeout check 1s
no option httpclose
option http-server-close
option forceclose
server s1 localhost:8081 weight 1 maxconn 1024 check
Note that I am recognizing whether a request is WS or not by looking at the path (the acl is_websocket path_beg /sockets line). This can be replaced with, for example, this:
acl is_websocket hdr(Upgrade) -i WebSocket
or this:
acl is_websocket hdr_beg(Host) -i ws
or both. Proxying to nginx with this config should work out of the box.
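Using both at once would look like this (acl lines that share a name are ORed together, so a match on either selects the websocket backend):
acl is_websocket hdr(Upgrade) -i WebSocket
acl is_websocket hdr_beg(Host) -i ws
use_backend socket_backend if is_websocket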
Related
Trying to figure out how a static page can be added to HAProxy.
I have a website: https://sky.example.com
And I need to add an additional static page https://sky.example.com/testing through HAProxy.
My config file haproxy.cfg looks like this:
global
log /var/log local2 err
chroot /var/lib/haproxy
user haproxy
group haproxy
daemon
tune.ssl.default-dh-param 1024
maxconn 8000
defaults
mode http
log global
balance static-rr
option httplog
timeout server 300s
timeout client 300s
timeout connect 5s
timeout queue 60s
retries 3
timeout http-request 60s
maxconn 8000
frontend skying
bind *:443 ssl crt /etc/haproxy/ssl/testing.pem
option forwardfor
acl modelx hdr(host) -i sky.example.com
use_backend missmay if modelx
acl is_info_testing path /testing
use_backend missmay if is_info_testing
backend missmay
mode http
errorfile 200 /etc/haproxy/static/testing.html
server test1_node1 192.168.1.25:78222 check cookie test1_node1
server test1_node2 192.168.1.26:78222 check cookie test1_node2
But it's not working. I get a 404 error when I try to get the page https://sky.example.com/testing.
It's hard to say what exactly is wrong based on your config file, but I would suggest checking the HAProxy version first. Sometimes different versions can cause issues like that.
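For example, you can check the installed version (and the build options) with:
haproxy -v
haproxy -vv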
What you are looking for is listen and monitor-uri.
The following config, placed below defaults, serves the static file 200.http on port 80:
listen static-file
bind :80
monitor-uri /
errorfile 200 /usr/local/etc/haproxy/errors/200.http
Content of 200.http could be:
HTTP/1.0 200 OK
Cache-Control: no-cache
Connection: close
Content-Type: text/html
<html><body><h1>200 we are good</h1>
Health checked.
</body></html>
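To wire this up for the /testing URL from the question, one possible approach (a sketch, not tested against your setup: the 127.0.0.1:8081 port and the backend name are hypothetical, and the errorfile must be a full raw HTTP response like the one above, not a bare HTML file) is to bind the static listener to a local port and route to it from the HTTPS frontend:
listen static-testing
    bind 127.0.0.1:8081
    monitor-uri /testing
    errorfile 200 /etc/haproxy/static/testing.http

frontend skying
    ...
    acl is_info_testing path /testing
    use_backend be_static_testing if is_info_testing

backend be_static_testing
    server static1 127.0.0.1:8081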
What is the equivalent of this nginx proxy_pass in HAProxy?
location / {
proxy_pass https://WebApplication.azurewebsites.net;
}
I tried to test this, but I receive a 404 when I point to any backend server with the configuration below (no ACLs, just the root path, with a self-signed certificate).
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
log /dev/log local0
log /dev/log local1 notice
user haproxy
group haproxy
maxconn 16000
stats socket /var/lib/haproxy/stats level admin
tune.bufsize 32768
tune.maxrewrite 1024
tune.ssl.default-dh-param 2048
daemon
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
#---------------------------------------------------------------------
#HAProxy Monitoring Config
#---------------------------------------------------------------------
log global
mode http
option httplog
option dontlognull
timeout connect 5000
timeout client 50000
timeout server 50000
#---------------------------------------------------------------------
# FrontEnd Configuration
#---------------------------------------------------------------------
frontend fe_http_sne_in
bind *:443 ssl crt /etc/ssl/private/mydomain.pem
option forwardfor
default_backend be_default
backend be_default
mode http
option forwardfor
http-request add-header X-Forwarded-For %[src]
server srv02 www.google.com:443 ssl verify none
I receive a 404 when pointing to any backend server; tested with bing and google as URLs ...
I suggest using the following config:
frontend fe_http_sne_in
bind *:443 ssl crt /etc/ssl/private/mydomain.pem
option forwardfor
use_backend be_sne_insecure if { path_beg /test }
default_backend be_default
backend be_default
...
backend be_sne_insecure
mode http
option forwardfor
http-request replace-header Host .* WebApplication.azurewebsites.net
server srv01 WebApplication.azurewebsites.net:443 ssl verify none
The ACLs are explained in the blog post Introduction to HAProxy ACLs.
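For the plain location / { proxy_pass ... } case with no ACL at all, the equivalent is simply the default backend with the Host header rewritten; a minimal sketch against the same (hypothetical) Azure host - the 404 in the original config most likely comes from the upstream seeing the wrong Host header:
backend be_default
    mode http
    option forwardfor
    http-request set-header Host WebApplication.azurewebsites.net
    server srv01 WebApplication.azurewebsites.net:443 ssl verify none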
Currently I am using pfSense on my server with the HAProxy package, because I can easily configure it via the GUI.
I configured HAProxy to act as a reverse proxy corresponding to this guide: https://blog.devita.co/pfsense-to-proxy-traffic-for-websites-using-pfsense/
SSL offloading works like a charm. The problem I have is that when there is more than one service (open port) on the same internal IP, it does not seem to work.
Example:
I configure service1.domain.com for Service1 with port 8000 (10.100.10.101:8000) and it works flawlessly.
Now I need another port on the same machine (e.g. 10.100.10.101:8082) with another service. If I configure another backend pointing to the same IP but with a different port, I can only reach the second service (service2.domain.com), even if I access service1.domain.com.
My use case is that I am trying to set up Seafile which is using port 8000 for the web GUI and port 8082 for the fileserver. Right now I am able to access the web GUI but I am not able to upload, download or share files.
My configuration:
# Automaticaly generated, dont edit manually.
# Generated on: 2018-09-29 19:24
global
maxconn 1000
stats socket /tmp/haproxy.socket level admin
gid 80
nbproc 1
hard-stop-after 15m
chroot /tmp/haproxy_chroot
daemon
tune.ssl.default-dh-param 8192
server-state-file /tmp/haproxy_server_state
ssl-default-bind-ciphers TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256:TLS13-CHACHA20-POLY1305-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
ssl-default-server-ciphers TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256:TLS13-CHACHA20-POLY1305-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
listen HAProxyLocalStats
bind 127.0.0.1:2200 name localstats
mode http
stats enable
stats admin if TRUE
stats show-legends
stats uri /haproxy/haproxy_stats.php?haproxystats=1
timeout client 5000
timeout connect 5000
timeout server 5000
frontend shared-frontend-merged
bind X.X.X.X:443 name X.X.X.X:443 ssl crt-list /var/etc/haproxy/shared-frontend.crt_list
mode http
log global
option http-keep-alive
option forwardfor
acl https ssl_fc
http-request set-header X-Forwarded-Proto http if !https
http-request set-header X-Forwarded-Proto https if https
timeout client 30000
http-response set-header Strict-Transport-Security max-age=15768000
acl aclcrt_shared-frontend var(txn.txnhost) -m reg -i ^([^\.]*)\.domain\.com(:([0-9]){1,5})?$
acl ACL1 var(txn.txnhost) -m str -i test.domain.com
acl ACL2 var(txn.txnhost) -m str -i service1.domain.com
acl ACL3 var(txn.txnhost) -m str -i service2.domain.com
http-request set-var(txn.txnhost) hdr(host)
default_backend test.domain.com_ipv4
default_backend service1.domain.com_ipvANY
default_backend service2.domain.com_ipvANY
frontend http-to-https
bind X.X.X.X:80 name X.X.X.X:80
mode http
log global
option http-keep-alive
timeout client 30000
http-request redirect scheme https
backend test.domain.com_ipv4
mode http
id 10100
log global
timeout connect 30000
timeout server 30000
retries 3
source ipv4# usesrc clientip
option httpchk GET /
server testvm-server01 10.100.10.101:54080 id 10101 check inter 1000
backend service1.domain.com_ipvANY
mode http
id 102
log global
timeout connect 30000
timeout server 30000
retries 3
option httpchk GET /
server seafile-vm-01 10.100.10.103:8000 id 101 check inter 1000
backend service2.domain.com_ipvANY
mode http
id 104
log global
timeout connect 30000
timeout server 30000
retries 3
option httpchk GET /
server seafile-vm-02 10.100.10.103:8082 id 103 check inter 1000
I would really be glad if anyone could point me in the right direction. Thank you in advance, and if you need further information please tell me.
Best regards,
Bioneye
I was able to solve my problem with the help of one awesome user over on reddit.
The first problem was that I misconfigured my frontend and thus had 3 default_backend lines. That was the reason why every service pointed to the same virtual machine. To solve it I just had to add the if conditions corresponding to my ACL names.
The second problem was that my Service2 was shown as DOWN on the HAProxy stats page. I had to change the health check method from HTTP to Basic, and that finally resolved everything.
This is the working configuration:
# Automaticaly generated, dont edit manually.
# Generated on: 2018-10-02 16:59
global
maxconn 1000
stats socket /tmp/haproxy.socket level admin
gid 80
nbproc 1
hard-stop-after 15m
chroot /tmp/haproxy_chroot
daemon
tune.ssl.default-dh-param 8192
server-state-file /tmp/haproxy_server_state
ssl-default-bind-ciphers TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256:TLS13-CHACHA20-POLY1305-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
ssl-default-server-ciphers TLS13-AES-256-GCM-SHA384:TLS13-AES-128-GCM-SHA256:TLS13-CHACHA20-POLY1305-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256
ssl-default-server-options no-sslv3 no-tlsv10 no-tlsv11 no-tls-tickets
listen HAProxyLocalStats
bind 127.0.0.1:2200 name localstats
mode http
stats enable
stats admin if TRUE
stats show-legends
stats uri /haproxy/haproxy_stats.php?haproxystats=1
timeout client 5000
timeout connect 5000
timeout server 5000
frontend shared-frontend-merged
bind X.X.X.X:443 name X.X.X.X:443 ssl crt-list /var/etc/haproxy/shared-frontend.crt_list
mode http
log global
option http-keep-alive
option forwardfor
acl https ssl_fc
http-request set-header X-Forwarded-Proto http if !https
http-request set-header X-Forwarded-Proto https if https
timeout client 30000
http-response set-header Strict-Transport-Security max-age=15768000
acl aclcrt_shared-frontend var(txn.txnhost) -m reg -i ^([^\.]*)\.domain\.com(:([0-9]){1,5})?$
acl ACL1 var(txn.txnhost) -m beg -i test.domain.com
acl ACL2 var(txn.txnhost) -m beg -i service1.domain.com
acl ACL3 var(txn.txnhost) -m beg -i service2.domain.com
http-request set-var(txn.txnhost) hdr(host)
use_backend test.domain.com_ipv4 if ACL1
use_backend service1.domain.com_ipvANY if ACL2
use_backend service2.domain.com-seafhttp_ipvANY if ACL3
frontend http-to-https
bind X.X.X.X:80 name X.X.X.X:80
mode http
log global
option http-keep-alive
timeout client 30000
http-request redirect scheme https
backend test.domain.com_ipv4
mode http
id 10100
log global
timeout connect 30000
timeout server 30000
retries 3
source ipv4# usesrc clientip
option httpchk GET /
server testvm-server01 10.100.10.101:54080 id 10101 check inter 1000
backend service1.domain.com_ipvANY
mode http
id 102
log global
timeout connect 30000
timeout server 30000
retries 3
option httpchk GET /
server seafile-vm-01 10.100.10.103:8000 id 101 check inter 1000
backend service2.domain.com-seafhttp_ipvANY
mode http
id 104
log global
timeout connect 30000
timeout server 30000
retries 3
server seafile-vm-02 10.100.10.103:8082 id 103 check inter 1000
For further details: https://www.reddit.com/r/PFSENSE/comments/9kezl3/pfsense_haproxy_reverse_proxy_with_multiple/?st=jmruoa9r&sh=26d24791
TLDR: I misconfigured my Action Table and had the wrong health check in place.
Greetings,
Bioneye
I have Ubuntu 12.04 LTS running. My web server is Tomcat 7.0.42 and I use HAProxy as a proxy server. My application is a servlet application which uses websockets.
Sometimes when I request my page I get a "502 Bad Gateway" error on some resources - not on all, but on some. I think this has something to do with my HAProxy configuration, which is the following:
global
maxconn 4096 # Total Max Connections. This is dependent on ulimit
nbproc 1
defaults
mode http
option http-server-close
option httpclose
# option redispatch
no option checkcache # test against 502 error
frontend all 0.0.0.0:80
timeout client 86400000
default_backend www_backend
acl is_websocket hdr(Upgrade) -i WebSocket
acl is_websocket hdr_beg(Host) -i ws
use_backend socket_backend if is_websocket
backend www_backend
balance roundrobin
option forwardfor # This sets X-Forwarded-For
timeout server 30000
timeout connect 4000
server apiserver localhost:8080 weight 1 maxconn 1024 check
backend socket_backend
balance roundrobin
option forwardfor # This sets X-Forwarded-For
timeout queue 5000
timeout server 86400000
timeout connect 86400000
server apiserver localhost:8080 weight 1 maxconn 1024 check
What do I have to change to prevent the 502 error?
First, enable haproxy logging. It will tell you why it is giving the 502s. My guess is that the backend "localhost:8080" is simply not able to keep up, or is not able to accept a connection within 4000 ms ("timeout connect 4000").
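Enabling logging is roughly this (assuming a local syslog daemon listening on 127.0.0.1:514/udp):
# in the global section
log 127.0.0.1 local0
# in defaults (or per frontend/backend)
log global
option httplog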
You may have exceeded some of the default limits in HAProxy. Try adding the following to the global section:
tune.maxrewrite 4096
tune.http.maxhdr 202
You should replace your defaults with these:
# Set balance mode
balance random
# Set http mode
mode http
# Set http keep alive mode (https://cbonte.github.io/haproxy-dconv/2.3/configuration.html#4)
option http-keep-alive
# Set http log format
option httplog
# Dont log empty line
option dontlognull
# Dissociate client from dead server
option redispatch
# Insert X-Forwarded-For header
option forwardfor
Don't use http-server-close; it is likely the cause of your problems.
Keep-alive keeps the connection open on both the client side and the server side.
It works fine with websockets as well.
And if you enable the check on the server, you also need to configure it with something like this:
# Enable http check
option httpchk
# Use server configuration
http-check connect default
# Use HEAD on / with HTTP/1.1 protocol for Host example.com
http-check send meth HEAD uri / ver HTTP/1.1 hdr Host example.com
# Expect status 200 to 399
http-check expect status 200-399
I'm trying to setup Websockets with HAProxy, with the following configuration:
http traffic -> haproxy -> varnish -> nginx -> node
ws traffic -> haproxy -> node
One subdomain has forced SSL, so haproxy redirects any http traffic to https (and ws to wss).
Everything is working as expected except for one issue: new sockets are constantly being created instead of just one (I can see them being created every few seconds in Chrome's debugging console).
I didn't have this problem when I used Varnish to pipe the websockets.
How can I fix this?
global
daemon
defaults
mode http
frontend insecure
# HTTP
bind :80
timeout client 5000
# acl
acl is_console hdr_end(host) -i console.mydomain.com
acl is_client hdr_end(host) -i www.mydomain.com
acl is_websocket hdr(Upgrade) -i WebSocket
acl is_websocket hdr_beg(Host) -i ws
# Redirect all HTTP traffic to HTTPS
redirect location https://console.mydomain.com if is_console
use_backend node_console if is_console is_websocket
use_backend node_client if is_client is_websocket
default_backend varnish
frontend secure
# HTTPS
bind :443 ssl crt /etc/ssl/console.mydomain.com.pem
timeout client 5000
# acl
acl is_console hdr_end(host) -i console.mydomain.com
acl is_client hdr_end(host) -i www.mydomain.com
acl is_websocket hdr(Upgrade) -i WebSocket
acl is_websocket hdr_beg(Host) -i ws
use_backend node_console if is_console is_websocket
use_backend node_client if is_client is_websocket
default_backend varnish
backend varnish
balance leastconn
option forwardfor
timeout server 5000
timeout connect 4000
server varnish1 127.0.0.1:6081
backend node_client
balance leastconn
option forwardfor
timeout queue 5000
timeout server 5000
timeout connect 5000
server client_node1 127.0.0.1:3000
backend node_console
balance leastconn
option forwardfor
timeout queue 5000
timeout server 5000
timeout connect 5000
server console_node1 127.0.0.1:3001
I managed to fix this by setting 'timeout tunnel' to one day on the backends.
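In config terms that is the timeout tunnel directive, added to each websocket backend (1d = one day = 86400000 ms):
backend node_console
    balance leastconn
    option forwardfor
    timeout tunnel 1d
    timeout queue 5000
    timeout server 5000
    timeout connect 5000
    server console_node1 127.0.0.1:3001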