HAProxy how to "stick-table" ip connection to same backend? - frontend

My HAProxy config:
global
maxconn 300000
defaults
mode http
log global
option httplog
option http-server-close
option dontlognull
option redispatch
option contstats
retries 3
backlog 10000
timeout client 5s
timeout connect 5s
timeout server 5s
timeout tunnel 120s
timeout http-keep-alive 5s
timeout http-request 15s
default-server inter 3s rise 2 fall 3
option forwardfor
frontend ft_web
bind *:8000 name http
maxconn 300000
stick-table type ip size 5000k expire 5m store conn_cur
tcp-request connection reject if { src_conn_cur ge 3 }
tcp-request connection track-sc1 src
default_backend bk_web
backend bk_web
balance roundrobin
server s8001 127.0.0.1:8001 maxconn 500 weight 10 cookie s8001 check
server s8002 127.0.0.1:8002 maxconn 500 weight 10 cookie s8002 check
server s8003 127.0.0.1:8003 maxconn 500 weight 10 cookie s8003 check
server s8004 127.0.0.1:8004 maxconn 500 weight 10 cookie s8004 check
Currently, if someone opens more than 3 connections, the extra ones are dropped, but I also need to stick each IP to a backend server, so that every time someone with the same IP hits the frontend they land on the same backend node ...
thanks

You're just missing the stickiness part: declare a stick-table in the backend and add a stick on rule there (stick rules only work in backend or listen sections, and stick on covers both matching and storing). My config would look like this:
global
maxconn 300000
defaults
mode http
log global
option httplog
option http-server-close
option dontlognull
option redispatch
option contstats
retries 3
backlog 10000
timeout client 5s
timeout connect 5s
timeout server 5s
timeout tunnel 120s
timeout http-keep-alive 5s
timeout http-request 15s
default-server inter 3s rise 2 fall 3
option forwardfor
frontend ft_web
bind *:8000 name http
maxconn 300000
stick-table type ip size 5000k expire 5m store conn_cur
tcp-request connection reject if { src_conn_cur ge 3 }
tcp-request connection track-sc1 src
default_backend bk_web
backend bk_web
balance roundrobin
stick-table type ip size 5000k expire 5m
stick on src
server s8001 127.0.0.1:8001 maxconn 500 weight 10 cookie s8001 check
server s8002 127.0.0.1:8002 maxconn 500 weight 10 cookie s8002 check
server s8003 127.0.0.1:8003 maxconn 500 weight 10 cookie s8003 check
server s8004 127.0.0.1:8004 maxconn 500 weight 10 cookie s8004 check
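Since bk_web runs in http mode and its server lines already carry cookie values, cookie-based persistence is a possible alternative to IP stickiness; it can tell apart clients that share one public IP behind NAT, which stick on src cannot. A sketch of the backend with only a cookie directive added and the stick lines left out (the cookie name SERVERID is an arbitrary choice of mine):
backend bk_web
balance roundrobin
cookie SERVERID insert indirect nocache
server s8001 127.0.0.1:8001 maxconn 500 weight 10 cookie s8001 check
server s8002 127.0.0.1:8002 maxconn 500 weight 10 cookie s8002 check
server s8003 127.0.0.1:8003 maxconn 500 weight 10 cookie s8003 check
server s8004 127.0.0.1:8004 maxconn 500 weight 10 cookie s8004 check
With the stick on src approach above, the learned IP-to-server mapping can be inspected at runtime with show table bk_web on the stats socket, assuming one is configured in the global section.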

Related

Haproxy rate limits by URL parameter

We have HAProxy and need to rate-limit requests per second to our API by a URL parameter (token/key).
Right now we have tested a configuration from the documentation:
frontend website
bind *:443 ssl crt /etc/ssl/private/crt.pem
stick-table type string size 100k expire 1s store http_req_rate(1s)
acl exceeds_limit_key url_param(key),table_http_req_rate() gt 1000
acl exceeds_limit url_param(token),table_http_req_rate() gt 1000
http-request track-sc0 url_param(key) unless exceeds_limit_key
http-request track-sc0 url_param(token) unless exceeds_limit
http-request deny deny_status 429 if exceeds_limit_key or exceeds_limit
default_backend servers
But we need a different limit for each token, so at the moment we are trying to use rates.map with this config:
frontend website
bind *:443 ssl crt /etc/ssl/private/crt.pem
stick-table type string size 100k expire 1s store http_req_rate(1s)
http-request track-sc0 base32+src
http-request set-var(req.rate_limit) url_param,map_beg(/etc/haproxy/rates.map,500)
http-request set-var(req.request_rate) base32+src,table_http_req_rate()
acl rate_abuse var(req.rate_limit),sub(req.request_rate) lt 0
http-request deny deny_status 429 if rate_abuse
default_backend servers
And rates.map:
#token
DB6sa5rdASVDGhasd67rdasd 100
But with this configuration we always get the default limit from url_param,map_beg(/etc/haproxy/rates.map,500).
In my opinion the problem is in the rates.map file. The question is: what form should the rates.map file take for url_param?
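For what it's worth, a map file is just plain text with one "<key> <value>" pair per line and '#' starting a comment line, so the file shown above is already in the expected form. The lookup side is the more likely culprit; the rough, untested sketch below keys both the stick-table and the map on the token itself and passes the parameter name to url_param explicitly, keeping the names from the question and assuming the token arrives as ?token=...:
frontend website
bind *:443 ssl crt /etc/ssl/private/crt.pem
stick-table type string size 100k expire 1s store http_req_rate(1s)
http-request track-sc0 url_param(token)
http-request set-var(req.rate_limit) url_param(token),map_beg(/etc/haproxy/rates.map,500)
http-request set-var(req.request_rate) url_param(token),table_http_req_rate()
acl rate_abuse var(req.rate_limit),sub(req.request_rate) lt 0
http-request deny deny_status 429 if rate_abuse
default_backend servers
For opaque tokens, the exact-match map converter may also be a better fit than map_beg, which matches on prefixes.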

Haproxy sticky sessions

I'm trying to use HAProxy 1.6.3 (2015/12/25) with sticky sessions.
I did everything according to the HAProxy manual, but unfortunately, checking the client browser I see that no cookies are being added (the balancer should return a cookie in the response to the first request, but it returns nothing and behaves as if nothing had happened, i.e. without cookies). Everything else works perfectly, but the cookies don't. I've attached my haproxy.cfg:
global
log /dev/haproxy/log local0
log /dev/haproxy/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin
stats timeout 30s
user haproxy
group haproxy
daemon
defaults
log global
mode http
option httplog
option dontlognull
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
stats enable
stats auth user:pass
stats uri /haproxy_stats
option httpchk HEAD / HTTP/1.0
option redispatch
balance roundrobin
frontend frontend_http
bind *:80
option forwardfor
default_backend backend_http
backend backend_http
option prefer-last-server
cookie mycookies insert indirect nocache
server server1 196.168.0.125:80 check cookie s1
server server2 196.168.0.126:80 check cookie s2
Also, my servers (server1, server2) are deployed on IIS and the balancer is deployed on Ubuntu 16.04 LTS.
Change the backend configuration:
backend backend_http
option prefer-last-server
cookie mycookies insert indirect nocache
server server1 196.168.0.125:80 check cookie server1
server server2 196.168.0.126:80 check cookie server2
or
backend backend_http
option prefer-last-server
cookie mycookies insert indirect nocache
server s1 196.168.0.125:80 check cookie s1
server s2 196.168.0.126:80 check cookie s2
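As a side note beyond the answer above, one way to see from HAProxy's own logs whether the persistence cookie is actually being inserted and presented back is to capture it in the frontend; the captured value (or a dash when absent) then shows up in the httplog line, next to the cookie status flags of the termination state. A small sketch, reusing the frontend from the question:
frontend frontend_http
bind *:80
option forwardfor
capture cookie mycookies= len 32
default_backend backend_http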

HAProxy not keeping HTTP connection open

I have a Node.js server that uses Server-Sent Events (SSE) to allow push notifications to connected web clients. It works beautifully when the browser talks to Node directly.
However, when I place HAProxy in the middle, as it must be in production to meet other requirements, the connections are closed and reopened (thanks to SSE's auto-reconnect) every 30 seconds. I've changed and tried everything I know of and could find online in the HAProxy configuration.
Most info out there, including the documentation examples, deals with sockets, but there is very little about SSE support. Should it support persistent HTTP connections for SSE? If so, what is the trick to configure it?
My config follows:
global
daemon
# maximum number of concurrent connections
maxconn 4096
# drop privileges after port binding
user nobody
group nogroup
# store pid of process in the file
pidfile /var/run/haproxy.pid
# create this socket for stats
stats socket /var/run/socket-haproxy
defaults
log global
mode http
# disable logging of null connections
option dontlognull
# I've tried all these to no avail
#option http-server-close
#option httpclose
option http-keep-alive
# Add x-forwarded-for header to forward clients IP to app
option forwardfor
# maximum time to wait for a server connection to succeed. Can be as low as few msec if Haproxy and server are on same LAN.
timeout connect 1s
# maximum inactivity time on client side. Recommended to keep it same as server timeout.
timeout client 24d
# maximum time given to server to respond to a request
timeout server 24d
# Long timeout for WebSocket connections.
timeout tunnel 8h
# timeout for keep alive
timeout http-keep-alive 60s
# maximum time to wait for client to send full request. Keep it like 5s for get DoS protection.
timeout http-request 5s
# enable stats web interface. very helpful to see what's happening in haproxy
stats enable
# default refresh time for web interface
stats refresh 30s
# this frontend interface receives the incoming http requests and forwards to https then handles all SSL requests
frontend public
# HTTP
bind :80
# by default, all incoming requests are sent to Node.js
default_backend node_backend
# redirect to the SSE backend if /ionmed/events (eventum #????)
acl req_sse_path path_beg /ionmed/events
use_backend node_sse_backend if req_sse_path
# redirect to the tomcat backend if Time Clock, ViewerJS, Spell Checker, Tomcat Manager, or eScripts (eventum #1039, #1082)
acl req_timeclock_path path_beg /TimeClock/
acl req_manager_path path_beg /manager/
acl req_spelling_path path_beg /jspellEvolution/
acl req_escripts_path path_beg /ionmed/escripts
use_backend tomcat_backend if req_timeclock_path or req_manager_path or req_spelling_path or req_escripts_path
# for displaying HAProxy statistics
acl req_stats path_beg /stats
use_backend stats if req_stats
# node backend, transfer to port 8081
backend node_backend
# Tell the backend that this is a secure connection,
# even though it's getting plain HTTP.
reqadd X-Forwarded-Proto:\ https
server node_server localhost:8081
# node SSE backend, transfer to port 8082
backend node_sse_backend
# Tell the backend that this is a secure connection,
# even though it's getting plain HTTP.
reqadd X-Forwarded-Proto:\ https
server node_sse_server localhost:8082
# tomcat backend, transfer to port 8888
backend tomcat_backend
# Tell the backend that this is a secure connection,
# even though it's getting plain HTTP.
reqadd X-Forwarded-Proto:\ https
server tomcat_server localhost:8888
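A side note on the configuration above rather than an answer to the timeout question: reqadd was deprecated and has since been removed in newer HAProxy releases, so on a current version the same header would be added with an http-request rule instead, for example:
backend node_backend
http-request add-header X-Forwarded-Proto https
server node_server localhost:8081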

HAProxy Configuration - How to make TCP connection sticky (Node.js, socket.io, websocket, FlashSocket)

I have set up HAProxy on an EC2 server where I'm running my two Node.js servers on ports 3005 and 3006. We set this up for our multiplayer game, and we use socket.io for real-time event updates on the client and server side. HAProxy works correctly with "balance source" (I have added a working copy of my HAProxy configuration), but the problem with the source balancer is that every event goes to the same server every time: I have 40 computers in my network, all 40 computers' events go to port 3005, and the port doesn't change even when I come back the next day. I would like to set up sticky TCP connections in HAProxy with TCP mode. Is there any way to do this with balance roundrobin? I have added my current settings file here. We also tried to use cookies, but that doesn't work in our case because we use mode tcp.
We also have a Flash game which loads its Flash policy from port 3843.
Here is my HAProxy configuration:
global
debug
log 127.0.0.1 local0 # Enable per-instance logging of events and traffic.
log 127.0.0.1 local1 notice # only send important events
nbproc 1
maxconn 65536
pidfile /var/run/haproxy.pid
defaults
log global
srvtimeout 300s
timeout connect 5s
timeout queue 5s
timeout server 1h
timeout tunnel 1h
frontend flash_policy
bind 0.0.0.0:843
timeout client 5s
default_backend nodejs_flashpolicy
frontend wwws
bind 0.0.0.0:3000 ssl crt /home/certificate/final.crt
timeout client 1h
default_backend flashsocket_backend
tcp-request inspect-delay 500ms
tcp-request content accept if HTTP
use_backend flashsocket_backend if !HTTP
backend flashsocket_backend
mode tcp
option log-health-checks
balance source
cookie JSESSIONID insert indirect nocache
server 3006Game serverip:3006 cookie socket1 weight 1 maxconn 32536 check
server 3005Game serverip:3005 cookie socket2 weight 1 maxconn 32536 check
backend nodejs_flashpolicy
server flashpolicy serverip:3843 weight 1 maxconn 65536 check
# Configuration for HAProxy Stats
listen stats :1900
mode http
timeout client 1h
stats enable
stats hide-version
stats realm Haproxy\ Statistics
stats uri /
stats auth alpesh:alpesh
It is possible by using the following options in the backend:
stick-table type ip size 50k expire 10m
stick on src
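Folded into the backend from the question, that would look roughly like this (a sketch; the cookie directives are dropped because, as noted in the question, they have no effect in tcp mode):
backend flashsocket_backend
mode tcp
option log-health-checks
balance roundrobin
stick-table type ip size 50k expire 10m
stick on src
server 3006Game serverip:3006 weight 1 maxconn 32536 check
server 3005Game serverip:3005 weight 1 maxconn 32536 check
With this, roundrobin only picks a server for the first connection from a given source IP; later connections match the stick-table entry and are sent to the same server until the entry expires.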

HAProxy mangling Socket.IO request - reserved fields must be empty

Hoping someone can help me.
I'm using NodeJS v0.8.16, Socket.IO v0.9.13, and HAProxy 1.5-dev17.
My setup is on Amazon AWS using a VPC, with HAProxy on a public-facing instance and NodeJS on a separate instance which normally is not publicly accessible. I do have an IP address on it for testing.
I have a test setup which logs into the NodeJS server, then opens an authenticated websocket through Socket.IO; session details are saved in Redis and shared between them. Once the websocket is connected successfully, a request is made through NodeJS which prompts an event to be emitted back to the client.
This flow works correctly when the test references the NodeJS instance directly, bypassing HAProxy. When it routes through HAProxy I get these errors from Socket.IO:
"reserved fields must be empty"
and
"no handler for opcode " referencing a random opcode
From what I can see, there is an initial byte that is parsed by Socket.IO using bitmasks to work out what the request is for. After routing through HAProxy yhis value is now not an expected one and throws these errors.
My HAProxy configuration is from another StackOverflow question HAProxy + WebSocket Disconnection
global
maxconn 4096 # Total Max Connections. This is dependent on ulimit
nbproc 2
defaults
mode http
retries 3
option redispatch
option http-server-close
frontend all 0.0.0.0:3000
timeout client 5000
default_backend www_backend
acl is_websocket hdr(Upgrade) -i WebSocket
acl is_websocket hdr_beg(Host) -i ws
use_backend socket_backend if is_websocket
backend www_backend
balance roundrobin
option forwardfor # This sets X-Forwarded-For
timeout server 5000
timeout connect 4000
server server 10.0.0.214:3000 weight 1 maxconn 1024 check
backend socket_backend
balance roundrobin
option forwardfor # This sets X-Forwarded-For
timeout queue 5000
timeout server 5000
timeout connect 5000
timeout tunnel 3600s
timeout http-keep-alive 1s
timeout http-request 15s
server server1 10.0.0.214:3000 weight 1 maxconn 1024 check
I've also tried various other HAProxy configurations but end up with the same result.
Has anyone come across this issue? I'm not sure what I've done incorrectly.
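One small side note on the configuration above, which does not explain the opcode errors by itself: HAProxy timeouts written without a unit are in milliseconds, so timeout client 5000 and timeout server 5000 are five-second timeouts. With explicit units those lines would read:
timeout client 5s
timeout server 5s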
