How to get client IP from request inside haproxy docker container? - node.js

I am trying to get the client's IP address from the request object in my Node.js server.
My setup is as follows:
I run two Docker containers: one for HAProxy and the other for Node.js (using the Express framework). All incoming traffic is first received by HAProxy, which I use for proxying and load balancing. HAProxy forwards requests to the appropriate backends based on the ACLs in the configuration file.
I tried accessing the x-forwarded-for request header inside my Node.js app, but it only returned the IP of the Docker network gateway interface, 172.17.0.1.
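For reference, this is roughly how the header was being read in Express (a minimal sketch, not the exact code):
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  // with HAProxy on the Docker bridge this printed 172.17.0.1
  // instead of the real client address
  const clientIp = req.headers['x-forwarded-for'];
  res.send('client ip: ' + clientIp);
});

app.listen(3000);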
Heading over to the HAProxy configuration and using option forwardfor header X-Client-IP in the defaults block also set the x-client-ip header to the Docker network gateway interface IP, and the debug logs show the same IP.
So this is the trouble: since HAProxy is running inside a container, it believes that the Docker network gateway interface is the client.
How can I get the actual client IP to HAProxy inside the container so that it can forward it to Node.js?
This is my haproxy configuration file:
global
    debug
    maxconn 4096

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    timeout http-keep-alive 50000ms
    option http-keep-alive
    option http-server-close
    option forwardfor header X-Client-IP

frontend http-in
    bind *:80
    acl is_api hdr_end(host) -i api.3dphy-dev.com
    use_backend api if is_api
    default_backend default

backend default
    server s0 "${DOCKER_INTERFACE_IP}:3000"

backend api
    balance leastconn
    option httpclose
    option forwardfor
    server s1 "${DOCKER_INTERFACE_IP}:17884"
I run my haproxy container using:
docker run -d --name haproxy_1 -p 80:80 -e DOCKER_INTERFACE_IP=`ifconfig docker0 | grep -oP 'inet addr:\K\S+'` -v $(pwd)/config/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro haproxy:1.6
Note: I am not using any firewall. Also, feel free to suggest any improvements in my configuration. Keep-alive is also proving to be an issue.

Finally managed to find a solution after scouring the Docker forums.
The solution is a two-step process.
First, I needed to update my HAProxy configuration to this:
global
    debug
    maxconn 4096

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    timeout http-keep-alive 50000ms
    option http-keep-alive
    option http-server-close

frontend http-in
    bind *:80
    option forwardfor
    acl is_site hdr_end(host) -i surenderthakran-dev.com
    use_backend site if is_site
    default_backend default

backend default
    server s0 "${DOCKER_INTERFACE_IP}:3000"

backend site
    balance leastconn
    option httpclose
    option forwardfor
    server s1 "${DOCKER_INTERFACE_IP}:17884"
Notice the addition of option forwardfor in the frontend http-in block. This tells the frontend part of HAProxy to add the client IP to the request headers.
Second, the docker run command should be updated to:
docker run -d --name haproxy_1 -p 80:80 -e DOCKER_INTERFACE_IP=`ifconfig docker0 | grep -oP 'inet addr:\K\S+'` -v $(pwd)/config/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro --net=host haproxy:1.6
Notice the addition of the --net=host option in the docker run command. It tells Docker to run the container on the host's own network stack instead of the Docker bridge, so HAProxy sees the real client addresses.
Now the original client IP is added to the request and can be read from the x-forwarded-for header in any application the request is forwarded to.
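For example, in Express it can be read like this (a minimal sketch; enabling trust proxy makes req.ip honor X-Forwarded-For):
const express = require('express');
const app = express();

// trust the X-Forwarded-For header set by HAProxy
app.set('trust proxy', true);

app.get('/', (req, res) => {
  // req.ip now resolves to the original client IP, not 172.17.0.1
  res.send('client ip: ' + req.ip);
});

app.listen(3000);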

This is not possible the way HAProxy works: it keeps throwing errors at startup when it cannot connect to the host, because it needs to fully resolve the address. I tried a lot of workarounds (maybe it is possible), but I gave up and made this run with docker-compose.
I posted a running example earlier in another post that might help.
The gist is to link the containers to a hostname that actually exists, which is done with Docker linking.
docker-compose.yml
api1:
  build: .
  dockerfile: ./Dockerfile
  ports:
    - 3955
  links:
    - mongo
    - redis
  environment:
    - REDIS_HOST=redis
    - MONGO_HOST=mongo
    - IS_TEST=true
  command: "node app.js"
api2:
  build: .
  dockerfile: ./Dockerfile
  ports:
    - 3955
  links:
    - mongo
    - redis
  environment:
    - REDIS_HOST=redis
    - MONGO_HOST=mongo
    - IS_TEST=true
  command: "node app.js"
mongo:
  image: mongo
  ports:
    - "27017:27017"
  command: "--smallfiles --logpath=/dev/null"
redis:
  image: redis
  ports:
    - "6379:6379"
haproxy:
  image: haproxy:1.5
  volumes:
    - ./cluster:/usr/local/etc/haproxy/
  links:
    - "api1"
    - "api2"
  ports:
    - 80:80
    - 70:70
  expose:
    - "80"
    - "70"
haproxy.cfg
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 10000
    timeout server 10000

listen stats :70
    stats enable
    stats uri /

frontend balancer
    bind 0.0.0.0:80
    mode http
    default_backend aj_backends

backend aj_backends
    mode http
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost
    default-server inter 3s fall 5
    server api1 api1:3955
    server api2 api2:3955
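Assuming haproxy.cfg sits in the ./cluster directory mounted above, the whole stack should come up with the usual:
docker-compose up -d
after which requests to port 80 are balanced round-robin across api1 and api2, and the stats page is served on port 70.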

Related

How can I get the content path in HAProxy?

I stored a port number in the client-side path and I want to use it in the frontend section on the web server. How can I get the path content in HAProxy? I don't want to use the if command.
My code is:
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend http80
    bind *:2095
    mode http
    use_backend webs1 if { path -m beg -i /1023 }
    use_backend webs2 if { path -m beg -i /5449 }
    use_backend webs3 if { path -m beg -i /4855 }

backend webs1
    mode http
    server webserver1 ip:1023

backend webs2
    mode http
    server webserver1 ip:5449

backend webs3
    mode http
    server webserver1 ip:4855
Thanks.
You can try to set the destination port via http-request set-dst-port.
Here is an untested example, just so you get the idea:
backend webs2
    http-request set-var(txn.dst-port) %[url,'regsub("\/","",i)']
    http-request set-dst-port %[var(txn.dst-port)]
    server webserver1 0.0.0.0:0
Here is the documentation for http-request set-dst.
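Untested as well, but if it behaves as intended, a request whose path is just the port number should be routed to that port, e.g. (hypothetical host name):
curl http://your-haproxy-host:2095/1023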

How to make traefik work with graylog2?

I'm getting this error when opening the web interface URL:
Server currently unavailable
We are experiencing problems connecting to the Graylog server running on http://127.0.0.1:9000/api. Please verify that the server is healthy and working correctly.
You will be automatically redirected to the previous page once we can connect to the server.
Do you need a hand? We can help you.
More details
docker-compose.yml:
graylog:
  image: graylog2/server:2.3.0-1
  environment:
    GRAYLOG_PASSWORD_SECRET: xxx
    GRAYLOG_ROOT_PASSWORD_SHA2: xxx
    GRAYLOG_WEB_ENDPOINT_URI: http://example.com/api/
    GRAYLOG_REST_LISTEN_URI: http://0.0.0.0:9000/api/
    GRAYLOG_WEB_LISTEN_URI: http://0.0.0.0:9000/
    GRAYLOG_ELASTICSEARCH_CLUSTER_NAME: graylog
    GRAYLOG_ELASTICSEARCH_HOSTS: http://graylog-elasticsearch:9200
  depends_on:
    - graylog-elasticsearch
    - mongo
  networks:
    - traefik
    - default
    - graylog
  deploy:
    labels:
      - "traefik.port=9000"
      - "traefik.tags=logging"
      - "traefik.docker.network=infra_traefik"
      - "traefik.backend=graylog"
    restart_policy:
      condition: on-failure
    replicas: 1
    placement:
      constraints:
        - node.labels.name == manager-1
As you can see, everything should work without any problem.
Here's what netstat shows:
root@6399d2a13c5d:/usr/share/graylog# netstat -tulnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:9000 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.11:42255 0.0.0.0:* LISTEN -
udp 0 0 127.0.0.11:51199 0.0.0.0:* -
Here's the container's printenv:
root@6399d2a13c5d:/usr/share/graylog# printenv
GRAYLOG_ELASTICSEARCH_CLUSTER_NAME=graylog
HOSTNAME=6399d2a13c5d
TERM=xterm
GRAYLOG_WEB_ENDPOINT_URI=http://example.com/api/
GRAYLOG_REST_LISTEN_URI=http://0.0.0.0:9000/api/
GRAYLOG_ROOT_PASSWORD_SHA2=ччч
CA_CERTIFICATES_JAVA_VERSION=20140324
GRAYLOG_PASSWORD_SECRET=ччч
GRAYLOG_REST_TRANSPORT_URI=http://example.com/api/
PWD=/usr/share/graylog
JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/jre
LANG=C.UTF-8
JAVA_VERSION=8u72
SHLVL=1
HOME=/root
JAVA_DEBIAN_VERSION=8u72-b15-1~bpo8+1
GRAYLOG_ELASTICSEARCH_HOSTS=http://graylog-elasticsearch:9200
GRAYLOG_WEB_LISTEN_URI=http://0.0.0.0:9000/
GOSU_VERSION=1.7
GRAYLOG_SERVER_JAVA_OPTS=-Xms1g -Xmx2g -XX:NewRatio=1 -XX:MaxMetaspaceSize=256m -server -XX:+ResizeTLAB -XX:+UseConcMarkSweepGC -XX:+CMSConcurrentMTEnabled -XX:+CMSClassUnloadingEnabled -XX:+UseParNewGC -XX:-OmitStackTraceInFastThrow
_=/usr/bin/printenv
I assume the problem may be with this custom header, which Graylog probably needs:
RequestHeader set X-Graylog-Server-URL "http://graylog.example.org/api/"
For the implementation example you have given, I am assuming that example.org is a remote host rather than 127.0.0.1. In that situation, you are correct in assuming that the additional header is needed.
From my experience, you should strip api/ for the end-user web access.
You will need to set the request header X-Graylog-Server-URL, e.g. curl -H "X-Graylog-Server-URL: http://graylog.example.org/" http://127.0.0.1:9000. The Graylog web proxy config docs give some good info if you want to set up a web server in front of your Graylog server. It's just a pity that (at the time of writing this) the Docker Examples do not include a basic nginx config.
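For what it's worth, a basic untested nginx sketch along those lines might look like this (the server name and ports are assumptions taken from the examples above):
server {
    listen 80;
    server_name graylog.example.org;

    location / {
        # tell the Graylog web interface its externally visible URL
        proxy_set_header X-Graylog-Server-URL http://$server_name/;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://127.0.0.1:9000;
    }
}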
I am using Kubernetes and had to add the following to my ingress annotations to get traffic routed in correctly:
ingress:
  annotations:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-Graylog-Server-URL https://$server_name/;
I think you should just use
GRAYLOG_WEB_ENDPOINT_URI=http://<your-host-ip>/api/
without the REST URI settings.

How to use port forwarding to connect to a Docker container using a DNS name

I have 2 Redis containers running on the same machine, m1.
container1 has port mapping 6379 to 6400:
docker run -d -p 6379:6400 myredisimage1
container2 has port mapping 6379 to 7500:
docker run -d -p 6379:7500 myredisimage2
I am looking for a solution where another machine, m2, can communicate with machine m1 using different DNS names but the same port number:
redis.container1.com:6379
redis.container2.com:6379
and have those requests redirected to the proper containers inside machine m1.
Is it possible to achieve this?
This is possible, but hacky. First, ask yourself if you really need to do this, or if you can get away with just using different ports for the containers. Anyway, if you absolutely need to do this, here's how:
Each Docker container gets its own IP address accessible from the host machine. AFAIK, these are assigned pseudo-randomly at run time, but they can be looked up with docker inspect $CONTAINER_ID, for example:
docker inspect e804af2472ca
[
    {
        "Id": "e804af2472ca605dec0035f45d3bd05c1fbccee31e6c09381b0c16657378932f",
        "Created": "2016-02-02T21:34:12.49059198Z",
        ...
            "Gateway": "172.17.0.1",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "172.17.0.6",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            ...
        }
    }
]
In this case, we can see this container's IP address as seen from the host is 172.17.0.6 (the Gateway field, 172.17.0.1, is the Docker bridge itself). That IP address is fully usable from the host, so you can have something proxy redis.container1.com to it and redis.container2.com to the other container's IP. You'd need to reload the proxy configuration every time the box comes up, so this would definitely not be ideal, but it should work.
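As a shortcut, the same field can be pulled out directly with a Go-template format string, which is easier to script:
docker inspect -f '{{ .NetworkSettings.IPAddress }}' e804af2472ca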
Again, my recommendation overall is don't do this.
I'm not sure if I'm getting you right, but how could you start two containers both listening on the same port?
It seems to me that this should be dealt with by a load balancer. Try HAProxy and set up an ACL for each domain name.
I would go with something like this (using docker-compose).
Docker Compose setup to deploy the Docker images:
redis-1:
  container_name: redis-1
  image: myredis
  restart: always
  expose:
    - "6400"
redis-2:
  container_name: redis-2
  image: myredis
  restart: always
  expose:
    - "6400"
haproxy:
  container_name: haproxy
  image: million12/haproxy
  restart: always
  command: -n 500
  ports:
    - "6379:6379"
  links:
    - redis-1:redis.server.one
    - redis-2:redis.server.two
  volumes:
    - /path/to/my/haproxy.cfg:/etc/haproxy/haproxy.cfg
And then a custom HAProxy config:
global
    chroot /var/lib/haproxy
    user haproxy
    group haproxy

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    ssl-default-bind-ciphers AES256+EECDH:AES256+EDH:AES128+EDH:EECDH:!aNULL:!eNULL:!LOW:!DES:!3DES:!RC4

    spread-checks 4
    tune.maxrewrite 1024
    tune.ssl.default-dh-param 2048

defaults
    mode http
    balance roundrobin

    option dontlognull
    option dontlog-normal
    option redispatch

    maxconn 5000
    timeout connect 10s
    timeout client 25s
    timeout server 25s
    timeout queue 30s
    timeout http-request 10s
    timeout http-keep-alive 30s

    # Stats
    stats enable
    stats refresh 30s
    stats hide-version

frontend http-in
    bind *:6379
    mode tcp
    acl is_redis1 hdr_end(host) -i redis.server.one
    acl is_redis2 hdr_end(host) -i redis.server.two
    use_backend redis1 if is_redis1
    use_backend redis2 if is_redis2
    default_backend redis1

backend redis1
    server r1 redis.server.one:6379

backend redis2
    server r2 redis.server.two:6379

My websites running in docker containers, how to implement virtual host?

I am running two websites in two Docker containers on a VPS,
e.g. www.myblog.com and www.mybusiness.com.
How can I implement virtual hosts on the VPS so that both websites can use port 80?
I asked this question somewhere else, and was suggested to take a look at: https://github.com/hipache/hipache and https://www.tutum.co/
They look like a bit of a learning curve. I am trying to find out if there is a more straightforward way to achieve this. Thanks!
In addition, I forgot to mention my VPS is an Ubuntu 14.04 box.
Take a look at the jwilder/nginx-proxy project.
Automated nginx proxy for Docker containers using docker-gen
It's the easiest way to proxy your Docker containers. You don't need to edit the proxy config file every time you restart a container or start a new one. It all happens automatically via docker-gen, which generates reverse proxy configs for nginx and reloads nginx when containers are started and stopped.
Usage
To run it:
$ docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock \
jwilder/nginx-proxy
Then start any containers you want proxied with an env var VIRTUAL_HOST=subdomain.yourdomain.com
$ docker run -e VIRTUAL_HOST=foo.bar.com ...
Provided your DNS is set up to forward foo.bar.com to the host running nginx-proxy, the request will be routed to the container with the VIRTUAL_HOST env var set.
Multiple Ports
If your container exposes multiple ports, nginx-proxy will default to the service running on port 80. If you need to specify a different port, you can set a VIRTUAL_PORT env var to select a different one. If your container only exposes one port and it has a VIRTUAL_HOST env var set, that port will be selected.
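For example (the port here is only an illustration; use whatever your service actually listens on):
$ docker run -e VIRTUAL_HOST=foo.bar.com -e VIRTUAL_PORT=8080 ...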
You need a reverse proxy. We use nginx and HAProxy; they both work well and are easy to run from a Docker container. A nice way to run the entire setup is to use docker-compose (formerly fig) to create the two website containers with no externally visible ports, plus a haproxy container with links to both website containers. The entire combination then exposes exactly one port (80) to the network, and the haproxy container forwards traffic to one container or the other based on the hostname of the request.
---
proxy:
  build: proxy
  ports:
    - "80:80"
  links:
    - blog
    - work
blog:
  build: blog
work:
  build: work
Then an HAProxy config such as:
global
    log 127.0.0.1 local0
    maxconn 2000
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    user haproxy
    group haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    log global
    option dontlognull
    option redispatch
    retries 3
    timeout connect 5000s
    timeout client 1200000s
    timeout server 1200000s

### HTTP frontend
frontend http_proxy
    mode http
    bind *:80
    option forwardfor except 127.0.0.0/8
    option httplog
    option http-server-close
    acl blog_url hdr_beg(host) myblog
    use_backend blog if blog_url
    acl work_url hdr_beg(host) mybusiness
    use_backend work if work_url

### HTTP backends
backend blog
    mode http
    server blog1 blog:80 check

backend work
    mode http
    server work1 work:80 check

HAProxy error on OpenShift - Failed to execute: 'control restart'

I am trying to configure HAProxy on OpenShift to achieve URL-based routing.
When I try to restart my app, I get the following error in the HAProxy log:
Starting frontend http-in: cannot bind socket
Following are the changes I made to haproxy.cfg; in addition, I have also added user nobody to the global section. What am I doing wrong? I am new to HAProxy, so I believe it might be something very basic I am missing.
frontend http-in
    bind :80
    acl is_blog url_beg /blog
    use_backend blog_gear if is_blog
    default_backend website_gear

backend blog_gear
    mode http
    balance roundrobin
    option httpchk
    option forwardfor
    server WEB1 nodejs-realspace.rhcloud.com weight 1 maxconn 512 check

backend website_gear
    mode http
    balance roundrobin
    option httpchk
    option forwardfor
    server WEB2 website-realspace.rhcloud.com weight 1 maxconn 512 check
To note a few problems with your configuration:
The first problem is that you should listen on port 8080. Ports 80, 443, 8000 and 8443 on the outside are redirected to port 8080 on your gear.
Second, website-realspace.rhcloud.com is probably the external name of the gear that also hosts your HAProxy, which means you have created a loop.
To access your Node.js app you'll need to use the 127.a.b.c address assigned to your gear.
Also, your Node.js app most likely cannot listen on the same port as your HAProxy.
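A rough sketch of the frontend after the port change (untested; the gear may require binding to its assigned internal address rather than all interfaces):
frontend http-in
    # external ports 80, 443, 8000 and 8443 are redirected to 8080 on the gear
    bind :8080
    acl is_blog url_beg /blog
    use_backend blog_gear if is_blog
    default_backend website_gear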
