Internally load balance Docker containers using Azure Container Service

I am using Azure Container Service with Docker Swarm to host some containers. The containers run an ASP.NET Core Web API and expose a private port. I am trying to use HAProxy as an internal load balancer in front of these containers, which in turn is exposed through port 8080 on Azure Container Service.
Here is the haproxy.cfg:
global
    log 127.0.0.1 local0
    log 127.0.0.1 local1 notice
    #log loghost local0 info
    maxconn 4096
    chroot /usr/local/etc/haproxy
    uid 99
    gid 99

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:8080
    default_backend servers

backend servers
    server server1 10.0.0.4:8080 maxconn 32
    server server2 10.0.0.5:8080 maxconn 32
    server server3 10.0.0.6:8080 maxconn 32

With Docker Swarm as the orchestrator, ACS already creates load balancers (separate ones for the agents and the masters) in your Swarm-based cluster, so you do not need to provision your own.
See the sample demonstration here:
"Microsoft Azure Container Service Engine - Swarm Walkthrough"

Related

HAProxy tcp mode source client ip

I have the following setup in HAProxy
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log global
    retries 2
    option dontlognull
    timeout connect 10000
    timeout server 600000
    timeout client 600000

frontend https
    bind 5.x.x.x:443
    default_backend https

backend https
    mode tcp
    balance roundrobin
    option tcp-check
    server traefik 192.168.128.5:9443 check fall 3 rise 2
It works as expected; the backend server "traefik" is doing the SSL termination of the requests.
The problem is that the client source IP I see on the backend server is HAProxy's IP, and I would like to pass the original source IP through to the backend server.
Is this possible at all? I have tried all the options I found on the internet.
Thanks.
In the end the solution was to use the PROXY protocol (https://www.haproxy.com/blog/haproxy/proxy-protocol/), as it is supported by both HAProxy and Traefik.
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log global
    retries 2
    option dontlognull
    timeout connect 10000
    timeout server 600000
    timeout client 600000

frontend https
    bind 5.x.x.x:443
    default_backend https

backend https
    mode tcp
    balance roundrobin
    option tcp-check
    server traefik 192.168.128.5:9443 check fall 3 rise 2 send-proxy
And enabling Proxy Protocol on Traefik's entrypoint, as described here: https://docs.traefik.io/configuration/entrypoints/#proxyprotocol
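For completeness, a minimal sketch of the corresponding Traefik v1 entrypoint configuration (TOML) with Proxy Protocol enabled; the trusted IP range below is an assumption based on the HAProxy backend address in this setup, not a value from the question.

# traefik.toml (Traefik v1 syntax) - sketch only, adjust to your setup
[entryPoints]
  [entryPoints.https]
    address = ":9443"
    [entryPoints.https.tls]
    [entryPoints.https.proxyProtocol]
      # Only trust PROXY protocol headers sent by the HAProxy host
      trustedIPs = ["192.168.128.0/24"]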

Cannot Access Google App Engine Instance Externally

I'm running a Node.js app on Google App Engine, deployed from the Cloud Shell. I've deployed using gcloud app deploy and everything reports success. If I use gcloud app logs tail -s default I can see the logs; the first debug message from my app says it is listening on port 3000.
When I invoke the endpoint without the port on the end, i.e.
https://myapp.appspot.com/myendpoint
I get an error,
"GET /myendpoint" 502
If I try with port 3000, i.e.
https://myapp.appspot.com:3000/myendpoint
The request just times out and I get no log messages from the shell.
I have port 3000 opened on the firewall, and my app.yaml is,
runtime: nodejs
env: flex
service: default

manual_scaling:
  instances: 1

resources:
  cpu: 1
  memory_gb: 0.5
  disk_size_gb: 10
Update 1:
I've also tried adding a forwarding port to my app.yaml,
network:
  forwarded_ports:
    - 3000/tcp
And allowed port 3000 in the VPC Firewall, but this seems to make no difference.
Update 2:
I can SSH into the instance and access the endpoint using a wget http://127.0.0.1:3000/myendpoint command but still no external access.
Update 3:
I've also tried port 443, listening on IP 0.0.0.0. But it seems to bind to an IPv6 address and the port changes to 8443 (somehow). This is just insane...
I resolved the issue by binding my service to port 8080 and removing the "service" field from my app.yaml. External calls have no port specified and are all routed to port 8080 by default.
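A minimal sketch of the listener side, assuming an Express-style app (the route below is illustrative, not from the question): App Engine flexible routes external traffic to port 8080 and sets the PORT environment variable, so binding to it avoids hard-coding 3000.

// server.js - sketch only
const express = require('express');
const app = express();

app.get('/myendpoint', (req, res) => res.send('ok'));

// App Engine flexible forwards external requests to port 8080 and sets PORT
const port = process.env.PORT || 8080;
app.listen(port, () => console.log(`Listening on port ${port}`));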

HAProxy - LB IP address is not delegated to virtual machines

I am a total beginner with HAProxy, so any advice will be very helpful.
I have two virtual machines on Microsoft Azure.
They are in a virtual network and have the private IP addresses 10.0.9.4 and 10.0.9.5.
I created a new network interface on Microsoft Azure in the same virtual network, with the IP address 10.0.9.7.
Of course, this interface is not attached to any virtual machine.
The name of the interface is lb.oozie.local, private IP address 10.0.9.7.
I added the following to /etc/hosts on .4 and .5:
10.0.9.7 lb.oozie.local
I installed HAProxy on both machines, .4 and .5.
The haproxy.cfg file is the following:
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats timeout 30s
    #user haproxy
    #group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # Default ciphers to use on SSL-enabled listening sockets.
    # For more information, see ciphers(1SSL).
    ssl-default-bind-ciphers kEECDH+aRSA+AES:kRSA+AES:+AES256:RC4-SHA:!kEDH:!LOW:!EXP:!MD5:!aNULL:!eNULL

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend localnodes
    bind lb.oozie.local:80
    mode http
    default_backend nodes

backend nodes
    mode http
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost
    server oozie1 10.0.9.4:11000 check
    server oozie2 10.0.9.5:11000 check

listen stats lb.oozie.local:1936
    stats enable
    stats uri /haproxy?stats
I also ran:
sudo service haproxy restart
Redirecting to /bin/systemctl restart haproxy.service
Validation returns the following:
haproxy -f /etc/haproxy/haproxy.cfg -c
[WARNING] 284/134546 (22658) : config : frontend 'GLOBAL' has no 'bind' directive. Please declare it as a backend if this was intended.
Available polling systems :
epoll : pref=300, test result OK
poll : pref=200, test result OK
select : pref=150, test result FAILED
Total: 3 (2 usable), will use epoll.
Using epoll() as the polling mechanism.
[WARNING] 284/134547 (22658) : Server nodes/oozie2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] 284/134547 (22658) : sendto logger #1 failed: No such file or directory (errno=2)
[ALERT] 284/134547 (22658) : sendto logger #2 failed: No such file or directory (errno=2)
As I understood it, my servers should get the LB IP address (10.0.9.7).
I tried to ping 10.0.9.7 from 10.0.9.4 and 10.0.9.5, but on both servers it is unreachable:
ping 10.0.9.7
PING 10.0.9.7 (10.0.9.7) 56(84) bytes of data.
From 10.0.9.4 icmp_seq=1 Destination Host Unreachable
From 10.0.9.4 icmp_seq=2 Destination Host Unreachable
Also, if it is relevant:
I installed the keepalived mechanism.
I did not set a public IP address for the load balancer; it has only the private IP 10.0.9.7, because the service is invoked directly from servers 10.0.9.4 and 10.0.9.5.
Please help. Thank you in advance.
If you want to use a load balancer in front of VMs running HAProxy to create a fault-tolerant pair of HAProxies, you need to create an internal Azure Load Balancer with a frontend IP of 10.0.9.7 (rather than assigning 10.0.9.7 to a NIC). It is not possible to ICMP ping the frontend IP of a load balancer; you need to use a TCP ping instead. Make sure health probes are configured and probe a health signal from your HAProxy VMs directly, rather than the port HAProxy is offering up to clients (otherwise the result is probably not what you want). Familiarize yourself with Standard Load Balancer at https://aka.ms/lbstandard and note that an NSG must whitelist the ports used with a Standard LB.
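A rough sketch of that setup with the Azure CLI; the resource group, VNet, subnet, and resource names are placeholders, and the probe port 1936 is an assumption based on the stats listener in the question's config (the point being to probe HAProxy itself rather than the client-facing port).

# Internal Standard Load Balancer with frontend IP 10.0.9.7 (names are placeholders)
az network lb create --resource-group myRG --name haproxy-ilb --sku Standard \
  --vnet-name myVnet --subnet mySubnet --private-ip-address 10.0.9.7 \
  --frontend-ip-name feip --backend-pool-name hapool

# TCP health probe against HAProxy's stats/monitor port rather than the service port
az network lb probe create --resource-group myRG --lb-name haproxy-ilb \
  --name hap-probe --protocol tcp --port 1936

# Load-balancing rule forwarding port 80 to the HAProxy backend pool
az network lb rule create --resource-group myRG --lb-name haproxy-ilb \
  --name http --protocol tcp --frontend-port 80 --backend-port 80 \
  --frontend-ip-name feip --backend-pool-name hapool --probe-name hap-probe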

Azure Cloud App ERR_CONNECTION_TIMED_OUT

I would like to deploy a container-based app in Azure Container Service, and I followed this tutorial:
https://learn.microsoft.com/en-us/azure/container-service/dcos-swarm/container-service-mesos-marathon-ui
Everything went well except that the public URL shows
ERR_CONNECTION_TIMED_OUT in the browser.
When pinging the URL I am able to get the IP address, but the pings time out.
I have verified that port 80 is allowed in the agent load balancer's rules list.
How do I access the application from the public web?
When pinging the URL I am able to get the IP address, but the pings time out.
Azure disables ICMP packets, so you cannot ping an Azure public IP address. You could use telnet or tcping to check whether your service is listening.
Did you bind port 80 of the container to port 80 of the DC/OS agent? If I don't do this, I get the same error as you. Please refer to this link.
Note: I tested this in my lab; when I did not do this, the nginx service listened on another port. I SSHed to the agent VM:
root@dcos-agent-public-65818314000001:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7e8091548413 nginx "nginx -g 'daemon off" 14 minutes ago Up 14 minutes 0.0.0.0:4912->80/tcp mesos-d7be0314-6be2-467b-8376-433a05033b17-S1.42edeac0-2aa3-4ecd-acaa-17d5f2f4ac19
The service is listening on port 4912, not 80.
Once you have done this step, I suggest you also SSH to the agent VM (same user name and private key) and execute docker ps.
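For illustration, a sketch of a Marathon app definition that maps container port 80 to host port 80 on a public agent; the app id and image here are illustrative placeholders, not taken from the question.

{
  "id": "nginx-demo",
  "instances": 1,
  "cpus": 0.1,
  "mem": 128,
  "acceptedResourceRoles": ["slave_public"],
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "nginx",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 80, "hostPort": 80, "protocol": "tcp" }
      ]
    }
  }
}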

My websites running in docker containers, how to implement virtual host?

I am running two websites in two Docker containers on a VPS,
e.g. www.myblog.com and www.mybusiness.com.
How can I implement virtual hosts on the VPS so that both websites can use port 80?
I asked this question somewhere else and was suggested to take a look at https://github.com/hipache/hipache and https://www.tutum.co/
They look a bit involved. I am trying to find out if there is a more straightforward way to achieve this. Thanks!
In addition, I forgot to mention that my VPS is an Ubuntu 14.04 box.
Take a look at the jwilder/nginx-proxy project.
Automated nginx proxy for Docker containers using docker-gen
It's the easiest way to proxy your docker containers. You don't need to edit the proxy config file every time you restart a container or start a new one. It all happens automatically for you by docker-gen which generates reverse proxy configs for nginx and reloads nginx when containers are started and stopped.
Usage
To run it:
$ docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock \
jwilder/nginx-proxy
Then start any containers you want proxied with an env var VIRTUAL_HOST=subdomain.youdomain.com
$ docker run -e VIRTUAL_HOST=foo.bar.com ...
Provided your DNS is set up to forward foo.bar.com to the host running nginx-proxy, the request will be routed to a container with the VIRTUAL_HOST env var set.
Multiple Ports
If your container exposes multiple ports, nginx-proxy will default to the service running on port 80. If you need to specify a different port, you can set a VIRTUAL_PORT env var to select a different one. If your container only exposes one port and it has a VIRTUAL_HOST env var set, that port will be selected.
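For example, if a container serves on 8080 instead of 80 (the hostname and port here are illustrative):
$ docker run -e VIRTUAL_HOST=foo.bar.com -e VIRTUAL_PORT=8080 ...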
You need a reverse proxy. We use nginx and haproxy. They both work well, and are easy to run from a docker container. A nice way to run the entire setup would be to use docker-compose (formerly fig) to create the two website containers with no externally visible ports, and use a, say, haproxy container with links to both website containers. Then the entire combination exposes exactly one port (80) to the network, and the haproxy container forwards traffic to one or the other container based on the hostname of the request.
---
proxy:
  build: proxy
  ports:
    - "80:80"
  links:
    - blog
    - work
blog:
  build: blog
work:
  build: work
Then a haproxy config such as,
global
    log 127.0.0.1 local0
    maxconn 2000
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    user haproxy
    group haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    log global
    option dontlognull
    option redispatch
    retries 3
    timeout connect 5000s
    timeout client 1200000s
    timeout server 1200000s

### HTTP frontend
frontend http_proxy
    mode http
    bind *:80
    option forwardfor except 127.0.0.0/8
    option httplog
    option http-server-close
    acl blog_url hdr_beg(host) myblog
    use_backend blog if blog_url
    acl work_url hdr_beg(host) mybusiness
    use_backend work if work_url

### HTTP backends
backend blog
    mode http
    server blog1 blog:80 check

backend work
    mode http
    server work1 work:80 check
