Resolving subdomain.localhost doesn't work with docker swarm - linux

What I want to achieve:
docker swarm on localhost
a dockerized reverse proxy which forwards subdomain.domain to the container with the app
a container with the app
What I have done:
changed /etc/hosts, which now looks like:
127.0.0.1 localhost
127.0.0.1 subdomain.localhost
set up traefik to forward word.beluga to a specific container
What is the problem:
I can't reach the container via the subdomain. It works if I use the port, though.
curl gives different results for the subdomain and the port.
The question:
What is the problem, and why?
How do I debug it and find out whether the problem is Docker- or network-based? (How can I check whether the request even reached my container?)
I'll add that I have also tried to do this on docker-machine (VirtualBox), but it wasn't working. So I moved to localhost, but as you can see that didn't help a lot.
I am losing hope, so any hint would be appreciated. Thank you in advance.

There’s no such thing as subdomains of localhost. By near-universal convention, localhost resolves to the IPv4 address 127.0.0.1 and the IPv6 address ::1.
You can still test virtual hosts with Docker, but you will have to use the port:
curl -H Host:sub.localhost http://localhost:8000
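As an alternative (assuming a curl new enough to have the --resolve option, which has been around for a long time), you can pin the name to 127.0.0.1 so the request keeps a normal Host header:
# Sends the request to 127.0.0.1:8000 while curl uses sub.localhost as the Host header
curl --resolve sub.localhost:8000:127.0.0.1 http://sub.localhost:8000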

Late to respond but I was able to achieve this using Traefik's 2.x routers feature like so:
labels:
- "traefik.http.routers.<unique name>.rule=Host(`subdomain.localhost`)"
in the docker-compose file:
version: '3.9'
services:
  app:
    image: myapp:latest
    labels:
      - "traefik.http.routers.myapp.rule=Host(`myapp.localhost`)"
  reverse-proxy:
    image: traefik:v2.4
    command: --api.insecure=true --providers.docker
    ports:
      - "80:80"
      # The Web UI (enabled by --api.insecure=true)
      - "9000:8080"
    volumes:
      # So that Traefik can listen to the Docker events
      - /var/run/docker.sock:/var/run/docker.sock
I think the reason why it works is that Traefik intercepts everything on localhost and only then applies the rules, so it's a Traefik-specific answer.
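To check that the router actually matches (a small sketch based on the compose file above; it assumes the app answers plain HTTP and Traefik is published on host port 80 as shown):
# Should be routed to the app container by the Host() rule
curl -H "Host: myapp.localhost" http://127.0.0.1/
# The API enabled by --api.insecure=true lists the routers Traefik discovered (mapped to host port 9000 above)
curl http://127.0.0.1:9000/api/http/routers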

Related

Conflicting port 80 on nginx and apache Docker containers

Is there any way to stop two Docker containers on the same machine from conflicting, since they both use the internal (container) port 80?
I have one nginx container that is reached on port 9000 from the browser.
916aa1f58ca3 nginx:1.21.6-alpine "/docker-entrypoint.…" 3 minutes ago Up 3 minutes 0.0.0.0:9000->80/tcp
and a second Apache2 that is mapped to external port 88.
4a3ba2c4847b apache-master_php "docker-php-entrypoi…" 3 hours ago Up 3 seconds 80/tcp, 0.0.0.0:88->88
They both run together, except when I send a request to the Apache container. Accessing Apache's index.html on port 88 in the browser crashes the machine.
Is there a way around this?
docker-compose.yml looks like this:
apache:
  ports:
    - 88:88
  volumes:
    - ./src:/var/www/html/
and for nginx:
nginx:
  ports:
    - "$SENTRY_BIND:80/tcp"
  image: "nginx:1.21.6-alpine"
A port conflict can never occur between the containers themselves, as Docker assigns the containers above to a private network created specifically for these services, and each container gets its own private IP.
The Docker configuration should be as below, assuming Apache is running with its default configuration (listening on port 80 inside the container):
apache:
  ports:
    - 88:80
  volumes:
    - ./src:/var/www/html/
Your configuration tells Docker to map host port 88 to container port 88, but Apache inside the container listens on port 80, so that mapping is wrong.
I think you should read the docs:
https://docs.docker.com/network/
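As a quick sanity check once the mapping is fixed (a hedged sketch; it assumes the containers from the question are recreated with the corrected ports):
# nginx: host port 9000 -> container port 80
curl -I http://localhost:9000/
# Apache: host port 88 -> container port 80
curl -I http://localhost:88/index.html
# The PORTS column should now show 0.0.0.0:9000->80/tcp and 0.0.0.0:88->80/tcp
docker ps --format '{{.Names}}\t{{.Ports}}'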

Docker request to own server

I have a docker instance running apache on port 80 and node.js+express running on port 3000. I need to make an AJAX request from the apache-served website to the node server running on port 3000.
I don't know what the appropriate URL to use is. I tried localhost, but that resolved to the localhost of the client browsing the webpage (i.e. the end user) instead of the localhost of the Docker image.
Thanks in advance for your help!
First you should split your containers: it is good practice in Docker to have one container per process.
Then you will need some tool for orchestration of these containers. You can start with docker-compose, as it is IMO the simplest one.
It will launch all your containers and manage their network settings for you by default.
So, imagine you have the following docker-compose.yml file for launching your apps:
docker-compose.yml
version: '3'
services:
  apache:
    image: apache
  node:
    image: node # or whatever
With such a simple configuration you will have the host names apache and node available in your network. So from inside your node application, you will see apache under the host name apache.
Just launch it with docker-compose up
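To see the built-in DNS in action, a minimal check (assuming both containers are running and that ping/curl exist in the node image, which is an assumption about that image):
# Resolve and call the apache service by its compose service name from inside the node container
docker-compose exec node ping -c 1 apache
docker-compose exec node curl -s http://apache/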
make an AJAX request from the [...] website to the node server
The JavaScript, HTML, and CSS that Apache serves up is all read and interpreted by the browser, which may or may not be running on the same host as the servers. Once you're at the browser level, code has no idea that Docker is involved with any of this.
If you can get away with only sending links without hostnames <img src="/assets/foo.png"> that will always work without any configuration. Otherwise you need to use the DNS name or IP address of the host, in exactly the same way you would as if you were running the two services directly on the host without Docker.
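For illustration only: the AJAX call has to target something the browser itself can reach, i.e. the host's published port, not a Docker-internal name.
# Roughly what the browser does when the page issues the AJAX request; myhost.example.com and /api/data are made-up placeholders
curl http://myhost.example.com:3000/api/data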

How to use port forwarding to connect to docker container using DNS name

I have 2 Redis containers running on the same machine, m1.
container1 has port mapping 6379 to 6400
docker run -d -p 6379:6400 myredisimage1
container2 has port mapping 6379 to 7500
docker run -d -p 6379:7500 myredisimage2
I am looking for a solution where another machine, m2, can communicate with machine m1 using different DNS names but the same port number:
redis.container1.com:6379
redis.container2.com:6379
and have those requests redirected to the proper container inside machine m1.
Is it possible to achieve this?
This is possible, but hacky. First, ask yourself if you really need to do this, or if you can get away with just using different ports for the containers. Anyway, if you do absolutely need to do this, here's how:
Each docker container gets its own ip address accessible from the host machine. AFAIK, these are generated pseudo-randomly at run-time, but they are accessible by doing a docker inspect $CONTAINER_ID, for example:
docker inspect e804af2472ca
[
    {
        "Id": "e804af2472ca605dec0035f45d3bd05c1fbccee31e6c09381b0c16657378932f",
        "Created": "2016-02-02T21:34:12.49059198Z",
        ...
            "Gateway": "172.17.0.1",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "172.17.0.6",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            ...
        }
    }
]
In this case, we know this container's IP address accessible from the host is 172.17.0.6. That IP address is fully usable from the host, so you can have something proxy redis.container1.com to it and redis.container2.com to your other container's IP. You'd need to reload the proxy addresses every time the box comes up, so this would definitely not be ideal, but it should work.
Again, my recommendation overall is don't do this.
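A slightly more convenient variant of the same lookup is docker inspect's --format flag, which prints only the address (this template is for containers on the default bridge network):
# Prints just the container's IP on the default bridge, e.g. 172.17.0.6
docker inspect --format '{{.NetworkSettings.IPAddress}}' e804af2472ca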
I'm not sure if I'm getting you right.
But how could you start two containers that both work on the same port?
It seems to me that this should be handled by a load balancer. Try HAProxy and set up two ACLs, one for each domain name.
I would go with something like this (using docker-compose):
Docker Compose setup to deploy the Docker images:
redis-1:
  container_name: redis-1
  image: myredis
  restart: always
  expose:
    - "6400"
redis-2:
  container_name: redis-2
  image: myredis
  restart: always
  expose:
    - "6400"
haproxy:
  container_name: haproxy
  image: million12/haproxy
  restart: always
  command: -n 500
  ports:
    - "6379:6379"
  links:
    - redis-1:redis.server.one
    - redis-2:redis.server.two
  volumes:
    - /path/to/my/haproxy.cfg:/etc/haproxy/haproxy.cfg
And then a custom haproxy config:
global
  chroot /var/lib/haproxy
  user haproxy
  group haproxy
  # Default SSL material locations
  ca-base /etc/ssl/certs
  crt-base /etc/ssl/private
  # Default ciphers to use on SSL-enabled listening sockets.
  ssl-default-bind-ciphers AES256+EECDH:AES256+EDH:AES128+EDH:EECDH:!aNULL:!eNULL:!LOW:!DES:!3DES:!RC4
  spread-checks 4
  tune.maxrewrite 1024
  tune.ssl.default-dh-param 2048

defaults
  mode http
  balance roundrobin
  option dontlognull
  option dontlog-normal
  option redispatch
  maxconn 5000
  timeout connect 10s
  timeout client 25s
  timeout server 25s
  timeout queue 30s
  timeout http-request 10s
  timeout http-keep-alive 30s
  # Stats
  stats enable
  stats refresh 30s
  stats hide-version

frontend http-in
  bind *:6379
  mode tcp
  acl is_redis1 hdr_end(host) -i redis.server.one
  acl is_redis2 hdr_end(host) -i redis.server.two
  use_backend redis1 if is_redis1
  use_backend redis2 if is_redis2
  default_backend redis1

backend redis1
  server r1 redis.server.one:6379

backend redis2
  server r2 redis.server.two:6379

My websites running in docker containers, how to implement virtual host?

I am running two websites in two Docker containers, respectively, on a VPS,
e.g. www.myblog.com and www.mybusiness.com
How can I implement virtual hosts on the VPS so that the two websites can both use port 80?
I asked this question somewhere else, and it was suggested that I take a look at: https://github.com/hipache/hipache and https://www.tutum.co/
They look a bit convoluted. I am trying to find out if there is a more straightforward way to achieve this. Thanks!
In addition, I forgot to mention my VPS is an Ubuntu 14.04 box.
Take a look at the jwilder/nginx-proxy project.
Automated nginx proxy for Docker containers using docker-gen
It's the easiest way to proxy your Docker containers. You don't need to edit the proxy config file every time you restart a container or start a new one; it all happens automatically via docker-gen, which generates reverse proxy configs for nginx and reloads nginx when containers are started and stopped.
Usage
To run it:
$ docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock \
jwilder/nginx-proxy
Then start any containers you want proxied with an env var VIRTUAL_HOST=subdomain.youdomain.com
$ docker run -e VIRTUAL_HOST=foo.bar.com ...
Provided your DNS is set up to forward foo.bar.com to the host running nginx-proxy, the request will be routed to the container with that VIRTUAL_HOST env var set.
Multiple Ports
If your container exposes multiple ports, nginx-proxy will default to the service running on port 80. If you need to specify a different port, you can set a VIRTUAL_PORT env var to select a different one. If your container only exposes one port and it has a VIRTUAL_HOST env var set, that port will be selected.
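For example (a hedged sketch; the image name and domain are placeholders), a container whose app listens on 8080 would be started like this:
# Tell nginx-proxy which container port to forward to when more than one is exposed
docker run -d -e VIRTUAL_HOST=foo.bar.com -e VIRTUAL_PORT=8080 my_image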
You need a reverse proxy. We use nginx and haproxy. They both work well, and are easy to run from a docker container. A nice way to run the entire setup would be to use docker-compose (formerly fig) to create the two website containers with no externally visible ports, and use a, say, haproxy container with links to both website containers. Then the entire combination exposes exactly one port (80) to the network, and the haproxy container forwards traffic to one or the other container based on the hostname of the request.
---
proxy:
  build: proxy
  ports:
    - "80:80"
  links:
    - blog
    - work
blog:
  build: blog
work:
  build: work
Then a haproxy config such as,
global
  log 127.0.0.1 local0
  maxconn 2000
  chroot /var/lib/haproxy
  pidfile /var/run/haproxy.pid
  user haproxy
  group haproxy
  daemon
  stats socket /var/lib/haproxy/stats

defaults
  log global
  option dontlognull
  option redispatch
  retries 3
  timeout connect 5000s
  timeout client 1200000s
  timeout server 1200000s

### HTTP frontend
frontend http_proxy
  mode http
  bind *:80
  option forwardfor except 127.0.0.0/8
  option httplog
  option http-server-close
  acl blog_url hdr_beg(host) myblog
  use_backend blog if blog_url
  acl work_url hdr_beg(host) mybusiness
  use_backend work if work_url

### HTTP backends
backend blog
  mode http
  server blog1 blog:80 check
backend work
  mode http
  server work1 work:80 check
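You can exercise the routing without touching DNS by forcing the Host header (a sketch; <vps-ip> is a placeholder for your VPS address). Note that the ACLs above use hdr_beg(host), so the header has to start with myblog/mybusiness; for www.-prefixed hosts you would need hdr_sub(host) or an adjusted pattern.
curl -I -H "Host: myblog.com" http://<vps-ip>/
curl -I -H "Host: mybusiness.com" http://<vps-ip>/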

Forward host port to docker container

Is it possible to have a Docker container access ports opened by the host? Concretely I have MongoDB and RabbitMQ running on the host and I'd like to run a process in a Docker container to listen to the queue and (optionally) write to the database.
I know I can forward a port from the container to the host (via the -p option) and have a connection to the outside world (i.e. internet) from within the Docker container but I'd like to not expose the RabbitMQ and MongoDB ports from the host to the outside world.
EDIT: some clarification:
Starting Nmap 5.21 ( http://nmap.org ) at 2013-07-22 22:39 CEST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00027s latency).
PORT STATE SERVICE
6311/tcp open unknown
joelkuiper@vps20528 ~ % docker run -i -t base /bin/bash
root@f043b4b235a7:/# apt-get install nmap
root@f043b4b235a7:/# nmap 172.16.42.1 -p 6311 # IP found via docker inspect -> gateway
Starting Nmap 6.00 ( http://nmap.org ) at 2013-07-22 20:43 UTC
Nmap scan report for 172.16.42.1
Host is up (0.000060s latency).
PORT STATE SERVICE
6311/tcp filtered unknown
MAC Address: E2:69:9C:11:42:65 (Unknown)
Nmap done: 1 IP address (1 host up) scanned in 13.31 seconds
I had to do this trick to get any internet connection within the container: My firewall is blocking network connections from the docker container to outside
EDIT: Eventually I went with creating a custom bridge using pipework and having the services listen on the bridge IP's. I went with this approach instead of having MongoDB and RabbitMQ listen on the docker bridge because it gives more flexibility.
A simple but relatively insecure way would be to use the --net=host option to docker run.
This option makes it so that the container uses the networking stack of the host. Then you can connect to services running on the host simply by using "localhost" as the hostname.
This is easier to configure because you won't have to configure the service to accept connections from the IP address of your docker container, and you won't have to tell the docker container a specific IP address or host name to connect to, just a port.
For example, you can test it out by running the following command, which assumes your image is called my_image, your image includes the telnet utility, and the service you want to connect to is on port 25:
docker run --rm -i -t --net=host my_image telnet localhost 25
If you consider doing it this way, please see the caution about security on this page:
https://docs.docker.com/articles/networking/
It says:
--net=host -- Tells Docker to skip placing the container inside of a separate network stack. In essence, this choice tells Docker to not containerize the container's networking! While container processes will still be confined to their own filesystem and process list and resource limits, a quick ip addr command will show you that, network-wise, they live “outside” in the main Docker host and have full access to its network interfaces. Note that this does not let the container reconfigure the host network stack — that would require --privileged=true — but it does let container processes open low-numbered ports like any other root process. It also allows the container to access local network services like D-bus. This can lead to processes in the container being able to do unexpected things like restart your computer. You should use this option with caution.
Your Docker host exposes an adapter to all the containers. Assuming you are on a recent Ubuntu, you can run
ip addr
This will give you a list of network adapters, one of which will look something like
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 22:23:6b:28:6b:e0 brd ff:ff:ff:ff:ff:ff
inet 172.17.42.1/16 scope global docker0
inet6 fe80::a402:65ff:fe86:bba6/64 scope link
valid_lft forever preferred_lft forever
You will need to tell rabbit/mongo to bind to that IP (172.17.42.1). After that, you should be able to open connections to 172.17.42.1 from within your containers.
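If you prefer not to parse ip addr output, reasonably recent Docker versions can report the same gateway address directly (a small sketch; the template below is for the default bridge network):
# Prints the host-side gateway of the default bridge, e.g. 172.17.42.1
docker network inspect bridge --format '{{(index .IPAM.Config 0).Gateway}}'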
As stated in one of the comments, this works for Mac (probably for Windows/Linux too):
I WANT TO CONNECT FROM A CONTAINER TO A SERVICE ON THE HOST
The host has a changing IP address (or none if you have no network access). We recommend that you connect to the special DNS name host.docker.internal which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker Desktop for Mac.
You can also reach the gateway using gateway.docker.internal.
Quoted from https://docs.docker.com/docker-for-mac/networking/
This worked for me without using --net=host.
You could also create an ssh tunnel.
docker-compose.yml:
---
version: '2'
services:
  kibana:
    image: "kibana:4.5.1"
    links:
      - elasticsearch
    volumes:
      - ./config/kibana:/opt/kibana/config:ro
  elasticsearch:
    build:
      context: .
      dockerfile: ./docker/Dockerfile.tunnel
    entrypoint: ssh
    command: "-N elasticsearch -L 0.0.0.0:9200:localhost:9200"
docker/Dockerfile.tunnel:
FROM buildpack-deps:jessie
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive \
apt-get -y install ssh && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
COPY ./config/ssh/id_rsa /root/.ssh/id_rsa
COPY ./config/ssh/config /root/.ssh/config
COPY ./config/ssh/known_hosts /root/.ssh/known_hosts
RUN chmod 600 /root/.ssh/id_rsa && \
chmod 600 /root/.ssh/config && \
chown $USER:$USER -R /root/.ssh
config/ssh/config:
# Elasticsearch Server
Host elasticsearch
  HostName jump.host.czerasz.com
  User czerasz
  ForwardAgent yes
  IdentityFile ~/.ssh/id_rsa
This way the elasticsearch container has a tunnel to the server running the actual service (Elasticsearch, MongoDB, PostgreSQL) and exposes port 9200 for that service.
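A quick way to confirm the tunnel works end to end (assuming curl is present in the kibana image, which is an assumption about that image):
# From the kibana container, port 9200 on the elasticsearch service is the near end of the SSH tunnel
docker-compose exec kibana curl -s http://elasticsearch:9200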
TLDR;
For local development only, do the following:
Start the service or SSH tunnel on your laptop/computer/PC/Mac.
Build/run your Docker image/container to connect to hostname host.docker.internal:<hostPort>
Note: There is also gateway.docker.internal, which I have not tried.
END_TLDR;
For example, if you were using this in your container:
PGPASSWORD=password psql -h localhost -p 5432 -d mydb -U myuser
change it to this:
PGPASSWORD=password psql -h host.docker.internal -p 5432 -d mydb -U myuser
This magically connects to the service running on my host machine. You do not need to use --net=host or -p "hostPort:ContainerPort" or -P
Background
For details see: https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds
I used this with an SSH tunnel to an AWS RDS Postgres Instance on Windows 10. I only had to change from using localhost:containerPort in the container to host.docker.internal:hostPort.
I had a similar problem accessing an LDAP server from a Docker container.
I set a fixed IP for the container and added a firewall rule.
docker-compose.yml:
version: '2'
services:
  containerName:
    image: dockerImageName:latest
    extra_hosts:
      - "dockerhost:192.168.50.1"
    networks:
      my_net:
        ipv4_address: 192.168.50.2
networks:
  my_net:
    ipam:
      config:
        - subnet: 192.168.50.0/24
iptables rule:
iptables -A INPUT -j ACCEPT -p tcp -s 192.168.50.2 -d 192.168.50.1 --dport portnumberOnHost
Inside the container, access dockerhost:portnumberOnHost.
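A quick way to verify the rule from inside the container (hedged; it assumes nc is available in the image and that the host service listens on port 389, both placeholders for your actual setup):
# 389 is a placeholder port; should report open if the extra_hosts entry and the iptables rule are in place
nc -vz dockerhost 389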
If MongoDB and RabbitMQ are running on the host, then their ports should already be reachable, as they are not inside Docker.
You do not need the -p option in order to expose ports from the container to the host. By default, all ports are exposed. The -p option allows you to expose a port from the container to the outside of the host.
So, my guess is that you do not need -p at all and it should be working fine :)
For newer versions of Docker, this worked for me. Create the tunnel like this (notice the 0.0.0.0 at the start):
-L 0.0.0.0:8080:localhost:8081
This will allow anyone with access to your computer to connect to port 8080 and thus access port 8081 on the connected server.
Then, inside the container just use "host.docker.internal", for example:
curl host.docker.internal:8081
Why not use a slightly different solution, like this?
services:
  kubefwd:
    image: txn2/kubefwd
    command: ...
  app:
    image: bash
    command:
      - sleep
      - inf
    init: true
    network_mode: service:kubefwd
REF: txn2/kubefwd: Bulk port forwarding Kubernetes services for local development.
An easier way on all platforms nowadays is to use host.docker.internal. Let's first start with the docker run command:
docker run --add-host=host.docker.internal:host-gateway [....]
Or add the following to your service, when using Docker Compose:
extra_hosts:
- "host.docker.internal:host-gateway"
A full example of such a Docker Compose file would then look like this:
version: "3"
services:
your_service:
image: username/docker_image_name
restart: always
networks:
- your_bridge_network
volumes:
- /home/user/test.json:/app/test.json
ports:
- "8080:80"
extra_hosts:
- "host.docker.internal:host-gateway"
networks:
your_bridge_network:
Again, it's just an example. But if this Docker image starts a service on port 80, it will be available on the host on port 8080.
More importantly for your use case: if the Docker container wants to use a service from your host system, that is now possible using the special host.docker.internal name. That name is automatically resolved to the internal Docker IP address (of the docker0 interface).
Anyway, let's say you are also running a web service on your host machine on port 80. You should now be able to reach that service from within your Docker container. Try it out: nc -vz host.docker.internal 80.
All WITHOUT using network_mode: "host".
