Is there any way to stop two Docker containers on the same machine from conflicting when they both use the same internal (container) port 80?
I have one nginx container that is reachable on port 9000 from the browser.
916aa1f58ca3 nginx:1.21.6-alpine "/docker-entrypoint.…" 3 minutes ago Up 3 minutes 0.0.0.0:9000->80/tcp
and a second Apache container that is mapped to external port 88.
4a3ba2c4847b apache-master_php "docker-php-entrypoi…" 3 hours ago Up 3 seconds 80/tcp, 0.0.0.0:88->88
They both run together, except when I send a request to the Apache container: accessing Apache's index.html on port 88 in the browser crashes the machine.
Is there a way around this?
docker-compose.yml looks like this:
apache:
ports:
- 88:88
volumes:
- ./src:/var/www/html/
for nginx:
nginx:
ports:
- "$SENTRY_BIND:80/tcp"
image: "nginx:1.21.6-alpine"
A port conflict can never occur between the containers themselves: Docker attaches both containers to a private network created specifically for these services, and each container gets its own private IP.
The Docker configuration should be as below, assuming Apache is running with its default configuration (listening on port 80):
apache:
ports:
- 88:80
volumes:
- ./src:/var/www/html/
Your configuration tells Docker to map host port 88 to container port 88, but the Apache container is listening on port 80.
I think you should read the docs:
https://docs.docker.com/network/
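For what it's worth, both services can live in one compose file; a minimal sketch (service names are illustrative, and both containers keep listening on port 80 internally) could look like:

```yaml
services:
  nginx:
    image: "nginx:1.21.6-alpine"
    ports:
      - "9000:80"   # host 9000 -> container 80
  apache:
    image: apache-master_php
    ports:
      - "88:80"     # host 88 -> container 80; no conflict with nginx
    volumes:
      - ./src:/var/www/html/
```

Only the host side of each mapping has to be unique; the container side can be 80 in both.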
I currently have two docker containers running:
ab1ae510f069 471b8de074c4 "docker-entrypoint.s…" 8 minutes ago Up 8 minutes 0.0.0.0:3001->3001/tcp hopeful_bassi
2d4797b77fbf 5985005576a6 "nginx -g 'daemon of…" 25 minutes ago Up 25 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp wizardly_cori
One is my client and the other (port 3001) is my server.
The issue I'm facing is that I've just added SSL to my site, but now I can't access the server. My theory is that the server needs both port 443 and port 3001 open, but I can't have port 443 open on both containers. Since I can run the server over plain HTTP locally, I think that also points to this conclusion.
Is there anything I can do to have both using https? The client won't talk to the server if the server uses http (for obvious reasons).
Edit:
I'm now not sure if it is to do with port 443, as I killed my client and tried to run just the server, but it still gave me connection refused:
docker run -dit --restart unless-stopped -p 3001:3001 -p 443:443 471b8de074c4
If you open port 443 for a Docker container, a Docker-managed forwarding process is started. This (in any case highly sub-optimal) tool forwards TCP requests arriving on your host port 443 to the container.
If you want two containers to use port 443, Docker would have to start this port forwarder twice on the same port. As your docker output shows, that can happen only once. Maybe by digging (deeply) into the (nearly non-existent) Docker logs you can also find the relevant error message.
The problem you've found is not Docker-specific; it is the same problem you would face in a container-less environment: you simply can't start multiple service processes listening on the same TCP port.
The solution also comes from the world before containers.
You need a central proxy service that listens on your port 443 and forwards requests, depending on the requested virtual host, to the corresponding container.
Dig around among the available Docker images; it is nearly certain that such an HTTPS reverse proxy already exists. This third container will forward the requests where you want; of course, you will need to configure it.
From that moment on, you don't even need HTTPS inside your containers (although you can keep it if you want), which helps a lot in production, correctly certified SSL environments: only your proxy needs the certificates. So:
/---> container A (tcp:80)
tcp:443 -- proxy
\---> container B (tcp:80)
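The port-exclusivity point above is easy to demonstrate without Docker at all. A small Python sketch (loopback only, OS-chosen port, names made up for illustration):

```python
import socket

# First listener: bind to an arbitrary free port on loopback.
a = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
a.bind(("127.0.0.1", 0))   # port 0 = let the OS pick a free port
a.listen()
port = a.getsockname()[1]

# Second listener on the SAME port: the OS refuses with EADDRINUSE,
# which is exactly the failure Docker's port forwarder hits.
b = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    b.bind(("127.0.0.1", port))
    conflict = False
except OSError:
    conflict = True
finally:
    b.close()
    a.close()

print(conflict)  # True
```

Docker's `-p 443:...` forwarder is just such a listening socket on the host, which is why only one container can claim host port 443 at a time.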
What I want to achieve:
docker swarm on localhost
dockerized reverse proxy which would forward subdomain.domain to container with app
container with app
What I have done:
changed /etc/hosts so that it now looks like:
127.0.0.1 localhost
127.0.0.1 subdomain.localhost
set up traefik to forward word.beluga to specific container
What is the problem:
I can't reach the container via the subdomain; it works if I use the port, though
curl gives different results for subdomain and port
The question:
what is the problem, and why?
how can I debug it and find out whether the problem is Docker- or network-related? (how can I check whether the request even reached my container?)
I'll add that I also tried this on docker-machine (VirtualBox), but it wasn't working, so I moved to localhost; as you can see, that didn't help much.
I'm losing hope, so any hint would be appreciated. Thank you in advance.
There’s no such thing as subdomains of localhost. By near-universal convention, localhost resolves to the IPv4 address 127.0.0.1 and the IPv6 address ::1.
You can still test virtual hosts with Docker, but you will have to use the port:
curl -H Host:sub.localhost http://localhost:8000
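The Host-header trick that curl command relies on can be sketched in plain Python: a toy server that routes purely on the Host header, the way a reverse proxy would. The handler and hostnames below are made up for illustration:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class VHostHandler(BaseHTTPRequestHandler):
    """Toy virtual-host routing: pick the response body from the Host header."""
    def do_GET(self):
        host = self.headers.get("Host", "")
        body = b"site-a" if host.startswith("a.localhost") else b"default"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):  # keep output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), VHostHandler)  # port 0: OS picks a free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Same idea as `curl -H Host:sub.localhost http://localhost:PORT`
req = urllib.request.Request(f"http://127.0.0.1:{port}/",
                             headers={"Host": "a.localhost"})
with_host = urllib.request.urlopen(req).read()
without_host = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
server.shutdown()

print(with_host, without_host)  # b'site-a' b'default'
```

The TCP connection goes to the same address and port both times; only the Host header differs, and that is all a name-based reverse proxy (Traefik, nginx, HAProxy in HTTP mode) needs to route on.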
Late to respond, but I was able to achieve this using Traefik 2.x's routers feature, like so:
labels:
- "traefik.http.routers.<unique name>.rule=Host(`subdomain.localhost`)"
in the docker-compose file:
version: '3.9'
services:
app:
image: myapp:latest
labels:
- "traefik.http.routers.myapp.rule=Host(`myapp.localhost`)"
reverse-proxy:
image: traefik:v2.4
command: --api.insecure=true --providers.docker
ports:
- "80:80"
# The Web UI (enabled by --api.insecure=true)
- "9000:8080"
volumes:
# So that Traefik can listen to the Docker events
- /var/run/docker.sock:/var/run/docker.sock
I think the reason why it works is that Traefik intercepts everything on localhost and only then applies its rules, so this is a Traefik-specific answer.
I'm trying to setup a spark cluster using this link - https://github.com/actionml/docker-spark
When I created my containers (2 workers and 1 master), I see all of them exposing the same ports.
I'm wondering how to access the Spark master's web UI?
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b54c5fd1442c actionml/spark "/entrypoint.sh wo..." 2 minutes ago Up 2 minutes 4040/tcp, 6066/tcp, 7001-7006/tcp, 7077/tcp, 8080-8081/tcp spark-worker1
2c987a057223 actionml/spark "/entrypoint.sh wo..." 3 minutes ago Up 3 minutes 4040/tcp, 6066/tcp, 7001-7006/tcp, 7077/tcp, 8080-8081/tcp spark-worker0
b1d34441507e actionml/spark "/entrypoint.sh ma..." 9 minutes ago Up 9 minutes 4040/tcp, 6066/tcp, 7001-7006/tcp, 7077/tcp, 8080-8081/tcp spark-master
As stated in the README file of the repository, when starting the master you can specify the web UI port:
docker run --rm -it actionml/docker-spark master --webui-port PORT
--webui-port PORT Port for web UI (default: 8080)
As you can see, the default is 8080.
However, you need to publish the port so that it is accessible:
docker run -p 8080:8080 --rm -it actionml/docker-spark master
You can now open the browser and see the UI at localhost:8080.
I have 2 redis containers running on same machine m1.
container1 has its redis port 6379 published on host port 6400:
docker run -d -p 6400:6379 myredisimage1
container2 has its redis port 6379 published on host port 7500:
docker run -d -p 7500:6379 myredisimage2
I am looking for a solution where another machine, m2, can communicate with machine m1 using different DNS names but the same port number:
redis.container1.com:6379
redis.container2.com:6379
and I would like those requests redirected to the proper containers inside machine m1.
Is it possible to achieve this?
This is possible, but hacky. First, ask yourself if you really need to do this, or if you can get away with just using different ports for the containers. Anyway, if you do absolutely need to do this, here's how:
Each Docker container gets its own IP address, accessible from the host machine. AFAIK these are assigned dynamically at run time, but you can look them up with docker inspect $CONTAINER_ID, for example:
docker inspect e804af2472ca
[
{
"Id": "e804af2472ca605dec0035f45d3bd05c1fbccee31e6c09381b0c16657378932f",
"Created": "2016-02-02T21:34:12.49059198Z",
...
"Gateway": "172.17.0.1",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "172.17.0.6",
"IPPrefixLen": 16,
"IPv6Gateway": "",
...
}
}
]
In this case, we know this container's IP address, as seen from the host, is 172.17.0.6. That IP address is fully usable from the host, so you can have something proxy redis.container1.com to it and redis.container2.com to the other container's IP. You'd need to refresh the proxied addresses every time the containers restart, so this would definitely not be ideal, but it should work.
Again, my recommendation overall is don't do this.
I'm not sure if I'm understanding you correctly, but how could you start two containers that both claim the same host port?
It seems to me that this should be handled by a load balancer. Try HAProxy and set up two ACLs, one for each domain name.
I would go with something like this (using docker-compose).
Docker Compose setup to deploy the Docker images:
redis-1:
container_name: redis-1
image: myredis
restart: always
expose:
- "6400"
redis-2:
container_name: redis-2
image: myredis
restart: always
expose:
- "6400"
haproxy:
container_name: haproxy
image: million12/haproxy
restart: always
command: -n 500
ports:
- "6379:6379"
links:
- redis-1:redis.server.one
- redis-2:redis.server.two
volumes:
- /path/to/my/haproxy.cfg:/etc/haproxy/haproxy.cfg
And then custom haproxy config:
global
chroot /var/lib/haproxy
user haproxy
group haproxy
# Default SSL material locations
ca-base /etc/ssl/certs
crt-base /etc/ssl/private
# Default ciphers to use on SSL-enabled listening sockets.
ssl-default-bind-ciphers AES256+EECDH:AES256+EDH:AES128+EDH:EECDH:!aNULL:!eNULL:!LOW:!DES:!3DES:!RC4
spread-checks 4
tune.maxrewrite 1024
tune.ssl.default-dh-param 2048
defaults
mode http
balance roundrobin
option dontlognull
option dontlog-normal
option redispatch
maxconn 5000
timeout connect 10s
timeout client 25s
timeout server 25s
timeout queue 30s
timeout http-request 10s
timeout http-keep-alive 30s
# Stats
stats enable
stats refresh 30s
stats hide-version
frontend http-in
bind *:6379
mode tcp
acl is_redis1 hdr_end(host) -i redis.server.one
acl is_redis2 hdr_end(host) -i redis.server.two
use_backend redis1 if is_redis1
use_backend redis2 if is_redis2
default_backend redis1
backend redis1
server r1 redis.server.one:6379
backend redis2
server r2 redis.server.two:6379
I am running two websites in two separate Docker containers on a VPS,
e.g. www.myblog.com and www.mybusiness.com.
How can I implement virtual hosts on the VPS so that both websites can use port 80?
I asked this question somewhere else, and was suggested to take a look at: https://github.com/hipache/hipache and https://www.tutum.co/
They look a bit involved. I am trying to find out if there is a more straightforward way to achieve this. Thanks!
In addition, I forgot to mention that my VPS is an Ubuntu 14.04 box.
Take a look at jwilder/nginx-proxy project.
Automated nginx proxy for Docker containers using docker-gen
It's the easiest way to proxy your Docker containers. You don't need to edit the proxy config file every time you restart a container or start a new one; it all happens automatically for you via docker-gen, which generates reverse proxy configs for nginx and reloads nginx when containers are started and stopped.
Usage
To run it:
$ docker run -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock \
jwilder/nginx-proxy
Then start any containers you want proxied with an env var VIRTUAL_HOST=subdomain.yourdomain.com
$ docker run -e VIRTUAL_HOST=foo.bar.com ...
Provided your DNS is set up to forward foo.bar.com to the host running nginx-proxy, the request will be routed to the container with that VIRTUAL_HOST env var set.
Multiple Ports
If your container exposes multiple ports, nginx-proxy will default to the service running on port 80. If you need to specify a different port, you can set a VIRTUAL_PORT env var to select a different one. If your container only exposes one port and it has a VIRTUAL_HOST env var set, that port will be selected.
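A compose sketch of the same nginx-proxy setup (the images are the real jwilder ones; the hostname is illustrative) might be:

```yaml
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro   # read-only is enough for docker-gen
  whoami:
    image: jwilder/whoami            # small demo container from the same author
    environment:
      - VIRTUAL_HOST=foo.bar.com
```

With this running, `curl -H "Host: foo.bar.com" http://localhost/` should reach the whoami container through the proxy.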
You need a reverse proxy. We use nginx and haproxy; they both work well and are easy to run from a Docker container. A nice way to run the entire setup is to use docker-compose (formerly fig) to create the two website containers with no externally visible ports, plus a haproxy container with links to both website containers. The entire combination then exposes exactly one port (80) to the network, and the haproxy container forwards traffic to one container or the other based on the hostname of the request.
---
proxy:
build: proxy
ports:
- "80:80"
links:
- blog
- work
blog:
build: blog
work:
build: work
Then a haproxy config such as:
global
log 127.0.0.1 local0
maxconn 2000
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
user haproxy
group haproxy
daemon
stats socket /var/lib/haproxy/stats
defaults
log global
option dontlognull
option redispatch
retries 3
timeout connect 5000ms
timeout client 1200000ms
timeout server 1200000ms
### HTTP frontend
frontend http_proxy
mode http
bind *:80
option forwardfor except 127.0.0.0/8
option httplog
option http-server-close
acl blog_url hdr_beg(host) myblog
use_backend blog if blog_url
acl work_url hdr_beg(host) mybusiness
use_backend work if work_url
### HTTP backends
backend blog
mode http
server blog1 blog:80 check
backend work
mode http
server work1 work:80 check
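If you prefer nginx for the proxy container, the equivalent is a pair of server blocks keyed on server_name; a sketch (upstream hostnames mirror the blog and work compose services above) might look like:

```nginx
# Sketch only: routes by the Host header to the linked compose services.
server {
    listen 80;
    server_name www.myblog.com;
    location / {
        proxy_pass http://blog:80;    # compose service name resolves via the link
        proxy_set_header Host $host;
    }
}

server {
    listen 80;
    server_name www.mybusiness.com;
    location / {
        proxy_pass http://work:80;
        proxy_set_header Host $host;
    }
}
```

Either way, the routing decision is the same: match the request's hostname, forward to the matching backend container.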