Trouble installing several apps using docker compose - Linux

For my personal knowledge I want to set up my server on docker (using docker compose).
And I have some trouble setting up several apps (the problem comes from the ports).
I have a completely clean Debian 8 server.
I created 2 directories, one for nextcloud and the other one for bitwarden.
I started nextcloud first and everything was fine, so after that I launched bitwarden and got an error because I'm using the same port. But since I want to use letsencrypt for both and an https web site, how am I supposed to configure the ports and the reverse proxy?
This one is for nextcloud:
version: '3'

services:
  proxy:
    image: jwilder/nginx-proxy:alpine
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"
    container_name: nextcloud-proxy
    networks:
      - nextcloud_network
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./proxy/conf.d:/etc/nginx/conf.d:rw
      - ./proxy/vhost.d:/etc/nginx/vhost.d:rw
      - ./proxy/html:/usr/share/nginx/html:rw
      - ./proxy/certs:/etc/nginx/certs:ro
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
    restart: unless-stopped

  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nextcloud-letsencrypt
    depends_on:
      - proxy
    networks:
      - nextcloud_network
    volumes:
      - ./proxy/certs:/etc/nginx/certs:rw
      - ./proxy/vhost.d:/etc/nginx/vhost.d:rw
      - ./proxy/html:/usr/share/nginx/html:rw
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart: unless-stopped

  db:
    image: mariadb
    container_name: nextcloud-mariadb
    networks:
      - nextcloud_network
    volumes:
      - db:/var/lib/mysql
      - /etc/localtime:/etc/localtime:ro
    environment:
      - MYSQL_ROOT_PASSWORD=toor
      - MYSQL_PASSWORD=mysql
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
    restart: unless-stopped

  app:
    image: nextcloud:latest
    container_name: nextcloud-app
    networks:
      - nextcloud_network
    depends_on:
      - letsencrypt
      - proxy
      - db
    volumes:
      - nextcloud:/var/www/html
      - ./app/config:/var/www/html/config
      - ./app/custom_apps:/var/www/html/custom_apps
      - ./app/data:/var/www/html/data
      - ./app/themes:/var/www/html/themes
      - /etc/localtime:/etc/localtime:ro
    environment:
      - VIRTUAL_HOST=nextcloud.YOUR-DOMAIN
      - LETSENCRYPT_HOST=nextcloud.YOUR-DOMAIN
      - LETSENCRYPT_EMAIL=YOUR-EMAIL
    restart: unless-stopped

volumes:
  nextcloud:
  db:

networks:
  nextcloud_network:
This one is for bitwarden:
version: "3"

services:
  bitwarden:
    image: bitwardenrs/server
    restart: always
    volumes:
      - ./bw-data:/data
    environment:
      WEBSOCKET_ENABLED: "true"
      SIGNUPS_ALLOWED: "true"

  caddy:
    image: abiosoft/caddy
    restart: always
    volumes:
      - ./Caddyfile:/etc/Caddyfile:ro
      - caddycerts:/root/.caddy
    ports:
      - 80:80 # needed for Let's Encrypt
      - 443:443
    environment:
      ACME_AGREE: "true"
      DOMAIN: "bitwarden.example.org"
      EMAIL: "bitwarden@example.org"

volumes:
  caddycerts:
The error is:
ERROR: for root_caddy_1 Cannot start service caddy: driver failed programming external connectivity on endpoint root_caddy_1 xxxxxxxxxxxxxxxxxx: Bind for 0.0.0.0:80 failed: port is already allocated

Based on your comment I will detail the solution with multiple subdomains here.
First of all, the easiest solution for now is to put all the services in the same docker-compose file. Otherwise you would have to create a network and declare it as an external network in each docker-compose.yml.
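For reference, the external-network variant would be a sketch like the following (the network name shared_proxy is my own placeholder): first create the network once with docker network create shared_proxy, then reference it in each docker-compose.yml:

```yaml
services:
  proxy:
    image: jwilder/nginx-proxy:alpine
    networks:
      - shared_proxy    # joins the pre-created network

networks:
  shared_proxy:
    external: true      # do not create it; it already exists on the host
```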
Next, remove the ports declarations from the proxy and caddy containers (to free up ports 80 and 443 on the host).
Then create a new service and add it to the same docker-compose.yml:
nginx:
  image: nginx
  volumes:
    - ./subdomains_conf:/etc/nginx/conf.d
  ports:
    - "80:80"
Next create a folder subdomains_conf and inside it a file default.conf with contents similar to:
server {
    listen 80;
    listen [::]:80;
    server_name first.domain.com;

    location / {
        proxy_pass http://proxy:80;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
    }
}

server {
    listen 80;
    listen [::]:80;
    server_name second.domain.com;

    location / {
        proxy_pass http://caddy:80;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
    }
}
You need to replace the values of server_name with your actual domain names. The configuration for SSL is similar.
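As a rough sketch of the SSL variant (assuming certificates for first.domain.com are already mounted somewhere in the container; the /etc/nginx/certs paths here are placeholders), one server block might look like:

```nginx
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name first.domain.com;

    # placeholder paths; mount your real certificates here
    ssl_certificate     /etc/nginx/certs/first.domain.com.crt;
    ssl_certificate_key /etc/nginx/certs/first.domain.com.key;

    location / {
        proxy_pass http://proxy:80;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
    }
}
```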
You can test this setup locally by pointing the two domains to 127.0.0.1 in /etc/hosts. Remember that all the services must be defined in the same docker-compose.yml, or you need to create a network and specify it in each docker-compose.yml; otherwise the containers will not see each other.
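For a local test, the /etc/hosts entries would look like this (using the same example domain names as above):

```
127.0.0.1 first.domain.com
127.0.0.1 second.domain.com
```

After that, opening http://first.domain.com in a browser on the same machine should reach the nginx container on port 80.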

I found an easy way to manage this problem using a reverse proxy with Traefik:
https://docs.traefik.io/
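As a minimal sketch of that approach (not from the original answer; the labels follow the Traefik v2 docker provider syntax, and the router names and domains are made up), routing two domains to two containers could look like:

```yaml
version: "3"
services:
  traefik:
    image: traefik:v2.4
    command:
      - --providers.docker=true           # read routing rules from container labels
      - --entrypoints.web.address=:80
    ports:
      - "80:80"                           # only Traefik binds a host port
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  nextcloud:
    image: nextcloud:latest
    labels:
      - "traefik.http.routers.nextcloud.rule=Host(`nextcloud.example.org`)"

  bitwarden:
    image: bitwardenrs/server
    labels:
      - "traefik.http.routers.bitwarden.rule=Host(`bitwarden.example.org`)"
```

Only Traefik publishes ports 80/443 on the host; the backends stay on the internal network, which avoids the "port is already allocated" error entirely.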

Related

Understanding docker compose port wiring for django, react app and haproxy

I came across a docker-compose.yml which has the following port configuration:
wsgi:
  ports:
    - 9090   # ?? Is it mapped to host port 80 by default ??

nodejs:
  image: nodejs:myapp
  ports:
    - 9999:9999
  environment:
    BACKEND_API_URL: http://aa.bb.cc.dd:9854/api/

haproxy:
  ports:
    - 9854:80
I am trying to understand how the port wiring happens here.
The nodejs UI app's settings need to specify the backend port, which is 9854 here. This port is published by the haproxy service and is mapped to port 80. I know that wsgi is a Django backend app. From its entrypoint.sh (in the PS below) and the port specification in the docker-compose.yml above, I gather that Django listens on port 9090. But I cannot see how this port 9090 maps to port 80 (which is then published by haproxy at 9854, which in turn is specified in BACKEND_API_URL by the nodejs settings).
PS:
The Django wsgi app has the following in \wsgi\entrypoint.sh:
nohup gunicorn myapp.wsgi --bind "0.0.0.0:9090"
And the nodejs React app has the following in its server.js file:
const port = process.env.PORT || 9999;
My whole docker-compose file:
version: "3.8"

services:
  postgres:
    image: postgres:11
    volumes:
      - my_app_postgres_volume:/var/lib/postgresql/data
      - type: tmpfs
        target: /dev/shm
        tmpfs:
          size: 536870912 # 512MB
    environment:
      POSTGRES_DB: my_app_db
      POSTGRES_USER: my_app
      POSTGRES_PASSWORD: my_app123
    networks:
      - my_app_network

  redis:
    image: redis:6.2.4
    volumes:
      - my_app_redis_volume:/data
    networks:
      - my_app_network

  wsgi:
    image: wsgi:my_app3_stats
    volumes:
      - /my_app/frontend/static/
      - ./wsgi/my_app:/my_app
      - /my_app/frontend/clientApp/node_modules
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    depends_on:
      - postgres
      - redis
    ports:
      - 9090
    environment:
      C_FORCE_ROOT: 'true'
      SERVICE_PORTS: 9090
    networks:
      - my_app_network
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
        max_attempts: 3
        window: 120s

  nodejs:
    image: nodejs:my_app3_stats
    volumes:
      - ./nodejs/frontend:/frontend
      - /frontend/node_modules
    depends_on:
      - wsgi
    ports:
      - 9999:9999
    environment:
      BACKEND_API_URL: http://aa.bb.cc.dd:9854/api/
    networks:
      - my_app_network

  nginx:
    image: isiq/nginx-brotli:1.21.0
    volumes:
      - ./nginx:/etc/nginx/conf.d:ro
      - ./wsgi/my_app:/my_app:ro
      - my_app_nginx_volume:/var/log/nginx/
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    networks:
      - my_app_network

  haproxy:
    image: haproxy:2.3.9
    volumes:
      - ./haproxy:/usr/local/etc/haproxy/:ro
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    depends_on:
      - wsgi
      - nodejs
      - nginx
    ports:
      - 9854:80
    networks:
      - my_app_network
    deploy:
      placement:
        constraints: [node.role == manager]

volumes:
  my_app_postgres_volume:
  my_app_redis_volume:
  my_app_nginx_volume:

networks:
  my_app_network:
    driver: overlay
On your host, there are three ports visible:
http://aa.bb.cc.dd:9854 forwards to port 80 on the haproxy container.
http://aa.bb.cc.dd:9999 forwards to port 9999 on the nodejs container.
The port shown by docker-compose port wsgi 9090 forwards to port 9090 on the wsgi container.
You don't discuss the HAProxy configuration at all, but it is presumably configured to listen on port 80, and that may be the missing bit of configuration you're looking for.
Between these three containers (so not visible to your front-end application), assuming you don't have any networks: blocks in the Compose file, there are three obvious URLs: http://haproxy:80 (or just http://haproxy), http://nodejs:9999, and http://wsgi:9090 connect to their respective containers. Note that these use the "normal" ports for their service, and not the remapped port for haproxy or the randomly-chosen port for wsgi.
I'm guessing the HAProxy container is configured to do some sort of path-based routing to one or the other of the other containers. If you have this setup, you might be able to configure your React application to not include a host name in the URL at all (BACKEND_API_URL: /api/), which will make it easier to deploy. You do not need the ports: for connections between containers, and if you don't want a caller to be able to reach the back-end services without going via the proxy, you can delete their ports: blocks.
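To make that guess concrete, a path-based routing setup inside the (unshown) haproxy.cfg might look roughly like this sketch; the backend names and the /api prefix are assumptions, since the question never shows the HAProxy configuration:

```
# hypothetical haproxy.cfg fragment
frontend http-in
    bind *:80
    # requests under /api go to the Django backend, everything else to nodejs
    acl is_api path_beg /api
    use_backend wsgi_back if is_api
    default_backend nodejs_back

backend wsgi_back
    server wsgi1 wsgi:9090

backend nodejs_back
    server node1 nodejs:9999
```

This is how port 9090 would "map" to port 80: HAProxy listens on 80 inside its container and forwards to wsgi:9090 over the Compose network.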

Docker containers can't communicate (Docker compose / React / NodeJs / Nginx)

My client isn't able to send requests to backend because it can't resolve the host. I'm trying to pass down the container connection info via an environment variable and use it in the client to connect. However, it is unable to do the requests at all. Any help? Nginx works fine for the frontend part but doesn't work for proxying the backend.
docker-compose.yml
version: '3.2'
services:
  server:
    restart: always
    build:
      dockerfile: Dockerfile
      context: ./nginx
    depends_on:
      - backend
      - frontend
      - database
    ports:
      - '5000:80'
  database:
    image: postgres:latest
    container_name: database
    ports:
      - "5432:5432"
    restart: always
    hostname: database
    expose:
      - 5432
  backend:
    build:
      context: ./backend
      dockerfile: ./Dockerfile
    image: kalendae_backend:latest
    hostname: backend
    container_name: backend
    ports:
      - "5051:5051"
    environment:
      - WAIT_HOSTS=database:5432
      - DATABASE_HOST=database
      - DATABASE_PORT=5432
      - PORT=5051
    links:
      - database
    expose:
      - 5051
  frontend:
    build:
      context: ./frontend
      dockerfile: ./Dockerfile
    image: kalendae_frontend:latest
    ports:
      - "5050:5050"
    hostname: frontend
    container_name: frontend
    environment:
      - WAIT_HOSTS=backend:5051
      - REACT_APP_BACKEND_HOST=backend
      - REACT_APP_BACKEND_PORT=5051
    links:
      - backend
    expose:
      - 5050
Nginx config
upstream frontend {
    server frontend:5050;
}
upstream backend {
    server backend:5051;
}
upstream server {
    server server:5000;
}

server {
    listen 80;

    location / {
        proxy_pass http://frontend;
    }

    location backend {
        proxy_pass http://backend;
    }

    location /backend {
        proxy_pass http://backend;
    }
}

Issues docker with pm2 and nginx

I'm having issues running a pm2 app in a container. I tried accessing it through the docker port and with an nginx proxy, but neither of these solutions is working. Here's my docker config:
version: '3.5'
services:
  webapp:
    build:
      context: .
    image: ${DOCKER_IMAGE}
    container_name: mypm2app
    stdin_open: true
    networks:
      - "default"
    restart: always
    ports:
      - "8282:8282"
    extra_hosts:
      - host.local:${LOCAL_IP}
  db:
    image: mongo:4.2.6
    container_name: mongodb
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_INITDB_ROOT_USERNAME}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_INITDB_ROOT_PASSWORD}
      MONGO_INITDB_DATABASE: ${MONGO_INITDB_DATABASE}
    volumes:
      - ${MONGO_SCRIPT_PATH}:${MONGO_SCRIPT_DESTINATION_PATH}
    networks:
      - "default"
networks:
  default:
    external:
      name: ${NETWORK_NAME}
I also have this Dockerfile:
FROM image
WORKDIR /var/www/html/path
COPY package.json /var/www/html/path
RUN npm install
COPY . /var/www/html/path
EXPOSE 8282/tcp
CMD pm2-runtime start ecosystem.config.js --env development
pm2 is starting the service, but I cannot access it through localhost:port.
I tried to add an nginx proxy:
nginx:
  depends_on:
    - webapp
  restart: always
  build:
    dockerfile: Dockerfile.dev
    context: ./nginx
  ports:
    - "3002:80"
  networks:
    default:
      ipv4_address: ${nginx_ip}
with this Dockerfile:
FROM nginx
COPY ./default.conf /etc/nginx/conf.d/default.conf
This is the nginx configuration, default.conf:
upstream mypm2app {
server mypm2app:8282;
}
server {
listen 80;
server_name mypm2app.local;
location / {
proxy_pass http://mypm2app/;
}
}
I would appreciate any suggestion or answer to this issue.

Docker-Compose Nginx (with static React) and Nginx

I am currently stuck on making nginx proxy to the node load balancer. It gives the following error when making a request to 185.146.87.32:5000/:
2020/06/01 13:23:09 [warn] 6#6: *1 upstream server temporarily disabled while connecting to upstream, client: 86.125.198.83, server: domain.ro, request: "GET / HTTP/1.1", upstream: "http://185.146.87.32:5002/", host: "185.146.87.32:5000"
I managed to make this work on a local system, but now I am trying to make it work on a remote server.
BACKEND_SERVER_PORT_1=5001
BACKEND_SERVER_PORT_2=5002
BACKEND_NODE_PORT=5000
BACKEND_NGINX_PORT=80
CLIENT_SERVER_PORT=3000
ADMIN_SERVER_PORT=3006
NGINX_SERVER_PORT=80
API_HOST="http://domain.ro"
This is the docker-compose:
version: '3'
services:
  #####################################
  # Setup for NGINX container
  #####################################
  nginx:
    container_name: domain_back_nginx
    build:
      context: ./nginx
      dockerfile: Dockerfile
    image: domain/domain_back_nginx
    ports:
      - ${BACKEND_NODE_PORT}:${BACKEND_NGINX_PORT}
    volumes:
      - ./:/usr/src/domain
    restart: always
  #####################################
  # Setup for backend container
  #####################################
  backend_1:
    container_name: domain_back_server_1
    build:
      context: ./
      dockerfile: Dockerfile
    image: domain/domain_back_server_1
    ports:
      - ${BACKEND_SERVER_PORT_1}:${BACKEND_NODE_PORT}
    volumes:
      - ./:/usr/src/domain
    restart: always
    command: npm start
  #####################################
  # Setup for backend container
  #####################################
  backend_2:
    container_name: domain_back_server_2
    build:
      context: ./
      dockerfile: Dockerfile
    image: domain/domain_back_server_2
    ports:
      - ${BACKEND_SERVER_PORT_2}:${BACKEND_NODE_PORT}
    volumes:
      - ./:/usr/src/domain
    restart: always
    command: npm start
The Dockerfile for node is:
FROM node:12.17.0-alpine3.9
RUN mkdir -p /usr/src/domain
ENV NODE_ENV=production
WORKDIR /usr/src/domain
COPY package*.json ./
RUN npm install --silent
COPY . .
EXPOSE 5000
The config file for nginx is:
upstream domain {
    least_conn;
    server backend_1 weight=1;
    server backend_2 weight=1;
}

server {
    listen 80;
    listen [::]:80;

    root /var/www/domain_app;
    server_name domain.ro www.domain.ro;

    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://domain;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
The Dockerfile for nginx is:
FROM nginx:1.17-alpine as build
#!/bin/sh
RUN rm /etc/nginx/conf.d/default.conf
COPY default.conf /etc/nginx/conf.d
CMD ["nginx", "-g", "daemon off;"]
Don't expose your backends to the world. Create a docker network for your services and publish only nginx; that's the best practice.
But in your case, the immediate problem is that you didn't specify the backend ports in nginx.conf:
upstream domain {
    least_conn;
    server backend_1:5000 weight=1;
    server backend_2:5000 weight=1;
}
You should do the following:
version: '3'
services:
  #####################################
  # Setup for NGINX container
  #####################################
  nginx:
    container_name: domain_back_nginx
    build:
      context: ./nginx
      dockerfile: Dockerfile
    image: domain/domain_back_nginx
    networks:
      - proxy
    ports:
      - 5000:80
    volumes:
      - ./:/usr/src/domain
    restart: always
  #####################################
  # Setup for backend container
  #####################################
  backend_1:
    container_name: domain_back_server_1
    build:
      context: ./
      dockerfile: Dockerfile
    image: domain/domain_back_server_1
    networks:
      - proxy
    ## always expose, just in case you missed it in the Dockerfile; this makes the port(s)
    ## reachable only on the defined networks
    expose:
      - 5000
    volumes:
      - ./:/usr/src/domain
    restart: always
    command: npm start
  #####################################
  # Setup for backend container
  #####################################
  backend_2:
    container_name: domain_back_server_2
    build:
      context: ./
      dockerfile: Dockerfile
    image: domain/domain_back_server_2
    networks:
      - proxy
    ## always expose, just in case you missed it in the Dockerfile; this makes the port(s)
    ## reachable only on the defined networks
    expose:
      - 5000
    volumes:
      - ./:/usr/src/domain
    restart: always
    command: npm start
networks:
  proxy:
    external:
      name: proxy
But after all, I recommend jwilder/nginx-proxy.
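For reference, a minimal jwilder/nginx-proxy setup (a sketch; the backend image and domain below are placeholders) only needs the proxy plus a VIRTUAL_HOST variable on each backend:

```yaml
version: '3'
services:
  proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      # the proxy watches the docker socket and reconfigures itself
      - /var/run/docker.sock:/tmp/docker.sock:ro

  backend:
    image: domain/domain_back_server_1   # placeholder image
    environment:
      - VIRTUAL_HOST=api.domain.ro       # jwilder routes requests by this variable
      - VIRTUAL_PORT=5000                # the port the app listens on inside the container
```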

Docker from a container calls to another container (Connection Refused)

I have containers for two NodeJS services and one Nginx for reverse proxying.
I made NGINX listen on port 80 so it's publicly available via localhost in my browser.
I also use the reverse proxy to proxy_pass to each responsible service:
location /api/v1/service1/ {
    proxy_pass http://service1:3000/;
}
location /api/v1/service2/ {
    proxy_pass http://service2:3000/;
}
In my service 1, there is an axios module that wants to call service 2 by making a request to localhost/api/v1/service2.
But it says the connection is refused. I suspect that localhost inside service 1 refers to its own container, not to the docker host.
version: '3'
services:
  service1:
    build: './service1'
    networks:
      - backend
  service2:
    build: './service2'
    networks:
      - backend
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    networks:
      - backend
networks:
  backend:
    driver: bridge
Even after using the network, it still says ECONNREFUSED.
Please help.
Try adding a depends_on entry for the nginx service in your docker-compose file, like below:
version: '3'
services:
  service1:
    build: './service1'
    expose:
      - "3000"
    networks:
      - backend
  service2:
    build: './service2'
    expose:
      - "3000"
    networks:
      - backend
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    networks:
      - backend
    depends_on:
      - service1
      - service2
networks:
  backend:
    driver: bridge
This makes sure that both services are started before the nginx container attempts to connect to them. Perhaps the connection is refused because the nginx container keeps crashing: it cannot find the two services running when it reads its conf file and tries to connect to the backends.
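Note that depends_on only controls start order, not readiness. A stricter variant is a healthcheck combined with the long depends_on syntax (a sketch; condition: service_healthy requires a Compose version that supports it, and the healthcheck assumes curl is available in the image and the app answers HTTP on port 3000):

```yaml
services:
  service1:
    build: './service1'
    healthcheck:
      # assumption: curl exists in the image and the app serves HTTP on 3000
      test: ["CMD", "curl", "-f", "http://localhost:3000/"]
      interval: 10s
      retries: 5
  nginx:
    image: nginx:alpine
    depends_on:
      service1:
        condition: service_healthy   # wait until the healthcheck passes
```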
