Docker-Compose Nginx (with static React) and Nginx - node.js

I am currently stuck on making nginx proxy to the node load balancer. When I make a request to 185.146.87.32:5000/ it logs the following warning, meaning nginx failed to connect to the upstream server and temporarily removed it from the pool:
2020/06/01 13:23:09 [warn] 6#6: *1 upstream server temporarily disabled while connecting to upstream, client: 86.125.198.83, server: domain.ro, request: "GET / HTTP/1.1", upstream: "http://185.146.87.32:5002/", host: "185.146.87.32:5000"
I managed to make this work on a local system, but now I am trying to make it work on a remote server. These are the environment variables:
BACKEND_SERVER_PORT_1=5001
BACKEND_SERVER_PORT_2=5002
BACKEND_NODE_PORT=5000
BACKEND_NGINX_PORT=80
CLIENT_SERVER_PORT=3000
ADMIN_SERVER_PORT=3006
NGINX_SERVER_PORT=80
API_HOST="http://domain.ro"
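One quick way to confirm these variables are actually picked up on the remote server (docker-compose reads them from a .env file next to docker-compose.yml) is to render the resolved file; a minimal check, assuming the setup above:

docker-compose config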
This is the docker-compose:
version: '3'
services:
  #####################################
  # Setup for NGINX container
  #####################################
  nginx:
    container_name: domain_back_nginx
    build:
      context: ./nginx
      dockerfile: Dockerfile
    image: domain/domain_back_nginx
    ports:
      - ${BACKEND_NODE_PORT}:${BACKEND_NGINX_PORT}
    volumes:
      - ./:/usr/src/domain
    restart: always
  #####################################
  # Setup for backend container
  #####################################
  backend_1:
    container_name: domain_back_server_1
    build:
      context: ./
      dockerfile: Dockerfile
    image: domain/domain_back_server_1
    ports:
      - ${BACKEND_SERVER_PORT_1}:${BACKEND_NODE_PORT}
    volumes:
      - ./:/usr/src/domain
    restart: always
    command: npm start
  #####################################
  # Setup for backend container
  #####################################
  backend_2:
    container_name: domain_back_server_2
    build:
      context: ./
      dockerfile: Dockerfile
    image: domain/domain_back_server_2
    ports:
      - ${BACKEND_SERVER_PORT_2}:${BACKEND_NODE_PORT}
    volumes:
      - ./:/usr/src/domain
    restart: always
    command: npm start
The Dockerfile for node is:
FROM node:12.17.0-alpine3.9
RUN mkdir -p /usr/src/domain
ENV NODE_ENV=production
WORKDIR /usr/src/domain
COPY package*.json ./
RUN npm install --silent
COPY . .
EXPOSE 5000
The config file for nginx is:
upstream domain {
    least_conn;
    server backend_1 weight=1;
    server backend_2 weight=1;
}
server {
    listen 80;
    listen [::]:80;
    root /var/www/domain_app;
    server_name domain.ro www.domain.ro;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_pass http://domain;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
The Dockerfile for nginx is:
FROM nginx:1.17-alpine as build
RUN rm /etc/nginx/conf.d/default.conf
COPY default.conf /etc/nginx/conf.d
CMD ["nginx", "-g", "daemon off;"]

Don't expose your backend to the world. Create a Docker network for your services and expose only nginx; that's the best practice. But in your case the immediate problem is that you didn't specify the backend ports in nginx.conf; a server directive inside an upstream block defaults to port 80 when no port is given, so nginx never reached your Node processes listening on 5000:
upstream domain {
    least_conn;
    server backend_1:5000 weight=1;
    server backend_2:5000 weight=1;
}
You should do the following:
version: '3'
services:
  #####################################
  # Setup for NGINX container
  #####################################
  nginx:
    container_name: domain_back_nginx
    build:
      context: ./nginx
      dockerfile: Dockerfile
    image: domain/domain_back_nginx
    networks:
      - proxy
    ports:
      - 5000:80
    volumes:
      - ./:/usr/src/domain
    restart: always
  #####################################
  # Setup for backend container
  #####################################
  backend_1:
    container_name: domain_back_server_1
    build:
      context: ./
      dockerfile: Dockerfile
    image: domain/domain_back_server_1
    networks:
      - proxy
    ## always add expose, in case you missed it in the Dockerfile;
    ## this exposes the port(s) only on the defined networks
    expose:
      - 5000
    volumes:
      - ./:/usr/src/domain
    restart: always
    command: npm start
  #####################################
  # Setup for backend container
  #####################################
  backend_2:
    container_name: domain_back_server_2
    build:
      context: ./
      dockerfile: Dockerfile
    image: domain/domain_back_server_2
    networks:
      - proxy
    ## always add expose, in case you missed it in the Dockerfile;
    ## this exposes the port(s) only on the defined networks
    expose:
      - 5000
    volumes:
      - ./:/usr/src/domain
    restart: always
    command: npm start
networks:
  proxy:
    external:
      name: proxy
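Because the proxy network is declared as external, it has to exist before the stack comes up. A minimal sketch of the workflow, assuming the compose file above:

docker network create proxy
docker-compose up -d --build
# optional sanity check: validate the generated nginx config inside the container
docker-compose exec nginx nginx -t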
But after all that, I recommend jwilder/nginx-proxy.
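For reference, here is a minimal nginx-proxy sketch (the image and the VIRTUAL_HOST mechanism are from the nginx-proxy docs; the service names and domain are placeholders). The proxy watches the Docker socket and generates its nginx config from each container's VIRTUAL_HOST variable:

version: '3'
services:
  proxy:
    image: jwilder/nginx-proxy:alpine
    ports:
      - 80:80
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
  backend:
    image: domain/domain_back_server_1
    environment:
      - VIRTUAL_HOST=domain.ro
    expose:
      - 5000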

Related

docker webserver can't find file to mount

I just need to deploy and add SSL, so I used this reference; if you have a reference to another solution, please share it.
I have a Node.js website that I'm trying to deploy with Docker, adding an SSL certificate,
but when I do docker-compose up I get this error:
can't mount "website-opt/views/:/var/lib/docker/volumes/websiteopt_web-root/_data" ... file doesn't exist
This is my Dockerfile:
FROM node:14
ENV NODE_ENV=production
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
WORKDIR /home/node/app
COPY package*.json ./
USER node
RUN npm install --production
COPY --chown=node:node . .
EXPOSE 3000
CMD ["npm", "start"]
docker-compose.yml:
version: '3'
services:
  app:
    container_name: app
    restart: always
    build: .
    networks:
      - app-network
  webserver:
    image: nginx
    container_name: webserver
    restart: always
    ports:
      - '80:80'
    volumes:
      - web-root:/var/www/html
      - ./nginx-conf:/etc/nginx/conf.d
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
    networks:
      - app-network
  certbot:
    image: certbot/certbot
    container_name: certbot
    volumes:
      - certbot-etc:/etc/letsencrypt
      - certbot-var:/var/lib/letsencrypt
      - web-root:/var/www/html
    depends_on:
      - webserver
    command: certonly --webroot --webroot-path=/var/www/html --email domain@mail.com --agree-tos --no-eff-email --staging -d domain.com -d www.domain.com
volumes:
  certbot-etc:
  certbot-var:
  web-root:
    driver: local
    driver_opts:
      type: none
      device: website-opt/views/
      o: bind
networks:
  app-network:
    driver: bridge
The nginx config:
server {
    listen 80;
    listen [::]:80;
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    server_name domain.com www.domain.com;
    location / {
        proxy_pass http://domain:3000;
    }
    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/html;
    }
}
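A note on the mount error above: with the local volume driver and o: bind, device must be an absolute path that already exists on the host, so the relative website-opt/views/ fails. A sketch of the corrected volume definition, where the absolute path is an assumption to adapt to where the project actually lives:

volumes:
  web-root:
    driver: local
    driver_opts:
      type: none
      device: /home/user/website-opt/views   # must be absolute and already exist
      o: bind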

How to configure Nginx with Docker to run Node and Mongo in the correct way

I'm building a project with Node and Mongo, using Docker plus Nginx, to run it all on my local machine (a MacBook Pro on Catalina).
I have several issues I'm unable to fix on my own, and I don't know how to configure Nginx and Docker to make it all work correctly. My Dockerfile, docker-compose file, and Nginx config are shown at the end of the post.
The Node container is not able to connect to the Mongo DB; when I run my containers I get failed to connect to server [127.0.0.1:27017] on first connect, yet via a GUI such as Compass I'm able to connect to the same URL.
EDIT: I tried using the name of the container and its IP, but I get the error failed to connect to server [wetaxitask_mongodb:27017] on first connect [Error: connect ECONNREFUSED 172.18.0.3:27017]
The next issue is that I'm getting a Python error even though I'm installing it in my container (the error was only shown in a screenshot, so I can't reproduce it here).
When everything is running, whatever endpoint I try to hit, the response is only the Nginx welcome page, so I probably got the conf itself wrong.
Dockerfile:
# Developpment stage
FROM node:12.18-alpine AS base-builder
RUN apk update
RUN apk add --no-cache python make g++
RUN apk add --no-cache libc6-compat
WORKDIR /usr/src/app
COPY ["tsconfig.json", "package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
ADD . /usr/src/app
COPY ./.env /usr/src/app/.env
RUN npm cache clean --force
RUN npm install
# ------------------------------------------------------
# Production Build
# ------------------------------------------------------
FROM nginx:1.19.0-alpine
RUN apk add --no-cache --repository http://nl.alpinelinux.org/alpine/edge/main libuv \
&& apk add --update nodejs npm
WORKDIR /usr/src/app
RUN ls -l /usr/src/app/
COPY --from=0 /usr/src/app/ .
RUN rm /etc/nginx/conf.d/default.conf
RUN rm -rf /docker-entrypoint.d
COPY nginx/nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 3000
EXPOSE 80
#CMD ["nginx", "-g", "daemon off;","npm", "start"]
CMD nginx ; exec npm start
Docker compose:
version: '3.8'
services:
  wetaxitask:
    container_name: wetaxitask_api_dev
    image: wetaxitask
    restart: always
    build: .
    depends_on:
      - mongodb
      - redis
    env_file: .env
    ports:
      - 8080:80
    links:
      - redis
      - mongodb
  mongodb:
    container_name: wetaxitask_mongodb
    image: mongo:latest
    restart: always
    ports:
      - 27017:27017
    volumes:
      - ./data:/data/db
    environment:
      - MONGO_INITDB_DATABASE=wetaxitask
  redis:
    container_name: wetaxitask_redis
    image: redis:latest
    ports:
      - 6379:6379
Nginx:
server {
    listen 80;
    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
        try_files $uri $uri/ /index.html;
    }
    location /welcome {
        proxy_pass http://127.0.0.1:3000;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
Try changing the hostname from localhost (127.0.0.1) to mongodb (the name of your service defined in docker-compose). Docker's embedded DNS will do the rest.
Edit: add the services to a common network in your docker-compose file, and the containers will be able to reach each other using the service name as hostname:
version: '3.8'
services:
  wetaxitask:
    container_name: wetaxitask_api_dev
    image: wetaxitask
    restart: always
    build: .
    depends_on:
      - mongodb
      - redis
    env_file: .env
    ports:
      - 8080:80
    links:
      - redis
      - mongodb
    networks:
      - my-network
  mongodb:
    container_name: wetaxitask_mongodb
    image: mongo:latest
    restart: always
    ports:
      - 27017:27017
    volumes:
      - ./data:/data/db
    environment:
      - MONGO_INITDB_DATABASE=wetaxitask
    networks:
      - my-network
  redis:
    container_name: wetaxitask_redis
    image: redis:latest
    ports:
      - 6379:6379
    networks:
      - my-network
networks:
  my-network:
    # this should define a network called my-network with default settings
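With both containers on my-network, the connection string in the Node app just uses the service name. A minimal sketch with the official mongodb driver (the database name is taken from MONGO_INITDB_DATABASE above):

// 'mongodb' is resolved by Docker's embedded DNS on my-network
const { MongoClient } = require('mongodb');
MongoClient.connect('mongodb://mongodb:27017/wetaxitask')
  .then(() => console.log('connected'))
  .catch((err) => console.error(err));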

Docker containers cant communicate (Docker compose / React / NodeJs / Nginx)

My client can't send requests to the backend because it can't resolve the host. I'm trying to pass the container connection info down via an environment variable and use it in the client to connect, but the requests don't go through at all. Any help? Nginx works fine for the frontend part but doesn't work for proxying the backend.
docker-compose.yml
version: '3.2'
services:
  server:
    restart: always
    build:
      dockerfile: Dockerfile
      context: ./nginx
    depends_on:
      - backend
      - frontend
      - database
    ports:
      - '5000:80'
  database:
    image: postgres:latest
    container_name: database
    ports:
      - "5432:5432"
    restart: always
    hostname: database
    expose:
      - 5432
  backend:
    build:
      context: ./backend
      dockerfile: ./Dockerfile
    image: kalendae_backend:latest
    hostname: backend
    container_name: backend
    ports:
      - "5051:5051"
    environment:
      - WAIT_HOSTS=database:5432
      - DATABASE_HOST=database
      - DATABASE_PORT=5432
      - PORT=5051
    links:
      - database
    expose:
      - 5051
  frontend:
    build:
      context: ./frontend
      dockerfile: ./Dockerfile
    image: kalendae_frontend:latest
    ports:
      - "5050:5050"
    hostname: frontend
    container_name: frontend
    environment:
      - WAIT_HOSTS=backend:5051
      - REACT_APP_BACKEND_HOST=backend
      - REACT_APP_BACKEND_PORT=5051
    links:
      - backend
    expose:
      - 5050
Nginx config
upstream frontend {
    server frontend:5050;
}
upstream backend {
    server backend:5051;
}
upstream server {
    server server:5000;
}
server {
    listen 80;
    location / {
        proxy_pass http://frontend;
    }
    location backend {
        proxy_pass http://backend;
    }
    location /backend {
        proxy_pass http://backend;
    }
}
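One detail worth checking here: REACT_APP_BACKEND_HOST=backend only resolves inside the Docker network, while the React code runs in the user's browser, which knows nothing about Docker DNS. A common workaround is to let the client call nginx with a relative path and have the /backend location proxy it. A sketch, where the API path is hypothetical and assumes the backend serves routes under /backend (or that the location strips the prefix):

// in the React client: the request goes to nginx, which proxies /backend to the backend service
fetch('/backend/api/events')
  .then((res) => res.json())
  .then((data) => console.log(data));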

Issues docker with pm2 and nginx

I'm having issues running a pm2 app in a container. I tried accessing it through the Docker port and through an nginx proxy, but neither of these solutions is working. Here's my Docker config:
version: '3.5'
services:
  webapp:
    build:
      context: .
    image: ${DOCKER_IMAGE}
    container_name: mypm2app
    stdin_open: true
    networks:
      - "default"
    restart: always
    ports:
      - "8282:8282"
    extra_hosts:
      - host.local:${LOCAL_IP}
  db:
    image: mongo:4.2.6
    container_name: mongodb
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_INITDB_ROOT_USERNAME}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_INITDB_ROOT_PASSWORD}
      MONGO_INITDB_DATABASE: ${MONGO_INITDB_DATABASE}
    volumes:
      - ${MONGO_SCRIPT_PATH}:${MONGO_SCRIPT_DESTINATION_PATH}
    networks:
      - "default"
networks:
  default:
    external:
      name: ${NETWORK_NAME}
I also have this Dockerfile:
FROM image
WORKDIR /var/www/html/path
COPY package.json /var/www/html/path
RUN npm install
COPY . /var/www/html/path
EXPOSE 8282/tcp
CMD pm2-runtime start ecosystem.config.js --env development
pm2 is starting the service, but I cannot access it through localhost:port.
I tried to add an nginx proxy:
nginx:
  depends_on:
    - webapp
  restart: always
  build:
    dockerfile: Dockerfile.dev
    context: ./nginx
  ports:
    - "3002:80"
  networks:
    default:
      ipv4_address: ${nginx_ip}
with this Dockerfile:
FROM nginx
COPY ./default.conf /etc/nginx/conf.d/default.conf
This is the nginx configuration, default.conf:
upstream mypm2app {
    server mypm2app:8282;
}
server {
    listen 80;
    server_name mypm2app.local;
    location / {
        proxy_pass http://mypm2app/;
    }
}
I would appreciate any suggestion or answer to this issue.
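A quick way to narrow this kind of problem down is to test whether the pm2 container is reachable on the shared network at all, independently of nginx. A sketch using a throwaway curl container (the network name is whatever ${NETWORK_NAME} expands to):

docker run --rm --network ${NETWORK_NAME} curlimages/curl -s http://mypm2app:8282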

I have troubles installing several app using docker compose

For my personnal knowledge I want set up my server on docker (using docker compose).
And I have some troubles setting up several app (probem comes from the ports).
I have a completly clean debian 8 server.
I created 2 repositories one for nextcloud the other one for bitwarden
I started first next cloud everythings is fine so after that I launch bitwarden and I have an error because I'm using the same port. But because I want to use letsencrypt for both and an https web site how am I suppose to configure the ports and the reverse proxy.
this one is for nextcloud
version: '3'
services:
  proxy:
    image: jwilder/nginx-proxy:alpine
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"
    container_name: nextcloud-proxy
    networks:
      - nextcloud_network
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./proxy/conf.d:/etc/nginx/conf.d:rw
      - ./proxy/vhost.d:/etc/nginx/vhost.d:rw
      - ./proxy/html:/usr/share/nginx/html:rw
      - ./proxy/certs:/etc/nginx/certs:ro
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
    restart: unless-stopped
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nextcloud-letsencrypt
    depends_on:
      - proxy
    networks:
      - nextcloud_network
    volumes:
      - ./proxy/certs:/etc/nginx/certs:rw
      - ./proxy/vhost.d:/etc/nginx/vhost.d:rw
      - ./proxy/html:/usr/share/nginx/html:rw
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart: unless-stopped
  db:
    image: mariadb
    container_name: nextcloud-mariadb
    networks:
      - nextcloud_network
    volumes:
      - db:/var/lib/mysql
      - /etc/localtime:/etc/localtime:ro
    environment:
      - MYSQL_ROOT_PASSWORD=toor
      - MYSQL_PASSWORD=mysql
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
    restart: unless-stopped
  app:
    image: nextcloud:latest
    container_name: nextcloud-app
    networks:
      - nextcloud_network
    depends_on:
      - letsencrypt
      - proxy
      - db
    volumes:
      - nextcloud:/var/www/html
      - ./app/config:/var/www/html/config
      - ./app/custom_apps:/var/www/html/custom_apps
      - ./app/data:/var/www/html/data
      - ./app/themes:/var/www/html/themes
      - /etc/localtime:/etc/localtime:ro
    environment:
      - VIRTUAL_HOST=nextcloud.YOUR-DOMAIN
      - LETSENCRYPT_HOST=nextcloud.YOUR-DOMAIN
      - LETSENCRYPT_EMAIL=YOUR-EMAIL
    restart: unless-stopped
volumes:
  nextcloud:
  db:
networks:
  nextcloud_network:
This one is for Bitwarden:
version: "3"
services:
bitwarden:
image: bitwardenrs/server
restart: always
volumes:
- ./bw-data:/data
environment:
WEBSOCKET_ENABLED: "true"
SIGNUPS_ALLOWED: "true"
caddy:
image: abiosoft/caddy
restart: always
volumes:
- ./Caddyfile:/etc/Caddyfile:ro
- caddycerts:/root/.caddy
ports:
- 80:80 # needed for Let's Encrypt
- 443:443
environment:
ACME_AGREE: "true"
DOMAIN: "bitwarden.example.org"
EMAIL: "bitwarden#example.org"
volumes:
caddycerts:
The error is:
ERROR: for root_caddy_1 Cannot start service caddy: driver failed programming external connectivity on endpoint root_caddy_1 xxxxxxxxxxxxxxxxxx: Bind for 0.0.0.0:80 failed: port is already allocated
Based on your comment I will detail the solution with multiple subdomains here.
First of all, the easiest solution for now is to put all the services in the same docker-compose file; otherwise you would have to create a network and declare it as an external network in each docker-compose.yml.
Next, remove the ports declaration from the proxy and caddy containers (to free up ports 80 and 443 on the host).
Then create a new service and add it to the same docker-compose.yml:
nginx:
  image: nginx
  volumes:
    - ./subdomains_conf:/etc/nginx/conf.d
  ports:
    - "80:80"
Next create a folder subdomains_conf and in it a file default.conf with contents similar to:
server {
    listen 80;
    listen [::]:80;
    server_name first.domain.com;
    location / {
        proxy_pass http://proxy:80;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header HOST $http_host;
    }
}
server {
    listen 80;
    listen [::]:80;
    server_name second.domain.com;
    location / {
        proxy_pass http://caddy:80;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header HOST $http_host;
    }
}
You need to replace the values of server_name with your actual domain names. The configuration for SSL is similar.
You can test this setup locally by pointing the two domains to 127.0.0.1 in /etc/hosts, as shown below. Remember that all the services should be defined in the same docker-compose.yml, or you need to create a network and specify it in each docker-compose.yml; otherwise the containers will not see each other.
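For that local test, the /etc/hosts entries would look like this (with the placeholder domains used above):

127.0.0.1   first.domain.com
127.0.0.1   second.domain.com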
I found an easy way to manage this problem using a reverse proxy with Traefik:
https://docs.traefik.io/
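For comparison, a minimal Traefik v2 sketch along the lines of the Traefik quick start (image tag, router name, and domain are placeholders): Traefik watches the Docker socket and routes by container labels, so the two apps never fight over ports 80/443 on the host.

version: '3'
services:
  traefik:
    image: traefik:v2.4
    command: --providers.docker
    ports:
      - 80:80
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  app:
    image: nextcloud:latest
    labels:
      - traefik.http.routers.app.rule=Host(`nextcloud.YOUR-DOMAIN`)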
