My client can't send requests to the backend because it can't resolve the host. I'm trying to pass the container connection info down via an environment variable and use it in the client to connect, but the requests fail entirely. Any help? Nginx works fine for serving the frontend but doesn't work for proxying the backend.
docker-compose.yml
version: '3.2'
services:
  server:
    restart: always
    build:
      dockerfile: Dockerfile
      context: ./nginx
    depends_on:
      - backend
      - frontend
      - database
    ports:
      - '5000:80'
  database:
    image: postgres:latest
    container_name: database
    ports:
      - "5432:5432"
    restart: always
    hostname: database
    expose:
      - 5432
  backend:
    build:
      context: ./backend
      dockerfile: ./Dockerfile
    image: kalendae_backend:latest
    hostname: backend
    container_name: backend
    ports:
      - "5051:5051"
    environment:
      - WAIT_HOSTS=database:5432
      - DATABASE_HOST=database
      - DATABASE_PORT=5432
      - PORT=5051
    links:
      - database
    expose:
      - 5051
  frontend:
    build:
      context: ./frontend
      dockerfile: ./Dockerfile
    image: kalendae_frontend:latest
    ports:
      - "5050:5050"
    hostname: frontend
    container_name: frontend
    environment:
      - WAIT_HOSTS=backend:5051
      - REACT_APP_BACKEND_HOST=backend
      - REACT_APP_BACKEND_PORT=5051
    links:
      - backend
    expose:
      - 5050
Nginx config
upstream frontend {
    server frontend:5050;
}

upstream backend {
    server backend:5051;
}

server {
    listen 80;

    location / {
        proxy_pass http://frontend;
    }

    location /backend {
        proxy_pass http://backend;
    }
}
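For context, the client builds its request URL from those env vars, roughly like this (a reconstructed sketch; the helper name and endpoint path are assumed, not from the original code):

// Sketch of the client-side request (reconstructed; names are assumed).
// CRA inlines REACT_APP_* variables into the bundle at build time, so this
// code runs in the browser, where the Docker hostname "backend" won't resolve.
import axios from 'axios';

const host = process.env.REACT_APP_BACKEND_HOST; // "backend"
const port = process.env.REACT_APP_BACKEND_PORT; // "5051"

export function getFromBackend(path) {
  return axios.get(`http://${host}:${port}${path}`);
}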
Related
I'm running my containers via a docker-compose file. They are on the same network, and I can ping my database container from my backend container. I use the database service name as the hostname in the connection string, and it doesn't raise any error about not finding the host; instead, it just hangs and times out.
I have a test endpoint that is just supposed to test the connection. When I use that endpoint, the database container logs "invalid packet length", and on the frontend nothing happens until it times out. I have no idea what's wrong. Any help?
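The test endpoint looks roughly like this (a reconstructed sketch; the original post doesn't include the backend code, so the route name and credentials are assumed from the compose file below):

// Hypothetical test endpoint -- a sketch; the real code isn't shown.
// It connects to Postgres with the env vars passed in docker-compose.
const express = require('express');
const { Client } = require('pg');

const app = express();

app.get('/test-db', async (req, res) => {
  const client = new Client({
    host: process.env.DATABASE_HOST,         // "database"
    port: Number(process.env.DATABASE_PORT), // 5432
    user: 'postgres',
    password: '1234',
  });
  try {
    await client.connect();
    const { rows } = await client.query('SELECT 1 AS ok');
    res.json(rows[0]);
  } catch (err) {
    res.status(500).send(err.message);
  } finally {
    await client.end();
  }
});

app.listen(process.env.PORT || 5051);

And here is the docker-compose.yml: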
version: '3.2'
services:
  server:
    restart: always
    build:
      dockerfile: Dockerfile
      context: ./nginx
    depends_on:
      - backend
      - frontend
      - database
    ports:
      - '5000:80'
    networks:
      - app_network
  database:
    image: postgres:latest
    container_name: database
    ports:
      - "5432:5432"
    restart: always
    hostname: database
    environment:
      POSTGRES_PASSWORD: 1234
      POSTGRES_USER: postgres
  backend:
    build:
      context: ./backend
      dockerfile: ./Dockerfile
    image: kalendae:backend
    hostname: backend
    container_name: backend
    environment:
      - WAIT_HOSTS=database:5432
      - DATABASE_HOST=database
      - DATABASE_PORT=5432
      - PORT=5051
  frontend:
    build:
      context: ./frontend
      dockerfile: ./Dockerfile
    image: kalendae:frontend
    hostname: frontend
    container_name: frontend
    environment:
      - WAIT_HOSTS=backend:5051
      - REACT_APP_BACKEND_HOST=localhost
      - REACT_APP_BACKEND_PORT=5051
I'm having issues running a pm2 app in a container. I tried accessing it through the Docker port and with an nginx proxy, but neither solution is working. Here's my Docker config:
version: '3.5'
services:
  webapp:
    build:
      context: .
    image: ${DOCKER_IMAGE}
    container_name: mypm2app
    stdin_open: true
    networks:
      - "default"
    restart: always
    ports:
      - "8282:8282"
    extra_hosts:
      - host.local:${LOCAL_IP}
  db:
    image: mongo:4.2.6
    container_name: mongodb
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_INITDB_ROOT_USERNAME}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_INITDB_ROOT_PASSWORD}
      MONGO_INITDB_DATABASE: ${MONGO_INITDB_DATABASE}
    volumes:
      - ${MONGO_SCRIPT_PATH}:${MONGO_SCRIPT_DESTINATION_PATH}
    networks:
      - "default"
networks:
  default:
    external:
      name: ${NETWORK_NAME}
I also have this Dockerfile:
FROM image
WORKDIR /var/www/html/path
COPY package.json /var/www/html/path
RUN npm install
COPY . /var/www/html/path
EXPOSE 8282/tcp
CMD pm2-runtime start ecosystem.config.js --env development
pm2 is starting the service, but I cannot access it through localhost:port.
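The ecosystem file looks roughly like this (a sketch; the original isn't shown, so the script name and env values are assumed):

// ecosystem.config.js -- hypothetical sketch of the pm2 config.
// Note: for the published port 8282 to be reachable from the host, the
// app must bind 0.0.0.0 inside the container, not 127.0.0.1.
module.exports = {
  apps: [
    {
      name: 'mypm2app',
      script: './server.js',
      env_development: {
        NODE_ENV: 'development',
        HOST: '0.0.0.0',
        PORT: 8282,
      },
    },
  ],
};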
I tried to add an nginx proxy:
nginx:
  depends_on:
    - webapp
  restart: always
  build:
    dockerfile: Dockerfile.dev
    context: ./nginx
  ports:
    - "3002:80"
  networks:
    default:
      ipv4_address: ${nginx_ip}
with this Dockerfile:
FROM nginx
COPY ./default.conf /etc/nginx/conf.d/default.conf
This is the nginx configuration, default.conf:
upstream mypm2app {
    server mypm2app:8282;
}

server {
    listen 80;
    server_name mypm2app.local;

    location / {
        proxy_pass http://mypm2app/;
    }
}
I would appreciate any suggestion or answer to this issue.
When I try to connect to my backend (using Sequelize), I get the following error:
error ConnectionRefusedError [SequelizeConnectionRefusedError]: connect ECONNREFUSED 127.0.0.1:5432
docker-compose.yml:
version: "3.7"
services:
frontend:
build:
context: ./client
dockerfile: Dockerfile
image: client
ports:
- "3000:3000"
volumes:
- ./client:/usr/src/app
backend:
build:
context: ./server
dockerfile: Dockerfile
image: server
ports:
- "8000:8000"
volumes:
- ./server:/usr/src/app
db:
image: postgres
environment:
POSTGRES_DB: ckl
POSTGRES_USER: postgres
POSTGRES_PASSWORD: docker
ports:
- "5432:5432"
What am I doing wrong?
Thanks in advance.
Assuming your backend is connecting to the db, you should add a depends_on:
backend:
  build:
    context: ./server
    dockerfile: Dockerfile
  image: server
  depends_on:
    - db
  ports:
    - "8000:8000"
  volumes:
    - ./server:/usr/src/app
The db will now be accessible at the host db:5432. If your application is configured to connect to localhost:5432 or 127.0.0.1:5432, you'll need to replace the hostname localhost with db. Your Postgres connection string might also omit the host and try to connect to localhost by default; you should be able to check the Sequelize docs to figure out how to pass a host, as sketched below.
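A minimal sketch of that with Sequelize, assuming the database name and credentials from the compose file above (your options may differ):

// Hypothetical Sequelize setup -- the key point is passing the compose
// service name as the host instead of relying on the localhost default.
const { Sequelize } = require('sequelize');

const sequelize = new Sequelize('ckl', 'postgres', 'docker', {
  host: 'db', // the compose service name, not localhost or 127.0.0.1
  port: 5432,
  dialect: 'postgres',
});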
For my personal knowledge, I want to set up my server with Docker (using Docker Compose).
I'm having some trouble setting up several apps (the problem comes from the ports).
I have a completely clean Debian 8 server.
I created two directories, one for Nextcloud and the other for Bitwarden.
I started Nextcloud first and everything was fine; then I launched Bitwarden and got an error because both use the same port. Since I want to use Let's Encrypt for both, over HTTPS, how am I supposed to configure the ports and the reverse proxy?
This one is for Nextcloud:
version: '3'
services:
  proxy:
    image: jwilder/nginx-proxy:alpine
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"
    container_name: nextcloud-proxy
    networks:
      - nextcloud_network
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./proxy/conf.d:/etc/nginx/conf.d:rw
      - ./proxy/vhost.d:/etc/nginx/vhost.d:rw
      - ./proxy/html:/usr/share/nginx/html:rw
      - ./proxy/certs:/etc/nginx/certs:ro
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro
    restart: unless-stopped
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nextcloud-letsencrypt
    depends_on:
      - proxy
    networks:
      - nextcloud_network
    volumes:
      - ./proxy/certs:/etc/nginx/certs:rw
      - ./proxy/vhost.d:/etc/nginx/vhost.d:rw
      - ./proxy/html:/usr/share/nginx/html:rw
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart: unless-stopped
  db:
    image: mariadb
    container_name: nextcloud-mariadb
    networks:
      - nextcloud_network
    volumes:
      - db:/var/lib/mysql
      - /etc/localtime:/etc/localtime:ro
    environment:
      - MYSQL_ROOT_PASSWORD=toor
      - MYSQL_PASSWORD=mysql
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
    restart: unless-stopped
  app:
    image: nextcloud:latest
    container_name: nextcloud-app
    networks:
      - nextcloud_network
    depends_on:
      - letsencrypt
      - proxy
      - db
    volumes:
      - nextcloud:/var/www/html
      - ./app/config:/var/www/html/config
      - ./app/custom_apps:/var/www/html/custom_apps
      - ./app/data:/var/www/html/data
      - ./app/themes:/var/www/html/themes
      - /etc/localtime:/etc/localtime:ro
    environment:
      - VIRTUAL_HOST=nextcloud.YOUR-DOMAIN
      - LETSENCRYPT_HOST=nextcloud.YOUR-DOMAIN
      - LETSENCRYPT_EMAIL=YOUR-EMAIL
    restart: unless-stopped
volumes:
  nextcloud:
  db:
networks:
  nextcloud_network:
This one is for Bitwarden:
version: "3"
services:
bitwarden:
image: bitwardenrs/server
restart: always
volumes:
- ./bw-data:/data
environment:
WEBSOCKET_ENABLED: "true"
SIGNUPS_ALLOWED: "true"
caddy:
image: abiosoft/caddy
restart: always
volumes:
- ./Caddyfile:/etc/Caddyfile:ro
- caddycerts:/root/.caddy
ports:
- 80:80 # needed for Let's Encrypt
- 443:443
environment:
ACME_AGREE: "true"
DOMAIN: "bitwarden.example.org"
EMAIL: "bitwarden#example.org"
volumes:
caddycerts:
The error is:
ERROR: for root_caddy_1 Cannot start service caddy: driver failed programming external connectivity on endpoint root_caddy_1 xxxxxxxxxxxxxxxxxx: Bind for 0.0.0.0:80 failed: port is already allocated
Based on your comment, I will detail the solution with multiple subdomains.
First of all, the easiest solution for now is to put all the services in the same docker-compose file; otherwise you would have to create a network and declare it as an external network in each docker-compose.yml.
Next, remove the ports declarations for the proxy and caddy containers (to free up ports 80 and 443 on the host).
Create a new service and add it to the same docker-compose.yml:
nginx:
  image: nginx
  volumes:
    - ./subdomains_conf:/etc/nginx/conf.d
  ports:
    - "80:80"
Next, create a folder subdomains_conf and in it a file default.conf with contents similar to:
server {
    listen 80;
    listen [::]:80;
    server_name first.domain.com;

    location / {
        proxy_pass http://proxy:80;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header HOST $http_host;
    }
}

server {
    listen 80;
    listen [::]:80;
    server_name second.domain.com;

    location / {
        proxy_pass http://caddy:80;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header HOST $http_host;
    }
}
You need to replace the values for server_name with your actual domain names. The configuration for SSL is similar.
You can test this setup locally by pointing the two domains to 127.0.0.1 in /etc/hosts. Remember that all the services should be defined in the same docker-compose.yml, or you need to create a network and specify it in each docker-compose.yml; otherwise the containers will not see each other, as sketched below.
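For reference, a minimal sketch of the external-network variant (the network name shared_proxy is just an example):

# Run once on the host: docker network create shared_proxy
# Then add this stanza to each docker-compose.yml so both stacks
# join the same network and can resolve each other's service names:
networks:
  default:
    external:
      name: shared_proxy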
I found an easy way to manage this problem: a reverse proxy with Traefik.
https://docs.traefik.io/
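A minimal sketch of that approach, assuming Traefik v2 with the Docker provider (the image tag, router name, and domain are placeholders, not from the original post):

version: "3"
services:
  traefik:
    image: traefik:v2.4
    command:
      # Watch Docker for containers to route to, and listen on port 80
      - --providers.docker=true
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  nextcloud:
    image: nextcloud:latest
    labels:
      # Route requests for this host to the container
      - traefik.http.routers.nextcloud.rule=Host(`nextcloud.example.org`)
      - traefik.http.routers.nextcloud.entrypoints=web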
I have containers for two NodeJS services and one Nginx for reverse proxying.
I made Nginx listen on port 80 so it's publicly available via localhost in my browser.
I also use the reverse proxy to proxy_pass to each responsible service.
location /api/v1/service1/ {
    proxy_pass http://service1:3000/;
}

location /api/v1/service2/ {
    proxy_pass http://service2:3000/;
}
In my service 1, there is an axios module that calls service 2 by making a request to localhost/api/v1/service2.
But it says the connection is refused. I suspect that localhost in service 1 refers to its own container, not the Docker host.
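The call in service 1 is roughly this (a reconstructed sketch; the function name is assumed):

// Inside service1 -- sketch of the failing request (reconstructed).
// "localhost" resolves to the service1 container itself, where nothing
// listens on port 80; going through the nginx service name
// (http://nginx/api/v1/service2/) or straight to service2:3000 would
// stay on the Docker network instead.
const axios = require('axios');

async function callService2() {
  const res = await axios.get('http://localhost/api/v1/service2/');
  return res.data;
}

And this is my docker-compose.yml: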
version: '3'
services:
  service1:
    build: './service1'
    networks:
      - backend
  service2:
    build: './service2'
    networks:
      - backend
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    networks:
      - backend
networks:
  backend:
    driver: bridge
Even after using the network, it still says ECONNREFUSED.
Please help.
Try adding a depends_on for nginx in your docker-compose file, like below:
version: '3'
services:
  service1:
    build: './service1'
    expose:
      - "3000"
    networks:
      - backend
  service2:
    build: './service2'
    expose:
      - "3000"
    networks:
      - backend
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
    networks:
      - backend
    depends_on:
      - service1
      - service2
networks:
  backend:
    driver: bridge
This makes sure that both services are running before the nginx container attempts to connect to them. Perhaps the connection is refused because the nginx container keeps crashing: when it loads its conf file, it cannot resolve the two services if they are not yet running.