Docker-compose confuses building a frontend with a backend - node.js

Using Docker Compose I want to build 3 containers: backend (Node.js), frontend (React.js) and MySQL. Here is my docker-compose.yml:
version: '3.8'
services:
  backend:
    container_name: syberiaquotes-restapi
    build: ./backend
    env_file:
      - .env
    command: "sh -c 'npm install && npm run start'"
    ports:
      - '3000:3000'
    volumes:
      - ./backend:/app
      - /app/node_modules
    depends_on:
      - db
  frontend:
    container_name: syberiaquotes-frontend
    build: ./frontend
    ports:
      - '5000:5000'
    volumes:
      - ./frontend/src:/app/src
    stdin_open: true
    tty: true
    depends_on:
      - backend
  db:
    image: mysql:latest
    container_name: syberiaquotes-sql
    env_file:
      - .env
    environment:
      - MYSQL_DATABASE=${SQL_DB}
      - MYSQL_USER=${SQL_USER}
      - MYSQL_ROOT_PASSWORD=${SQL_PASSWORD}
      - MYSQL_PASSWORD=${SQL_PASSWORD}
    volumes:
      - data:/var/lib/mysql
    restart: unless-stopped
    ports:
      - '3306:3306'
volumes:
  data:
My file structure:
Everything worked fine until I added the new 'frontend' container!
It seems that Docker is treating my frontend container as a second backend, because it's trying to launch nodemon, which isn't even included in the frontend's dependencies!
Obviously I have two Dockerfiles, one for each service; they are almost identical.
Backend:
Frontend:
Do you have any idea where the problem might be?

RESOLVED! I had to delete all images and volumes:
$ docker rm $(docker ps -a -q) -f
$ echo y | docker system prune --all
$ echo y | docker volume prune
and run it again:
$ docker-compose up
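For what it's worth, the likely culprit was stale build artifacts rather than the compose file itself: the frontend service was most probably running an old image (and/or a leftover volume such as the anonymous /app/node_modules one) from earlier builds, so it started with the backend's code and dependencies, including nodemon. A gentler cleanup, scoped to just this project, would be something like:
$ docker-compose down --volumes --rmi all
$ docker-compose up --build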

Related

Redirecting API call to a dockerized node-express server with nginx

I have a node-express server running inside a dockerized container that exposes port 3001. My nginx is installed on the OS with sudo apt install nginx. What I want is that every time a call is made to app.domain.com:3001, that call is redirected to localhost:3001. I am new to nginx configuration, and I would prefer to do this with a *.conf file in nginx's conf.d folder. Also, the API responses should keep the same domain (.domain.com) so that I can set httpOnly cookies on an Angular app running on app.domain.com.
My node docker-compose file:
version: "3"
services:
container1:
image: node:18.12-alpine
working_dir: /usr/src/app
container_name: container1
depends_on:
- container_mongodb
restart: on-failure
env_file: .env
ports:
- "$APP_PORT:$APP_PORT"
volumes:
- .:/usr/src/app
networks:
- network1
command: ./wait-for.sh container_mongodb:$MONGO_PORT -- npm run dev
container_mongodb:
image: mongo:6.0.3
container_name: container_mongodb
restart: on-failure
env_file: .env
environment:
- MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
- MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
ports:
- "$MONGO_EXPOSE_PORT:$MONGO_PORT"
volumes:
- container_mongodb_data:/data/db
- ./src/config/auth/mongodb.key:/data/mongodb.key
networks:
- network1
entrypoint:
- bash
- -c
- |
cp /data/mongodb.key /data/mongodb.copy.key
chmod 400 /data/mongodb.copy.key
chown 999:999 /data/mongodb.copy.key
exec docker-entrypoint.sh $$#
command: ["mongod", "--replSet", "rs0", "--bind_ip_all", "--keyFile", "/data/mongodb.copy.key"]
networks:
network1:
external: true
volumes:
container_mongodb_data:
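For reference, a minimal conf.d sketch of the proxy described above might look like the following. This is a sketch, not a drop-in config: the server name is a placeholder, and it assumes the container keeps publishing port 3001 on the host.

# /etc/nginx/conf.d/app.conf -- hypothetical sketch
server {
    listen 80;
    server_name app.domain.com;

    location / {
        # forward requests to the dockerized express server on localhost:3001
        proxy_pass http://127.0.0.1:3001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

If the API really has to be reached on app.domain.com:3001 rather than port 80, change the listen directive accordingly and publish the container on a different host port (for example "127.0.0.1:3002:3001") so nginx and Docker do not compete for the same port. Either way the browser still talks to app.domain.com, so the API responses can set cookies for .domain.com.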

Need some help regarding docker-compose params

I have a project with this docker-compose.yml file:
version: "3.9"
services:
nginx:
build:
context: ./
dockerfile: nginx/Dockerfile
container_name: nginx_lb
volumes:
- vxx_data:/var/www/html:ro
- ./certbot/conf:/etc/letsencrypt
- ./certbot/www:/var/www/certbot
ports:
- "80:80"
- "443:443"
command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
depends_on:
- web
restart: unless-stopped
nodejs:
build:
context: ./
dockerfile: nodejs/Dockerfile
container_name: nodejs
volumes:
- vxx_data:/app
depends_on:
- web
restart: unless-stopped
certbot:
image: certbot/certbot
restart: unless-stopped
container_name: ssl_cert
volumes:
- ./certbot/conf:/etc/letsencrypt
- ./certbot/www:/var/www/certbot
entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
depends_on:
- nginx
volumes:
vxx_data:
db-data:
I need to understand this line with :ro
volumes:
  - vxx_data:/var/www/html:ro
Normally we access the volume data without :ro and it works, but here I don't understand this switch. Also, when I bring the docker-compose stack down, it resets all my work and files. This is the main issue.
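To the first part of the question: the :ro suffix simply makes that particular mount read-only inside the nginx container, so nginx can serve the files in vxx_data but cannot modify them; the same volume mounted without the suffix (as in the nodejs service) stays read-write. A named volume like vxx_data is not deleted by a plain docker-compose down; it is only removed if you add the -v/--volumes flag, so data that disappears after a down/up cycle was most likely written somewhere outside the named volume. A sketch of the distinction, using the two services above:

  nginx:
    volumes:
      - vxx_data:/var/www/html:ro   # read-only: nginx can serve but not modify
  nodejs:
    volumes:
      - vxx_data:/app               # read-write (default): this service can write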

Flask App runs locally but errors out on DigitalOcean

It has to do with my docker-compose.yml file, I think. The app builds and runs locally, but when I run "docker-compose up -d" on my DigitalOcean Droplet I get this error:
ERROR: Couldn't find env file: /root/.env
The following is my docker-compose.yml file.
version: '2'
services:
  postgres:
    image: 'postgres:9.5'
    container_name: postgress
    env_file:
      - '.env'
    volumes:
      - 'postgres:/var/lib/postgresql/data'
    ports:
      - '5432:5432'
    networks:
      - db_nw
  redis:
    image: 'redis:3.0-alpine'
    container_name: redis
    command: redis-server --requirepass pass123456word
    volumes:
      - 'redis:/var/lib/redis/data'
    ports:
      - '6379:6379'
  website:
    restart: always
    build: .
    container_name: website
    command: >
      gunicorn -c "python:config.gunicorn" --reload "app.app:create_app()"
    env_file:
      - '.env'
    volumes:
      - '.:/app'
    ports:
      - 8000:8000
    expose:
      - 8000
    networks:
      - db_nw
      - web_nw
    depends_on:
      - postgres
    links:
      - celery
      - redis
      - postgres
  celery:
    build: .
    container_name: celery
    command: celery worker -B -l info -A app.blueprints.contact.tasks
    env_file:
      - '.env'
    volumes:
      - '.:/app'
  nginx:
    restart: always
    build: ./nginx
    image: 'nginx:1.13'
    container_name: nginx
    volumes:
      - /www/static
      - .:/app
    ports:
      - 80:80
    networks:
      - web_nw
    links:
      - website
    depends_on:
      - website
networks:
  db_nw:
    driver: bridge
  web_nw:
    driver: bridge
volumes:
  postgres:
  redis:
My Dockerfile:
FROM python:3.7.5-slim-buster
RUN apt-get update \
&& apt-get install -qq -y \
build-essential libpq-dev --no-install-recommends
ENV INSTALL_PATH /app
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY requirements.txt requirements.txt
RUN pip install --upgrade pip -r requirements.txt
COPY . .
RUN pip install --editable .
CMD gunicorn -c "python:config.gunicorn" "app.app:create_app()"
Is something wrong with the volumes in my docker-compose.yml file? Or am I doing something weird in my Dockerfile with the ENV, to where it's hard-coded to my local machine rather than the "root" directory on DigitalOcean?
I'm new to hosting docker images so this is my first go at something like this. Thanks!
When you access a Droplet, you're generally running as root.
You appear to have copied the docker-compose.yml correctly to the Droplet but you have not copied the .env file on which it depends to the Droplet's /root/.env.
If you copy the .env file to /root/.env on the Droplet, it should work.
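A minimal way to do that from the local project directory might be (the Droplet address below is a placeholder):
$ scp .env root@your-droplet-ip:/root/.env   # replace your-droplet-ip with the Droplet's address
Compose looks for the relative '.env' paths next to the docker-compose.yml in the project directory, which on the Droplet is /root here, hence the /root/.env in the error message.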

Running a container stops another container

I've created two basic MEAN stack apps with a common database (MongoDB). I've also built Docker images for these apps.
Problem:
When I start a MEAN stack container (example-app-1) using
docker-compose up -d --build
The container runs smoothly and I'm also able to hit the container and view my page locally.
When I try to start another MEAN stack container (example-app-2) using
docker-compose up -d --build
my previous container is stopped and the second container works without any flaw.
Required:
I want both these containers to run simultaneously using a shared database. I need help in achieving this.
docker-compose.yml for example app 1
version: '3'
services:
  example_app_1:
    build:
      dockerfile: dockerfile
      context: ../../
    image: example_app_1:1.0.0
  backend:
    image: 'example_app_1:1.0.0'
    working_dir: /app/example_app_1/backend/example-app-1-api
    environment:
      - DB_URL=mongodb://172.17.0.1:27017/example_app_1
      - BACKEND_PORT=8888
      - BACKEND_IP=0.0.0.0
    restart: always
    ports:
      - '8888:8888'
    command: ['node', 'main.js']
    networks:
      - default
    expose:
      - 8888
  frontend:
    image: 'example_app_1:1.0.0'
    working_dir: /app/example_app_1/frontend/example_app_1
    ports:
      - '5200:5200'
    command: ['http-server', '-p', '5200', '-o', '/app/example_app_1/frontend/example-app-1']
    restart: always
    depends_on:
      - backend
networks:
  default:
    external:
      name: backend_network
docker-compose.yml for Example app 2
version: '3'
services:
  example-app-2:
    build:
      dockerfile: dockerfile
      context: ../../
    image: example_app_2:1.0.0
  backend:
    image: 'example-app-2:1.0.0'
    working_dir: /app/example_app_2/backend/example-app-2-api
    environment:
      - DB_URL=mongodb://172.17.0.1:27017/example_app_2
      - BACKEND_PORT=3333
      - BACKEND_IP=0.0.0.0
    restart: always
    networks:
      - default
    ports:
      - '3333:3333'
    command: ['node', 'main.js']
    expose:
      - 3333
  frontend:
    image: 'example-app-2:1.0.0'
    working_dir: /app/example_app_2/frontend/example-app-2
    ports:
      - '4200:4200'
    command: ['http-server', '-p', '4200', '-o', '/app/example_app_2/frontend/example-app-2']
    restart: always
    depends_on:
      - backend
networks:
  default:
    external:
      name: backend_network
docker-compose names containers after the project and the service (e.g. project_service_1). The project name, by default, comes from the name of the directory containing the docker-compose.yml. So when you start the second compose file, it sees containers with the same names, stops them, and starts new ones in their place.
Either put the two docker-compose files in directories with different names so the generated container names differ, or run docker-compose with the --project-name flag and separate project names so the containers are started under different names.
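For example, with explicit project names (the names here are arbitrary), both stacks can run side by side:
$ docker-compose -p example_app_1 up -d --build   # run from the first app's compose directory
$ docker-compose -p example_app_2 up -d --build   # run from the second app's compose directory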

Call a docker container by its name

I would like to know if it's possible to use my docker container name as host instead of the IP.
Let me explain. Here's my docker-compose file:
version : "3"
services:
pandacola-logger:
build: ./
networks:
- logger_db
volumes:
- "./:/app"
ports:
- 8060:8060
- 10060:10060
command: npm run dev
logger-mysql:
image: mysql
networks:
- logger_db
command: --default-authentication-plugin=mysql_native_password
environment:
MYSQL_ROOT_PASSWORD: Carotte1988-
MYSQL_DATABASE: logger
MYSQL_USER: logger-user
MYSQL_PASSWORD: PandaCola-
ports:
- 3306:3306
adminer:
networks:
- logger_db
image: adminer
restart: always
ports:
- 8090:8090
networks:
logger_db: {}
Sorry, the indentation is a bit messy.
I would like to set the name of my logger-mysql container in the .env file of my web service (the pandacola-logger) instead of its IP address.
Here's the .env file:
HOST=0.0.0.0
PORT=8060
NODE_ENV=development
APP_NAME=AdonisJs
APP_URL=http://${HOST}:${PORT}
CACHE_VIEWS=false
APP_KEY=Qs1GxZrmQf18YZ9V42FWUUnnxLfPetca
DB_CONNECTION=mysql
DB_HOST=0.0.0.0 <---- here's where I want to use my container's name
DB_PORT=3306
DB_USER=logger-user
DB_PASSWORD=PandaCola-
DB_DATABASE=logger
HASH_DRIVER=bcrypt
If you can tell me, first, whether it's possible, and then how to do it, that would be lovely.
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
Reference
For Example:
version: '2.2'
services:
  redis:
    image: redis
    container_name: cache
    expose:
      - 6379
  app:
    build: ./
    volumes:
      - ./:/var/www/app
    ports:
      - 7731:80
    environment:
      - REDIS_URL=redis://cache
      - NODE_ENV=development
      - PORT=80
    command:
      sh -c 'npm i && node server.js'
networks:
  default:
    external:
      name: "tools"
