Running a container stops another container - node.js

I've created two basic MEAN stack apps that share a common database (MongoDB). I've also built Docker images for these apps.
Problem:
When I start the first app's stack (example-app-1) using
docker-compose up -d --build
the containers run smoothly and I'm also able to hit the app and view my page locally.
When I try to start the second app's stack (example-app-2) using
docker-compose up -d --build
my previous containers are stopped and the new ones work without any flaw.
Required:
I want both of these stacks to run simultaneously against the shared database. I need help achieving this.
docker-compose.yml for example app 1
version: '3'
services:
  example_app_1:
    build:
      dockerfile: dockerfile
      context: ../../
    image: example_app_1:1.0.0
  backend:
    image: 'example_app_1:1.0.0'
    working_dir: /app/example_app_1/backend/example-app-1-api
    environment:
      - DB_URL=mongodb://172.17.0.1:27017/example_app_1
      - BACKEND_PORT=8888
      - BACKEND_IP=0.0.0.0
    restart: always
    ports:
      - '8888:8888'
    command: ['node', 'main.js']
    networks:
      - default
    expose:
      - 8888
  frontend:
    image: 'example_app_1:1.0.0'
    working_dir: /app/example_app_1/frontend/example_app_1
    ports:
      - '5200:5200'
    command: ['http-server', '-p', '5200', '-o', '/app/example_app_1/frontend/example-app-1']
    restart: always
    depends_on:
      - backend
networks:
  default:
    external:
      name: backend_network
docker-compose.yml for Example app 2
version: '3'
services:
  example-app-2:
    build:
      dockerfile: dockerfile
      context: ../../
    image: example_app_2:1.0.0
  backend:
    image: 'example_app_2:1.0.0'
    working_dir: /app/example_app_2/backend/example-app-2-api
    environment:
      - DB_URL=mongodb://172.17.0.1:27017/example_app_2
      - BACKEND_PORT=3333
      - BACKEND_IP=0.0.0.0
    restart: always
    networks:
      - default
    ports:
      - '3333:3333'
    command: ['node', 'main.js']
    expose:
      - 3333
  frontend:
    image: 'example_app_2:1.0.0'
    working_dir: /app/example_app_2/frontend/example-app-2
    ports:
      - '4200:4200'
    command: ['http-server', '-p', '4200', '-o', '/app/example_app_2/frontend/example-app-2']
    restart: always
    depends_on:
      - backend
networks:
  default:
    external:
      name: backend_network

docker-compose names containers <project>_<service>. The project name, by default, comes from the last component of the directory holding the compose file. Since both files define the same service names (backend and frontend) and resolve to the same project name, starting the second docker-compose stops the containers with those names and starts new ones in their place.
Either move the two docker-compose files into directories with different names so the container names differ, or run docker-compose with the --project-name flag, giving each app a separate project name so its containers start under different names.
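For example, with distinct project names the two stacks can run side by side (a sketch; the project names are arbitrary):
# run each command from the corresponding app's compose directory
docker-compose --project-name example_app_1 up -d --build
docker-compose --project-name example_app_2 up -d --build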

Related

Is it possible to mount my existing Mongo DataBase with docker-compose?

After some months of development I got to the point where it is better to dockerize my MERN application. I managed to create a .yaml file and everything works OK, but the problem is that I have already collected a large amount of data. I want to mount this data into the container, but I don't know how to do it. I've read a lot of material, but my data still doesn't appear after composing the application. Here is what my docker-compose.yaml file looks like:
version: '3.9'
services:
  # MongoDB service
  mongo_db:
    container_name: db_container
    image: mongo:latest
    restart: always
    ports:
      - 2717:27017
    volumes:
      - /mnt/c/temp/mongo/db:/data/db
  # Node API service
  api:
    build: .
    ports:
      - 4001:4001
    environment:
      PORT: 4001
      MONGODB_URI: mongodb://db_container:27017
      DB_NAME: project-system
    depends_on:
      - mongo_db
volumes:
  mongo_db:
As you can see in this row:
    volumes:
      - /mnt/c/temp/mongo/db:/data/db
I am trying to point at the path on my C:\ drive, but this doesn't work. I also tried the same row under:
volumes:
  mongo_db:
(at the bottom of the file), but again without success. Basically, my existing DB is at
C:\data\db
How can I point the MongoDB service at this as its data source?
First, you need to create a dump from the local MongoDB and restore those files into the docker MongoDB. You can use these commands:
mongodump --uri 'mongodb://localhost:27017/yourdatabase' --archive=<your file> --gzip
mongorestore --uri 'mongodb://remotehost:27017/yourdatabase' --archive=<your file> --gzip
You should then be able to access the dockerized MongoDB from the local host.
Note: reference this answer if you don't get it working.
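Concretely, with the compose file above publishing port 2717, both steps can run from the host (a sketch; yourdatabase and dump.gz are placeholders):
mongodump --uri 'mongodb://localhost:27017/yourdatabase' --archive=dump.gz --gzip
# restore through the published host port (2717 maps to 27017 in the compose file above)
mongorestore --uri 'mongodb://localhost:2717/yourdatabase' --archive=dump.gz --gzip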
You can change the path you are mounting to make the data persistent. Create a new folder, C:/data/docker_mongo, and point the volume at it:
version: '3.9'
services:
  # MongoDB service
  mongo_db:
    container_name: db_container
    image: mongo:latest
    restart: always
    ports:
      - 2717:27017
    volumes:
      - C:/data/docker_mongo:/data/db
  # Node API service
  api:
    build: .
    ports:
      - 4001:4001
    environment:
      PORT: 4001
      MONGODB_URI: mongodb://db_container:27017
      DB_NAME: project-system
    depends_on:
      - mongo_db
volumes:
  mongo_db:
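If the goal is to reuse the existing data in C:\data\db rather than start fresh, one option is to copy it into the new folder before the first docker-compose up. This assumes the on-disk files were produced by a MongoDB version compatible with the mongo:latest image; otherwise use the dump/restore route above:
:: copy the existing database files (assumes a compatible MongoDB version; run while the stack is stopped)
xcopy C:\data\db C:\data\docker_mongo /E /I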

How to run one server docker container before client container

I have a fairly complex application: Next.js on the client, GraphQL on the backend, and nginx as a reverse proxy.
I am using Next.js's incremental static regeneration on the index page, so I want my server up and running before my client container starts building, because npm run build fetches some data from the GraphQL server. Here is my docker-compose file:
version: "3"
services:
mynginx:
container_name: mynginx
build:
context: ./nginx
dockerfile: Dockerfile
ports:
- 80:80
graphql:
container_name: graphql_server
depends_on:
- mynginx
build:
context: ./server
dockerfile: Dockerfile
mynextjs:
container_name: nextjs_server
depends_on:
- graphql
build:
context: ./client
dockerfile: Dockerfile
Use depends_on with a healthcheck to start a container only when the one it depends on is already working:
https://docs.docker.com/compose/compose-file/compose-file-v2/#healthcheck
Something like this:
services:
  mynginx:
    container_name: mynginx
    build:
      context: ./nginx
      dockerfile: Dockerfile
    ports:
      - 80:80
    healthcheck:
      test: ["CMD-SHELL", "wget -O /dev/null http://localhost || exit 1"]
      timeout: 10s
  graphql:
    ...
    depends_on:
      mynginx:
        condition: service_healthy
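The same pattern can be chained one level further so that mynextjs waits for graphql. A sketch, assuming the GraphQL server answers HTTP on port 4000 (the question doesn't show the real port), that wget is available in the graphql image, and a compose file version that supports depends_on conditions (2.1+, or a current Compose release):
services:
  graphql:
    container_name: graphql_server
    build:
      context: ./server
      dockerfile: Dockerfile
    depends_on:
      mynginx:
        condition: service_healthy
    healthcheck:
      # port 4000 is an assumption; adjust to the real GraphQL port
      test: ["CMD-SHELL", "wget -O /dev/null http://localhost:4000 || exit 1"]
      interval: 5s
      timeout: 10s
      retries: 5
  mynextjs:
    container_name: nextjs_server
    build:
      context: ./client
      dockerfile: Dockerfile
    depends_on:
      graphql:
        condition: service_healthy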
In simple terms, what you need is to wait until the graphql service has properly finished starting before you run the mynextjs service.
I'm afraid depends_on (or links) alone might not work out of the box here. The reason is that although Compose does start the graphql service before the mynextjs service, it doesn't wait until graphql is in a READY state before starting the dependent services.
Basically, what you need is a way to tell Compose to wait until the graphql service is READY.
The solution is described under Control startup and shutdown order in the Compose documentation.
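That page's recommendation boils down to polling the dependency from the dependent container before its main process starts, for example with the wait-for-it.sh script it references. A sketch; port 4000 and the start command are assumptions, and the script must be copied into the client image:
mynextjs:
  container_name: nextjs_server
  build:
    context: ./client
    dockerfile: Dockerfile
  depends_on:
    - graphql
  # wait-for-it.sh must exist in the image; graphql:4000 is an assumed host:port
  command: ["./wait-for-it.sh", "graphql:4000", "--", "npm", "start"]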
I hope this helps you. Cheers 🍻 !!!
If I understand your problem correctly, you need to delay building the image for the frontend container until the backend containers are running and ready.
The easiest way that comes to mind is to use profiles so the two halves can be started independently.
E.g.:
version: "3"
services:
mynginx:
container_name: mynginx
build:
context: ./nginx
dockerfile: Dockerfile
ports:
- 80:80
profiles: ["backend"]
graphql:
container_name: graphql_server
depends_on:
- mynginx
build:
context: ./server
dockerfile: Dockerfile
profiles: ["backend"]
mynextjs:
container_name: nextjs_server
depends_on:
- graphql
build:
context: ./client
dockerfile: Dockerfile
profiles: ["frontend"]
Then chain the startup, something like:
docker-compose --profile backend up -d && docker-compose --profile frontend up -d
Another option might be splitting into two separate compose files, with a shared docker network between them.
ref: https://docs.docker.com/compose/profiles/
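For the split-file option, the same external-network trick used in the first question above applies: create the network once, then point both compose files at it (shared_net is a placeholder name):
$ docker network create shared_net
and in each compose file:
networks:
  default:
    external:
      name: shared_net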

Docker-compose confuses building a frontend with a backend

Using docker-compose I want to build 3 containers: backend (Node.js), frontend (React) and MySQL.
version: '3.8'
services:
  backend:
    container_name: syberiaquotes-restapi
    build: ./backend
    env_file:
      - .env
    command: "sh -c 'npm install && npm run start'"
    ports:
      - '3000:3000'
    volumes:
      - ./backend:/app
      - /app/node_modules
    depends_on:
      - db
  frontend:
    container_name: syberiaquotes-frontend
    build: ./frontend
    ports:
      - '5000:5000'
    volumes:
      - ./frontend/src:/app/src
    stdin_open: true
    tty: true
    depends_on:
      - backend
  db:
    image: mysql:latest
    container_name: syberiaquotes-sql
    env_file:
      - .env
    environment:
      - MYSQL_DATABASE=${SQL_DB}
      - MYSQL_USER=${SQL_USER}
      - MYSQL_ROOT_PASSWORD=${SQL_PASSWORD}
      - MYSQL_PASSWORD=${SQL_PASSWORD}
    volumes:
      - data:/var/lib/mysql
    restart: unless-stopped
    ports:
      - '3306:3306'
volumes:
  data:
My file structure: (screenshot)
Everything worked fine until I added the new 'frontend' container!
It seems that Docker is treating my frontend container as a second backend, because it's trying to launch nodemon, which isn't even included in the frontend's dependencies: (screenshot)
Obviously I have two Dockerfiles, one for each service; they are almost identical.
Backend: (screenshot)
Frontend: (screenshot)
Do you have any ideas where the problem might be?
RESOLVED! I had to delete all containers, images and volumes:
$ docker rm $(docker ps -a -q) -f
$ echo y | docker system prune --all
$ echo y | docker volume prune
and run it again:
$ docker-compose up
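If wiping everything feels heavy-handed, a narrower variant that often catches stale-image problems like this is rebuilding just the affected service without the build cache (a lighter alternative, not what the poster actually ran):
$ docker-compose build --no-cache frontend
$ docker-compose up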

Docker-Compose File Invalid

I am using Docker and docker-compose for the first time and I am getting this error: ERROR: The Compose file is invalid because: Service volumes have neither an image nor a build context specified. At least one must be provided. But my docker-compose file has a build context specified. I tried moving the build context to different places but made no headway. Here is my docker-compose file:
version: '3.4'
services:
  express:
    container_name: express
    image: node:12-alpine
    volumes:
      - type: bind
        source: ./
        target: /app
      - type: volume
        source: nodemodules
        target: /app/node_modules
        volume:
          nocopy: true
    build: .
    working_dir: /app
    command: npm run dev
    ports:
      - '4000:4000'
    environment:
      - NODE_ENV=development
      - PORT=4000
  volumes:
    nodemodules:
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    volumes:
      - ./data:/data/db
    ports:
      - '27017:27017'
      - '27018:27018'
      - '27019:27019'
What exactly could I be doing wrong? Thank you.
As @DavidMaze wrote:
The lines called volumes: and nodemodules: are wrongly indented. They sit at the same level as the other services, but are empty. You probably want to delete these lines.
Take a look at the docker-compose file reference.
I was able to fix it after some days of googling. Apparently, all I had to do was put
volumes:
  nodemodules:
exactly like that on the last lines of the file, and remove them from the services mapping. So my file ended up like this:
version: '3.4'
services:
  express:
    container_name: express
    image: node:12-alpine
    volumes:
      - type: bind
        source: ./
        target: /app
      - type: volume
        source: nodemodules
        target: /app/node_modules
        volume:
          nocopy: true
    build: .
    working_dir: /app
    command: npm run dev
    ports:
      - '4000:4000'
    environment:
      - NODE_ENV=development
      - PORT=4000
    links:
      - mongo
  mongo:
    container_name: mongo
    image: mongo
    volumes:
      - ./data:/data/db
    ports:
      - '27017:27017'
      - '27018:27018'
      - '27019:27019'
volumes:
  nodemodules:
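A quick way to catch structural mistakes like this before bringing the stack up is to let Compose validate the file and print its resolved form:
$ docker-compose config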

call a docker container by it name

I would like to know if it's possible to use my docker container's name as the host instead of its IP.
Let me explain; here's my docker-compose file:
version: "3"
services:
  pandacola-logger:
    build: ./
    networks:
      - logger_db
    volumes:
      - "./:/app"
    ports:
      - 8060:8060
      - 10060:10060
    command: npm run dev
  logger-mysql:
    image: mysql
    networks:
      - logger_db
    command: --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_ROOT_PASSWORD: Carotte1988-
      MYSQL_DATABASE: logger
      MYSQL_USER: logger-user
      MYSQL_PASSWORD: PandaCola-
    ports:
      - 3306:3306
  adminer:
    networks:
      - logger_db
    image: adminer
    restart: always
    ports:
      - 8090:8090
networks:
  logger_db: {}
Sorry, the indentation is a bit messy.
I would like to use the name of my logger-mysql container in the .env file of my web service (pandacola-logger) instead of its IP address.
here's the .env file
HOST=0.0.0.0
PORT=8060
NODE_ENV=development
APP_NAME=AdonisJs
APP_URL=http://${HOST}:${PORT}
CACHE_VIEWS=false
APP_KEY=Qs1GxZrmQf18YZ9V42FWUUnnxLfPetca
DB_CONNECTION=mysql
DB_HOST=0.0.0.0 <---- here's where I want to use my container's name
DB_PORT=3306
DB_USER=logger-user
DB_PASSWORD=PandaCola-
DB_DATABASE=logger
HASH_DRIVER=bcrypt
If you can tell me, first, whether it's possible, and then how to do it, that would be lovely.
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
Reference
For Example:
version: '2.2'
services:
  redis:
    image: redis
    container_name: cache
    expose:
      - 6379
  app:
    build: ./
    volumes:
      - ./:/var/www/app
    ports:
      - 7731:80
    environment:
      - REDIS_URL=redis://cache
      - NODE_ENV=development
      - PORT=80
    command:
      sh -c 'npm i && node server.js'
networks:
  default:
    external:
      name: "tools"
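Applied to the question's setup: pandacola-logger and logger-mysql share the logger_db network, so the service name should work directly as the hostname in the .env file:
# service name taken from the compose file above
DB_HOST=logger-mysql
DB_PORT=3306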
