I am building an app that needs a database, a CMS, and a frontend (Node.js), so I made a docker-compose file to deploy everything.
I want my frontend to wait for the CMS to be ready (serving at port 1337), but depends_on doesn't seem to be working: my frontend runs npm run build before my backend has finished its npm install.
Here's my docker-compose.yaml:
version: "3"
services:
database:
build: ./database
container_name: "mariaDB"
restart: always
ports:
- "3306:3306"
backend:
build: ./server
container_name: "backend"
restart: always
ports:
- "1337:1337"
depends_on:
- database
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:1337/", '||', "exit", "1"]
interval: 30s
timeout: 10s
retries: 5
frontend:
build: ./client
container_name: "frontend"
restart: always
ports:
- "80:3000"
depends_on:
backend:
condition: service_healthy
I have heard about healthchecks, but that isn't working either (I may be wrong in my implementation of it).
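For reference, a minimal sketch of one way this is commonly written (assuming curl is available inside the backend image): the || fallback is only interpreted when the check runs through a shell (CMD-SHELL); in the exec form above, "||" is passed to curl as a literal argument. Note also that the long-form depends_on with condition was dropped from the legacy version: "3" file format and is only honored by tools that implement the Compose specification (e.g. Docker Compose v2).
backend:
  # ...build, ports, etc. as above...
  healthcheck:
    # run the check through a shell so "|| exit 1" acts as a fallback
    test: ["CMD-SHELL", "curl -f http://localhost:1337/ || exit 1"]
    interval: 30s
    timeout: 10s
    retries: 5
frontend:
  # ...build, ports, etc. as above...
  depends_on:
    backend:
      condition: service_healthy   # start only once the healthcheck reports healthy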
Related
I am trying to send an HTTP request with Axios from a React app (built into an Nginx container) to an Express app (built into a Node container).
When I use the container name in the HTTP request to communicate between different backend containers, everything works,
but when I try it from the React app it fails with a CORS error. It only works if I explicitly type the Express container's IP address.
Since I am using docker-compose to build everything, I don't know the address beforehand.
Is there a way to get around this issue?
cinema-svc:
build: ../cinema_web-service/cinema-svc
container_name: cinema-svc
depends_on:
- auth-svc
- subs-svc
- users-svc
networks:
- cinemanet
- frontnet
restart: unless-stopped
react-client:
build: ../react_client/client
container_name: react-client
depends_on:
- cinema-svc
networks:
- frontnet
ports:
- "80:80"
restart: unless-stopped
Update
This is what solved my issue:
cinema-svc:
build: ../cinema_web-service/cinema-svc
container_name: cinema-svc
depends_on:
- auth-svc
- subs-svc
- users-svc
networks:
cinemanet:
frontnet:
ipv4_address: '172.25.0.20'
restart: unless-stopped
react-client:
build: ../react_client/client
container_name: react-client
depends_on:
- cinema-svc
networks:
- frontnet
ports:
- "80:80"
restart: unless-stopped
networks:
frontnet:
driver: bridge
ipam:
config:
- subnet: 172.25.0.0/24
I would like to implement hot reloading for my development environment, so that when I change anything in the source code, the change is propagated into the Docker container via a mounted volume and I can see it live on localhost.
Below is my docker-compose file:
version: '3.9'
services:
server:
restart: always
build:
context: ./server
dockerfile: Dockerfile
volumes:
# don't overwrite this folder in container with the local one
- ./app/node_modules
# map current local directory to the /app inside the container
#This is a must for development in order to update our container whenever a change to the source code is made. Without this, you would have to rebuild the image each time you make a change to source code.
- ./server:/app
# ports:
# - 3001:3001
depends_on:
- mongodb
environment:
NODE_ENV: ${NODE_ENV}
MONGO_URI: mongodb://${MONGO_ROOT_USERNAME}:${MONGO_ROOT_PASSWORD}@mongodb
networks:
- anfel-network
client:
stdin_open: true
build:
context: ./client
dockerfile: Dockerfile
volumes:
- ./app/node_modules
- ./client:/app
# ports:
# - 3000:3000
depends_on:
- server
networks:
- anfel-network
mongodb:
image: mongo
restart: always
ports:
- 27017:27017
environment:
MONGO_INITDB_ROOT_USERNAME: ${MONGO_ROOT_USERNAME}
MONGO_INITDB_ROOT_PASSWORD: ${MONGO_ROOT_PASSWORD}
volumes:
# for persistence storage
- mongodb-data:/data/db
networks:
- anfel-network
# mongo express used during development
mongo-express:
image: mongo-express
depends_on:
- mongodb
ports:
- 8081:8081
environment:
ME_CONFIG_MONGODB_ADMINUSERNAME: ${MONGO_ROOT_USERNAME}
ME_CONFIG_MONGODB_ADMINPASSWORD: ${MONGO_ROOT_PASSWORD}
ME_CONFIG_MONGODB_PORT: 27017
ME_CONFIG_MONGODB_SERVER: mongodb
ME_CONFIG_BASICAUTH_USERNAME: root
ME_CONFIG_BASICAUTH_PASSWORD: root
volumes:
- mongodb-data
networks:
- anfel-network
nginx:
restart: always
depends_on:
- server
- client
build:
context: ./nginx
dockerfile: Dockerfile
ports:
- '8080:80'
networks:
- anfel-network
# volumes:
# - ./nginx/default.conf:/etc/nginx/conf.d/default.conf
networks:
anfel-network:
driver: bridge
volumes:
mongodb-data:
driver: local
Any suggestions would be appreciated.
You have to create a bind mount; the sketch below may help you.
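For example, a minimal sketch of the server service with a bind mount for live reload (assuming the image copies the app into /app and that the container's start command is a file watcher such as nodemon, which is not shown in the original):
server:
  build:
    context: ./server
    dockerfile: Dockerfile
  volumes:
    # bind mount: map the local source into the container so edits show up immediately
    - ./server:/app
    # anonymous volume: keep the image's node_modules instead of the host's
    # (note the leading "/"; "./app/node_modules" would bind-mount a host folder instead)
    - /app/node_modules
  # the process inside the container must watch for file changes, e.g.:
  # command: ["npx", "nodemon", "index.js"]   # hypothetical entry point
If the client is a create-react-app dev server, file watching across a bind mount sometimes also needs CHOKIDAR_USEPOLLING=true in its environment.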
I'm running my containers via a docker-compose file. They are on the same network, and I can ping my database container from my backend container. I use the database service name as the hostname in the connection string, and it doesn't raise any error about not finding the host; instead, it just hangs and times out.
I have a test endpoint that is just supposed to test the connection. When I use that endpoint, the database container logs "invalid packet length", nothing happens on the frontend, and then it times out. I have no idea what's wrong. Any help?
version: '3.2'
services:
server:
restart: always
build:
dockerfile: Dockerfile
context: ./nginx
depends_on:
- backend
- frontend
- database
ports:
- '5000:80'
networks:
- app_network
database:
image: postgres:latest
container_name: database
ports:
- "5432:5432"
restart: always
hostname: database
environment:
POSTGRES_PASSWORD: 1234
POSTGRES_USER: postgres
backend:
build:
context: ./backend
dockerfile: ./Dockerfile
image: kalendae:backend
hostname: backend
container_name: backend
environment:
- WAIT_HOSTS=database:5432
- DATABASE_HOST=database
- DATABASE_PORT=5432
- PORT=5051
frontend:
build:
context: ./frontend
dockerfile: ./Dockerfile
image: kalendae:frontend
hostname: frontend
container_name: frontend
environment:
- WAIT_HOSTS=backend:5051
- REACT_APP_BACKEND_HOST=localhost
- REACT_APP_BACKEND_PORT=5051
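For reference, the settings implied by this file are host database, port 5432, user postgres, password 1234, so the backend's connection string should resolve to something like postgresql://postgres:1234@database:5432/postgres. A sketch of passing the credentials to the backend explicitly (the extra variable names are hypothetical and must match whatever the backend code actually reads):
backend:
  environment:
    - WAIT_HOSTS=database:5432
    - DATABASE_HOST=database        # the compose service name doubles as the hostname
    - DATABASE_PORT=5432
    - DATABASE_USER=postgres        # hypothetical variable names; they must match what
    - DATABASE_PASSWORD=1234        # the backend reads when building its connection string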
I'm trying to make requests to my server container from my client app in another container. The Docker Compose docs state that the network is set up automatically, so shouldn't all ports be accessible from all containers? When I make a curl request to port 4000 from outside the containers (in a fresh terminal), it works. However, when I enter the client container (selektor-client) and try the same request, it fails.
curl --request POST http://localhost:4000/api/music
What am I doing wrong?
docker-compose.yaml:
version: "3"
services:
client:
container_name: selektor-client
restart: always
build: ./client
ports:
- "3000:3000"
volumes:
- ./client/:/client/
- /client/node_modules/
command: ["yarn", "start"]
server:
container_name: selektor-server
restart: always
build: ./server
ports:
- "4000:4000"
volumes:
- ./server/:/server/
depends_on:
- mongo
command: ["yarn", "start"]
mongo:
container_name: selektor-mongo
command: mongod --noauth
build: .
restart: always
volumes:
# - ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
- data-volume:/data/db
ports:
- "27017:27017"
volumes:
data-volume:
When containers in the same stack call each other, the hostname is, by default, the service name, and the port is the container's internal port: <servicename>:<internal_port>. So, based on this part of your example:
version: "3"
server:
container_name: selektor-server
restart: always
build: ./server
ports:
- "4000:4000"
volumes:
- ./server/:/server/
depends_on:
- mongo
command: ["yarn", "start"]
The URL your client has to use to reach the server is http://server:4000.
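For example, the request from the question, run from inside the client container, would become:
curl --request POST http://server:4000/api/music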
I had the same problem. My solution was to define a network with a subnet mask and register a static IP for each container:
version: "3"
networks:
my_network:
driver: bridge
ipam:
config:
- subnet: 172.20.0.0/24
services:
mongodb:
image: mongo:4.2.1
container_name: mongo
command: mongod --auth
hostname: mongo
networks:
my_network:
ipv4_address: 172.20.0.5
volumes:
- /data/db/mongo:/data/db
ports:
- "27017:27017"
rabitmq:
hostname: rabbitmq
container_name: rabbitmq
image: rabbitmq:latest
networks:
my_network:
ipv4_address: 172.20.0.3
volumes:
- /var/lib/rabbitmq:/data/db
ports:
- "5672:5672"
- "15672:15672"
restart: always
You can then access the containers through their IP addresses.
I've created two basic MEAN stack apps with a common database (MongoDB), and I've built Docker images for both of them.
Problem:
When I start the first MEAN stack app (example-app-1) using
docker-compose up -d --build
the container runs smoothly and I'm able to hit it and view my page locally.
When I then try to start the second app (example-app-2) using
docker-compose up -d --build
my previous containers are stopped and the new ones work without any flaw.
Required:
I want both apps to run simultaneously with a shared database. I need help achieving this.
docker-compose.yml for example app 1
version: '3'
services:
example_app_1:
build:
dockerfile: dockerfile
context: ../../
image: example_app_1:1.0.0
backend:
image: 'example_app_1:1.0.0'
working_dir: /app/example_app_1/backend/example-app-1-api
environment:
- DB_URL=mongodb://172.17.0.1:27017/example_app_1
- BACKEND_PORT=8888
- BACKEND_IP=0.0.0.0
restart: always
ports:
- '8888:8888'
command: ['node', 'main.js']
networks:
- default
expose:
- 8888
frontend:
image: 'example_app_1:1.0.0'
working_dir: /app/example_app_1/frontend/example_app_1
ports:
- '5200:5200'
command: ['http-server', '-p', '5200', '-o', '/app/example_app_1/frontend/example-app-1']
restart: always
depends_on:
- backend
networks:
default:
external:
name: backend_network
docker-compose.yml for Example app 2
version: '3'
services:
example-app-2:
build:
dockerfile: dockerfile
context: ../../
image: example_app_2:1.0.0
backend:
image: 'example-app-2:1.0.0'
working_dir: /app/example_app_2/backend/example-app-2-api
environment:
- DB_URL=mongodb://172.17.0.1:27017/example_app_2
- BACKEND_PORT=3333
- BACKEND_IP=0.0.0.0
restart: always
networks:
- default
ports:
- '3333:3333'
command: ['node', 'main.js']
expose:
- 3333
frontend:
image: 'example-app-2:1.0.0'
working_dir: /app/example_app_2/frontend/example-app-2
ports:
- '4200:4200'
command: ['http-server', '-p', '4200', '-o', '/app/example_app_2/frontend/example-app-2']
restart: always
depends_on:
- backend
networks:
default:
external:
name: backend_network
docker-compose creates containers named <project>_<service>_<index>. The project name, by default, comes from the last component of the directory name, so when you start the second compose file under the same project name, it stops the containers with those names and starts new ones.
Either move the two docker-compose files into separate directories so the container names differ, or run docker-compose with the --project-name flag, giving each file its own project name, so the containers are created under different names.
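For example (a sketch; the project names are arbitrary), from the directory of each compose file:
docker-compose --project-name example_app_1 up -d --build
docker-compose --project-name example_app_2 up -d --build
Each invocation then gets its own set of container names, so bringing up the second app no longer replaces the first.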