How to Build Docker-compose file for multiple containers - node.js

I have two different containers (1. a frontend app, 2. a web server), where the first depends on the server.
Currently both containers are built using separate Dockerfiles and run perfectly fine in a localhost environment. The application is built using Node.js and Angular.
I am using Docker Desktop on Windows Server 2019.
I am using a user-defined network so the two containers can communicate with each other:
docker network create --driver bridge dev_network
I need to write a Docker Compose file for both of them, but I don't have enough knowledge of how to build a working compose file. I'd be glad if anyone could help me with this.
Thanks for your time!
Frontend Dockerfile:
FROM node:latest as build
WORKDIR /usr/local/app
COPY ./ /usr/local/app/
RUN npm install
FROM nginx:latest
COPY --from=build /usr/local/app/dist/ClientPortal /usr/share/nginx/html
EXPOSE 80
WebServer Dockerfile:
FROM node:14-alpine
ENV NODE_ENV=production
WORKDIR /usr/src/app
COPY ["package.json", "package-lock.json*", "npm-shrinkwrap.json*", "./"]
RUN npm install --production --silent && mv node_modules ../
COPY . .
EXPOSE 3000 8600
CMD ["node", "server.js"]
I tried the following compose file. I pointed the volume at a Windows drive because the server's folders, along with its Dockerfile, are located in that directory.
The issue is that docker-compose build builds both images, but when I fire the up command the server container fails to start and exits with exit code 1.
Error message: Error: Cannot find module '/usr/src/app/server.js'
Docker-Compose file:
version: '3.4'
services:
  clientportal:
    image: clientportal
    container_name: cspfrontend
    build:
      context: .
      dockerfile: ./Dockerfile
    environment:
      NODE_ENV: production
    networks:
      - dev_network
    ports:
      - 80:80
  clientportalserver:
    image: clientportalserver
    container_name: cspserver
    build:
      context: .
      dockerfile: E:\Work\ClientPortalServer/Dockerfile
    volumes:
      - E:\Work\ClientPortalServer
    networks:
      - dev_network
    ports:
      - 3000:3000
networks:
  dev_network:
    driver: bridge
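Two details in this file likely explain the "Cannot find module" error. First, the server service builds with context: ., so its COPY . . copies the compose root's files rather than the server project's, and server.js never makes it into the image; the context should point at the server folder. Second, a volumes: entry without a colon is an anonymous volume, not a bind mount; a bind mount needs the host:container form. A sketch of a corrected server service, under the assumption that the server code lives in E:\Work\ClientPortalServer:

```yaml
  clientportalserver:
    image: clientportalserver
    container_name: cspserver
    build:
      # build from the server folder so COPY . . picks up server.js
      context: E:\Work\ClientPortalServer
    # optional: bind-mount the source over the image's WORKDIR
    # volumes:
    #   - E:\Work\ClientPortalServer:/usr/src/app
    networks:
      - dev_network
    ports:
      - 3000:3000
```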

I would go with something like this in a docker-compose.yml in the root folder:
version: "3"
services:
  frontend:
    build: ./frontend
    ports:
      - 80:80
    networks:
      - dev_network
    depends_on:
      - backend
  backend:
    build: ./backend
    ports:
      - 3000:3000
    expose:
      - "3000"
      - "8600"
    networks:
      - dev_network
networks:
  dev_network:
    driver: bridge
This requires you to have a frontend and a backend folder for your projects, and in those folders you have to have the Dockerfiles that you showed. I don't know which port you are using in your backend project, so I guessed port 3000.
project root
│ docker-compose.yml
│
└───frontend
│ │ Dockerfile
│ │ ...
│
└───backend
│ Dockerfile
│ ...

You should start by reading the documentation on the website.
https://docs.docker.com/compose/
The first page shows a working example of a simple compose file with a web service and a redis service. If you manage to get those containers running, the options found in the Compose file reference will feel very familiar.
You may find this link of particular interest in regards to using an existing Docker network:
https://docs.docker.com/compose/networking/#use-a-pre-existing-network
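If you would rather keep using the dev_network you already created with docker network create, Compose can attach to it instead of creating its own; marking the network external is the key. A minimal sketch:

```yaml
version: "3"
services:
  frontend:
    networks:
      - dev_network
networks:
  dev_network:
    external: true # attach to the pre-existing network instead of creating one
```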

Related

NestJS does not connect with MongoDB when using Docker containers

The NestJS app connects normally with MongoDB, but after creating Docker containers for them, NestJS does not connect with MongoDB.
Here's the Dockerfile:
# Base image
FROM node:16-alpine
# Create app directory
WORKDIR /app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
# Install app dependencies
RUN yarn install
# Bundle app source
COPY . .
# Creates a "dist" folder with the production build
RUN yarn build
Here's the docker-compose file:
version: '3.8'
services:
  mongodb:
    image: mongo:latest
    env_file:
      - .env
    ports:
      - 27017:27017
    volumes:
      - mongodb_data_container:/data/db
  api:
    build: .
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - ${PORT}:${PORT}
    command: npm run start:dev
    env_file:
      - .env
    depends_on:
      - mongodb
volumes:
  mongodb_data_container:
Here's the .env file:
PORT=3000
DB_CONNECTION_STRING=mongodb://127.0.0.1:27017/db-name
Here's the connect method inside the NestJS app:
MongooseModule.forRoot(process.env.DB_CONNECTION_STRING)
For everyone facing the same issue: replace mongodb://127.0.0.1:27017/db-name with mongodb://mongodb:27017/db-name.
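The reason this works: inside the api container, 127.0.0.1 refers to that container itself, not to the host machine or to the mongodb container. Compose places all services on a shared network where each service name resolves, via Compose's embedded DNS, to that service's container. The corrected .env would therefore read:

```
PORT=3000
DB_CONNECTION_STRING=mongodb://mongodb:27017/db-name
```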

Docker-compose builds but app does not serve on localhost

Docker newbie here. The docker-compose file builds without any issues, but when I try to open my app on localhost:4200 I get "localhost didn't send any data" in Chrome, and Safari says the server unexpectedly dropped the connection. I am working on macOS Catalina. Here is my yml file:
version: '3.0'
services:
  my-portal:
    build: .
    ports:
      - "4200:4200"
    depends_on:
      - backend
  backend:
    build: ./backend
    ports:
      - "3000:3000"
    environment:
      POSTGRES_HOST: host.docker.internal
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: mypwd
    depends_on:
      - db
  db:
    image: postgres:9.6-alpine
    environment:
      POSTGRES_DB: mydb
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: mypwd
      POSTGRES_HOST: host.docker.internal
    ports:
      - 5432:5432
    restart: always
    volumes:
      - ./docker/db/data:/var/lib/postgresql/data
Log for Angular:
/docker-entrypoint.sh: Configuration complete; ready for start up
Log for Node: db connected
Log for Postgres: database system is ready to accept connections
Below are my Angular and Node Dockerfiles:
FROM node:latest AS builder
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build --prod
EXPOSE 4200
# Stage 2
FROM nginx:alpine
COPY --from=builder /app/dist/* /usr/share/nginx/html/
Node:
FROM node:12
WORKDIR /backend
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "node", "server.js" ]
When I created the Angular image alone and ran the app on localhost:4200, it worked fine. Please let me know if I am missing anything.
Your Angular container is built FROM nginx, and you use the default Nginx configuration from the Docker Hub nginx image. That configuration listens on port 80, so that's the port number you need to use in the ports: directive:
services:
  quickcoms-portal:
    build: .
    ports:
      - "4200:80" # <-- second port must match the nginx image's port
    depends_on:
      - backend
The EXPOSE directive in the first stage is completely ignored and you can delete it. The FROM nginx line causes docker build to essentially start over from a new base image, so your final image is stock Nginx plus the files you COPY --from=builder.
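To make that concrete, here is the question's Dockerfile annotated stage by stage (a sketch; the RUN lines are combined for brevity):

```dockerfile
# Stage 1: build stage. Everything produced here is discarded unless a
# later stage copies it out with COPY --from.
FROM node:latest AS builder
WORKDIR /app
COPY . .
RUN npm install && npm run build --prod
# EXPOSE 4200 here would have no effect: this stage never runs as a container.

# Stage 2: the final image is stock nginx plus the copied files.
# The default nginx configuration listens on port 80.
FROM nginx:alpine
COPY --from=builder /app/dist/* /usr/share/nginx/html/
```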

Run multiple Docker containers at once using docker-compose

The Problem
Currently I've created a Dockerfile and a docker-compose.yml to run my rest-api and database using docker-compose up.
What I want to do now is add another container, namely the web application (built with React). I'm a little confused about how to do that, since I only started learning Docker two days ago.
Folder Structure
This is my current folder structure
Folder: rest-api (NodeJS)
Dockerfile
dockercompose.yml
The Question
In the end I want to be able to run docker-compose up to fire up both the rest-api and the web-app.
Do I need to create a separate Dockerfile in every folder and create a 'global' docker-compose.yml to link everything together?
New folder structure:
dockercompose.yml
Folder: rest-api (NodeJS)
Dockerfile
Folder: web-app (React)
Dockerfile
My current setup to run the rest-api and database
Dockerfile
FROM node:13.10
# The destination of the app in the container
WORKDIR /usr/src/app
# Copy package.json, package-lock.json and tsconfig.json to the specified workdir
COPY package*.json ./
COPY tsconfig.json ./
# Postgres connection settings
ENV POSTGRES_USER root
ENV POSTGRES_PASSWORD 12345
ENV POSTGRES_DB postgres
ENV POSTGRES_URI 'postgres://postgres:12345@postgres:5432/postgres'
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
docker-compose.yml
version: '3'
services:
  node:
    container_name: rest-api
    restart: always
    build: .
    environment:
      PORT: 3000
    ports:
      - '80:3000'
    links:
      - postgres
  postgres:
    container_name: postgres-database
    image: postgres
    environment:
      POSTGRES_URI: 'postgres://postgres:12345@postgres-database:5432/postgres'
      POSTGRES_PASSWORD: 12345
    ports:
      - '5432:5432'
OK, so there are quite a few ways to approach this, and it is more or less based on your preference.
If you want to go with your proposed folder structure (which is fine), then you can, for example, do it like so:
Have a Dockerfile in the root of each of your applications which will build the specific application (as you already suggested), place your docker-compose.yml file in the parent folder of both applications (exactly as you proposed), and then just make some changes to your docker-compose.yml. I only left the essential parts. Note that links are no longer necessary: the internal networking will resolve the service names to the corresponding service IP addresses.
version: '3'
services:
  node:
    build:
      context: rest-api
    environment:
      PORT: 3000
    ports:
      - '3000:3000'
  web:
    image: web-app
    build:
      context: web-app
    ports:
      - 80:80
  postgres:
    image: postgres
    environment:
      POSTGRES_URI: 'postgres://postgres:12345@postgres-database:5432/postgres'
      POSTGRES_PASSWORD: 12345
    ports:
      - '5432:5432'
So the context tells Docker that what you are building is actually in a different directory, and all of the commands executed in the Dockerfile will be relative to that folder.
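For example, with context: rest-api, Docker sends the rest-api folder to the daemon as the build context, so paths in that Dockerfile resolve against rest-api/:

```yaml
services:
  node:
    build:
      context: rest-api        # COPY package*.json ./ copies rest-api/package*.json
      # dockerfile: Dockerfile # optional; defaults to Dockerfile inside the context
```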
I also changed the port mappings, because you will probably want to access your web app via the standard HTTP port. Note that the web-app container will be able to communicate with the rest-api container using the node hostname, as long as the node service binds to 0.0.0.0:3000 (not 127.0.0.1:3000).

Docker Compose with Docker Toolbox: Node, Mongo, React. React app not showing at the said address

I am trying to run an Express server and a React app through Docker containers.
The Express server runs correctly at the given address (the one in the Kitematic GUI).
However, I am unable to open the React application at the given address; I get "site cannot be reached".
Running Windows 10 Home with Docker Toolbox.
React app dockerfile:
FROM node:10
# Set the working directory to /frontend
WORKDIR /frontend
# Copy package.json into the container at /frontend
COPY package*.json ./
# install dependencies
RUN npm install
# Copy the current directory contents into the container at /frontend
COPY . .
# Make port 3001 available to the world outside this container
EXPOSE 3001
# Run the app when the container launches
CMD ["npm", "run", "start"]
Node/Express dockerfile:
# Use Node 10 as a parent image
FROM node:10
# Set the working directory to /backend
WORKDIR /backend
# Copy package.json into the container at /backend
COPY package*.json ./
# install dependencies
RUN npm install
# Copy the current directory contents into the container at /backend
COPY . .
# Make port 3000 available to the world outside this container
EXPOSE 3000
# Run the app when the container launches
CMD ["npm", "start"]
Docker compose file:
version: '3'
services:
  client:
    container_name: hydrahr-client
    build: .\frontend
    restart: always
    environment:
      - REACT_APP_BASEURL=${REACT_APP_BASEURL}
    expose:
      - ${REACT_PORT}
    ports:
      - "3001:3001"
    links:
      - api
  api:
    container_name: hydrahr-api
    build: ./backend
    restart: always
    expose:
      - ${SERVER_PORT}
    environment: [
      'API_HOST=${API_HOST}',
      'MONGO_DB=${MONGO_DB}',
      'JWT_KEY=${JWT_KEY}',
      'JWT_HOURS_DURATION=${JWT_HOURS_DURATION}',
      'IMAP_EMAIL_LISTENER=${IMAP_EMAIL_LISTENER}',
      'IMAP_USER=${IMAP_USER}',
      'IMAP_PASSWORD=${IMAP_PASSWORD}',
      'IMAP_HOST=${IMAP_HOST}',
      'IMAP_PORT=${IMAP_PORT}',
      'IMAP_TLS=${IMAP_TLS}',
      'SMTP_EMAIL=${SMTP_EMAIL}',
      'SMTP_PASSWORD=${SMTP_PASSWORD}',
      'SMTP_HOST=${SMTP_HOST}',
      'SMTP_PORT=${SMTP_PORT}',
      'SMTP_TLS=${SMTP_TLS}',
      'DEFAULT_SYSTEM_PASSWORD=${DEFAULT_SYSTEM_PASSWORD}',
      'DEFAULT_SYSTEM_EMAIL=${DEFAULT_SYSTEM_EMAIL}',
      'DEFAULT_SYSTEM_NAME=${DEFAULT_SYSTEM_NAME}',
      'SERVER_PORT=${SERVER_PORT}'
    ]
    ports:
      - "3000:3000"
    depends_on:
      - mongo
  mongo:
    image: mongo
    restart: always
    container_name: mongo
    ports:
      - "27017:27017"
Running with docker-compose up -d
UPDATE 1:
I am able to run the React application using docker run -p 3000:3000 hydra-client-test after building that image.
Since running the container with -p 3000:3000 works, the client is actually listening on port 3000 inside the container. Try setting:
ports:
  - 3001:3000

Identical Docker images with different containers and ports not accessible

I'm running a Docker host on my Windows dev machine and have two identical images exposing different ports (3000, 3001). Using the following docker-compose file I build and run the containers, but the container on port 3001 isn't available via localhost or my IP address.
Dockerfile
FROM mhart/alpine-node:8
# Create an app directory (in the Docker container)
RUN mkdir -p /testdirectory
WORKDIR /testdirectory
COPY package.json /testdirectory
RUN npm install --loglevel=warn
COPY . /testdirectory
EXPOSE 3000
CMD ["node", "index.js"]
Dockerfile
FROM mhart/alpine-node:8
# Create an app directory (in the Docker container)
RUN mkdir -p /test2directory
WORKDIR /test2directory
COPY package.json /test2directory
RUN npm install --loglevel=warn
COPY . /test2directory
EXPOSE 3001
CMD ["node", "index.js"]
Docker-Compose file
version: '3'
services:
  testdirectory:
    container_name: testdirectory
    environment:
      - DEBUG=1
      - NODE_ENV=production
      - NODE_NAME=testdirectory
      - NODE_HOST=localhost
      - NODE_PORT=3000
      - DB_HOST=mongodb://mongo:27017/testdirectory
      - DB_PORT=27017
    build:
      context: ./test-directory
    volumes:
      - .:/usr/app/
      - /usr/app/node_modules
    ports:
      - "3000:3000"
    depends_on:
      - mongodb
    command: npm start
  test2directory:
    container_name: test2directory
    environment:
      - DEBUG=1
      - NODE_ENV=production
      - NODE_NAME=test2directory
      - NODE_HOST=localhost
      - NODE_PORT=3001
      - DB_HOST=mongodb://mongo:27017/test2directory
      - DB_PORT=27017
    build:
      context: ./test2-directory
    volumes:
      - .:/usr/app/
      - /usr/app/node_modules
    ports:
      - "3001:3001"
    depends_on:
      - mongodb
    command: npm start
  mongodb:
    image: mongo:3.4.4
    container_name: mongo
    ports:
      - 27017:27017
    volumes:
      - /data/db:/data/db
Is there anything obvious I'm missing? When I run
docker container port test2directory
it returns
3001/tcp -> 0.0.0.0:3001
Found the problem! Setting the HOST to localhost in the container caused the issue; changing it to 0.0.0.0 got it working.
