Docker-compose builds but app does not serve on localhost - node.js

Docker newbie here. My docker-compose file builds without any issues, but when I try to open my app on localhost:4200 I get "localhost didn't send any data" in Chrome and "the server unexpectedly dropped the connection" in Safari. I am working on macOS Catalina. Here is my yml file:
version: '3.0'
services:
  my-portal:
    build: .
    ports:
      - "4200:4200"
    depends_on:
      - backend
  backend:
    build: ./backend
    ports:
      - "3000:3000"
    environment:
      POSTGRES_HOST: host.docker.internal
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: mypwd
    depends_on:
      - db
  db:
    image: postgres:9.6-alpine
    environment:
      POSTGRES_DB: mydb
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: mypwd
      POSTGRES_HOST: host.docker.internal
    ports:
      - 5432:5432
    restart: always
    volumes:
      - ./docker/db/data:/var/lib/postgresql/data
Log for Angular:
/docker-entrypoint.sh: Configuration complete; ready for start up
Log for Node: db connected
Log for Postgres: database system is ready to accept connections
Below are my Angular and Node Dockerfiles:
FROM node:latest AS builder
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build --prod
EXPOSE 4200
# Stage 2
FROM nginx:alpine
COPY --from=builder /app/dist/* /usr/share/nginx/html/
Node:
FROM node:12
WORKDIR /backend
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "node", "server.js" ]
When I built the Angular image on its own and ran the app on localhost:4200, it worked fine. Please let me know if I am missing anything.

Your Angular container is built FROM nginx, and you use the default Nginx configuration from the Docker Hub nginx image. That listens on port 80, so that's the port number you need to use in the ports: directive:
services:
  my-portal:
    build: .
    ports:
      - "4200:80" # <-- second port must match nginx image's port
    depends_on:
      - backend
The EXPOSE directive in the first stage is ignored and you can delete it. The FROM nginx line causes docker build to start over from a new base image, so your final image is stock Nginx plus the files you COPY --from=builder.
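For reference, a corrected Angular Dockerfile would look like this minimal sketch, identical to the original except the ignored EXPOSE 4200 is dropped (it assumes the build output lands under /app/dist, as in the original):

FROM node:latest AS builder
WORKDIR /app
COPY . .
RUN npm install
RUN npm run build --prod

# Stage 2: stock Nginx serves the built files and listens on port 80
FROM nginx:alpine
COPY --from=builder /app/dist/* /usr/share/nginx/html/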

Related

WSL2, Docker & Node: Unable to request Node

I created a JS app with Docker Compose, with a frontend, a backend, and a common component, using Yarn Workspaces. It works on Linux. I am out of ideas on how to make it work on WSL.
The Docker Compose file:
# Use postgres/example user/password credentials
version: '3.1'
services:
  postgres:
    image: postgres:latest
    # restart: always
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_DB: caddie_app
    ports:
      - '5432:5432'
  backend:
    image: node:16
    volumes:
      - '.:/app'
    ports:
      - '3001:3001' # Nest
    depends_on:
      - postgres
    working_dir: /app
    command: ["yarn", "workspace", "@caddie/backend", "start:dev"]
    environment:
      # inside Docker we reach postgres over the Compose network, but it is reachable at localhost on our host
      DATABASE_URL: postgresql://postgres:password@postgres:5432/caddie_app?schema=public
  frontend:
    image: node:16
    volumes:
      - '.:/app'
    ports:
      - '3000:3000' # React
    depends_on:
      - backend
    working_dir: /app
    command: ["yarn", "workspace", "@caddie/frontend", "start"]
I can reach the database with DBeaver, I can fetch the React JS scripts on localhost:3000, but I cannot request the NestJS server on localhost:3001.
The NestJS server is listening on 0.0.0.0
await app.listen(3001, '0.0.0.0');
I allowed ports 3000 & 3001 in the Firewall. I tried requesting the Node.js server directly through the WSL IP found in ipconfig, but the problem remains. I can't figure out what's wrong.
Thanks!
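One way to narrow down where the connection drops (a diagnostic sketch, not from the original post; replace <wsl-ip> with the address from ipconfig):

# From inside WSL: does the backend answer locally?
curl -v http://localhost:3001
# From Windows PowerShell: does it answer on the WSL address?
curl.exe -v http://<wsl-ip>:3001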

Docker NodeJS app can't connect to postgres database

I am fairly new to docker and docker-compose. I tried to spin up a few services using docker, consisting of a Node.js (NestJS) API, a postgres db, and pgadmin. Before the API (Node.js) app was dockerized I could connect to the docker database containers, but now that I have also dockerized the node app, it is not connecting anymore and I am clueless why. Is there anything wrong with the way I have set it up?
Here is my docker-compose file:
version: "3"
services:
nftapi:
env_file:
- .env
build:
context: .
ports:
- '5000:5000'
depends_on:
- postgres
volumes:
- .:/app
- /app/node_modules
networks:
- postgres
postgres:
container_name: postgres
image: postgres:latest
ports:
- "5432:5432"
volumes:
- /data/postgres:/data/postgres
env_file:
- docker.env
networks:
- postgres
pgadmin:
links:
- postgres:postgres
container_name: pgadmin
image: dpage/pgadmin4
ports:
- "8080:80"
volumes:
- /data/pgadmin:/root/.pgadmin
env_file:
- docker.env
networks:
- postgres
networks:
postgres:
driver: bridge
This is the nodejs app Dockerfile, which builds successfully; in the logs I see the app trying to connect to the database, but it can't (no specific error), it just doesn't find the db.
# Image source
FROM node:14-alpine
# Docker working directory
WORKDIR /app
# Copying file into APP directory of docker
COPY ./package.json /app/
RUN apk update && \
apk add git
# Then install the NPM module
RUN yarn install
# Copy current directory to APP folder
COPY . /app/
EXPOSE 5000
CMD ["npm", "run", "start:dev"]
I have 2 env files in my project's root directory.
.env
docker.env
As mentioned above, when I remove the "nftapi" service from docker and run the Node.js app with a simple npm start, it connects to the postgres container.
TypeOrmModule.forRoot({
  type: 'postgres',
  host: process.env.POSTGRES_HOST,
  port: Number(process.env.POSTGRES_PORT),
  username: process.env.POSTGRES_USER,
  password: process.env.POSTGRES_PASSWORD,
  database: process.env.POSTGRES_DB,
  synchronize: true,
  entities: ['dist/**/*.entity{.ts,.js}'],
}),
The host from the .env file that is used in the typeorm module is localhost.
When using networks with docker-compose you should use the name of the service as your hostname. So in your case the hostname should be postgres, not localhost.
You can read more about it here:
https://docs.docker.com/compose/networking/
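As a sketch, using the variable names from the TypeOrmModule snippet above, the .env the nftapi container reads would point at the service name instead of localhost:

POSTGRES_HOST=postgres
POSTGRES_PORT=5432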

AWS EC2 dockerized Nodejs app to listen over ssl (443)

I have a dockerized Node.js API running on an EC2 instance. The app itself runs inside the container on port 5000 and is mapped via docker-compose to 5000:5000.
I actually wanted the API to listen on the port I can access via https, so I tried mapping the ports in the docker-compose like this: "443:5000". I set an inbound rule in the EC2 security group allowing HTTPS access from anywhere, but when I enter the IP or DNS name in my browser it does not respond.
# Image source
FROM node:14-alpine
# Docker working directory
WORKDIR /app
# Copying file into APP directory of docker
COPY ./package.json /app/
RUN apk update && \
apk add git
# Then install the NPM module
RUN yarn install
# Copy current directory to APP folder
COPY . /app/
EXPOSE 5000
CMD ["npm", "run", "start:dev"]
Is there anything I am missing?
version: "3"
services:
nftapi:
env_file:
- .env
build:
context: .
ports:
- '443:5000'
depends_on:
- postgres
volumes:
- .:/app
- /app/node_modules
networks:
- postgres
postgres:
container_name: postgres
image: postgres:latest
ports:
- "5432:5432"
volumes:
- /data/postgres:/data/postgres
env_file:
- docker.env
networks:
- postgres
pgadmin:
links:
- postgres:postgres
container_name: pgadmin
image: dpage/pgadmin4
ports:
- "8080:80"
volumes:
- /data/pgadmin:/root/.pgadmin
env_file:
- docker.env
networks:
- postgres
networks:
postgres:
driver: bridge
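One detail worth noting: "443:5000" only forwards TCP traffic, so a browser speaking https:// will start a TLS handshake that a plain-HTTP Node server cannot answer. A minimal sketch of terminating TLS in Node itself (the certificate paths are hypothetical; in practice they would come from e.g. Let's Encrypt, or TLS would be terminated by a load balancer or reverse proxy in front of the container):

// Sketch: serve HTTPS inside the container on port 5000, so "443:5000" works end to end
const https = require('https');
const fs = require('fs');

const options = {
  key: fs.readFileSync('/etc/ssl/private/server.key'),  // hypothetical path
  cert: fs.readFileSync('/etc/ssl/certs/server.crt'),   // hypothetical path
};

https.createServer(options, (req, res) => {
  res.end('hello over TLS');
}).listen(5000, '0.0.0.0');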

Docker Compose with Docker Toolbox: Node, Mongo, React. React app not showing at the said address

I am trying to run an Express server and a React app through docker containers.
The Express server runs correctly at the given address (the one in the Kitematic GUI).
However, I am unable to open the React application through the given address; it gives me "site cannot be reached".
Running Windows 10 Home with Docker Toolbox.
React app Dockerfile:
FROM node:10
# Set the working directory to /frontend
WORKDIR /frontend
# Copy package.json into the container at /frontend
COPY package*.json ./
# Install dependencies
RUN npm install
# Copy the current directory contents into the container at /frontend
COPY . .
# Make port 3001 available to the world outside this container
EXPOSE 3001
# Run the app when the container launches
CMD ["npm", "run", "start"]
Node/Express dockerfile:
# Use a lighter version of Node as a parent image
FROM node:10
# Set the working directory to /api
WORKDIR /backend
# copy package.json into the container at /api
COPY package*.json ./
# install dependencies
RUN npm install
# Copy the current directory contents into the container at /api
COPY . .
# Make port 3000 available to the world outside this container
EXPOSE 3000
# Run the app when the container launches
CMD ["npm", "start"]
Docker compose file:
version: '3'
services:
  client:
    container_name: hydrahr-client
    build: .\frontend
    restart: always
    environment:
      - REACT_APP_BASEURL=${REACT_APP_BASEURL}
    expose:
      - ${REACT_PORT}
    ports:
      - "3001:3001"
    links:
      - api
  api:
    container_name: hydrahr-api
    build: ./backend
    restart: always
    expose:
      - ${SERVER_PORT}
    environment: [
      'API_HOST=${API_HOST}',
      'MONGO_DB=${MONGO_DB}',
      'JWT_KEY=${JWT_KEY}',
      'JWT_HOURS_DURATION=${JWT_HOURS_DURATION}',
      'IMAP_EMAIL_LISTENER=${IMAP_EMAIL_LISTENER}',
      'IMAP_USER=${IMAP_USER}',
      'IMAP_PASSWORD=${IMAP_PASSWORD}',
      'IMAP_HOST=${IMAP_HOST}',
      'IMAP_PORT=${IMAP_PORT}',
      'IMAP_TLS=${IMAP_TLS}',
      'SMTP_EMAIL=${SMTP_EMAIL}',
      'SMTP_PASSWORD=${SMTP_PASSWORD}',
      'SMTP_HOST=${SMTP_HOST}',
      'SMTP_PORT=${SMTP_PORT}',
      'SMTP_TLS=${SMTP_TLS}',
      'DEFAULT_SYSTEM_PASSWORD=${DEFAULT_SYSTEM_PASSWORD}',
      'DEFAULT_SYSTEM_EMAIL=${DEFAULT_SYSTEM_EMAIL}',
      'DEFAULT_SYSTEM_NAME=${DEFAULT_SYSTEM_NAME}',
      'SERVER_PORT=${SERVER_PORT}'
    ]
    ports:
      - "3000:3000"
    depends_on:
      - mongo
  mongo:
    image: mongo
    restart: always
    container_name: mongo
    ports:
      - "27017:27017"
Running with docker-compose up -d
UPDATE 1:
I am able to run the react application using docker run -p 3000:3000 hydra-client-test after building that image.
Since running the container with -p 3000:3000 works, the client is probably actually listening on port 3000. Try setting:
ports:
  - 3001:3000
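Alternatively (a sketch, not from the original answer): the create-react-app dev server reads the PORT environment variable, so you could keep the 3001:3001 mapping and have the client listen on 3001 instead:

  client:
    container_name: hydrahr-client
    build: .\frontend
    environment:
      - PORT=3001 # react-scripts serves on this port instead of its default 3000
    ports:
      - "3001:3001"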

Identical Docker images with different containers and ports not accessible

I'm running a Docker host on my Windows dev machine and have 2 identical images exposing different ports (3000, 3001). Using the following docker-compose file I build and run the containers, but the container on port 3001 isn't available via localhost or my IP address.
Dockerfile
FROM mhart/alpine-node:8
# Create an app directory (in the Docker container)
RUN mkdir -p /testdirectory
WORKDIR /testdirectory
COPY package.json /testdirectory
RUN npm install --loglevel=warn
COPY . /testdirectory
EXPOSE 3000
CMD ["node", "index.js"]
Dockerfile
FROM mhart/alpine-node:8
# Create an app directory (in the Docker container)
RUN mkdir -p /test2directory
WORKDIR /test2directory
COPY package.json /test2directory
RUN npm install --loglevel=warn
COPY . /test2directory
EXPOSE 3001
CMD ["node", "index.js"]
docker-compose file
version: '3'
services:
  testdirectory:
    container_name: testdirectory
    environment:
      - DEBUG=1
      - NODE_ENV=production
      - NODE_NAME=testdirectory
      - NODE_HOST=localhost
      - NODE_PORT=3000
      - DB_HOST=mongodb://mongo:27017/testdirectory
      - DB_PORT=27017
    build:
      context: ./test-directory
    volumes:
      - .:/usr/app/
      - /usr/app/node_modules
    ports:
      - "3000:3000"
    depends_on:
      - mongodb
    command: npm start
  test2directory:
    container_name: test2directory
    environment:
      - DEBUG=1
      - NODE_ENV=production
      - NODE_NAME=test2directory
      - NODE_HOST=localhost
      - NODE_PORT=3001
      - DB_HOST=mongodb://mongo:27017/test2directory
      - DB_PORT=27017
    build:
      context: ./test2-directory
    volumes:
      - .:/usr/app/
      - /usr/app/node_modules
    ports:
      - "3001:3001"
    depends_on:
      - mongodb
    command: npm start
  mongodb:
    image: mongo:3.4.4
    container_name: mongo
    ports:
      - 27017:27017
    volumes:
      - /data/db:/data/db
Is there anything obvious I'm missing? When I run
docker container port test2directory
it returns
3001/tcp -> 0.0.0.0:3001
Found the problem! Setting NODE_HOST to localhost in the container caused the problem; changing it to 0.0.0.0 got it working.
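For context, a minimal sketch of why the bind address matters (assuming the server reads the NODE_HOST / NODE_PORT variables from the compose file; the handler is hypothetical):

const http = require('http');

// 'localhost' binds only to the container's loopback interface, so connections
// coming in through Docker's published port are refused; '0.0.0.0' accepts them
const host = process.env.NODE_HOST || '0.0.0.0';
const port = process.env.NODE_PORT || 3001;

http.createServer((req, res) => res.end('ok'))
  .listen(port, host, () => console.log(`listening on ${host}:${port}`));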
