Docker and NodeJS: could not connect to the container

I'm trying to dockerize a simple NodeJS API. I've tested it standalone and it works, but after dockerizing it I can't connect to the container. Two symptoms stand out (shown in a screenshot omitted here): the container is permanently restarting, and I cannot connect to it.
After I try to establish a connection with a GET request, the container begins to restart; about a minute later it comes up again for only a few seconds.
This is my Dockerfile:
FROM node:lts-buster-slim
# Create app directory
WORKDIR /opt/myapps/noderest01
COPY package.json /opt/myapps/noderest01/package.json
COPY package-lock.json /opt/myapps/noderest01/package-lock.json
RUN npm ci
COPY . /opt/myapps/noderest01
EXPOSE 3005
CMD [ "npm", "run", "dev" ]
And this is my YAML file:
services:
  rest01:
    container_name: rest01
    ports:
      - "3005:3005"
    restart: always
    build: .
    volumes:
      - rest01:/opt/myapps/noderest01
      - rest01nmodules:/opt/myapps/noderest01/node_modules
    networks:
      - node-rest01

volumes:
  rest01:
  rest01nmodules:

networks:
  node-rest01:
I used this command to build the image and start the container: docker-compose -f docker-compose.yaml up -d
Surely I need to update my YAML file or Dockerfile to fix this. I've been searching for a while but I can't find the origin of the problem, so I'd like to ask for your advice on how to fix and update my Docker files so that I can connect to the container. If you have any suggestions, please let me know.
Best.
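A hedged first step, not from the original post: with restart: always, the crash loop hides the underlying error, so inspect the container's logs and exit code before changing any configuration.

# Follow the app's output to see why it keeps dying
docker-compose logs -f rest01
# Show the exit code of the container's last run
docker inspect --format '{{.State.ExitCode}}' rest01

One thing worth checking with this particular setup: the named volume rest01 mounted over /opt/myapps/noderest01 can mask the code baked into the image, since (as the multi-stage answer further down also notes) named volumes never update their content after they are first created.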

Related

How to run Docker Compose on AWS EB?

I've got a problem with AWS Elastic Beanstalk. I created an application on EB with Docker. My 'local' application is a simple NestJS/TypeScript server whose docker-compose.yml defines two services, as below:
version: '3.7'

networks:
  proxy:
    name: proxy

services:
  redis:
    image: redis:6.2-alpine
    ports:
      - 6379:6379
    command: ["redis-server", "--requirepass", "pass"]
    networks:
      - proxy

  worker:
    container_name: worker
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - redis
    ports:
      - 8080:8080
    expose:
      - '8080'
    env_file:
      - .env
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    command: npm run dev
    networks:
      - proxy
and the Dockerfile for the worker container:
FROM node:16.3.0-alpine as builder
WORKDIR /dist
COPY package*.json ./
RUN npm install --force
COPY . .
RUN npm run build
EXPOSE 8080
CMD [ "npm", "run", "dev"]
From the project root I launch commands using the AWS and EB CLIs, in order: eb init, eb create, eb deploy. Beanstalk then creates an EC2 instance, a load balancer, an S3 bucket, etc. for me, and everything appears to go well. In the Beanstalk Configuration I add environment variables and rebuild the project, because after eb create the app does not run without envs such as those for the DB on RDS. And now I have a problem: after doing all of the above and entering the URL in the browser, the app fails to load (screenshot omitted).
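For reference, the deploy sequence described above as commands (standard EB CLI; no flags beyond what the post mentions):

# Initialize the EB application for this project (interactive)
eb init
# Create the environment: EC2 instance, load balancer, S3 bucket, ...
eb create
# Deploy the current version of the project
eb deploy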
Can someone tell me what I am doing wrong? While searching for a fix I set the security group's inbound rules (screenshot omitted), but that did not help. In the environment variables I set PORT to 5000 and it is still not working.
So, can someone with experience tell me what I am doing wrong? I've been stuck here for the last 2 days and have no idea what to do with this. Thanks for any help!

My Docker container for a MEAN website dashboard does not work in Docker. Nodejs keeps restarting

For a university project I am trying to get a MEAN stack website up and running via Docker images and containers. However, when I run the command:
docker-compose up --build
it results in the nodejs container permanently restarting (screenshot omitted).
When the command is run, I get these messages, at various points, which look like errors to me:
failed to get console mode for stdout: The handle is invalid.
and
nodejs exited with code 0
and then the connection to MongoDB seems to keep opening and closing, as these log lines show:
db | 2021-03-30T08:50:22.519+0000 I NETWORK [listener] connection accepted from 172.21.0.2:39627 #27 (1 connection now open)
db | 2021-03-30T08:50:22.519+0000 I NETWORK [conn27] end connection 172.21.0.2:39627 (0 connections now open)
Prior to running the above command, I tested that the website works without a connection to MongoDB by running docker build . in the Angular root folder containing the Dockerfile; the Express API side of it works, as I can visit the dashboard at http://localhost:3000/.
The full command sequence I run to reach the failed state described above is as follows:
docker-compose pull → docker-compose build → docker-compose up --build.
I am using Docker Desktop and running the commands in Powershell on Windows 10 Pro.
My Dockerfile is as follows:
# We use the official image as a parent image.
FROM node:10-alpine
RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
# Set the working directory.
WORKDIR /home/node/app
# Copy the file(s) from your host to your current location.
COPY package*.json ./
# Change the user to node. This will apply to both the runtime user and the following commands.
USER node
# Run the command inside your image filesystem.
RUN npm install
COPY --chown=node:node . .
# Building the website
RUN ./node_modules/.bin/ng build
# Add metadata to the image to describe which port the container is listening on at runtime.
EXPOSE 3000
# Run the specified command within the container.
CMD [ "node", "server.js" ]
And my docker-compose.yml is:
version: '3'
services:
  nodejs:
    build:
      context: .
      dockerfile: Dockerfile
    image: nodejs
    container_name: nodejs
    restart: unless-stopped
    env_file: .env
    environment:
      - MONGO_USERNAME=$MONGO_USERNAME
      - MONGO_PASSWORD=$MONGO_PASSWORD
      - MONGO_HOSTNAME=db
      - MONGO_PORT=$MONGO_PORT
      - MONGO_DB=$MONGO_DB
    ports:
      - "3000:3000"
    networks:
      - app-network
    command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon server.js

  db:
    image: mongo:4.1.8-xenial
    container_name: db
    restart: unless-stopped
    env_file: .env
    environment:
      - MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
      - MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
    volumes:
      - dbdata:/data/db
    networks:
      - app-network

networks:
  app-network:
    driver: bridge

volumes:
  dbdata:
These are the same files that have been provided by the University and supposedly work.
I am new to Docker and containerization but shall try and provide you with any additional information, should you need it.
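Not part of the original question, but a hedged way to see why nodejs exits with code 0 is to run the service in the foreground, bypassing restart: unless-stopped (service names as in the compose file above):

# Start only the database in the background
docker-compose up -d db
# Run the app service interactively so its output and exit reason are visible
docker-compose run --rm --service-ports nodejs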

How to copy build files from one container to another or host on docker

I am trying to dockerize an application with a PHP backend and a Vue.js frontend. The backend works as I expect; however, after running npm run build within the frontend container, I need to copy the build files from the dist folder to the nginx container, or to the host and then use a volume to bring those files into the nginx container.
I tried to use a named volume:
services:
  frontend:
    .....
    volumes:
      - static:/var/www/frontend/dist
  nginx:
    .....
    volumes:
      - static:/var/www/frontend/dist
volumes:
  static:
I also tried the following, as suggested here, to bring the dist folder back to the host:
services:
  frontend:
    .....
    volumes:
      - ./frontend/dist:/var/www/frontend/dist
However, none of the above options works for me. Below are my docker-compose.yml file and frontend Dockerfile.
version: "3"
services:
database:
image: mysql:5.7.22
.....
backend:
build:
context: ./docker/php
dockerfile: Dockerfile
.....
frontend:
build:
context: .
dockerfile: docker/node/Dockerfile
target: 'build-stage'
container_name: frontend
stdin_open: true
tty: true
volumes:
- ./frontend:/var/www/frontend
nginx:
build:
context: ./docker/nginx
dockerfile: Dockerfile
container_name: nginx
restart: unless-stopped
ports:
- 80:80
volumes:
- ./backend/public:/var/www/backend/public:ro
- ./frontend/dist:/var/www/frontend/dist:ro
depends_on:
- backend
- frontend
Frontend Dockerfile
# Develop stage
FROM node:lts-alpine as develop-stage
WORKDIR /var/www/frontend
COPY /frontend/package*.json ./
RUN npm install
COPY /frontend .
# Build stage
FROM develop-stage as build-stage
RUN npm run build
You can combine the frontend image and the Nginx image into a single multi-stage build. This basically just involves copying your docker/node/Dockerfile as-is into the start of docker/nginx/Dockerfile, and then COPY --from=build-stage into the final image. You will also need to adjust some paths since you'll need to make the build context be the root of your project.
# Essentially what you had in the question
FROM node:lts AS frontend
WORKDIR /frontend
COPY frontend/package*.json ./
RUN npm install
COPY frontend .
RUN npm run build
# And then assemble the Nginx image with content
FROM nginx
COPY --from=frontend /frontend/dist /var/www/html
Once you've done this, you can completely delete the frontend container from your docker-compose.yml file. Note that it never did anything – the image didn't declare a CMD, and the docker-compose.yml didn't provide a command: to run either – and so this shouldn't really change your application.
You can use a similar technique to copy the static files from your PHP application into the Nginx proxy. When all is said and done, this leaves you with a simpler docker-compose.yml file:
version: "3.8"
services:
database:
image: mysql:5.7.22
.....
backend:
build: ./docker/php # don't need to explicitly name Dockerfile
# note: will not need volumes: to export files
.....
# no frontend container any more
nginx:
build:
context: . # because you're COPYing files from other components
dockerfile: docker/nginx/Dockerfile
restart: unless-stopped
ports:
- 80:80
# no volumes:, everything is built into the image
depends_on:
- backend
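For concreteness, a sketch of what the combined docker/nginx/Dockerfile could look like after the merge; the path for the PHP app's public files is an assumption based on the volumes above, not something the answer specifies:

# Stage 1: build the Vue frontend (essentially the block shown earlier)
FROM node:lts AS frontend
WORKDIR /frontend
COPY frontend/package*.json ./
RUN npm install
COPY frontend .
RUN npm run build

# Stage 2: assemble the Nginx image with the built content
FROM nginx
COPY --from=frontend /frontend/dist /var/www/frontend/dist
# Assumed location of the PHP app's static files; adjust to your layout
COPY backend/public /var/www/backend/public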
(There are two practical problems with trying to share content using Docker named volumes. The first is that volumes never update their content once they're created, and in fact hide the content in their original image, so using volumes here actually causes changes in your source code to be ignored in favor of arbitrarily old content. In environments like Kubernetes, the cluster doesn't even provide the copy-on-first-use that Docker has, and there are significant limitations on how files can be shared between containers. This works better if you build self-contained images and don't try to share files.)
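A practical follow-up, not in the original answer: if you have already created such named volumes, their stale content persists until the volumes themselves are deleted.

# Remove the project's containers and its named volumes; this is
# destructive for data volumes (e.g. a database), so use with care
docker-compose down -v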

Run multiple Docker containers at once using docker-compose

The Problem
Currently I've created a Dockerfile and a docker-compose.yml to run my rest-api and database using docker-compose up.
What I want to do now is add another container, namely the web application (built with React). I'm a little bit confused about how to do that, since I just started learning Docker 2 days ago.
Folder Structure
This is my current folder structure:
Folder: rest-api (NodeJS)
    Dockerfile
    dockercompose.yml
The Question
In the end I want to be able to run docker-compose up to fire up both the rest-api and the web-app.
Do I need to create a separate Dockerfile in every folder and create a 'global' docker-compose.yml to link everything together?
New folder structure:
dockercompose.yml
Folder: rest-api (NodeJS)
    Dockerfile
Folder: web-app (React)
    Dockerfile
My current setup to run the rest-api and database
Dockerfile
FROM node:13.10
# The destination of the app in the container
WORKDIR /usr/src/app
# Copies package.json, package-lock.json and tsconfig.json to the specified workdir
COPY package*.json ./
COPY tsconfig.json ./
# Create user and postgres
ENV POSTGRES_USER root
ENV POSTGRES_PASSWORD 12345
ENV POSTGRES_DB postgres
ENV POSTGRES_URI 'postgres://postgres:12345@postgres:5432/postgres'
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
docker-compose.yml
version: '3'
services:
  node:
    container_name: rest-api
    restart: always
    build: .
    environment:
      PORT: 3000
    ports:
      - '80:3000'
    links:
      - postgres

  postgres:
    container_name: postgres-database
    image: postgres
    environment:
      POSTGRES_URI: 'postgres://postgres:12345@postgres-database:5432/postgres'
      POSTGRES_PASSWORD: 12345
    ports:
      - '5432:5432'
Ok, so there are quite a few ways to approach this, and it is largely a matter of preference.
If you want to go with your proposed folder structure (which is fine), then you can, for example, do it like so:
Have a Dockerfile in the root of each of your applications, which will build that specific application (as you already suggested). Place your docker-compose.yml file in the parent folder of both applications (exactly as you proposed), and then make some changes to your docker-compose.yml. I only kept the essential parts; note that links are no longer necessary, since Compose's internal networking resolves service names to the corresponding container IP addresses:
version: '3'
services:
  node:
    build:
      context: rest-api
    environment:
      PORT: 3000
    ports:
      - '3000:3000'

  web:
    image: web-app
    build:
      context: web-app
    ports:
      - 80:80

  postgres:
    image: postgres
    environment:
      POSTGRES_URI: 'postgres://postgres:12345@postgres-database:5432/postgres'
      POSTGRES_PASSWORD: 12345
    ports:
      - '5432:5432'
The context tells Docker that what you are building actually lives in a different directory, and all of the commands executed in the Dockerfile will be relative to that folder.
I also changed the port mappings, because you will probably want to access your web app via the standard HTTP port. Note that the web-app will be able to communicate with the rest-api container by using the node hostname, as long as the node service binds to 0.0.0.0:3000 (not 127.0.0.1:3000).
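The answer doesn't show the web-app's Dockerfile; here is a minimal sketch, assuming a standard create-react-app layout whose production bundle lands in build/ and serving it with nginx on port 80 to match the compose file above:

# Hypothetical web-app/Dockerfile (not from the answer)
# Stage 1: build the React app
FROM node:13.10 AS build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build

# Stage 2: serve the static output on port 80
FROM nginx:alpine
# create-react-app writes its bundle to build/; adjust if yours differs
COPY --from=build /usr/src/app/build /usr/share/nginx/html
EXPOSE 80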

Docker is not building container with changes in source code

I'm relatively new to Docker, and I just created a Node.js application that should connect to other services also running on Docker.
So I have the source code, a Dockerfile to set up the image, and a docker-compose file to orchestrate the environment.
I had a few problems in the beginning, so I updated my source code and found out that the changes are not picked up by the next docker-compose build.
For example, I commented out all the lines that connect to Redis and MongoDB. I run the application locally and it's fine, but when I create it again in a container, I get "Connection refused..." errors.
I tried many things, and this is what I have at the moment:
Dockerfile
FROM node:9
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
CMD node app.js
EXPOSE 8090
docker-compose.yml
version: '3'
services:
  app:
    build: .
    ports:
      - "8090:8090"
    container_name: app

  redis:
    image: redis:latest
    ports:
      - "6379:6379"
    container_name: redis

  mongodb:
    image: mongo:latest
    container_name: "mongodb"
    volumes:
      - ./data/db:/data/db
    ports:
      - 27017:27017
up.sh
sudo docker stop app
sudo docker rm app
docker-compose build --no-cache app
sudo docker-compose up --force-recreate
Any ideas on what the problem could be? Why doesn't it use the current source code? Is it using some sort of cache?
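A hedged way to verify what actually ended up in the image, not part of the original question: rebuild without cache, then run a one-off container from the same image and print the copied source (service name app as in the compose file above).

# Rebuild the app image from scratch
docker-compose build --no-cache app
# Print app.js from inside the image to confirm whether the local edits made it in
docker-compose run --rm app cat app.js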
