NestJS Docker container shows errors

I have dockerized a NestJS application, but running it shows
Error: Error loading shared library /usr/src/app/node_modules/argon2/lib/binding/napi-v3/argon2.node: Exec format error
and sometimes it shows
Cannot find module 'webpack'
Strangely, it works fine on Windows, but the errors come up on macOS and Amazon Linux.
Dockerfile
###################
# BUILD FOR LOCAL DEVELOPMENT
###################
FROM node:16-alpine As development
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci
COPY . .
###################
# BUILD FOR PRODUCTION
###################
FROM node:16-alpine As build
WORKDIR /usr/src/app
COPY package*.json ./
COPY --from=development /usr/src/app/node_modules ./node_modules
COPY . .
RUN npm run build
ENV NODE_ENV production
RUN npm ci --only=production && npm cache clean --force
USER node
###################
# PRODUCTION
###################
FROM node:16-alpine As production
COPY --from=build /usr/src/app/node_modules ./node_modules
COPY --from=build /usr/src/app/dist ./dist
CMD [ "node", "dist/main.js" ]
docker-compose.yml
version: '3.9'
services:
  api:
    build:
      dockerfile: Dockerfile
      context: .
      # Only will build development stage from our dockerfile
      target: development
    env_file:
      - .env
    volumes:
      - api-data:/usr/src/app
    # Run in dev Mode: npm run start:dev
    command: npm run start:dev
    ports:
      - 3000:3000
    depends_on:
      - postgres
    restart: 'always'
    networks:
      - prism-network
  postgres:
    image: postgres:14-alpine
    environment:
      POSTGRES_DB: 'prism'
      POSTGRES_USER: 'postgres'
      POSTGRES_PASSWORD: 'mysecretpassword'
    volumes:
      - postgres-data:/var/lib/postgresql/data
    ports:
      - 5432:5432
    healthcheck:
      test:
        [
          'CMD-SHELL',
          'pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}',
        ]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - prism-network
networks:
  prism-network:
volumes:
  api-data:
  postgres-data:
I am stumped as to why this isn't working.

Change (or check) two things in your setup:
In your docker-compose.yml file, delete the volumes: block that overwrites your application's /usr/src/app directory
services:
  api:
    build: { ... }
    # volumes:                  <-- delete
    #   - api-data:/usr/src/app <-- delete
volumes:
  # api-data:                   <-- delete
  postgres-data:                # <-- keep
Create a .dockerignore file next to the Dockerfile, if you don't already have one, and make sure it includes the single line
node_modules
What's going on here? If you don't have the .dockerignore line, then the Dockerfile's COPY . . line overwrites the node_modules tree created by RUN npm ci with your host's copy of it; if the host has a different OS or architecture (for example, a Linux container built on a Windows host), native binaries like argon2's can fail with exactly the sort of error you show.
The volumes: block is a little more subtle. It causes Docker to create a named volume, and the contents of the volume replace the entire /usr/src/app tree in the image; in other words, you're running the contents of the volume, not the contents of the image. But the first time (and only the first time) you run the container, Docker copies the contents of the image into the volume. So it looks like you're running the image, and you have the same files, but they're actually coming out of the volume. If you change the image, the volume does not get updated, so you're still running the old code.
Without the volumes: block, you're running the code out of the image, which is a standard Docker setup. You shouldn't need volumes: unless your application needs to store persistent data (as your database container does), or for a couple of other specialized needs like injecting configuration files or reading out logs.
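If you want to confirm the architecture mismatch, here is a quick check (a sketch; replace <your-image> with the tag you built). It compares the platform the container runs on with the platform the native argon2 binary was compiled for; a mismatch such as x86_64 vs aarch64 reproduces the Exec format error:
# print the CPU architecture the container runs on
docker run --rm <your-image> uname -m
# let "file" report which architecture the native module was built for
docker run --rm <your-image> sh -c "apk add --no-cache file >/dev/null && file node_modules/argon2/lib/binding/napi-v3/argon2.node"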

Try this Dockerfile
FROM node:16-alpine
WORKDIR /usr/src/app
COPY yarn.lock ./
COPY package.json ./
RUN yarn install
COPY . .
RUN yarn build
CMD [ "node", "dist/main.js" ]
docker-compose.yml
version: "3.7"
services:
service_name:
container_name: orders_service
image: service_name:latest
build: .
env_file:
- .env
ports:
- "3001:3001"
volumes:
- .:/data
- /data/node_modules
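One caveat with this compose file (my observation, not part of the answer): the Dockerfile sets WORKDIR /usr/src/app, but the volumes mount the project at /data, so neither volume affects the code the container actually runs; it executes whatever was baked into the image at build time. If a live-reload mount is the goal, the container path has to match the WORKDIR, as the next answer shows.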

I wouldn't delete the volumes, because of hot reloading.
Try this in the volumes section to be able to keep the image's node_modules while still mounting your code:
volumes:
  - /usr/src/app/node_modules
  - .:/usr/src/app
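Applied to the compose file from the question, that looks roughly like this (a sketch, assuming you keep the development build target): the bind mount gives nodemon the host sources for hot reloading, while the anonymous volume masks the host's node_modules so the image's Linux-built copy is used:
services:
  api:
    build:
      dockerfile: Dockerfile
      context: .
      target: development
    command: npm run start:dev
    volumes:
      # anonymous volume: the image's node_modules stays visible
      - /usr/src/app/node_modules
      # bind mount: host sources for hot reloading
      - .:/usr/src/app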

Related

Path Error running AdonisJS in production via Docker

I'm getting a path error from require-ts and I have no idea how to determine what I'm missing. The app works fine locally using node ace serve, but not within Docker on a production server.
This is the error:
docker exec -it adonis_app node ace list:routes
TypeError
The "path" argument must be of type string. Received undefined
1  Cache.makeCachePath
   /home/node/app/node_modules/@adonisjs/require-ts/build/src/Cache/index.js:41
2  Config.getCached
   /home/node/app/node_modules/@adonisjs/require-ts/build/src/Config/index.js:139
3  Config.parse
   /home/node/app/node_modules/@adonisjs/require-ts/build/src/Config/index.js:156
4  register
   /home/node/app/node_modules/@adonisjs/require-ts/build/index.js:82
5  Object.registerForAdonis [as default]
   /home/node/app/node_modules/@adonisjs/assembler/build/src/requireHook/index.js:17
6  registerTsHook
   /home/node/app/node_modules/@adonisjs/core/build/src/utils/index.js:26
7  App.onFind
   /home/node/app/node_modules/@adonisjs/core/build/src/Ignitor/Ace/App/index.js:132
This is my Dockerfile:
ARG NODE_IMAGE=node:16.13.1-alpine
FROM $NODE_IMAGE AS base
RUN apk --no-cache add dumb-init
RUN mkdir -p /home/node/app && chown node:node /home/node/app
WORKDIR /home/node/app
USER node
RUN mkdir tmp
FROM base AS dependencies
COPY --chown=node:node ./package*.json ./
COPY --chown=node:node ./tsconfig*.json ./
RUN npm ci
COPY --chown=node:node . .
FROM dependencies AS build
RUN node ace build --production --ignore-ts-errors
FROM base AS production
ENV NODE_ENV=production
ENV PORT=$PORT
ENV HOST=0.0.0.0
COPY --chown=node:node ./package*.json ./
RUN npm ci --production
COPY --chown=node:node --from=build /home/node/app/build .
EXPOSE $PORT
CMD [ "dumb-init", "node", "build/server.js" ]
And docker-compose:
version: '3'
services:
  adonis_app:
    container_name: adonis_app
    restart: always
    build:
      context: .
      #target: dependencies
    depends_on:
      - postgres
    ports:
      - ${PORT}:${PORT}
      - 9229:9229
    env_file:
      - .env
    volumes:
      - ./:/home/node/app
      - uploads:/home/node/app/public/uploads
    networks:
      - adonis
    #command: dumb-init node ace serve --watch --node-args="--inspect=0.0.0.0"
    #command: dumb-init node build/server.js --node-args="--inspect=0.0.0.0"
  postgres:
    image: postgres:13.1
    healthcheck:
      test: [ "CMD", "pg_isready", "-q", "-d", "postgres", "-U", "root" ]
      timeout: 45s
      interval: 10s
      retries: 10
    restart: always
    environment:
      - POSTGRES_USER=${ROOT_DB_USER}
      - POSTGRES_PASSWORD=${ROOT_DB_PASS}
      - APP_DB_USER=${PG_USER}
      - APP_DB_PASS=${PG_PASSWORD}
      - APP_DB_NAME=${PG_DB_NAME}
    volumes:
      #- ./db:/docker-entrypoint-initdb.d/
      - ./db:/var/lib/postgresql/data
    ports:
      - 54325:5432
    networks:
      - adonis
networks:
  adonis:
    driver: bridge
volumes:
  db:
    driver: local
  uploads:
    driver: local
I receive the same error when running any ace command, such as migration and seeding. What am I missing?
I finally have a solution, after digging through the compiled Adonis code for two hours.
I found out that Adonis uses a library called find-cache-dir.
It seems that it doesn't find any cache directory, so it returns undefined (the famous one).
The solution I opted for is to simply specify the cache directory in the environment variables.
For example:
ENV CACHE_DIR /home/node/app/.cache/
Consult the library's documentation if you want to put the cache directory elsewhere.
EDIT: If the library returns undefined, it could also be because the node_modules directory is not writable; in that case, make it writable or move the cache directory.
Thanks for reading (this is my first answer).
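For reference, a sketch of where this could sit in the production stage of the Dockerfile above (the /home/node/app/.cache/ path is the one chosen in this answer; pre-creating the directory is my addition, so that it exists and is writable by the node user):
FROM base AS production
ENV NODE_ENV=production
ENV CACHE_DIR=/home/node/app/.cache/
# assumption: create the cache dir up front; the base stage already runs as the node user
RUN mkdir -p /home/node/app/.cache
COPY --chown=node:node ./package*.json ./
RUN npm ci --production
COPY --chown=node:node --from=build /home/node/app/build .
CMD [ "dumb-init", "node", "build/server.js" ]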

Cannot connect to MongoDB using Docker, connection refused

I have the following Dockerfile
# Install dependencies only when needed
FROM node:16-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
# Rebuild the source code only when needed
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN yarn build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
RUN addgroup -g 1001 nodejs
RUN adduser -S nextjs -u 1001
# You only need to copy next.config.js if you are NOT using the default configuration
COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next ./.next
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./package.json
USER nextjs
EXPOSE 3000
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry.
ENV NEXT_TELEMETRY_DISABLED 1
ENV HOST=0.0.0.0
CMD yarn start
and the following docker-compose file
version: '3.5'
services:
  ellis-development:
    image: ellis-development
    container_name: app
    build:
      context: .
      dockerfile: dockerfile.dev
    environment:
      - NEXT_PUBLIC_ENV
      - SENDGRID_API_KEY
      - MONGODB_URI
      - NEXTAUTH_SECRET
      - NEXTAUTH_URL
    ports:
      - 3000:3000
    volumes:
      - .:/app
    links:
      - mongodb
    depends_on:
      - mongodb
  mongodb:
    image: mongo:latest
    # environment:
    #   MONGO_INITDB_ROOT_USERNAME: root
    #   MONGO_INITDB_ROOT_PASSWORD: testing123
    ports:
      - 27017:27017
    volumes:
      - data:/data/db
volumes:
  data:
I have all my envs set up in a .env file, like so:
NEXT_PUBLIC_ENV=local
SENDGRID_API_KEY=<redacted>
MONGODB_URI=mongodb://localhost:27017/<redacted>
NEXTAUTH_SECRET=eb141u85
NEXTAUTH_URL="http://localhost:3000"
Running docker compose brings both containers up, and I can connect to localhost:27017 from the host using MongoDB Compass.
However, for some reason my application inside Docker cannot connect to MongoDB.
What am I doing wrong here? First time setting up MongoDB with Docker, so 🤷‍♂️
Fixed, spotted the error as soon as I posted the question... 😅
Changed
MONGODB_URI=mongodb://localhost:27017/<redacted>
to
MONGODB_URI=mongodb://mongodb:27017/<redacted>
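This works because Compose puts both services on a shared network where each container resolves the other by its service name; localhost inside the app container is the app container itself, not the host. To confirm the name resolves, a quick check (a sketch; ellis-development is the service name from the compose file above):
docker compose exec ellis-development node -e "require('dns').lookup('mongodb', (err, addr) => console.log(err || addr))"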

Add .env.example to the dist folder during docker compose up

I want to generate a .env and an .env.example file and push them into the Docker image during the build with a Dockerfile.
The problem is that the .env.example file is not copied into the Docker image. It works for the normal .env, because I copy .env.production and save it as .env.
I am using Node.js and TypeScript, where it is common to use env variables in the code by generating an env.d.ts and an .env.example.
During docker compose up, I unfortunately get this error:
no such file or directory, open '.env.example'
Apparently the .env.example is not copied into the dist folder, but I need it there.
Dockerfile
# stage 1 building the code
FROM node as builder
WORKDIR /usr/app
COPY package*.json ./
RUN npm install
COPY . .
COPY .env.production .env
RUN npm run build
COPY .env.example ./dist
# stage 2
FROM node
WORKDIR /usr/app
COPY package*.json ./
RUN npm install --production
COPY --from=builder /usr/app/dist ./dist
COPY ormconfig.docker.json ./ormconfig.json
EXPOSE 4000
CMD node dist/index.js
.dockerignore
node_modules
docker-compose.yaml
version: "3.7"
services:
db:
image: postgres
environment:
POSTGRES_PASSWORD: postgres
POSTGRES_USER: postgres
POSTGRES_DB: postgres
volumes:
- ./pgdata:/var/lib/postgresql/data
ports:
- "5432:5432"
web:
build:
context: ./
dockerfile: ./Dockerfile
depends_on:
- db
ports:
- "4000:4000"
volumes:
- ./src:/src
Could somebody please help me!
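One thing worth checking (an assumption based on the files shown, not a confirmed fix): the final stage runs node dist/index.js from WORKDIR /usr/app, so a relative open('.env.example') resolves against /usr/app, not against /usr/app/dist where the file was copied. A sketch of a stage-2 line that would put the file where the process actually looks:
# assumption: copy the example env into the working directory,
# where a relative open('.env.example') resolves
COPY --from=builder /usr/app/dist/.env.example ./.env.example
You can also check where the file actually ended up:
docker compose run --rm web ls -la /usr/app /usr/app/dist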

Nest.js Docker: Cannot find module dist/main

I am building a Nest.js app using docker-compose.
The problem is that when I try docker-compose up prod, it shows Error: Cannot find module '/usr/src/app/dist/main'.
So I explored the files in the prod image, and I could find the dist folder; running node dist/main there works. However, docker-compose up prod still shows the above error.
Moreover, docker-compose up dev works perfectly, creating a dist folder on the host machine. The main difference between dev and prod is the command: dev uses npm run start:dev, while prod uses npm run start:prod.
This is my Dockerfile
FROM node:12.19.0-alpine3.9 as development
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install rimraf
RUN npm install --only=development
COPY . .
RUN npm run build
FROM node:12.19.0-alpine3.9 as production
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY . .
COPY --from=development /usr/src/app/dist ./dist
CMD ["node", "dist/main"]
This is my docker-compose.yaml
services:
  proxy:
    image: nginx:latest # use the latest Nginx version
    container_name: proxy
    ports:
      - '80:80' # map port 80 between host and container
    networks:
      - nestjs-network
    volumes:
      - ./proxy/nginx.conf:/etc/nginx/nginx.conf # mount the nginx config file
    restart: 'unless-stopped' # restart if the container dies from an internal error
    depends_on:
      - prod
  dev:
    container_name: nestjs_api_dev
    image: nestjs-api-dev:1.0.0
    build:
      context: .
      target: development
      dockerfile: ./Dockerfile
    command: npm run start:dev #node dist/src/main #n
    ports:
      - 3001:3000
    networks:
      - nestjs-network
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    restart: unless-stopped
  prod:
    container_name: nestjs_api_prod
    image: nestjs-api-prod:1.0.0
    build:
      context: .
      target: production
      dockerfile: ./Dockerfile
    command: npm run start:prod
    # ports:
    #   - 3000:3000
    #   - 9229:9229
    expose:
      - '3000' # expose port 3000 to other containers
    networks:
      - nestjs-network
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    restart: unless-stopped
networks:
  nestjs-network:
OK... I found the solution.
In docker-compose.yaml, .:/usr/src/app should be removed from the volumes of the "prod" service. Since the dist folder does not exist on the local machine, mounting the current local directory over /usr/src/app hides the image's dist folder and produces the "not found" error. I guess I should study volumes more deeply.
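In other words, the prod service should run what's inside the image rather than mount the host directory over it. A sketch of the corrected service (same fields as above, with the bind mount removed; the anonymous node_modules volume is no longer needed once nothing masks the image's files):
prod:
  container_name: nestjs_api_prod
  image: nestjs-api-prod:1.0.0
  build:
    context: .
    target: production
    dockerfile: ./Dockerfile
  command: npm run start:prod
  expose:
    - '3000'
  networks:
    - nestjs-network
  restart: unless-stopped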

Install all needed node modules inside Docker container, not just copying them

The problem is the following. We have a project with several services running with docker compose.
When one of us adds a new module with npm install <module name> --save, the package*.json files are updated and the module is installed in the node_modules folder.
For that person, running docker-compose up --build works fine.
Nevertheless, when someone else pulls the updated package*.json files and tries to run docker-compose up --build, Docker reports that the module is not found.
It seems like the local node_modules folder is copied directly into the Docker container.
The question is: how can we make sure that all modules listed in the package*.json files are installed inside the container, not just copied from the host?
Here is one Dockerfile:
FROM node:14
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD npm run devStart
as well as the docker-compose.yaml file:
version: "3"
services:
list-service:
build:
context: ./list-service
dockerfile: Dockerfile
ports:
- "4041"
volumes:
- ./list_service:/usr/src/app
#- /usr/src/app/node_modules
user-service:
build:
context: ./user_service
dockerfile: Dockerfile
ports:
- "4040"
volumes:
- ./user_service:/usr/src/app
# - /usr/src/app/node_modules
api-gateway:
build:
context: ./api_gateway
dockerfile: Dockerfile
ports:
- "5000:5000"
volumes:
- ./api_gateway:/usr/src/app
# - /usr/src/app/node_modules
date_list-service:
build:
context: ./date_list_service
dockerfile: Dockerfile
ports:
- "4042"
volumes:
- ./date_list_service:/usr/src/app
# - /usr/src/app/node_modules
mongo:
container_name: mongo
image: mongo
command: mongod --port 27018
volumes:
- data:/data/db
volumes:
data:
We do have already a .dockerignore file:
node_modules
npm-debug.log
Update
@DazWilkin recognized that we mount our context into the container, which overrides the image's content with local content. We do this because we use nodemon, so we can see changes in the code on the fly. Is there a possibility to exclude the node_modules directory from this?
I think that your issue is that Docker is caching layers and not detecting the changed package*.json.
What happens if everyone runs docker-compose build --no-cache and then docker-compose up?
NOTE: Because you're running npm install during the container image build, you don't need (and probably don't want) to run npm install on the host beforehand.
NOTE: Since you're using containers, you may want to use a container registry too. That way, once the first updated Docker images have been built, everyone else can pull them from the registry rather than rebuilding them themselves.
Update
I just noticed that you're mounting your context into the container; you don't want to do that. It will override the work you've done building the container:
version: "3"
services:
list-service:
build:
context: ./list-service
dockerfile: Dockerfile
ports:
- "4041"
volumes:
- ./list_service:/usr/src/app <<< ---- DELETE
#- /usr/src/app/node_modules
user-service:
build:
context: ./user_service
dockerfile: Dockerfile
ports:
- "4040"
volumes:
- ./user_service:/usr/src/app <<< ---- DELETE
# - /usr/src/app/node_modules
api-gateway:
build:
context: ./api_gateway
dockerfile: Dockerfile
ports:
- "5000:5000"
volumes:
- ./api_gateway:/usr/src/app <<< ---- DELETE
# - /usr/src/app/node_modules
date_list-service:
build:
context: ./date_list_service
dockerfile: Dockerfile
ports:
- "4042"
volumes:
- ./date_list_service:/usr/src/app <<< ---- DELETE
# - /usr/src/app/node_modules
mongo:
container_name: mongo
image: mongo
command: mongod --port 27018
volumes:
- data:/data/db
volumes:
data:
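If you want to keep the bind mounts for nodemon (the asker's follow-up), the commented-out anonymous-volume lines are the usual way to exclude node_modules from the mount. A sketch for one service, with the same pattern applying to the others:
list-service:
  build:
    context: ./list-service
    dockerfile: Dockerfile
  ports:
    - "4041"
  volumes:
    # bind mount: host sources for nodemon's file watching
    - ./list_service:/usr/src/app
    # anonymous volume: keeps the image's node_modules in place
    - /usr/src/app/node_modules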
