I have dockerized a NestJS application, but running it shows
Error: Error loading shared library /usr/src/app/node_modules/argon2/lib/binding/napi-v3/argon2.node: Exec format error
and sometimes it shows
Cannot find module 'webpack'
Strangely, it works fine on Windows, but the errors come up on macOS and Amazon Linux.
Dockerfile
###################
# BUILD FOR LOCAL DEVELOPMENT
###################
FROM node:16-alpine AS development
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci
COPY . .
###################
# BUILD FOR PRODUCTION
###################
FROM node:16-alpine AS build
WORKDIR /usr/src/app
COPY package*.json ./
COPY --from=development /usr/src/app/node_modules ./node_modules
COPY . .
RUN npm run build
ENV NODE_ENV production
RUN npm ci --only=production && npm cache clean --force
USER node
###################
# PRODUCTION
###################
FROM node:16-alpine AS production
COPY --from=build /usr/src/app/node_modules ./node_modules
COPY --from=build /usr/src/app/dist ./dist
CMD [ "node", "dist/main.js" ]
docker-compose.yml
version: '3.9'

services:
  api:
    build:
      dockerfile: Dockerfile
      context: .
      # Only will build development stage from our dockerfile
      target: development
    env_file:
      - .env
    volumes:
      - api-data:/usr/src/app
    # Run in dev Mode: npm run start:dev
    command: npm run start:dev
    ports:
      - 3000:3000
    depends_on:
      - postgres
    restart: 'always'
    networks:
      - prism-network

  postgres:
    image: postgres:14-alpine
    environment:
      POSTGRES_DB: 'prism'
      POSTGRES_USER: 'postgres'
      POSTGRES_PASSWORD: 'mysecretpassword'
    volumes:
      - postgres-data:/var/lib/postgresql/data
    ports:
      - 5432:5432
    healthcheck:
      test:
        [
          'CMD-SHELL',
          'pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}',
        ]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - prism-network

networks:
  prism-network:

volumes:
  api-data:
  postgres-data:
I am stumped as to why it isn't working.
Change (or check) two things in your setup:
In your docker-compose.yml file, delete the volumes: block that overwrites your application's /usr/src/app directory
services:
  api:
    build: { ... }
    # volumes:                  <-- delete
    #   - api-data:/usr/src/app <-- delete

volumes:
  # api-data:      <-- delete
  postgres-data:   # <-- keep
Create a .dockerignore file next to the Dockerfile, if you don't already have one, and make sure it includes the single line
node_modules
What's going on here? If you don't have the .dockerignore line, then the Dockerfile's COPY . . line overwrites the node_modules tree from the RUN npm ci line with your host's copy of it. Native modules like argon2 are compiled for a specific OS and architecture, so if the host differs from the container (for example, a Linux container on a Windows host) you get exactly the sort of Exec format error you show.
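For reference, a minimal .dockerignore sketch (only the node_modules line is strictly required for this fix; dist and .git are common additions, not something taken from your project):
# .dockerignore
node_modules
dist
.git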
The volumes: block is a little more subtle. This causes Docker to create a named volume, and the contents of the volume replace the entire /usr/src/app tree in the image – in other words, you're running the contents of the volume and not the contents of the image. But the first time (and only the first time) a container starts with that named volume empty, Docker copies the contents of the image into the volume. So it looks like you're running the image, and you have the same files, but they're actually coming out of the volume. If you change the image, the volume does not get updated, so you're still running the old code.
Without the volumes: block, you're running the code out of the image, which is a standard Docker setup. You shouldn't need volumes: unless your application needs to store persistent data (as your database container does), or for a couple of other specialized needs like injecting configuration files or reading out logs.
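Putting both changes together, the api service is just your existing docker-compose.yml service with the volume removed:
services:
  api:
    build:
      dockerfile: Dockerfile
      context: .
      target: development
    env_file:
      - .env
    command: npm run start:dev
    ports:
      - 3000:3000
    depends_on:
      - postgres
    restart: 'always'
    networks:
      - prism-network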
Try this Dockerfile
FROM node:16-alpine
WORKDIR /usr/src/app
COPY yarn.lock ./
COPY package.json ./
RUN yarn install
COPY . .
RUN yarn build
CMD [ "node", "dist/main.js" ]
docker-compose.yml
version: "3.7"
services:
service_name:
container_name: orders_service
image: service_name:latest
build: .
env_file:
- .env
ports:
- "3001:3001"
volumes:
- .:/data
- /data/node_modules
I wouldn't delete the volumes, because of hot reloading.
Try this in the volumes section to keep hot reloading while still persisting the container's dependencies:
volumes:
  - /usr/src/app/node_modules
  - .:/usr/src/app
The anonymous /usr/src/app/node_modules volume shadows that directory of the bind mount, so the container keeps the node_modules it installed at build time while your source edits still sync in.
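Applied to the api service from the question, that looks roughly like this (a sketch; only the volumes block differs from the original file):
services:
  api:
    build:
      dockerfile: Dockerfile
      context: .
      target: development
    command: npm run start:dev
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    ports:
      - 3000:3000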
I have the following Dockerfile
# Install dependencies only when needed
FROM node:16-alpine AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile
# Rebuild the source code only when needed
FROM node:16-alpine AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN yarn build
# Production image, copy all the files and run next
FROM node:16-alpine AS runner
WORKDIR /app
ENV NODE_ENV production
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nextjs -u 1001
# You only need to copy next.config.js if you are NOT using the default configuration
COPY --from=builder /app/next.config.js ./
COPY --from=builder /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next ./.next
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./package.json
USER nextjs
EXPOSE 3000
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# The following line disables telemetry; remove it to keep telemetry enabled.
ENV NEXT_TELEMETRY_DISABLED 1
ENV HOST=0.0.0.0
CMD yarn start
and the following docker-compose file
version: '3.5'
services:
  ellis-development:
    image: ellis-development
    container_name: app
    build:
      context: .
      dockerfile: dockerfile.dev
    environment:
      - NEXT_PUBLIC_ENV
      - SENDGRID_API_KEY
      - MONGODB_URI
      - NEXTAUTH_SECRET
      - NEXTAUTH_URL
    ports:
      - 3000:3000
    volumes:
      - .:/app
    links:
      - mongodb
    depends_on:
      - mongodb
  mongodb:
    image: mongo:latest
    # environment:
    #   MONGO_INITDB_ROOT_USERNAME: root
    #   MONGO_INITDB_ROOT_PASSWORD: testing123
    ports:
      - 27017:27017
    volumes:
      - data:/data/db
volumes:
  data:
I have all my envs setup in a .env file like so
NEXT_PUBLIC_ENV=local
SENDGRID_API_KEY=<redacted>
MONGODB_URI=mongodb://localhost:27017/<redacted>
NEXTAUTH_SECRET=eb141u85
NEXTAUTH_URL="http://localhost:3000"
Running docker compose brings up both containers, and I can connect to localhost:27017 using MongoDB Compass.
However, for some reason my application cannot connect to MongoDB.
What am I doing wrong here? First time setting up MongoDB with Docker, so 🤷‍♂️
Fixed, spotted the error as soon as I posted the question... 😅
Changed
MONGODB_URI=mongodb://localhost:27017/<redacted>
to
MONGODB_URI=mongodb://mongodb:27017/<redacted>
Inside the Compose network, containers reach each other by service name, so the connection string must use the mongodb service's name rather than localhost (which, from inside the app container, is the app container itself).
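The hostname comes straight from the service key in docker-compose.yml:
services:
  mongodb:            # <-- this key is the DNS name other services resolve
    image: mongo:latest
    ports:
      - 27017:27017   # the host port mapping only matters for connections from the host (e.g. Compass)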
I'm new to Docker, so I'm sure I'm missing something.
I'm trying to create a container with a React app. I'm using Docker on a Windows 10 machine.
This is my Dockerfile
FROM node:latest
EXPOSE 3000
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
RUN npm install --silent
RUN npm install react-scripts@3.4.1 -g --silent
# copy app source
COPY . /app
CMD ["npm", "run", "start"]
and this is my docker compose
version: '3.7'
services:
  sample:
    container_name: prova-react1
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - '.:/app'
      - '/app/node_modules'
    ports:
      - 3000:3000
    environment:
      - CHOKIDAR_USEPOLLING=true
      - COMPOSE_CONVERT_WINDOWS_PATHS=1
When I start the container, everything works fine in the browser. But when I go back to Visual Studio Code and modify and save a file, nothing changes in the container or on the website.
I want to generate an .env and an .env.example file and push them to the Docker image during the build with a Dockerfile.
The problem is that the .env.example file is not copied to the Docker image. I think it works for the normal .env, because I copy .env.production and save it as .env.
I am using Node.js and TypeScript, and I type and validate my environment variables through an env.d.ts declaration file plus an .env.example file.
During docker compose I unfortunately get this error:
no such file or directory, open '.env.example'
Apparently the .env.example is not copied to the dist folder; I need it there, though.
Dockerfile
# stage 1 building the code
FROM node AS builder
WORKDIR /usr/app
COPY package*.json ./
RUN npm install
COPY . .
COPY .env.production .env
RUN npm run build
COPY .env.example ./dist
# stage 2
FROM node
WORKDIR /usr/app
COPY package*.json ./
RUN npm install --production
COPY --from=builder /usr/app/dist ./dist
COPY ormconfig.docker.json ./ormconfig.json
EXPOSE 4000
CMD node dist/index.js
docker-compose.yaml
version: "3.7"
services:
db:
image: postgres
environment:
POSTGRES_PASSWORD: postgres
POSTGRES_USER: postgres
POSTGRES_DB: postgres
volumes:
- ./pgdata:/var/lib/postgresql/data
ports:
- "5432:5432"
web:
build:
context: ./
dockerfile: ./Dockerfile
depends_on:
- db
ports:
- "4000:4000"
volumes:
- ./src:/src
Example of how I am using env variables in my application:
env.d.ts
declare namespace NodeJS {
  interface ProcessEnv {
    PORT: string
    EMAIL: string
    EMAIL_PASSWORD: string
  }
}
.env.example (must be there)
PORT=
EMAIL=
EMAIL_PASSWORD=
index.ts
import 'dotenv-safe/config'
import express from 'express'
const app = express()
app.listen(process.env.PORT, () => {
console.log(`Server starten on port ${process.env.PORT}`)
})
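For what it's worth, dotenv-safe resolves .env.example against the process working directory by default, not against the dist folder, so copying it into dist may not be enough. A hedged sketch of an alternative (assuming the runtime stage keeps WORKDIR /usr/app and CMD node dist/index.js):
# stage 2 (sketch): put .env.example where the process starts,
# since dotenv-safe looks it up from process.cwd() by default
COPY --from=builder /usr/app/.env.example ./.env.example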
So I have this working as expected with Flask, where I used...
volumes:
  - ./api:/app
and any files that I change in the API are picked up by the running session. I'd like to do the same for the frontend code.
For node/nginx, I used the configuration below. The only way for file changes to be picked up is if I rebuild. I'd like file changes to be picked up as they are for Python, but I'm a bit stuck on why a similar setup is not working for the src files. Anyone know why this might be happening?
local path structure
public\
src\
Dockerfile.client
docker-compose.yml
Dockerfile...
FROM node:16-alpine AS build-step
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json ./
COPY ./src ./src
COPY ./public ./public
RUN yarn install
RUN yarn build
FROM nginx:alpine
COPY --from=build-step /app/build /usr/share/nginx/html
COPY nginx/nginx.conf /etc/nginx/nginx.conf
docker-compose
client:
  build:
    context: .
    dockerfile: Dockerfile.client
  volumes:
    - ./src:/src
  restart: always
  ports:
    - "80:80"
  depends_on:
    - api
This is happening because you are building the application.
...
RUN yarn build
...
and then serving your build folder:
FROM nginx:alpine
COPY --from=build-step /app/build /usr/share/nginx/html
The nginx image serves static files that were baked in at build time, so mounting ./src into the running container changes nothing that nginx serves. I believe what you are looking for is live reload.
But basically what you need is a Dockerfile like this:
# Dockerfile
# Pull official Node.js image from Docker Hub
FROM node:12
# Create app directory
WORKDIR /usr/src/app
# Install dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
# Expose container port 3000
EXPOSE 3000
# Run "start" script in package.json
CMD ["npm", "start"]
your npm start script:
"start": "nodemon -L server/index.js"
and your volume:
volumes:
  - ./api:/usr/src/app/server
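And a minimal compose service to tie it together (a sketch; the service name and port are assumptions, while the command and volume come from the snippets above):
api:
  build: .
  command: npm start                # runs "nodemon -L server/index.js"
  ports:
    - "3000:3000"
  volumes:
    - ./api:/usr/src/app/server     # bind mount so nodemon sees host edits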