Docker Compose fails on the production server but not on my PC - Node.js

I have developed a Node.js application that needs to connect to MongoDB, so I implemented a wait-for.sh script. When I run the entire application in Docker on my computer everything goes well, but when I run it on my DigitalOcean Ubuntu 20.04 server it crashes with the following error:
Node.js v17.4.0
/usr/app/wait-for.sh:3
# original script: https://github.com/eficode/wait-for/blob/master/wait-for
^
SyntaxError: Invalid or unexpected token
at Object.compileFunction (node:vm:352:18)
at wrapSafe (node:internal/modules/cjs/loader:1026:15)
at Module._compile (node:internal/modules/cjs/loader:1061:27)
at Object.Module._extensions..js (node:internal/modules/cjs/loader:1149:10)
at Module.load (node:internal/modules/cjs/loader:975:32)
at Function.Module._load (node:internal/modules/cjs/loader:822:12)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:77:12)
at node:internal/main/run_main_module:17:47
This is the Dockerfile:
FROM node:17-alpine
WORKDIR /usr/app
COPY package.json .
RUN npm install --quiet
COPY . .
COPY wait-for.sh wait-for.sh
RUN chmod +x wait-for.sh
# Delete Windows line endings
RUN apt-get install -y dos2unix # Installs dos2unix
RUN find . -type f -exec dos2unix {} \; # recursively converts all files
The docker-compose file:
version: '3.3'
services:
  monitor:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: app
    restart: unless-stopped
    volumes:
      - .:/usr/app/
      - /usr/app/node_modules
    networks:
      - app-network
    command: ./wait-for.sh db:27017 -- node index.js
  db:
    image: mongo
    container_name: db
    restart: unless-stopped
    volumes:
      - dbdata:/data/db
    networks:
      - app-network
networks:
  app-network:
    driver: bridge
volumes:
  dbdata:
  node_modules:
And here is the link to the wait-for script:
wait for script
I don't know why this is happening.
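Two things are worth checking here (both are assumptions, since the script contents and the host files aren't shown). First, node:17-alpine images ship with apk rather than apt-get, so the `RUN apt-get install -y dos2unix` line would fail at build time on that base image. Second, even if dos2unix runs during the build, the bind mount `.:/usr/app/` in the compose file re-covers the image's files with the host's copies at runtime, so a wait-for.sh with Windows (CRLF) line endings on the server would come back. Stripping carriage returns can be demonstrated like this (the file name is just for illustration):

```shell
# Write a throwaway script with Windows (CRLF) line endings,
# then strip the carriage returns with sed (dos2unix does the
# same job where it is available)
printf '#!/bin/sh\r\necho ok\r\n' > wait-for-demo.sh
sed -i 's/\r$//' wait-for-demo.sh
sh wait-for-demo.sh
# prints: ok
```

Running the converted script on the host before building (or dropping the bind mount) keeps the build-time conversion from being undone.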

Related

Why does docker compose behave differently?

I have a NestJS project that uses TypeORM with a MySQL database.
I dockerized it using docker compose, and everything works fine on my machine (Mac).
But when I run it on my remote instance (Ubuntu 22.04) I get the following error:
server | yarn run v1.22.19
server | $ node dist/main
server | node:internal/modules/cjs/loader:998
server | throw err;
server | ^
server |
server | Error: Cannot find module '/usr/src/app/dist/main'
server | at Module._resolveFilename (node:internal/modules/cjs/loader:995:15)
server | at Module._load (node:internal/modules/cjs/loader:841:27)
server | at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
server | at node:internal/main/run_main_module:23:47 {
server | code: 'MODULE_NOT_FOUND',
server | requireStack: []
server | }
server |
server | Node.js v18.12.0
server | error Command failed with exit code 1.
server | info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
server exited with code 1
Here is my Dockerfile:
FROM node:18-alpine AS development
# Create app directory
WORKDIR /usr/src/app
# Copy files needed for dependencies installation
COPY package.json yarn.lock ./
# Disable postinstall script that tries to install husky
RUN npx --quiet pinst --disable
# Install app dependencies
RUN yarn install --pure-lockfile
# Copy all files
COPY . .
# Increase the memory limit to be able to build
ENV NODE_OPTIONS=--max_old_space_size=4096
ENV GENERATE_SOURCEMAP=false
# Build the app
RUN yarn build
FROM node:18-alpine AS production
# Set env to production
ENV NODE_ENV=production
# Create app directory
WORKDIR /usr/src/app
# Copy files needed for dependencies installation
COPY package.json yarn.lock ./
# Disable postinstall script that tries to install husky
RUN npx --quiet pinst --disable
# Install app dependencies
RUN yarn install --production --pure-lockfile
# Copy all files
COPY . .
# Copy dist folder generated in development stage
COPY --from=development /usr/src/app/dist ./dist
# Entrypoint command
CMD ["node", "dist/main"]
And here is my docker-compose.yml file:
version: "3.9"
services:
  server:
    container_name: blognote_server
    image: bladx/blognote-server:latest
    build:
      context: .
      dockerfile: ./Dockerfile
      target: production
    environment:
      RDS_HOSTNAME: ${MYSQL_HOST}
      RDS_USERNAME: ${MYSQL_USER}
      RDS_PASSWORD: ${MYSQL_PASSWORD}
      JWT_SECRET: ${JWT_SECRET}
    command: yarn start:prod
    ports:
      - "3000:3000"
    networks:
      - blognote-network
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    links:
      - mysql
    depends_on:
      - mysql
    restart: unless-stopped
  mysql:
    container_name: blognote_database
    image: mysql:8.0
    command: mysqld --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
    ports:
      - "3306:3306"
    networks:
      - blognote-network
    volumes:
      - blognote_mysql_data:/var/lib/mysql
    restart: unless-stopped
networks:
  blognote-network:
    external: true
volumes:
  blognote_mysql_data:
Here is what I tried:
I cleaned everything on my machine and then ran docker compose --env-file .env.docker up, and this worked.
I ran my server image using docker (not docker compose) and it worked too.
I made a snapshot, connected to it, and ran node dist/main manually, and this also worked.
So I don't know why I'm still getting this error.
And why do I get different behavior using docker compose (on my remote instance)?
Am I missing something?
Your docker-compose.yml contains two lines that hide everything the image does:
volumes:
  # Replace the image's `/usr/src/app`, including the built
  # files, with content from the host.
  - .:/usr/src/app
  # But: the `node_modules` directory is user-provided content
  # that needs to be persisted separately from the container
  # lifecycle. Keep that tree in an anonymous volume and never
  # update it, even if it changes in the image or the host.
  - /usr/src/app/node_modules
You should delete this entire block.
You'll often see volumes: blocks like this that try to simulate a local-development environment in an otherwise isolated Docker container. This works only if the Dockerfile does nothing but COPY the source code into the image, without modifying it at all, and the node_modules library tree never changes.
In your case, the Dockerfile produces a /usr/src/app/dist directory in the image which may not be present on the host. Since the first bind mount hides everything in the image's /usr/src/app directory, you don't get to see this built tree; and your image is directly running node on that built application and not trying to simulate a local development environment. The volumes: don't make sense here and cause problems.
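Concretely, the same service without the two mounts might look like this (a sketch; the rest of the compose file is unchanged):

```yaml
services:
  server:
    container_name: blognote_server
    image: bladx/blognote-server:latest
    build:
      context: .
      dockerfile: ./Dockerfile
      target: production
    command: yarn start:prod
    ports:
      - "3000:3000"
    # no volumes: block -- the built /usr/src/app/dist and the
    # node_modules tree baked into the image are used as-is
```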

Docker container can't find my node build directory, how do I fix it?

I'm using Docker Desktop and I'm working with a Nest.js app. When I build my project with docker-compose build everything is fine, since there are no errors in the console; however, when I run docker-compose up -d my container keeps failing because it can't find the build directory of my app. The strange thing is that this works perfectly fine on my Windows computer, but my macOS laptop is the one that's failing:
Error: Cannot find module '/tmp/dist/main'
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:885:15)
at Function.Module._load (internal/modules/cjs/loader.js:730:27)
at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:72:12)
at internal/main/run_main_module.js:17:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
This is my Dockerfile:
FROM node:14.17.0-alpine
WORKDIR /tmp
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 3005
ENV NODE_TLS_REJECT_UNAUTHORIZED=0
# Run it
ENTRYPOINT ["node", "/tmp/dist/main"]
This is my docker-compose file; the folder structure of the project is pretty basic, and after I run the npm run build command a dist folder is created at the root of my project.
version: '3'
services:
  my-api:
    build: ./my-api
    container_name: 'my-api'
    restart: always
    environment:
      NODE_ENV: "docker-compose"
      APP_PORT: 3005
    ports:
      - "3005:3005"
      - 9229:9229
    depends_on:
      - redis
      - mysql
As you set WORKDIR /tmp, you are executing commands in that directory. https://docs.docker.com/engine/reference/builder/#workdir
And from what you provided there is no other tmp directory inside your current working directory.
Try changing the last command to
ENTRYPOINT ["node", "dist/main"]
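For context (a sketch, assuming the build really emits a dist folder inside the image): a relative path passed to node is resolved against the process's working directory, which WORKDIR sets, so the relative form always points at the build output next to the sources:

```dockerfile
WORKDIR /tmp
# `dist/main` is resolved by node against the working directory,
# i.e. it refers to /tmp/dist/main here
ENTRYPOINT ["node", "dist/main"]
```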

Error: Cannot find module '/usr/src/nuxt-app/nuxt'

I get Error: Cannot find module '/usr/src/nuxt-app/nuxt' when I try to run the app on the server. I didn't change anything in the Dockerfile or the CircleCI config; it had been working before, and I don't know what happened. The image is built by CircleCI. Locally, without Docker, everything works as it should. What do I do?
Error:
throw err;
^
Error: Cannot find module '/usr/src/nuxt-app/nuxt'
at Function.Module._resolveFilename (internal/modules/cjs/loader.js:880:15)
at Function.Module._load (internal/modules/cjs/loader.js:725:27)
at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:72:12)
at internal/main/run_main_module.js:17:47 {
code: 'MODULE_NOT_FOUND',
requireStack: []
}
Dockerfile:
FROM node:lts-alpine
RUN mkdir -p /usr/src/nuxt-app
WORKDIR /usr/src/nuxt-app
RUN apk update && apk upgrade
RUN apk add python make g++
ADD . /usr/src/nuxt-app/
RUN npm install
RUN npm run build
EXPOSE 5002
CMD [ "nuxt", "start" ]
circleci config.yml:
version: 2.1
orbs:
  docker: circleci/docker@1.5.0
  node: circleci/node@4.1.0
workflows:
  build-deploy:
    jobs:
      - deploy:
          context:
            - docker
          requires:
            - build
          filters:
            branches:
              only: master
jobs:
  deploy:
    machine: true
    steps:
      - checkout
      - docker/install-docker-tools
      - run:
          name: Login to Docker
          command: docker login -u=$DOCKER_LOGIN -p=$DOCKER_PASSWORD registry.xxx.com
      - docker/build:
          image: yyy
          registry: registry.xxx.com
          tag: latest
      - docker/push:
          image: yyy
          registry: registry.xxx.com
          tag: latest
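A common workaround for this kind of error, assuming nuxt is a local dependency of the project rather than a global install (package.json isn't shown), is to launch it through an npm script, since npm run puts node_modules/.bin on the PATH:

```dockerfile
# Assumes package.json defines a script such as "start": "nuxt start";
# `npm run` prepends node_modules/.bin to PATH, so the locally
# installed nuxt binary is found without a global install
CMD [ "npm", "run", "start" ]
```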

docker-compose: Cannot find module 'typeorm'

This may be a duplicate question, but I'm not able to figure it out. I'm trying to dockerize the following:
Postgres + Node.js (Express, server) + Angular (dashboard)
Both projects produce files in a /dist folder on build.
I can build and run the projects individually.
My directory structure:
/root
  docker-compose.yml
  dashboard
  server
Here are docker files for server and frontend projects:
dashboard/Dockerfile
# Stage 1 - Build project
FROM node:latest as builder
WORKDIR /app
# Whatever directory you can use
COPY . /app/
RUN npm install
RUN npm run build --prod
# Stage 2 - Deploy in nginx
FROM nginx:alpine
RUN rm -rf /usr/share/nginx/html/*
COPY --from=builder /app/dist/ /usr/share/nginx/html/
COPY ./nginx/nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
server/Dockerfile
FROM node:12
WORKDIR /app
# Adding package.json first:
# if package.json changes, Docker re-runs the npm install step;
# otherwise Docker uses the cache and skips it.
COPY package*.json /app/
RUN npm install
# Copy source code to docker image
COPY ./dist/* /app/
COPY .env.docker /app/.env
EXPOSE 3000
CMD node server.js
docker-compose
# Specify docker compose version
version: "3.7"
# Specify all the services we want in the container
services:
  db:
    # Type of database
    image: postgres
    # Pass values to database
    environment:
      POSTGRES_DB: 'somedb'
      POSTGRES_USER: 'someuser'
      POSTGRES_PASSWORD: 'somepassword'
    volumes:
      # Map postgres's data directory to a local one
      - ./pgdata:/var/lib/postgresql/data
    ports:
      - '5432:5432'
  # node server
  server:
    build: server
    links:
      - db
    ports:
      - '3000:3000'
    volumes:
      - ./server/dist:/app
  # front end
  dashboard:
    build: dashboard
    depends_on:
      - server
    ports:
      - '80:80'
    volumes:
      - ./dashboard/angular/dist:/app
I tried the following commands based on different suggestions:
docker-compose build
docker-compose down
docker-compose up
OR
docker-compose up --build
In both cases, I get the following error:
db_1 | The files belonging to this database system will be owned by user "postgres".
db_1 | This user must also own the server process.
db_1 |
server_1 | internal/modules/cjs/loader.js:834
server_1 | throw err;
server_1 | ^
server_1 |
server_1 | Error: Cannot find module 'typeorm'
server_1 | Require stack:
server_1 | - /app/server.js
server_1 | at Function.Module._resolveFilename (internal/modules/cjs/loader.js:831:15)
server_1 | at Function.Module._load (internal/modules/cjs/loader.js:687:27)
server_1 | at Module.require (internal/modules/cjs/loader.js:903:19)
server_1 | at require (internal/modules/cjs/helpers.js:74:18)
server_1 | at Object.<anonymous> (/app/server.js:15:19)
server_1 | at Module._compile (internal/modules/cjs/loader.js:1015:30)
server_1 | at Object.Module._extensions..js (internal/modules/cjs/loader.js:1035:10)
server_1 | at Module.load (internal/modules/cjs/loader.js:879:32)
server_1 | at Function.Module._load (internal/modules/cjs/loader.js:724:14)
server_1 | at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:60:12) {
server_1 | code: 'MODULE_NOT_FOUND',
server_1 | requireStack: [ '/app/server.js' ]
server_1 | }
db_1 | The database cluster will be initialized with locale "en_US.utf8".
db_1 | The default database encoding has accordingly been set to "UTF8".
db_1 | The default text search configuration will be set to "english".
When I build server individually, I don't get this error.
What am I doing wrong?
Any more information required?
You need to include typeorm in your package.json file, and also run docker-compose build before you run your server with docker-compose up.
I had a similar problem. I solved it by adding an explicit "typeorm" dependency in package.json, like:
"dependencies": {
  "typeorm": "^0.2.29",
  ...
}
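The bind mount in the compose file may also matter here: `./server/dist:/app` replaces the image's /app at runtime, including the node_modules tree that `npm install` created during the build, which on its own would make every dependency appear missing. A sketch of the server service without that mount:

```yaml
services:
  server:
    build: server
    links:
      - db
    ports:
      - '3000:3000'
    # no volumes: entry -- keep the image's /app, which holds
    # both the copied dist files and node_modules
```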

Running a nodejs app on a docker container gives " Error: Cannot find module '/usr/src/app/nodemon' "

Here is my Dockerfile which is at the root of the nodejs application.
# Build from LTS version of node (version 12)
FROM node:12
# Create app directory
RUN mkdir -p /usr/src/app
# Define app diretory inside image
WORKDIR /usr/src/app
# package.json AND package-lock.json are copied where available
COPY package*.json /usr/src/app/
# install modules
RUN npm install
# Bundle app source
COPY . /usr/src/app
# Bind app to port 3000
EXPOSE 3000
# Command to run app
CMD [ "nodemon", "./bin/www" ]
Here is my docker-compose.yml file
version: '2'
services:
  mongo:
    container_name: mongo
    image: 'mongo:3.4.1'
    ports:
      - "27017:27017"
  backend-app:
    container_name: school-backend
    restart: always
    build: ./server
    ports:
      - "3000:3000"
  frontend-app:
    container_name: angular-frontend
    restart: always
    build: ./angular-app
    ports:
      - "4200:4200"
I execute the command docker-compose up
Then I get this error
school-backend | Error: Cannot find module '/usr/src/app/nodemon'
school-backend | at Function.Module._resolveFilename (internal/modules/cjs/loader.js:966:15)
school-backend | at Function.Module._load (internal/modules/cjs/loader.js:842:27)
school-backend | at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:71:12)
school-backend | at internal/main/run_main_module.js:17:47 {
school-backend | code: 'MODULE_NOT_FOUND',
school-backend | requireStack: []
school-backend | }
In the Dockerfile, I copy package.json to the working directory /usr/src/app.
Then I run npm install, which should install nodemon since it is declared in package.json.
But why is the module reported as missing?
It's not installed globally, then.
In this case, you have to call the nodemon binary inside node_modules: ./node_modules/nodemon/bin/nodemon.js.
Or you can use npx, like this: CMD [ "npx", "nodemon", "./bin/www" ].
npx runs programs from the node_modules/.bin directory.
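That lookup behavior can be simulated outside Docker: a binary in node_modules/.bin wins over anything on the global PATH once that directory is searched first, which is what npx arranges. (The "demo" layout below is hypothetical, not the asker's project.)

```shell
# Fake a locally installed binary under node_modules/.bin
mkdir -p demo/node_modules/.bin
printf '#!/bin/sh\necho local-nodemon\n' > demo/node_modules/.bin/nodemon
chmod +x demo/node_modules/.bin/nodemon
# Putting node_modules/.bin first on PATH makes the local
# binary resolve, just as npx does for locally installed tools
(cd demo && PATH="$PWD/node_modules/.bin:$PATH" nodemon)
# prints: local-nodemon
```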
