I'm getting a path error from require-ts and I have no idea how to determine what I'm missing. The app works fine locally using node ace serve but not within docker on a production server.
This is the error:
docker exec -it adonis_app node ace list:routes
TypeError
The "path" argument must be of type string. Received undefined
1 Cache.makeCachePath
/home/node/app/node_modules/@adonisjs/require-ts/build/src/Cache/index.js:41
2 Config.getCached
/home/node/app/node_modules/@adonisjs/require-ts/build/src/Config/index.js:139
3 Config.parse
/home/node/app/node_modules/@adonisjs/require-ts/build/src/Config/index.js:156
4 register
/home/node/app/node_modules/@adonisjs/require-ts/build/index.js:82
5 Object.registerForAdonis [as default]
/home/node/app/node_modules/@adonisjs/assembler/build/src/requireHook/index.js:17
6 registerTsHook
/home/node/app/node_modules/@adonisjs/core/build/src/utils/index.js:26
7 App.onFind
/home/node/app/node_modules/@adonisjs/core/build/src/Ignitor/Ace/App/index.js:132
This is my Dockerfile:
ARG NODE_IMAGE=node:16.13.1-alpine
FROM $NODE_IMAGE AS base
RUN apk --no-cache add dumb-init
RUN mkdir -p /home/node/app && chown node:node /home/node/app
WORKDIR /home/node/app
USER node
RUN mkdir tmp
FROM base AS dependencies
COPY --chown=node:node ./package*.json ./
COPY --chown=node:node ./tsconfig*.json ./
RUN npm ci
COPY --chown=node:node . .
FROM dependencies AS build
RUN node ace build --production --ignore-ts-errors
FROM base AS production
ENV NODE_ENV=production
ENV PORT=$PORT
ENV HOST=0.0.0.0
COPY --chown=node:node ./package*.json ./
RUN npm ci --production
COPY --chown=node:node --from=build /home/node/app/build .
EXPOSE $PORT
CMD [ "dumb-init", "node", "build/server.js" ]
And docker-compose:
version: '3'
services:
  adonis_app:
    container_name: adonis_app
    restart: always
    build:
      context: .
      #target: dependencies
    depends_on:
      - postgres
    ports:
      - ${PORT}:${PORT}
      - 9229:9229
    env_file:
      - .env
    volumes:
      - ./:/home/node/app
      - uploads:/home/node/app/public/uploads
    networks:
      - adonis
    #command: dumb-init node ace serve --watch --node-args="--inspect=0.0.0.0"
    #command: dumb-init node build/server.js --node-args="--inspect=0.0.0.0"
  postgres:
    image: postgres:13.1
    healthcheck:
      test: [ "CMD", "pg_isready", "-q", "-d", "postgres", "-U", "root" ]
      timeout: 45s
      interval: 10s
      retries: 10
    restart: always
    environment:
      - POSTGRES_USER=${ROOT_DB_USER}
      - POSTGRES_PASSWORD=${ROOT_DB_PASS}
      - APP_DB_USER=${PG_USER}
      - APP_DB_PASS=${PG_PASSWORD}
      - APP_DB_NAME=${PG_DB_NAME}
    volumes:
      #- ./db:/docker-entrypoint-initdb.d/
      - ./db:/var/lib/postgresql/data
    ports:
      - 54325:5432
    networks:
      - adonis
networks:
  adonis:
    driver: bridge
volumes:
  db:
    driver: local
  uploads:
    driver: local
I receive the same error when running any ace command, such as migrations and seeding. What am I missing?
I finally have a solution, after two hours of digging through the compiled Adonis code.
It turns out that Adonis uses a library called find-cache-dir.
In this setup it doesn't find any cache directory, so it returns undefined (the famous one).
The solution I opted for is to just specify the cache directory in the environment variables.
For example:
ENV CACHE_DIR /home/node/app/.cache/
Consult the library documentation if you want to specify the cache directory elsewhere.
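For instance, in the production stage of the Dockerfile above it could look like this (a sketch; the .cache path and the mkdir step are my own choices, not something Adonis mandates):
FROM base AS production
ENV NODE_ENV=production
# point find-cache-dir at a directory the node user can write to
ENV CACHE_DIR /home/node/app/.cache/
RUN mkdir -p /home/node/app/.cache
# ... rest of the stage unchanged
After rebuilding, docker exec -it adonis_app node ace list:routes should get past the cache lookup.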
EDIT: If the library returns undefined, it could also be because the node_modules directory is not writable; in that case, make it writable or move the cache directory.
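For the writability case, one option is to fix the ownership in the image (a sketch; only relevant when the files ended up root-owned, e.g. after a COPY without --chown):
# temporarily switch to root to hand node_modules to the node user
USER root
RUN chown -R node:node /home/node/app/node_modules
USER node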
Thanks for reading (this is my first answer).
Related
I am setting up Docker for development purposes, with a separate Dockerfile.dev each for a NextJS front-end and an ExpressJS-powered back-end. For the NextJS application, the volume mounting is done such that the application runs perfectly fine on the dev server in the container. However, since the node_modules directory on the host is empty, I enter the following command in a new terminal instance:
docker-compose exec frontend npm install
This gives me the modules I need so that I get normal linting while developing.
However, on starting up the Express backend, I had issues with installing mongodb from npm (module not found when running the container). So I resorted to the same strategy, running docker-compose exec backend npm install, and everything works. However, the node_modules directory on the host is still empty, which is not the same behaviour as with the frontend.
Why is this the case?
#Dockerfile.dev (frontend)
FROM node:19-alpine
WORKDIR "/app"
COPY ./package*.json .
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]
#Dockerfile.dev (backend)
FROM node:19-alpine
WORKDIR "/app"
RUN npm install -g nodemon
COPY ./package*.json .
RUN npm install
COPY . .
CMD ["npm", "run", "dev"]
docker-compose.yml:
version: '3'
services:
  server:
    build:
      dockerfile: Dockerfile.dev
      context: ./server
    volumes:
      - /app/node_modules
      - ./server:/app
    ports:
      - "5000:5000"
  client:
    build:
      dockerfile: Dockerfile.dev
      context: ./client
    volumes:
      - /app/node_modules
      - ./client:/app
    ports:
      - "3000:3000"
  mongodb:
    image: mongo:latest
    volumes:
      - /home/panhaboth/Apps/home-server/data:/data/db
    ports:
      - "27017:27017"
When I start up my Docker container, it can't find one of my installed dependencies. I've tried removing my build and node_modules folders, going into my Docker container's terminal, running yarn list, and searching for the dependency there (it is listed). I've also tried running yarn add <dependency> and it says it works, but it still throws the error: error TS2307: Cannot find module '<dependency>' or its corresponding type declarations.
It works fine locally, just not in Docker.
Here is my Dockerfile
FROM node:16-alpine AS dev
WORKDIR /app
COPY yarn.lock ./
COPY package.json ./
COPY . .
WORKDIR /app/api
RUN yarn install
RUN yarn build
EXPOSE 8000
CMD ["yarn", "dev"]
And my docker compose:
services:
  api:
    profiles: ['api', 'web']
    container_name: slots-api
    stdin_open: true
    build:
      context: ./
      dockerfile: api/Dockerfile
      target: dev
    environment:
      NODE_ENV: development
    ports:
      - 8000:8000
      - 9229:9229 # for debugging
    volumes:
      - ./api:/app/api
      - /app/node_modules/ # do not mount node_modules
      - /app/api/node_modules/ # do not mount node_modules
    depends_on:
      - database
    command: yarn dev
I have dockerized a NestJS application, but running it shows
Error: Error loading shared library /usr/src/app/node_modules/argon2/lib/binding/napi-v3/argon2.node: Exec format error
and sometimes it shows
Cannot find module 'webpack'
Strangely, it works fine on Windows, but the errors come up on macOS and Amazon Linux.
Dockerfile
###################
# BUILD FOR LOCAL DEVELOPMENT
###################
FROM node:16-alpine AS development
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm ci
COPY . .
###################
# BUILD FOR PRODUCTION
###################
FROM node:16-alpine AS build
WORKDIR /usr/src/app
COPY package*.json ./
COPY --from=development /usr/src/app/node_modules ./node_modules
COPY . .
RUN npm run build
ENV NODE_ENV production
RUN npm ci --only=production && npm cache clean --force
USER node
###################
# PRODUCTION
###################
FROM node:16-alpine AS production
COPY --from=build /usr/src/app/node_modules ./node_modules
COPY --from=build /usr/src/app/dist ./dist
CMD [ "node", "dist/main.js" ]
docker-compose.yml
version: '3.9'
services:
  api:
    build:
      dockerfile: Dockerfile
      context: .
      # Only will build development stage from our dockerfile
      target: development
    env_file:
      - .env
    volumes:
      - api-data:/usr/src/app
    # Run in dev mode: npm run start:dev
    command: npm run start:dev
    ports:
      - 3000:3000
    depends_on:
      - postgres
    restart: 'always'
    networks:
      - prism-network
  postgres:
    image: postgres:14-alpine
    environment:
      POSTGRES_DB: 'prism'
      POSTGRES_USER: 'postgres'
      POSTGRES_PASSWORD: 'mysecretpassword'
    volumes:
      - postgres-data:/var/lib/postgresql/data
    ports:
      - 5432:5432
    healthcheck:
      test:
        [
          'CMD-SHELL',
          'pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}',
        ]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - prism-network
networks:
  prism-network:
volumes:
  api-data:
  postgres-data:
I am stumped as to why it isn't working.
Change (or check) two things in your setup:
In your docker-compose.yml file, delete the volumes: block that overwrites your application's /usr/src/app directory
services:
  api:
    build: { ... }
    # volumes:                    <-- delete
    #   - api-data:/usr/src/app   <-- delete
volumes:
  # api-data:                     <-- delete
  postgres-data:                  # <-- keep
Create a .dockerignore file next to the Dockerfile, if you don't already have one, and make sure it includes the single line
node_modules
What's going on here? If you don't have the .dockerignore line, then the Dockerfile COPY . . line overwrites the node_modules tree from the RUN npm ci line with your host's copy of it; but if you have a different OS or architecture (for example, a Linux container on a Windows host) it can fail with the sort of error you show.
The volumes: block is a little more subtle. This causes Docker to create a named volume, and the contents of the volume replace the entire /usr/src/app tree in the image – in other words, you're running the contents of the volume and not the contents of the image. But the first time (and only the first time) you run the container, Docker copies the contents of the image into the volume. So it looks like you're running the image, and you have the same files, but they're actually coming out of the volume. If you change the image the volume does not get updated, so you're still running the old code.
Without the volumes: block, you're running the code out of the image, which is a standard Docker setup. You shouldn't need volumes: unless your application needs to store persistent data (as your database container does), or for a couple of other specialized needs like injecting configuration files or reading out logs.
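One practical consequence of that copy-on-first-use behaviour: if you have already started the container with the volumes: block in place, the stale volume keeps coming back until you delete it. Assuming the compose file above, something like this should clear it out:
# stop the containers and remove the named volumes declared in the compose file
docker-compose down -v
# rebuild the image and start fresh
docker-compose up --build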
Try this Dockerfile
FROM node:16-alpine
WORKDIR /usr/src/app
COPY yarn.lock ./
COPY package.json ./
RUN yarn install
COPY . .
RUN yarn build
CMD [ "node", "dist/main.js" ]
docker-compose.yml
version: "3.7"
services:
  service_name:
    container_name: orders_service
    image: service_name:latest
    build: .
    env_file:
      - .env
    ports:
      - "3001:3001"
    volumes:
      - .:/data
      - /data/node_modules
I wouldn't delete the volumes, because of the hot reloading.
Try this in the volumes section to be able to save (persist) data:
volumes:
  - /usr/src/app/node_modules
  - .:/usr/src/app
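Applied to the api service from the question, that would look roughly like this (a sketch; it swaps the named api-data volume for a bind mount plus an anonymous node_modules volume):
services:
  api:
    build:
      dockerfile: Dockerfile
      context: .
      target: development
    command: npm run start:dev
    volumes:
      - .:/usr/src/app            # bind-mount the source for hot reload
      - /usr/src/app/node_modules # keep the image's node_modules
The anonymous volume shadows node_modules inside the bind mount, so the container keeps the modules installed during the image build.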
I am building a NestJS app using docker-compose.
The problem is that when I try "docker-compose up prod", it shows "Error: Cannot find module '/usr/src/app/dist/main'."
So I explored the files in the prod image, and I could find the dist folder there. Also, when I run dist/main directly, it works. However, when I try docker-compose up prod, it shows the above error.
Moreover, "docker-compose up dev" works perfectly, creating a dist folder on the host machine. The main difference between dev and prod is the command: dev uses npm run start:dev, while prod uses npm run start:prod.
This is my Dockerfile
FROM node:12.19.0-alpine3.9 AS development
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install rimraf
RUN npm install --only=development
COPY . .
RUN npm run build
FROM node:12.19.0-alpine3.9 AS production
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY . .
COPY --from=development /usr/src/app/dist ./dist
CMD ["node", "dist/main"]
This is my docker-compose.yaml
services:
  proxy:
    image: nginx:latest # use the latest version of Nginx
    container_name: proxy # the container's name is proxy
    ports:
      - '80:80' # map port 80 between host and container
    networks:
      - nestjs-network
    volumes:
      - ./proxy/nginx.conf:/etc/nginx/nginx.conf # mount the nginx config file
    restart: 'unless-stopped' # restart if the container dies from an internal error
    depends_on:
      - prod
  dev:
    container_name: nestjs_api_dev
    image: nestjs-api-dev:1.0.0
    build:
      context: .
      target: development
      dockerfile: ./Dockerfile
    command: npm run start:dev #node dist/src/main #n
    ports:
      - 3001:3000
    networks:
      - nestjs-network
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    restart: unless-stopped
  prod:
    container_name: nestjs_api_prod
    image: nestjs-api-prod:1.0.0
    build:
      context: .
      target: production
      dockerfile: ./Dockerfile
    command: npm run start:prod
    # ports:
    #   - 3000:3000
    #   - 9229:9229
    expose:
      - '3000' # open port 3000 to other containers
    networks:
      - nestjs-network
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    restart: unless-stopped
networks:
  nestjs-network:
OK... I found the solution.
In docker-compose.yaml, .:/usr/src/app should be removed from the volumes of the prod service. Since the dist folder does not exist on the local machine, mounting the current local directory over /usr/src/app hides the image's files, which produces the Not found error. Guess I should study volumes much more deeply.
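For reference, the prod service then looks like this (a sketch of the compose file above with the bind mount removed; the anonymous node_modules volume also becomes unnecessary once nothing masks the image's files):
prod:
  container_name: nestjs_api_prod
  image: nestjs-api-prod:1.0.0
  build:
    context: .
    target: production
    dockerfile: ./Dockerfile
  command: npm run start:prod
  expose:
    - '3000'
  networks:
    - nestjs-network
  restart: unless-stopped
  # no volumes: the container runs the dist folder baked into the image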
Using the following tutorials:
Dockerizing create-react-app
Developing microservices - Node, react & docker
I have been able to convert my Node.js app into dockerized microservices, which are up and running and connecting to services. However, my app uses Sqlite/Sequelize, and this was working perfectly prior to dockerizing.
With the new setup, I get this error:
/usr/src/app/node_modules/sequelize/lib/dialects/sqlite/connection-manager.js:31
throw new Error('Please install sqlite3 package manually');
Error: Please install sqlite3 package manually at new ConnectionManager
(/usr/src/app/node_modules/sequelize/lib/dialects/sqlite/connection-manager.js:31:15)
My questions are:
Is it possible to use Sqlite3 with Docker?
If so, is anyone able to share a sample docker-compose.yml and Dockerfile combo that works for this, please?
My docker-compose.yml
version: '3.5'
services:
  user-service:
    container_name: user-service
    build: ./services/user/
    volumes:
      - './services/user:/usr/src/app'
      - './services/user/package.json:/usr/src/package.json'
    ports:
      - '9000:9000' # expose ports - HOST:CONTAINER
  web-service:
    container_name: web-service
    build:
      context: ./services/web
      dockerfile: Dockerfile
    volumes:
      - './services/web:/usr/src/app'
      - '/usr/src/app/node_modules'
    ports:
      - '3000:3000' # expose ports - HOST:CONTAINER
    environment:
      - NODE_ENV=development
    depends_on:
      - user-service
My user/ Dockerfile
FROM node:latest
# set working directory
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
# add `/usr/src/node_modules/.bin` to $PATH
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# install and cache app dependencies
ADD package.json /usr/src/package.json
RUN npm install
# start app
CMD ["npm", "start"]
My web/ Dockerfile
FROM node:latest
# set working directory
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
# add `/usr/src/app/node_modules/.bin` to $PATH
ENV PATH /usr/src/app/node_modules/.bin:$PATH
# install and cache app dependencies
COPY package.json /usr/src/app/package.json
RUN npm install
RUN npm install react-scripts@1.1.4
RUN npm install gulp -g
# start app
CMD ["npm", "start"]
Many thanks.
Got it. The issue was that my local node_modules were being copied into the container. Hence, in sqlite3's lib/binding directory, node-v57-darwin-x64 was there instead of what is expected, node-v57-linux-x64. Hence the mess.
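A quick way to confirm which prebuilt binding ended up inside a running container (a sketch; adjust the path if your node_modules lives elsewhere):
# expect node-v57-linux-x64 inside the container, not node-v57-darwin-x64
docker-compose exec user-service ls node_modules/sqlite3/lib/binding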
I updated the Dockerfiles and docker-compose.yml as follows:
My docker-compose.yml
services:
  user-service:
    container_name: user-service
    build:
      context: ./services/user/
      dockerfile: Dockerfile
    volumes:
      - './services/user:/usr/src/app'
      - '/usr/src/node_modules'
    ports:
      - '9000:9000' # expose ports - HOST:CONTAINER
  web-service:
    container_name: web-service
    build:
      context: ./services/web/
      dockerfile: Dockerfile
    volumes:
      - './services/web:/usr/src/app'
      - '/usr/src/app/node_modules'
    ports:
      - '3000:3000' # expose ports - HOST:CONTAINER
    environment:
      - NODE_ENV=development
    depends_on:
      - user-service
My user/ Dockerfile
FROM node:latest
# set working directory
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
# add `/usr/src/node_modules/.bin` to $PATH
ENV PATH /usr/src/node_modules/.bin:$PATH
# install and cache app dependencies
ADD package.json /usr/src/package.json
RUN npm install
# start app
CMD ["npm", "start"]
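As a complementary safeguard (my suggestion, not part of the original fix), a .dockerignore next to each Dockerfile keeps host-built binaries out of the build context entirely:
node_modules
With that in place, COPY/ADD can never drag the host's darwin binaries into a Linux image, even if you later copy the whole source tree.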
Helpful posts
Getting npm packages to be installed with docker-compose