Why does docker compose have a different behavior? - node.js

I have a NestJS project that uses TypeORM with a MySQL database.
I dockerized it using docker compose, and everything works fine on my machine (Mac).
But when I run it on my remote instance (Ubuntu 22.04) I get the following error:
server | yarn run v1.22.19
server | $ node dist/main
server | node:internal/modules/cjs/loader:998
server | throw err;
server | ^
server |
server | Error: Cannot find module '/usr/src/app/dist/main'
server | at Module._resolveFilename (node:internal/modules/cjs/loader:995:15)
server | at Module._load (node:internal/modules/cjs/loader:841:27)
server | at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
server | at node:internal/main/run_main_module:23:47 {
server | code: 'MODULE_NOT_FOUND',
server | requireStack: []
server | }
server |
server | Node.js v18.12.0
server | error Command failed with exit code 1.
server | info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
server exited with code 1
Here is my Dockerfile:
FROM node:18-alpine AS development
# Create app directory
WORKDIR /usr/src/app
# Copy files needed for dependencies installation
COPY package.json yarn.lock ./
# Disable postinstall script that tries to install husky
RUN npx --quiet pinst --disable
# Install app dependencies
RUN yarn install --pure-lockfile
# Copy all files
COPY . .
# Increase the memory limit to be able to build
ENV NODE_OPTIONS=--max_old_space_size=4096
ENV GENERATE_SOURCEMAP=false
# Build the app
RUN yarn build
FROM node:18-alpine AS production
# Set env to production
ENV NODE_ENV=production
# Create app directory
WORKDIR /usr/src/app
# Copy files needed for dependencies installation
COPY package.json yarn.lock ./
# Disable postinstall script that tries to install husky
RUN npx --quiet pinst --disable
# Install app dependencies
RUN yarn install --production --pure-lockfile
# Copy all files
COPY . .
# Copy dist folder generated in development stage
COPY --from=development /usr/src/app/dist ./dist
# Entrypoint command
CMD ["node", "dist/main"]
And here is my docker-compose.yml file:
version: "3.9"
services:
  server:
    container_name: blognote_server
    image: bladx/blognote-server:latest
    build:
      context: .
      dockerfile: ./Dockerfile
      target: production
    environment:
      RDS_HOSTNAME: ${MYSQL_HOST}
      RDS_USERNAME: ${MYSQL_USER}
      RDS_PASSWORD: ${MYSQL_PASSWORD}
      JWT_SECRET: ${JWT_SECRET}
    command: yarn start:prod
    ports:
      - "3000:3000"
    networks:
      - blognote-network
    volumes:
      - .:/usr/src/app
      - /usr/src/app/node_modules
    links:
      - mysql
    depends_on:
      - mysql
    restart: unless-stopped
  mysql:
    container_name: blognote_database
    image: mysql:8.0
    command: mysqld --default-authentication-plugin=mysql_native_password
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_USER: ${MYSQL_USER}
      MYSQL_PASSWORD: ${MYSQL_PASSWORD}
      MYSQL_DATABASE: ${MYSQL_DATABASE}
    ports:
      - "3306:3306"
    networks:
      - blognote-network
    volumes:
      - blognote_mysql_data:/var/lib/mysql
    restart: unless-stopped
networks:
  blognote-network:
    external: true
volumes:
  blognote_mysql_data:
Here is what I tried to do:
I cleaned everything on my machine and then ran docker compose --env-file .env.docker up, but this did not work.
I ran my server image using docker (not docker compose) and it worked.
I tried making a snapshot, connecting to it, and running node dist/main manually, and this also worked.
So I don't know why I'm still getting this error.
And why do I get different behavior using docker compose (on my remote instance)?
Am I missing something?

Your docker-compose.yml contains two lines that hide everything the image does:
volumes:
  # Replace the image's `/usr/src/app`, including the built
  # files, with content from the host.
  - .:/usr/src/app
  # But: the `node_modules` directory is user-provided content
  # that needs to be persisted separately from the container
  # lifecycle. Keep that tree in an anonymous volume and never
  # update it, even if it changes in the image or the host.
  - /usr/src/app/node_modules
You should delete this entire block.
You will see volumes: blocks like this that try to simulate a local-development environment in an otherwise isolated Docker container. This works only if the Dockerfile does nothing more than COPY the source code into the image, without modifying it at all, and the node_modules library tree never changes.
In your case, the Dockerfile produces a /usr/src/app/dist directory in the image which may not be present on the host. Since the first bind mount hides everything in the image's /usr/src/app directory, you don't get to see this built tree; and your image is directly running node on that built application and not trying to simulate a local development environment. The volumes: don't make sense here and cause problems.
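For reference, the server service with the volumes: block removed might look like this (a sketch based on the compose file in the question; everything else is left unchanged):

```yaml
services:
  server:
    container_name: blognote_server
    image: bladx/blognote-server:latest
    build:
      context: .
      dockerfile: ./Dockerfile
      target: production
    environment:
      RDS_HOSTNAME: ${MYSQL_HOST}
      RDS_USERNAME: ${MYSQL_USER}
      RDS_PASSWORD: ${MYSQL_PASSWORD}
      JWT_SECRET: ${JWT_SECRET}
    command: yarn start:prod
    ports:
      - "3000:3000"
    networks:
      - blognote-network
    # no volumes: here -- the image already contains dist/ and node_modules
    links:
      - mysql
    depends_on:
      - mysql
    restart: unless-stopped
```

With the bind mounts gone, the container runs the dist/ tree baked into the image, so the behavior no longer depends on what happens to be in the host directory.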

Related

Have node_modules in host

I have a NuxtJS project (which obviously contains a node_modules directory) running on Docker.
The main point is that I don't want to install npm on my host (dev) machine. I want my container to run npm install itself and put the node_modules onto my dev machine, but I can't figure it out...
This is my docker-compose.yml:
version: '3.7'
services:
  nuxtjs:
    build:
      context: .
      dockerfile: docker/nuxtjs/Dockerfile
    environment:
      API_BASE_URL: ${API_BASE_URL}
    ports:
      - "8000:80"
    container_name: ${NUXTJS_CONTAINER_NAME}
docker-compose.dev.yml (which is used on my dev machine):
version: '3.7'
services:
  nuxtjs:
    volumes:
      - ./front:/usr/src/app
      - /usr/src/app/node_modules
And the dockerfile:
FROM node:lts
# create destination directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# copy the app, note .dockerignore
COPY ./front /usr/src/app
RUN yarn
# build necessary, even if no static files are needed,
# since it builds the server as well
RUN yarn build
# expose 80 on container
EXPOSE 80
# set app serving to permissive / assigned
ENV NUXT_HOST=0.0.0.0
# set app port
ENV NUXT_PORT=80
# start the app
CMD [ "yarn", "dev" ]
The node_modules directory is created, but nothing is installed into it.

Unable to enable hot reload for React on docker

I'm new to docker so I'm sure I'm missing something.
I'm trying to create a container with a react app. I'm using docker on a Windows 10 machine.
This is my docker file
FROM node:latest
EXPOSE 3000
ENV PATH /app/node_modules/.bin:$PATH
# install app dependencies
COPY package.json ./
COPY package-lock.json ./
RUN npm install --silent
RUN npm install react-scripts#3.4.1 -g --silent
COPY . /app
WORKDIR /app
CMD ["npm","run", "start"]
and this is my docker compose
version: '3.7'
services:
  sample:
    container_name: prova-react1
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - '.:/app'
      - '/app/node_modules'
    ports:
      - 3000:3000
    environment:
      - CHOKIDAR_USEPOLLING=true
      - COMPOSE_CONVERT_WINDOWS_PATHS=1
When I start the container and open the browser, everything works fine, but when I go back to Visual Studio Code and modify and save the files, nothing happens in the container or on the website.

Django/react Dockerfile stages not working together

I have a multistage Dockerfile for a Django/React app that is creating the following error upon running docker-compose up --build:
backend_1 | File "/code/myapp/manage.py", line 17
backend_1 | ) from exc
backend_1 | ^
backend_1 | SyntaxError: invalid syntax
backend_1 exited with code 1
As it stands now, only the frontend container can run with the two below files:
Dockerfile:
FROM python:3.7-alpine3.12
ENV PYTHONUNBUFFERED=1
RUN mkdir /code
WORKDIR /code
COPY . /code
RUN pip install -r ./myapp/requirements.txt
FROM node:10
RUN mkdir /app
WORKDIR /app
# Copy the package.json file into our app directory
# COPY /myapp/frontend/package.json /app
COPY /myapp/frontend /app
# Install any needed packages specified in package.json
RUN npm install
EXPOSE 3000
# COPY /myapp/frontend /app
# COPY /myapp/frontend/src /app
CMD npm start
docker-compose-yml:
version: "2.0"
services:
  backend:
    build: .
    command: python /code/myapp/manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    networks:
      - reactdrf
  web:
    build: .
    depends_on:
      - backend
    restart: always
    ports:
      - "3000:3000"
    stdin_open: true
    networks:
      - reactdrf
networks:
  reactdrf:
Project structure (relevant parts):
project (top level directory)
  api (the django backend)
  frontend
    public
    src
    package.json
  myapp
    manage.py
    requirements.txt
  docker-compose.yml
  Dockerfile
The interesting thing is that when I comment out either the backend or the frontend part of the Dockerfile or docker-compose.yml, the remaining part runs fine, i.e. each service runs perfectly well on its own with its given syntax. Only when combined do I get the error above.
I thought this might be a docker network issue (as in the containers couldn't see each other) but the error seems more like a location issue yet each service can run fine on their own?
Not sure which direction to take this, any insight appreciated.
Update:
I had misconfigured my project; here's what I did to resolve it:
Separated the frontend and backend into their own Dockerfiles, so that the first part of my Dockerfile above (the backend) is all that remains in the root directory, and the rest lives in its own Dockerfile in /frontend.
Updated the docker-compose.yml web service build path to point to that new location:
...
  web:
    build: ./myapp/frontend
...
It now builds and sends data from the backend to the frontend.
What I learned: multiple Dockerfiles for multiple services, glued together in docker-compose.yml, instead of combining them into one Dockerfile.
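As a sketch of that layout (paths assumed from the project structure described above), each service points at its own Dockerfile and Compose glues them together:

```yaml
services:
  backend:
    build: .                   # uses ./Dockerfile (the Django stage only)
    command: python /code/myapp/manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
  web:
    build: ./myapp/frontend    # uses ./myapp/frontend/Dockerfile (the React app)
    depends_on:
      - backend
    ports:
      - "3000:3000"
```

Each build context then contains exactly one FROM chain, so neither image silently discards the other's stages.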

Node application won't start in Docker if there's a shared volume

I generated a Gatsby website locally, then set up a Docker container to run it. It's a standard setup: copy package.json, install the modules, copy the local files, and start the dev script with a shared volume.
But when it starts, I run into an error :
gatsby_1 | ERROR
gatsby_1 |
gatsby_1 | There was a problem loading the local develop command. Gatsby may not be installed in your site's "node_modules" directory. Perhaps you need to run "npm install"? You might need to delete your "package-lock.json" as well.
Here is the Dockerfile :
FROM node:14.11.0
RUN npm install -g gatsby-cli
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 8000
CMD ["gatsby", "develop", "-H", "0.0.0.0"]
And the docker-compose.yml :
version: "3"
services:
  db:
    image: mariadb:latest
    environment:
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - ./db:/var/lib/mysql
  adminer:
    image: adminer:latest
    ports:
      - 8082:8080
  wordpress:
    image: wordpress:5.5.1-php7.4-apache
    ports:
      - 8081:80
    volumes:
      - ./wordpess/plugins:/var/www/html/wp-content/plugins
      - ./wordpess/themes:/var/www/html/wp-content/themes
  gatsby:
    build: ./gatsby
    ports:
      - 8080:8000
    volumes:
      - ./gatsby:/app
But if I remove the volume for the Gatsby container, everything runs well.
So my guess is there's something wrong with permissions somewhere, but I can't figure it out.
Here is a repo with the whole project.

docker container crashes with exited with code 139

I have this Dockerfile to build and run a Node application:
# ---- Base Node ----
FROM node:10 AS base
# Create app directory
WORKDIR /app
# ---- Dependencies ----
FROM base AS dependencies
COPY package.json ./
# install app dependencies including 'devDependencies'
RUN npm install
# ---- Copy Files/Build ----
FROM dependencies AS build
WORKDIR /app
COPY . /app
# Build the app
RUN npm run build
WORKDIR /app/dist
# install npm models in dist
RUN npm install --only=production
# --- Release with Alpine ----
FROM node:10-alpine AS release
# Create app directory
WORKDIR /app
# optional
ENV NODE_ENV=development
ENV MONGO_HOST=mongodb://localhost/chronas-api
ENV MONGO_PORT=27017
ENV PORT=80
# copy app from build
COPY --from=build /app/dist/ ./
CMD ["node", "index.js"]
and using this docker-compose file
version: '3'
services:
  database:
    image: mongo
    container_name: mongo
    ports:
      - "27017:27017"
  app:
    build: .
    container_name: chronas_api
    ports:
      - "80:80"
      - "5858:5858"
    links:
      - database
    environment:
      - JWT_SECRET='placeholder'
      - MONGO_HOST=mongodb://database/chronas-api
      - APPINSIGHTS_INSTRUMENTATIONKEY='placeholder'
      - TWITTER_CONSUMER_KEY=placeholder
      - TWITTER_CONSUMER_SECRET=placeholder
      - TWITTER_CALLBACK_URL=placeholder
      - PORT=80
    depends_on:
      - database
    stdin_open: true
    tty: true
Whenever I try to write to MongoDB, the Node container crashes with this error:
exited with code 139
Can anyone help? When I run the application with docker alone (not compose), it works fine.
The issue may be caused by the fact that node:10-alpine is a tag whose underlying image can change with updates. When you build the same app without Compose it won't pull the most recent image, whereas docker-compose will pull from Docker Hub.
Images based on Alpine may have dependency issues that are quite hard to debug from one version to another; you can find some possibilities for this particular issue here.
I was using the tag node:8-alpine in my application and found that the current latest, node:8.15.1-alpine, causes the Exited with code 139 issue that was not present in the previous image, node:8.15.0-alpine. Downgrading may be the easiest way to resolve this kind of issue; also check whether you are using bcrypt.
Another option is to use a Debian-based image, which is less likely to have this kind of issue (just consider that it's slightly bigger).
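As a sketch, pinning the base image to an exact patch release keeps plain docker and docker-compose builds on the same bytes (the versions shown are the ones discussed above; use whichever is known-good for your app):

```dockerfile
# Floating tag: the underlying image can change between pulls
# FROM node:8-alpine AS release

# Pinned tag: reproducible; 8.15.0-alpine is the version reported to work
FROM node:8.15.0-alpine AS release
WORKDIR /app
COPY --from=build /app/dist/ ./
CMD ["node", "index.js"]
```

A pinned tag trades automatic updates for reproducibility, which is usually the right call once a crash is traced to a base-image change.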
