Trying to get a node app running and reloading from a volume inside docker, using docker-compose.
The goal is to have the app running inside the container, without losing the ability to edit/reload the code outside the container.
I've been through PM2's Docker integration advice and am using keymetrics/pm2-docker-alpine:latest as a base image.
The docker-compose.yml file defines a simple web service.
version: '2'
services:
  web:
    build: .
    ports:
      - "${HOST_PORT}:${APP_PORT}"
    volumes:
      - .:/code
Which uses a fairly simple Dockerfile.
FROM keymetrics/pm2-docker-alpine:latest
ADD . /code
WORKDIR /code
RUN npm install
CMD ["npm", "start"]
Which calls npm start:
{
  "start": "pm2-docker process.yml --watch"
}
Which refers to process.yml:
apps:
  - script: './index.js'
    name: 'server'
Running npm start locally works fine; PM2 gets the node process running and watching for changes to the code.
However, as soon as I try and run it inside a container instead, I get the following error on startup:
Attaching to app_web_1
web_1 |
web_1 |
web_1 | [PM2] Spawning PM2 daemon with pm2_home=/root/.pm2
web_1 | [PM2] PM2 Successfully daemonized
web_1 |
web_1 | error: missing required argument `file|json|stdin|app_name|pm_id'
web_1 |
app_web_1 exited with code 1
I can't find any good examples of a hello world with the pm2-docker binary, and I've got no idea why pm2-docker would refuse to work, especially as it's running on top of the official pm2-docker-alpine image.
To activate watching, don't pass the --watch option to pm2-docker on the command line; instead, set the watch option to true in the YAML configuration file:
apps:
  - script: './index.js'
    name: 'server'
    watch: true
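With watch set in process.yml, the --watch flag can be dropped from the start script, so pm2-docker only receives the configuration file as its argument. A sketch of the corresponding package.json entry:
{
  "start": "pm2-docker process.yml"
}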
Related
I have a NestJS project that uses TypeORM with a MySQL database.
I dockerized it using docker compose, and everything works fine on my machine (Mac).
But when I run it on my remote instance (Ubuntu 22.04), I get the following error:
server | yarn run v1.22.19
server | $ node dist/main
server | node:internal/modules/cjs/loader:998
server | throw err;
server | ^
server |
server | Error: Cannot find module '/usr/src/app/dist/main'
server | at Module._resolveFilename (node:internal/modules/cjs/loader:995:15)
server | at Module._load (node:internal/modules/cjs/loader:841:27)
server | at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:81:12)
server | at node:internal/main/run_main_module:23:47 {
server | code: 'MODULE_NOT_FOUND',
server | requireStack: []
server | }
server |
server | Node.js v18.12.0
server | error Command failed with exit code 1.
server | info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
server exited with code 1
Here is my Dockerfile:
FROM node:18-alpine AS development
# Create app directory
WORKDIR /usr/src/app
# Copy files needed for dependencies installation
COPY package.json yarn.lock ./
# Disable postinstall script that tries to install husky
RUN npx --quiet pinst --disable
# Install app dependencies
RUN yarn install --pure-lockfile
# Copy all files
COPY . .
# Increase the memory limit to be able to build
ENV NODE_OPTIONS=--max_old_space_size=4096
ENV GENERATE_SOURCEMAP=false
# Build the application
RUN yarn build
FROM node:18-alpine AS production
# Set env to production
ENV NODE_ENV=production
# Create app directory
WORKDIR /usr/src/app
# Copy files needed for dependencies installation
COPY package.json yarn.lock ./
# Disable postinstall script that tries to install husky
RUN npx --quiet pinst --disable
# Install app dependencies
RUN yarn install --production --pure-lockfile
# Copy all files
COPY . .
# Copy dist folder generated in development stage
COPY --from=development /usr/src/app/dist ./dist
# Entrypoint command
CMD ["node", "dist/main"]
And here is my docker-compose.yml file:
version: "3.9"
services:
server:
container_name: blognote_server
image: bladx/blognote-server:latest
build:
context: .
dockerfile: ./Dockerfile
target: production
environment:
RDS_HOSTNAME: ${MYSQL_HOST}
RDS_USERNAME: ${MYSQL_USER}
RDS_PASSWORD: ${MYSQL_PASSWORD}
JWT_SECRET: ${JWT_SECRET}
command: yarn start:prod
ports:
- "3000:3000"
networks:
- blognote-network
volumes:
- .:/usr/src/app
- /usr/src/app/node_modules
links:
- mysql
depends_on:
- mysql
restart: unless-stopped
mysql:
container_name: blognote_database
image: mysql:8.0
command: mysqld --default-authentication-plugin=mysql_native_password
environment:
MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
MYSQL_USER: ${MYSQL_USER}
MYSQL_PASSWORD: ${MYSQL_PASSWORD}
MYSQL_DATABASE: ${MYSQL_DATABASE}
ports:
- "3306:3306"
networks:
- blognote-network
volumes:
- blognote_mysql_data:/var/lib/mysql
restart: unless-stopped
networks:
blognote-network:
external: true
volumes:
blognote_mysql_data:
Here is what I tried to do:
I cleaned everything on my machine and then ran docker compose --env-file .env.docker up, and this did work.
I ran my server image using docker (not docker compose), and it worked too.
I made a snapshot, connected to it, and ran node dist/main manually; this also worked.
So I don't know why I'm still getting this error.
And why do I get different behavior when using docker compose on my remote instance?
Am I missing something?
Your docker-compose.yml contains two lines that hide everything the image does:
volumes:
  # Replace the image's `/usr/src/app`, including the built
  # files, with content from the host.
  - .:/usr/src/app
  # But: the `node_modules` directory is user-provided content
  # that needs to be persisted separately from the container
  # lifecycle. Keep that tree in an anonymous volume and never
  # update it, even if it changes in the image or the host.
  - /usr/src/app/node_modules
You should delete this entire block.
You will see volumes: blocks like this in setups that try to simulate a local-development environment inside an otherwise isolated Docker container. This works only if the Dockerfile does nothing more than COPY the source code into the image without modifying it at all, and the node_modules library tree never changes.
In your case, the Dockerfile produces a /usr/src/app/dist directory in the image that may not be present on the host. Since the first bind mount hides everything in the image's /usr/src/app directory, you never see this built tree; and your image directly runs node on that built application rather than simulating a local development environment. The volumes: make no sense here and are exactly what causes the problem.
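A sketch of what the server service might look like once the block is gone (all other settings carried over unchanged from the compose file above):
services:
  server:
    container_name: blognote_server
    image: bladx/blognote-server:latest
    build:
      context: .
      dockerfile: ./Dockerfile
      target: production
    # environment, command, ports, networks as before
    # (no volumes: block; the container now runs the dist/ tree
    # baked into the image at build time)
    links:
      - mysql
    depends_on:
      - mysql
    restart: unless-stopped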
I have a multistage Dockerfile for a Django/React app that is creating the following error upon running docker-compose up --build:
backend_1 | File "/code/myapp/manage.py", line 17
backend_1 | ) from exc
backend_1 | ^
backend_1 | SyntaxError: invalid syntax
backend_1 exited with code 1
As it stands now, only the frontend container can run, with the two files below:
Dockerfile:
FROM python:3.7-alpine3.12
ENV PYTHONUNBUFFERED=1
RUN mkdir /code
WORKDIR /code
COPY . /code
RUN pip install -r ./myapp/requirements.txt
FROM node:10
RUN mkdir /app
WORKDIR /app
# Copy the package.json file into our app directory
# COPY /myapp/frontend/package.json /app
COPY /myapp/frontend /app
# Install any needed packages specified in package.json
RUN npm install
EXPOSE 3000
# COPY /myapp/frontend /app
# COPY /myapp/frontend/src /app
CMD npm start
docker-compose.yml:
version: "2.0"
services:
backend:
build: .
command: python /code/myapp/manage.py runserver 0.0.0.0:8000
volumes:
- .:/code
ports:
- "8000:8000"
networks:
- reactdrf
web:
build: .
depends_on:
- backend
restart: always
ports:
- "3000:3000"
stdin_open: true
networks:
- reactdrf
networks:
reactdrf:
Project structure (relevant parts):
project (top level directory)
  api (the django backend)
  frontend
    public
    src
    package.json
  myapp
    manage.py
    requirements.txt
  docker-compose.yml
  Dockerfile
The interesting thing is that when I comment out one part of the Dockerfile, or one service in docker-compose.yml, the remaining part runs fine; i.e., each service runs perfectly well on its own with the syntax given. Only when they are combined do I get the error above.
I thought this might be a Docker network issue (as in, the containers couldn't see each other), but the error looks more like a path issue; yet each service runs fine on its own.
Not sure which direction to take this; any insight appreciated.
Update:
I had misconfigured my project; here's what I did to resolve it:
I separated the frontend and backend into their own Dockerfiles, so that the first part of my Dockerfile above (the backend) is the only part remaining in the root directory, and the rest moved into its own Dockerfile in the frontend directory.
I updated the docker-compose.yml web service build path to point to that new location:
...
  web:
    build: ./myapp/frontend
...
It seems to build now, sending data from the backend to the frontend.
What I learned: use multiple Dockerfiles for multiple services; they get glued together in docker-compose.yml instead of being combined into one Dockerfile.
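For reference, a sketch of how the split frontend Dockerfile might look in ./myapp/frontend, reusing the node:10 stage from the combined Dockerfile above (paths become relative to the new build context):
# ./myapp/frontend/Dockerfile
FROM node:10
WORKDIR /app
# Copy package.json first so dependency installation is cached
COPY package.json /app
# Install any needed packages specified in package.json
RUN npm install
# Copy the rest of the frontend source
COPY . /app
EXPOSE 3000
CMD npm start
The root Dockerfile then keeps only the python:3.7-alpine3.12 backend stage.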
I am using Docker with live reloading for local development with a React frontend, a Node.js backend, and an Nginx server, on Windows 10. I can get these three services started and detecting changes but the client fails to compile every time no matter what I do. Here's the error code:
Failed to compile.
client_1 | ./src/main.scss (./node_modules/css-loader/dist/cjs.js??ref--6-oneOf-5-1!./node_modules/postcss-loader/src??postcss!./node_modules/resolve-url-loader??ref--6-oneOf-5-3!./node_modules/sass-loader/dist/cjs.js??ref--6-oneOf-5-4!./src/main.scss)
client_1 | Error: Missing binding /client/node_modules/node-sass/vendor/linux_musl-x64-64/binding.node
client_1 | Node Sass could not find a binding for your current environment: Linux/musl 64-bit with Node.js 10.x
client_1 |
client_1 | Found bindings for the following environments:
client_1 | - Windows 64-bit with Node.js 12.x
client_1 |
client_1 | This usually happens because your environment has changed since running `npm install`.
client_1 | Run `npm rebuild node-sass` to download the binding for your current environment.
I am first installing the packages locally and then creating the containers and building the images with docker-compose.
client/Dockerfile.dev
FROM node:lts
WORKDIR /client
EXPOSE 3000
CMD ["npm", "start"]
docker-compose.dev.yml (with the other services stripped off for brevity)
version: "3"
services:
client:
build:
context: client
dockerfile: Dockerfile.dev
image: myapp-client
environment:
- NODE_ENV=development
volumes:
- ./client:/
- ./client:/node_modules
ports:
- "3000:3000"
tty: true
I have tried these Node.js images to no avail:
node:10
node:latest
node:lts
node:10.16.3
node:8.1.0-alpine
I have read about using RUN npm rebuild node-sass in my Dockerfile after RUN npm install, but that would only work in production, because for local development I am installing my packages from a script in package.json.
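For reference, that rebuild approach would look something like this in a Dockerfile that installs packages at build time (a sketch only; the Dockerfile.dev above deliberately skips npm install, so this applies only if installation moves into the image):
FROM node:lts
WORKDIR /client
COPY package*.json ./
RUN npm install
# Rebuild node-sass so its native binding matches the container's
# Linux environment rather than the host's Windows one:
RUN npm rebuild node-sass
COPY . .
EXPOSE 3000
CMD ["npm", "start"]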
Here's what worked for me:
sudo ln -s /home/user/.nvm/versions/node/v12.18.2/bin/node /usr/bin/node
I faced the same scenario today. I am using Sass in my React client, which needs node-sass. The next step was to dockerize the client, and that's when I stumbled on this issue. My Dockerfile and docker-compose.yml are very similar to the ones provided in the question, except for the volumes inside docker-compose.yml, which look like this:
  - ./client:/app
  - /app/node_modules
My WORKDIR inside the Dockerfile is /app.
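Applied to the compose file from the question, the client service would look something like this (a sketch; it assumes the image sets WORKDIR /app and runs npm install at build time, as mine does):
services:
  client:
    build:
      context: client
      dockerfile: Dockerfile.dev
    image: myapp-client
    environment:
      - NODE_ENV=development
    volumes:
      # Bind-mount the source for live reloading...
      - ./client:/app
      # ...but keep the image's node_modules in an anonymous volume
      # so the Linux node-sass binding installed at build time is not
      # shadowed by the host's Windows binding.
      - /app/node_modules
    ports:
      - "3000:3000"
    tty: true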
I'm relatively new to docker and I've been having a really strange problem.
The Docker setup I have below runs perfectly; however, there seems to be an instance that is always running, even after stopping and removing all containers and quitting the Docker application.
When I access localhost in my browser, my app is always live and running.
I've tried running docker-compose stop; docker-compose rm to stop and remove all containers.
docker-compose ps and docker ps both show no containers running at all, but whenever I access localhost, my app is there, live and running.
Like I said, I have tried quitting the Docker application (I'm running on a Mac); I also tried restarting the machine, and the app was still running.
The weird thing is that when I check which processes, if any, are using port 80 (and thus making my app accessible via localhost) by running sudo lsof -i tcp:80, the list is empty.
I'm new to docker and I know there must be something I'm overlooking.
Thanks in advance, any help and ideas are welcomed.
Here is my folder structure: screenshot
The Dockerfile for my app:
FROM node:10
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
EXPOSE 3000
ENV NUXT_HOST=0.0.0.0
ENV NUXT_PORT=3000
CMD [ "npm", "start" ]
docker-compose.yml
version: '3'
services:
  nuxt:
    build: ./app/
    container_name: nuxt
    restart: always
    ports:
      - '1880:1880'
    command: 'npm run start'
  nginx:
    image: nginx:1.13
    container_name: nginx
    ports:
      - '80:80'
    volumes:
      - ./nginx:/etc/nginx/conf.d
    depends_on:
      - nuxt
I'm following this guide, using my limited Docker knowledge to get a dev environment up and running, and I've hit a wall I cannot solve. This is my docker-compose.yml:
version: '2'
services:
  redis:
    image: redis:3.2
  mongo:
    image: mongo:3.2
  app:
    build: .
    ports:
      - '3000:3000'
    command: './node_modules/.bin/nodemon ./index.js'
    environment:
      NODE_ENV: development
    volumes:
      - .:/home/app/cardcreator
      - /home/app/cardcreator/node_modules
    depends_on:
      - redis
      - mongo
    links:
      - redis
      - mongo
and this is my Dockerfile:
FROM node:6.3.1
RUN useradd --user-group --create-home --shell /bin/false app
ENV HOME=/home/app
COPY package.json npm-shrinkwrap.json $HOME/cardcreator/
RUN chown -R app:app $HOME/*
USER app
WORKDIR $HOME/cardcreator
RUN npm install
USER root
COPY . $HOME/cardcreator/
RUN chown -R app:app $HOME/*
USER app
CMD ["node", "index.js"]
When I try to start the app via docker-compose up, I get the error
app_1 | Usage: nodemon [nodemon options] [script.js] [args]
app_1 | See "nodemon --help" for more.
I then removed the command line from my docker-compose.yml, leaving only the Dockerfile's node index.js to start the app, and got an error saying index.js cannot be found.
The file is in my project folder; it is there and it has content. I can't figure out why this setup doesn't work; I did similar setups for tails and they worked fine.
Can anyone tell me what I'm doing wrong here?
Whatever you are mounting in your compose file here:
  - .:/home/app/cardcreator
is going to mount on top of whatever you built into $HOME/cardcreator/ in your Dockerfile.
So basically you have conflicting volumes. It's an order-of-operations issue: the build happens first, and the volume mount happens later, when the container runs, so your container will no longer have access to the files built in the Dockerfile.
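One way to see that order of operations in action (a sketch; the cardcreator_app image name is hypothetical, since it depends on your Compose project name):
# A plain `docker run` uses only what was baked into the image:
docker run --rm cardcreator_app ls /home/app/cardcreator
# `docker-compose run` applies the volumes: from the compose file,
# so the same path shows the host directory contents instead:
docker-compose run --rm app ls /home/app/cardcreator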
You could also try
docker exec -it app_1 bash
to go into the container, then execute
node index.js
manually and see what's going on. Not 100% sure whether the node Docker images have bash installed, though.