Hi, I'm trying to dockerize an app I'm currently working on. It uses Node.js and MariaDB. I'm having difficulties figuring out how to make nodemon work.
I tried using --legacy-watch, or its short form -L, but it didn't change the result.
NPM installs all dependencies correctly and I even get the nodemon output, but it doesn't restart the server when I make changes.
Would be glad if anyone could help.
package.json:
{
  "name": "nodejs_mariadb_docker_test",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "start": "node src/index.js",
    "dev": "nodemon -L src/index.js"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "express": "^4.17.2",
    "mariadb": "^2.5.5",
    "nodemon": "^2.0.15"
  }
}
Dockerfile for nodejs:
# Specifies the image of your engine
FROM node:16.13.2
# The working directory inside your container
WORKDIR /app
# Get the package.json first to install dependencies
COPY package.json /app
# This will install those dependencies
RUN npm install
# Copy the rest of the app to the working directory
COPY . /app
# Run the container
CMD ["npm", "run", "dev"]
and the docker compose file:
version: "3"
services:
node:
build: .
container_name: express-api
ports:
- "80:8000"
depends_on:
- mysql
mysql:
image: mariadb:latest
ports:
- "3306:3306"
environment:
MYSQL_ROOT_PASSWORD: "password"
volumes:
- mysqldata:/var/lib/mysql
- ./mysql-dump:/docker-entrypoint-initdb.d
volumes:
mysqldata: {}
So the obvious problem is that you do not mount your code into the container. That is why nodemon cannot see any changes and react to them.
Additionally, it may be more straightforward to develop the application locally and only use Docker as a means to package/ship it.
If you still want to go down this route, I would suggest something like this.
services:
  express-api:
    build: ./
    # overwrite the prod command
    command: npm run dev
    ports:
      - "80:8000"
    volumes:
      # mount your code folder into the app folder
      - .:/app
  # mysql stuff ...
In your Dockerfile you can use the production command, since in development Compose will override it.
FROM node:16.13.2
WORKDIR /app
COPY package.json package-lock.json ./
# use ci to install from the lock file,
# to avoid surprises in prod
RUN npm ci
COPY . ./
# use the prod command
CMD ["npm", "run", "start"]
This will do a bit of redundant work in development, like copying the code, but it should be OK.
Additionally, you may want to use a .dockerignore to ignore, for example, the mysql dump folder. Otherwise it will be copied into the image, which is probably not desirable.
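For example, a minimal .dockerignore could look like this (assuming the dump folder is called mysql-dump as in the compose file above; the other entries are just common candidates):
# .dockerignore - keep these out of the build context
node_modules
mysql-dump
.git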
Also note that with npm ci your dependencies are locked and won't update automatically. It will also throw an error if your lock file is not in sync with package.json. This is what you want for production. If you develop locally, you can run npm install locally, or via docker exec, to bump the dependencies if required. Then you can check that nothing is broken, and be sure that your prod image will be fine since it installs from the lock file again.
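If you go the docker exec route, something like this should work (assuming the service is called express-api as in the compose snippet above):
# open a shell command in the running dev container and refresh dependencies there
docker compose exec express-api npm install
# or bump a single package
docker compose exec express-api npm install express@latest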
Related
I am trying to create a container that just runs CLI scripts. At the moment I do not have the scripts written; I'm just trying to get the container started first. The following setup creates the container fine, but it keeps restarting, and after spending hours trying to find a solution online I've not gotten any closer to one:
DockerFile
FROM node:18-alpine
WORKDIR /src
COPY package*.json /
EXPOSE 3000
RUN npm install -g nodemon && npm install
COPY . /
docker-compose.yml
version: '3.8'
services:
  calculations-script:
    restart: unless-stopped
    build:
      context: .
      dockerfile: DockerFile
    command: npm run dev
    volumes:
      - ./:/src
    ports:
      - "3000:3000"
package.json
{
  "name": "test-nodejs-script",
  "version": "1.0.0",
  "description": "Test script",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "dev": "node index.js"
  },
  "author": "Testing",
  "license": "ISC"
}
Your container is restarting because of the restart field in your manifest, which is set to unless-stopped. unless-stopped ensures that the container is always restarted unless it is manually stopped.
What you probably want is no or on-failure; see the docs for more details.
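For example, a minimal sketch of the relevant part of your compose file (keeping the rest as it is):
services:
  calculations-script:
    # do not restart automatically; use "on-failure" to retry only on non-zero exit codes
    restart: "no"
    build:
      context: .
      dockerfile: DockerFile
    command: npm run dev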
I am currently getting a npm ERR! enoent ENOENT: no such file or directory, open '/usr/src/app/package.json' error with the Docker setup below, or an error TS2307: Cannot find module 'Actions' or its corresponding type declarations. I think the paths are not found in tsconfig.json during the build, or I am not COPYing the correct directory/volume as part of the Dockerfile. I have spent multiple days working through different path configs/setups; any help getting this to build would be greatly appreciated.
I would love to see a Node / TS / Docker / MySQL project example if there are any in the community to share - I have found it difficult to find open-source projects to compare this to for hints.
...
"paths": {
"Actions/*": [
"Actions/*"
],
}
docker-compose
version: '3.8'
services:
  app:
    image: app:latest
    container_name: balanced-money-backend
    build:
      context: .
      dockerfile: Dockerfile
    # TODO investigate uid and gid, how does it get in - from a startup script? Think it needs to be added like user: $UID:$GID if my cmd calls a setup to id on host machine. Needs more investigation.
    depends_on:
      db:
        condition: service_healthy
    env_file:
      - .env
    restart: always
    volumes:
      - .:/var/www/
    command: npm start
    ports:
      - $NODE_LOCAL_PORT:$NODE_DOCKER_PORT
    environment:
      - DB_HOST=$MYSQL_HOST
      - DB_USERNAME=$MYSQL_USER
      - DB_PORT=$MYSQL_DOCKER_PORT
      - DB_PASSWORD=$MYSQL_PASSWORD
      - DB_DATABASE=$MYSQL_DATABASE
  db:
    image: mysql:5.7
    restart: always
    container_name: balanced-money-database
    environment:
      - MYSQL_ROOT_PASSWORD=$MYSQL_ROOT_PASSWORD
      - MYSQL_USER=$MYSQL_USER
      - MYSQL_PASSWORD=$MYSQL_PASSWORD
      - MYSQL_DATABASE=$MYSQL_DATABASE
    ports:
      - $MYSQL_LOCAL_PORT:$MYSQL_DOCKER_PORT
    volumes:
      - db:/var/lib/mysql
    healthcheck: # mysql does not start immediately; the app needs to wait for mysql, so condition: service_healthy on app plus this healthcheck makes sure db has started before app
      test: mysqladmin ping -h 127.0.0.1 -u $$MYSQL_USER --password=$$MYSQL_PASSWORD
      timeout: 20s
      retries: 10
volumes:
  db:
Dockerfile
############### Stage 1 - build the project
# use alpine version of node to keep the image size small as possible
FROM node:16-alpine AS build
# node docs recommend this
WORKDIR /usr/src/app
# Docker caches layer by layer as it builds, so copy the files that change rarely into the container first; subsequent builds will not need to copy them again because they are already cached by Docker.
COPY package*.json ./
COPY src tsconfig.json ./
RUN npm install
RUN npm run build
# TODO not sure about the stages - can i have a test / dev stage so test / dev is run in docker too.
############### Stage 2 - run the project
FROM build AS prod
EXPOSE 4000
# from stage 1 (build), take the dist folder and package.json and copy them into this stage
COPY --from=build /usr/src/app/dist ./dist/
COPY --from=build /usr/src/app/package*.json ./
# npm ci will install exact versions from a package-lock file, and --production will only install dependencies, not dev dependencies.
RUN npm ci --production && npm cache clean --force
# make sure user is not root which could have security consequences.
USER node
CMD ["node", "dist/index.js"]
package.json scripts
"scripts": {
"build": "tsc",
"start": "node ./dist/index.js",
"node": "./dist/index.js",
"dev": "NODE_ENV=development DOTENV_CONFIG_PATH=.env.dev nodemon ts-node src/index.ts",
"format:prettier": "prettier --config .prettierrc 'src/**/*.ts' --write",
"lint": "eslint . --ext .ts",
"lint:fix": "eslint . --ext .ts --fix",
"test": "DOTENV_CONFIG_PATH=.env.test NODE_ENV=test jest --runInBand",
"test:coverage": "DOTENV_CONFIG_PATH=.env.test NODE_ENV=test jest --coverage",
},
I am using Prisma, Postgres, Docker, and Kubernetes.
npx prisma migrate dev works, and npx prisma generate produces the output below:
✔ Generated Prisma Client (2.23.0) to ./node_modules/@prisma/client in 68ms
You can now start using Prisma Client in your code. Reference: https://pris.ly/d/client
import { PrismaClient } from '@prisma/client'
const prisma = new PrismaClient()
But when I try to use it in my route file, it produces an error:
new-route.ts
import { PrismaClient } from '@prisma/client';
const prisma = new PrismaClient();
my docker file:
FROM node:alpine
WORKDIR /app
COPY package.json .
RUN npm install --only=prod
COPY . .
CMD ["npm", "start"]
I know that this has been marked as solved, but I just wanted to share my setup for anyone interested.
Dockerfile
# Build image
FROM node:16.13-alpine as builder
WORKDIR /app
# Not sure if you will need this
# RUN apk add --update openssl
COPY package*.json ./
RUN npm ci --quiet
COPY ./prisma prisma
COPY ./src src
RUN npm run build
# Production image
FROM node:16.13-alpine
WORKDIR /app
ENV NODE_ENV production
COPY package*.json ./
RUN npm ci --only=production --quiet
COPY --chown=node:node --from=builder /app/prisma /app/prisma
COPY --chown=node:node --from=builder /app/src /app/src
USER node
EXPOSE 8080
CMD ["node", "src/index.js"]
package.json
{
  "name": "example",
  "description": "",
  "version": "0.1.0",
  "scripts": {
    "generate": "npx prisma generate",
    "deploy": "npx prisma migrate deploy",
    "dev": "npm run generate && nodemon --watch \"src/**\" --ext \"js,json\" --exec \"node src/index.js\"",
    "build": "npm run generate",
    "start": "npm run build && node build/index.js"
  },
  "prisma": {
    "schema": "prisma/schema.prisma"
  },
  "dependencies": {
    "@prisma/client": "^3.6.0"
  },
  "devDependencies": {
    "@tsconfig/node16": "^1.0.2",
    "@types/node": "^16.11.12",
    "nodemon": "^2.0.15",
    "prisma": "^3.6.0"
  }
}
I run this in Kubernetes. To make things smooth with the database and migrations, I run an initContainer that runs prisma migrate deploy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: EXAMPLE
spec:
  replicas: 1
  selector:
    matchLabels:
      app: EXAMPLE
  strategy:
    rollingUpdate:
      maxSurge: 100%
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: EXAMPLE
    spec:
      containers:
        - image: DOCKER_IMAGE
          imagePullPolicy: IfNotPresent
          name: SERVICE_NAME
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
      initContainers:
        - command:
            - npm
            - run
            - deploy
          image: DOCKER_IMAGE
          imagePullPolicy: IfNotPresent
          name: database-migrate-deploy
(This is a live service; I just copied it and stripped away anything non-essential.)
I hope this is of use to someone.
I usually don't use Docker for this while developing, but I have this issue every time I change something in my schema.prisma and have to run npx prisma generate. The solution for me is to restart the Node application by running npm start again. Maybe if you restart your containers it might work.
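For example, restarting a single service (assuming it is named app in your compose file) would be:
docker compose restart app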
If you are inside a Kubernetes pod, access the pod using the terminal and then run the generate command:
kubectl exec -it pod_name sh
npx prisma generate
Here is another way to solve this.
Since the .prisma folder is needed by the Prisma client, as described in the docs, another way is to make sure it ships with your code. You can do this as follows.
Wrong way: ship the generated folder
You would think you could just include the generated files in the image by adding an exception rule for the .prisma folder to your .dockerignore (notice the exclamation point):
node_modules/
!node_modules/.prisma
But the query engine used by Prisma is different for each operating system, so you could run into trouble.
Correct way: generate the files with the image
Just add RUN npx prisma generate to your Dockerfile before the build command. This way the files are generated during the image creation and you don't have to run the prisma generate command on every container. The downside of this method is that your docker image will be larger. If this is an issue you can try with the other answers.
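A rough sketch of what that could look like (loosely based on the build stage shown earlier in this thread; adjust the base image, paths, and final command to your project):
FROM node:16-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
# copy the schema so the client can be generated inside the image
COPY prisma ./prisma
RUN npx prisma generate
# now copy the rest of the source and build
COPY . .
RUN npm run build
CMD ["node", "dist/index.js"]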
You forgot to copy the prisma directory, since generating the Prisma Client requires the schema.prisma file. You should copy the whole prisma directory in case you also need the migrations.
Your final Dockerfile should contain the following:
WORKDIR /app
COPY package*.json .
COPY prisma ./prisma/
RUN npm install --only=prod
This error happens because Docker doesn't install devDependencies, which means it doesn't pick up the Prisma CLI.
To remedy this, you can add the prisma CLI package to your dependencies instead of devDependencies (making sure to run npm install afterward to update the package-lock.json).
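For example, the dependencies section would then look something like this (the version numbers are placeholders; use whatever versions your project is on):
"dependencies": {
  "@prisma/client": "^3.6.0",
  "prisma": "^3.6.0"
}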
For me, I just changed my start under scripts in my package.json so that my app would generate prisma types before running:
"start": "npx prisma generate && nodemon server.ts"
This worked for me – I just wanted to drop this here for anyone who runs into the same issue.
I have read many threads about this, but none of them solves anything.
Some say you have to add --legacy-watch (or -L) to the nodemon command.
Others show several different configurations, and apparently nobody really knows what you have to do to get the server to restart when a file changes in the volume inside a Docker container.
Here is my configuration so far:
Dockerfile:
FROM node:latest
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# install nodemon globally
RUN npm install nodemon -g
# Install dependencies
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . /usr/src/app
# Exports
EXPOSE 3000
CMD ["npm", "start"]
docker-compose.yml
version: '3.1'
services:
  node:
    build: .
    user: "node"
    volumes:
      - ./:/usr/src/app
    ports:
      - 3000:3000
    depends_on:
      - mongo
    working_dir: /usr/src/app
    environment:
      - NODE_ENV=production
    expose:
      - "3000"
  mongo:
    image: mongo
    expose:
      - 27017
    volumes:
      - ./data/db:/data/db
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
package.json
{
  "name": "node-playground",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "start": "nodemon -L"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "ejs": "^2.7.1",
    "express": "^4.17.1",
    "mongoose": "^5.7.1"
  },
  "devDependencies": {
    "nodemon": "^1.19.2"
  }
}
I have tried many different setups as well, like not installing nodemon globally but only as a project dependency, and also running the command in the docker-compose.yml, and I believe many others I don't remember right now. Nothing.
If someone has any certainty about this, please help. Thanks!!!!
Try it!
This worked for me:
Via the CLI, use either --legacy-watch or -L for short. More information here.
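If you prefer not to pass the flag on every invocation, the same option can also be set in a nodemon.json next to your package.json (a minimal sketch; the watch/ext values are just examples):
{
  "legacyWatch": true,
  "watch": ["src"],
  "ext": "js,json"
}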
I went ahead and created an example container and repo to show how you can achieve this.
Just follow the steps below, which outline how to use nodemon inside of a Docker container.
Docker Container: at DockerHub
Source Code: at GitHub
package.json:
{
  "name": "nodemon-docker-test",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "start:express": "node ./index.js",
    "start": "nodemon"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "express": "^4.17.1"
  },
  "devDependencies": {
    "nodemon": "^1.19.2"
  }
}
Dockerfile:
FROM node:slim
WORKDIR /app
COPY package*.json ./
RUN apt-get update
RUN npm install
COPY . /app
# -or-
# COPY . .
EXPOSE 1337
CMD ["npm", "start"]
docker-compose.yml: (if you are using it)
version: "3"
services:
nodemon-test:
image: oze4/nodemon-docker-test
ports:
- "1337:1337"
How to reproduce:
Step 1 (using docker run; skip this and use the docker-compose version of step 1 below if you are using Compose): pull down the example Docker container
docker run -d --name "nodemon-test" -p 1337:1337 oze4/nodemon-docker-test
Step 1 USING DOCKER-COMPOSE:
See the docker-compose.yml file above for configuration
cd /path/to/dir/that/has/your/compose/file
docker-compose up -d
Step 2: verify the app works
http://localhost:1337
Step 3: check the container logs, to get a baseline
docker logs nodemon-test
Step 4: I have included a bash script to make editing a file as simple as possible. We need to pop a shell on the container, and run the bash script (change.sh)
docker exec -it nodemon-test /bin/bash
bash change.sh
exit
Step 5: check the logs again to verify changes were made and that nodemon restarted
docker logs nodemon-test
As you can see from the logs, nodemon successfully restarted after the changes were made!
All right, thanks a lot to MattOestreich for your answer.
Now I've got it working. I don't know exactly what it was; I followed your setup, but of course I'm using docker-compose and I also stripped some things out of it. I'm also not using the mongo image anymore, since I set up the database in a MongoDB Atlas cluster.
My actual config:
Dockerfile:
FROM node:12.10
WORKDIR /app
COPY package*.json ./
RUN apt-get update
RUN npm install
COPY . /app
EXPOSE 3000
CMD ["npm", "start"]
docker-compose.yml
version: '3.1'
services:
  node:
    build: .
    volumes:
      - ./:/app
    ports:
      - 3000:3000
    working_dir: /app
    expose:
      - "3000"
package.json
{
  "name": "node-playground",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "start": "nodemon"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "body-parser": "^1.19.0",
    "dotenv": "^8.1.0",
    "ejs": "^2.7.1",
    "express": "^4.17.1",
    "mongoose": "^5.7.1"
  },
  "devDependencies": {
    "nodemon": "^1.19.2"
  }
}
Thanks again, Matt, and I hope this thread helps people in need like me.
Nodemon depends on Chokidar, and a potential solution is to make it use polling by setting the CHOKIDAR_USEPOLLING environment variable to true.
For example, you can do this in docker-compose.yml:
services:
  api1:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - /app/node_modules
      - ${PWD}:/app
    ports:
      - 80:3000
    environment:
      - CHOKIDAR_USEPOLLING=true
Change in the Dockerfile:
CMD ["npm", "start"]
Change the start script:
"start": "nodemon -L server.js"
Build command:
docker build . -t <image name>
Use this command to run the Docker container:
docker run -v $(pwd):/app -p 8080:8080 -it <image name>
-v = volumes, the preferred mechanism for persisting data generated by and used by Docker containers; here it bind mounts your code into the container.
/app = the WORKDIR /app inside the container
$(pwd) = PWD is a variable set to the present working directory, so $(pwd) expands to the current directory on your host.
I am trying to mount my working Node code from my host into a Docker container and run it using nodemon via docker-compose.
But the container doesn't seem to be able to find nodemon.
Note: My host machine does not have node or npm installed on it.
Here are the files in the root folder of my project (test). (This is only a rough draft.)
Dockerfile
FROM surenderthakran/nodejs:v4
ADD . /test
WORKDIR /test
RUN make install
CMD make run
Makefile
SHELL:=/bin/bash
PWD:=$(shell pwd)
export PATH:= $(PWD)/node_modules/.bin:$(PWD)/bin:$(PATH)
DOCKER:=$(shell grep docker /proc/1/cgroup)
install:
#echo Running make install......
#npm config set unsafe-perm true
#npm install
run:
#echo Running make run......
# Check if we are inside docker container
ifdef DOCKER
#echo We are dockerized!! :D
#nodemon index.js
else
#nodemon index.js
endif
.PHONY: install run
docker-compose.yml
app:
  build: .
  command: make run
  volumes:
    - .:/test
  environment:
    NODE_ENV: dev
  ports:
    - "17883:17883"
    - "17884:17884"
package.json
{
  "name": "test",
  "version": "1.0.0",
  "description": "test",
  "main": "index.js",
  "dependencies": {
    "express": "^4.13.3",
    "nodemon": "^1.8.0"
  },
  "devDependencies": {},
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [
    "api",
    "nodejs",
    "express"
  ],
  "author": "test",
  "license": "ISC"
}
index.js
'use strict';
var express = require('express');
I build my image using docker-compose build. It finishes successfully.
But when I try to run it using docker-compose up, I get:
Creating test_app_1...
Attaching to test_app_1
app_1 | Running make run......
app_1 | We are dockerized!! :D
app_1 | /bin/bash: nodemon: command not found
app_1 | make: *** [run] Error 127
test_app_1 exited with code 2
Gracefully stopping... (press Ctrl+C again to force)
Can anyone please advice?
Note: The Dockerfile for my base image surenderthakran/nodejs:v4 can be found here: https://github.com/surenderthakran/dockerfile_nodejs/blob/master/Dockerfile
The issue has been resolved. It boiled down to me not having node_modules in the mounted volume.
Basically, during docker-compose build the image was built correctly: the actual code was added to the image and the node_modules folder was created in the project root by npm install. But with docker-compose up the code was mounted into the project root, overriding the earlier added code, including the newly created node_modules folder.
So as a solution I compromised and installed Node.js on my host and ran npm install on the host. That way, when the code is mounted I still get my node_modules folder in the project root, because it is also mounted from my host.
Not a very elegant solution, but since it is a development setup I am ready for the compromise. In production I would be setting up using docker build and docker run, and wouldn't be using nodemon anyway.
If anyone can suggest a better solution I will be grateful.
Thanks!!
I believe you should use a preinstall script in your package.json.
So, in the scripts section, just add:
"scripts": {
  "preinstall": "npm i nodemon -g",
  "start": "nodemon app.js"
}
And you should be good to go :)
Pretty late for an answer, but you could use named volumes to mount your node_modules in the Docker volume space. That way it would hide your bind mount.
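A rough sketch of what that could look like (the volume name node_modules_cache is just an example, and the /test mount path simply matches the setup below):
services:
  app:
    build: .
    volumes:
      - .:/test
      - node_modules_cache:/test/node_modules
volumes:
  node_modules_cache: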
You need to set the node_modules as a mounted volume in the docker container.
e.g
docker-compose.yml
app:
  build: .
  command: make run
  volumes:
    - .:/test
    - /test/node_modules
  environment:
    NODE_ENV: dev
  ports:
    - "17883:17883"
    - "17884:17884"
I've figured out how to do this without a Dockerfile, in case that's useful to anyone...
You can run multiple commands in the docker-compose.yml command line by using sh -c.
my-server:
  image: node:18-alpine
  build: .
  volumes:
    - .:/usr/src/app
    - /usr/src/app/node_modules
  working_dir: /usr/src/app
  ports:
    - "3100:3100"
  command: sh -c "npm install -g nodemon && npm install && npm run dev"
  environment:
    NODE_ENV: development
    PORT: 3100