Enabling webpack hot-reload in a docker application - node.js

I have a docker app with the following containers:
node - the project source code; it serves up the HTML page in the public folder.
webpack - watches files in the node container and updates the public folder (in the node container) whenever the code changes.
database
This is the webpack/node container setup:
web:
  container_name: web
  build: .
  env_file: .env
  volumes:
    - .:/usr/src/app
    - node_modules:/usr/src/app/node_modules
  command: npm start
  environment:
    - NODE_ENV=development
  ports:
    - "8000:8000"
webpack:
  container_name: webpack
  build: ./webpack/
  depends_on:
    - web
  volumes_from:
    - web
  working_dir: /usr/src/app
  command: webpack --watch
So currently, the webpack container monitors and updates the public folder, and I have to manually refresh the browser to see my changes.
I'm now trying to incorporate webpack-dev-server to enable automatic refresh in the browser.
These are my changes to the webpack config file:
module.exports = {
  entry: [
    'webpack/hot/dev-server',
    'webpack-dev-server/client?http://localhost:8080',
    './client/index.js'
  ],
  ....
  devServer: {
    hot: true,
    proxy: {
      '*': 'http://localhost:8000'
    }
  }
}
and the new docker-compose entry for webpack:
webpack:
  container_name: webpack
  build: ./webpack/
  depends_on:
    - web
  volumes_from:
    - web
  working_dir: /usr/src/app
  command: webpack-dev-server --hot --inline
  ports:
    - "8080:8080"
I seem to be getting an error when running the app:
Invalid configuration object. Webpack has been initialised using a configuration object that does not match the API schema.
webpack | - configuration.entry should be one of these:
webpack | object { <key>: non-empty string | [non-empty string] } | non-empty string | [non-empty string] | function
webpack | The entry point(s) of the compilation.
webpack | Details:
webpack | * configuration.entry should be an object.
webpack | * configuration.entry should be a string.
webpack | * configuration.entry should NOT have duplicate items (items ## 1 and 2 are identical) ({
webpack | "keyword": "uniqueItems",
webpack | "dataPath": ".entry",
webpack | "schemaPath": "#/definitions/common.nonEmptyArrayOfUniqueStringValues/uniqueItems",
webpack | "params": {
webpack | "i": 2,
webpack | "j": 1
webpack | },
webpack | "message": "should NOT have duplicate items (items ## 1 and 2 are identical)",
webpack | "schema": true,
webpack | "parentSchema": {
webpack | "items": {
webpack | "minLength": 1,
webpack | "type": "string"
webpack | },
webpack | "minItems": 1,
webpack | "type": "array",
webpack | "uniqueItems": true
webpack | },
webpack | "data": [
webpack | "/usr/src/app/node_modules/webpack-dev-server/client/index.js?http://localhost:8080",
webpack | "webpack/hot/dev-server",
webpack | "webpack/hot/dev-server",
webpack | "webpack-dev-server/client?http://localhost:8080",
webpack | "./client/index.js"
webpack | ]
webpack | }).
webpack | [non-empty string]
webpack | * configuration.entry should be an instance of function
webpack | function returning an entry object or a promise..
As you can see, my entry object doesn't have any duplicate items.
Is there something additional I should be doing? Anything I missed?
webpack-dev-server should basically proxy all requests to the node server.

I couldn't make webpack's or webpack-dev-server's watch (--watch) mode work even after mounting my project folder into the container.
To fix this you need to understand how webpack detects file changes within a directory.
It uses one of two pieces of OS-level support for watching file changes: inotify (Linux) and FSEvents (macOS). Standard Docker images usually don't have these preinstalled (especially inotify on Linux), so you have to install the tooling in your Dockerfile.
Look for the inotify-tools package in your distro's package manager and install it; fortunately Alpine, Debian, and CentOS all have it.
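For example, on an Alpine-based Node image the install is a single instruction (a minimal sketch; swap apk for apt-get or yum on Debian/CentOS):
FROM node:alpine
# inotify-tools provides the Linux inotify userspace utilities that file watchers rely on
RUN apk add --no-cache inotify-tools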

Docker & webpack-dev-server can be fully operational without any middleware or plugins; proper configuration is the key:
devServer: {
port: 80, // use any port suitable for your configuration
host: '0.0.0.0', // to accept connections from outside container
watchOptions: {
aggregateTimeout: 500, // delay before reloading
poll: 1000 // enable polling since fsevents are not supported in docker
}
}
Use this config only if your docker container does not support fsevents.
For a more performance-efficient approach, check out HosseinAgha's answer to #42445288: Enabling webpack hot-reload in a docker application.

Try doing this:
Add watchOptions.poll = true to your webpack config:
watchOptions: {
  poll: true
},
Configure the host in your devServer config:
host: "0.0.0.0",

Hot Module Reload is the coolest development mode, and a tricky one to set up with Docker. To bring it to life you'll need to follow these 8 steps:
For Webpack 5, install these NPM packages in particular:
npm install webpack webpack-cli webpack-dev-server --save-dev --save-exact
Add this command to the 'scripts' section of your 'package.json' file:
"dev": "webpack serve --mode development --host 0.0.0.0 --config webpack.config.js"
Add this property to your 'webpack.config.js' file (it enables webpack's hot module reloading):
devServer: {
  port: 8080,
  hot: "only",
  static: {
    directory: path.join(__dirname, './'),
    serveIndex: true,
  },
},
Add this code to the very bottom of your 'index.js' (or whatever), which is the entry point to your app:
if (module.hot) {
  module.hot.accept()
}
Expose ports in 'docker-compose.yml' to see the app at http://localhost:8080
ports:
  - 8080:8080
Sync your app's 'src' directory with the 'src' directory inside the container. To do this, use volumes in 'docker-compose.yml'. In my case, the 'client' directory is where all my frontend React files sit, including 'package.json', 'webpack.config.js' & the Dockerfile, while 'docker-compose.yml' is placed one level up.
volumes:
  - ./client/src:/client/src
Inside the volumes group you'd better add an anonymous volume to keep the 'node_modules' directory from being synchronized ('.dockerignore' is of no help here).
volumes:
  ...
  - /client/node_modules
Fire this whole bundling from Docker's CMD, not from a RUN instruction, inside your Dockerfile.
WORKDIR /client
CMD ["npm", "run", "dev"]
P.S. If you use the Webpack dev server, you don't need other web servers like Nginx in your development setup. Another important thing to keep in mind is that the Webpack dev server does not recompile files from the '/src' folder into '/dist'; it performs the compilation in memory.
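For reference, a minimal webpack.config.js matching the steps above might look like this (a sketch; the entry path and mode are assumptions):
const path = require('path');

module.exports = {
  mode: 'development',
  entry: './src/index.js', // assumed entry point
  devServer: {
    port: 8080,
    hot: 'only', // apply HMR updates without a full page reload
    static: {
      directory: path.join(__dirname, './'),
      serveIndex: true,
    },
  },
};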

Had this trouble on a Windows machine. Solved it by setting WATCHPACK_POLLING to true in the environment section of the docker compose file.
frontend:
  environment:
    - WATCHPACK_POLLING=true

I had the same problem. It was more my fault than webpack's or docker's. You have to check that the WORKDIR in your Dockerfile matches the target of your bind mount in docker-compose.yml.
Dockerfile
FROM node
...
WORKDIR /myapp
...
And in your docker-compose.yml:
web:
  ....
  volumes:
    - ./:/myapp
It should work if you configure the reloading in your webpack.config.js.

Related

Can I make docker an environment without saving code

I'm new to docker and I wonder: can I use docker as an application environment only?
I have a Dockerfile which lets me build a Docker image so that my teammates and the server can run my project.
FROM node:10.15.3
ADD . /app/
WORKDIR /app
RUN npm install
RUN npm run build
ENV HOST 0.0.0.0
ENV PORT 3000
EXPOSE 3000
CMD ["npm", "run","start"]
The project can be built and run. Everything is perfect.
However, I found that all the files get baked into the image: my source code and all of node_modules. It makes the image too big.
And I remember that in a previous project, I created a Linux VM and bound my project folder into the guest OS. Then I could keep developing and use the VM as a server.
Can docker do something like this? Docker only needs to load my project folder (whose path will be passed when running the command).
Then it runs npm install and npm start/dev, and all the libraries are saved into my local directory. Or I run npm start manually and docker loads my files and hosts them.
I just need docker to be my application server, to make sure I get the same result as deploying to the production server.
Can Docker do this?
============================== Update ================================
I tried to use a bind mount to do this.
Then I created this docker-compose file:
version: "3.7"
services:
web:
build: .
volumes:
- type: bind
source: C:\myNodeProject
target: /src/
ports:
- '8888:3000'
and I updated the Dockerfile:
FROM node:10.15.3
# Install dependencies
WORKDIR /src/
# I ran 'CMD ls' to confirm that the directory is bind-mounted
# Expose the app port
EXPOSE 3000
# Start the app
CMD yarn dev
and I get this error:
web_1 | yarn run v1.13.0
web_1 | $ cross-env NODE_ENV=development nodemon server/index.js --watch server
web_1 | [nodemon] 1.18.11
web_1 | [nodemon] to restart at any time, enter `rs`
web_1 | [nodemon] watching: /src/server/**/*
web_1 | [nodemon] starting `node server/index.js`
web_1 | [nodemon] app crashed - waiting for file changes before starting...
index.js
const express = require('express')
const consola = require('consola')
const { Nuxt, Builder } = require('nuxt')
const app = express()

// Import and Set Nuxt.js options
const config = require('../nuxt.config.js')
config.dev = !(process.env.NODE_ENV === 'production')

async function start() {
  // Init Nuxt.js
  const nuxt = new Nuxt(config)
  const { host, port } = nuxt.options.server

  // Build only in dev mode
  if (config.dev) {
    const builder = new Builder(nuxt)
    await builder.build()
  } else {
    await nuxt.ready()
  }

  // Give nuxt middleware to express
  app.use(nuxt.render)

  // Listen the server
  app.listen(port, host)
  consola.ready({
    message: `Server listening on http://${host}:${port}`,
    badge: true
  })
}

start()
Docker can also work the way you've suggested, using a volume bind from the host OS. It's useful in development, since you can edit your code and the Docker container will immediately run it.
However, in production you don't want to follow the same practice.
A main principle of Docker containers is that an image is immutable:
once you've built it, it's unchangeable, and if you want to make changes, you'll need to build a new image as a result.
As for your concern that Docker should load the same dependencies in production as locally: this is managed by package-lock.json, which makes sure that whenever someone runs npm install, the same dependencies are installed.
For production, your Docker container needs to be lightweight, so it should contain only your code and node_modules, and it's good practice to remove the npm cache after installation to keep your Docker image as small as possible. A smaller image leaves less room for security holes and deploys faster.
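Applied to the Dockerfile from the question, that advice might look like this (a sketch; --production skips devDependencies, and the cache clean keeps the image small):
FROM node:10.15.3
WORKDIR /app
# copy the manifests first so the install layer stays cached until dependencies change
COPY package*.json ./
RUN npm install --production && npm cache clean --force
COPY . .
RUN npm run build
ENV HOST 0.0.0.0
ENV PORT 3000
EXPOSE 3000
CMD ["npm", "run", "start"]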

PM2 Won't Start Inside Docker

Trying to get a node app running and reloading from a volume inside docker, using docker-compose.
The goal is to have the app running inside the container, without losing the ability to edit/reload the code outside the container.
I've been through PM2's docker integration advice and am using keymetrics/pm2-docker-alpine:latest as a base image.
The docker-compose.yml file defines a simple web service.
version: '2'
services:
  web:
    build: .
    ports:
      - "${HOST_PORT}:${APP_PORT}"
    volumes:
      - .:/code
Which uses a fairly simple Dockerfile.
FROM keymetrics/pm2-docker-alpine:latest
ADD . /code
WORKDIR /code
RUN npm install
CMD ["npm", "start"]
Which calls npm start:
{
  "start": "pm2-docker process.yml --watch"
}
Which refers to process.yml:
apps:
  - script: './index.js'
    name: 'server'
Running npm start locally works fine—PM2 gets the node process running and watching for changes to the code.
However, as soon as I try and run it inside a container instead, I get the following error on startup:
Attaching to app_web_1
web_1 |
web_1 |
web_1 | [PM2] Spawning PM2 daemon with pm2_home=/root/.pm2
web_1 | [PM2] PM2 Successfully daemonized
web_1 |
web_1 | error: missing required argument `file|json|stdin|app_name|pm_id'
web_1 |
app_web_1 exited with code 1
I can't find any good hello-world examples for the pm2-docker binary, and I've got no idea why pm2-docker would refuse to work, especially as it's running on top of the official pm2-docker-alpine image.
To activate the --watch option, instead of passing the --watch option to pm2-docker, just set the watch option to true in the yml configuration file:
apps:
  - script: './index.js'
    name: 'server'
    watch: true
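With watch moved into process.yml, the npm start script presumably no longer needs the flag (a sketch of the package.json scripts section):
{
  "scripts": {
    "start": "pm2-docker process.yml"
  }
}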

Production vs Development Docker setup for Node (Express & Mongo) App

I'm attempting to convert a Node app to using Docker but running into a few issues/questions I'm unable to answer.
But for simplicity I've included some very basic example files to keep the question on target. In fact the example below merely links to a Mongo container but doesn't use it in the code to keep it even simpler.
Primarily, what Dockerfile and docker-compose.yml setup is required to successfully use Docker on a Node + Express + Mongo app on both local (OS X) development and for Production builds?
Dockerfile
FROM node:6.3.0
# Create new user to avoid using root - is this correct practise?
RUN useradd --user-group --create-home --shell /bin/false app
COPY package.json /home/app/code/
RUN chown -R app:app /home/app/*
USER app
WORKDIR /home/app/code
# Should this even be set here or use docker-compose instead?
# And should there be:
# - docker-compose.yml setting it to production by default
# - docker-compose.dev.yml setting it to production?
# Or reverse it? (docker-compose.prod.yml instead with default being development?)
# Commenting below out or it will always run as production
#ENV NODE_ENV production
RUN npm install
USER root
COPY . /home/app/code
# Running chown to ensure new 'app' user owns files
RUN chown -R app:app /home/app/*
USER app
EXPOSE 3000
# What CMD should be here to ensure development versus production is simple?
# Development - Restart server and refresh browser on file changes
# Production - Ensure uptime.
CMD ["npm", "start"]
docker-compose.yml
version: "2"
services:
web:
build: .
# I would normally use a .env file but for this example will set explicitly
# env_file: .env
environment:
- NODE_ENV=production
volumes:
- ./:/home/app/code
- /home/app/code/node_modules
ports:
- "3000:3000"
links:
- mongo
mongo:
image: mongo
ports:
- "27017:27017"
docker-compose.dev.yml
version: "2"
services:
web:
# I would normally use a .env file but for this example will set explicitly
# env_file: .env
environment:
- NODE_ENV=development
package.json
{
  "name": "docker-node-test",
  "version": "1.0.0",
  "description": "",
  "main": "app.js",
  "scripts": {
    "start": "nodemon app.js"
  },
  "dependencies": {
    "express": "^4.14.0",
    "mongoose": "^4.6.1",
    "nodemon": "^1.10.2"
  },
  "devDependencies": {
    "mocha": "^3.0.2"
  }
}
1. How to handle the different NODE_ENV (dev, production, staging)?
This is my primary question and conundrum.
In the example, NODE_ENV is set to production in the Dockerfile, and there are two docker-compose files:
docker-compose.yml sets the defaults, including NODE_ENV=production
docker-compose.dev.yml overrides NODE_ENV and sets it to development
1.1. Is it advisable to switch that order around: have development settings as the default and use a docker-compose.prod.yml for the overrides instead?
1.2. How do you handle the node_modules directory?
I'm really not sure how to handle the node_modules directory between local development needs and running in production. (Perhaps I have a fundamental misunderstanding though?)
Edit:
I added a .dockerignore file and included the node_modules directory as a line. This ensures the node_modules dir is ignored during the copy, etc.
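For reference, that .dockerignore is a single line:
node_modules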
I then edited the docker-compose.yml to include the node_modules as a volume.
volumes:
  - ./:/home/app/code
  - /home/app/code/node_modules
I have also put the above change into the full docker-compose.yml at the start of the question for completeness.
Is this even a solution?
Doing the above ensured my local development npm install included dev-dependencies, while running docker-compose up pulls in the production-only node modules inside the Docker container (since the default docker-compose.yml sets NODE_ENV=production).
But it seems the NODE_ENV set inside the two docker-compose files isn't taken into account when running docker-compose -f docker-compose.yml build :/ I expected it to pass NODE_ENV=production, but ALL of the node_modules are re-installed (including the dev-dependencies).
Do we instead use 2 Dockerfiles? (Dockerfile for Prod; Dockerfile.dev for local development)
(I feel like that is a fundamental piece of logic/knowledge I am missing in the setup)
2. Nodemon vs PM2
How would one use nodemon on the local development machine but PM2 on the Production build?
3. Should you create a user inside the docker containers and then set that user to be used in the Dockerfile?
It uses the root user by default, but I've not seen many articles discussing creating a dedicated user within the container. Am I correct in what I've done for security? I certainly wouldn't feel comfortable running an app as root on a non-Docker build.
Thank you for reading. Any and all assistance appreciated :)
I can share my experience; I'm not saying it is the best solution.
I have a Dockerfile and a Dockerfile.dev. In Dockerfile.dev I install nodemon and run the app with nodemon; NODE_ENV doesn't seem to have any impact. As for users, you should not use root, for security reasons. My dev version:
FROM node:16.14.0-alpine3.15
ENV NODE_ENV=development
# install missing libs and python3
RUN apk update && apk add -U unzip zip curl && rm -rf /var/cache/apk/* \
  && npm i node-gyp@8.4.1 nodemon@2.0.15 -g
WORKDIR /node
COPY package.json package-lock.json ./
RUN mkdir /app && chown -R node:node .
USER node
RUN npm install && npm cache clean --force
WORKDIR /node/app
COPY --chown=node:node . .
# local development
CMD ["nodemon", "server.js" ]
In production I run the app with node:
FROM node:16.14.0-alpine
ENV NODE_ENV=production
# install missing libs and python3
RUN apk update && apk add -U unzip zip curl && rm -rf /var/cache/apk/* \
  && npm i node-gyp@8.4.1 -g
WORKDIR /node
COPY package.json package-lock.json ./
RUN mkdir /app && chown -R node:node .
USER node
RUN npm install && npm cache clean --force
WORKDIR /node/app
COPY --chown=node:node . .
CMD ["node", "server.js" ]
I have two separate versions of docker-compose. In docker-compose.dev.yml I set the dockerfile to Dockerfile.dev:
app:
  depends_on:
    mongodb:
      condition: service_healthy
  build:
    context: .
    dockerfile: Dockerfile.dev
  healthcheck:
    test: [ "CMD", "curl", "-f", "http://localhost:5000" ]
    interval: 180s
    timeout: 10s
    retries: 5
  restart: always
  env_file: ./.env
  ports:
    - "5000:5000"
  environment:
    ...
  volumes:
    - /node/app/node_modules
In the production docker-compose.yml, the dockerfile is set to Dockerfile.
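To run the dev variant you then point compose at the dev file explicitly (an assumed invocation; adjust the file name to yours):
docker-compose -f docker-compose.dev.yml up --build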
Nodemon vs PM2: I used pm2 before dockerizing the app. I cannot see any benefit of having it in docker; restart: always takes care of restarting on error. You should probably use restart: unless-stopped, but I prefer the always option. Initially I also used nodemon in production so that the app reflected volume changes, but I dropped it because the restart didn't work well (it kept waiting for code changes...).
Users: you can see it in my example. I took a course on docker + nodejs where setting a non-root user was recommended, so I do it and have had no problems.
I hope I explained it well enough and that it helps you. Good luck.
Either; it doesn't matter too much. I prefer to have development details as the default and then overwrite them with production details.
I don't commit node_modules to my repo; instead I have "npm install" in my dockerfile.
You can set rules in the dockerfile as to which one to build, based on build settings (see the sketch below).
It is typical to build everything as root and run the main program as root. You can set up other users, but for most uses it is not needed, as the idea of docker containers is to isolate each process in an individual container.
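One way to express those build-time rules is a build argument (a sketch; the ARG name and base image are assumptions, not the poster's setup):
FROM node:16
# default to production; override with: docker build --build-arg NODE_ENV=development .
ARG NODE_ENV=production
ENV NODE_ENV=${NODE_ENV}
WORKDIR /app
COPY package*.json ./
# npm skips devDependencies automatically when NODE_ENV=production
RUN npm install
COPY . .
CMD ["npm", "start"]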

Chokidar isn't picking up file changes inside docker container

I'm using docker for mac version 1.12.0-rc2 for a react project. My workflow is this:
src/ folder on OS X is mounted to the container
When a developer modifies a file in src/ it gets converted to ES5 and placed in public/ (this works).
When a file is changed in public/ another watcher triggers hot reloading (works on my localhost but not in the container).
Here's my watcher code from step 3:
// root = "/src"
const watcher = chokidar.watch(root, {
usePolling: true,
awaitWriteFinish: {
pollInterval: 100,
stabilityThreshold: 250
},
ignored: /\.(git|gz|map)|node_modules|jspm_packages|src/,
ignoreInitial: true,
persistent: true
})
.on("change", function fileWatcher(filename) {
const modulePath = filename.replace(`${root}/`, "");
wss.clients.forEach(function sendFileChange(client) {
send("filechange", modulePath, client);
});
if (cache[filename]) {
wss.clients.forEach(function sendCacheFlush(client) {
send("cacheflush", filename, client);
});
delete cache[filename];
}
});
And my docker-compose.yml file:
version: '2'
services:
wildcat:
build:
context: .
args:
JSPM_GITHUB_AUTH_TOKEN:
image: "nfl/react-wildcat-example:latest"
environment:
NODE_ENV: development
PORT: 3000
STATIC_PORT: 4000
COVERAGE:
LOG_LEVEL:
NODE_TLS_REJECT_UNAUTHORIZED: 0
CHOKIDAR_USEPOLLING: 'true'
volumes:
- ./src:/src/src
- ./api:/src/api
ports:
- "3000:3000"
- "4000:4000"
ulimits:
nproc: 65535
entrypoint: "npm run"
command: "dev"
After spending a day or two on this I found the issue: the line
ignored: /\.(git|gz|map)|node_modules|jspm_packages|src/,
was causing chokidar to ignore /src, which was the folder I was copying all the source code to in the docker container. I changed this path in the Dockerfile and docker-compose.yml to /code instead, and everything worked as expected.
As a hopefully helpful note to those coming here via the title, the Chokidar docs state:
It is typically necessary to set this (polling) to true to successfully watch files over a network, and it may be necessary to successfully watch files in other non-standard situations.
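In other words, a minimal polling watcher looks like this (a sketch; the path and interval are assumptions):
const chokidar = require("chokidar");

// usePolling stats files on an interval instead of relying on OS events,
// which often don't propagate across a bind mount
chokidar
  .watch("/code/src", { usePolling: true, interval: 1000 })
  .on("change", path => console.log(`${path} changed`));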
On Docker Desktop for Mac 4.0.0, I got the Firebase Emulators (which use Chokidar) to see host-side file changes with:
environment:
  - CHOKIDAR_USEPOLLING=true

My meanjs server takes 3-6 minutes to start

My mean.js app is based off the yeoman meanjs generator, with some tweaks (e.g. separating the front end and backend so they can be deployed separately).
I'm launching the app using fig (see fig.yml below).
When I set the command to "node server.js", the server takes 6 seconds to start.
When I start up using "grunt", which runs nodemon and watch, it takes about 6 minutes. I've tried various things but can't really understand why nodemon would cause things to run so much slower.
fig.yml:
web:
  build: .
  links:
    - db:mongo.local
  ports:
    - "3000:3000"
  volumes:
    - .:/home/abilitie
  command: grunt
  #command: node server.js # much faster but you don't get the restart stuff
  environment:
    NODE_ENV: development
db:
  image: dockerfile/mongodb
  ports:
    - "27017:27017"
Gruntfile (excerpt)
concurrent: {
  default: ['nodemon', 'watch'],
  old_default: ['nodemon', 'watch'],
  debug: ['nodemon:debug', 'watch', 'node-inspector'],
  options: {
    logConcurrentOutput: true,
    limit: 10
  }
},
jshint: {
  all: {
    src: watchFiles.serverJS,
    options: {
      jshintrc: true
    }
  }
},

grunt.registerTask('lint', ['jshint']);

// Default task(s).
grunt.registerTask('default', ['lint', 'concurrent:default']);
That's because your first approach simply runs the Express server with $ node server.js. (Though I don't understand why it takes 6 seconds to start; maybe you have slow hardware...)
In order to understand why the second approach takes 6 minutes you need to understand what grunt does after launching:
Lints all these JavaScript files:
serverJS: ['gruntfile.js', 'server.js', 'config/**/*.js']
clientJS: ['public/js/*.js', 'public/modules/**/*.js']
Starts two parallel processes: watch & nodemon
If watch is clear (it watches the files from the settings and restarts the server after they are edited), what does nodemon do? More precisely, what is the difference between starting the server with node and with nodemon?
From the official GitHub documentation:
nodemon will watch the files in the directory in which nodemon was started, and if any files change, nodemon will automatically restart your node application.
If you have a package.json file for your app, you can omit the main script entirely and nodemon will read the package.json for the main property and use that value as the app.
nodemon watches all the files, including the node_modules directory, and in my meanjs v0.4.0 that's ~41,000 files. In your case buffering all of these files takes about 6 minutes. Try adding the ignore option to your gruntfile.js under grunt.initConfig > nodemon > dev > options:
nodemon: {
  dev: {
    script: 'server.js',
    options: {
      nodeArgs: ['--debug'],
      ext: 'js,html',
      watch: watchFiles.serverViews.concat(watchFiles.serverJS),
      ignore: 'node_modules/*' // or '/node_modules'
    }
  }
},
You need to determine exactly where the problem is. Try starting the server in three different ways and measure the time:
NODE_ENV=development nodejs server.js
NODE_ENV=development nodemon server.js
NODE_ENV=development nodemon server.js --ignore node_modules/
NFS saved the day.
The VirtualBox shared folder is super slow. Using this vagrant image instead of boot2docker is much faster.
https://vagrantcloud.com/yungsang/boxes/boot2docker
Also, make sure to disable UDP, or NFS may hang. You may do so by putting this in your Vagrantfile:
config.vm.synced_folder ".", "/vagrant", type: "nfs", nfs_udp: false
