My MEAN.js app is based on the Yeoman meanjs generator, with some tweaks (e.g. separating the front end and back end so they can be deployed separately).
I'm launching the app using fig (see fig.yml below).
When I set the command to "node server.js", the server takes 6 seconds to start.
When I start up using "grunt", which runs nodemon and watch, it takes about 6 minutes. I've tried various things but can't really understand why nodemon would cause things to run so much slower.
fig.yml:
web:
  build: .
  links:
    - db:mongo.local
  ports:
    - "3000:3000"
  volumes:
    - .:/home/abilitie
  command: grunt
  #command: node server.js # much faster but you don't get the restart stuff
  environment:
    NODE_ENV: development
db:
  image: dockerfile/mongodb
  ports:
    - "27017:27017"
Gruntfile (excerpt)
concurrent: {
  default: ['nodemon', 'watch'],
  old_default: ['nodemon', 'watch'],
  debug: ['nodemon:debug', 'watch', 'node-inspector'],
  options: {
    logConcurrentOutput: true,
    limit: 10
  }
},
jshint: {
  all: {
    src: watchFiles.serverJS,
    options: {
      jshintrc: true
    }
  }
},
grunt.registerTask('lint', ['jshint']);
// Default task(s).
grunt.registerTask('default', ['lint', 'concurrent:default']);
That's because your first approach simply runs the Express server with $ node server.js. (Although I don't understand why even that takes 6 seconds to start; maybe your hardware is slow.)
To understand why the second approach takes 6 minutes, you need to understand what grunt does after launching:
It lints all of these JavaScript files:
serverJS: ['gruntfile.js', 'server.js', 'config/**/*.js']
clientJS: ['public/js/*.js', 'public/modules/**/*.js']
It then starts two parallel processes: watch and nodemon.
While watch is straightforward (it watches the files from its settings and restarts the server after they are edited), what does nodemon do? More precisely, what is the difference between starting the server with node and with nodemon?
From the official GitHub documentation:
nodemon will watch the files in the directory in which nodemon was started, and if any files change, nodemon will automatically restart your node application.
If you have a package.json file for your app, you can omit the main script entirely and nodemon will read the package.json for the main property and use that value as the app.
So nodemon watches all the files in the node_modules directory as well; in my meanjs v0.4.0 that's ~41,000 files. In your case, scanning all of these files takes about 6 minutes. Try adding an ignore option to your gruntfile.js under grunt.initConfig > nodemon > dev > options:
nodemon: {
  dev: {
    script: 'server.js',
    options: {
      nodeArgs: ['--debug'],
      ext: 'js,html',
      watch: watchFiles.serverViews.concat(watchFiles.serverJS),
      ignore: 'node_modules/*' // or '/node_modules'
    }
  }
},
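As an alternative to setting these options in the Gruntfile, nodemon can read the same options from a standalone config file. A minimal sketch (the file name and option names are nodemon's conventions; the watch paths are assumptions based on the question's layout):

```json
{
  "watch": ["server.js", "config/"],
  "ext": "js,html",
  "ignore": ["node_modules/*"]
}
```

Save this as nodemon.json next to server.js and nodemon will pick it up whether it is launched by grunt or directly.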
You need to determine exactly where the problem is. Try starting the server in three different ways and measure the time for each:
NODE_ENV=development nodejs server.js
NODE_ENV=development nodemon server.js
NODE_ENV=development nodemon server.js --ignore node_modules/
NFS saved the day.
The VirtualBox shared folder is super slow. Using this vagrant image instead of boot2docker is much faster.
https://vagrantcloud.com/yungsang/boxes/boot2docker
Also, make sure to disable UDP, or NFS may hang. You may do so by putting this in your Vagrantfile:
config.vm.synced_folder ".", "/vagrant", type: "nfs", nfs_udp: false
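In context, that synced-folder line sits inside the usual Vagrant configure block; a minimal sketch (the box name is taken from the link above):

```
# Vagrantfile sketch: NFS synced folder with UDP disabled
Vagrant.configure("2") do |config|
  config.vm.box = "yungsang/boot2docker"
  config.vm.synced_folder ".", "/vagrant", type: "nfs", nfs_udp: false
end
```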
Related
I'm attempting to add clustering via PM2 to my Node/Express application and deploy it.
I've set up the following command:
pm2 start build/server/app.js -i max
The above works fine locally. I'm testing the functionality in a staging environment on Heroku (Performance 1X dyno).
The log for the command shows PM2 attempting 1 instance rather than max. It shows the typical info after pm2 start runs successfully, but you can see the app immediately crashes afterward.
Any advice or guidance is appreciated.
I ended up using the following documentation: https://pm2.keymetrics.io/docs/integrations/heroku/
Using an ecosystem.config.js with the following:
module.exports = {
  apps: [
    {
      name: `app-name`,
      script: 'build/server/app.js',
      instances: "max",
      exec_mode: "cluster",
      env: {
        NODE_ENV: "localhost"
      },
      env_development: {
        NODE_ENV: process.env.NODE_ENV
      },
      env_staging: {
        NODE_ENV: process.env.NODE_ENV
      },
      env_production: {
        NODE_ENV: process.env.NODE_ENV
      }
    }
  ],
};
Then the following package.json script handles the deployment for whichever environment I am deploying to, e.g. production:
"deploy:cluster:prod": "pm2-runtime start ecosystem.config.js --env production --deep-monitoring",
I got the same error, but I fixed it by adding the following to my package.json file:
{
  "preinstall": "npm i -g pm2",
  "start": "pm2-runtime start build/server/app.js -i 1"
}
This is advised for a production environment, whereas running
pm2 start build/server/app.js -i max
is for development purposes.
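For context, and as an assumption since the answer doesn't mention it: Heroku runs the package.json "start" script by default for Node apps, so an equivalent explicit Procfile would be:

```
web: pm2-runtime start build/server/app.js -i 1
```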
I'm getting used to Docker. Here is my current code in my Dockerfile:
FROM node:12-alpine AS builder
ARG NODE_ENV
ENV NODE_ENV ${NODE_ENV}
RUN npm run build
CMD ["sh","-c","./start-docker.sh ${NODE_ENV}"]
And I'm using pm2 to manage the cluster in Node.js; here is my start-docker.sh:
NODE_PATH=. pm2-runtime ./ecosystem.config.js --env $NODE_ENV
In my ecosystem.config.js, I define an env:
env_dev: {
  NODE_ENV: 'development'
}
Everything is OK, but on my server NODE_ENV=''. I think there is something wrong with how I pass it in my CMD, but I can't find out what it is.
Okay, to my mind there is another way to do this; please try the following. This will not be actual code, it is just the idea.
ecosystem.config.js
module.exports = {
  apps: [{
    name: "app",
    script: "./app.js",
    env: {
      NODE_ENV: "development",
    },
    env_production: {
      NODE_ENV: "production",
    }
  }]
}
And your Dockerfile:
FROM node:12-alpine
RUN npm run build
CMD ["pm2","start","ecosystem.config.js"]
As described in the PM2 CLI documentation, you just need to run pm2 start ecosystem.config.js to start the application; this automatically picks up the env variables described in ecosystem.config.js:
https://pm2.keymetrics.io/docs/usage/application-declaration/#cli
Please try this. You might run into new problems, but hopefully with some error logs so that we can debug further. I am fairly sure this can resolve your problem.
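One more thing worth checking, as an assumption since the build command isn't shown in the question: an ARG only has a value inside the image if it is supplied at build time (docker build --build-arg NODE_ENV=...); otherwise ENV NODE_ENV ${NODE_ENV} expands to an empty string, which matches the NODE_ENV='' symptom. Giving the ARG a default guards against that:

```dockerfile
# Sketch: give ARG a default so NODE_ENV is never empty.
# Override it with: docker build --build-arg NODE_ENV=production .
FROM node:12-alpine
ARG NODE_ENV=development
ENV NODE_ENV=${NODE_ENV}
```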
My ecosystem.config.js looks like this:
module.exports = {
  apps: [{
    name: 'production',
    script: '/home/username/sites/Website/source/server.js',
    env: { NODE_ENV: 'PRODUCTION' },
    args: '--run-server'
  }, {
    name: 'staging',
    script: '/home/username/sites/WebsiteStaging/source/server.js',
    env: { NODE_ENV: 'STAGING' },
    args: '--run-server'
  }],
  deploy: {
    production: {
      user: 'username',
      host: ['XXX.XXX.XX.XXX'],
      ref: 'origin/production',
      repo: 'git@github.com:ghuser/Website.git',
      path: '/home/username/sites/Website',
      'post-deploy': 'npm install && pm2 reload ecosystem.config.js --only production',
      env: { NODE_ENV: 'PRODUCTION' }
    },
    staging: {
      user: 'username',
      host: ['XXX.XXX.XX.XXX'],
      ref: 'origin/staging',
      repo: 'git@github.com:ghuser/Website.git',
      path: '/home/username/sites/WebsiteStaging',
      'post-deploy': 'npm install && pm2 reload ecosystem.config.js --only staging',
      env: { NODE_ENV: 'STAGING' }
    }
  }
};
When I deploy the application, I expect to see two processes - one called 'production' and one called 'staging'. These run code from the same repo, but from different branches.
I do see two processes, however, when I run pm2 desc production I can see that the script path is /home/username/sites/WebsiteStaging/source/server.js. This path should be /home/username/sites/Website/source/server.js as per the config file.
I've tried setting the script to ./server.js and using the cwd parameter but the result has been the same.
The deploy commands I am using are pm2 deploy production and pm2 deploy staging and I have verified that both the Website and the WebsiteStaging folders are present on my server.
Is there something I'm missing here? Why would it be defaulting to the staging folder like this?
What worked for me was to delete the pm2 application and start it again:
pm2 delete production
pm2 start production
When I ran pm2 desc production, I saw that the script path was incorrect, and nothing I did seemed to correct that path, short of the above.
I had the same issue. It seems it happened due to an old dump.pm2 that was not updated after the changes to ecosystem.config.js were made. Updating the startup script solved the issue:
pm2 save
pm2 unstartup
pm2 startup
I'm trying to reload my Node.js app code inside a Docker container. I use pm2 as the process manager. Here are my configurations:
Dockerfile
FROM node:6.9.5
LABEL maintainer "denis.ostapenko2@gmail.com"
RUN mkdir -p /usr/src/koa2-app
COPY . /usr/src/koa2-app
WORKDIR /usr/src/koa2-app
RUN npm i -g pm2
RUN npm install
EXPOSE 9000
CMD [ "npm", "run", "production"]
ecosystem.prod.config.json (aka pm2 config)
{
  "apps": [
    {
      "name": "koa2-fp",
      "script": "./bin/www.js",
      "watch": true,
      "merge_logs": true,
      "log_date_format": "YYYY-MM-DD HH:mm Z",
      "env": {
        "NODE_ENV": "production",
        "PROTOCOL": "http",
        "APP_PORT": 3000
      },
      "instances": 4,
      "exec_mode": "cluster_mode",
      "autorestart": true
    }
  ]
}
docker-compose.yaml
version: "3"
services:
  web:
    build: .
    volumes:
      - ./:/koa2-app
    ports:
      - "3000:3000"
And npm run production runs:
pm2 start --attach ecosystem.prod.config.json
I run 'docker-compose up' in the CLI and it works; I'm able to interact with my app on localhost:3000. But if I make some change to the code, it will not show up in the browser. How can I configure code reloading inside Docker?
P.S. A best-practices question: is it really OK to develop using Docker, or are Docker containers preferable only for production use?
It seems that you COPY your code into one place but mount the volume in another place.
Try:
version: "3"
services:
  web:
    build: .
    volumes:
      - ./:/usr/src/koa2-app
    ports:
      - "3000:3000"
Then, when you change the JS code outside the container (in your IDE), pm2 is able to see the changes and therefore reload the application (you need to verify that part).
Regarding the use of Docker in a development environment: it is a really good thing to do for many reasons. For instance, you get the same app installation across different environments, which avoids a lot of bugs, etc.
I have a Docker app with the following containers:
node - source code of the project; it serves up the HTML page situated in the public folder.
webpack - watches files in the node container and updates the public folder (from the node container) when the code changes.
database
This is the webpack/node container setup:
web:
  container_name: web
  build: .
  env_file: .env
  volumes:
    - .:/usr/src/app
    - node_modules:/usr/src/app/node_modules
  command: npm start
  environment:
    - NODE_ENV=development
  ports:
    - "8000:8000"
webpack:
  container_name: webpack
  build: ./webpack/
  depends_on:
    - web
  volumes_from:
    - web
  working_dir: /usr/src/app
  command: webpack --watch
So currently, the webpack container monitors and updates the public folder, and I have to manually refresh the browser to see my changes.
I'm now trying to incorporate webpack-dev-server to enable automatic refresh in the browser.
These are my changes to the webpack config file:
module.exports = {
  entry: [
    'webpack/hot/dev-server',
    'webpack-dev-server/client?http://localhost:8080',
    './client/index.js'
  ],
  // ...
  devServer: {
    hot: true,
    proxy: {
      '*': 'http://localhost:8000'
    }
  }
}
And the new docker-compose file for webpack:
webpack:
  container_name: webpack
  build: ./webpack/
  depends_on:
    - web
  volumes_from:
    - web
  working_dir: /usr/src/app
  command: webpack-dev-server --hot --inline
  ports:
    - "8080:8080"
I seem to be getting an error when running the app:
Invalid configuration object. Webpack has been initialised using a configuration object that does not match the API schema.
webpack | - configuration.entry should be one of these:
webpack | object { <key>: non-empty string | [non-empty string] } | non-empty string | [non-empty string] | function
webpack | The entry point(s) of the compilation.
webpack | Details:
webpack | * configuration.entry should be an object.
webpack | * configuration.entry should be a string.
webpack | * configuration.entry should NOT have duplicate items (items ## 1 and 2 are identical) ({
webpack | "keyword": "uniqueItems",
webpack | "dataPath": ".entry",
webpack | "schemaPath": "#/definitions/common.nonEmptyArrayOfUniqueStringValues/uniqueItems",
webpack | "params": {
webpack | "i": 2,
webpack | "j": 1
webpack | },
webpack | "message": "should NOT have duplicate items (items ## 1 and 2 are identical)",
webpack | "schema": true,
webpack | "parentSchema": {
webpack | "items": {
webpack | "minLength": 1,
webpack | "type": "string"
webpack | },
webpack | "minItems": 1,
webpack | "type": "array",
webpack | "uniqueItems": true
webpack | },
webpack | "data": [
webpack | "/usr/src/app/node_modules/webpack-dev-server/client/index.js?http://localhost:8080",
webpack | "webpack/hot/dev-server",
webpack | "webpack/hot/dev-server",
webpack | "webpack-dev-server/client?http://localhost:8080",
webpack | "./client/index.js"
webpack | ]
webpack | }).
webpack | [non-empty string]
webpack | * configuration.entry should be an instance of function
webpack | function returning an entry object or a promise..
As you can see, my entry object doesn't have any duplicate items.
Is there something additional I should be doing? Anything I missed?
webpack-dev-server should basically proxy all requests to the node server.
I couldn't make webpack's or webpack-dev-server's watch (--watch) mode work even after mounting my project folder into the container.
To fix this you need to understand how webpack detects file changes within a directory.
It uses one of two pieces of software that add OS-level support for watching file changes: inotify (Linux) and FSEvents (macOS). Standard Docker images usually don't have these preinstalled (especially inotify on Linux), so you have to install them in your Dockerfile.
Look for the inotify-tools package in your distro's package manager and install it; fortunately Alpine, Debian, and CentOS all have it.
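For example, the install step in the Dockerfile might look like this (package names as found in the Alpine and Debian repositories):

```dockerfile
# Alpine-based image:
RUN apk add --no-cache inotify-tools

# Debian/Ubuntu-based image:
RUN apt-get update && apt-get install -y inotify-tools
```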
Docker & webpack-dev-server can be fully operational without any middleware or plugins; proper configuration is the key:
devServer: {
port: 80, // use any port suitable for your configuration
host: '0.0.0.0', // to accept connections from outside container
watchOptions: {
aggregateTimeout: 500, // delay before reloading
poll: 1000 // enable polling since fsevents are not supported in docker
}
}
Use this config only if your Docker container does not support fsevents.
For a more performance-efficient approach, check out HosseinAgha's answer #42445288: Enabling webpack hot-reload in a docker application
Try doing this:
Add watchOptions.poll = true in your webpack config:
watchOptions: {
  poll: true
},
Configure host in your devServer config:
host: "0.0.0.0",
Hot Module Reload is the coolest development mode, and a tricky one to set up with Docker. To bring it to life you'll need to follow these 8 steps:
For Webpack 5, install these NPM packages in particular:
npm install webpack webpack-cli webpack-dev-server --save-dev --save-exact
Write this command into the 'scripts' section of your 'package.json' file:
"dev": "webpack serve --mode development --host 0.0.0.0 --config webpack.config.js"
Add this property to your 'webpack.config.js' file (it'll enable webpack's hot module reloading):
devServer: {
port: 8080,
hot: "only",
static: {
directory: path.join(__dirname, './'),
serveIndex: true,
},
},
Add this code at the very bottom of your 'index.js' (or whatever the entry point to your app is):
if (module.hot) {
module.hot.accept()
}
Expose ports in 'docker-compose.yml' to see the app at http://localhost:8080:
ports:
- 8080:8080
Sync your app's 'src' directory with the 'src' directory within the container. To do this, use volumes in 'docker-compose.yml'. In my case, the 'client' directory is where all my frontend React files sit, including 'package.json', 'webpack.config.js' & the Dockerfile, while 'docker-compose.yml' is placed one level up.
volumes:
- ./client/src:/client/src
Inside the volume group you'd better block syncing of the 'node_modules' directory ('.dockerignore' is of no help here):
volumes:
...
- /client/node_modules
Fire this whole bundling from Docker's CMD, not from a RUN command, in your Dockerfile:
WORKDIR /client
CMD ["npm", "run", "dev"]
P.S. If you use the Webpack dev server, you don't need other web servers like Nginx in development. Another important thing to keep in mind is that the Webpack dev server does not recompile files from the '/src' folder into a '/dist' one; it performs the compilation in memory.
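Putting steps 5-8 together, the relevant service in 'docker-compose.yml' might look like this (a sketch; the 'client' paths come from the layout described above, adjust to yours):

```yaml
services:
  client:
    build: ./client
    ports:
      - "8080:8080"                # step 5: reach the dev server on localhost:8080
    volumes:
      - ./client/src:/client/src   # step 6: sync sources for hot reload
      - /client/node_modules       # step 7: keep the container's node_modules
```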
I had this trouble on a Windows device. I solved it by setting WATCHPACK_POLLING to true in the environment section of Docker Compose:
frontend:
  environment:
    - WATCHPACK_POLLING=true
I had the same problem; it was my fault rather than webpack's or Docker's. You have to check that the WORKDIR in your Dockerfile is the target of your bind mount in docker-compose.yml.
Dockerfile
FROM node
...
WORKDIR /myapp
...
And in your docker-compose.yml:
web:
  ...
  volumes:
    - ./:/myapp
It should work once you configure reloading in your webpack.config.js.