pm2 --watch is logging every 3 seconds irrespective of config file - node.js

We have the below config file for pm2:
module.exports = {
  apps: [
    {
      script: 'index.js',
      // ------------------------------------ watch options - begin
      watch: ['/testing'],
      watch_delay: 10000,
      ignore_watch: ['node_modules', 'logs'],
      watch_options: {
        followSymlinks: false,
      },
      // ------------------------------------ watch options - end
      error_file: 'logs/err.log',
      out_file: 'logs/out.log',
      log_file: 'logs/combined.log',
      time: true,
    },
  ],
  deploy: {
    production: {
      user: 'SSH_USERNAME',
      host: 'SSH_HOSTMACHINE',
      ref: 'origin/master',
      repo: 'GIT_REPOSITORY',
      path: 'DESTINATION_PATH',
      'pre-deploy-local': '',
      'post-deploy':
        'npm install && pm2 reload ecosystem.config.js --env production',
      'pre-setup': '',
    },
  },
};
With this in place, and a single index.js file:
console.log(`testing`);
We get 'testing' printed to the log file every 3 seconds:
2021-05-31T12:02:39: testing
2021-05-31T12:02:42: testing
2021-05-31T12:02:45: testing
2021-05-31T12:02:48: testing
2021-05-31T12:02:51: testing
2021-05-31T12:02:55: testing
There are no changes to the files, and it isn't the logs directory or log files being watched, since they're excluded with ignore_watch: ['node_modules', 'logs'].
Why isn't --watch only monitoring for file changes, and instead logging every 3 seconds?

PM2 is detecting that your app is exiting and is restarting it.
You can choose what PM2 should do when it detects that your app exits, for example by passing --no-autorestart (a config-file equivalent is sketched after the example output below).
Example log output:
$ pm2 --watch --no-autorestart --ignore-watch=node_modules start index.js
$ pm2 logs -f
PM2 | App name:index id:0 online
0|index | testing
PM2 | App [index] with id [0] and pid [48907], exited with code [0] via signal [SIGINT]
# Modify index.js to log `testing 2`.
PM2 | Change detected on path index.js for app index - restarting
PM2 | App name:index id:0 online
0|index | testing 2
PM2 | App [index] with id [0] and pid [48910], exited with code [0] via signal [SIGINT]
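If you'd rather keep this in the config file than pass it on the command line, pm2 also has a per-app autorestart flag. A minimal sketch, reusing the index.js app from the question:

// Sketch only: autorestart: false stops PM2 from restarting the process when it
// exits on its own, while watch still restarts it when a watched file changes.
module.exports = {
  apps: [
    {
      script: 'index.js',
      watch: ['.'],
      ignore_watch: ['node_modules', 'logs'],
      autorestart: false,
    },
  ],
};

With that in place, the one-shot console.log runs once per file change instead of every few seconds.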

Related

Connecting Redis with Docker with Bull with Throng with Node

I have a Heroku app that has a single process. I'm trying to change it so that it has several worker processes in a dedicated queue to handle incoming webhooks. To do so, I am using a Node.js backend with the Bull and Throng packages, which use Redis. All of this is deployed on Docker.
I've found various tutorials that cover some of this combination, but not all of it so I'm not sure how to continue. When I spin up Docker, the main server runs, but when the worker process tries to start, it just logs Killed, which isn't that detailed of an error message.
Most of the information I found is here
My worker process file is worker.ts:
import { bullOptions, RedisData } from '../database/redis';
import throng from 'throng';
import { Webhooks } from '@octokit/webhooks';
import config from '../config/main';
import { configureWebhooks } from '../lib/github/webhooks';
import Bull from 'bull';

// Spin up multiple processes to handle jobs to take advantage of more CPU cores
// See: https://devcenter.heroku.com/articles/node-concurrency for more info
const workers = 2;

// The maximum number of jobs each worker should process at once. This will need
// to be tuned for your application. If each job is mostly waiting on network
// responses it can be much higher. If each job is CPU-intensive, it might need
// to be much lower.
const maxJobsPerWorker = 50;

const webhooks = new Webhooks({
  secret: config.githubApp.webhookSecret,
});

configureWebhooks(webhooks);

async function startWorkers() {
  console.log('starting workers...');
  const queue = new Bull<RedisData>('work', bullOptions);
  try {
    await queue.process(maxJobsPerWorker, async (job) => {
      console.log('processing...');
      try {
        await webhooks.verifyAndReceive(job.data);
      } catch (e) {
        console.error(e);
      }
      return job.finished();
    });
  } catch (e) {
    console.error(`Error processing worker`, e);
  }
}

throng({ workers: workers, start: startWorkers });
In my main server, I have the file Redis.ts:
import Bull, { QueueOptions } from 'bull';
import { EmitterWebhookEvent } from '@octokit/webhooks';

export const bullOptions: QueueOptions = {
  redis: {
    port: 6379,
    host: 'cache',
    tls: {
      rejectUnauthorized: false,
    },
    connectTimeout: 30_000,
  },
};

export type RedisData = EmitterWebhookEvent & { signature: string };

let githubWebhooksQueue: Bull.Queue<RedisData> | undefined = undefined;

export async function addToGithubQueue(data: RedisData) {
  try {
    await githubWebhooksQueue?.add(data);
  } catch (e) {
    console.error(e);
  }
}

export function connectToRedis() {
  githubWebhooksQueue = new Bull<RedisData>('work', bullOptions);
}
(Note: I invoke connectToRedis() before the worker process begins)
My Dockerfile is
# We can change the version of node by replacing `lts` with anything found here: https://hub.docker.com/_/node
FROM node:lts
ENV PORT=80
WORKDIR /usr/src/app
# Install dependencies
COPY package*.json ./
COPY yarn.lock ./
RUN yarn
RUN yarn global add npm-run-all
# Bundle app source
COPY . .
# Expose the web port
EXPOSE 80
EXPOSE 9229
EXPOSE 6379
CMD npm-run-all --parallel start start-notification-server start-github-server
and my docker-compose.yml is
version: '3.7'
services:
  redis:
    image: redis
    container_name: cache
    expose:
      - 6379
  api:
    links:
      - redis
    image: instantish/api:latest
    environment:
      REDIS_URL: redis://cache
    command: npm-run-all --parallel dev-debug start-notification-server-dev start-github-server-dev
    depends_on:
      - mongo
    env_file:
      - api/.env
      - api/flags.env
    ports:
      - 2000:80
      - 9229:9229
      - 6379:6379
    volumes:
      # Activate if you want your local changes to update the container
      - ./api:/usr/src/app:cached
Finally, the relevant NPM scripts for my project are
"dev-debug": "nodemon --watch \"**/**\" --ext \"js,ts,json\" --exec \"node --inspect=0.0.0.0:9229 -r ts-node/register ./index.js\"",
"start-github-server-dev": "MONGOOSE_DEBUG=false nodemon --watch \"**/**\" --ext \"js,ts,json\" --exec \"ts-node ./scripts/worker.ts\"",
The docker container logs are:
> instantish@1.0.0 start-github-server-dev /usr/src/app
> MONGOOSE_DEBUG=false nodemon --watch "**/**" --ext "js,ts,json" --exec "ts-node ./scripts/worker.ts"
> instantish@1.0.0 dev-debug /usr/src/app
> nodemon --watch "**/**" --ext "js,ts,json" --exec "node --inspect=0.0.0.0:9229 -r ts-node/register ./index.js"
[nodemon] 1.19.1
[nodemon] to restart at any time, enter `rs`
[nodemon] watching: **/**
[nodemon] starting `ts-node ./scripts/worker.ts`
[nodemon] 1.19.1
[nodemon] to restart at any time, enter `rs`
[nodemon] watching: **/**
[nodemon] starting `node --inspect=0.0.0.0:9229 -r ts-node/register ./index.js`
worker.ts
Killed
[nodemon] app crashed - waiting for file changes before starting...

getting spawn node ENOENT in pm2

I was trying to set up Strapi on AWS, following the instructions listed on their site: https://strapi.io/documentation/3.0.0-beta.x/deployment/amazon-aws.html
Here is my folder structure:
And this is my ecosystem.config.js file:
module.exports = {
  apps: [
    {
      name: 'my-project',
      cwd: '/home/ubuntu/Strapi',
      script: 'npm',
      args: 'start',
      env: {
        NODE_ENV: 'production',
        DATABASE_HOST: 'r123-strapi-database.ce7f.us-east-2.rds.amazonaws.com', // database Endpoint under 'Connectivity & Security' tab
        DATABASE_PORT: '5432',
        DATABASE_NAME: 'r123_Strapi_db', // DB name under 'Configuration' tab
        DATABASE_USERNAME: 'postgres', // default username
        DATABASE_PASSWORD: 'r123_strapi_pasW',
      },
    },
  ],
};
Is the AWS master username equivalent to DATABASE_USERNAME above? Because the master username is r123_strapi_101.
When I run pm2 start ecosystem.config.js I get this error:
PM2 | 2020-09-07T20:04:11: PM2 error: Error: spawn node ENOENT
PM2 | at Process.ChildProcess._handle.onexit (internal/child_process.js:267:19)
PM2 | at onErrorNT (internal/child_process.js:469:16)
PM2 | at processTicksAndRejections (internal/process/task_queues.js:84:21)
Can someone please help me fix this, or tell me what I could be doing wrong?
I managed to fix it by reinstalling Node.js and renaming my app in my ecosystem.config.js file from "strapi-test" to "strapi-test-app". For some reason, the first config never started.
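For reference, the rename only touches the name field of the app entry — a minimal sketch (other fields as in the original config; "strapi-test" and "strapi-test-app" are the names mentioned above):

// Sketch only: the app entry after the rename described above.
module.exports = {
  apps: [
    {
      name: 'strapi-test-app', // previously 'strapi-test'
      script: 'npm',
      args: 'start',
    },
  ],
};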
Console output pm2 list:

docker-compose: nodejs container not communicating with postgres container

I did find a few people with a slightly different setup but with the same issue, so I hope this doesn't feel like a duplicate question.
My setup is pretty simple and straight-forward. I have a container for my node app and a container for my Postgres database. When I run docker-compose up and I see the log both containers are up and running. The problem is my node app is not connecting to the database.
I can connect to the database using Postbird and it works as it should.
If I create a Docker container only for the database and run the node app directly on my machine, everything works fine. So it's not an issue with the DB or the app, but with the setup.
Here's some useful information:
Running a Docker container just for the DB (connects and works perfectly):
> vigna-backend@1.0.0 dev /Users/lucasbittar/Dropbox/Code/vigna/backend
> nodemon src/server.js
[nodemon] 2.0.2
[nodemon] to restart at any time, enter `rs`
[nodemon] watching dir(s): *.*
[nodemon] watching extensions: js,mjs,json
[nodemon] starting `node -r sucrase/register src/server.js`
Initializing database...
Connecting to DB -> vignadb | PORT: 5432
Executing (default): SELECT 1+1 AS result
Connection has been established successfully -> vignadb
Running a container for each using docker-compose:
Creating network "backend_default" with the default driver
Creating backend_db_1 ... done
Creating backend_app_1 ... done
Attaching to backend_db_1, backend_app_1
db_1 |
db_1 | PostgreSQL Database directory appears to contain a database; Skipping initialization
db_1 |
db_1 | 2020-07-24 13:23:32.875 UTC [1] LOG: starting PostgreSQL 12.1 (Debian 12.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
db_1 | 2020-07-24 13:23:32.876 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1 | 2020-07-24 13:23:32.876 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1 | 2020-07-24 13:23:32.881 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1 | 2020-07-24 13:23:32.955 UTC [27] LOG: database system was shut down at 2020-07-23 13:21:09 UTC
db_1 | 2020-07-24 13:23:32.999 UTC [1] LOG: database system is ready to accept connections
app_1 |
app_1 | > vigna-backend@1.0.0 dev /usr/app
app_1 | > npx sequelize db:migrate && npx sequelize db:seed:all && nodemon src/server.js
app_1 |
app_1 |
app_1 | Sequelize CLI [Node: 14.5.0, CLI: 5.5.1, ORM: 5.21.3]
app_1 |
app_1 | Loaded configuration file "src/config/database.js".
app_1 |
app_1 | Sequelize CLI [Node: 14.5.0, CLI: 5.5.1, ORM: 5.21.3]
app_1 |
app_1 | Loaded configuration file "src/config/database.js".
app_1 | [nodemon] 2.0.2
app_1 | [nodemon] to restart at any time, enter `rs`
app_1 | [nodemon] watching dir(s): *.*
app_1 | [nodemon] watching extensions: js,mjs,json
app_1 | [nodemon] starting `node -r sucrase/register src/server.js`
app_1 | Initializing database...
app_1 | Connecting to DB -> vignadb | PORT: 5432
My database class:
class Database {
  constructor() {
    console.log('Initializing database...');
    this.init();
  }

  async init() {
    let retries = 5;
    while (retries) {
      console.log(`Connecting to DB -> ${databaseConfig.database} | PORT: ${databaseConfig.port}`);
      const sequelize = new Sequelize(databaseConfig);
      try {
        await sequelize.authenticate();
        console.log(`Connection has been established successfully -> ${databaseConfig.database}`);
        models
          .map(model => model.init(sequelize))
          .map(model => model.associate && model.associate(sequelize.models));
        break;
      } catch (err) {
        console.log(`Error: ${err.message}`);
        retries -= 1;
        console.log(`Retries left: ${retries}`);
        // Wait 5 seconds before trying again
        await new Promise(res => setTimeout(res, 5000));
      }
    }
  }
}
Dockerfile:
FROM node:alpine
WORKDIR /usr/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3333
CMD ["npm", "start"]
docker-compose.yml:
version: "3"
services:
db:
image: postgres
restart: always
environment:
POSTGRES_PASSWORD: postgres
POSTGRES_USER: postgres
POSTGRES_DB: vignadb
volumes:
- ./pgdata:/var/lib/postgresql/data
ports:
- "5432:5432"
app:
build: .
depends_on:
- db
ports:
- "3333:3333"
volumes:
- .:/usr/app
command: npm run dev
package.json (scripts only):
"scripts": {
"dev-old": "nodemon src/server.js",
"dev": "npx sequelize db:migrate && npx sequelize db:seed:all && nodemon src/server.js",
"build": "sucrase ./src -d ./dist --transforms imports",
"start": "node dist/server.js"
},
.env:
# Database
DB_HOST=db
DB_USER=postgres
DB_PASS=postgres
DB_NAME=vignadb
DB_PORT=5432
database config:
require('dotenv/config');

module.exports = {
  dialect: 'postgres',
  host: process.env.DB_HOST,
  username: process.env.DB_USER,
  password: process.env.DB_PASS,
  database: process.env.DB_NAME,
  port: process.env.DB_PORT,
  define: {
    timestamp: true,
    underscored: true,
    underscoredAll: true,
  },
};
I know I'm messing something up, I just don't know where.
Let me know if I can provide more information.
Thanks!
You should put your two containers in the same network (https://docs.docker.com/compose/networking/) and use the db service name as the host in your Node.js connection string.
Something like: postgres://db:5432/vignadb
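In Sequelize terms that could look roughly like this (a sketch, reusing the DB_* values from the question's .env, with the compose service name db as the hostname):

// Sketch only: build the connection URI from the existing .env values and
// connect with Sequelize; inside the compose network the hostname is "db".
const Sequelize = require('sequelize');

const uri = `postgres://${process.env.DB_USER}:${process.env.DB_PASS}` +
  `@${process.env.DB_HOST || 'db'}:${process.env.DB_PORT || 5432}/${process.env.DB_NAME}`;

const sequelize = new Sequelize(uri, { dialect: 'postgres' });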

pm2 not start apps on server with Process config loading failed []

I can't start PM2 on DigitalOcean (CentOS 7):
[PM2] Process config loading failed []
Full command:
bash-4.3# pm2 start ecosystem.config.js --env production
[PM2][WARN] You are starting -1 processes in fork_mode without load balancing. To enable it remove -x option.
[PM2][WARN] Applications api not running, starting...
[PM2] Process config loading failed []
┌──────────┬────┬──────┬─────┬────────┬─────────┬────────┬─────┬─────┬──────────┐
│ App name │ id │ mode │ pid │ status │ restart │ uptime │ cpu │ mem │ watching │
└──────────┴────┴──────┴─────┴────────┴─────────┴────────┴─────┴─────┴──────────┘
Use `pm2 show <id|
My ecosystem file is:
const nodeEnv = process.env.NODE_ENV || 'development';

if (nodeEnv === 'development') {
  require('dotenv').config();
}

const maxMemory = process.env.WEB_MEMORY || 80;

const base = {
  source_map_support: true,
  node_args: [
    '--optimize_for_size',
    '--max_old_space_size=400',
    '--gc_interval=100',
  ],
  instances: process.env.WEB_CONCURRENCY || -1,
  exec_mode: 'fork',
  max_memory_restart: `${maxMemory}M`,
};

module.exports = {
  apps: [
    Object.assign({}, {
      name: 'api',
      script: 'api/index.js',
      env: {
        NODE_ENV: nodeEnv,
        PORT: process.env.PORT || 3000,
        API_MONGO_URL: process.env.API_MONGO_URL || 'mongodb://localhost',
        JWT_SECRET: process.env.JWT_SECRET || 'AA',
        SALT_WORK_FACTOR: process.env.SALT_WORK_FACTOR || 8,
      },
      env_production: {
        NODE_ENV: "production",
        API_MONGO_URL: `mongodb://${process.env.MONGODB_PORT_27017_TCP_ADDR}:${process.env.MONGODB_PORT_27017_TCP_PORT}/swarmbot`,
      },
    }, base),
  ],
};
And nothing in the logs:
bash-4.3# pm2 logs
[TAILING] Tailing last 15 lines for [all] processes (change the value with --lines option)
/root/.pm2/pm2.log last 15 lines:
PM2 | 2017-06-22 02:45:54: ===============================================================================
PM2 | 2017-06-22 02:45:54: --- New PM2 Daemon started ----------------------------------------------------
PM2 | 2017-06-22 02:45:54: Time : Thu Jun 22 2017 02:45:54 GMT+0000 (UTC)
PM2 | 2017-06-22 02:45:54: PM2 version : 2.5.0
PM2 | 2017-06-22 02:45:54: Node.js version : 7.10.0
PM2 | 2017-06-22 02:45:54: Current arch : x64
PM2 | 2017-06-22 02:45:54: PM2 home : /root/.pm2
PM2 | 2017-06-22 02:45:54: PM2 PID file : /root/.pm2/pm2.pid
PM2 | 2017-06-22 02:45:54: RPC socket file : /root/.pm2/rpc.sock
PM2 | 2017-06-22 02:45:54: BUS socket file : /root/.pm2/pub.sock
PM2 | 2017-06-22 02:45:54: Application log path : /root/.pm2/logs
PM2 | 2017-06-22 02:45:54: Process dump file : /root/.pm2/dump.pm2
PM2 | 2017-06-22 02:45:54: Concurrent actions : 2
PM2 | 2017-06-22 02:45:54: SIGTERM timeout : 1600
PM2 | 2017-06-22 02:45:54: ===============================================================================
[STREAMING] Now streaming realtime logs for [all] processes

How to watch and reload an ExpressJS app with pm2

I'm developing an ExpressJS app.
I use pm2 to load it:
myapp$ pm2 start bin/www
This works fine, except that adding the --watch flag doesn't seem to work; every time I change the JS source I need to explicitly restart it for my changes to take effect:
myapp$ pm2 restart www
What am I doing wrong? I've tried the --watch flag with a non-ExpressJS app and it worked as expected.
See this solution on Stack Overflow: the problem is related to the path pm2 is watching, and whether it is resolved relative to the executed file or to the actual project root.
February 2021: things have changed a bit now. Here's a full, working example from my project.
1. Create the config file, ecosystem.config.js:
module.exports = {
  apps: [
    {
      name: 'api',
      script: './bin/www', // --------------- our node start script here like index.js
      // ------------------------------------ watch options - begin
      watch: ['../'],
      watch_delay: 1000,
      ignore_watch: ['node_modules'],
      watch_options: {
        followSymlinks: false,
      },
      // ------------------------------------ watch options - end
      env: {
        NODE_ENV: 'development',
        PORT: 3001,
        DEBUG: 'api:*',
        MONGODB_URI:
          'mongodb://localhost:27017/collection1?readPreference=primary&ssl=false',
      },
      env_production: {
        NODE_ENV: 'production',
      },
    },
  ],
  deploy: {
    production: {
      // user: "SSH_USERNAME",
      // host: "SSH_HOSTMACHINE",
    },
  },
};
2. Run the server (dev/prod):
pm2 start ecosystem.config.js
pm2 start ecosystem.config.js --env production
3. More information:
https://pm2.keymetrics.io/docs/usage/watch-and-restart/
You need to pass the app location to the --watch option:
myapp$ pm2 start bin/www --watch /your/location/to/app
I never managed to make the default watch settings work on Ubuntu; however, using polling via the advanced watch options worked:
"watch": true,
"ignore_watch" : ["node_modules"],
"watch_options": {
"usePolling": true,
"interval": 1000
}
More info:
https://github.com/buunguyen/PM2/blob/master/ADVANCED_README.md#watch--restart
https://github.com/paulmillr/chokidar#api
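If you declare the app in an ecosystem.config.js instead of a JSON process file, the same polling options carry over unchanged — a sketch, with a placeholder server.js entry point:

// Sketch only: the chokidar polling options from above, expressed in the
// ecosystem file format; 'server.js' is a placeholder entry point.
module.exports = {
  apps: [
    {
      script: 'server.js',
      watch: true,
      ignore_watch: ['node_modules'],
      watch_options: {
        usePolling: true,
        interval: 1000,
      },
    },
  ],
};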

Resources