PM2 ignoring ecosystem file - node.js

I have the following file
ecosystem.js:
module.exports = {
  apps: [
    {
      name: 'my-app',
      cwd: '/test,
      script: './myapp.js',
      instances: 'max', // match the number of CPUs on the machine
      exec_mode: 'cluster', // run multiple child processes
      args: 'start',
      env: {
        NODE_ENV: 'production'
      }
    },
  ],
};
I expect to see a cluster of node processes running. But it seems to start it in fork mode, ignoring my settings entirely.
I start like this:
pm2 start ecosystem.js
Output:
Starting ecosystem.js in fork_mode (1 instance)
┌─────┬─────────────────────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├─────┼─────────────────────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 0 │ ecosystem │ default │ 1.0.0 │ fork │ 156879 │ 0s │ 0 │ online │ 0% │ 14.3mb │ -… │ disabled │
└─────┴─────────────────────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
What could be causing this?

One issue is the missing closing quote on the cwd value ('/test,).
And if that isn't it: I just stumbled on "Problem running express app with pm2 using ecosystem config file", which solved the issue for me.
The file name must end with .config.js — otherwise PM2 treats the file as a script to run, not as a config file.


Is there a way to manually set pm2's app version meta-data

I have a simple pm2 config containing some apps:
{
  name: "App1",
  script: "/home/scripts/websockets/app1-websocket.js",
  instances: 1,
  log_date_format: "YYYY-MM-DD HH:mm",
  out_file: "/dev/null",
  restart_delay: 30000,
  max_restarts: 5,
  exp_backoff_restart_delay: 500,
  namespace: "APPS",
},
{
  name: "App2",
  script: "/home/scripts/websockets/app2-websocket.js",
  instances: 1,
  log_date_format: "YYYY-MM-DD HH:mm",
  out_file: "/dev/null",
  restart_delay: 30000,
  max_restarts: 5,
  exp_backoff_restart_delay: 500,
  namespace: "APPS",
}
These are all local files, and I'm not looking to bundle them into git or similar for such a small project. I'd like to know if there is a way to manually set the version metadata field shown in pm2 ls, as that way we can still set and track what's running.
Current:
│ 0 │ App1 │ APPS │ N/A │ fork │ 1234 │ 7m │ 0 │ online │ 0% │ 33.2mb │ lusr │ disabled │
│ 1 │ App2 │ APPS │ N/A │ fork │ 1235 │ 14h │ 0 │ online │ 0% │ 9.4mb │ lusr │ disabled │
Desired:
│ 0 │ App1 │ APPS │ 3.1.2 │ fork │ 1234 │ 7m │ 0 │ online │ 0% │ 33.2mb │ lusr │ disabled │
│ 1 │ App2 │ APPS │ 2.1.0 │ fork │ 1235 │ 14h │ 0 │ online │ 0% │ 9.4mb │ lusr │ disabled │
After getting distracted with app packaging and all that other stuff, it turns out none of it is actually needed for a basic setup, and I wish the docs were clearer.
All you need to do is put the script or app you are executing into its own directory along with a file called package.json, and update the config with the new path. It doesn't matter what you're running from pm2, as long as it's in its own folder:
{
  name: "App1",
  script: "/home/scripts/websockets/app1/app1-websocket.js",
  instances: 1,
  log_date_format: "YYYY-MM-DD HH:mm",
  out_file: "/dev/null",
  restart_delay: 30000,
  max_restarts: 5,
  exp_backoff_restart_delay: 500,
  namespace: "APPS",
},
Then in /home/scripts/websockets/app1/ we have the app1-websocket.js and package.json files.
Inside the package.json, all you need to specify is:
{
  "name": "App1",
  "version": "1.1.0",
  "description": "App Scripts"
}
Then delete and restart the process with pm2 delete 0 followed by pm2 start config.js --only app1, and check:
pm2 info App1
version │ 1.1.0
Now it shows my desired version number... BUT REMEMBER: since this is read from a static file, you will need to manually update the package.json whenever the version number changes.

Google Cloud Storage NodeJS multiple read requests loading too slow

I'm trying to figure out why some requests to my Images API (usually the last ones) take over a minute to load, while the first ones are basically instantaneous. I've searched all over the internet but haven't found a suitable answer yet. I'm using Google Cloud Storage for the images and Node.js on the server, which streams each image to the browser as buffered writes.
You can see what I mean by visiting the website (18+ content):
https://divinasacompanhantes.com/
As you can see, some images just don't load properly. I'm worried because this website is expected to have thousands more profiles, all over the world.
I am using PM2 to manage the services on the server side (2 GB of available memory). Here's the process table:
┌─────┬─────────────────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├─────┼─────────────────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 7 │ ServiceAfiliado │ default │ 1.0.0 │ fork │ 31312 │ 20m │ 3 │ online │ 0% │ 55.9mb │ root │ disabled │
│ 0 │ ServiceAvaliacao │ default │ 1.0.0 │ fork │ 31249 │ 20m │ 3 │ online │ 0% │ 55.2mb │ root │ disabled │
│ 8 │ ServiceBlog │ default │ 1.0.0 │ fork │ 31330 │ 20m │ 3 │ online │ 0% │ 61.1mb │ root │ disabled │
│ 1 │ ServiceChat │ default │ 1.0.0 │ fork │ 31256 │ 20m │ 3 │ online │ 0% │ 57.3mb │ root │ disabled │
│ 9 │ ServiceConfig │ default │ 1.0.0 │ fork │ 31337 │ 20m │ 3 │ online │ 0% │ 56.2mb │ root │ disabled │
│ 10 │ ServiceImage │ default │ 1.0.0 │ fork │ 31904 │ 0s │ 13 │ online │ 0% │ 19.1mb │ root │ disabled │
│ 2 │ ServiceLead │ default │ 1.0.0 │ fork │ 31269 │ 20m │ 3 │ online │ 0% │ 54.8mb │ root │ disabled │
│ 3 │ ServiceMail │ default │ 1.0.0 │ fork │ 31276 │ 20m │ 3 │ online │ 0% │ 43.3mb │ root │ disabled │
│ 4 │ ServicePagamento │ default │ 1.0.0 │ fork │ 31289 │ 20m │ 3 │ online │ 0% │ 42.5mb │ root │ disabled │
│ 5 │ ServiceParceiro │ default │ 1.0.0 │ fork │ 31296 │ 20m │ 3 │ online │ 0% │ 60.1mb │ root │ disabled │
│ 6 │ ServicePerfil │ default │ 1.0.0 │ fork │ 31309 │ 20m │ 3 │ online │ 0% │ 69.7mb │ root │ disabled │
└─────┴─────────────────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
The route handling this specific request:
router.get('/image/:imageId', async function (req, res) {
  try {
    let imageId = req.params.imageId // req.param() is deprecated in Express 4
    let returnImage = await cloudController.getImageFromBucket('fotos_perfil', imageId)
    res.writeHead(200, { 'Content-Type': 'image/jpeg' })
    returnImage.on('data', (data) => {
      res.write(data)
    })
    returnImage.on('error', (error) => {
      // headers were already sent above, so a 400 can no longer be issued;
      // log the error and close the response instead
      console.error(error)
      res.end()
    })
    returnImage.on('end', () => {
      res.end()
    })
  } catch (err) {
    res.status(500).send('Internal Server Error')
  }
})
And the controller associated:
async function getImageFromBucket(bucket, imageId) {
  // Note: createReadStream() returns a stream synchronously, so this
  // Promise wrapper only forwards it; read errors surface on the stream
  // itself, not here.
  return new Promise((resolve, reject) => {
    try {
      let imageInfo = storage.bucket(bucket).file(imageId).createReadStream()
      resolve(imageInfo)
    } catch (e) {
      reject(e)
    }
  })
}
Can anyone give me some ideas for solving this? I've read the official Google documentation, and its only tip is to use fast-crc32c, with no clues on how to configure it...

PM2 Catching Errored State Signal

I am trying to catch a process before it enters the errored state. The process I am running is erroring and restarting correctly. After 15 restart attempts it goes into the errored state, as shown below for the process with ID 0.
┌─────┬─────────────────────────────────────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├─────┼─────────────────────────────────────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 1 │ a58a1e0d-3a6f-4512-8b83-4dcfd2f9e408 │ default │ 1.0.0 │ fork │ 3139 │ 8s │ 0 │ online │ 0% │ 61.6mb │ warren │ disabled │
│ 0 │ e95ff617-4800-4059-906b-2cde63bcb4b6 │ default │ 1.0.0 │ fork │ 0 │ 0 │ 15 │ errored │ 0% │ 0b │ warren │ disabled │
└─────┴─────────────────────────────────────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
Before it goes into a state of errored what signal (if any) is sent to the process?
For example, when I issue a pm2 stop <PROCESS_NAME>, I can intercept the SIGINT signal and log something to my log file as in the example below.
process.on('SIGINT', function () {
  logger.info("I HAVE BEEN KILLED")
})
I need something like this, but listening for whatever signal (if any) is sent to the process when it switches to the errored state.
You can use pm2's programmatic API to catch errored processes from their logs:
Run npm install pm2 in your project directory.
Go into the file where the process is initiated.
Add the following to the file; it attaches an event listener to your process and fires when an error occurs:
const pm2 = require('pm2')

pm2.launchBus(function (err, bus) {
  bus.on('log:err', function (e) {
    // fires when a task writes errors to stderr
  })
})

pm2 watch argument doesn't watch the files

I am using PM2 with the following configuration:
module.exports = {
  apps: [{
    name: 'sandbox',
    script: 'index.js',
    args: ["PORT=8084", "--color"],
    instances: 1,
    autorestart: true,
    watch: 'index.js',
    out_file: "logs/out.log",
    node_args: "--trace-warnings"
  }]
};
All works well, except that changes to index.js don't trigger a restart.
I have tried many things:
adding the absolute path in script and in the watch
adding cwd with the absolute path
Using variations in the watch like ./index.js or ../ or ./ or true
Removing autorestart
Additional info:
My app uses Express
The status shows that watch is enabled:
│ status │ online
│ name │ sandbox
│ version │ 1.0.0
│ restarts │ 0
│ uptime │ 8m │
│ script path │ /var/www/api/index.js │
│ script args │ PORT=8084 --color │
│ error log path │ /home/ubuntu/.pm2/logs/sandbox-error-10.log │
│ out log path │ /var/www/api/logs/out-10.log │
│ pid path │ /home/ubuntu/.pm2/pids/sandbox-10.pid │
│ interpreter │ node │
│ interpreter args │ --trace-warnings │
│ script id │ 10 │
│ exec cwd │ /var/www/api │
│ exec mode │ cluster_mode │
│ node.js version │ 11.10.0 │
│ node env │ N/A │
│ watch & reload │ ✔ │
│ unstable restarts │ 0 │
│ created at │ 2019-11-30T10:45:14.704Z

Spawning a new process on node on an headless Raspberry

I'm currently trying to spawn a process inside my node server to take a screenshot of the only screen attached to my Raspberry Pi, with this command:
var scrot = childProcess.spawn(path.join(__dirname, "bin", "scrot", "scrot"), [options.output]);
This command works on my local machine, but I get an exit code 2 when I try to run it on my headless Raspberry Pi under Debian. I suspect this is because my node process is spawned at the beginning of the boot routine, before the X server is started.
The pstree command shows me this:
systemd─┬─avahi-daemon───avahi-daemon
├─bluetoothd
├─cron
├─2*[dbus-daemon]
├─dbus-launch
├─dhcpcd
├─hciattach
├─login───startx───xinit─┬─Xorg───{InputThread}
│ └─openbox─┬─openbox-autosta───sh───chromium-browse─┬─ch+
│ │ ├─ch+
│ │ ├─{A+
│ │ ├─{B+
│ │ ├─{C+
│ │ ├─{C+
│ │ ├─{C+
│ │ ├─{C+
│ │ ├─{C+
│ │ ├─{D+
│ │ ├─{N+
│ │ ├─2*+
│ │ ├─3*+
│ │ ├─{T+
│ │ ├─7*+
│ │ ├─{c+
│ │ ├─{e+
│ │ ├─{g+
│ │ ├─{i+
│ │ ├─{r+
│ │ └─{s+
│ └─ssh-agent
├─node───9*[{node}]
Is there a way to attach a child process to the X server context?
Thanks in advance for any help,
C.
