PM2 EPERM, Operation not permitted on call initgroups - node.js

I have installed pm2 globally:
sudo npm install -g pm2
pm2 start server.js
pm2 status (gives this output)
┌──────────┬────┬──────┬───────┬────────┬─────────┬────────┬─────┬───────────┬─────────┬──────────┐
│ App name │ id │ mode │ pid   │ status │ restart │ uptime │ cpu │ mem       │ user    │ watching │
├──────────┼────┼──────┼───────┼────────┼─────────┼────────┼─────┼───────────┼─────────┼──────────┤
│ server   │ 0  │ fork │ 10094 │ online │ 0       │ 85s    │ 0%  │ 44.7 MB   │ ubuntu  │ disabled │
└──────────┴────┴──────┴───────┴────────┴─────────┴────────┴─────┴───────────┴─────────┴──────────┘
When I run pm2 logs 0, I get the following error:
1|server | 2018-01-23 14:35 +00:00: Tue, 23 Jan 2018 14:35:03 GMT zap2it:server Server now running on localhost:4040
1|server | 2018-01-23 14:35 +00:00: Tue, 23 Jan 2018 14:35:03 GMT zap2it:server spawning worker #53
1|server | 2018-01-23 14:35 +00:00: EPERM, Operation not permitted on call initgroups
1|server | 2018-01-23 14:35 +00:00: ubuntu is not accessible
What permissions does pm2 need to run, and where can I look for errors?
Can I install and run pm2 as root?

You must run pm2 update, like so:
sudo npm install -g pm2
pm2 update
pm2 start server.js

PM2 logs can be found in <HOME>/.pm2/logs/, and you should be able to install and run it as root, although this is not recommended (as stated in the comments by savior123).
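To inspect the error stream directly, list the log directory and tail the app's error file (a quick sketch; exact file names vary by PM2 version and app name):
ls ~/.pm2/logs/
tail -f ~/.pm2/logs/server-error.log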
I ran into the same issue and error messages a while ago - although not running PM2 with sudo - and solved it by updating PM2 (from 2.9.2 to 2.9.3), as commented by Unitech.

I solved this issue on Windows by going to Services, stopping the PM2 service, and then running pm2 start app.js from my terminal; everything ran fine.

Related

Laravel + Inertia SSR how to change default port? Error: listen EADDRINUSE: address already in use :::13714

So I have a production site and a staging site. Both are built on Laravel and use Server-Side Rendering (SSR) + Node. The server runs Ubuntu 22.04.1 LTS. I use PM2 as the production process manager for Node.js. When I run
pm2 start /var/www/example.com/public/build/server/ssr.mjs --name ssr_example --watch it works:
┌─────┬──────────────────────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id  │ name                     │ namespace   │ version │ mode    │ pid      │ uptime │ ↺    │ status    │ cpu      │ mem      │ user     │ watching │
├─────┼──────────────────────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 0   │ ssr_example              │ default     │ N/A     │ fork    │ 168259   │ 50s    │ 0    │ online    │ 0%       │ 65.9mb   │ user     │ enabled  │
└─────┴──────────────────────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
But when I do the same for the staging version of the website with pm2 start /var/www/staging.example.com/public/build/server/ssr.mjs --name ssr_staging_example --watch I get this:
┌─────┬──────────────────────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id  │ name                     │ namespace   │ version │ mode    │ pid      │ uptime │ ↺    │ status    │ cpu      │ mem      │ user     │ watching │
├─────┼──────────────────────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 0   │ ssr_example              │ default     │ N/A     │ fork    │ 168259   │ 59s    │ 0    │ online    │ 0%       │ 65.1mb   │ user     │ enabled  │
│ 1   │ ssr_staging_example      │ default     │ N/A     │ fork    │ 0        │ 0      │ 15   │ errored   │ 0%       │ 0b       │ user     │ enabled  │
└─────┴──────────────────────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
When I look at the log files with pm2 logs, it shows:
1|ssr_stag | at Server.setupListenHandle [as _listen2] (node:net:1380:16)
1|ssr_stag | at listenInCluster (node:net:1428:12)
1|ssr_stag | at Server.listen (node:net:1516:7)
1|ssr_stag | at Object._default [as default] (/var/www/staging.example.com/node_modules/@inertiajs/server/lib/index.js:52:6)
1|ssr_stag | at file:///var/www/staging.example.com/public/build/server/ssr.mjs:617:21
1|ssr_stag | at ModuleJob.run (node:internal/modules/esm/module_job:198:25)
1|ssr_stag | at async Promise.all (index 0)
1|ssr_stag | at async ESMLoader.import (node:internal/modules/esm/loader:409:24)
1|ssr_stag | at async importModuleDynamicallyWrapper (node:internal/vm/module:435:15) {
1|ssr_stag | code: 'EADDRINUSE',
1|ssr_stag | errno: -98,
1|ssr_stag | syscall: 'listen',
1|ssr_stag | address: '::',
1|ssr_stag | port: 13714
1|ssr_stag | }
PM2 | App [ssr_staging_example:1] exited with code [1] via signal [SIGINT]
PM2 | Script /var/www/staging.example.com/public/build/server/ssr.mjs had too many unstable restarts (16). Stopped. "errored"
I know it is because both are using the same port, so I went to config/inertia.php and changed the default port, 13714, to 13715:
<?php

return [

    /*
    |--------------------------------------------------------------------------
    | Server Side Rendering
    |--------------------------------------------------------------------------
    |
    | These options configures if and how Inertia uses Server Side Rendering
    | to pre-render the initial visits made to your application's pages.
    |
    | Do note that enabling these options will NOT automatically make SSR work,
    | as a separate rendering service needs to be available. To learn more,
    | please visit https://inertiajs.com/server-side-rendering
    |
    */

    'ssr' => [

        'enabled' => true,

        'url' => 'http://127.0.0.1:13715/render',

    ],

    ...
But it still doesn't work and I keep getting the same errors. Should I change the port somewhere else, in another (config) file? Or am I doing it wrong? Is there another approach?
Thanks in advance!
I had the same issue today and I found a solution.
Basically you'll have to change the port both for the SSR server (which is configured when running npm run build) and for the Laravel runtime. You did the latter in the config file. To do the former, pass the port as the second parameter to createServer() in the ssr.js file. For example, to use port 8080:
createServer(page =>
    createInertiaApp({
        // Config here
    }),
    8080
)
After the change, you'll have to run npm run build again to make the SSR server actually start on 8080. Also make sure the port in config/inertia.php matches.
I wrote a complete explanation here.
Even with the port set in both config/inertia.php and ssr.js, keep in mind that the port for the SSR server is baked into the build when running npm run build. Just setting the port at runtime will not change the port the server actually runs on as long as you do not recreate the production build.
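In practice, changing the port therefore means a rebuild followed by a restart of the PM2 process (a minimal sequence, assuming the process name from the question):
npm run build
pm2 restart ssr_staging_example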

PM2 - Why am I getting EADDRINUSE address already in use message in index-out.log?

I'm running a NodeJS application on Ubuntu 20.04 LTS managed by PM2. The application is running fine, but when I check the logs I see lots of EADDRINUSE address already in use messages.
I started the server using the command sudo pm2 start index.js
Error: listen EADDRINUSE: address already in use :::8000
    at Server.setupListenHandle [as _listen2] (node:net:1432:16)
    at listenInCluster (node:net:1480:12)
    at Server.listen (node:net:1568:7)
    at file:///home/ubuntu/wapi/index.js:105:10
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
  code: 'EADDRINUSE',
  errno: -98,
  syscall: 'listen',
  address: '::',
  port: 8000
}
cleanup
The stack trace points to line 105 of the file below:
https://github.com/billbarsch/myzap/blob/myzap2.0/index.js
What I don't understand is why PM2 is trying to start the server almost every second (this message appears in the log every second) when the service is already running.
And sudo pm2 ls lists 2 processes:
┌────┬───────┬───────────┬─────────┬──────┬─────────┬────────┬──────┬────────┬───────┬─────────┬──────┬──────────┐
│ id │ name  │ namespace │ version │ mode │ pid     │ uptime │ ↺    │ status │ cpu   │ mem     │ user │ watching │
├────┼───────┼───────────┼─────────┼──────┼─────────┼────────┼──────┼────────┼───────┼─────────┼──────┼──────────┤
│ 0  │ index │ default   │ 1.0.0   │ fork │ 1673211 │ 103s   │ 130  │ online │ 0%    │ 111.8mb │ root │ disabled │
│ 1  │ index │ default   │ 1.0.0   │ fork │ 1673848 │ 2s     │ 450… │ online │ 66.7% │ 120.3mb │ root │ disabled │
└────┴───────┴───────────┴─────────┴──────┴─────────┴────────┴──────┴────────┴───────┴─────────┴──────┴──────────┘
Really appreciate some help.
Thanks
It appears that you already have another pm2 process running the same application. That is why you are seeing EADDRINUSE.
And the reason you are getting the same log every second is that pm2 restarts the application whenever it errors out.
You can stop all the processes using
pm2 stop all
And then try to re-run your process.
Your error tells you that another process is already using the specified port.
That can be any process on your server, not only a Node process running under PM2.
To determine which process is already using the port, you can issue the netstat command:
netstat -ano -p -t | grep 8000
This will print out ALL processes connected to this port, servers as well as clients. To identify the server process, look for LISTEN.
If you are not logged in as a privileged user, use sudo:
sudo netstat -ano -p -t | grep 8000
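On systems where netstat is not installed, ss gives the same information (a suggested alternative, not from the original answer):
sudo ss -ltnp | grep 8000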

Childprocess.exec function giving an error when service is inactive

I am using a CentOS 7 server, Node version 10.23.0, and child-process version 6.14.9, and I need to watch the status of given services.
For that, I'm using the child_process.exec function to run systemctl status servicename. The command works properly when the service is active, but gives an error when the service is inactive. On the command line it works regardless of the service's status.
I've tried to use systemctl is-active servicename instead, but it fails the same way. I don't know what the reason is. The error message is:
{ Error: Command failed: systemctl status crond
      at ChildProcess.exithandler (child_process.js:294:12)
      at ChildProcess.emit (events.js:198:13)
      at maybeClose (internal/child_process.js:982:16)
      at Process.ChildProcess._handle.onexit (internal/child_process.js:259:5)
  killed: false,
  code: 3,
  signal: null,
  cmd: 'systemctl status crond' }
NOTE: I have to use child-process.
Systemctl's exit codes are documented in man systemctl as:
EXIT STATUS
On success, 0 is returned, a non-zero failure code otherwise.
systemctl uses the return codes defined by LSB, as defined in LSB 3.0.0[2].
Table 3. LSB return codes
┌──────┬───────────────────────────┬──────────────────────────┐
│Value │ Description in LSB        │ Use in systemd           │
├──────┼───────────────────────────┼──────────────────────────┤
│0     │ "program is running or    │ unit is active           │
│      │ service is OK"            │                          │
├──────┼───────────────────────────┼──────────────────────────┤
│1     │ "program is dead and      │ unit not failed (used by │
│      │ /var/run pid file exists" │ is-failed)               │
├──────┼───────────────────────────┼──────────────────────────┤
│2     │ "program is dead and      │ unused                   │
│      │ /var/lock lock file       │                          │
│      │ exists"                   │                          │
├──────┼───────────────────────────┼──────────────────────────┤
│3     │ "program is not running"  │ unit is not active       │
├──────┼───────────────────────────┼──────────────────────────┤
│4     │ "program or service       │ no such unit             │
│      │ status is unknown"        │                          │
└──────┴───────────────────────────┴──────────────────────────┘
In your output you have code: 3, so it's telling you what you already know - that the service is not active - but since it exits with a non-zero code, exec() treats it as an error.
When you say it runs fine on the command line, it's actually behaving exactly the same way; you just wouldn't notice the exit code was 3 unless you checked the $? variable afterwards.
You can match the error in your callback against systemctl's documented exit codes to determine whether it is an actual error for your use case.
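A minimal sketch of that check in Node (the unit name is illustrative; exit code 3 comes from the LSB table above):
const { exec } = require('child_process');

exec('systemctl is-active crond', (error, stdout) => {
  if (!error) {
    console.log('unit is active:', stdout.trim());
  } else if (error.code === 3) {
    // exit code 3 means "unit is not active" - expected, not a real failure
    console.log('unit is not active');
  } else {
    console.error('systemctl actually failed:', error);
  }
});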

Uploading files with nginx, node and express under pm2 getting Error: EACCES: permission denied mkdir '/var/virtual/upload/38'

On my dev machine (Windows 10) this works just fine, running the server with nodemon. On the production machine (Ubuntu 18), using pm2 to fork the process under my user account "greg", I get:
Error: EACCES: permission denied, mkdir '/var/virtual/upload/38'
The folders were all owned by me, with 766 permissions.
greg@node:/var/virtual$ ll
drwxr-xr-x 4 root root 4096 Jul 15 07:46 ./
drwxr-xr-x 15 root root 4096 May 4 11:21 ../
drwxr-xr-x 5 www-data www-data 4096 Jul 13 13:24 portal/
drwxr-xr-x 2 greg greg 4096 Jul 15 07:46 uploads/
greg@node:/var/virtual$ pm2 list
┌─────┬───────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id  │ name      │ namespace   │ version │ mode    │ pid      │ uptime │ ↺    │ status    │ cpu      │ mem      │ user     │ watching │
├─────┼───────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 0   │ server    │ default     │ 1.0.0   │ fork    │ 21125    │ 38m    │ 0    │ online    │ 0.4%     │ 63.6mb   │ greg     │ disabled │
└─────┴───────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
But any attempt to upload fails. Is there some other user this process needs to run under? Is it node or nginx that actually saves the file?
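One way to narrow this down (a debugging sketch, not an answer from the thread; the pid and path come from the output above) is to check which user the forked process actually runs as, and whether that user can create the directory named in the error:
ps -o user= -p 21125
sudo -u greg mkdir -p /var/virtual/upload/38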

Node.js pm2 keeps restarting almost every second

I have deployed an express.js app on an Azure server. I use pm2 for process management.
The issue is that pm2 keeps restarting the app almost every second.
staging@Server:/srv/apps/myapp/current$ pm2 list
┌──────────┬────┬──────┬───────┬────────┬─────────┬────────┬─────────────┬──────────┐
│ App name │ id │ mode │ pid   │ status │ restart │ uptime │ memory      │ watching │
├──────────┼────┼──────┼───────┼────────┼─────────┼────────┼─────────────┼──────────┤
│ app      │ 0  │ fork │ 35428 │ online │ 0       │ 0s     │ 20.465 MB   │ disabled │
└──────────┴────┴──────┴───────┴────────┴─────────┴────────┴─────────────┴──────────┘
Use `pm2 show <id|name>` to get more details about an app
staging@Server:/srv/apps/myapp/current$ pm2 list
┌──────────┬────┬──────┬───────┬────────┬─────────┬────────┬─────────────┬──────────┐
│ App name │ id │ mode │ pid   │ status │ restart │ uptime │ memory      │ watching │
├──────────┼────┼──────┼───────┼────────┼─────────┼────────┼─────────────┼──────────┤
│ app      │ 0  │ fork │ 35492 │ online │ 7       │ 0s     │ 59.832 MB   │ disabled │
└──────────┴────┴──────┴───────┴────────┴─────────┴────────┴─────────────┴──────────┘
Use `pm2 show <id|name>` to get more details about an app
staging@Server:/srv/apps/myapp/current$ pm2 list
┌──────────┬────┬──────┬───────┬────────┬─────────┬────────┬─────────────┬──────────┐
│ App name │ id │ mode │ pid   │ status │ restart │ uptime │ memory      │ watching │
├──────────┼────┼──────┼───────┼────────┼─────────┼────────┼─────────────┼──────────┤
│ app      │ 0  │ fork │ 35557 │ online │ 13      │ 0s     │ 21.816 MB   │ disabled │
└──────────┴────┴──────┴───────┴────────┴─────────┴────────┴─────────────┴──────────┘
~/.pm2/pm2.log
2016-05-10 17:39:34: Starting execution sequence in -fork mode- for app name:start id:0
2016-05-10 17:39:34: App name:start id:0 online
2016-05-10 17:39:35: App [start] with id [0] and pid [3149], exited with code [255] via signal [SIGINT]
2016-05-10 17:39:35: Starting execution sequence in -fork mode- for app name:start id:0
2016-05-10 17:39:35: App name:start id:0 online
2016-05-10 17:39:35: App [start] with id [0] and pid [3158], exited with code [255] via signal [SIGINT]
2016-05-10 17:39:35: Starting execution sequence in -fork mode- for app name:start id:0
2016-05-10 17:39:35: App name:start id:0 online
2016-05-10 17:39:36: App [start] with id [0] and pid [3175], exited with code [255] via signal [SIGINT]
2016-05-10 17:39:36: Starting execution sequence in -fork mode- for app name:start id:0
I am using CoffeeScript in my application and starting the app with pm2 start app.coffee.
package.json
{
  "name": "myapp",
  "version": "0.0.0",
  "private": true,
  "scripts": {
    "start": "gulp start-server"
  },
  "dependencies": {
    "bcrypt-nodejs": "0.0.3",
    "body-parser": "~1.13.2",
    "co": "^4.6.0",
    "coffee-script": "^1.10.0",
    "connect-mongo": "^1.1.0",
    "cookie-parser": "~1.3.5",
    "debug": "~2.2.0",
    "express": "~4.13.1",
    "express-session": "^1.13.0",
    "gulp": "^3.9.1",
    "mongoose": "^4.4.14",
    "morgan": "~1.6.1",
    "newrelic": "^1.26.2",
    "passport": "^0.3.2",
    "passport-local": "^1.0.0",
    "pm2": "^1.1.3",
    "pug": "^2.0.0-alpha6",
    "request": "^2.72.0",
    "serve-favicon": "~2.3.0"
  },
  "devDependencies": {
    "shipit-cli": "^1.4.1",
    "shipit-deploy": "^2.1.3",
    "shipit-npm": "^0.2.0",
    "shipit-pm2-nginx": "^0.1.8"
  }
}
I am new to node.js. Maybe I am not seeing the obvious. Please help me out.
Check if your app modifies a file in the project folder (such as a log file). A change to any of the files triggers a restart if the watch flag is enabled.
To prevent this, use a process file and add the ignore_watch option in it, as sketched below.
Here's a documentation on how to use the process file:
PM2 - Process File
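A minimal process-file sketch of that setup (file name and ignore patterns are illustrative):
module.exports = {
  apps: [{
    name: 'app',
    script: 'app.coffee',
    watch: true,
    // don't restart when logs or dependencies change
    ignore_watch: ['node_modules', 'logs', '*.log']
  }]
};
Save it as ecosystem.config.js and start it with pm2 start ecosystem.config.js.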
pm2 writes application logs to ~/.pm2/logs and pm2-specific logs to pm2.log by default. We need to check both locations to debug the issue.
Another way to debug the application is to start it manually, i.e. something like npm run start or node path/to/your/bin.js.
That should give you the missing piece of information needed to fix the problem and move on.
We also faced a similar problem where pm2 was restarting a process to start a node.js web application almost every second.
We found that MongoDB was not running; the web application would try to connect to the database on startup and fail. This prompted pm2 to restart the process over and over, causing a restart every second.
If this is your issue, try starting MongoDB with mongod or mongod --dbpath [your db path].
This is applicable if you have packaged and started your app with npm.
I simply had to change the "script" entry in ecosystem.config.js (or the JSON file, if you are using one).
app.js would not work; I had to replace it with ./bin/www, and then it worked.
Be sure to look at the logs to see what is going wrong (pm2 describe {process} will show you where they are saved). Also, see if you can run the express app without pm2 by stopping the pm2 process and running your app manually (i.e. npm run start).
If you can run the app manually but it doesn't work with pm2, it might be that the app is not being run from the correct directory (you can set this with pm2's cwd argument, as shown below).
Another common issue is that the correct environment variables are not set, so check your json or ecosystem file. You can also inspect the environment pm2 is running with via pm2 prettylist.
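For example, to pin the working directory when starting the app (a sketch; the path comes from the question's shell prompt):
pm2 start app.coffee --cwd /srv/apps/myapp/current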
I know this is kinda late, but for anyone scrolling past this: I found an actual solution after hours of researching.
I wanted to share this cheatsheet I found: https://devhints.io/pm2
pm2 start app.js --no-autorestart
Just ran into this error too. I ran dmesg, and that told me my process was being killed by the Linux kernel because it was using more memory than I had given the Docker container it was running inside.
Allocating more memory to the container fixed the problem in this case.
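A quick way to check for this (the exact kernel message wording varies by version):
dmesg | grep -i 'out of memory'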
