Starting a Node.js app with PM2: Bad Gateway from reverse proxy server - node.js

On server1 I run nginx as a reverse proxy to server2, which runs a Node.js app on port 3000 (full MEAN stack).
When I start the app on server2 with grunt, everything works fine:
-- server2 ---
cd /opt/mean
grunt # runs server.js, the MEAN app
As a next learning step, I am trying to use PM2 on server2 to monitor my test web app. I installed PM2 and ran:
-- server2 ---
cd /opt/mean
pm2 start server.js
and got
[PM2] restartProcessId process id 0
[PM2] Process successfully started
┌──────────┬────┬──────┬──────┬────────┬─────────┬────────┬─────────────┬──────────┐
│ App name │ id │ mode │ pid │ status │ restart │ uptime │ memory │ watching │
├──────────┼────┼──────┼──────┼────────┼─────────┼────────┼─────────────┼──────────┤
│ server │ 0 │ fork │ 2182 │ online │ 14 │ 0s │ 10.867 MB │ disabled │
└──────────┴────┴──────┴──────┴────────┴─────────┴────────┴─────────────┴──────────┘
Use `pm2 show <id|name>` to get more details about an app
yves@gandalf:/opt/mean$ pm2 show server
Describing process with id 0 - name server
┌───────────────────┬─────────────────────────────────────────┐
│ status │ errored │
│ name │ server │
│ id │ 0 │
│ path │ /opt/mean/server.js │
│ args │ │
│ exec cwd │ / │
│ error log path │ /home/yves/.pm2/logs/server-error-0.log │
│ out log path │ /home/yves/.pm2/logs/server-out-0.log │
│ pid path │ /home/yves/.pm2/pids/server-0.pid │
│ mode │ fork_mode │
│ node v8 arguments │ │
│ watch & reload │ ✘ │
│ interpreter │ node │
│ restarts │ 28 │
│ unstable restarts │ 0 │
│ uptime │ 0 │
│ created at │ N/A │
└───────────────────┴─────────────────────────────────────────┘
Process configuration
Revision control metadata
┌──────────────────┬─────────────────────────────────────────────┐
│ revision control │ git │
│ remote url │ https://github.com/meanjs/mean.git │
│ repository root │ /opt/mean │
│ last update │ 2015-09-04T15:02:21.894Z │
│ revision │ 3890aaedf407151fd6b50d72ad55d5d7566a539b │
│ comment │ Merge pull request #876 from codydaig/0.4.1 │
│ branch │ master │
└──────────────────┴─────────────────────────────────────────────┘
When I request my app in the browser, I now get an error from server1:
502 Bad Gateway
nginx/1.4.6 (Ubuntu)
Do I have to add or update anything in the nginx default config? The proxy_pass directive is pointing to http://:3000.
Many thanks for your feedback, and Happy New Year 2016!

That "pm2 show" is showing that your node server errored, so it's probably not running. What do you see if you tail the error log? It should have some details in there about the issue
The proxy error I think is possibly because node isn't running
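A quick way to check on server2 is to read the PM2 error log and see whether anything is listening on port 3000. A sketch, using the log path from the pm2 show output above:
pm2 logs server                                       # stream recent output and errors for the "server" app
tail -n 50 /home/yves/.pm2/logs/server-error-0.log    # or read the error log directly
curl -I http://localhost:3000                         # check whether the app is actually listening
If the app is down, the error log should say why (for a MEAN app, a failed MongoDB connection is a common culprit); once it stays online, the 502 from nginx should disappear.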

Related

Laravel + Inertia SSR: how to change the default port? Error: listen EADDRINUSE: address already in use :::13714

So I have a production site and a staging site. Both are on Laravel and use Server-Side Rendering (SSR) + Node. The server runs Ubuntu 22.04.1 LTS. I use PM2 as the production process manager for Node.js. When I run
pm2 start /var/www/example.com/public/build/server/ssr.mjs --name ssr_example --watch it works:
┌─────┬──────────────────────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├─────┼──────────────────────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 0 │ ssr_example │ default │ N/A │ fork │ 168259 │ 50s │ 0 │ online │ 0% │ 65.9mb │ user │ enabled │
└─────┴──────────────────────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
But when I try to do the same for the staging version of the website with pm2 start /var/www/staging.example.com/public/build/server/ssr.mjs --name ssr_staging_example --watch I get this:
┌─────┬──────────────────────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├─────┼──────────────────────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 0 │ ssr_example │ default │ N/A │ fork │ 168259 │ 59s │ 0 │ online │ 0% │ 65.1mb │ user │ enabled │
│ 1 │ ssr_staging_example │ default │ N/A │ fork │ 0 │ 0 │ 15 │ errored │ 0% │ 0b │ user │ enabled │
└─────┴──────────────────────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
When I look at the log files with pm2 logs, it shows:
1|ssr_stag | at Server.setupListenHandle [as _listen2] (node:net:1380:16)
1|ssr_stag | at listenInCluster (node:net:1428:12)
1|ssr_stag | at Server.listen (node:net:1516:7)
1|ssr_stag | at Object._default [as default] (/var/www/staging.example.com/node_modules/@inertiajs/server/lib/index.js:52:6)
1|ssr_stag | at file:///var/www/staging.example.com/public/build/server/ssr.mjs:617:21
1|ssr_stag | at ModuleJob.run (node:internal/modules/esm/module_job:198:25)
1|ssr_stag | at async Promise.all (index 0)
1|ssr_stag | at async ESMLoader.import (node:internal/modules/esm/loader:409:24)
1|ssr_stag | at async importModuleDynamicallyWrapper (node:internal/vm/module:435:15) {
1|ssr_stag | code: 'EADDRINUSE',
1|ssr_stag | errno: -98,
1|ssr_stag | syscall: 'listen',
1|ssr_stag | address: '::',
1|ssr_stag | port: 13714
1|ssr_stag | }
PM2 | App [ssr_staging_example:1] exited with code [1] via signal [SIGINT]
PM2 | Script /var/www/staging.example.com/public/build/server/ssr.mjs had too many unstable restarts (16). Stopped. "errored"
I know it is because both are using the same port, so I went to config/inertia.php and changed the default port, 13714, to 13715:
<?php
return [
    /*
    |--------------------------------------------------------------------------
    | Server Side Rendering
    |--------------------------------------------------------------------------
    |
    | These options configures if and how Inertia uses Server Side Rendering
    | to pre-render the initial visits made to your application's pages.
    |
    | Do note that enabling these options will NOT automatically make SSR work,
    | as a separate rendering service needs to be available. To learn more,
    | please visit https://inertiajs.com/server-side-rendering
    |
    */
    'ssr' => [
        'enabled' => true,
        'url' => 'http://127.0.0.1:13715/render',
    ],
    ...
But it still doesn't work and I keep getting the same errors. Should I change the port somewhere else, in another (config) file? Or am I doing it wrong? Is there another approach?
Thanks in advance!
I had the same issue today and I found a solution.
Basically, you'll have to change the port both for the SSR server (which is configured when running npm run build) and for the Laravel runtime. You did the latter in the config file. To do the former, pass the port as the second argument to createServer() in the ssr.js file. For example, to use port 8080:
createServer(page =>
    createInertiaApp({
        // Config here
    }),
    8080
)
After the change, you'll have to run npm run build to make the SSR server actually start on 8080. Also make sure the port in config/inertia.php matches.
I wrote a complete explanation here.
Even with the port set in both config/inertia.php and ssr.js, keep in mind that the port for the SSR server is baked into the build when running npm run build. Just setting the port at runtime will not change the actual port the server runs on as long as you do not recreate the production build.
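Putting it together for the staging site, the sequence looks roughly like this (a sketch; the paths and app name are taken from the question, and 13715 is just the example port chosen above):
cd /var/www/staging.example.com
npm run build                      # bake the new SSR port into public/build/server/ssr.mjs
pm2 delete ssr_staging_example     # drop the errored process entry
pm2 start public/build/server/ssr.mjs --name ssr_staging_example --watch
pm2 logs ssr_staging_example       # confirm the EADDRINUSE error is gone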

PM2 - Why am I getting EADDRINUSE address already in use message in index-out.log?

I'm running a Node.js application on Ubuntu 20.04 LTS, managed by PM2. The application is running fine, but when I check the logs I see lots of EADDRINUSE address already in use messages.
I started the server using the command sudo pm2 start index.js
Error: listen EADDRINUSE: address already in use :::8000
at Server.setupListenHandle [as _listen2] (node:net:1432:16)
at listenInCluster (node:net:1480:12)
at Server.listen (node:net:1568:7)
at file:///home/ubuntu/wapi/index.js:105:10
at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
code: 'EADDRINUSE',
errno: -98,
syscall: 'listen',
address: '::',
port: 8000
}
cleanup
The stack trace points to line 105 of the file below.
https://github.com/billbarsch/myzap/blob/myzap2.0/index.js
What I don't understand is why PM2 is trying to start the server almost every second (this message appears in the log every second) when the service is already running.
And sudo pm2 ls lists 2 processes:
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
│ 0 │ index │ default │ 1.0.0 │ fork │ 1673211 │ 103s │ 130 │ online │ 0% │ 111.8mb │ root │ disabled │
│ 1 │ index │ default │ 1.0.0 │ fork │ 1673848 │ 2s │ 450… │ online │ 66.7% │ 120.3mb │ root │ disabled │
Really appreciate some help.
Thanks
It appears that you already have another PM2 process running the same application. That is why you are seeing EADDRINUSE.
The reason you are getting the same log every second is that PM2 restarts the application whenever it errors out.
You can stop all the processes using
pm2 stop all
And then try to re-run your process.
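For example (a sketch; index.js is the entry point from the question, and whether you need sudo depends on how PM2 was started):
sudo pm2 stop all         # stop every PM2-managed process
sudo pm2 delete all       # remove the duplicate entries from the process list
sudo pm2 start index.js   # start a single fresh instance
sudo pm2 ls               # verify that only one "index" process is listed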
Your error tells you that another process is already using the specified port.
That can be any process on your server, not only a Node process running under PM2.
To determine which process is already using the port, you can issue the netstat command:
netstat -ano -p -t | grep 8000
This will print out ALL processes connected to this port, servers as well as clients. To identify the server process, look for LISTEN.
If you are not logged in as a privileged user, use sudo:
sudo netstat -ano -p -t | grep 8000
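On systems where netstat is not available, ss or lsof give the same information (a sketch; 8000 is the port from the question):
sudo ss -ltnp | grep ':8000'   # listening TCP sockets with the owning PID and program
sudo lsof -i :8000             # every process with a socket on port 8000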

Determine what process requested pinentry

Is there a way to figure out what process triggered pinentry prompt?
In other words, imagine the prompt pops up, and you have no idea why (what process, what action triggered it). How would you figure it out?
Another question is more general: what is the signalling mechanism behind such dialogs (D-Bus? the gpg Unix socket? something else?)?
P.S.
Unfortunately, the process tree does not help:
├─systemd,138622 --user
│ ├─(sd-pam),138623
│ ├─dbus-daemon,138647 --session --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
│ ├─dconf-service,139188
│ │ ├─{gdbus},139190
│ │ └─{gmain},139189
│ ├─gpg-agent,139317 --supervised
│ │ ├─pinentry,139327 --display :0
│ │ │ ├─{QDBusConnection},139349
│ │ │ └─{QXcbEventQueue},139330

Childprocess.exec function giving an error when service is inactive

I am using a CentOS 7 server with Node version 10.23.0 and child-process version 6.14.9, and I need to watch the status of given services.
For that, I'm using the child_process.exec function with a command of the form systemctl status servicename. This works properly when the service is active, but gives an error when the service is inactive. On the command line, the command works regardless of the service's status.
I've also tried systemctl is-active servicename, but that errors as well. I don't know the reason. The error message is:
{Error: Command failed: systemctl status crond
at ChildProcess.exithandler (child_process.js:294:12)
at ChildProcess.emit (events.js:198:13)
at maybeClose (internal/child_process.js:982:16)
at Process.ChildProcess._handle.onexit (internal/child_process.js:259:5)
killed: false,
code: 3,
signal: null,
cmd: 'systemctl status crond' }
NOTE: I should use child-process.
Systemctl's exit codes are documented in man systemctl as:
EXIT STATUS
On success, 0 is returned, a non-zero failure code otherwise.
systemctl uses the return codes defined by LSB, as defined in LSB 3.0.0[2].
Table 3. LSB return codes
┌──────┬───────────────────────────┬──────────────────────────┐
│Value │ Description in LSB │ Use in systemd │
├──────┼───────────────────────────┼──────────────────────────┤
│0 │ "program is running or │ unit is active │
│ │ service is OK" │ │
├──────┼───────────────────────────┼──────────────────────────┤
│1 │ "program is dead and │ unit not failed (used by │
│ │ /var/run pid file exists" │ is-failed) │
├──────┼───────────────────────────┼──────────────────────────┤
│2 │ "program is dead and │ unused │
│ │ /var/lock lock file │ │
│ │ exists" │ │
├──────┼───────────────────────────┼──────────────────────────┤
│3 │ "program is not running" │ unit is not active │
├──────┼───────────────────────────┼──────────────────────────┤
│4 │ "program or service │ no such unit │
│ │ status is unknown" │ │
In your output you have code: 3, so it's telling you what you already know: the service is not active. But since the command exits with a non-zero code, exec() treats it as an error.
When you say it runs fine on the command line, it's actually behaving exactly the same way; you just wouldn't notice the exit code was 3 unless you checked the variable $? afterwards.
You can check the error in your callback against systemctl's documented exit codes to determine whether it was an actual error or not, given your use case.
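You can see the same behaviour directly in a shell (a sketch, using the crond unit from the question):
systemctl status crond ; echo $?      # prints 3 when the unit is not active
systemctl is-active crond ; echo $?   # prints "inactive" and also exits non-zero
In the exec() callback the same value is available as error.code (3 in the output you posted), so treating exit code 3 as "service not running" rather than as a failure resolves the problem for your use case.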

PM2: what is 'version' and why is it always '0.1.0'?

I am using PM2 to control some Node applications.
When I check the PM2 apps using pm2 ps, I get output similar to this:
┌─────┬─────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├─────┼─────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 0 │ dlnk │ default │ 0.1.0 │ fork │ 32210 │ 9s │ 0 │ online │ 7.4% │ 61.7mb │ fabio │ enabled │
└─────┴─────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
What is the version column for? Is it possible to change it using some field in the ecosystem.conf.js file?
It shows the version number in your package.json file.
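A quick way to confirm which value PM2 picks up (a sketch; run it from the app's directory, dlnk is the app name from the output above, and PM2 should re-read the value when the process is restarted):
node -p "require('./package.json').version"   # e.g. 0.1.0 -- the value shown in the version column
npm version patch                             # bump package.json from 0.1.0 to 0.1.1
pm2 restart dlnk                              # restart so pm2 ps shows the new version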
