nginx nodejs faye performance issue - node.js

I am working on nginx load balancing for multiple Faye chat servers.
I see good performance on normal HTTP requests, but WebSocket connection performance is very low compared to the results without nginx.
Here is my nginx configuration:
upstream backend {
    server 127.0.0.1:4000;
    server 127.0.0.1:4002;
    server 127.0.0.1:4003;
    server 127.0.0.1:4004;
}

server {
    listen 4001;

    root /var/www/html/laughing-robot;
    index index.html index.htm;

    server_name backend;

    location /faye {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }

    location / {
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-NginX-Proxy true;
        proxy_set_header Host $http_host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_redirect off;
        proxy_pass http://backend;
    }
}
I am using websocket-bench to benchmark Faye (WebSocket) connections.
Here is the result without nginx:
user@machine:/etc/nginx/sites-enabled$ websocket-bench -a 2000 -c 500 -t faye http://127.0.0.1:4000/faye
Launch bench with 2000 total connection, 500 concurent connection
0 message(s) send by client
1 worker(s)
WS server : faye
trying : 500 ...
trying : 1000 ...
trying : 1500 ...
trying : 2000 ...
#### steps report ####
┌────────┬─────────────┬────────┬──────────────┐
│ Number │ Connections │ Errors │ Duration(ms) │
├────────┼─────────────┼────────┼──────────────┤
│ 500 │ 500 │ 0 │ 2488 │
├────────┼─────────────┼────────┼──────────────┤
│ 1000 │ 500 │ 0 │ 2830 │
├────────┼─────────────┼────────┼──────────────┤
│ 1500 │ 500 │ 0 │ 2769 │
├────────┼─────────────┼────────┼──────────────┤
│ 2000 │ 500 │ 0 │ 2144 │
└────────┴─────────────┴────────┴──────────────┘
#### total report ####
┌────────┬─────────────┬────────┬──────────────┬──────────────┬──────────────┐
│ Number │ Connections │ Errors │ Message Send │ Message Fail │ Duration(ms) │
├────────┼─────────────┼────────┼──────────────┼──────────────┼──────────────┤
│ 2000 │ 2000 │ 0 │ 0 │ 0 │ 5150 │
└────────┴─────────────┴────────┴──────────────┴──────────────┴──────────────┘
Total duration is under 6000 ms.
Here are the results with the nginx load balancer:
user@machine:/etc/nginx/sites-enabled$ websocket-bench -a 2000 -c 500 -t faye http://127.0.0.1:4001/faye
Launch bench with 2000 total connection, 500 concurent connection
0 message(s) send by client
1 worker(s)
WS server : faye
trying : 500 ...
trying : 1000 ...
trying : 1500 ...
trying : 2000 ...
#### steps report ####
┌────────┬─────────────┬────────┬──────────────┐
│ Number │ Connections │ Errors │ Duration(ms) │
├────────┼─────────────┼────────┼──────────────┤
│ 500 │ 500 │ 0 │ 6452 │
├────────┼─────────────┼────────┼──────────────┤
│ 1000 │ 500 │ 0 │ 9394 │
├────────┼─────────────┼────────┼──────────────┤
│ 1500 │ 500 │ 0 │ 12772 │
├────────┼─────────────┼────────┼──────────────┤
│ 2000 │ 500 │ 0 │ 16163 │
└────────┴─────────────┴────────┴──────────────┘
#### total report ####
┌────────┬─────────────┬────────┬──────────────┬──────────────┬──────────────┐
│ Number │ Connections │ Errors │ Message Send │ Message Fail │ Duration(ms) │
├────────┼─────────────┼────────┼──────────────┼──────────────┼──────────────┤
│ 2000 │ 2000 │ 0 │ 0 │ 0 │ 19173 │
└────────┴─────────────┴────────┴──────────────┴──────────────┴──────────────┘
For the same 2000 total connections at 500 concurrent, performance through the nginx load balancer is much worse: roughly 19 s total versus roughly 5 s when hitting Faye directly.
I have also configured nofile & file-max:
/etc/security/limits.conf
* soft nofile 2048
* hard nofile 65536
/etc/sysctl.conf
fs.file-max = 100000
On Fedora, I am getting a lot of connection-refused errors in /var/log/nginx/error.log, but on Ubuntu 13.04 there are no errors.
I would greatly appreciate it if someone could point me in the right direction.
Thanks!

Do you know https://github.com/observing/balancerbattle?
If you receive no errors on Ubuntu, what is the performance there?
(Hopefully the two systems are comparable in performance.)
Anyhow, take a look at the kernel tuning part; you could also try the nginx.conf they used in their tests and see if that yields the same results. A sketch of that kind of tuning follows below.
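This is a minimal sketch of the usual tuning knobs, not the exact balancerbattle settings; the directives are standard nginx and sysctl ones, but the values are assumptions you would tune for your own box:
# /etc/nginx/nginx.conf (top level)
worker_processes auto;          # one worker per CPU core
worker_rlimit_nofile 65536;     # per-worker fd limit; limits.conf alone may not apply to the nginx workers

events {
    worker_connections 10240;   # per worker; each proxied client needs a client + an upstream connection
    multi_accept on;            # accept as many new connections as possible at once
}

# /etc/sysctl.conf
net.core.somaxconn = 4096                    # larger accept backlog for connection bursts
net.ipv4.ip_local_port_range = 1024 65535    # more ephemeral ports for proxy-to-upstream connections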
Also, if possible, try multiple load tests. The tests you ran are on the same machine that runs nginx; how do they compare from an actual external IP over your domain?
I would advise running the load test on a dev machine, not on the actual server.
Also, what does top say CPU- and memory-wise for both nginx and the node processes?
Is nginx maybe starving one of your processes and/or the test itself?
WebSockets are more stable over SSL; it might be worthwhile to test that as well.
They used thor for their tests; does that give you the same results?
https://github.com/observing/balancerbattle/blob/master/results/messaging/HAproxy/1k.md
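One more thing worth double-checking: $connection_upgrade is not a built-in nginx variable. Your /faye location uses it, and it only resolves if a map block defines it in the http context (this is the pattern from the nginx WebSocket proxying documentation); if it is undefined, the Connection header sent upstream is empty and the upgrade can misbehave:
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}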

Related

Laravel + Inertia SSR how to change default port? Error: listen EADDRINUSE: address already in use :::13714

So I have a production site and a staging site. Both are on Laravel and use Server-Side Rendering (SSR) + Node. The server is Ubuntu 22.04.1 LTS. I use PM2 as the production process manager for node.js. When I run
pm2 start /var/www/example.com/public/build/server/ssr.mjs --name ssr_example --watch it works:
┌─────┬──────────────────────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├─────┼──────────────────────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 0 │ ssr_example │ default │ N/A │ fork │ 168259 │ 50s │ 0 │ online │ 0% │ 65.9mb │ user │ enabled │
└─────┴──────────────────────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
But when I do the same for the staging version of the website, pm2 start /var/www/staging.example.com/public/build/server/ssr.mjs --name ssr_staging_example --watch, I get this:
┌─────┬──────────────────────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├─────┼──────────────────────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 0 │ ssr_example │ default │ N/A │ fork │ 168259 │ 59s │ 0 │ online │ 0% │ 65.1mb │ user │ enabled │
│ 1 │ ssr_staging_example │ default │ N/A │ fork │ 0 │ 0 │ 15 │ errored │ 0% │ 0b │ user │ enabled │
└─────┴──────────────────────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
When I look at the log files with pm2 logs, it shows:
1|ssr_stag | at Server.setupListenHandle [as _listen2] (node:net:1380:16)
1|ssr_stag | at listenInCluster (node:net:1428:12)
1|ssr_stag | at Server.listen (node:net:1516:7)
1|ssr_stag | at Object._default [as default] (/var/www/staging.example.com/node_modules/@inertiajs/server/lib/index.js:52:6)
1|ssr_stag | at file:///var/www/staging.example.com/public/build/server/ssr.mjs:617:21
1|ssr_stag | at ModuleJob.run (node:internal/modules/esm/module_job:198:25)
1|ssr_stag | at async Promise.all (index 0)
1|ssr_stag | at async ESMLoader.import (node:internal/modules/esm/loader:409:24)
1|ssr_stag | at async importModuleDynamicallyWrapper (node:internal/vm/module:435:15) {
1|ssr_stag | code: 'EADDRINUSE',
1|ssr_stag | errno: -98,
1|ssr_stag | syscall: 'listen',
1|ssr_stag | address: '::',
1|ssr_stag | port: 13714
1|ssr_stag | }
PM2 | App [ssr_staging_example:1] exited with code [1] via signal [SIGINT]
PM2 | Script /var/www/staging.example.com/public/build/server/ssr.mjs had too many unstable restarts (16). Stopped. "errored"
I know it is because both are using the same port, so I went to config/inertia.php and changed the default port, 13714, to 13715:
<?php

return [

    /*
    |--------------------------------------------------------------------------
    | Server Side Rendering
    |--------------------------------------------------------------------------
    |
    | These options configures if and how Inertia uses Server Side Rendering
    | to pre-render the initial visits made to your application's pages.
    |
    | Do note that enabling these options will NOT automatically make SSR work,
    | as a separate rendering service needs to be available. To learn more,
    | please visit https://inertiajs.com/server-side-rendering
    |
    */

    'ssr' => [
        'enabled' => true,
        'url' => 'http://127.0.0.1:13715/render',
    ],

    ...
    ...
    ...
But it still doesn't work and I keep getting the same errors. Should I change the port somewhere else, in another (config) file? Or am I doing it wrong? Is there another approach?
Thanks in advance!
I had the same issue today and found a solution.
Basically, you have to change the port both for the SSR server (which is configured when running npm run build) and for the Laravel runtime. You did the latter in the config file. To do the former, pass the port as a second parameter to the createServer() call in the ssr.js file. For example, to use port 8080:
// Imports depend on your setup; with the classic Inertia packages they look like this
// (the vue3 adapter is an assumption -- use the adapter for your framework):
import createServer from '@inertiajs/server'
import { createInertiaApp } from '@inertiajs/inertia-vue3'

createServer(page =>
    createInertiaApp({
        // Config here
    }),
    8080 // second argument: the port the SSR server listens on (default 13714)
)
After the change, you'll have to run npm run build again to make the SSR server actually start on 8080. Also make sure the port in config/inertia.php matches.
I wrote a complete explanation here.
Even with the port set in both config/inertia.php and ssr.js, keep in mind that the port for the SSR server is baked into the build when running npm run build. So just setting the port at runtime will not change the actual port the server runs on as long as you do not recreate the production build.
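Assuming the staging app should move to port 8080, the full sequence would look something like this (paths and the process name are taken from the question; the ssr source location is an assumption based on the default Laravel layout):
cd /var/www/staging.example.com
# 1. set the second createServer() argument in resources/js/ssr.js to 8080
# 2. set 'url' => 'http://127.0.0.1:8080/render' in config/inertia.php
npm run build                      # bakes the new port into public/build/server/ssr.mjs
pm2 restart ssr_staging_example    # restart the staging SSR process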

PM2 - Why am I getting EADDRINUSE address already in use message in index-out.log?

I'm running a NodeJS application on Ubuntu 20.04 LTS, managed by PM2. The application is running fine, but when I check the logs I see lots of EADDRINUSE address already in use messages.
I started the server using the command sudo pm2 start index.js:
Error: listen EADDRINUSE: address already in use :::8000
at Server.setupListenHandle [as _listen2] (node:net:1432:16)
at listenInCluster (node:net:1480:12)
at Server.listen (node:net:1568:7)
at file:///home/ubuntu/wapi/index.js:105:10
at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
code: 'EADDRINUSE',
errno: -98,
syscall: 'listen',
address: '::',
port: 8000
}
cleanup
Stack trace is pointing to line number 105 of the file below.
https://github.com/billbarsch/myzap/blob/myzap2.0/index.js
What I don't understand is why PM2 is trying to start the server almost every second (this message appears in the log every second) when the service is already running.
And sudo pm2 ls lists 2 processes:
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
│ 0 │ index │ default │ 1.0.0 │ fork │ 1673211 │ 103s │ 130 │ online │ 0% │ 111.8mb │ root │ disabled │
│ 1 │ index │ default │ 1.0.0 │ fork │ 1673848 │ 2s │ 450… │ online │ 66.7% │ 120.3mb │ root │ disabled │
Really appreciate some help.
Thanks
It appears that you already have another pm2 process running the same application; that is why you are seeing EADDRINUSE.
And the reason you are getting the same log every second is that pm2 restarts the application whenever it errors out.
You can stop all the processes using
pm2 stop all
And then try to re-run your process.
Your error tells you that another process is already using the specified port.
That can be any process on your server, not only a node process running under PM2.
To determine which process is already using the port, you can issue the netstat command:
netstat -ano -p -t | grep 8000
This will print out ALL processes connected to this port, servers as well as clients. To identify the server process, look for LISTEN.
If not logged as privileged user, use sudo:
sudo netstat -ano -p -t | grep 8000
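If netstat is not available (it is deprecated on many newer distributions), ss reports the same information:
sudo ss -ltnp | grep 8000   # -l listening sockets, -t TCP, -n numeric ports, -p owning process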

High HTTP latency with PM2

I have a NestJS application running in Docker with PM2, and it's extremely slow although it consumes very few resources. The reason is definitely not traffic, as there is nearly none. Looking at PM2 monitoring, I see that the HTTP latency is extremely high.
When running the same application locally, I don't see any of these issues.
This is a snapshot of one of the clusters in PM2.
│ Heap Size 106.32 MiB │
│ Heap Usage 86.07 % │
│ Used Heap Size 91.51 MiB │
│ Active requests 0 │
│ Active handles 16 │
│ Event Loop Latency 0.61 ms │
│ Event Loop Latency p95 1.59 ms │
│ HTTP Mean Latency 2 ms │
│ HTTP P95 Latency 9752 ms │
│ HTTP 0 req/min │
Any ideas what I can change in the configuration, or how I can investigate this issue? I haven't found anything on this topic anywhere.
You can delete the app and start it again: pm2 delete app and then pm2 start index.js. This will clear that latency; the same thing happened with my application.
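For example (app here stands for whatever name or id pm2 ls shows for your process):
pm2 delete app      # remove the process and its accumulated metrics from pm2
pm2 start index.js  # start it fresh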

PM2: what is 'version' and why is it always '0.1.0'?

I am using pm2 to control some node applications.
When I check pm2 apps using pm2 ps I get output similar to this:
┌─────┬─────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├─────┼─────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 0 │ dlnk │ default │ 0.1.0 │ fork │ 32210 │ 9s │ 0 │ online │ 7.4% │ 61.7mb │ fabio │ enabled │
└─────┴─────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘
What is the version column for? Is it possible to change it using some field in the ecosystem.config.js file?
It shows the version number from your package.json file.
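For example, pm2 picks the value up from the application's package.json, so a (hypothetical) minimal file like this produces the 0.1.0 shown above:
{
  "name": "dlnk",
  "version": "0.1.0",
  "main": "index.js"
}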

starting node.js app with pm2 : bad gateway from reverse proxy server

On server1, I run nginx as a reverse proxy to server2, which runs a node.js app on port 3000 (full MEAN stack).
When I start server2 with grunt, everything runs fine:
-- server2 ---
cd /opt/mean
grunt  # runs server.js, the MEAN app
As a next learning step, I am trying to use pm2 on server2 to monitor my test web app. I installed pm2 and ran:
-- server2 ---
cd /opt/mean
pm2 start server.js
and got
[PM2] restartProcessId process id 0
[PM2] Process successfully started
┌──────────┬────┬──────┬──────┬────────┬─────────┬────────┬─────────────┬──────────┐
│ App name │ id │ mode │ pid │ status │ restart │ uptime │ memory │ watching │
├──────────┼────┼──────┼──────┼────────┼─────────┼────────┼─────────────┼──────────┤
│ server │ 0 │ fork │ 2182 │ online │ 14 │ 0s │ 10.867 MB │ disabled │
└──────────┴────┴──────┴──────┴────────┴─────────┴────────┴─────────────┴──────────┘
Use `pm2 show <id|name>` to get more details about an app
yves#gandalf:/opt/mean$ pm2 show server
Describing process with id 0 - name server
┌───────────────────┬─────────────────────────────────────────┐
│ status │ errored │
│ name │ server │
│ id │ 0 │
│ path │ /opt/mean/server.js │
│ args │ │
│ exec cwd │ / │
│ error log path │ /home/yves/.pm2/logs/server-error-0.log │
│ out log path │ /home/yves/.pm2/logs/server-out-0.log │
│ pid path │ /home/yves/.pm2/pids/server-0.pid │
│ mode │ fork_mode │
│ node v8 arguments │ │
│ watch & reload │ ✘ │
│ interpreter │ node │
│ restarts │ 28 │
│ unstable restarts │ 0 │
│ uptime │ 0 │
│ created at │ N/A │
└───────────────────┴─────────────────────────────────────────┘
Process configuration
Revision control metadata
┌──────────────────┬─────────────────────────────────────────────┐
│ revision control │ git │
│ remote url │ https://github.com/meanjs/mean.git │
│ repository root │ /opt/mean │
│ last update │ 2015-09-04T15:02:21.894Z │
│ revision │ 3890aaedf407151fd6b50d72ad55d5d7566a539b │
│ comment │ Merge pull request #876 from codydaig/0.4.1 │
│ branch │ master │
└──────────────────┴─────────────────────────────────────────────┘
When I request my app in the browser, I now get an error from server1:
502 Bad Gateway
nginx/1.4.6 (Ubuntu)
Do I have to add or update anything in the nginx default config, since the proxy_pass directive is pointing to http://:3000?
Many thanks for your feedback, and Happy New Year 2016!
That "pm2 show" is showing that your node server errored, so it's probably not running. What do you see if you tail the error log? It should have some details in there about the issue
The proxy error I think is possibly because node isn't running
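For example, using the error log path from the pm2 show output above:
tail -n 50 /home/yves/.pm2/logs/server-error-0.log   # show the last 50 lines of the pm2 error log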
