Nginx Proxy + NodeJS WebSocket + >17KB messages. No traffic. Who is the culprit?

Either it is impossible to increase the buffer size enough to avoid dropping frames,
OR
WebSocket fragmentation is not being handled correctly.
Summary
My goal:
A very simple thing: a WebSocket tunnel that can transfer at least 2-3 MB of data per tunnel. I need to send a directory structure, so the payload can be quite large.
The problem:
Sending WebSocket messages larger than 17 KB from A to B causes a loss of communication or packet drop: the connection/tunnel stays up, but no new messages can be sent from A to B over the same tunnel, while traffic from B to A keeps working.
I have to restart the tunnel to get functionality back.
Automatically restarting the tunnel whenever the accumulated data reaches the threshold could be an option, but clearly I need to be able to send more than the threshold in a single message.
The "signal path":
GoLang app(Client) ---> :443 NGINX Proxy(Debian) ---> :8050 NodeJS WS Server
The tests:
Sending X messages/chunks of 1000 bytes each: messages are received up to the 17th chunk; the following ones are not received (see below)
The analyses:
Wireshark on the Go app's machine shows all packets flowing
tcpdump on the Debian machine, listening on the public eth interface, shows all packets flowing
tcpdump on the Debian machine, listening on the lo interface (to watch the reverse proxy's upstream traffic), shows all packets flowing
NodeJS/fastify-websocket ws.on('message', (msg) => { console.log(msg) }) logs messages only up to the 17th chunk (see the isolated-server sketch below)
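One extra check that might help isolate the culprit (an editorial suggestion, not one of the tests above): point nginx's upstream at a bare ws server instead of the fastify stack, so the proxy and the application layer can be ruled out separately. A minimal sketch, assuming the ws package (v8+) and the same upstream port 8050:

// test-ws-server.js - standalone receiver, no fastify involved
const { WebSocketServer } = require('ws');

const wss = new WebSocketServer({ port: 8050, maxPayload: 10 * 1024 * 1024 });
let total = 0;

wss.on('connection', (ws, req) => {
    console.log('client connected from', req.socket.remoteAddress);
    ws.on('message', (msg) => {
        total += msg.length;
        console.log('received ' + msg.length + ' bytes (total ' + total + ')');
    });
    ws.on('error', (err) => console.error('ws error:', err));
    ws.on('close', (code, reason) => console.log('closed:', code, reason.toString()));
});

If this bare server keeps receiving past 17 KB, the drop happens in the fastify/fastify-websocket layer; if it stops at the same point, the proxy is the more likely suspect.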
Code & Config:
GoLang app relevant part
websocket.DefaultDialer = &websocket.Dialer{
    Proxy:            http.ProxyFromEnvironment,
    HandshakeTimeout: 45 * time.Second,
    WriteBufferSize:  1000, // also tried 2000, 5000, 10000, 11000
}

c, _, err := websocket.DefaultDialer.Dial(u.String(), nil)
if err != nil {
    fmt.Println("dial error:", err)
    return
}
wsConn = c

// build a 1000-byte sample message
bufferChunk := 1000
bufferSample := ""
for j := 1; j <= bufferChunk; j++ {
    bufferSample = bufferSample + "0"
}

i := 1
for {
    fmt.Println(strconv.Itoa(i*bufferChunk) + " bytes sent")
    // check the write error: a failed write would otherwise go unnoticed
    if err := wsConn.WriteMessage(websocket.TextMessage, []byte(bufferSample)); err != nil {
        fmt.Println("write error:", err)
        return
    }
    i++
    time.Sleep(1000 * time.Millisecond)
}
NGINX conf:
upstream backend {
    server 127.0.0.1:8050;
}

server {
    server_name my.domain.com;

    large_client_header_buffers 8 32k;

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header X-NginX-Proxy true;

        proxy_buffers 8 2m;
        proxy_buffer_size 10m;
        proxy_busy_buffers_size 10m;

        proxy_pass http://backend;
        proxy_redirect off;
        #proxy_buffering off; ### ON/OFF IT'S THE SAME

        # enables WS support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade"; ### "upgrade" it's the same
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/my.domain.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/my.domain.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = my.domain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    server_name my.domain.com;
    listen 80;
    return 404; # managed by Certbot
}
NodeJS code:
//index.js
const config = require("./config.js");
const fastify = require('fastify')();
const WsController = require("./controller");

fastify.register(require('fastify-websocket'), {
    /* these options are the same as the native Node.js ws server options */
    options: {
        maxPayload: 10 * 1024 * 1024,
        maxReceivedFrameSize: 131072,
        maxReceivedMessageSize: 10 * 1024 * 1024,
        autoAcceptConnections: false
    }
});

fastify.ready(err => {
    if (err) throw err;
    console.log("Server started");
    fastify.websocketServer
        .on("connection", WsController);
});

//controller.js
module.exports = (ws, req) => {
    ws.on("message", (msg) => {
        console.log("msg received"); // logged only until ~17 KB have gone through the tunnel
    });
};
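A hedged diagnostic variant of controller.js (not part of the original code): the underlying ws socket also emits error and close events, and logging them usually shows whether the server is rejecting an oversized frame (for example close code 1009, "message too big") instead of silently dropping it.

// controller.js - same handler, plus close/error logging for diagnosis
module.exports = (ws, req) => {
    ws.on("message", (msg) => {
        console.log("msg received: " + msg.length + " bytes");
    });
    ws.on("error", (err) => console.error("ws error:", err));
    ws.on("close", (code, reason) => console.log("ws closed:", code, reason && reason.toString()));
};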

SOLVED
Updating fastify and fastify-websocket made the problem disappear. What a shame!
I arrived at this solution by creating a new cloud instance and installing everything from scratch.
A simple npm update was enough.
Thank you all for your support.

Related

ASP .netcore WebAPI only processing local requests

Hi, I get a 200 on local requests from the same IP, but the error below when I hotspot my mobile phone to change IP and make the request:
HTTPConnectionPool(host={myIPAddress}, port=80): Max retries exceeded
..... Failed to establish a new connection: [WinError 10060] A
connection attempt failed because the connected party did not properly
respond after a period of time, or established connection failed
because connected host has failed to respond'
program.cs file
public static void Main(string[] args)
{
    CreateWebHostBuilder(args).Build().Run();
}

public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .UseUrls("http://0.0.0.0:5000/")
        .UseKestrel(serverOptions =>
        {
            // Set properties and call methods on serverOptions
            serverOptions.Limits.KeepAliveTimeout = TimeSpan.FromMinutes(2);
            serverOptions.Limits.MaxConcurrentConnections = 100;
            serverOptions.Limits.MaxConcurrentUpgradedConnections = 100;
            serverOptions.Limits.MaxRequestBodySize = 10 * 1024;
            serverOptions.Limits.RequestHeadersTimeout = TimeSpan.FromMinutes(1);
        });
}
NGINX - sites-available/default
server {
    listen 80;
    listen [::]:80;
    server_name HFTest;

    location / {
        proxy_pass http://localhost:5000; # edited as per comments
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection keep-alive;
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Port Status - sudo netstat -tulpn | grep LISTEN
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 635/nginx: master p
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 628/sshd
tcp 0 0 0.0.0.0:5000 0.0.0.0:* LISTEN 2048/API/HFTest
Any ideas on what to troubleshoot next?
Thank you.
EDIT:
The request, in Python:
scopes = 'openid'
url='http://{IPAddress}:80/connect/token'
session = OAuth2Session(clientID,clientSecret, scope=scopes)
token = session.fetch_access_token(url,verify=False)
endpoint = "http://{IPAddress}/{endpoint}"
headers = {"Authorization": "Bearer %s" % token['access_token'],"Accept": "text/xml"}
session = requests.Session()
response = session.request("GET",endpoint,headers=headers,verify = False)
print(response.status_code)
print(response.text)
The upstream service IP in your nginx configuration needs to be an actual IP address. You can try the localhost IP as follows:
…
location / {
proxy_pass http://127.0.0.1:5000;
…

How to use Nginx as a secure reverse proxy for a websocket running on a NanoCurrency node

I have a server running Ubuntu 20.04, set up with an SSH Key, and I have installed Nano using Docker -
docker run --restart=unless-stopped -d -p 7075:7075/udp -p 7075:7075 -p 127.0.0.1:7076:7076 -p 127.0.0.1:7078:7078 -v /root/nano/:/root --name nano nanocurrency/nano:latest
On the VPS GUI ports 22, 443, and 7075 are open for inbound traffic.
I have a working WebSocket running on localhost:7078 which returns a stream of data to the terminal.
Nginx version 1.18.0 is installed and when I run systemctl status nginx it claims to be active with no errors, but when I try to connect from my laptop nothing happens.
Here's my Nginx config -
server {
    listen 443;
    # host name to respond to
    server_name 000.000.00.000;

    location / {
        # switch off logging
        access_log off;

        # redirect all HTTP traffic to localhost:7080
        proxy_pass http://localhost:7080;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # WebSocket support (nginx 1.4)
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
And node.js Websocket -
const WS = require('ws');
const ReconnectingWebSocket = require('reconnecting-websocket');

// Create a reconnecting WebSocket.
// In this example, we wait a maximum of 2 seconds before retrying.
const ws = new ReconnectingWebSocket('ws://localhost:7078', [], {
    WebSocket: WS,
    connectionTimeout: 1000,
    maxRetries: 100000,
    maxReconnectionDelay: 2000
});

// As soon as we connect, subscribe to block confirmations
ws.onopen = () => {
    const subscription = {
        "action": "subscribe",
        "topic": "confirmation"
    };
    ws.send(JSON.stringify(subscription));
};

// The node sent us a confirmation
ws.onmessage = msg => {
    console.log(msg.data);
    const data = JSON.parse(msg.data); // msg.data is a JSON string, not an object
    if (data.topic === "confirmation") {
        console.log('Confirmed', data.message.hash);
    }
};
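No answer is included above, but two things stand out when testing from the laptop (hedged observations, not a confirmed fix): the nginx config proxies to localhost:7080 while the working WebSocket is described as listening on localhost:7078, and the client above connects to ws://localhost:7078 directly rather than through nginx. A minimal sketch of a client that goes through the proxy instead, assuming nginx forwards to the port the WebSocket actually listens on; with the config as shown (listen 443 without ssl_certificate) the scheme would be ws://, and wss:// once TLS is configured:

const WS = require('ws');
const ReconnectingWebSocket = require('reconnecting-websocket');

// Connect through the nginx proxy on port 443 instead of directly to the node.
// Replace 000.000.00.000 with the server_name/IP from the nginx config above.
const ws = new ReconnectingWebSocket('ws://000.000.00.000:443/', [], {
    WebSocket: WS,
    connectionTimeout: 1000,
    maxRetries: 100000,
    maxReconnectionDelay: 2000
});

ws.onopen = () => {
    ws.send(JSON.stringify({ action: 'subscribe', topic: 'confirmation' }));
};

ws.onmessage = msg => console.log(msg.data);
ws.onerror = err => console.error('ws error:', err);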

Timeout with socket io

I have an application using socket.io with Node and Express. I'm also using AWS EC2 and Nginx.
I'm getting a timeout with socket io.
The error is:
GET https://vusgroup.com/socket.io/?EIO=3&transport=polling&t=MnUHunS 504 (Gateway Time-out)
Express file:
var port = 8090;
host = 'https://18.237.109.96';
var app = express(host);
var webServer = http.createServer(app);
...
// Start Socket.io so it attaches itself to Express server
var socketServer = socketIo.listen(webServer, {"log level": 1});

// listen on port
webServer.listen(port, function () {
    console.log('listening on http://localhost:' + port);
});
Nginx file:
server {
    listen 80 default_server;
    listen [::]:80 default_server;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        proxy_pass http://18.237.109.96:8090/;
    }
}

server {
    server_name vusgroup.com www.vusgroup.com; # managed by Certbot
    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot

    ...
    ssl stuff
    ...

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        proxy_pass http://18.237.109.96:8090/;
    }

    location /socket.io/ {
        proxy_pass http://18.237.109.96:3000;
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
I've tried changing the proxy_pass for socket.io to http://18.237.109.96:8090; but that gave me a 400 error.
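There is no accepted answer shown here, but one detail worth lining up (a hedged sketch, not a confirmed fix): socket.io attaches itself to the HTTP server created for Express, so it is reachable on that server's port (8090 above), which suggests the /socket.io/ location in nginx would proxy to that same port rather than 3000. A minimal equivalent of the Express/socket.io setup, assuming socket.io v2 (the EIO=3 in the failing URL points to a v2 client):

// app.js - Express and socket.io sharing one HTTP server on port 8090
const express = require('express');
const http = require('http');
const socketIo = require('socket.io');

const port = 8090;
const app = express();
const webServer = http.createServer(app);

// socket.io attaches to the same HTTP server, so /socket.io/ is served on port 8090
const io = socketIo(webServer);

io.on('connection', (socket) => {
    console.log('socket connected:', socket.id);
});

webServer.listen(port, function () {
    console.log('listening on http://localhost:' + port);
});

With this layout, both nginx location blocks (/ and /socket.io/) would point at port 8090.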

Reverse-proxy to nodejs with Nginx returns a 502 when sending cookies

I have a nodejs express application which is behind an nginx reverse proxy. Everything works as it should, except that when I try to set cookies on a response, nginx returns a 502 page.
Here is the relevant route code:
officeAuth.getToken(req.query.code).then((data) => {
    const key = jwt.sign({access_token: data.access_token}, process.env.JWT_PRIVATE_KEY);
    const refreshKey = jwt.sign({refresh_token: data.refresh_token}, process.env.JWT_PRIVATE_KEY);

    res.cookie('token', key, {maxAge: data.expires_in * 24000, httpOnly: true});
    res.cookie('refresh', refreshKey, {maxAge: data.expires_in * 24000, httpOnly: true});

    res.redirect(process.env.APP_HOME_PAGE);
}, (err) => {
    res.status(500).send(err);
});
With this code, the nodejs log does not show any errors, and in fact shows this request returning a 302 as it should. However, in the browser I get nginx's 502 page.
When I remove the res.cookie statements from the code above, the redirect works fine.
Nginx config:
server {
    listen 443 ssl;
    server_name my.server.com;

    ssl_certificate /my/ssl/cert;
    ssl_certificate_key /my/ssl/key;

    location / {
        proxy_pass http://localhost:3001;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $http_host;
    }
}
Turns out my cookies were too large for nginx to handle, so I just increased the header size limit by adding:
proxy_buffers 8 16k;
proxy_buffer_size 16k;
to the location / block.

Port numbers not hiding in nginx reverse proxy (next js server)

I am trying to deploy a Next.js app created with create-next-app. I have a custom Express server like this:
const express = require('express')
const next = require('next')
const https = require('https') // needed for https.createServer below
const fs = require('fs')

const dev = process.env.NODE_ENV !== 'production'
const nextApp = next({ dev })
const handle = nextApp.getRequestHandler()

nextApp.prepare()
    .then(() => {
        const server = express()
        let port = 3000;
        let options = {
            key: fs.readFileSync('some key..', 'utf-8'),
            cert: fs.readFileSync('some cert..', 'utf-8'),
        };

        server.get(
            ...
        )

        let app = https.createServer(options, server)
            .listen((port), function () {
                console.log("Express server listening on port " + port);
            });
    })
    .catch((ex) => {
        console.error(ex.stack)
        process.exit(1)
    })
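One mismatch worth noting (an editorial observation): this custom server terminates TLS itself via https.createServer, while both of the nginx configs that follow proxy_pass to http://localhost:3000, i.e. plain HTTP. A common alternative is to let nginx terminate TLS (as the Certbot blocks below already do) and run the Node server as plain HTTP; a minimal sketch of that variant, assuming the Certbot-managed certificates stay in nginx:

// server.js - plain HTTP behind nginx, which terminates TLS
const express = require('express')
const next = require('next')

const dev = process.env.NODE_ENV !== 'production'
const nextApp = next({ dev })
const handle = nextApp.getRequestHandler()
const port = 3000

nextApp.prepare()
    .then(() => {
        const server = express()

        // every request falls through to Next.js
        server.all('*', (req, res) => handle(req, res))

        // plain HTTP listener; matches proxy_pass http://localhost:3000 in nginx
        server.listen(port, () => {
            console.log('Next.js app listening on port ' + port)
        })
    })
    .catch((ex) => {
        console.error(ex.stack)
        process.exit(1)
    })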
I want this to be served when someone types the URL subdomain.maindomain.com, so I have saved two nginx configuration files:
/etc/nginx/sites-available/default AND /etc/nginx/sites-available/subdomain.maindomain.com
The default file contains this:
server {
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    server_name maindomain.com www.maindomain.com;

    location / {
        # try_files $uri $uri/ =404;
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/maindomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/maindomain.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
and the subdomain.maindomain.com file looks like this
server {
    if ($host = www.subdomain.maindomain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    if ($host = subdomain.maindomain.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;

    root /var/www/subdomain.maindomain.com/somecodefolder/;
    index index.html index.htm index.nginx-debian.html;
    server_name subdomain.maindomain.com www.subdomain.maindomain.com;

    location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        # try_files $uri $uri/ =404;
    }
}
If I type https://subdomain.maindomain.com:3000, everything works fine and I can see my website running. But when I type https://subdomain.maindomain.com (without the port number) nothing shows up. How can I get the site to load without typing the port number? I have tried many combinations but couldn't get it to work. Please help; I've been trying for two days.
First try with other, simpler applications to check whether something is wrong in your own application.
Configuring nginx to serve a domain instead of a port is not complex; just add the HTTPS configuration on top, the main configuration stays the same.
Steps
npm install
node main_domain.js
node subdomain.js
(a minimal sketch of these two test apps is shown at the end of this answer)
Check that both apps respond.
Add the following lines to your /etc/hosts. This lets us use the domains without registering them with a DNS/hosting provider:
127.0.0.1 maindomain.com
127.0.0.1 subdomain.maindomain.com
Create a file in /etc/nginx/conf.d called maindomain.com.conf (or whatever you want, as long as it ends in .conf):
server {
    listen 80;
    server_name maindomain.com;

    location / {
        proxy_pass http://localhost:3000/;
    }
}
Create another file in /etc/nginx/conf.d called subdomain.maindomain.com.conf:
server {
    listen 80;
    server_name subdomain.maindomain.com;

    location / {
        proxy_pass http://localhost:3001/;
    }
}
Restart nginx:
service nginx restart
And now you can use the domains instead of ip:port.
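The main_domain.js and subdomain.js referenced in the steps above are not included in the answer; here is a minimal sketch of what such test apps could look like (hypothetical files, two plain Express apps on ports 3000 and 3001):

// main_domain.js - test app proxied by maindomain.com.conf
const express = require('express');
const app = express();
app.get('/', (req, res) => res.send('main domain test app'));
app.listen(3000, () => console.log('main domain test app on port 3000'));

// subdomain.js is the same file with port 3001 and a different message,
// matching proxy_pass http://localhost:3001/ in subdomain.maindomain.com.conf

If these respond at http://maindomain.com and http://subdomain.maindomain.com (with the /etc/hosts entries above), the nginx side is fine and attention can move back to the Next.js app itself.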
Try changing
proxy_pass http://localhost:3000;
to
proxy_pass http://127.0.0.1:3000;
