Node.js application not loading properly in Chromium: Connection timed out while reading response header from upstream - node.js

I'm running a Node.js application on a subdomain of a WordPress site. The WordPress site itself runs on Nginx, PHP-FPM and Varnish and works just fine, so I'm using Nginx to proxy connections to the Node app.
With Firefox, the Node app works perfectly. The home page and every other page loads, including the admin end. However, on Chromium, the site does not load properly. If I attempt to view the home page, the main content area loads, but the sidebar does not. And I get the following message in the Web console:
WebSocket connection to 'ws://forum.site.com/socket.io/1/websocket/91qNR-mt333a'
failed: Unexpected response code: 502
In the Nginx log file, I see entries like:
2089 upstream prematurely closed connection while reading response header from upstream,
client: 127.0.0.1, server: forum.site.com, request: "GET /socket.io/1/websocket/91qNR-
mt333a HTTP/1.1", upstream: "http://127.0.0.1:4567/socket.io/1/websocket/91qNRaWZ3-
mt333a", host: "forum.site.com"
And if I try to navigate between posts on the site, I get these messages in the Web console:
Failed to load resource: the server responded with a status of 504 (Gateway Time-out)
http://forum.site.com/socket.io/1/xhr-polling/91qNRaWZ3rYcF-mt333a?t=1396434040701
Then these lines from Nginx error log:
2128 upstream timed out (110: Connection timed out) while reading response header from
upstream, client: 127.0.0.1, server: forum.site.com, request: "GET /socket.io/1/xhr-
polling/uH9QTAWUGmomqFoy333e?t=1396434162051 HTTP/1.1", upstream:
"http://127.0.0.1:4567/socket.io/1/xhr-polling/uH9QTAWUGmomqFoy333e?t=1396434162051",
host: "forum.site.com", referrer: "http://forum.site.com/category/35/dual-boots"
I've looked at similar issues on this site and other sites and tried to implement the suggested solutions, but no luck so far. For example, in the Nginx config for the subdomain, I've added the following:
proxy_buffers 8 32k;
proxy_buffer_size 64k;
proxy_connect_timeout 120;
proxy_read_timeout 300;
And played around with different values for the last two lines, but still no luck.
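For reference, proxying socket.io's WebSocket transport through Nginx also requires the HTTP/1.1 upgrade headers in the proxied location; a minimal sketch (the upstream port 4567 is taken from the log entries above, the rest is the standard Nginx WebSocket boilerplate and may already be in place):

location / {
    proxy_pass http://127.0.0.1:4567;
    # Speak HTTP/1.1 to the backend and pass the client's upgrade headers through,
    # otherwise the WebSocket handshake never reaches the Node app.
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}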
What baffles me is that the site works perfectly in Firefox. It's only in Chromium that I'm having this problem. I've not tried IE, but I'm not really concerned about that browser at this point.
I'm sure there's something that I'm overlooking, but I don't know what.
Btw, the site exhibits the same behavior on Android's default browser.
Could Varnish be the culprit here? I have Varnish (port 80) in front of Nginx (8080). Does Varnish play nice with WebSockets?

I finally figured out that the problem is with Varnish, which by default does not handle WebSocket traffic. It has to be explicitly configured for it.
See this link for the solution.
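For completeness, the approach in the Varnish documentation is to pipe WebSocket upgrade requests straight through to the backend; a rough VCL sketch (the exact syntax differs a little between Varnish versions):

sub vcl_recv {
    # Bypass caching entirely for WebSocket handshakes.
    if (req.http.Upgrade ~ "(?i)websocket") {
        return (pipe);
    }
}

sub vcl_pipe {
    # Keep the Upgrade/Connection headers on the piped backend request.
    if (req.http.upgrade) {
        set bereq.http.upgrade = req.http.upgrade;
        set bereq.http.connection = req.http.connection;
    }
}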

Related

Express & HPM - 404 Not Found & Client Disconnected w/ PVE xterm websocket

I am using Express and HPM to proxy all requests to my website. This is all wrapped together into a little tool I call ws-proxy (ws for web server, not websocket).
One of the things proxied is my PVE/Proxmox Virtual Environment node, which uses secure WebSockets for the xterm.js and NoVNC consoles.
ws-proxy mre
What is weird about this is that after starting ws-proxy, I have about 30 seconds to open a console that will stay connected, but connections after that window are closed with a 404 Not Found error. In the console, I see:
[HPM] Upgrading to WebSocket
[HPM] Upgrading to WebSocket (sometimes up to 4 times)
[HPM] Client disconnected
In my browser, I see the connection returned as 404.
With websocat, I get:
websocat: WebSocketError: Received unexpected status code (404 Not Found)
websocat: error running
After additional debugging, I see that something in the stack is sending a 404 and closing the connection, and just afterwards PVE sends the 101 Switching Protocols. This also sometimes causes a write after end error, sometimes a socket hang up.
I've spent months looking into this and I have nowhere else to look at this point.
http-proxy-middleware#826 (by me)
404 in inspect element:
error log in console after a recent attempt (error will change)
Full list of steps between client and server:
Cloudflare
DigitalOcean w/ ssh-forward (not the problem)
ws-proxy
server
Non-websocket (HTTP) requests work fine. This is with HPM v2 and Node.js v16.
Update 1
After Ryker's answer, I attempted the solution which should have fixed it, but I see something else of concern after setting the logLevel to debug:
0|ws-proxy | pve.internal.0xlogn.dev ::1 - - [02/Nov/2022:23:17:14 +0000] "POST /api2/json/nodes/proxmox/lxc/105/termproxy HTTP/1.1" 200 487 "https://pve.internal.0xlogn.dev/?console=lxc&vmid=105&node=proxmox&resize=scale&xtermjs=1" "Mozilla/5.0 (X11; Linux x86_64; rv:106.0) Gecko/20100101 Firefox/106.0"
0|ws-proxy | Upgrade request for vhost pve.internal.0xlogn.dev, proxy out
0|ws-proxy | [HPM] GET /api2/json/nodes/proxmox/lxc/105/vncwebsocket?port=5900&vncticket=REDACTED -> https://10.0.1.2:8006
0|ws-proxy | [HPM] GET /api2/json/nodes/proxmox/lxc/105/vncwebsocket?port=5900&vncticket=REDACTED -> http://10.0.1.108:80
0|ws-proxy | [HPM] Upgrading to WebSocket
0|ws-proxy | [HPM] Upgrading to WebSocket
0|ws-proxy | [HPM] Client disconnected
0|ws-proxy | [HPM] GET /api2/json/cluster/resources -> https://10.0.1.2:8006
Notice the two GET requests? Something is duplicating the request.
My 'upgrade' event listener:
httpsServer.on('upgrade', (req, socket, head) => {
    if (!req.headers.host) {
        console.log('No vhost specified in upgrade request. Ignoring.');
        socket.end();
        return;
    } else {
        console.log(`Upgrade request for vhost ${req.headers.host}, proxy out`);
        vhostProxyMiddlewareList[req.headers.host].upgrade(req, socket, head);
    }
});
What's even weirder here is that after restarting, there is a short window where the request isn't duplicated. Plus, there is a normal HTTP request anyway.
Update 2
After noticing the duplicate requests, I believe it is possible that the vhost module is doing a weird wildcard match and sending the request to two target nodes. I will update shortly.
Update 3
After further work I believe this is true. However, vhost is not at fault; rather, something is implicitly calling next().
Update 4
This is still an issue, even after multiple attempts at changing this. I have not heard anything back from HPM.
http-proxy-middleware relies on an initial HTTP request in order to listen to the HTTP upgrade event by default. To proxy WebSockets without an initial HTTP request, you can subscribe to the server's HTTP 'upgrade' event manually.
Add this listener to your HTTP server:
const wsProxy = createProxyMiddleware({ target: targetURL, onError, ...PROXY_DEFAULT_OPTIONS, ...addlProxyOptions });
httpsServer.on('upgrade', wsProxy.upgrade); // <-- subscribe to http 'upgrade'
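A minimal, self-contained sketch of that pattern (the backend address mirrors the PVE target in the question's logs; the listen port is just a placeholder):

const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// ws: true lets HPM handle WebSocket upgrades as well as normal HTTP requests.
const wsProxy = createProxyMiddleware({
    target: 'https://10.0.1.2:8006',
    changeOrigin: true,
    secure: false, // PVE commonly runs with a self-signed certificate
    ws: true,
});

app.use('/', wsProxy);

const server = app.listen(3000); // placeholder port

// Subscribe to the 'upgrade' event manually so WebSocket connections are
// proxied even when no ordinary HTTP request has hit the proxy first.
server.on('upgrade', wsProxy.upgrade);

Here server.on('upgrade', wsProxy.upgrade) plays the same role as the per-vhost upgrade call in the question's own listener.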

504 Gateway Timeout error with NuxtJS application running on Nginx

I have a Nuxt.js SSR site hosted on DigitalOcean. I am using nginx as a reverse proxy with the configuration described on the Nuxt.js website. I also use pm2 to run the Nuxt app. Everything works fine until I get 504 and 502 errors. When I check the nginx logs, they show errors like this:
"[error] 2767773#2767773: *1655282 upstream timed out (110: Connection
timed out) while reading response header from upstream, client:
x.x.x.x, server: leadersport.ge, request: "GET /news/devils HTTP/1.1",
upstream: "http://x.x.x.x:8000/xxxx/xxxx", host: "xxx.com"
It seems like there is a problem with the Nuxt.js app. I inspected my Nuxt.js application (I use pm2 monit, and I also log every error inside the app), but it seems to work fine. After a 504 error I check the Nuxt.js logs and there seems to be no problem. Could it be that I'm missing something in the Nuxt.js app? If so, how could I find out what exactly the problem is? Or could it be a problem with the nginx configuration?
I also checked memory and CPU usage and they seem to be okay.
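For context, the reverse-proxy block from the Nuxt docs is roughly of this shape; the server name and port 8000 are taken from the error log above, the 127.0.0.1 upstream is an assumption, and the timeout directives are the usual knobs to raise while debugging 504s (the values are only examples):

server {
    listen 80;
    server_name leadersport.ge;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # A 504 means Nginx gave up waiting on the upstream; these control how long it waits.
        proxy_connect_timeout 60s;
        proxy_read_timeout 120s;
    }
}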

Nginx access log entries don't get created for some connections when they happen

I have a website architecture as follows:
internet --> loadbalancer --> webserver/api
So there is an nginx instance on the load balancer machine set up as a load balancer, and there is also an nginx instance on the webserver/api node functioning as a reverse proxy. The webserver receives requests from browsers (via the load balancer), accesses the api over HTTP and renders the page to the browser. The webserver and api are both nodejs apps.
The nginx load balancer has log entries for the webserver-->api connections, but it doesn't log the initial client browser-->webserver connections until the browser is closed (tested with Chrome and Firefox). It's as though the connection is kept in an unfinished state until the browser is fully shut down, at which point the log entry is written.
nginx load balancer access logs:
110.110.110.101 - - [21/Feb/2019:22:21:23 +0000] loadbalancer01 TCP 200 186833 825 0.047 upstream: 10.0.0.100:443
110.110.110.100 - - [21/Feb/2019:22:21:37 +0000] loadbalancer01 TCP 200 24327 3856 21.991 upstream: 10.0.0.100:443 <-- only created after browser is closed
110.110.110.100 - ip of client connecting with Chrome/Firefox
110.110.110.101 - webserver/api node public interface
10.0.0.100 - webserver/api node private interface
The webserver->api connection is logged first even though it clearly happens second, and the client browser->webserver connection only gets logged when the client browser is completely closed.
Is there some sort of buffering happening? I'm not using the buffer parameter in the stream block logging configuration:
log_format combined '$remote_addr - - [$time_local] $hostname $protocol $status $bytes_sent $bytes_received $session_time upstream: $upstream_addr';
access_log /var/log/nginx/access.log combined;
Why does the connection only get logged when the browser is closed? How can I ensure that the initial connection is logged when the connection happens?
[Update: added log configuration; also note that IPs have been redacted]
I figured this out by comparing the headers of a browser connection to the load balancer with those of a connection initiated from a script. It turns out the browsers send a "Connection: keep-alive" header, which keeps the connection open so multiple requests can be sent over the same connection. Since the stream module writes one access log line per session, and only when the session ends, the entry doesn't appear until the browser finally closes the connection.
A useful command to run against the load balancer's public IP to see the connection headers:
sudo tcpdump -nn -A -s1500 -l -i eth0 port 80
The other thing to note is that if you are using ufw as a firewall, it sets up the underlying iptables rules with rate limits, so it only logs the first 3 connections per minute.
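As a quick comparison, a scripted request that closes the connection immediately (so the stream log line shows up right away, unlike the browser's keep-alive connection) can be sent with curl; the hostname here is a placeholder for the load balancer:

curl -sS -H "Connection: close" -o /dev/null https://loadbalancer.example.com/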

CUPS bad request

I have a little problem with CUPS 2.2.7
This is my /etc/hosts file:
127.0.0.1 example.com
127.0.0.1 localhost
At http://localhost:631/ CUPS works fine,
but at http://example.com:631/ it doesn't work on the same PC.
The error message in View Error Log is this one:
E [21/Feb/2019:11:54:18 +0100] [Client 33] Request from "localhost" using invalid Host: field "example.com:631".
The web page in Firefox shows an "Invalid request" message with an error code 400, but it seems to come from CUPS.
How can I solve this so that example.com:631 points to localhost and CUPS answers it successfully instead of returning Error 400: Access Denied?
By default CUPS serves HTTP requests only when the HTTP Host header equals "localhost". To allow it to service requests with additional Host headers, use the ServerAlias directive as described in the cupsd.conf man page. It's common to do the most unsafe thing and add
ServerAlias *
to /etc/cups/cupsd.conf to allow all possible HTTP Host headers to be serviced.
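A narrower alternative is to allow only the specific hostname from the question rather than every possible Host header:

# /etc/cups/cupsd.conf
ServerAlias example.com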
I know this is old, but I too was experiencing the same issue recently and I resolved it by updating the following line in cupsd.conf from:
Listen 0.0.0.0:631
changed to:
Listen *:631
For those who may care to know, I'm running CUPS within a Docker container, and this change corrects the "Bad Request" response.

Nginx upstream configuration

I am trying to configure nginx with an upstream block.
We have 3 machines running the application server, and nginx proxy-passes all requests to the application servers.
I used the following configuration in nginx:
upstream appcluster {
    server host1.example.com:8080 max_fails=2 fail_timeout=300s;
    server host2.example.com:8080 max_fails=2 fail_timeout=300s;
}
Now the issue is that if a request comes to nginx when one server is down for unknown reasons, it waits a long time for a response, or sometimes gets a connection timeout.
Can someone suggest the right configuration to get a response from the appcluster without latency or connection timeouts whenever a server doesn't respond?
This can help: check the proxy_next_upstream directive.
This directive determines in which cases the request will be passed to the next server.
Your server block should look, for example, like this:
server {
    location / {
        proxy_pass http://appcluster;
        proxy_next_upstream error timeout http_404;
    }
}
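To cut down the wait when a backend is unreachable, the same block can be combined with shorter connect timeouts; a sketch using documented Nginx directives (the values are only examples):

server {
    location / {
        proxy_pass http://appcluster;
        # Retry the next upstream on connection errors, timeouts, or 404s.
        proxy_next_upstream error timeout http_404;
        # Give up on an unreachable backend quickly instead of the 60s default.
        proxy_connect_timeout 5s;
        # Cap the total time spent trying other upstreams (available since nginx 1.7.5).
        proxy_next_upstream_timeout 30s;
    }
}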