I am using Express and HPM (http-proxy-middleware) to proxy all requests to my website. This is all wrapped together into a little tool I call ws-proxy (ws for web server, not WebSocket).
One of the things proxied is my PVE (Proxmox Virtual Environment) node, which uses secure WebSockets for the xterm.js and noVNC consoles.
ws-proxy MRE (minimal reproducible example)
What is weird is that after starting ws-proxy, I have about 30 seconds in which I can open a console that will be sustained; connections made after that window are closed with a 404 Not Found error. In the console, I see:
[HPM] Upgrading to WebSocket
[HPM] Upgrading to WebSocket (sometimes up to 4 times)
[HPM] Client disconnected
In my browser, I see the connection returned as 404.
With websocat, I get:
websocat: WebSocketError: Received unexpected status code (404 Not Found)
websocat: error running
After additional debugging, I can see that something in the stack sends a 404 and closes the connection, and just afterwards PVE sends its 101 Switching Protocols. This also sometimes causes a 'write after end' error, sometimes a socket hang up.
I've spent months looking into this and I have nowhere else to look at this point.
http-proxy-middleware#826 (by me)
404 in inspect element (screenshot):
Error log in the console after a recent attempt; the exact error varies (screenshot).
Full list of steps between client and server:
Cloudflare
DigitalOcean w/ ssh-forward (not the problem)
ws-proxy
server
Non-websocket (HTTP) requests work fine. This is with HPM v2 and Node.js v16.
Update 1
After Ryker's answer, I attempted the suggested solution, which should have fixed it, but after setting logLevel to debug I see something else of concern:
0|ws-proxy | pve.internal.0xlogn.dev ::1 - - [02/Nov/2022:23:17:14 +0000] "POST /api2/json/nodes/proxmox/lxc/105/termproxy HTTP/1.1" 200 487 "https://pve.internal.0xlogn.dev/?console=lxc&vmid=105&node=proxmox&resize=scale&xtermjs=1" "Mozilla/5.0 (X11; Linux x86_64; rv:106.0) Gecko/20100101 Firefox/106.0"
0|ws-proxy | Upgrade request for vhost pve.internal.0xlogn.dev, proxy out
0|ws-proxy | [HPM] GET /api2/json/nodes/proxmox/lxc/105/vncwebsocket?port=5900&vncticket=REDACTED -> https://10.0.1.2:8006
0|ws-proxy | [HPM] GET /api2/json/nodes/proxmox/lxc/105/vncwebsocket?port=5900&vncticket=REDACTED -> http://10.0.1.108:80
0|ws-proxy | [HPM] Upgrading to WebSocket
0|ws-proxy | [HPM] Upgrading to WebSocket
0|ws-proxy | [HPM] Client disconnected
0|ws-proxy | [HPM] GET /api2/json/cluster/resources -> https://10.0.1.2:8006
Notice the two GET requests? Something is duplicating the request.
My 'upgrade' event listener:
httpsServer.on('upgrade', (req, socket, head) => {
    if (!req.headers.host) {
        console.log('No vhost specified in upgrade request. Ignoring.');
        socket.end();
        return;
    } else {
        console.log(`Upgrade request for vhost ${req.headers.host}, proxy out`);
        vhostProxyMiddlewareList[req.headers.host].upgrade(req, socket, head);
    }
});
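(Not the bug here, but worth noting: a host with no entry in the table makes that lookup throw. A hardened sketch, assuming vhostProxyMiddlewareList is a plain object keyed by hostname:)

httpsServer.on('upgrade', (req, socket, head) => {
    // Look the vhost up once; an unknown host yields undefined.
    const proxy = vhostProxyMiddlewareList[req.headers.host];
    if (!proxy) {
        console.log(`No proxy registered for vhost ${req.headers.host}. Ignoring.`);
        socket.destroy(); // drop the connection instead of throwing
        return;
    }
    proxy.upgrade(req, socket, head);
});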
What's even weirder here is that after restarting, there is a short window during which the request isn't duplicated. Plus, a normal HTTP request is made anyway.
Update 2
After noticing the duplicate requests, I believe it is possible the vhost module is doing some odd wildcard matching and sending the request to two target nodes. I will update shortly.
Update 3
After further work, I believe the duplication theory is true. However, vhost itself is not at fault; rather, something is implicitly calling next().
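For illustration, this is the kind of implicit fall-through I mean (a contrived sketch, not my actual config; the catch-all and hostname are hypothetical):

const express = require('express');
const vhost = require('vhost');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

const pveProxy = createProxyMiddleware({ target: 'https://10.0.1.2:8006', secure: false, ws: true });
const catchAllProxy = createProxyMiddleware({ target: 'http://10.0.1.108:80' });

app.use(vhost('pve.internal.example.com', pveProxy));

// Hypothetical catch-all: if anything above calls next() instead of
// finishing the response, the SAME request also reaches this handler and is
// proxied a second time -- matching the duplicated [HPM] GET lines above.
app.use(catchAllProxy);

// (server creation and the manual 'upgrade' wiring omitted for brevity)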
Update 4
This is still an issue, even after multiple attempts at a fix. I have not heard anything back from the HPM maintainers.
http-proxy-middleware relies on an initial http request in order to listen to the http upgrade event by default. To proxy WebSockets without the initial http request, you can subscribe to the server's http upgrade event manually.
Add this listener to your http server:
const wsProxy = createProxyMiddleware({ target: targetURL, onError, ...PROXY_DEFAULT_OPTIONS, ...addlProxyOptions });
httpsServer.on('upgrade', wsProxy.upgrade); // <-- subscribe to http 'upgrade'
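For completeness, a minimal end-to-end sketch of that wiring (the target and options here are placeholders, not your config):

const https = require('https');
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');

const app = express();

// One proxy instance per upstream; ws: true enables WebSocket proxying.
const wsProxy = createProxyMiddleware({
    target: 'https://10.0.1.2:8006', // placeholder upstream
    changeOrigin: true,
    secure: false, // placeholder: upstream uses a self-signed certificate
    ws: true,
});

app.use('/', wsProxy);

const httpsServer = https.createServer({ /* key, cert */ }, app);

// Subscribe manually so upgrades are proxied even when no plain HTTP request
// has passed through the middleware first.
httpsServer.on('upgrade', wsProxy.upgrade);

httpsServer.listen(443);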
Related
I have a Nuxt.js SSR site hosted on DigitalOcean. I am using nginx as a reverse proxy with the configuration described on the Nuxt.js website. I also use pm2 to run the Nuxt app. Everything works fine until I get 504 and 502 errors. When I check the nginx logs, they show errors like this:
"[error] 2767773#2767773: *1655282 upstream timed out (110: Connection
timed out) while reading response header from upstream, client:
x.x.x.x, server: leadersport.ge, request: "GET /news/devils HTTP/1.1",
upstream: "http://x.x.x.x:8000/xxxx/xxxx", host: "xxx.com"
It seems like there is a problem with the Nuxt.js app. I inspected my Nuxt.js application (I use pm2 monit, and I also log every error inside the app), but it seems to work fine. After a 504 error, I check the Nuxt.js logs and there seems to be no problem. Could it be that I am missing something in the Nuxt.js app? If so, how could I find out what exactly the problem is? Or could it be a problem with the nginx configuration?
I also checked memory and CPU usage, and they seem to be okay.
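For reference, the 504 in that log is governed by nginx's proxy timeouts for the upstream; a minimal sketch of the relevant knobs (values are illustrative assumptions, not a recommendation):

location / {
    proxy_pass http://127.0.0.1:8000;  # the Nuxt app behind pm2
    proxy_connect_timeout 60s;
    proxy_send_timeout    60s;
    proxy_read_timeout    60s;         # the 504 fires when this elapses
}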
I have the following setup:
Client => Proxy server => Origin Server
I'm using the following Node.js libraries for each of these pieces, respectively:
isomorphic-fetch => http-proxy => http
Here's a gist of the setup in two files, one for each of the servers and one for the client: https://gist.github.com/headquarters/850cbb199ff397c6da56fb8d86113a7e
To run this locally, run node server.js in one shell and node fetch.js in another shell.
With the servers running, if I go to http://localhost:8818 in a browser, I get the sample response {"a":"b"}, so that's working. If I go to http://localhost:9818, I also get that response, so the proxying appears to be working fine. However, if I run DEBUG=* node fetch.js, which includes the HTTP proxy agent, the request fails (see output at https://gist.github.com/headquarters/850cbb199ff397c6da56fb8d86113a7e#file-failure-txt).
Without the agent property, the fetch command works fine on the command line. How do I go about debugging this socket hang up error?
Turns out I didn't read the https-proxy-agent docs closely enough. This line was a bit confusing: "An HTTP(s) proxy http.Agent implementation for HTTPS". The PROXY itself can be either HTTP or HTTPS, but the origin server has to be HTTPS for this flavor of proxy-agent. For an HTTP origin server, I had to use http-proxy-agent. Thus, the socket hang up was probably coming from the HTTPS agent trying to access an HTTP endpoint. It worked when I switched to http-proxy-agent.
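A minimal sketch of the fix (ports taken from the question; the import style varies by package version, with older releases exporting the constructor directly):

const { HttpProxyAgent } = require('http-proxy-agent');   // for http:// origins
const { HttpsProxyAgent } = require('https-proxy-agent'); // for https:// origins
const fetch = require('isomorphic-fetch');

const proxyUrl = 'http://localhost:9818/';  // the proxy itself
const originUrl = 'http://localhost:8818/'; // HTTP origin => http-proxy-agent

// Pick the agent flavor by the ORIGIN's protocol, not the proxy's.
const agent = originUrl.startsWith('https:')
    ? new HttpsProxyAgent(proxyUrl)
    : new HttpProxyAgent(proxyUrl);

fetch(originUrl, { agent })
    .then(res => res.json())
    .then(body => console.log(body)) // expect {"a":"b"} from the gist's origin
    .catch(err => console.error('fetch failed:', err));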
I'm running a Node.js application on a subdomain of a WordPress site. The WP site itself runs on Nginx, php-fpm, and Varnish and works just fine, so I'm using Nginx to proxy connections to the Node app.
With Firefox, the Node app works perfectly. The home page and every other page loads, including the admin end. However, on Chromium, the site does not load properly. If I attempt to view the home page, the main content area loads, but the sidebar does not. And I get the following message in the Web console:
WebSocket connection to 'ws://forum.site.com/socket.io/1/websocket/91qNR-mt333a'
failed: Unexpected response code: 502
In the Nginx log file, I see entries like:
2089 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: forum.site.com, request: "GET /socket.io/1/websocket/91qNR-mt333a HTTP/1.1", upstream: "http://127.0.0.1:4567/socket.io/1/websocket/91qNRaWZ3-mt333a", host: "forum.site.com"
And if I try to navigate between posts on the site, I get these messages in the Web console:
Failed to load resource: the server responded with a status of 504 (Gateway Time-out)
http://forum.site.com/socket.io/1/xhr-polling/91qNRaWZ3rYcF-mt333a?t=1396434040701
Then these lines from Nginx error log:
2128 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 127.0.0.1, server: forum.site.com, request: "GET /socket.io/1/xhr-polling/uH9QTAWUGmomqFoy333e?t=1396434162051 HTTP/1.1", upstream: "http://127.0.0.1:4567/socket.io/1/xhr-polling/uH9QTAWUGmomqFoy333e?t=1396434162051", host: "forum.site.com", referrer: "http://forum.site.com/category/35/dual-boots"
I've looked at similar issues on this site and other sites and tried to implement the suggested solutions, but no luck so far. For example, in the Nginx config for the subdomain, I've added the following:
proxy_buffers 8 32k;
proxy_buffer_size 64k;
proxy_connect_timeout 120;
proxy_read_timeout 300;
And played around with different values for the last two lines, but still no luck.
What baffles me is that the site works perfectly on Firefox. It's only on Chromium that I'm having these problems. I've not tried IE, but I'm not really concerned about that browser at this point.
I'm sure there's something that I'm overlooking, but I don't know what.
Btw, the site exhibits the same behavior on Android's default browser.
Could Varnish be the culprit here? I have Varnish (port 80) in front of Nginx (port 8080). Does Varnish play nice with WebSockets?
Finally figured out that the problem is with Varnish, which by default does not handle WebSocket traffic. It has to be explicitly configured for it.
See this link for the solution.
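In case the link rots: the documented fix is to pipe Upgrade requests straight through Varnish. A sketch based on the standard varnish-cache.org recipe (adjust syntax for your Varnish version):

sub vcl_recv {
    if (req.http.Upgrade ~ "(?i)websocket") {
        return (pipe);
    }
}

sub vcl_pipe {
    if (req.http.upgrade) {
        set bereq.http.upgrade = req.http.upgrade;
        set bereq.http.connection = req.http.connection;
    }
    return (pipe);
}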
I am making a request to a remote server over HTTPS using request, and I'm getting a new error after updating Node.js and request:
nes.get err: [Error: 140735207432576:error:14077438:SSL routines:SSL23_GET_SERVER_HELLO:tlsv1 alert internal error:../deps/openssl/openssl/ssl/s23_clnt.c:741:
I already have the protocol set to SSLv3, so I'm wondering why it appears to be using tlsv1.
https.globalAgent.options.secureProtocol = 'SSLv3_method';
I've also tried adding this to request's options:
secureProtocol: 'SSLv3_method'
This error did not occur with earlier versions of Node.js and request, but now with node v0.10.15 and request 2.26.0, it has surfaced. Any ideas? Thanks!
Update -- I've narrowed this down to something that changed between request 2.14.0 and 2.16.0; 2.14.0 works and 2.16.0 does not.
Make sure you are making a secure request to the correct port.
I've received this error when attempting to make a secure request to port 80 instead of port 443.
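A minimal illustration (hypothetical host): an https:// request aimed at the plain-HTTP port fails during the TLS handshake with errors much like the one in the question.

const request = require('request');

// Right: TLS against the TLS port.
request('https://example.com:443/', function (err, res, body) {
    if (err) return console.error('request err:', err);
    console.log('status:', res.statusCode);
});

// Wrong: a TLS handshake against a non-TLS port fails, e.g.
// request('https://example.com:80/', ...);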
I would fire up Wireshark to verify that the bits on the wire are what you think they should be.
I am trying to configure nginx with an upstream block.
We have three machines running the application server, and nginx proxy-passes all requests to those application servers.
I used following configuration in nginx:
upstream appcluster {
    server host1.example.com:8080 max_fails=2 fail_timeout=300s;
    server host2.example.com:8080 max_fails=2 fail_timeout=300s;
}
Now the issue is: if a request comes to nginx while one server is down for unknown reasons, nginx waits a long time for a response, or sometimes gets a connection timeout.
Can someone suggest the right configuration so that the appcluster responds without added latency or connection timeouts whenever one server does not respond?
Then this can help: check proxy_next_upstream.
This directive determines in which cases the request will be passed to the next server.
Your server block should look like this, for example:
server {
    location / {
        proxy_pass http://appcluster;
        proxy_next_upstream error timeout http_404;
    }
}
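A possible refinement (my addition, with illustrative values): bound how long nginx waits on a dead host so failover to the next upstream is quick:

server {
    location / {
        proxy_pass http://appcluster;
        proxy_next_upstream error timeout http_404;
        proxy_connect_timeout 2s;  # give up on an unreachable host quickly
        proxy_read_timeout 30s;    # cap the wait for a response
    }
}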