I have a Node HTTPS server on Linux which handles UPGRADE requests to allow secure WebSocket connections, as well as other HTTPS requests. It works well 99% of the time.
Periodically and unpredictably, client WebSocket connection attempts to the server time out (the client gives up after 30 seconds).
I listen for upgrade requests on the HTTPS server as follows:
server.on('upgrade', function upgrade(req, socket, head) {
console.log('Upgrade - Beg - '+req.url);
...
In the cases where the client timeout occurs, I never get the 'upgrade' event.
How can I debug this? Is there a lower-level Node https server event that I can listen for that might indicate something (so that I know the connection is actually getting to the server, for example)?
Notes:
When I detect the timeout on the client side (actually, even before the 30 seconds), I attempt another HTTPS connection to the same server (a POST). It works! The problem only seems to happen with websocket connections.
I have code that retries the websocket connection when it experiences the timeout, but usually it takes several retries before the timeout magically disappears.
Any help in how to debug this would be greatly appreciated.
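As a hedged starting point for the debugging question above: an HTTPS server in Node also emits lower-level events inherited from net.Server, tls.Server and http.Server, so you can log those to see how far an incoming connection actually gets (TCP accepted, TLS handshake completed, handshake failed, malformed HTTP). The event names below are standard Node APIs; the logging itself is only an illustrative sketch.
// Diagnostic listeners (sketch): confirm how far an incoming connection gets.
// 'connection' fires for every raw TCP connection (from net.Server).
server.on('connection', function (socket) {
  console.log('TCP connection from ' + socket.remoteAddress + ':' + socket.remotePort);
});
// 'secureConnection' fires once the TLS handshake completes (from tls.Server).
server.on('secureConnection', function (tlsSocket) {
  console.log('TLS handshake completed for ' + tlsSocket.remoteAddress);
});
// 'tlsClientError' fires when the handshake fails before any request is seen.
server.on('tlsClientError', function (err) {
  console.log('TLS client error: ' + err.message);
});
// 'clientError' fires for malformed HTTP after a successful handshake.
server.on('clientError', function (err, socket) {
  console.log('HTTP client error: ' + err.message);
  socket.destroy();
});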
Objective:
Never close the connection between the client and the SOCKS proxy, and reuse it to send multiple HTTPS requests to different targets (example targets: google.com, cloudflare.com) without closing the socket when switching to a different target.
Step 1:
So I have a client which connects to the SOCKS proxy server over a TCP connection. That is the client socket (and the only socket/file descriptor used in this project).
client -> proxy
Step 2:
Then, after the connection is established and verified, the client does a TLS connect to the target server, which can be for example google.com (the DNS lookup is done before this).
Now we have connection:
client -> proxy -> target
Step 3:
Then the client sends an HTTPS request over it and receives the response successfully.
Issue appears:
After that I want to explicitly close the connection between the proxy and the target so I can send a request to another target. This requires closing the TLS connection, and I don't know how to do that without closing the connection between the client and the proxy, which is not acceptable.
Possible solutions?:
1:
Would sending a request with Connection: close\r\n to the current target close only the connection between the proxy and the target, without closing the socket?
2:
If I added Connection: close\r\n to the headers of every request, would that close the socket and thus make it an invalid solution?
Question:
(Node.js) I made a custom https Agent which overrides the Agent's callback(req, opts) method, where the opts argument is the request options built from what the client sent to the target (through the proxy). This callback returns a TLS socket after it's connected; I built the TLS socket connection outside of the callback and passed it to the agent. Is it possible to use this to close the connection between the proxy and the target using req.close(), and would that close the socket? Also, what is the point of req in the Agent's callback, and can it be used in this case?
Any help is appreciated.
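For reference, here is a minimal sketch of the kind of custom agent the question describes, assuming the TLS socket is layered over the already-established client-to-proxy tunnel. The class name ProxiedAgent and the tunnelSocket variable are hypothetical; only https.Agent, tls.connect and the createConnection(options, callback) hook are standard Node APIs, and error handling is kept minimal for brevity.
// Sketch: a custom https.Agent whose createConnection hands back a TLS socket
// built on top of an existing tunnel socket (ProxiedAgent / tunnelSocket are
// hypothetical names, not from the question).
const https = require('https');
const tls = require('tls');

class ProxiedAgent extends https.Agent {
  constructor(tunnelSocket, options) {
    super(options);
    this.tunnelSocket = tunnelSocket; // existing client -> proxy -> target tunnel
  }
  // Called by the agent whenever it needs a socket for a request.
  createConnection(options, callback) {
    const tlsSocket = tls.connect(
      { socket: this.tunnelSocket, servername: options.host },
      function () { callback(null, tlsSocket); }
    );
    tlsSocket.on('error', function (err) { callback(err); });
  }
}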
If you spin up Wireshark and look at what is happening through your proxy, you should quickly see that HTTP/S requests are connection oriented, end-to-end (for HTTPS) and also time-boxed. If you stop and think about it, they are necessarily so, to avoid issues such as the confused deputy problem.
So the first bit to note is that for HTTPS, the proxy will only see the initial CONNECT request, and from there on everything is just a TCP stream of TLS bytes. This means the proxy won't be able to see the headers (that is, unless your proxy is a MITM that intercepts the TLS handshake, and you haven't mentioned this, so I've assumed not).
The next bit is that the agent/browser will open connections in parallel (typically a half-dozen for a browser) and will also use pipelining and keep-alive to send multiple requests down the same connection.
Then there are connection limits imposed by the browser, and servers. These typically cap the number of requests, and the duration that they are held open, before speculatively closing them. If they didn't, any reasonably busy server would quickly exhaust all their TCP sockets.
So all-in, what you are looking to achieve isn't going to work.
That said, if you are looking to improve performance, the node client has a few things you can enable and tweak:
Enable TLS session reuse, which will make connections much more efficient to establish.
Enable keep-alive, which will funnel multiple requests through the same connection.
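A rough sketch of those two tweaks on the Node client side (the option values are illustrative, not recommendations):
// Keep-alive plus TLS session reuse via a shared https.Agent.
const https = require('https');

const agent = new https.Agent({
  keepAlive: true,        // reuse the same TCP/TLS connection for many requests
  maxSockets: 6,          // cap parallel connections per host
  maxCachedSessions: 100  // cache TLS sessions so later handshakes are abbreviated
});

https.get('https://example.com/', { agent: agent }, function (res) {
  res.resume(); // drain the response so the socket can go back to the pool
});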
I need to handle users disconnecting from my sockjs application running in xhr-polling mode. When I connect to localhost, everything works as expected. When I put Apache between Node.js and the browser, I get a ~20 sec delay between the browser being closed and the disconnect event inside Node.js. My Apache proxy config is the following:
<Location />
ProxyPass http://127.0.0.1:8080/
ProxyPassReverse http://127.0.0.1:8080/
</Location>
The rest of the file is default; you can see it here. I tried playing with the ttl=2 and timeout=2 options, but either nothing changes, or I get reconnected every 2 seconds without closing the browser. How can I reduce the additional disconnect timeout introduced by Apache somewhere in its defaults?
It's possible that your Apache server is configured to use HTTP Keep Alive which will keep a persistent connection open. In that case I would try disabling KeepAlive, or lowering the KeepAliveTimeout setting in your Apache configuration to see if this solves the problem.
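For example, in the Apache configuration (the timeout value here is only illustrative):
KeepAlive On
KeepAliveTimeout 2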
If that doesn't work, I would also take a look at netstat, see what the status of each socket is, and start a root cause analysis. This chart shows the TCP state machine and can tell you which state each connection is in. Wireshark can also give you some information on what is going on.
In long polling, the connection happens like below:
<client> ---> apache ---> <node.js>
When the client breaks the connection:
<client> -X-> apache ---> <node.js>
Apache still keeps the connection open. Now there are two workarounds for this.
ProxyTimeout
You can add the following to your Apache config:
ProxyTimeout 10
This will break the connection after 10 seconds, but then it breaks every long-polling connection after 10 seconds, which you don't want.
Ping
The next option is to ping the client:
pingTimeout (Number): how many ms without a pong packet to consider the connection closed (60000)
pingInterval (Number): how many ms before sending a new ping packet (25000).
var io = require('socket.io')(server, { 'transports': ['polling'], pingTimeout: 5000, pingInterval: 2500});
The above will make sure the client is disconnected within 5 seconds of going offline. You can lower it further, but that may impact the usual loading scenarios.
But reading through all the posts, threads and sites, I don't think you can replicate the behavior you get when connecting to socket.io directly, because in that case the connection break can be detected easily by socket.io.
The 20 sec delay is not in the Apache proxy. I had the same issue: the delay did not happen with the local URL, but it did happen with the global URL.
The issue was solved in Node.js itself. You need to send data from the server to the client once to make sure it's initialized. This problem is not mentioned in the documentation or the issues page of the WebSocket plugin.
Send a dummy message after the request is accepted on the server, like below.
let connection = request.accept(null, request.origin);
connection.on('message', function (evt) {
console.log(evt);
});
connection.on('close', function (evt) {
console.log(evt);
});
connection.send("Success"); //dummy message to the client from server
For example socket.io has pingInterval and pingTimeout settings, nes for hapi has similar heartbeat interval settings. This is ostensibly to prevent any intermediates such as over-zealous proxies from closing what seems to be an inactive connection.
But ping/pong frames are part of the websocket protocol and seem to serve the same purpose. So why do websocket library implementors add another layer of ping/pong at the application level?
If I was pushed to guess, it would be in case the WebSocket server is dealing with a client that doesn't respond to, or support, the WebSocket protocol-level ping-pongs.
I did some reading up and made some tests and I think it comes down to this:
Websocket pings are initiated by the server only
The browser WebSocket API isn't able to send ping frames, and incoming pings from the server are not exposed in any way
These pings are all about keepalive, not presence
Therefore if the server goes away without a proper TCP teardown (network lost/crash etc), the client doesn't know if the connection is still open
Adding a heartbeat at the application level is a way for the client to establish the server's presence, or lack thereof. These heartbeats must be sent as normal data messages, because that's all the browser WebSocket API is capable of.
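A minimal sketch of such an application-level heartbeat on the browser side, assuming the server simply echoes a "pong" data message for every "ping" it receives (the message strings, URL and intervals are arbitrary choices for illustration):
// Client-side heartbeat over ordinary data messages (sketch).
var ws = new WebSocket('wss://example.com/socket');
var pongTimer;
var pingTimer;

ws.addEventListener('open', function () {
  pingTimer = setInterval(function () {
    ws.send('ping');                 // normal data frame, not a protocol-level ping
    clearTimeout(pongTimer);
    pongTimer = setTimeout(function () {
      ws.close();                    // no pong within 5 s: assume the server is gone
    }, 5000);
  }, 25000);
});

ws.addEventListener('message', function (event) {
  if (event.data === 'pong') {
    clearTimeout(pongTimer);         // server is still there
  }
});

ws.addEventListener('close', function () {
  clearInterval(pingTimer);
  clearTimeout(pongTimer);
});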
I have a node server and a web page connected via socket.io. I noticed in the browser console that it is outputting
XHR finished loading: GET "http://my_url/socket.io/?EIO=3&transport=polling&t=1418944327412-412&sid=vqLTUtW3QhNLwQG8AAAA".
and
XHR finished loading: POST "http://my_url/socket.io/?EIO=3&transport=polling&t=1418944385398-415&sid=vqLTUtW3QhNLwQG8AAAA".
every few seconds. Should it be doing this, or am I missing a setting? I'm really only looking to send data back and forth explicitly via the socket. Perhaps I'm missing something in the setup.
Client side is basically
var socket = io("http://my_url");
with the usual event listeners. Server side is
var io = require('socket.io')(server);
I tried placing this on the server side
io.set('transports', ['websocket']);
but that seemed to kill it.
The socket.io implementation (when using webSockets) sends regular (every few seconds) heartbeat and response packets to constantly verify that the connection is alive and well. This is normal.
These packets are not actual http requests (they are websocket data packets) so there should not be full-on http packets going on unless socket.io is not actually using the webSocket protocol, but is instead using HTTP long polling. socket.io will use the webSocket protocol as long as it is supported in the client (which it should be in all modern browsers nowadays).
You may have to be careful about how you interpret requests in a debugger. A socket.io connection starts its life as an HTTP request with some custom headers, and all debuggers will show this initial HTTP request. If webSocket is supported at both ends, then the server returns a response which "upgrades" the connection to the webSocket protocol. The same TCP socket that started out carrying an HTTP request then becomes a webSocket connection, and subsequent webSocket messages flow over that TCP socket. It is up to the debugger how it displays that traffic. In the Chrome debugger, you have to open the original HTTP connection and then ask to see webSocket traffic, and only then can you actually see webSocket packets. But I could imagine that other debuggers that aren't as webSocket savvy might show subsequent packets as related to that original HTTP connection (I haven't looked at how debuggers other than Chrome show webSocket traffic).
The only other reason I can think of that a client would be repeatedly sending HTTP connection requests is if the connection keeps dropping for some reason so the client keeps reconnecting every time the connection drops. socket.io has settings that can control how often/vigorously the client tries to reconnect when the connection is lost, though if you have connection issues, then you really need to figure out why there are connection issues rather than change the reconnect settings.
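If you want to rule out long polling, one hedged option is to restrict the transports on the client rather than only on the server; the transports option below follows the socket.io documentation, while the URL is just the placeholder from the question:
// Client side (browser): skip long polling and connect over webSocket directly.
var socket = io("http://my_url", { transports: ['websocket'] });

// Server side (Node): the default already allows both transports; listing them
// explicitly here is only for clarity.
var io = require('socket.io')(server, { transports: ['websocket', 'polling'] });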
I'm new to Node.js, so this question may be quite naive.
I want to use Node.js as a proxy between the JavaScript client and a Windows program which has an API working through a defined port.
So the browser sends an HTTP request to Node.js.
Node.js opens a connection to the Windows program, sends the request, gets a response and returns the response to the JavaScript that called Node.js (AJAX).
This is actually already implemented and works.
The problem is that the Windows program wants a persistent connection.
So once the connection is opened, it should stay alive.
But my Node.js script opens a connection, and the next call to Node.js tries to open a connection again, which leads to an error.
So the question is: what is the right way and means to reuse the TCP connection in Node.js,
so that the next call won't open a new connection but will continue with the already opened one?
It is possible, but it requires some coding on your side (a rough sketch follows after the two lists below).
If it is not sticky (the same HTTP client does not need to use the same TCP connection):
Open a connection pool of TCP connections to the Windows program
Listen for HTTP requests
If an HTTP request arrives, find an unused TCP connection (if none is available, make a new one or wait)
Query the Windows program and return the results to whoever made the HTTP request
Mark the TCP connection as free
If it is sticky (always use the same TCP connection for the same HTTP client):
If an HTTP client connects and doesn't have one yet, give it a session
If no TCP connection is available for that session, create one
Make the request and return the result
Store the TCP connection somewhere you can reach it when the next request comes in, identified by the HTTP client's session (maybe add a timeout for cleaning up)
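A minimal sketch of the non-sticky variant, assuming the Windows program listens on port 9000 and answers each newline-terminated command with a single chunk of data; the port, the framing and all names here are hypothetical:
// Tiny TCP connection pool in front of the Windows program (sketch only:
// no error handling, and partial reads are ignored for brevity).
const net = require('net');
const http = require('http');

const POOL_SIZE = 4;
const freeSockets = [];   // idle connections to the Windows program
const waiting = [];       // HTTP handlers waiting for a free connection
let created = 0;

function acquire(cb) {
  const socket = freeSockets.pop();
  if (socket) return cb(socket);
  if (created < POOL_SIZE) {
    created++;
    const fresh = net.connect(9000, '127.0.0.1', function () { cb(fresh); });
    return;
  }
  waiting.push(cb);        // all connections busy: queue until one is released
}

function release(socket) {
  const next = waiting.shift();
  if (next) next(socket);
  else freeSockets.push(socket);
}

http.createServer(function (req, res) {
  acquire(function (socket) {
    socket.write(req.url + '\n');          // hypothetical one-line command
    socket.once('data', function (answer) {
      res.end(answer);                     // reply to the browser
      release(socket);                     // keep the TCP connection open for reuse
    });
  });
}).listen(8080);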