Socket disconnect acknowledgment delay when a user goes offline/online - node.js

We are facing a problem with sockets when a user switches networks.
For example: a user connected over network A switches to network B.
When the user drops off the old network, the socket takes some time to fire the disconnect event, and in the meantime the user has already reconnected with a new socket_id (after switching networks).
After the user reconnects we still receive the socket.on('disconnect') event for the old socket_id, but we would rather not, because the user has already reconnected.
We rely on this event because we have to manage duplicate sessions for a user on the socket.
Is it possible that, if the user has reconnected in the meantime, we do not receive the stale disconnect event at all?
We are using the following configuration on our server:
Socket library 2.1.1 with polling & websocket transports
Node server running the socket layer single- or multi-threaded.
With polling, the problem is rare on a single-threaded server but happens every time on a multi-threaded socket server.
With websocket it is the opposite: the problem is rare on a multi-threaded server but happens every time on a single-threaded socket server.
Is it possible to speed up the socket's response, so the disconnect is detected sooner?
Should the socket fire the queued disconnect events first, before connecting a user?
What is the best practice: using the socket with polling or websocket, on a single- or multi-threaded socket server?
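One application-level workaround (a rough sketch, assuming an existing socket.io server `io`, that the client identifies itself with a userId in the handshake query, and that an in-memory map is acceptable) is to remember the most recent socket id per user and ignore disconnect events coming from older sockets:

const activeSockets = new Map(); // userId -> id of the user's current socket

io.on('connection', (socket) => {
  const userId = socket.handshake.query.userId; // assumed client-supplied id
  activeSockets.set(userId, socket.id);

  socket.on('disconnect', () => {
    // If the user has already reconnected, the map holds the new socket.id,
    // so a late disconnect from the old socket is simply ignored.
    if (activeSockets.get(userId) === socket.id) {
      activeSockets.delete(userId);
      // ...tear down the user's session here
    }
  });
});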

Related

Error "close (transport close)" on Socket client side

In my express/socket app (which is running behind an HAproxy server), I am using sticky sessions (cookie based) to route requests to the same worker. I have 16 processes running in total (8 per machine, 2 machines). Socket session data is stored via the Redis adapter.
The problem I have is that when an event is fired from the server, the client can't receive it. Instead, it keeps throwing disconnection errors every few seconds (4-5):
Update: the client only receives the event if the transport happened to be open when the event was fired, and the transport is getting closed almost instantly and then restarting.
Can someone please suggest something on this?
Finally, I found the solution. It was the timeout client setting in the HAproxy config, which was set too low. Increasing it fixed the issue.
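For reference, the Redis adapter wiring described in the question looks roughly like this (host/port and the httpServer variable are placeholders):

// Each worker attaches the Redis adapter so rooms/events span all 16 processes.
const io = require('socket.io')(httpServer);
const redisAdapter = require('socket.io-redis');
io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));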

Handling reconnections in the socket.io server?

When the socket.io client performs an (automatic) reconnection - as might happen if a mobile client went to sleep then woke up again - does the server get a reconnect event? Or does it just see a disconnection and fresh connection?
In either case is there a way to
identify that it's the same client e.g. by a unique client id that persists across connections
have the client automatically re-join any rooms it was in before
Or do I need to code that functionality manually e.g. by having the client supply the id or rooms itself on reconnection?
I had a read of the socket.io docs and can't see any list of events that the server might receive.
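If it turns out you need to code this manually, a minimal sketch might look like the following, assuming the client keeps a persistent clientId and the server keeps a roomsByClient store (both names are made up here):

// Client: send the same persistent id with every (re)connection.
const socket = io({ query: { clientId: myPersistentId } });

// Server: on every connection (fresh or reconnect), re-join stored rooms.
io.on('connection', (socket) => {
  const clientId = socket.handshake.query.clientId;
  const rooms = roomsByClient.get(clientId) || []; // assumed Map of clientId -> rooms
  rooms.forEach((room) => socket.join(room));
});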

Socket.io - nodejs server side: transport close after reconnection

I have two browser socket.io clients, for example Client A and Client B, and they are connected to Room One.
The following behavior is happening on my computer (node.js 5, socket.io 1.4.5 installed):
Suppose both clients are connected to Room One. When Client A is disconnected (loses its connection) and reconnects after the timeout, and Client B does not emit any message to Room One in the meantime, then the original socket of Client A is closed with a timeout reason. That is fine for me; I need this behavior.
On the other side:
When Client A is disconnected (loses its connection) and reconnects within the timeout, and Client B emits a message to Room One during that disconnected period, then the original socket of Client A is closed immediately after the reconnect with the reason transport close, not a timeout.
It looks like there is some mechanism that marks sockets disconnected within the timeout window whenever another online socket emits a message to the same room: when the emitted message cannot be delivered to the disconnected socket (on the client side), that socket is apparently selected for immediate closing once it reconnects within the timeout window.
I noticed a further issue. When a mobile client loses its connection and somebody emits to the room the mobile client was in, then on Linux the mobile socket is disconnected immediately, without a timeout, with the reason transport close.
The situation above is different on macOS: the socket is disconnected either when the timeout expires, or immediately if the mobile client reconnects.
Can anybody help explain why these behaviors differ on the same version of node (6.2) and the same version of socket.io (1.4.6)?

Node clustering with websockets

I have a node cluster where the master responds to http requests.
The server also listens for websocket connections (via socket.io). A client connects to the server via the said websocket. Now the client chooses between various games (each node process handles a game).
The questions I have are the following:
Should I open a new connection for each node process? How do I tell the client that it should connect to the exact node process X? (Because the server might handle incoming connection requests on its own)
Is it possible to pass a socket to a node process, so that there is no need for opening a new connection?
What are the drawbacks if I just use one connection (in the master process) and pass the user messages to the respective node processes and the process messages back to the user? (I feel that it costs a lot of CPU to copy rather big objects when sending messages between the processes)
Is it possible to pass a socket to a node process, so that there is no need for opening a new connection?
You can send a plain TCP socket to another node process as described in the node.js doc here. The basic idea is this:
const child = require('child_process').fork('child.js');
child.send('socket', socket);
Then, in child.js, you would have this:
process.on('message', (m, socket) => {
  if (m === 'socket') {
    // you have a socket here
  }
});
The 'socket' message identifier can be any message name you choose - it is not special. When you use child.send() and the data you are sending is recognized as a socket, node.js uses platform-specific interprocess communication to share that socket with the other process.
But, I believe this only works for plain sockets that do not have any local state established beyond the TCP state. I have not tried it with an established webSocket connection myself, but I assume it does not work for that, because once a webSocket has higher-level state associated with it beyond just the TCP socket (such as encryption keys), the OS will not automatically transfer that state to the new process.
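For a plain TCP socket, the parent side typically looks roughly like this (modelled on the node.js docs; pauseOnConnect keeps the parent from reading data before the handoff, and the port is illustrative):

const net = require('net');
const { fork } = require('child_process');
const child = fork('child.js');

// Accept raw TCP connections and hand each one to the child untouched.
const server = net.createServer({ pauseOnConnect: true }, (socket) => {
  child.send('socket', socket);
});
server.listen(8000);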
Should I open a new connection for each node process? How do I tell the client that it should connect to the exact node process X? (Because the server might handle incoming connection requests on its own)
This is probably the simplest means of getting a socket.io connection to the new process. If you make sure that your new process is listening on a unique port number and that it supports CORS, then you can just take the socket.io connection you already have between the master process and the client and send a message to the client on it that tells the client where to reconnect to (what port number). The client can then contain code to listen for that message and make a connection to that new destination.
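A minimal sketch of that handoff, assuming each child process already listens on its own port and using a made-up 'redirect' event name and placeholder hostname:

// Server (master): tell the client which child process to reconnect to.
socket.emit('redirect', { port: 3001 }); // port of the chosen child

// Client: drop the current connection and connect to the child directly.
let socket = io('http://game.example.com'); // initial connection to the master
socket.on('redirect', ({ port }) => {
  socket.disconnect();
  socket = io('http://game.example.com:' + port); // CORS must be allowed on the child
});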
What are the drawbacks if I just use one connection (in the master process) and pass the user messages to the respective node processes and the process messages back to the user? (I feel that it costs a lot of CPU to copy rather big objects when sending messages between the processes)
The drawbacks are as you surmise. Your master process just has to spend CPU energy being the middle man forwarding packets both ways. Whether this extra work is significant to you depends entirely upon the context and has to be determined by measurement.
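For completeness, the middle-man forwarding itself is only a few lines; a rough sketch in which pickWorkerFor(), the workers array and the 'game-message' event name are all illustrative:

// Master: relay client messages to the worker that owns the game.
io.on('connection', (socket) => {
  const worker = pickWorkerFor(socket); // assumed routing function
  socket.on('game-message', (msg) => worker.send({ id: socket.id, msg }));
});

// Master: relay worker replies back to the right client (each socket.io
// socket is automatically in a room named after its own id).
workers.forEach((worker) => {
  worker.on('message', ({ id, msg }) => io.to(id).emit('game-message', msg));
});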
Here's some more info I discovered. It appears that if an incoming socket.io connection that arrives on the master is immediately shipped off to a cluster child before the connection establishes its initial socket.io state, then this concept could work for socket.io connections too.
Here's an article on sending a connection to another server with implementation code. This appears to be done immediately at connection time so it should work for an incoming socket.io connection that is destined for a specific cluster. The idea here is that there's sticky assignment to a specific cluster process and all incoming connections of any kind that reach the master are immediately transferred over to the cluster child before they establish any state.

Socket.io delay in firing the "disconnect" event?

I have a socket.io client connected to a node.js server. If I kill node.js at the command line, the client immediately freezes (i.e., communication stops), but there is a ~20 second delay before the "disconnect" event is fired. Is this behavior by design? Is there a configuration option to reduce the delay in firing the disconnect event?
It appears that this behavior changed in a relatively recent (last 6 months) update of socket.io. Before the reconnect functionality was built in to socket.io itself, I implemented my own reconnect logic using a "disconnect" event handler and at that time the "disconnect" event fired almost instantly when server communication halted.
I think this is likely by design. The client may be presuming the server is 'temporarily' unreachable (network traffic etc.) and essentially keeps trying to reach it... until the client timeout kicks in.
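If faster detection matters more than the extra ping traffic, the server-side heartbeat options can be tightened; the client learns these values from the handshake, so it also notices a dead server sooner. The values and the httpServer variable below are only an illustration:

// Shorter heartbeats make a dead peer (and therefore 'disconnect') get
// detected sooner, at the cost of more ping/pong traffic.
const io = require('socket.io')(httpServer, {
  pingInterval: 5000, // how often a heartbeat is sent
  pingTimeout: 5000   // how long to wait for the answering heartbeat
});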
I send a disconnect (socket.disconnect()) to the server directly from the client, and I don't get this issue.
