GCM XMPP Socket consistently getting EPIPE and disconnected when sending notifications - node.js

We have an xmpp connection server that connects sockets to GCM XMPP endpoints and starts sending notifications.
One thing we've noticed is that upon sending a semi-large notification (to as few as 1,000 devices), the sockets keep getting suddenly disconnected with the following error message:
Client disconnected socket=b913-512-904dc69, code=EPIPE, errno=EPIPE, syscall=write
For example, this is the log of the live server when it starts sending a notification to different registration IDs.
info: Sent downstream message msgId=P#c1uq... socketId=512
info: Sent downstream message msgId=P#c3tE... socketId=512
info: Sent downstream message msgId=P#c1TF... socketId=512
info: Sent downstream message msgId=P#c3sy... socketId=512
info: Sent downstream message msgId=P#c41N... socketId=512
...
info: Sent downstream message msgId=P#cJbr... socketId=512
info: Sent downstream message msgId=P#cJXO... socketId=512
info: Client disconnected socket=b913-512-904dc69, code=EPIPE, errno=EPIPE, syscall=write
This keeps happening all the time, everywhere in our system, and is making QA of the service pretty difficult.
Another thing we've noticed is that socket.send(stanza) sometimes returns false, even when the socket is definitely connected. This is even worse, since we then have to re-queue the messages, which is really resource-heavy when sending millions of them. This is explained below.
Additional Information:
From the 1st message to the 84th (where the disconnection happens), less than 100 milliseconds have passed.
We have about 52 sockets open for this JID/PASSWORD (senderId/API key in GCM's terms), on 3 different servers. All of them keep disconnecting now and then when a large notification task comes along (say, to 10,000 recipients).
Sockets successfully reconnect, but they stay disconnected for several seconds, and this reduces the efficiency and reliability of our system.
How the connection is setup:
const xmpp = require('node-xmpp-client');

let socket = new xmpp.Client({
    port: 5235,
    host: 'gcm-xmpp.googleapis.com',
    legacySSL: true,
    preferredSaslMechanism: 'PLAIN',
    reconnect: true,
    jid: $JID,
    password: $PASSWORD
});

socket.connection.socket.setTimeout(0);
socket.connection.socket.setKeepAlive(true, 10000);
socket.on('stanza', (stanza) => handleStanza(stanza));
...
Acks are sent for every upstream message received.
But one thing we do see is that the following sometimes returns false when sending downstream messages, "even when the socket is connected".
// This returns false many times! even when the socket.connection.connected === true!
socket.send(xmppStanza)
If this happens, we queue the ack message to be retried later but keep sending messages to GCM.
Why does socket.send return false sometimes? (This is obviously not an error like EPIPE; it is just false, meaning the socket could not be flushed. Maybe the socket becomes un-writable even though it is still connected?)
If acks are delayed, will GCM close the connection over the delayed acks, or will it just stop sending upstream messages?
(AFAIK, it will just stop sending upstreams, so maybe this has nothing to do with the connections being closed with EPIPE?)
I'd be really grateful if anyone could shed some light on this behavior.
Thanks!

Related

Python websockets ping / pong / keep-alive errors

I have a simple script reading messages from a websocket server, and I don't fully understand the keep-alive system.
I'm getting two errors with practically the same meaning: sent 1011 (unexpected error) keepalive ping timeout; no close frame received and no close frame received or sent.
I am using the websockets module. link to the docs
I'd like to know when it is my job to send a ping or a pong, and whether I should change the timeout to a longer period, since I'll be running multiple connections at the same time (to the same server, but on a different channel).
I have tried running another asyncio task which pinged the server every 10 to 20 seconds, and also replying only after I receive a packet (which in my case can arrive 1 second apart, or not until the next day), both with a normal websocket.ping() and with a custom payload (the heartbeat JSON string {"event": "bts:heartbeat"}).
One solution I can see is to just reopen the connection after I get the error, but that feels wrong.
async with websockets.connect(self.ws) as websocket:
    packet = {
        "event": "bts:subscribe",
        "data": ...,
    }
    await websocket.send(json.dumps(packet))
    await websocket.recv()  # reply
    try:
        async for message in websocket:
            tr = json.loads(message)
            await self.send(tr)
            packet = {"event": "bts:heartbeat"}
            await websocket.pong(data=json.dumps(packet))
    except Exception as e:  # websockets.ConnectionClosedError:
        await self.send_status(f"Subscription Error: {e}", 0)
Keep-alive packets are sent automatically by the library (see https://websockets.readthedocs.io/en/latest/topics/timeouts.html#keepalive-in-websockets), so there should be no need to do that yourself.
In your case it seems that the server is not responding to your client's ping in a timely manner. This FAQ entry and its recommendation to catch ConnectionClosed look relevant.

SocketIO Chrome Inspector Frames

I was playing around with Socket.IO and ran into some questions when viewing the frames in the chrome inspector.
What do the numbers beside each frame's content mean?
That's the Engine.io protocol where the number you see is the packet encoding:
<packet type id>[<data>]
example:
2probe
And these are the different packet types:
0 open
Sent from the server when a new transport is opened (recheck)
1 close
Requests the close of this transport but does not shut down the connection itself.
2 ping
Sent by the client. Server should answer with a pong packet containing the same data
example 1. client sends: 2probe 2. server sends: 3probe
3 pong
Sent by the server to respond to ping packets.
4 message
actual message, client and server should call their callbacks with the data.
example 1
server sends: 4HelloWorld
client receives and calls callback socket.on('message', function (data) { console.log(data); });
example 2
client sends: 4HelloWorld
server receives and calls callback socket.on('message', function (data) { console.log(data); });
5 upgrade
Before engine.io switches transports, it tests whether the server and client can communicate over the new transport. If this test succeeds, the client sends an upgrade packet, which requests that the server flush its cache on the old transport and switch to the new transport.
6 noop
A noop packet. Used primarily to force a poll cycle when an incoming websocket connection is received.
example
client connects through new transport
client sends 2probe
server receives and sends 3probe
client receives and sends 5
server flushes and closes old transport and switches to new.
You can read the full documentation here
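Given that framing, a frame can be decoded by peeling off the leading digit; a minimal sketch (assuming the frame arrives as a string, as shown in the inspector):

```javascript
// Engine.io packet types, indexed by their numeric encoding.
const TYPES = ['open', 'close', 'ping', 'pong', 'message', 'upgrade', 'noop'];

// Split an engine.io frame like '2probe' into its packet type and payload.
function decodeFrame(frame) {
  const id = Number(frame[0]);
  if (!Number.isInteger(id) || id < 0 || id >= TYPES.length) {
    throw new Error('unknown packet type: ' + frame[0]);
  }
  return { type: TYPES[id], data: frame.slice(1) };
}

decodeFrame('2probe');      // → { type: 'ping', data: 'probe' }
decodeFrame('4HelloWorld'); // → { type: 'message', data: 'HelloWorld' }
```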

Upgrading from XHR-Polling to Websockets in Socket.io 1.0.X results in packets dropping

I have been working on a problem in Socket.io 1.0.6 where a client randomly does or does not receive a packet on a given transport protocol.
I have been running my node application with the prefix DEBUG=*, and in the browser I set the variable localStorage.debug='*'.
It goes as follows:
1. Client emits on 'event'.
2. Server receives on 'event' and emits on 'event'.
3. Client does receive on 'event'/client does not receive on 'event' (randomly).
The debug messages confirm this.
I do not get any error messages; I just don't receive the packets.
My server now runs with this configuration, which works every time:
var io = require('socket.io')(port, { allowUpgrades : false });
Has anyone else experienced problems with upgrading transport protocols in engine.io?

Websocket transport reliability (Socket.io data loss during reconnection)

Used
NodeJS, Socket.io
Problem
Imagine there are 2 users, U1 and U2, connected to an app via Socket.io. The algorithm is the following:
1. U1 completely loses his Internet connection (e.g. switches the Internet off).
2. U2 sends a message to U1.
3. U1 does not receive the message yet, because his Internet is down.
4. The server detects the U1 disconnection via a heartbeat timeout.
5. U1 reconnects to socket.io.
6. U1 never receives the message from U2; it is lost at Step 4, I guess.
Possible explanation
I think I understand why this happens:
At Step 4, the server kills the socket instance and, with it, the queue of messages to U1.
Moreover, at Step 5, U1 and the server create a new connection (the old one is not reused), so even if the message were still queued, the previous connection is lost anyway.
Need help
How can I prevent this kind of data loss? I have to use heartbeats, because I don't want people to hang in the app forever. Also, I must still allow reconnection, because when I deploy a new version of the app I want zero downtime.
P.S. The thing I call a "message" is not just a text message that I could store in a database, but a valuable system message whose delivery must be guaranteed, or the UI screws up.
Thanks!
Addition 1
I already have a user account system. Moreover, my application is already complex. Adding offline/online statuses won't help, because I already have that kind of thing. The problem is different.
Check out Step 2. At this step we technically cannot tell whether U1 has gone offline; he has just lost his connection, let's say for 2 seconds, probably because of bad Internet. So U2 sends him a message, but U1 doesn't receive it because his Internet is still down (Step 3). Step 4 is needed to detect offline users; let's say the timeout is 60 seconds. Eventually, after another 10 seconds, U1's Internet connection comes back up and he reconnects to socket.io. But the message from U2 is lost in space, because on the server U1 was disconnected by the timeout.
That is the problem: I want 100% delivery.
Solution
1. Collect each emit (emit name and data) in a per-user {} object, identified by a random emitID. Send the emit.
2. Confirm the emit on the client side (send an emit back to the server with the emitID).
3. If confirmed, delete the object identified by emitID from {}.
4. If the user reconnected, check {} for this user and loop through it, executing Step 1 for each object in {}.
5. When disconnected and/or connected, flush {} for the user if necessary.
// Server
const pendingEmits = {};
socket.on('reconnection', () => resendAllPendingEmits());
socket.on('confirm', (emitID) => { delete pendingEmits[emitID]; });

// Client
socket.on('something', (emitID) => {
    socket.emit('confirm', emitID);
});
Solution 2 (kinda)
Added 1 Feb 2020.
While this is not really a solution for websockets, someone may still find it handy. We migrated from websockets to SSE + Ajax. SSE lets a client keep a persistent TCP connection open and receive messages from the server in realtime; to send messages from the client to the server, simply use Ajax. There are disadvantages, like latency and overhead, but SSE guarantees reliability because it is a TCP connection.
Since we use Express we use this library for SSE https://github.com/dpskvn/express-sse, but you can choose the one that fits you.
SSE is not supported in IE and most Edge versions, so you would need a polyfill: https://github.com/Yaffle/EventSource.
Others have hinted at this in other answers and comments, but the root problem is that Socket.IO is just a delivery mechanism, and you cannot depend on it alone for reliable delivery. The only person who knows for sure that a message has been successfully delivered to the client is the client itself. For this kind of system, I would recommend making the following assertions:
Messages aren't sent directly to clients; instead, they get sent to the server and stored in some kind of data store.
Clients are responsible for asking "what did I miss" when they reconnect, and will query the stored messages in the data store to update their state.
If a message is sent to the server while the recipient client is connected, that message will be sent in real time to the client.
Of course, depending on your application's needs, you can tune pieces of this--for example, you can use, say, a Redis list or sorted set for the messages, and clear them out if you know for a fact a client is up to date.
Here are a couple of examples:
Happy path:
U1 and U2 are both connected to the system.
U2 sends a message to the server that U1 should receive.
The server stores the message in some kind of persistent store, marking it for U1 with some kind of timestamp or sequential ID.
The server sends the message to U1 via Socket.IO.
U1's client confirms (perhaps via a Socket.IO callback) that it received the message.
The server deletes the persisted message from the data store.
Offline path:
U1 loses Internet connectivity.
U2 sends a message to the server that U1 should receive.
The server stores the message in some kind of persistent store, marking it for U1 with some kind of timestamp or sequential ID.
The server sends the message to U1 via Socket.IO.
U1's client does not confirm receipt, because they are offline.
Perhaps U2 sends U1 a few more messages; they all get stored in the data store in the same fashion.
When U1 reconnects, it asks the server, "The last message I saw was X / I have state X; what did I miss?"
The server sends U1 all the messages it missed from the data store, based on U1's request.
U1's client confirms receipt and the server removes those messages from the data store.
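The offline path above reduces to a per-user log keyed by sequential IDs plus a "what did I miss" query; a minimal in-memory sketch (names are hypothetical, and a real system would back this with Redis or a database as the answer suggests):

```javascript
// Per-user message log with sequential IDs; clients resync with getSince().
class MessageStore {
  constructor() { this.logs = new Map(); this.nextId = 1; }
  store(userId, body) {
    const msg = { id: this.nextId++, body };
    if (!this.logs.has(userId)) this.logs.set(userId, []);
    this.logs.get(userId).push(msg);
    return msg;
  }
  // "The last message I saw was lastSeenId; what did I miss?"
  getSince(userId, lastSeenId) {
    return (this.logs.get(userId) || []).filter((m) => m.id > lastSeenId);
  }
  // The client confirmed receipt up to ackId; drop what is no longer needed.
  ack(userId, ackId) {
    this.logs.set(userId, this.getSince(userId, ackId));
  }
}

const store = new MessageStore();
store.store('U1', 'hello');          // gets id 1
store.store('U1', 'are you there?'); // gets id 2
store.ack('U1', 1);                  // U1 confirmed message 1
store.getSince('U1', 0);             // only message 2 remains pending
```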
If you absolutely want guaranteed delivery, then it's important to design your system in such a way that being connected doesn't actually matter, and that realtime delivery is simply a bonus; this almost always involves a data store of some kind. As user568109 mentioned in a comment, there are messaging systems that abstract away the storage and delivery of said messages, and it may be worth looking into such a prebuilt solution. (You will likely still have to write the Socket.IO integration yourself.)
If you're not interested in storing the messages in the database, you may be able to get away with storing them in a local array; the server tries to send U1 the message, and stores it in a list of "pending messages" until U1's client confirms that it received it. If the client is offline, then when it comes back it can tell the server "Hey I was disconnected, please send me anything I missed" and the server can iterate through those messages.
Luckily, Socket.IO provides a mechanism that allows a client to "respond" to a message, and it looks like native JS callbacks. Here is some pseudocode:
// server
pendingMessagesForSocket = [];

function sendMessage(message) {
    pendingMessagesForSocket.push(message);
    socket.emit('message', message, function() {
        pendingMessagesForSocket.remove(message);
    });
}

socket.on('reconnection', function(lastKnownMessage) {
    // you may want to make sure you resend them in order, or one at a time, etc.
    for (message in pendingMessagesForSocket since lastKnownMessage) {
        socket.emit('message', message, function() {
            pendingMessagesForSocket.remove(message);
        });
    }
});
// client
socket.on('connection', function() {
    if (previouslyConnected) {
        socket.emit('reconnection', lastKnownMessage);
    } else {
        // first connection; any further connections means we disconnected
        previouslyConnected = true;
    }
});

socket.on('message', function(data, callback) {
    // Do something with `data`
    lastKnownMessage = data;
    callback(); // confirm we received the message
});
This is quite similar to the last suggestion, simply without a persistent data store.
You may also be interested in the concept of event sourcing.
Michelle's answer is pretty much on point, but there are a few other important things to consider. The main question to ask yourself is: "Is there a difference between a user and a socket in my app?" Another way to ask that is "Can each logged in user have more than 1 socket connection at one time?"
In the web world it is probably always a possibility that a single user has multiple socket connections, unless you have specifically put something in place to prevent this. The simplest example is a user with two tabs of the same page open. In these cases you don't care about sending a message/event to the human user just once... you need to send it to each socket instance for that user, so that each tab can run its callbacks to update the UI state. Maybe this isn't a concern for certain applications, but my gut says it would be for most. If this is a concern for you, read on.
To solve this (assuming you are using a database as your persistent storage) you would need 3 tables.
users - which is a 1 to 1 with real people
clients - which represents a "tab" that could have a single connection to a socket server. (any 'user' may have multiple)
messages - a message that needs to be sent to a client (not a message that needs to be sent to a user or to a socket)
The users table is optional if your app doesn't require it, but the OP said they have one.
The other thing that needs to be properly defined is: what is a socket connection? When is a socket connection created? When is a socket connection reused? Michelle's pseudocode makes it seem like a socket connection can be reused. With Socket.IO, they CANNOT be reused; I've seen this be the source of a lot of confusion. There are real-life scenarios where Michelle's example does make sense, but I have to imagine those scenarios are rare. What really happens is that when a socket connection is lost, that connection, its ID, etc. will never be reused. So any messages marked specifically for that socket will never be delivered to anyone, because when the client that had originally connected reconnects, it gets a completely brand new connection and a new ID. This means it's up to you to do something to track clients (rather than sockets or users) across multiple socket connections.
So for a web based example here would be the set of steps I'd recommend:
When a user loads a client (typically a single webpage) that has the potential for creating a socket connection, add a row to the clients database which is linked to their user ID.
When the user actually does connect to the socket server, pass the client ID to the server with the connection request.
The server should validate that the user is allowed to connect and that the client row in the clients table is available for connection, then allow/deny accordingly.
Update the client row with the socket ID generated by Socket.IO.
Send any items in the messages table connected to the client ID. There wouldn't be any on initial connection, but if this was from the client trying to reconnect, there may be some.
Any time a message needs to be sent to that socket, add a row in the messages table which is linked to the client ID you generated (not the socket ID).
Attempt to emit the message and listen for an acknowledgement from the client.
When you get the acknowledgement, delete that item from the messages table.
You may wish to create some logic on the client side that discards duplicate messages sent from the server since this is technically a possibility as some have pointed out.
Then, when a client disconnects from the socket server (purposefully or via an error), DO NOT delete the client row; at most, clear out the socket ID. This is because that same client may try to reconnect.
When a client tries to reconnect, send the same client ID it sent with the original connection attempt. The server will view this just like an initial connection.
When the client is destroyed (user closes the tab or navigates away), this is when you delete the client row and all messages for this client. This step may be a bit tricky.
Because the last step is tricky (at least it used to be; I haven't done anything like that in a long time), and because there are cases like power loss where the client will disconnect without cleaning up its client row and never try to reconnect with that same client row, you probably want something that runs periodically to clean up stale client and message rows. Or you can just store all clients and messages forever and mark their state appropriately.
So, just to be clear: in cases where one user has two tabs open, you will be adding two identical messages to the messages table, each marked for a different client, because your server needs to know whether each client received them, not just each user.
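A minimal in-memory sketch of the client/message bookkeeping described in these steps (the two Maps stand in for the clients and messages tables; all names are hypothetical):

```javascript
// In-memory model of the client/message tables described above.
const clients = new Map();   // clientId -> { userId, socketId }
const messages = new Map();  // clientId -> [pending messages]

function registerClient(clientId, userId) { // a "tab" loads: add a client row
  clients.set(clientId, { userId, socketId: null });
  messages.set(clientId, []);
}
function connect(clientId, socketId) {      // initial connect or reconnect
  clients.get(clientId).socketId = socketId;
  return messages.get(clientId);            // pending items to (re)send
}
function queueMessage(clientId, body) {     // keyed by clientId, NOT socketId
  messages.get(clientId).push(body);
}
function acknowledge(clientId, body) {      // client acked: drop from the queue
  const q = messages.get(clientId);
  const i = q.indexOf(body);
  if (i !== -1) q.splice(i, 1);
}
function disconnect(clientId) {             // keep the row, clear the socket
  clients.get(clientId).socketId = null;
}

// Two tabs of one user get two client rows and two copies of the message:
registerClient('tab-1', 'user-42');
registerClient('tab-2', 'user-42');
queueMessage('tab-1', 'hi');
queueMessage('tab-2', 'hi');
disconnect('tab-2');            // tab 2 loses its socket
connect('tab-2', 'sock-99');    // on reconnect, 'hi' is still pending for tab 2
```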
As already written in another answer, I also believe you should look at realtime as a bonus: the system should be able to work without realtime as well.
I'm developing an enterprise chat for a large company (iOS, Android and web frontends, .NET Core + Postgres backend), and after developing a way for the websocket to re-establish its connection (through a socket UUID) and fetch undelivered messages (stored in a queue), I realized there was a better solution: resync via a REST API.
Basically, I ended up using the websocket just for realtime, with an integer tag on each realtime message (user online, typers, chat message and so on) for detecting lost messages.
When the client gets an id that is not monotonic (previous id + 1), it knows it is out of sync, so it drops all the socket messages and asks for a resync of all its observers through the REST API.
This way we can handle many variations in the state of the application during the offline period without having to parse tons of websocket messages in a row on reconnection, and we are sure to be in sync (because the last sync date is set only by the REST API, not by the socket).
The only tricky part is watching for realtime messages from the moment you call the REST API to the moment the server replies, because what is read from the db takes time to get back to the client, and in the meanwhile variations could happen, so they need to be cached and taken into account.
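The out-of-sync detection described here (each realtime message carries a sequential tag; any gap triggers a REST resync) can be sketched as follows (the message shape and callback names are hypothetical):

```javascript
// Tracks the last realtime sequence tag seen; a gap means messages were
// lost, so the client should drop socket traffic and resync over REST.
function makeSyncTracker(onResyncNeeded) {
  let lastId = null;
  return function receive(msg) {        // msg: { id, ... }
    if (lastId !== null && msg.id !== lastId + 1) {
      onResyncNeeded(lastId, msg.id);   // out of sync: request a full resync
      lastId = msg.id;                  // the REST reply will set the true state
      return false;                     // drop this socket message
    }
    lastId = msg.id;
    return true;                        // in sequence: apply normally
  };
}

let gaps = [];
const receive = makeSyncTracker((from, to) => gaps.push([from, to]));
receive({ id: 1 }); // in sequence, applied
receive({ id: 2 }); // in sequence, applied
receive({ id: 5 }); // gap: a resync is requested and the message is dropped
```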
We are going into production in a couple of months,
I hope to get back sleeping by then :)
It seems that you already have a user account system. You know which accounts are online/offline, so you can handle the connect/disconnect events.
So the solution is: add online/offline tracking and offline messages in the database for each user:
chatApp.onLogin(function (user) {
    user.readOfflineMessage(function (msgs) {
        user.sendOfflineMessage(msgs, function (err) {
            if (!err) user.clearOfflineMessage();
        });
    });
});

chatApp.onMessage(function (fromUser, toUser, msg) {
    if (toUser.isOnline()) {
        toUser.sendMessage(msg, function (err) {
            // alert CAN NOT SEND, RETRY?
        });
    } else {
        toUser.addToOfflineQueue(msg);
    }
});
Look here: Handle browser reload socket.io.
I think you could use the solution I came up with there. If you modify it properly, it should work as you want.
What I think you want is to have a reusable socket for each user, something like:
Client:
socket.on("msg", function () {
    socket.send("msg-conf");
});
Server:
// Add this socket property to all users, with your existing user system
user.socket = {
    messages: [],
    io: null
};

user.send = function (msg) { // Call this method to send a message
    var self = this;
    if (this.socket.io) { // io will be set to null when disconnected
        // Wait for confirmation that the message was sent.
        var hasconf = false;
        this.socket.io.once("msg-conf", function (data) {
            // Expect the client to emit "msg-conf"
            hasconf = true;
        });
        // Send the message
        this.socket.io.send("msg", msg); // if connected, call socket.io's send method
        setTimeout(function () {
            if (!hasconf) {
                self.socket.io = null;          // If the client did not respond, mark them as offline.
                self.socket.messages.push(msg); // Add the message to the queue
            }
        }, 60 * 1000); // Make sure this is the same as your timeout.
    } else {
        this.socket.messages.push(msg); // Otherwise the user is offline; add it to the message queue
    }
};

user.flush = function () { // Call this when the user comes back online
    while (this.socket.messages.length) { // For every message in the queue, send it.
        this.send(this.socket.messages.shift());
    }
};

// Make sure this runs whenever the user gets logged in/comes online
user.onconnect = function (socket) {
    this.socket.io = socket; // Set the socket.io socket
    this.flush(); // Send all messages that are waiting
};

// Make sure this is called when the user disconnects/logs out
user.disconnect = function () {
    this.socket.io = null; // Set io to null, so any new messages are queued, not sent.
};
Then the socket queue is preserved between disconnects.
Make sure each user's socket property is saved to the database, and make the methods part of your user prototype. Which database does not matter; just save it however you have been saving your users.
This avoids the problem mentioned in Addition 1 by requiring a confirmation from the client before marking the message as sent. If you really wanted to, you could give each message an id and have the client send that id with msg-conf, then check it.
In this example, user is the template user that all users are copied from, or like the user prototype.
Note: This has not been tested.
I've been looking at this stuff lately and think a different path might be better.
Try looking at Azure Service Bus; its queues and topics take care of the offline states.
Messages wait for the user to come back, and then they get them.
There is a cost to running a queue, but it's about $0.05 per million operations for a basic queue, so the cost of development would be higher, given the hours of work needed to write a queuing system.
https://azure.microsoft.com/en-us/pricing/details/service-bus/
And Azure Service Bus has libraries and examples for PHP, C#, Xamarin, Angular, JavaScript, etc.
So the server sends a message and does not need to worry about tracking it.
The client can also use a queue to send messages back, which means you can handle load balancing if needed.
Try this emit cheat sheet:
io.on('connect', onConnect);

function onConnect(socket) {
    // sending to the client
    socket.emit('hello', 'can you hear me?', 1, 2, 'abc');

    // sending to all clients except sender
    socket.broadcast.emit('broadcast', 'hello friends!');

    // sending to all clients in 'game' room except sender
    socket.to('game').emit('nice game', "let's play a game");

    // sending to all clients in 'game1' and/or in 'game2' room, except sender
    socket.to('game1').to('game2').emit('nice game', "let's play a game (too)");

    // sending to all clients in 'game' room, including sender
    io.in('game').emit('big-announcement', 'the game will start soon');

    // sending to all clients in namespace 'myNamespace', including sender
    io.of('myNamespace').emit('bigger-announcement', 'the tournament will start soon');

    // sending to individual socketid (private message)
    socket.to(<socketid>).emit('hey', 'I just met you');

    // sending with acknowledgement
    socket.emit('question', 'do you think so?', function (answer) {});

    // sending without compression
    socket.compress(false).emit('uncompressed', "that's rough");

    // sending a message that might be dropped if the client is not ready to receive messages
    socket.volatile.emit('maybe', 'do you really need it?');

    // sending to all clients on this node (when using multiple nodes)
    io.local.emit('hi', 'my lovely babies');
}

Clients get disconnected automatically

I am trying to implement a chat module for my app and am running into 2 problems.
The first one is that my socket clients keep disconnecting automatically.
Here's what I am doing.
My clients connect to a room which is created dynamically; the name of the room is a random ID that I generate. When a client sends a message, I print a log to see how many people are in the room, and it logs "2" on the server side, which is correct. But as I keep sending messages from the clients to the server, it starts showing "1" client connected, and after some time it shows 0 clients connected in the room. Why are the people automatically disconnecting?
socket.on('SendChat', function (msgobj) { // pass the message to all people in the room
    console.log("Msg from " + msgobj.MsgSenderName + " in RoomID = " + msgobj.RoomID);
    console.log("People in room " + io.sockets.clients(msgobj.RoomID).length);
    socket.broadcast.to(String(msgobj.RoomID)).emit('RecieveChat', msgobj);
});
This event is raised on the server side when someone sends a message from any of the clients, and the msgobj contains the RoomID. So you can see I am logging the number of people in the room.
The second problem is that I am trying to broadcast the message I receive from a client, but the event on the other clients is not raised. In the code above you can see the last line, which I use to broadcast the msg to all the other clients, but the event on those clients is never fired and I don't know why. Here's what is on my client:
this.Socket.on('RecieveChat', function (obj) {
    self.Controller.RecieveChat(obj);
});
and here's my log:
You can see the RoomID is the same, but the people in the room keep leaving. This snapshot shows only 1 person in the room; before that there were two, which then dropped to 1 and eventually to 0.
EDIT:
When I change socket.broadcast.to(String(msgobj.RoomID)).emit('RecieveChat', msgobj); to socket.broadcast.emit('RecieveChat', msgobj); it starts working, but it does not work if I emit inside a room using the to method.
Transport end (close timeout) seems to be causing the issue. How do I resolve this?
Finally found the solution here: the problem was with the version of socket.io that I was using.
