Understanding Disconnect / Reconnect Process and Error Messages - pusher

I am trying to implement a feature which notifies the user of disconnections from Pusher, and indicates when reconnection has occurred. My first experiment is simply to log changing Pusher states to the console:
var pusher = new Pusher('MY_ACCOUNT_STRING');
pusher.connection.bind('state_change', function(states) {
    console.log(states.current);
});
I then refresh the page, get a connection to Pusher, disable my internet connection, wait for Pusher to detect the disconnection, re-enable my internet connection, and wait for Pusher to detect that. Here's Chrome's console output during the process:
Here are my questions:
It took over a minute, possibly even 2-3 minutes, before the disconnection was detected by Pusher. Is there a way to decrease that time so Pusher detects disconnection within 10 or so seconds?
Why am I seeing those red errors, and what exactly do they mean? Is that normal? I would think that with the correct setup the errors would be handled, since a disconnection event is an "expected" exception within the Pusher context.
What is the 1006 error and why am I seeing that?
Thanks for any help!
EDIT:
While watching the output of a long-standing connection, I've also seen the following a number of times. What causes it, and how can I capture and handle it?
disconnected login.js:146
connecting login.js:146
Pusher : Error : {"type":"WebSocketError","error":{"type":"PusherError","data":{"code":1007,"message":"Server heartbeat missed"}}} pusher.min.js:12
connected

That's not normal behavior. Have you had the chance to check this on a different machine and network? It looks like a network problem.
Question 1.
When I disable Wi-Fi, it takes Pusher 4 seconds to notice and change the state to disconnected and then to unavailable.
When I re-enable Wi-Fi, I only get the same error as you do, from http://js.pusher.com/2.1.3/sockjs.js.
I've got no idea about the implications of doing so, but you could try altering the default timeouts:
var pusher = new Pusher('MY_ACCOUNT_STRING', {
    pong_timeout: 6000,        // default = 30000
    unavailable_timeout: 2000  // default = 10000
});
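Independent of the timeouts, you could also surface the relevant states to the user directly, which is what the original experiment was after. Here is a minimal sketch built on the documented state_change binding; notifyUser is a hypothetical stand-in for your own UI code:

var pusher = new Pusher('MY_ACCOUNT_STRING');
var wasDisconnected = false;

// Hypothetical UI helper; replace with your own banner/toast logic.
function notifyUser(message) {
    console.log(message);
}

pusher.connection.bind('state_change', function(states) {
    if (states.current === 'unavailable' || states.current === 'disconnected') {
        wasDisconnected = true;
        notifyUser('Connection lost, retrying...');
    } else if (states.current === 'connected' && wasDisconnected) {
        wasDisconnected = false;
        notifyUser('Reconnected.');
    }
});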
Question 2.
No idea; I don't think the lib should throw those errors.
Question 3.
The errors are close codes from the WebSocket protocol: https://www.rfc-editor.org/rfc/rfc6455
1006 is a reserved value and MUST NOT be set as a status code in a
Close control frame by an endpoint. It is designated for use in
applications expecting a status code to indicate that the
connection was closed abnormally, e.g., without sending or
receiving a Close control frame.
1007 indicates that an endpoint is terminating the connection
because it has received data within a message that was not
consistent with the type of the message (e.g., non-UTF-8 [RFC3629]
data within a text message).
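Note that in the log above, the 1007 arrives wrapped in a PusherError with the message "Server heartbeat missed", so it may be Pusher's own heartbeat code rather than the RFC close code. Either way, you can capture these errors yourself instead of only seeing the library print them, by binding to the connection's error event. A minimal sketch; the field layout is taken from the logged error above:

pusher.connection.bind('error', function(err) {
    // Matches the logged shape:
    // {"type":"WebSocketError","error":{"type":"PusherError","data":{"code":1007,...}}}
    if (err.error && err.error.data) {
        console.warn('Pusher error', err.error.data.code, err.error.data.message);
    }
});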

Related

Is there a way to overload the Node.js event loop using websockets

I'm having issues with Node.js and the "ws" implementation of WebSocket (https://www.npmjs.com/package/ws). After a surge (plenty of messages in a short window of time), I have data that suggests I've "missed" a message.
I've contacted the owner of the emitter server and he assures me that all messages have been sent on his side.
I've logged every message received on my side (at the start of the on('message', () => {}) handler), and I can't seem to find the message, so my assumption is that it never even reached this point.
So I'm wondering:
Messages are received and treated in FIFO order. During the treatment of the current message, new ones will be stacked in the Node event loop to be computed immediately after. Correct? Is there a way for that event loop to get "too big", such that it drops new incoming messages? If so, does it drop them quietly, or does the program crash vigorously (in other words, how can I see if a message has been dropped this way)?
Does the 'ws' module have any known limitations on the maximum number of messages received? Does it have an internal way of dropping messages?
Is there a better alternative than the 'ws' module ?
Is there any other ways to explain a "missed" message ?
Thanks a lot for your insights,
I use ws in nodejs to handle large message flows from many clients simultaneously in production, and I have never had it lose messages. Each server handles several thousand messages each second from hundreds of different client connections. The way my system works, if ws dropped messages or changed their order, my users would complain loudly.
That makes me guess you are not hitting any limitation of ws.
Early in my programming work I had the not-so-bright idea of putting incoming messages in queue objects in my nodejs code and processing them "later." That led to a hideously confusing message flow through my server. It sometimes looked like I had lost ws messages. I was happy to delete all that code, and dispatch every message completely within its message event handler.
Websocket connections sometimes close abnormally. Because network. You can catch those situations with error and close event handlers. It can take a while for the sender of a message, or the receiver, to detect that a network fault of some kind disrupted its connection. That can lead to disagreement about message count between sender and receiver. It's worth investigating.
I adorn ws's connection objects with message counts ("adorn" -- add an application-specific property to an object) and put those message counts into the log when a connection closes.
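As a concrete version of that last point, here is a minimal sketch using ws's standard server API: it counts messages per connection and logs the count when the connection closes, so sender and receiver tallies can be compared after an abnormal disconnect. The port and event handling are placeholders:

const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });

wss.on('connection', (ws) => {
    ws.messageCount = 0; // "adorn" the connection with an app-specific counter

    ws.on('message', (data) => {
        ws.messageCount++;
        // Dispatch the message completely here; no app-level queuing.
    });

    ws.on('error', (err) => {
        console.error('connection error:', err.message);
    });

    ws.on('close', (code) => {
        console.log('connection closed (' + code + '): ' + ws.messageCount + ' messages received');
    });
});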

How to remove all/single event listeners in socket.io

I am using socket.io for realtime functionality in my app with hapijs. When I try to add a listener on the server side in a hapijs route, and I reload the same route/page 10 times or more, it starts showing me this error: (node:9004) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 board_5a40a863a7fbf12cf8a7f1b8_element_sorted listeners added. Use emitter.setMaxListeners() to increase limit.
I tried each of the following to remove listeners first and then add them back using socket.on('eventname', callback):
io.sockets.removeListner("eventname")
io.removeListner("eventname")
socket.removeListner("eventname")
io.sockets.removeAllListners()
io.removeAllListners()
socket.removeAllListners()
But every time I got an error that removeAllListners/removeListner is not a function.
I also tried setting the max listeners limit to unlimited using each of the following:
io.setMaxListeners(0);
io.sockets.setMaxListeners(0);
socket.setMaxListeners(0);
But I still kept getting the same memory leak warning. So can somebody tell me the solution for this? I would preferably like to follow the approach of removing the event listeners first and then setting them back, but I don't know which function I need to call. :(
Also, I want to know one more thing: is it a good approach to create a new and unique event listener for every user, rather than one common event listener for all users?
For example, say I have a chat app with 1 million users.
In the first approach I would have to create 1 million event listeners for 1 million users, so whenever there is a new message from a user, only the users chatting with that user get a ping from the server.
In the second approach I would have to create 1 common event listener for all users, but now the server has to ping 1 million users, and I have to parse every message received on the client end and check whether it is for me or for somebody else.
In my view, the second approach is not a good one because of security issues, as there is a chance of a message being received by the wrong/unauthorized user.
But I am still not sure which one to follow.
So, can anyone guide me on this? Any help is appreciated.
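For what it's worth, the calls above fail with "is not a function" because they are misspelled: the EventEmitter methods are removeListener and removeAllListeners. Here is a minimal sketch of the usual fix, registering handlers once per connected socket instead of once per route hit; the event name is taken from the warning above, and server.listener assumes a hapi server:

var io = require('socket.io')(server.listener); // hapi exposes its HTTP server as server.listener

function onElementSorted(data) {
    // handle the event here
}

io.on('connection', function (socket) {
    // Register once per connected socket, not once per page load;
    // the socket's listeners go away when it disconnects.
    socket.on('board_5a40a863a7fbf12cf8a7f1b8_element_sorted', onElementSorted);
});

If you really must re-register inside a route handler, remove the old listener first, e.g. socket.removeAllListeners('eventname') followed by socket.on('eventname', callback), and the warning should go away.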

Mail-listener2 - Connection ending

tl;dr: Mail-listener2 appears to timeout, and I want to continually listen for emails without requiring my script to restart.
I'm using the mail-listener2 package (https://github.com/chirag04/mail-listener2/) in my node.js project. I would like to continually listen for emails arriving in a particular inbox and then parse them for further processing.
I have a connection established and my parsing all working; however, the IMAP connection appears to time out, or at least becomes unresponsive to new emails arriving.
As the mail-listener2 package relies on the imap npm package, I have taken a look through the code and attempted to reduce the IDLE timer so that it sends a request to the IMAP (Gmail) servers every 10 seconds instead of once every 30 minutes.
This indeed improved the situation however when waking this morning to check the logs I see the following:
<= 'IDLE OK IDLE terminated (Success)'
=> 'IDLE IDLE'
<= '+ idling'
=> DONE
<= 'IDLE OK IDLE terminated (Success)'
=> 'IDLE IDLE'
<= '+ idling'
[connection] Ended
[connection] Closed
The connection ended & closed appear to come from the core imap module. I thought sending an IDLE check would ensure that the disconnect does not happen, but as you can see this is not the case.
I have also tried looking into NOOP, but it appears to cause some other issues, with mails being read twice.
I understand that if my timers are too low, e.g. every few seconds, mails can be parsed continually because the calls block the server responses, which may be why I am seeing the NOOP issue above.
Without wanting to go off and keep experimenting with this, I'd like to know if others have hit this issue and overcome it.
For anyone interested: I've pulled together a bunch of the mail-listener2 forks. Some of these had approached the reconnection issue, so I refactored them slightly into a single implementation. I've also pulled in a few other bits not relevant to this issue.
https://www.npmjs.com/package/mail-listener-fixed
https://github.com/Hardware-Hacks/mail-listener-fixed/
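If you would rather keep the original package and handle reconnection yourself, here is a minimal sketch based on mail-listener2's documented events; the connection details are placeholders, and the 5-second delay is an arbitrary choice:

var MailListener = require('mail-listener2');

var mailListener = new MailListener({
    username: 'user@example.com',
    password: 'PASSWORD',
    host: 'imap.gmail.com',
    port: 993,
    tls: true,
    mailbox: 'INBOX',
    markSeen: true
});

mailListener.on('mail', function (mail, seqno, attributes) {
    // parse the email for further processing here
});

// Restart the listener whenever the underlying IMAP connection drops.
mailListener.on('server:disconnected', function () {
    console.log('IMAP connection closed, restarting...');
    setTimeout(function () { mailListener.start(); }, 5000);
});

mailListener.start();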

Meteor - Connection Timeout. No heartbeat received

I get the following error:
Connection timeout. No heartbeat received.
When accessing my Meteor app (http://127.0.0.1:3000). The application has been moved over to a new PC with the same code base; the server runs fine with no errors, and I can access the MongoDB. What would cause the above error?
The problem seems to occur when the collection is larger; I have it running on another computer which loads the collections instantaneously. The connection to the SockJS endpoint takes over a minute and grows in size before finally failing.
Meteor's DDP implements SockJS's heartbeats, which are used for long-polling. This is probably due to DDP's default heartbeat timeout of 15s; the heartbeat exists to prevent proxies from closing a seemingly idle connection (which can be worse). If a data access blocks the server long enough, in your case about a minute, DDP times out and then tries to reconnect. This can go on forever, and you may never get the process completed.
You can try disconnecting and reconnecting within a short amount of time, before DDP closes the connection, dividing the database access into shorter continuous chunks that you pick up again on each iteration, and see if the problem persists:
while (cursorCount <= data) {
    Meteor.onConnection(dbOp);
    Meteor.setTimeout(Meteor.disconnect, 1500); // adjust timeout here
    Meteor.reconnect();
    cursorCount++;
}

function dbOp(cursorCount) {
    // database operation here
    // pick up the operation at cursorCount where the last disconnect() left off
}
However, while disconnected all live updating will stop as well; explicitly reconnecting might make up for the smaller blocking.
See a discussion of this issue on the Meteor Google group and Meteor Hackpad.

Catching auth error on redis / heroku / node.js

I'm running a redis / node.js server and had a
[Error: Auth error: Error: ERR max number of clients reached]
My current setup is that I have a connection manager which adds connections until the maximum number of concurrent connections for my Heroku app (256, or 128 per dyno) is reached. Once it is, it just hands out an already existing connection. It's ultra fast and it's working.
However, yesterday night I got this error and I'm not able to reproduce it. It may be a rare error, but I'm not sleeping well knowing it's out there, because once the error is thrown, my app is no longer reachable.
So my questions would be:
is that kind of a connection manager a good idea?
would it be a better idea to use that manager to wait for 'idle' to be called and then close the connection, meaning I would have to re-establish a connection every time a request kicks in (this is what I wanted to avoid)?
how can I stop my app from going down? Should I just flush the connection pool whenever an error occurs?
What are your general strategies for handling multiple concurrent connections with a given maximum?
In case somebody is reading along:
The error was caused by a messed-up redis 0.8.x deployment that I pushed to live:
https://github.com/mranney/node_redis/issues/251
I was smart enough to remove the failed connections from the connection pool, but forgot to call .quit() on them, so the connections were out there in the wild but still counted as connections.
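For the record, the fix boils down to always calling .quit() when a client is taken out of the pool. Here is a minimal sketch against node_redis; the pool bookkeeping is a hypothetical placeholder for the question's connection manager:

var redis = require('redis');
var pool = [];

function discardClient(client) {
    // Drop the client from our own bookkeeping...
    pool = pool.filter(function (c) { return c !== client; });
    // ...and actually close the connection. Without quit(), the
    // server-side connection stays open and still counts against
    // Redis's max-clients limit.
    client.quit();
}

var client = redis.createClient();
client.on('error', function (err) {
    console.error('redis error:', err);
    discardClient(client);
});
pool.push(client);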
