Getting emails about hitting Pusher usage limits even though the stats in the backend say otherwise - pusher

I have been getting emails saying my account has hit Pusher usage limits, even though I haven't gotten anywhere close to the limits according to my account stats.
I have searched the internet for clarification and possible solutions, and only found this:
http://pusher.tenderapp.com/kb/faq-common-requests/half-open-connections-lead-to-temporarily-incorrect-connection-counts-and-webhook-call-delays
I have tried manually closing connections on page unload, but it still seems to cause some problems.
Any alternative solutions? What is this "ping/pong mechanism for detecting half-open connections" solution?
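
For reference, "manually closing the connection on page unload" with pusher-js would look roughly like this (a minimal sketch; the app key and cluster are placeholders):

```javascript
// Hypothetical sketch: explicitly close the Pusher connection before the page goes away.
// As the answer below notes, this still can't prevent every half-open connection.
const pusher = new Pusher('YOUR_APP_KEY', { cluster: 'mt1' });

window.addEventListener('beforeunload', () => {
  pusher.disconnect(); // ask pusher-js to close the WebSocket cleanly
});
```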

I used to work on Pusher support, and from my time there I know that sometimes the stats don't show spikes in connections if those spikes are very short-lived. You may be able to see them if you zoom into the usage stats in the Pusher dashboard for your app.
The FAQ on half-open connections is the correct one to look at and is potentially the cause of some of your problems.
The ping/pong mechanism you mention is Pusher's solution to this problem. The WebSocket protocol defines this mechanism, see:
http://www.whatwg.org/specs/web-apps/current-work/multipage/network.html#ping-and-pong-frames
However, not all clients have implemented this, so Pusher has added its own ping/pong solution to its protocol:
http://pusher.com/docs/pusher_protocol#ping-pong
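For illustration, here's roughly what that protocol-level exchange looks like from the client side (a minimal sketch; the endpoint, app key and version string are placeholders, and pusher-js normally handles this for you):

```javascript
// Connect straight to the Pusher WebSocket endpoint (placeholder values).
const ws = new WebSocket(
  'wss://ws-mt1.pusher.com/app/YOUR_APP_KEY?protocol=7&client=js&version=7.0'
);

ws.onmessage = (msg) => {
  const frame = JSON.parse(msg.data);

  // The server periodically checks liveness; answering pusher:ping with pusher:pong
  // is what lets Pusher tell a healthy-but-idle connection from a half-open one.
  if (frame.event === 'pusher:ping') {
    ws.send(JSON.stringify({ event: 'pusher:pong', data: {} }));
  }
};
```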
I don't believe there is anything that you can do to stop these problems occurring, it's a networking issue where closed connections aren't being detected by the server.

Related

Best way to detect a loss of connection to a server using Angular 4 and Nodejs

Essentially, I'm trying to work out the best way to ensure that a user is connected to the server / the internet and is thus able to make requests in my application without error.
I have come across various solutions, but I can't really decide which is the best performing or most useful.
WebSockets, using Socket.io to keep an open connection with the server for each client. This also opens up the possibility of real-time updates in my app, which could be nice in the future. However, having lots of open sockets is bound to hit performance hard.
Polling, i.e. having an endpoint in my API that the Angular app hits every 5 seconds or so to check that the user is connected. Again, it seems like a bad idea to hit the server that much.
Waiting for an error, then starting to poll every couple of seconds until the connection is re-established. This is a slight variation on the above. However, you are still waiting for something to fail first, which isn't good for the user experience.
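For what it's worth, that third option might look something like this in plain JavaScript (a rough sketch; the /ping endpoint and the 2-second interval are assumptions, not part of any framework):

```javascript
// Assume we're online until a request fails, then poll a lightweight endpoint
// until the connection comes back.
let online = true;

async function checkConnection() {
  try {
    const res = await fetch('/ping', { cache: 'no-store' });
    online = res.ok;
  } catch (err) {
    online = false;
  }
  if (!online) {
    setTimeout(checkConnection, 2000); // keep retrying every couple of seconds
  }
}

// Call this from whatever error handler wraps your normal API requests.
function onRequestError() {
  if (online) {
    online = false;
    checkConnection();
  }
}
```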
Does anybody have any informed input on this issue?
Thanks

socket.io disconnects clients when idle

I have a production app that uses socket.io (node.js back-end) to distribute messages to all the logged-in clients. Many of my users are experiencing disconnections from the socket.io server. The normal use case for a client is to keep the web app open for the entire working day. Most of that time the app sits idle, but it is still open - until the socket.io connection is lost and the app kicks them out.
Is there any way I can make the connection more reliable so my users are not constantly losing their connection to the socket.io server?
It appears that all we can do here is give you some debugging advice so that you might learn more about what is causing the problem. So, here's a list of things to look into.
Make sure that socket.io is configured for automatic reconnect. In the latest versions of socket.io, auto-reconnect defaults to on, but you may need to verify that no piece of code is turning it off (a client-side configuration sketch follows this list).
Make sure the client is not going to sleep such that all network connections become inactive and get disconnected.
In a working client (before it has disconnected), use the Chrome debugger's Network tab, WebSockets sub-tab, to verify that you can see regular ping messages going between client and server. You will have to open the debug window, go to the Network tab and then refresh your web page with that debug window open to start seeing the network activity. You should see a funky-looking URL that has ?EIO=3&transport=websocket&sid=xxxxxxxxxxxx in it. Click on that, then click on the "Frames" sub-tab. At that point, you can watch individual webSocket packets being sent. You should see tiny packets with length 1 every once in a while (these are the ping and pong keep-alive packets). There's a sample screenshot below that shows what you're looking for. If you aren't seeing these keep-alive packets, then you need to resolve why they aren't there (likely some socket.io configuration or version issue).
Since you mentioned that you can reproduce the situation, one thing you want to know is how the socket is getting closed (initiated by the client end or by the server end). One way to gather info on this is to install a network analyzer on your client so you can literally watch every packet that goes over the network to/from your client. There are many different analyzers and many are free. I personally have used Fiddler, but I regularly hear people talking about Wireshark. What you want to see is exactly what happens on the network when the client loses its connection. Does the client decide to send a close socket packet? Does the client receive a close socket packet from someone? What happens on the network at the time the connection is lost?
webSocket network view in Chrome Debugger
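On the first point above (auto-reconnect), a minimal client-side sketch looks like this (option and event names are from the socket.io 2.x-era client; the values shown are just the defaults):

```javascript
const socket = io('https://your-server.example.com', {
  reconnection: true,          // on by default; make sure nothing turns it off
  reconnectionAttempts: Infinity,
  reconnectionDelay: 1000,     // wait 1s before the first retry
  reconnectionDelayMax: 5000   // back off to at most 5s between retries
});

socket.on('disconnect', (reason) => {
  console.log('disconnected:', reason);
});

socket.on('reconnect', (attempt) => {
  console.log('reconnected after', attempt, 'attempt(s)');
});
```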
The most likely cause is one end closing the WebSocket due to inactivity. This is commonly done by load balancers, but there may be other culprits. The fix is to simply send a message to every client every so often (I use 30 seconds, but depending on the issue you may be able to go higher). This prevents the connection from appearing inactive and thus getting closed.
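A minimal sketch of that keep-alive approach on the server (the event name and 30-second interval are arbitrary):

```javascript
const http = require('http');
const server = http.createServer();
const io = require('socket.io')(server);

// Broadcast a trivial message every 30 seconds so idle connections never look
// inactive to load balancers or proxies sitting between client and server.
setInterval(() => {
  io.emit('keep-alive', Date.now());
}, 30 * 1000);

server.listen(3000);
```

Clients can simply ignore the event, or listen for it if you want to surface connection health in the UI.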

Optimizing Node.js for a large number of outbound HTTP requests?

My node.js server is experiencing times when it becomes slow or unresponsive, even occasionally resulting in 503 gateway timeouts when attempting to connect to the server.
I am 99% sure (based upon tests that I have run) that this lag is coming specifically from the large number of outbound requests I am making with the node-oauth module to contact external APIs (Facebook, Twitter, and many others). Admittedly, the number of outbound requests being made is relatively large (on the order of 30 or so per minute). Even worse, this frequently means that the corresponding inbound requests to my server can take ~5-10 seconds to complete. However, a previous version of my API, written in PHP, was able to handle this volume of outbound requests without any problem at all. In fact, the CPU usage for the same number of requests (or even fewer) with my Node.js API is about 5x that of my PHP API.
So, I'm trying to isolate where I can improve upon this, and most importantly to make sure that 503 timeouts do not occur. Here's some stuff I've read about or experimented with:
This article (by LinkedIn) recommends turning off socket pooling. However, when I contacted the author of the popular nodejs-request module, his response was that this was a very poor idea.
I have heard it said that setting "http.globalAgent.maxSockets" to a large number can help, and indeed it did seem to reduce bottlenecking for me.
I could go on, but in short, I have been able to find very little definitive information about how to optimize performance so these outbound connections do not lag my inbound requests from clients.
Thanks in advance for any thoughts or contributions.
FWIW, I'm using express and mongoose as well, and my servers are hosted on the Amazon Cloud (2x M1.Large for the node servers, 2x load balancers, and 3x M1.Small MongoDB instances).
It sounds to me like the Agent is capping your requests at the default level of 5 per host. Your tests show that cranking up the agent's maxSockets helped, so you should do that.
You can prove this is the issue by firing up a packet sniffer, or adding more debugging code to your application, to show that this is the limiting factor.
http://engineering.linkedin.com/nodejs/blazing-fast-nodejs-10-performance-tips-linkedin-mobile
Or, as the article above suggests, disable the agent altogether.
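Both suggestions are sketched below (the numbers are purely illustrative, and the 5-per-host default applies to the older Node.js versions this question is about):

```javascript
const http = require('http');
const https = require('https');

// Option 1: raise the global agent's per-host socket limit (default was 5 on old Node).
http.globalAgent.maxSockets = 50;
https.globalAgent.maxSockets = 50;

// Option 2: bypass the pooling agent entirely for a given request, as the LinkedIn
// article suggests (a fresh socket per request, no queueing behind the limit).
// The host and path here are placeholders.
https.get({ host: 'api.example.com', path: '/v1/resource', agent: false }, (res) => {
  res.resume(); // drain the response so the socket can close
});
```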

Do page refreshes defeat the use of NodeJS/Socket.IO and cause too much drain on a server in creating new connections?

In a current web application, the UI cannot be changed to accommodate refreshing individual page sections rather than the entire page (linking to other pages, etc.). Eventually this would be placed in a non-updating div; however, for now, page links will destroy and re-create the web worker, forcing a new socket connection to be created. What is the drain on the system when new socket connections have to be created using Socket.IO?
Given that the entire page will refresh, is this still a good solution?
-- UPDATE --
The application for this is system-based push notifications like "friend" logins, etc. I see this as something similar to private messages in a chat environment. I would want to broadcast these events, and the client side would manage who actually gets to see the updates. Does this sound like the right way of doing it?
There are a lot of factors at play here. I'll list a few of them:
What browser is the user using and what is their system like?
Some browsers and slower machines will require more overhead to open the socket. That said, this should be pretty marginal with any modern machine and connection.
What transports do you have enabled?
You can configure Socket.IO to use many different types of transports, including WebSocket, XHR polling, JSONP polling, and even Flash. Which transport actually gets used will be based on what you have configured and on what the user's browser supports.
Some of these transports, such as Flash (disabled by default), will obviously require significantly more overhead to set up. Others, like XHR polling, are inherently inefficient, but are largely unaffected by new page requests since you are making multiple poll requests anyway (a transport configuration sketch follows this answer).
What are you doing when a Socket.IO connection is established, and when a disconnect occurs?
If you have heavy crunching that happens in either of these scenarios, then frequent reconnects are going to be a problem. That said, you really shouldn't be doing heavy crunching on connects, since that pretty much kills XHR polling.
How frequently are these page refreshes happening?
Is it only occasionally, when a user clicks a link? Or do you have something that causes the page to refresh extremely frequently?
What kind of hardware and connection do your servers have?
This is a pretty big variable that is hard to pin down.
While I can't give you a definitive answer, hopefully this will help you think about and optimize your scenario. In general, creating new sockets fairly frequently should not be an issue, but I'd recommend performing load or stress testing to see what your particular system can handle.
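As a concrete example of the transports point, here's a hedged sketch of constraining which transports Socket.IO will try (this uses the 0.9-era server API that matches the transport names above; newer versions use different option names):

```javascript
var io = require('socket.io').listen(8080);

io.set('transports', [
  'websocket',     // cheapest to keep open once established
  'xhr-polling',   // heavier fallback, but tolerant of page refreshes
  'jsonp-polling'
  // 'flashsocket' is deliberately left out (it is disabled by default anyway)
]);
```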

Speeding up Socket.IO

When I listen for a client connection in Socket.IO, there seems to be a latency of 8-9 seconds as it falls back to XHR. This is too slow for most purposes, as I'm using Socket.IO to push data to users' news feeds, and a lot can happen in 8 or 9 seconds.
Is there any way to speed up this fallback?
EDIT:
After deploying to Nodejitsu's VPS I tried this again and the socket connection was nearly immediate (enough that a user wouldn't notice). I'm only experiencing this on my local machine. So the question may actually be: why is it so slow on my local machine?
This question is almost impossible to answer without more information on your local setup, but it's interesting that you're failing over to XHR. The following question might explain why it's failing over to XHR, but not if you're able to use the same browser successfully once it's published.
Socket.io reverting to XHR / JSONP polling for no apparent reason
Another potential problem I've read about is that your browser has cached the incorrect transport method. You could try clearing your browser cache and reconnecting to see if that gets around the problem.
https://groups.google.com/group/socket_io/browse_thread/thread/e6397e89efcdbcb7/a3ce764803726804
Lastly, if you're unable to figure out why it's not using WebSockets or FlashSockets, you could try removing them as options from your socket.io configuration so that, when you're developing locally, you can at least get past that delay for quicker development.
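That last suggestion might look roughly like this on the client (a hedged sketch using the 0.9-era client options that match this question's vintage):

```javascript
// While developing locally, skip the transports that are timing out so the
// 8-9 second fallback delay never happens.
var socket = io.connect('http://localhost:3000', {
  transports: ['xhr-polling']   // leave out 'websocket' and 'flashsocket'
});

socket.on('connect', function () {
  console.log('connected');
});
```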
