Recycle Ably realtime connections - node.js

I came across a strange problem.
In our application (based on React Native) we had 70 concurrent clients, but the peak in the monitoring page showed 380 connections.
I assume clients exit and come back, or reload the app somehow, so new Ably connections get created and the peak increases.
Now the question: is there any way to force Ably to disconnect all unused connections so the peak decreases? (Maybe from the back end?)
Thanks.

By default, the connection will stay active until closed explicitly (using connection.close()), or two minutes after the connection is disconnected unexpectedly to allow for connection state recovery.
Recent versions of ably-js in a browser environment automatically close the connection on page reload (that is, the closeOnUnload client option defaults to true) -- this is just a connection.close() added to a beforeunload handler. The trouble is that this isn't going to do anything in a React Native environment, which doesn't use that event.
So you probably just need to actively manage your Ably connection using React Native app lifecycle events. If you don't want it to stay active when the app is backgrounded, then in the handler for the app going into the background (per the React Native AppState event), close the Ably connection. Then re-open it (call connect()) when the app becomes active again.
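A minimal sketch of that wiring, assuming ably-js and the React Native AppState API; the key is a placeholder and the event handling is illustrative rather than a drop-in implementation:

    import * as Ably from 'ably';
    import { AppState } from 'react-native';

    const realtime = new Ably.Realtime('YOUR_ABLY_API_KEY'); // placeholder key

    AppState.addEventListener('change', (state) => {
      if (state === 'active') {
        realtime.connection.connect();   // re-open when the app returns to the foreground
      } else if (state === 'background') {
        realtime.connection.close();     // close so a backgrounded app stops counting toward the peak
      }
    });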
For other possible reasons your peak connection count may be higher than expected, see Why are my peak connection counts higher than expected? and How does Ably count peak connections?.

Related

IIS Idle Time-out triggers even though a SignalR connection is still present

In my project, there is a process that can run for a very long time (> 20 min.). Its progress is transmitted to interested clients as a percentage value using SignalR. Now I noticed that the server is terminated after 20 minutes (the IIS default Idle Time-out), even though a client is connected and actively receiving data via SignalR.
Could it be that communication via WebSockets is not monitored by the IIS routine that resets the timeout? Is there any way to work around the problem? Or have I implemented something wrong?

SignalR randomly loses connection to the server side

We use SignalR with an Azure web app in an ASE for our real-time web application.
We noticed that SignalR sometimes loses connection to the hub in no particular pattern.
This happens both during high-traffic periods as well as low-traffic ones, but I am more interested in why this is happening during low-traffic periods.
Note: We have a so-called "1-minute auto refresh" which is triggered by the JavaScript on the page. That seems to be working.
Anyone experienced similar issues using SignalR, and if so, how did you resolve this?
Thank you
(I'm a tester, don't be too harsh! lol)
I have definitely experienced this, and it drove me nuts.
By default, a SignalR client will try to reconnect for 20 seconds after losing connection to its Hub. After 20 seconds without a successful reconnect, the disconnected event is raised on JavaScript clients. After disconnected is raised, the client will give up trying to reconnect and the connection is dead. This page describes SignalR lifecycle events and offers some code on trying to reconnect after the disconnected event is raised.
Now, as to why this happens: I've noticed that an App Pool recycle can take longer than 20 seconds in some apps, which can lead to a disconnected event. Intermittent drops in network connectivity between your JavaScript clients and the Hub that last more than 20 seconds can cause this as well. The bottom line is that things beyond your control can go wrong, and you cannot prevent them in code. Therefore, put in place the logic to attempt to reconnect after your JavaScript client receives the disconnected event.
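A minimal sketch of that reconnect logic, assuming the classic ASP.NET SignalR JavaScript (jQuery) client; the 5-second delay is an arbitrary choice:

    $.connection.hub.disconnected(function () {
        setTimeout(function () {
            $.connection.hub.start();   // start a brand-new connection after the old one has died
        }, 5000);                       // wait a few seconds so clients don't hammer a recycling server
    });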

socket.io disconnects clients when idle

I have a production app that uses socket.io (node.js back end) to distribute messages to all the logged-in clients. Many of my users are experiencing disconnections from the socket.io server. The normal use case is for a client to keep the web app open for the entire working day. Most of that time the app sits idle, but it is still open, until the socket.io connection is lost and the app kicks them out.
Is there any way I can make the connection more reliable so my users are not constantly losing their connection to the socket.io server?
It appears that all we can do here is give you some debugging advice so that you might learn more about what is causing the problem. So, here's a list of things to look into.
Make sure that socket.io is configured for automatic reconnect. In the latest versions of socket.io, auto-reconnect defaults to on, but you may need to verify that no piece of code is turning it off (see the sketch after this list).
Make sure the client is not going to sleep in a way that makes all network connections become inactive or get disconnected.
In a working client (before it has disconnected), use the Chrome debugger, Network tab, WebSockets sub-tab to verify that you can see regular ping messages going between client and server. You will have to open the debug window, get to the Network tab and then refresh your web page with that debug window open to start to see the network activity. You should see a funky-looking URL that has ?EIO=3&transport=websocket&sid=xxxxxxxxxxxx in it. Click on that. Then click on the "Frames" sub-tab. At that point, you can watch individual webSocket packets being sent. You should see tiny packets with length 1 every once in a while (these are the ping and pong keep-alive packets). There's a sample screenshot below that shows what you're looking for. If you aren't seeing these keep-alive packets, then you need to resolve why they aren't there (likely some socket.io configuration or version issue).
Since you mentioned that you can reproduce the situation, one thing you want to know is how the socket is getting closed (client-end initiated or server-end initiated). One way to gather info on this is to install a network analyzer on your client so you can literally watch every packet that goes over the network to/from your client. There are many different analyzers and many are free. I personally have used Fiddler, but I regularly hear people talking about Wireshark. What you want to see is exactly what happens on the network when the client loses its connection. Does the client decide to send a close-socket packet? Does the client receive a close-socket packet from someone? What happens on the network at the time the connection is lost?
[Screenshot: webSocket network view in Chrome Debugger]
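For the first point above, a minimal sketch of pinning the reconnection options on a socket.io 1.x+ client; these are the defaults spelled out explicitly so nothing can silently disable them, and the URL is a placeholder:

    const socket = io('https://example.com', {   // placeholder server URL
      reconnection: true,              // keep trying to reconnect (the default)
      reconnectionAttempts: Infinity,
      reconnectionDelay: 1000,         // start retrying after 1 second...
      reconnectionDelayMax: 5000       // ...backing off to at most 5 seconds between attempts
    });

    socket.on('disconnect', (reason) => {
      console.log('socket disconnected:', reason);   // logging the reason helps the debugging above
    });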
The most likely cause is one end closing a WebSocket due to inactivity. This is commonly done by load balancers, but there may be other culprits. The fix for this is to simply send a message every so often (I use 30 seconds, but depending on the issue you may be able to go higher) to every client. This will prevent it from appearing to be inactive and thus getting closed.
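A minimal sketch of that keep-alive on the node.js/socket.io server; the 30-second interval matches the suggestion above:

    setInterval(() => {
      io.emit('keep-alive', Date.now());   // tiny broadcast so idle connections still carry traffic
    }, 30 * 1000);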

What does a connection mean in Pusher?

I have added Pusher to my startup's web page, but there is something that is troubling me:
Since I am on the sandbox plan (which allows a maximum of 20 connections), I have been testing my web page on several computers (using Pusher), but when I log in to my account it says that I am using 6 connections, even when no one is using my web page. So what do these connections mean? What counts as a connection? When a web page is closed, does the connection counter decrease?
Any information about this will be great.
The connection count on pricing plans indicates the number of simultaneous connections allowed.
A connection is counted as a WebSocket connection to Pusher. When using the Pusher JavaScript library a new WebSocket connection is created when you create a new Pusher('APP_KEY'); instance.
Channel subscriptions are created over the existing WebSocket connection, and do not count towards your connection quota (there is no limit on the number allowed per connection).
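To illustrate, a minimal sketch using the Pusher JavaScript library; the channel and event names are made up, and newer pusher-js versions may also require a cluster option:

    const pusher = new Pusher('APP_KEY');       // one instance = one WebSocket connection

    const orders = pusher.subscribe('orders');  // subscriptions reuse that single connection
    const alerts = pusher.subscribe('alerts');  // still only one connection against your quota

    alerts.bind('new-alert', (data) => {
      console.log(data);
    });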
Note: connections automatically close when a user navigates to another web page or closes their web browser so there is no need to do this manually.

Socket.io huge server response time when using xhr-polling

I am trying to scale a messaging app. I'm using Node.js with Socket.io and Redis-Store on the back end. The clients can be iPhone native browsers, Android browsers, etc.
I am using SSL for the node connection and Nginx to load balance the socket connections. I am not clustering my socket.io app; instead I am load balancing over 10 node servers (we have a massive amount of users). Everything looks fine when the transport is WebSockets; however, when it falls back to xhr-polling (in the case of old Android phones) I see a HUGE response time of up to 3000 rpm in New Relic. And I have to restart my node servers every hour or so, otherwise the server crashes.
I was wondering if I am doing anything wrong, and if there are any measures I can take to scale socket.io when using the xhr-polling transport, like increasing or decreasing the poll duration?
You are not doing anything wrong. xhr-polling is also called long polling; the name comes from the fact that the connection is kept open longer, usually until some answer can be sent down the wire. After the connection closes, a new connection is opened, waiting for the next piece of information.
You can read more on this here http://en.wikipedia.org/wiki/Push_technology#Long_polling
New Relic shows you the response time of the polling request. Socket.IO has a default "polling duration" of 20 seconds.
You will get a higher RPM for a smaller polling duration and a lower RPM for a higher polling duration. I would consider increasing the polling duration, or just keeping the 20-second default.
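If you do want to change it, a minimal sketch, assuming the socket.io 0.9.x line (which the Redis store setup suggests); newer versions expose different knobs, so treat this as illustrative only:

    var io = require('socket.io').listen(server);   // assumes an existing http server

    io.set('polling duration', 30);   // hold each xhr-polling request open for 30 s instead of 20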
Also, to keep New Relic from displaying irrelevant data for the long polling, you can add ignore rules in the newrelic.js that you require in your app. This is also detailed in the newrelic npm module documentation here https://www.npmjs.org/package/newrelic#rules-for-naming-and-ignoring-requests
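For example, a minimal sketch of a newrelic.js with such an ignore rule; the app name and license key are placeholders:

    exports.config = {
      app_name: ['My Application'],
      license_key: 'YOUR_LICENSE_KEY',
      rules: {
        ignore: [
          '^/socket\\.io/.*/xhr-polling'   // don't record the long-polling requests as transactions
        ]
      }
    };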
