SignalR randomly loses connection to the server side - Azure

We use SignalR with an Azure web app in an ASE for our real-time web application.
We noticed that SignalR sometimes loses its connection to the hub in no particular pattern.
This happens both during high-traffic periods and low-traffic ones, but I am more interested in why it is happening during low-traffic periods.
Note: we have a so-called "1-minute auto refresh" which is triggered by the JavaScript on the page. That seems to be working.
Has anyone experienced similar issues using SignalR, and if so, how did you resolve them?
Thank you
(I'm a tester, don't be too harsh! lol)

I have definitely experienced this, and it drove me nuts.
By default, a SignalR client will try to reconnect for 20 seconds after losing connection to its Hub. After 20 seconds without a successful reconnect, the disconnected event is raised on JavaScript clients. After disconnected is raised, the client will give up trying to reconnect and the connection is dead. This page describes SignalR lifecycle events and offers some code on trying to reconnect after the disconnected event is raised.
Now as to why this happens. I've noticed that an App Pool recycle can take longer than 20 seconds in some apps, which can lead to a disconnected event. Intermittent drops in network connectivity between your JavaScript clients and the Hub that last more than 20 seconds can cause this as well. The bottom line is that things beyond your control can go wrong, and you cannot prevent them in code. Therefore, put logic in place to attempt a reconnect after your JavaScript client receives the disconnected event (a sketch follows below).
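For reference, here's a minimal reconnect sketch, assuming the classic ASP.NET SignalR 2.x JavaScript client (the jQuery $.connection API); the 5-second retry delay is arbitrary:

    // Minimal reconnect sketch for the classic SignalR 2.x JavaScript client.
    // Assumes the jQuery client script is loaded; names and delays are illustrative.
    var connection = $.connection.hub;

    function startConnection() {
        connection.start()
            .done(function () { console.log('SignalR connected, id = ' + connection.id); })
            .fail(function () { console.log('SignalR failed to connect'); });
    }

    // Raised after SignalR's own 20-second reconnect window has been exhausted.
    connection.disconnected(function () {
        setTimeout(startConnection, 5000); // retry after an arbitrary 5-second delay
    });

    startConnection();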

Related

IIS Idle Time-out triggers even though a SignalR connection is still present

In my project, there is a process that can run for a very long time (> 20 min.). The progress is transmitted to interested clients as a percentage value using SignalR. Now I have noticed that the server is abruptly terminated after 20 minutes (the IIS default Idle Time-out), even though a client is connected and actively receiving data via SignalR.
Could it be that communication via WebSockets is not monitored by the IIS routine that resets the timeout? Is there any way to work around the problem? Or have I implemented something wrong?
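One possible client-side workaround (a sketch only, not a confirmed fix): have connected clients issue an occasional plain HTTP request, which IIS does count as activity for the idle timer; the /keepalive endpoint and the 5-minute interval below are assumptions.

    // Hypothetical keep-alive: ping an ordinary HTTP endpoint so the worker process
    // is not considered idle while the long-running job streams progress over SignalR.
    // The '/keepalive' URL is an assumption - any cheap GET against the app will do.
    setInterval(function () {
        fetch('/keepalive', { cache: 'no-store' })
            .catch(function (err) { console.log('keep-alive request failed', err); });
    }, 5 * 60 * 1000); // every 5 minutes, well inside the 20-minute idle window

Alternatively, the Idle Time-out can be raised or set to 0 on the application pool, at the cost of keeping the worker process alive.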

Azure IoT Hub transmission fails

I am using the Azure IoT SDK in an ESP32-based device to connect to IoT Hub using MQTT, sending messages with QoS 1. When the connection is good, all works exactly as intended. However, when we deploy to areas where connectivity seems more spotty, the messages often time out (i.e. the callback is called with the timeout error). MQTT still thinks it has a connection (i.e. the disconnect callback has not been called), but all sends end up timing out. Interestingly, I see that when I send C2D messages, they do get picked up.
I have configured the firmware to tear down and rebuild the MQTT connection in these scenarios and that sometimes helps but not always.
Two questions:
1. Why does this seem to happen, and are there parameters I can twiddle to prevent it? I have reduced the size of the packets, but that did not seem to make a difference.
2. What is the appropriate way of handling this condition? I have seen scenarios where, once the communication gets "stuck" like this, it can stay stuck for tens of minutes.
Hope there's someone from the MSFT IoT group listening... :)

Recycle Ably realtime connections

I kind of came across a strange problem.
In our application (based on React Native) we hosted 70 concurrent clients, but the peak in the monitoring page showed 380 connections.
I assume clients exit and come back or reload the app somehow, so Ably connections are regenerated and therefore the peak increases.
Now the question: is there any way to force Ably to disconnect all unused connections so that the peak decreases? (Maybe from the back end.)
Thanks.
By default, the connection will stay active until closed explicitly (using connection.close()), or two minutes after the connection is disconnected unexpectedly to allow for connection state recovery.
Recent versions of ably-js in a browser environment automatically close the connection on page reload (that is, the closeOnUnload client option defaults to true); this is just a connection.close() added to a beforeunload handler. The trouble is that this isn't going to do anything in a React Native environment, which doesn't use that event.
So you probably just need to actively manage your Ably connection using React Native app lifecycle events. If you don't want it to stay active when the app is backgrounded, then in the handler for the app going to the background (per the React Native AppState event), close the Ably connection. Then re-open it (call connect()) when the app is active again.
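A minimal sketch of that approach, assuming ably-js and the React Native AppState API; the API key is a placeholder:

    // Sketch: tie the Ably connection to the React Native app lifecycle.
    import * as Ably from 'ably';
    import { AppState } from 'react-native';

    const ably = new Ably.Realtime({ key: 'YOUR_ABLY_API_KEY' }); // placeholder key

    AppState.addEventListener('change', (state) => {
      if (state === 'background') {
        // Close explicitly so the connection doesn't linger (and keep counting)
        // for the ~2-minute recovery window after an unexpected disconnect.
        ably.connection.close();
      } else if (state === 'active') {
        ably.connection.connect();
      }
    });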
For other possible reasons your peak connection count may be higher than expected, see Why are my peak connection counts higher than expected? and How does Ably count peak connections?.

How to not receive the accumulated pushes from Pusher after returning online?

How can one prevent Pusher from automatically pushing all the piled-up messages to the client after the client comes back online, i.e. after the client re-establishes the connection?
After exchanging messages with a Pusher support engineer, the issue became clearer.
The connection may still be open even when the laptop goes to sleep (this behaviour varies among computers). Thus, after waking up, it may still be connected. (This is exactly what happened in my case, so everything looked as if Pusher had pushed the accumulated messages.)
However, the default activity timeout is 120 s, and the time to wait for a pong response before closing the connection is 30 s. So leaving the machine asleep for around three minutes would make the client disconnect completely, and the behaviour I encountered would not take place.
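For reference, both timeouts are exposed as pusher-js client options (values in milliseconds); a minimal sketch with a placeholder app key and cluster:

    // Sketch: the two timeouts mentioned above as pusher-js client options.
    import Pusher from 'pusher-js';

    const pusher = new Pusher('APP_KEY', {    // placeholder key
      cluster: 'eu',                          // placeholder cluster
      activityTimeout: 120000, // default: send a ping after 120 s of inactivity
      pongTimeout: 30000       // default: close the connection if no pong within 30 s
    });

    // Optional: log connection state transitions to see the disconnect happen.
    pusher.connection.bind('state_change', function (states) {
      console.log('Pusher connection: ' + states.previous + ' -> ' + states.current);
    });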
Pusher doesn't presently buffer messages for delivery upon reconnection, so the functionality described in the question isn't something an application needs to consider right now.
Future releases may contain something called the Event Buffer which will offer this functionality. Documentation will be released around that time to detail how to avoid receiving buffered events.

socket.io disconnects clients when idle

I have a production app that uses socket.io (node.js back-end) to distribute messages to all the logged-in clients. Many of my users are experiencing disconnections from the socket.io server. The normal use case for a client is to keep the web app open for the entire working day. Most of that time the app sits idle, but it is still open - until the socket.io connection is lost and the app kicks them out.
Is there any way I can make the connection more reliable so my users are not constantly losing their connection to the socket.io server?
It appears that all we can do here is give you some debugging advice so that you might learn more about what is causing the problem. So, here's a list of things to look into.
Make sure that socket.io is configured for automatic reconnect (see the sketch after this list). In the latest versions of socket.io, auto-reconnect defaults to on, but you may need to verify that no piece of code is turning it off.
Make sure the client is not going to sleep in a way that causes all network connections to become inactive or get disconnected.
In a working client (before it has disconnected), use the Chrome debugger, Network tab, WebSockets sub-tab to verify that you can see regular ping messages going between client and server. You will have to open the debug window, go to the Network tab, and then refresh your web page with that debug window open in order to see the network activity. You should see a funky-looking URL with ?EIO=3&transport=websocket&sid=xxxxxxxxxxxx in it. Click on that, then click on the "Frames" sub-tab. At that point you can watch individual webSocket packets being sent. You should see tiny packets of length 1 every once in a while (these are the ping and pong keep-alive packets). There's a sample screen shot below that shows what you're looking for. If you aren't seeing these keep-alive packets, then you need to resolve why they aren't there (likely some socket.io configuration or version issue).
Since you mentioned that you can reproduce the situation, one thing you want to know is how the socket is getting closed (client-end initiated or server-end initiated). One way to gather info on this is to install a network analyzer on your client so you can literally watch every packet that goes over the network to/from your client. There are many different analyzers and many are free. I personally have used Fiddler, but I regularly hear people talking about Wireshark. What you want to see is exactly what happens on the network when the client loses its connection. Does the client decide to send a close-socket packet? Does the client receive a close-socket packet from someone? What happens on the network at the moment the connection is lost?
(Screenshot: webSocket network view in the Chrome debugger)
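On the first point above, a minimal sketch of the socket.io client options that control automatic reconnection; the URL and retry values are illustrative:

    // Sketch: socket.io client with automatic reconnection explicitly enabled.
    var socket = io('https://example.com', {
      reconnection: true,            // the default, but make sure nothing disables it
      reconnectionAttempts: Infinity,
      reconnectionDelay: 1000,       // first retry after 1 s...
      reconnectionDelayMax: 30000    // ...backing off to at most 30 s between attempts
    });

    socket.on('connect', function () { console.log('connected: ' + socket.id); });
    socket.on('disconnect', function (reason) { console.log('disconnected: ' + reason); });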
The most likely cause is one end closing a WebSocket due to inactivity. This is commonly done by load balancers, but there may be other culprits. The fix for this is to simply send a message every so often (I use 30 seconds, but depending on the issue you may be able to go higher) to every client. This will prevent it from appearing to be inactive and thus getting closed.
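A minimal sketch of that heartbeat on the node.js side; the event name, the 30-second interval, and the httpServer variable are assumptions:

    // Sketch (server side): periodic application-level message so intermediaries
    // such as load balancers never see the connection as idle.
    const io = require('socket.io')(httpServer); // httpServer: your existing HTTP server

    setInterval(function () {
      io.emit('heartbeat', Date.now());
    }, 30 * 1000);

    // Clients can ignore the event, or listen to it to detect a stale connection:
    // socket.on('heartbeat', function (ts) { lastHeartbeat = ts; });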

Resources