SignalR & Windows 7's IIS 7.5 hangs

I am testing a new chat application with SignalR on IIS/ASP.NET 4 and friends (MySQL, etc.).
I can use SignalR with one or two clients (IE9, Chrome) and it works. After a few iterations (refresh, change code, refresh, etc.) requests to the server freeze for minutes, and to keep working I have to restart IIS.
I took a dump of the IIS process and saw about ten connections to poll.ashx/connect?transport=foreverFrame that had been open for minutes.
(I am using the .ashx extension so I can avoid "runAllManagedModulesForAllRequests", which hurts performance and is unnecessary here.)
I tried GlobalHost.Configuration.ConnectionTimeout = TimeSpan.FromSeconds(30) and GlobalHost.Configuration.KeepAlive = null; same problem. In the dump I can still see requests that have been open for two minutes or more.
I know that IIS on Windows 7 is limited to ten concurrent requests, so I opened Fiddler and aborted all sessions.
IIS still hangs after terminating them.
I tried closing Fiddler (and with it the TCP/HTTP connections); it still hangs.
What can I do to stop these "lost" connections? The client (Fiddler or the browser) has closed the TCP/HTTP connections, yet in IIS they are still alive.
I do not use hubs.
Thanks!

Related

IIS Idle Time-out triggers even though a SignalR connection is still present

In my project, there is a process that can run for a very long time (> 20 min.). Its progress is transmitted to interested clients as a percentage value using SignalR. I noticed that the server process is abruptly terminated after 20 minutes (the IIS default Idle Time-out), even though a client is connected and actively receiving data via SignalR.
Could it be that WebSocket traffic is not counted as activity by the IIS logic that resets the idle timeout? Is there any way to work around the problem? Or have I implemented something wrong?

SignalR long polling repeatedly calls /negotiate and /hub POST and returns 404 occasionally on Azure Web App

We have enabled SignalR on our ASP.NET Core 5.0 web project running on an Azure Web App (Windows App Service Plan). Our SignalR client is an Angular client using the @microsoft/signalr NPM package (version 5.0.11).
We have a hub located at /api/hub/notification.
Everything works as expected for most of our clients: the WebSocket connection is established and we can call methods from client to server and vice versa.
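The question doesn't show the Angular client code; for reference, a typical connection against that hub path with @microsoft/signalr 5.x would look roughly like the sketch below (the automatic-reconnect call and log level are just illustrative, not taken from the question):

import { HubConnectionBuilder, LogLevel } from "@microsoft/signalr";

// Build a connection against the hub exposed at /api/hub/notification.
const connection = new HubConnectionBuilder()
    .withUrl("/api/hub/notification")       // negotiate + transport selection happen here
    .withAutomaticReconnect()               // illustrative; not confirmed by the question
    .configureLogging(LogLevel.Information)
    .build();

// Start the connection; the client negotiates and falls back to SSE or long polling if WebSockets fail.
connection.start().catch(err => console.error("SignalR connection failed:", err));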
For a few of our clients, we see a massive number of requests to POST /api/hub/notification/negotiate and POST /api/hub/notification within a short period of time (multiple requests per minute per client). It seems that those clients switch to long polling instead of using WebSockets, since we see the POST /api/hub/notification requests.
We suspect that the affected clients may sit behind a proxy or a firewall that blocks WebSockets, which would be why the connection falls back to long polling in the first place.
The following screenshot shows requests to the hub endpoints for one single user within a short period of time. The list is very long since this pattern repeats for as long as the user keeps our website open. We see two strange things:
The client repeatedly calls /negotiate twice every 15 seconds.
The call to POST /notification?id=<connectionId> takes exactly 15 seconds and the following call with the same connection ID returns a 404 response. Then the pattern repeats and /negotiate is called again.
For testing purposes, we enabled only long polling in our client. This works for us as expected too. Unfortunately, we currently don't have access to the browsers or the network of the users where this behavior occurs, so it is hard for us to reproduce the issue.
Some more notes:
We currently have just one single instance of the Web App running.
We use the Redis backplane for a future scale-out scenario.
The ARR affinity cookie is enabled and Web Sockets in the Azure Web App are enabled too.
The Web App instance doesn't suffer from high CPU usage or high memory usage.
We didn't change any SignalR options except adding the Redis backplane. We just use services.AddSignalR().AddStackExchangeRedis(...) and endpoints.MapHub<NotificationHub>("/api/hub/notification").
The website runs on HTTPS.
What could cause these repeated calls to /negotiate and the 404 returns from the hub endpoint?
How can we further debug the issue without having access to the clients where this issue occurs?
Update
We have now implemented a custom logger for the @microsoft/signalr package, which we pass to configureLogging(). This logger writes to our Application Insights, which allows us to track the client-side logs of the clients where our issue occurs.
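The logger itself isn't included in the question; a minimal sketch of what such a logger might look like, assuming it forwards to an already-initialized Application Insights instance (the appInsights setup, the trackTrace call, and the placeholder connection string are assumptions, not taken from the question):

import { ILogger, LogLevel } from "@microsoft/signalr";
import { ApplicationInsights } from "@microsoft/applicationinsights-web";

// Assumed to be configured elsewhere with the real connection string.
const appInsights = new ApplicationInsights({ config: { connectionString: "<your-connection-string>" } });
appInsights.loadAppInsights();

// Forwards every SignalR client log entry to Application Insights as a trace.
class AppInsightsSignalRLogger implements ILogger {
    log(logLevel: LogLevel, message: string): void {
        appInsights.trackTrace({ message: `[SignalR ${LogLevel[logLevel]}] ${message}` });
    }
}

// Plugged into the connection builder, e.g.:
// new HubConnectionBuilder().withUrl("/api/hub/notification").configureLogging(new AppInsightsSignalRLogger()).build();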
The following screenshot shows a short snippet of the log entries for one single client.
We can see that the WebSocket connection fails (Failed to start the transport "WebSockets" ...) and the fallback transport ServerSentEvents is used. We see the log The HttpConnection connected successfully, but almost exactly 15 seconds after selecting the ServerSentEvents transport, a handshake request is sent, which fails with the server message Server returned handshake error: Handshake was canceled. After that, several follow-up errors occur and the connection gets closed. The connection is then established again, everything starts over, a new handshake error occurs after those 15 seconds, and so on.
Why does it take the client so long to send the handshake request? It seems like those 15 seconds are the problem: this is too long for the server, and the server cancels the connection due to a timeout.
We still think this may have something to do with the client's network (proxy, firewall, etc.).
Fiddler
We used Fiddler to block WebSockets for testing. As expected, the fallback mechanism kicks in and ServerSentEvents is used as the transport. Unlike in the logs from the affected clients, the handshake request is sent immediately rather than after 15 seconds, and then everything works as expected.
Check which pricing tier of the Azure SignalR Service your project uses, Free or Standard. If you are still on the Free tier, there are some restrictions; switch your connection string to a Standard-tier instance.
Official doc: Azure SignalR Service limits

Browser only allowing 3 tabs to connect to local server via Socket.io

I have a React app that is currently in development. I use socket.io to connect the frontend to my server, which I'm running locally.
I opened multiple tabs (including incognito) so I could simulate multiple people using it at the same time, and the browser hangs on the 4th window. I can open up to 3 just fine. When I add the 4th one, either the React app doesn't load, or it loads and then hangs when I try to do anything that emits a socket action.
I did notice that I can open a 4th window in Firefox with no problem. So it seems to be a Chrome/browser thing limiting me to 3 socket connections from a single browser.
Any ideas on what's going on? I don't even have a ton of emits being sent out, and I really don't think it's my server or client code. I tried disabling multiplexing (forcing a new connection) using
const socket = io.connect('http://localhost:3000', { forceNew: true });
in my client code (React), but it didn't fix the problem; I only got around it by using Chrome and Firefox together to keep Chrome under 4 tabs.
Unfortunately this is a hard-coded limit of open connections to a server in Chrome.
It's actually 6 open sockets per host (https://support.google.com/chrome/a/answer/3339263?hl=en). However, to confuse things, I suspect that you're using something like hot-reloading, which also uses a socket (hence why each page takes up two sockets, not just one).
The only thing you could do, depending on your architecture, is spawn multiple servers on different ports (then you'd be able to have 6 per port).
Alternatively, as you've found, you can use another browser that does not enforce this limit.
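A rough sketch of the multiple-port idea, assuming a Node back end with socket.io v4 (the ports and event names are made up for illustration):

import { Server } from "socket.io";

// Spawn one socket.io server per port so each port gets its own per-host connection budget in the browser.
const ports = [3000, 3001];
const servers = ports.map(port => {
    const io = new Server(port, { cors: { origin: "*" } });
    io.on("connection", socket => {
        console.log(`client ${socket.id} connected on port ${port}`);
        socket.on("chat message", msg => io.emit("chat message", msg));
    });
    return io;
});

// Each browser tab would then pick a port, e.g.:
// const socket = io.connect(`http://localhost:${3000 + tabIndex % 2}`, { forceNew: true });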

Can the web agent and web service timeout stop an HTTP hang in Domino?

If a web agent ever falls into an infinite loop, can the "Web agents and Web Services timeout" parameter (say 5 seconds) in the server document prevent the HTTP task from hanging? Does the same also apply if the agent is called from an XPages postSaveDocument event?
Yes, that is exactly what that parameter does - and it's been in the Domino server at least since version 5.0. See this response on the old forum.
So basically, you need to edit the server document, go to "Internet Protocols", "Domino Web Engine". Near the bottom you can set the timeout in seconds for "Web agents and Web Services Timeout".
I can't see there should be any difference in how the web agent is called :-)
/John
I installed a Domino server on my system and tried it; the agent is not stopped by this server parameter, but restart task http can stop it, so it is not like an XPages infinite loop, which never stops unless the server is forcefully killed. Thanks a lot for your input.

socket.io disconnects clients when idle

I have a production app that uses socket.io (Node.js back-end) to distribute messages to all the logged-in clients. Many of my users are experiencing disconnections from the socket.io server. The normal use case is for a client to keep the web app open for the entire working day. Most of that time the app sits idle, but it stays open, until the socket.io connection is lost and the app kicks them out.
Is there any way I can make the connection more reliable so my users are not constantly losing their connection to the socket.io server?
It appears that all we can do here is give you some debugging advice so that you might learn more about what is causing the problem. So, here's a list of things to look into.
Make sure that socket.io is configured for automatic reconnect. In recent versions of socket.io, auto-reconnect defaults to on, but you may need to verify that no piece of code is turning it off (see the sketch after this list).
Make sure the client is not going to sleep in a way that causes all network connections to go inactive or get disconnected.
In a working client (before it has disconnected), use the Chrome debugger, Network tab, WebSockets sub-tab to verify that you can see regular ping messages going between client and server. You will have to open the debug window, get to the Network tab and then refresh your web page with that debug window open to start to see the network activity. You should see a funky-looking URL that has ?EIO=3&transport=websocket&sid=xxxxxxxxxxxx in it. Click on that. Then click on the "Frames" sub-tab. At that point, you can watch individual websocket packets being sent. You should see tiny packets with length 1 every once in a while (these are the ping and pong keep-alive packets). There's a sample screenshot below that shows what you're looking for. If you aren't seeing these keep-alive packets, then you need to resolve why they aren't there (likely some socket.io configuration or version issue).
Since you mentioned that you can reproduce the situation, one thing you want to know is how the socket is getting closed (client-end initiated or server-end initiated). One way to gather info on this is to install a network analyzer on your client so you can literally watch every packet that goes over the network to/from your client. There are many different analyzers and many are free. I personally have used Fiddler, but I regularly hear people talking about Wireshark. What you want to see is exactly what happens on the network when the client loses its connection. Does the client decide to send a close-socket packet? Does the client receive a close-socket packet from someone? What happens on the network at the moment the connection is lost?
(Screenshot: WebSocket network view in the Chrome debugger)
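For point 1, a minimal sketch of what explicitly enabling reconnection on the client might look like (the option values are just examples, and exact defaults vary by socket.io version):

import { io } from "socket.io-client";

// Reconnection is on by default in current socket.io clients; these options just make it explicit.
const socket = io("http://localhost:3000", {
    reconnection: true,             // keep retrying after a dropped connection
    reconnectionAttempts: Infinity,
    reconnectionDelay: 1000,        // first retry after 1 s
    reconnectionDelayMax: 5000      // back off to at most 5 s between retries
});

socket.on("connect", () => console.log("connected", socket.id));
socket.on("disconnect", reason => console.log("disconnected:", reason));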
The most likely cause is one end closing a WebSocket due to inactivity. This is commonly done by load balancers, but there may be other culprits. The fix for this is to simply send a message every so often (I use 30 seconds, but depending on the issue you may be able to go higher) to every client. This will prevent it from appearing to be inactive and thus getting closed.
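A minimal server-side sketch of that idea, assuming a Node back end with socket.io (the event name is made up, and the 30-second interval is just the example value mentioned above):

import { Server } from "socket.io";

const io = new Server(3000);

// Broadcast a tiny application-level message every 30 seconds so intermediaries
// (load balancers, proxies) never see the connection as idle.
setInterval(() => {
    io.emit("keepalive", Date.now());
}, 30 * 1000);

io.on("connection", socket => {
    // Optional: clients could answer so the server also sees traffic in the other direction.
    socket.on("keepalive-ack", () => { /* no-op */ });
});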
