What is the meaning of a "connection" in Pusher?

I have added Pusher to my startup's web page, but something is troubling me.
Since I am on the sandbox plan (which allows a maximum of 20 connections), I have been testing my web page on several computers. But when I log in to my account it says I am using 6 connections, even when no one is using the page. So what does a connection mean here? What counts as a connection? And when a web page is closed, does the connection counter decrease?
Any information about this will be great.

The connection count on pricing plans indicates the number of simultaneous connections allowed.
A connection is counted as a WebSocket connection to Pusher. When using the Pusher JavaScript library a new WebSocket connection is created when you create a new Pusher('APP_KEY'); instance.
Channel subscriptions are created over the existing WebSocket connection, and do not count towards your connection quota (there is no limit on the number allowed per connection).
Note: connections close automatically when a user navigates to another web page or closes their browser, so there is no need to close them manually.
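The counting rule can be sketched in code. Note that this uses a tiny stand-in class rather than the real pusher-js client, purely to illustrate which calls open connections; the channel and event names are made up:

```javascript
// Stand-in for the pusher-js client, only to illustrate the counting rule:
// constructing a client opens a WebSocket connection; subscribing does not.
let openConnections = 0;

class PusherStub {
  constructor(appKey) {
    this.appKey = appKey;
    openConnections += 1; // one WebSocket per client instance
  }
  subscribe(channelName) {
    // channels are multiplexed over the existing connection
    return { bind(event, callback) { /* register handler */ } };
  }
}

const pusher = new PusherStub('APP_KEY');  // 1 connection
const orders = pusher.subscribe('orders'); // still 1 connection
const chat = pusher.subscribe('chat');     // still 1 connection
orders.bind('created', (data) => console.log('order', data));

// A second instance would consume a second slot of the 20-connection quota:
// const another = new PusherStub('APP_KEY'); // -> 2 connections
```

With the real library the shape is the same: create one `new Pusher('APP_KEY')` per page and call `subscribe` on it as many times as you need.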

Related

SignalR long polling repeatedly calls /negotiate and /hub POST and returns 404 occasionally on Azure Web App

We have enabled SignalR on our ASP.NET Core 5.0 web project running on an Azure Web App (Windows App Service Plan). Our SignalR client is an Angular client using the #microsoft/signalr NPM package (version 5.0.11).
We have a hub located at /api/hub/notification.
Everything works as expected for most of our clients, the web socket connection is established and we can call methods from client to server and vice versa.
For a few of our clients, we see a massive number of requests to POST /api/hub/notification/negotiate and POST /api/hub/notification within a short period of time (multiple requests per minute per client). It seems that those clients switch to long polling instead of using WebSockets, since we see the POST /api/hub/notification requests.
We suspect that the affected clients may sit behind a proxy or a firewall that blocks WebSockets, which is why the connection falls back to long polling in the first place.
The following screenshot shows requests to the hub endpoints for one single user within a short period of time. The list is very long since this pattern repeats as long as the user has opened our website. We see two strange things:
The client repeatedly calls /negotiate twice every 15 seconds.
The call to POST /notification?id=<connectionId> takes exactly 15 seconds and the following call with the same connection ID returns a 404 response. Then the pattern repeats and /negotiate is called again.
For testing purposes, we enabled only long polling in our client. This works for us as expected too. Unfortunately, we currently don't have access to the browsers or the network of the users where this behavior occurs, so it is hard for us to reproduce the issue.
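For reference, restricting the client to a specific transport looks roughly like this (a sketch against the @microsoft/signalr builder API, using the hub URL from our setup; swap in LongPolling or ServerSentEvents to mimic the affected clients):

```javascript
import * as signalR from '@microsoft/signalr';

// Restrict the client to a single transport to reproduce the fallback
// behaviour of the affected clients.
const connection = new signalR.HubConnectionBuilder()
  .withUrl('/api/hub/notification', {
    transport: signalR.HttpTransportType.ServerSentEvents,
  })
  .configureLogging(signalR.LogLevel.Debug) // verbose client-side logs
  .build();

await connection.start();
```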
Some more notes:
We currently have just one single instance of the Web App running.
We use the Redis backplane for a scale-out scenario in future.
The ARR affinity cookie is enabled and Web Sockets in the Azure Web App are enabled too.
The Web App instance doesn't suffer from high CPU usage or high memory usage.
We didn't change any SignalR options except adding the Redis backplane. We just use services.AddSignalR().AddStackExchangeRedis(...) and endpoints.MapHub<NotificationHub>("/api/hub/notification").
The website runs on HTTPS.
What could cause these repeated calls to /negotiate and the 404 returns from the hub endpoint?
How can we further debug the issue without having access to the clients where this issue occurs?
Update
We now implemented a custom logger for the #microsoft/signalr package which we use in the configureLogger() overload. This logger logs into our Application Insights which allows us to track the client side logs of those clients where our issue occurs.
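A minimal version of such a logger is just an object with a log method (this is a sketch: AppInsightsLogger and the trackTrace call are our names for whatever telemetry client is in use, and the builder method in @microsoft/signalr is configureLogging):

```javascript
// Hedged sketch of a custom client-side logger for @microsoft/signalr.
// The logger contract is a single log(logLevel, message) method.
class AppInsightsLogger {
  constructor(telemetry) {
    this.telemetry = telemetry;
  }
  log(logLevel, message) {
    // Forward every SignalR client log line to the telemetry backend.
    this.telemetry.trackTrace({
      message: `[SignalR] ${message}`,
      severityLevel: logLevel,
    });
  }
}

// Wiring it up (builder API from @microsoft/signalr):
// const connection = new signalR.HubConnectionBuilder()
//   .withUrl('/api/hub/notification')
//   .configureLogging(new AppInsightsLogger(appInsights))
//   .build();
```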
The following screenshot shows a short snippet of the log entries for one single client.
We see that the WebSocket connection fails (Failed to start the transport "WebSockets" ...) and the fallback transport ServerSentEvents is used. We see the log The HttpConnection connected successfully, but almost exactly 15 seconds after selecting the ServerSentEvents transport, a handshake request is sent which fails with the server message Server returned handshake error: Handshake was canceled. After that, some more consequential errors occur and the connection is closed. Then the connection is established again and everything starts over: a new handshake error occurs after those 15 seconds, and so on.
Why does it take so long for the client to send the handshake request? It seems like those 15 seconds are the problem, since this is too long for the server and the server cancels the connection due to a timeout.
We still think that this may have something to do with the client's network (proxy, firewall, etc.).
Fiddler
We used Fiddler to block WebSockets for testing. As expected, the fallback mechanism kicks in and ServerSentEvents is used as the transport. In contrast to the logs from our issue, the handshake request is sent immediately rather than after 15 seconds, and then everything works as expected.
Check which pricing tier of the Azure SignalR Service your project uses, Free or Standard. The Free tier has some restrictions (for example, on concurrent connections and daily message counts); if you are still on it, switch your connection string to a Standard-tier instance.
Official doc: Azure SignalR Service limits

What is a "Connection" in MongoDB?

I've been working with MongoDB for a while now and I like it a lot. One thing I do not understand, however, is "connections". I've searched online, and everything gives only vague and basic answers. I'm using MongoDB's cloud service, Atlas, which describes the connection count as
The number of currently active connections to this server. A stack is allocated per connection; thus very many connections can result in significant RAM usage.
However I have a few questions.
What is a connection, exactly? As I understand it, a connection is made between the server and the database service: essentially, when I call mongoose.connect(...), one connection is made, so there should be at most one. However, while testing my program I noticed my connection count was at 2, and at some moments it spiked all the way up to 7 and then fluctuated around 5. Does a "connection" have anything to do with the client?
On the Atlas dashboard it says I have a maximum of 500 connections. What does this value represent? Does it mean only 500 users can use my website at once? If so, how can I increase that number, or make sure the 500-connection limit is never exceeded?
Or is a connection something that gets opened and that I have to close manually? I've been learning from tutorials and I've never seen or heard anything like that.
Thanks!
mongoose.connect doesn't limit itself to one connection to the MongoDB server: by default, Mongoose creates a pool of 5 connections. You can change this default if necessary:
mongoose.connect(mongoURI, { poolSize: 200 });
See https://mongoosejs.com/docs/connections.html
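Note that in Mongoose 6 and later (MongoDB Node.js driver 4.x) the pool option was renamed; a hedged sketch for newer versions (the mongoURI value is a placeholder):

```javascript
const mongoose = require('mongoose');

// Placeholder connection string; use your own Atlas/server URI.
const mongoURI = 'mongodb://localhost:27017/mydb';

// Mongoose 6+: the pool option is maxPoolSize (default 100), not poolSize.
mongoose.connect(mongoURI, { maxPoolSize: 200 });
```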
The extra connections you see in Atlas are there because some internal connections are also made to keep the cluster running. These may include:
Connections made from a client.
Internal connections between the primary and the secondaries.
Connections from the monitoring agent, since Atlas is a hosted, monitored service.
Connections from the automation agent as well.
Hence, whenever a new cluster is created in Atlas, you will always see some connections on the Metrics page even though no client is connected.

Recycle Ably realtime connections

I kinda came across a strange problem.
In our application (based on React-native) we hosted 70 concurrent clients but the peak in the monitoring page showed 380 connections.
I assume clients exit and come back, or reload the app, so new Ably connections are created and the peak therefore increases.
Now the question: is there any way to force Ably disconnect all unused connections so the peak decreases? (Maybe from back-end)
Thanks.
By default, the connection will stay active until closed explicitly (using connection.close()), or two minutes after the connection is disconnected unexpectedly to allow for connection state recovery.
Recent versions of ably-js in a browser environment automatically close the connection on page reload (that is, the closeOnUnload client option defaults to true) -- this is just a connection.close() added to a beforeunload handler. The trouble is that isn't going to do anything in a React Native environment, which doesn't use that event.
So you probably just need to actively manage your Ably connection using React Native app lifecycle events. If you don't want it to stay active while the app is backgrounded, then in the handler for the app going to the background (per the React Native AppState event), close the Ably connection; then re-open it (call connect()) when the app is active again.
For other possible reasons your peak connection count may be higher than expected, see Why are my peak connection counts higher than expected? and How does Ably count peak connections?.
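A sketch of that lifecycle handling: the handler itself is plain JavaScript, the AppState wiring (commented) is the React Native part, and `connection` is assumed to be the connection property of an Ably.Realtime instance:

```javascript
// Close the realtime connection when the app is backgrounded and reconnect
// when it becomes active again, so idle background apps don't hold connections.
function handleAppStateChange(state, connection) {
  if (state === 'background') {
    connection.close();   // connection stops counting once it has closed
  } else if (state === 'active') {
    connection.connect(); // re-establish when the user returns
  }
}

// In the app:
// import { AppState } from 'react-native';
// AppState.addEventListener('change', (s) => handleAppStateChange(s, ably.connection));
```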

How can I disable five connections limit on socket.io?

I am using socket.io with XHR polling on my chat system. I don't want to use WebSocket because it doesn't work for all users. But with XHR polling, if a user opens 5 tabs in the browser, messages slow down.
Same problem here
https://github.com/LearnBoost/socket.io/issues/1145
I tested it but it didn't work; I still have the 5-connection limit. How can I disable this limit?
I came across this question quite late, but it seems that you have reached your browser's connection limit. By default, a browser limits how many connections to a single host:port can be open at one time (Chrome, for example, allows only a handful).
So, in your socket.io case, opening 5 tabs to the same domain uses up 5 of the connections allowed by your browser. For normal websites this is not a problem, because the connection is closed once the request gets its response. But for socket.io (and related libraries) the connection is kept open the whole time to receive server-pushed data. I might be wrong, but at least this was the problem with my project (I don't use Socket.IO but a similar library).
The solution is to limit the number of socket.io connections in your application so that there is only one connection at any time. The rest of the communication should be done via cross-tab (cross-window) events, for example through LocalStorage. The result is that one tab (window) holds the real socket.io connection and broadcasts the events it receives to the other tabs (windows). Of course, there are many other factors to consider when you actually implement this.
P/s: I am sorry for my bad English
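A rough sketch of that single-connection pattern. Leadership election and failover are deliberately omitted; `channel` is anything with postMessage/onmessage (a BroadcastChannel in browsers), and openSocket stands in for io(...) from socket.io-client:

```javascript
// One tab ("leader") owns the real socket and fans events out to the other
// tabs over a cross-tab channel; follower tabs never open a socket themselves.
class TabCoordinator {
  constructor(channel, openSocket) {
    this.channel = channel;
    this.openSocket = openSocket;
    this.isLeader = false;
    this.onEvent = null; // app-level handler, set by the caller
    channel.onmessage = (e) => {
      if (!this.isLeader && this.onEvent) this.onEvent(e.data); // relayed event
    };
  }
  becomeLeader() {
    this.isLeader = true;
    this.socket = this.openSocket(); // the only real connection
    this.socket.on('message', (data) => {
      if (this.onEvent) this.onEvent(data); // deliver locally
      this.channel.postMessage(data);       // relay to the other tabs
    });
  }
}
```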
You provided the solution yourself: the bug ticket you linked has links to a solution at the end, which is basically to add this:
var http = require('http');
http.globalAgent.maxSockets = 100;
http.Agent.maxSockets = 100;
Or whatever maximum value you want.

How does gmail browser client detect internet/server disconnect (speed and scalability)

We have a browser application (SaaS) where we would like to notify the user in case of loss of internet connectivity or server connection. Gmail does this very nicely: the moment I unplug the network cable or disable network traffic, it says it is unable to reach the server and gives me a countdown for the retry.
What is the best way to implement something like this? Should the client browser issue AJAX requests to the application server every second, or should there be a separate server that just reports back "alive"? Scalability will become an issue down the road.
Because GMail already checks for new e-mails every few seconds, and for chat information even more frequently, it can tell whether the connection is down without a separate request. If you're not using Ajax for some other sort of constant update, then yes, you would just have your server reply with some sort of "alive" signal. Note, however, that you couldn't use a separate server because of Ajax cross-domain restrictions.
With the server reporting to the client (push via Comet), you have to maintain an open connection for each client. This can be pretty expensive if you have a large number of clients. Scalability can be an issue, as you mentioned. The other option is to poll. Instead of doing it every second, you can have it poll every 5-10 seconds or so.
Something else that you can look at is Web Sockets (developed as part of HTML 5), but I am not sure if it is widely supported (AFAIK only Chrome supports it).
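A sketch of the polling variant with a Gmail-style retry countdown. The /alive endpoint and the intervals are placeholders; only the backoff helper has fixed behaviour:

```javascript
// Capped exponential backoff for the "retrying in N seconds" countdown.
function nextRetryDelay(attempt, baseMs = 1000, maxMs = 30000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// Poll a lightweight endpoint; on failure, notify the UI and back off.
async function heartbeat(onDown, onUp) {
  let attempt = 0;
  for (;;) {
    try {
      await fetch('/alive', { method: 'HEAD' }); // piggy-back on an existing poll if possible
      if (attempt > 0 && onUp) onUp();
      attempt = 0;
      await new Promise((r) => setTimeout(r, 5000)); // normal poll interval
    } catch {
      const delay = nextRetryDelay(attempt++);
      onDown(delay); // e.g. "Unable to reach the server, retrying in 4s"
      await new Promise((r) => setTimeout(r, delay));
    }
  }
}
```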
