Maximum connections on Pusher

I'm trying to understand Pusher.
If I have a maximum of 100 connections (the Bootstrap plan), does that mean that each user can open up to 100 connections? Or is it a shared pool, so that if the first user opens, let's say, 50 connections and the second also opens 50, a third user cannot open any?

Pusher uses a model of subscribing to channels within a connection. A single user would only need one connection, but could be subscribed to as many channels as you want on that connection.
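For illustration, here is roughly what that looks like from a client, using the third-party pysher Python library (the app key, channel names, and event names below are all placeholders):

```python
import time
import pysher

# One Pusher client object = one WebSocket connection,
# which counts once against the plan's connection limit.
pusher = pysher.Pusher("your-app-key")  # placeholder key

def on_order_update(data):
    print("order update:", data)

def on_chat_message(data):
    print("chat message:", data)

def connect_handler(data):
    # Any number of channel subscriptions are multiplexed
    # over that single connection.
    orders = pusher.subscribe("orders")
    orders.bind("order-updated", on_order_update)
    chat = pusher.subscribe("chat")
    chat.bind("new-message", on_chat_message)

pusher.connection.bind("pusher:connection_established", connect_handler)
pusher.connect()

while True:
    time.sleep(1)  # pysher runs in a background thread; keep the process alive
```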

Related

MongoDB ChangeStream performance

Is it possible to use change streams at scale? I want to watch many collections with many documents, filtered by various parameters. The idea is to allow multiple users to watch the data they are interested in. So not just to show a few real-time updates on, e.g., some stock data from a single collection, but to make a modern web application real-time throughout. I've stumbled upon some discussions, e.g. this one, which suggest that the feature is not usable for such a purpose.
So imagine implementing a typical social network. Each user would want live data on (1) notifications, (2) online friends, (3) friend requests, (4) the news feed, and (5) comments on news-feed posts (maybe one per post?). That makes at least 5 open change streams per user. If the service had, e.g., 10,000 connected users, that makes 50,000 active change streams.
Is this mechanism ready for such a load? If I understood the discussion (and some others) correctly, every change stream watcher creates one connection. Would it be okay to have tens of thousands of connections? It does not seem like a good design. It seems like it'd be better to watch each collection and do the filtering on an application server, but that is more of a database server's job.
Is there a way to handle such a load with MongoDB?
Each change stream will require a connection to the server. Assuming your 10,000 active users are also going to do things like log in, post things, read things, comment on other people's things, manage friend lists, etc., you may actually need more like 10 connections per user.
Each change stream is essentially an aggregation that maintains a cursor over the operations log (a single-stream sketch follows the list below). That should work fairly well as long as the server is sufficiently sized to handle:
100,000 simultaneous connections
state for 50,000 long running cursors
tens of thousands of queries per second for those change streams
whatever query rate the other non-changestream reads and writes will need
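Here is the single-stream sketch mentioned above, using PyMongo; each `watch()` call holds one cursor, and one connection, open for its lifetime (the URI, database, and collection names are placeholders):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
posts = client["social"]["posts"]                  # placeholder collection

# watch() runs an aggregation over the oplog; the pipeline filters server-side.
# The cursor (and its connection) stays open for as long as the loop runs.
with posts.watch([{"$match": {"operationType": "insert"}}]) as stream:
    for change in stream:
        print(change["fullDocument"])
```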
On MongoDB Atlas you would need at least an M140 instance just to handle that number of connections, with a price tag in the neighborhood of $10K per month.
At that price point, it would probably be more cost effective to design a pub/sub notification service that uses a total of 5 change streams to watch for the different types of changes, and deliver those to users with a push mechanism rather than having every user poll the database directly.
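A hedged sketch of that design: a few central change streams feed an in-process pub/sub hub, which pushes filtered events to interested users (plain callbacks here; in practice these would be WebSocket sends). The collection name, the `recipient_id` field, and the routing logic are all assumptions:

```python
import threading
from collections import defaultdict
from pymongo import MongoClient

class Hub:
    """Tiny in-process pub/sub: users register interest in topics."""
    def __init__(self):
        self._subs = defaultdict(list)  # topic -> callbacks
        self._lock = threading.Lock()

    def subscribe(self, topic, callback):
        with self._lock:
            self._subs[topic].append(callback)

    def publish(self, topic, event):
        with self._lock:
            callbacks = list(self._subs[topic])
        for cb in callbacks:
            cb(event)

def watch_collection(coll, topic_of, hub):
    # One change stream (one connection) per watched collection,
    # shared by every user, instead of one stream per user.
    with coll.watch(full_document="updateLookup") as stream:
        for change in stream:
            hub.publish(topic_of(change), change)

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
hub = Hub()

def notification_topic(change):
    # Assumed schema: each notification document names its recipient.
    return f"notifications:{change['fullDocument']['recipient_id']}"

threading.Thread(
    target=watch_collection,
    args=(client["social"]["notifications"], notification_topic, hub),
    daemon=True,
).start()

# A connected user registers a callback (a WebSocket send in real life).
hub.subscribe("notifications:42", lambda ev: print("push to user 42:", ev))

threading.Event().wait()  # keep the process alive to receive events
```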

A lot of socket endpoints in python?

I need to parse data from some crypto exchanges, such as Poloniex, etc. I can subscribe to their socket APIs to get order books. What is the best way to connect to as many order books as possible? (At least 6 pairs on 4 exchanges, which would mean 24 threads used only for listening.)
You do not need to use threads for this. A reasonably modern server or desktop should be able to receive 24 feeds in a single thread. You will be limited in the amount of data you can receive by your internet connection and by the exchanges' own throttles (they are not interested in publishing 100 Mbps of traffic to you).
Instead of threads, you can use asyncio to listen to as many sockets as you like on a single thread: https://docs.python.org/3/library/asyncio.html
If you find that your single thread truly cannot keep up, you might consider using one thread per exchange or per currency pair (depending on which data is more likely to be used together).
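As a hedged sketch of that single-threaded approach, using the third-party websockets library (the URLs are placeholders, and each real exchange needs its own subscribe handshake after connecting):

```python
import asyncio
import json
import websockets  # third-party: pip install websockets

# Placeholder endpoints: one entry per pair per exchange, 24 in total.
FEEDS = [
    "wss://exchange-a.example.com/orderbook/BTC_USDT",
    "wss://exchange-b.example.com/orderbook/ETH_USDT",
    # ...
]

def handle(url, update):
    print(url, update)  # replace with real order-book bookkeeping

async def listen(url):
    # Reconnect loop: exchanges routinely drop long-lived connections.
    while True:
        try:
            async with websockets.connect(url) as ws:
                async for raw in ws:
                    handle(url, json.loads(raw))
        except (websockets.ConnectionClosed, OSError):
            await asyncio.sleep(1)  # brief backoff before reconnecting

async def main():
    # All feeds share one thread; the event loop interleaves them.
    await asyncio.gather(*(listen(url) for url in FEEDS))

asyncio.run(main())
```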

What's the relationship between QPS/TPS, response time and number of concurrent users

Some Concepts:
TPS means Transactions per second
Response time is the total amount of time it takes to respond to a request for service
Is this formula true?
TPS = number of concurrent users / response time
It is true if transactions happen sequentially, in only one thread (on one TCP connection) per user. In reality, however, web browsers use multiple concurrent connections when talking to a host. Six concurrent connections is quite common, so the host will then see TPS = 6 × concurrent users / response time.
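A quick worked example with made-up numbers:

```python
concurrent_users = 1000
response_time = 0.2       # seconds per request
connections_per_user = 6  # typical browser connection pool per host

tps = connections_per_user * concurrent_users / response_time
print(tps)  # 30000.0 requests/s, the theoretical back-to-back maximum
```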
Also, the browser will sometimes be blocked and not fetch things. Sometimes because it is executing code, sometimes because it cannot perform some operations simultaneously with other operations. See http://www.browserscope.org for more info.
Also, of course, clients (whether they are humans using a browser or e.g. a mobile phone app talking to its backend via a REST API) don't usually make requests back to back, continuously, at the highest possible rate. That is probably not a very realistic test case. Usually, clients will make a bunch of requests and then fall silent for a while, until the user does something new in the application that requires more data from the backend.

TCP listener dies after about 50 hours

I have created a TCP listener in C#. I have set the timeout to 0, i.e. unlimited. If there is no activity between the listener and the client for about 50 hours, for example over a weekend, neither side disconnects, but the connection dies.
Please advise how I can fix this issue.
Thanks in advance.
Routers will typically kill idle TCP connections after a preset amount of time. If your connection is going over the internet, you have no control over this. To prevent the problem, programmatically detect your idle connections and send a very small amount of information, e.g. byte[] { 0 }, every 5 minutes or so.
This adds no overhead while the connection is active, since you only send the 'keep alive' packet when the connection is idle.
Another option is to set the `KeepAlive` option on the underlying sockets using `Socket.SetSocketOption`. This should perform the same function, but I've always found a custom solution more reliable.
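In C# that call is `Socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true)`. For illustration, here is the same OS-level mechanism from Python, with the Linux-only tuning knobs guarded since they are not available on every platform (the endpoint is a placeholder):

```python
import socket

sock = socket.create_connection(("example.com", 9000))  # placeholder endpoint

# Ask the OS to probe idle connections so routers/NATs see periodic traffic.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# Linux-specific tuning: start probing after 5 minutes of idle time,
# probe every 60 seconds, and give up after 5 failed probes.
if hasattr(socket, "TCP_KEEPIDLE"):
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 300)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 60)
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)
```

The custom heartbeat described above is the same idea one layer up: a timer that writes a single byte on any connection that has been idle for a few minutes.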

Azure Service Bus - Determine Number of Active Connections (Topic/Queue)

Since Azure Service Bus limits the maximum number of concurrent connections to a Queue or Topic to 100, is there a method that we can use to query our Queues/Topics to determine how many concurrent connections there are?
We are aware that we can capture the throttling events, but would very much prefer an active approach, where we can proactively increase or decrease the number of Queues/Topics when the system is under a heavy load.
The use case here is a process waiting for a reply message, where the reply is coming from a long-running process, and the subscription is using a Correlation Filter to facilitate two-way communication between the Publisher and Subscriber. Thus, we must have a BeginReceive() going in order to await the response, and each such Publisher will be consuming a connection for the duration of their wait time. The system already balances load across multiple Topics, but we need a way to be proactive about how many Topics are created, so that we do not get throttled too often, but at the same time not have an excess of Topics for this purpose.
I don't believe it is currently possible to query the listener counts. I think the subscription object also figures into that, so in theory, if you have up to 2,000 subscriptions per topic and each allows up to 100 connections, that's a lot of potential connections. We just need to keep in mind that subscriptions are cooperative (each gets a copy of every message) and receivers on a subscription are competitive (only one of them gets each message).
I've also seen unconfirmed reports of performance delays when you start running more than 1,000 subscriptions, so make sure you test that scenario.
But... given your scenario, I'd deduce that latency likely isn't the biggest factor (you have long-running processes already), so introducing a couple of seconds of lag into the workflow likely won't be critical. If that's the case, I'd set the timeout for your BeginReceive to something fairly short (a couple of seconds) and have a sleep/wait delay between attempts. This gives other listeners an opportunity to get messages as well. You might also consider an approach where you attempt to receive multiple messages and then hand them out to other processes for processing (correlation in this case?).
Just some thoughts.
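For illustration, here is a sketch of that short-timeout receive loop using the Python azure-servicebus SDK (v7), where `max_wait_time` plays the role of the short BeginReceive timeout; the connection string, topic, and subscription names are placeholders:

```python
import time
from azure.servicebus import ServiceBusClient

CONN_STR = "Endpoint=sb://..."  # placeholder connection string

def handle_reply(msg):
    print("reply:", str(msg))  # placeholder for the real correlation handling

client = ServiceBusClient.from_connection_string(CONN_STR)
receiver = client.get_subscription_receiver(
    topic_name="replies",                 # placeholder topic
    subscription_name="correlation-sub",  # placeholder subscription
)

with client, receiver:
    while True:
        # Short wait, then back off, so other listeners get a turn
        # and we don't pin a connection while idle.
        messages = receiver.receive_messages(max_message_count=10,
                                             max_wait_time=2)
        for msg in messages:
            handle_reply(msg)
            receiver.complete_message(msg)
        if not messages:
            time.sleep(2)
```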
