Low latency serving the same data to many clients (multicasting or not...)

I need to send identical information to hundreds of clients over the Internet. I currently maintain a list of client connections and iterate over the list. Obviously, the longer the list gets, the more latency there is toward the end of the list.
I have looked at multicasting. However, unless I am missing something, it is only good for LAN-based communications at present. It requires routers that support multicasting, and most routers do not. There is also no mechanism that I can see where one requests an available multicast address to avoid broadcasting to an address already in use.
So my questions are:
1) Am I missing something, and can I use multicasting to accomplish this? (I have tried without success.)
2) Other than multicasting, is there a shortcut to sending identical packets to many recipients?

I solved the problem by multicasting between threads in the server. Every client connection results in the creation of an object. These objects are stored in a queue. Each object has its own thread and joins the multicast group. When the server multicasts a string to the client objects the delay that arose from the list iteration no longer occurs.
Every now and then there is huge latency (nearly a second). I suspect that this is a JVM thing.
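For illustration, here is a minimal Python sketch of that loopback-multicast fan-out (the original is presumably Java, given the JVM mention; the group address, port, and handler names are placeholders):

import socket
import struct
import threading
import time

GROUP, PORT = "224.1.1.1", 5007  # placeholder multicast group/port

def connection_handler(name):
    # Each per-connection thread joins the multicast group, so a single
    # send by the broadcaster reaches every handler at once -- no list
    # iteration, no cumulative latency toward the end of the list.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    data, _ = sock.recvfrom(1024)
    print(f"{name} forwards {data!r} to its TCP client")

threads = [threading.Thread(target=connection_handler, args=(f"conn-{i}",)) for i in range(3)]
for t in threads:
    t.start()
time.sleep(0.5)  # give the handlers a moment to join the group

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_LOOP, 1)  # deliver to local members
sender.sendto(b"identical payload", (GROUP, PORT))
for t in threads:
    t.join()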

If you need high-performance, low-latency IO, you should try http://nodejs.org/
You may also be interested in a cache such as http://memcached.org/

Related

Multiplexing with io_uring

I've recently written a simple TCP server using epoll, but I want to explore other mechanisms for high-performance multiplexing. To that end I came across io_uring, and I am planning on making another simple TCP server using it.
However, I read here (https://kernel.dk/io_uring.pdf) that the number of entries for io_uring is limited to 4096, which seems to imply that I theoretically won't be able to have more than that number of persistent connections.
To my understanding, where normally I'd use something like epoll_wait() to wait on an event with epoll, with io_uring I instead submit a specific request and am notified when the request has completed or failed. So does that mean I can submit up to 4096 read() requests, for example?
Have I misunderstood the use case of io_uring or have I misunderstood how to use it?
In the same document I linked, it says:
Normally an application would ask for a ring of a given size, and the assumption may be that this size corresponds directly to how many requests the application can have pending in the kernel. However, since the sqe lifetime is only that of the actual submission of it, it's possible for the application to drive a higher pending request count than the SQ ring size would indicate.
Which is precisely what you'd do when listening for messages on lots of sockets; 4096 is just the upper limit on how many submissions you can queue at once.

Websockets: listen to multiple connections simultaneously?

I am working on a project whose goal is to receive and store real-time data from financial exchanges, using websockets. I have some very general questions about the technology.
Suppose that I have two websocket connections open, receiving real time data from two different servers. How do I make sure not to miss any messages? I have learned a bit of asynchronous programming (python asyncio) but it does not seem to solve the problem: when I listen to one connection, I cannot listen to the other one at the same time, right?
I can think of two solutions: the first would require that the servers use a buffer system to send their data, but I do not think this is the case (Binance, Bitfinex...). The second solution I see is to listen to each websocket using a different core. If my laptop has 8 cores I can listen to 8 connections and be sure not to miss any messages. I guess I can then scale up by using a cloud service.
Is that correct or am I missing something? Many thanks.
when I listen to one connection, I cannot listen to the other one at the same time, right?
Wrong.
When using an evented programming design, you will be using an IO "reactor" that adds IO related events to the event loop.
This allows your code to react to events from a number of connections.
It's true that the code reacts to the events in sequence, but as long as your code doesn't "block", these events could be handled swiftly and efficiently.
Blocking code should be avoided and big / complicated tasks should be fragmented into a number of "events". There should be no point at which your code is "blocking" (waiting) on an IO read or write.
This will allow your code to handle all the connections without significant delays.
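For the asyncio case in the question, here is a minimal sketch using the third-party websockets package (the URLs are placeholders): each connection is its own task, and awaiting a read on one never blocks reads on the other.

import asyncio
import websockets  # third-party: pip install websockets

async def listen(url):
    async with websockets.connect(url) as ws:
        async for message in ws:
            # React to each message quickly and return to the loop;
            # heavy processing should be handed off, not done inline.
            print(url, "->", message[:60])

async def main():
    urls = [
        "wss://stream.exchange-a.example/ws",  # placeholder endpoints
        "wss://stream.exchange-b.example/ws",
    ]
    await asyncio.gather(*(listen(u) for u in urls))

asyncio.run(main())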
...the first one would require that the servers use a buffer system to send their data...
Many evented frameworks use an internal buffer that streams to the IO when "ready" events are raised. For example, look up the 'drain' event in node.js (or the on_ready in facil.io).
This is a convenience feature rather than a requirement.
The event loop might as well add an "on ready" event and assume your code will handle buffering after partial write calls return EAGAIN / EWOULDBLOCK.
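A minimal sketch of that "buffer after a partial write" pattern on a nonblocking Python socket (the outbox bytearray stands in for the framework's internal buffer):

import socket

def try_send(conn: socket.socket, outbox: bytearray) -> None:
    # Write as much as the kernel will accept; keep the remainder
    # buffered until the event loop reports the socket writable again.
    try:
        sent = conn.send(outbox)
        del outbox[:sent]
    except BlockingIOError:
        pass  # EAGAIN / EWOULDBLOCK: not ready, keep the buffer intact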
The second solution I see is to listen each websocket using a different core.
No need. A single thread on a single core with an evented design should support thousands (and tens of thousands) of concurrent clients with reasonable loads (per-client load is a significant performance factor).
Attaching TCP/IP connections to a specific core can (sometimes) improve performance, but this is a many-to-one relationship. If we had to dedicate a CPU core per connection, then server prices would shoot through the roof.

A lot of socket endpoints in Python?

I need to parse data from some crypto exchanges, such as Poloniex. I can subscribe to their socket APIs to get order books. What is the best way to connect to as many order books as possible? (At least 6 pairs on 4 exchanges, which means I would need 24 threads used only for listening.)
You do not need to use threads for this. A reasonably modern server or desktop should be able to receive 24 feeds in a single thread. You will be limited in the amount of data you can receive by your internet connection and by the exchanges' own throttles (they are not interested in publishing 100 Mbps of traffic to you).
Instead of threads, you can use asyncio to listen to as many sockets as you like on a single thread: https://docs.python.org/3/library/asyncio.html
If you find that your single thread truly cannot keep up, you might consider using one thread per exchange or per currency pair (depending on which data is more likely to be used together).
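To make the single-thread point concrete, here is a rough asyncio sketch (using the third-party websockets package; the endpoints, pairs, and subscribe message are placeholders, since every exchange has its own format):

import asyncio
import itertools
import websockets  # third-party: pip install websockets

EXCHANGES = ["wss://ws.exchange-a.example", "wss://ws.exchange-b.example"]
PAIRS = ["BTC_USDT", "ETH_USDT", "XRP_USDT"]

async def order_book_feed(url, pair):
    async with websockets.connect(url) as ws:
        await ws.send('{"subscribe": "%s"}' % pair)  # hypothetical subscribe message
        async for msg in ws:
            print(url, pair, msg[:50])  # parse/store the order-book update here

async def main():
    # 2 exchanges x 3 pairs = 6 feeds here; 24 works the same way,
    # all multiplexed on one thread.
    feeds = (order_book_feed(u, p) for u, p in itertools.product(EXCHANGES, PAIRS))
    await asyncio.gather(*feeds)

asyncio.run(main())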

When does a single JMS connection with multiple producing sessions start becoming a bottleneck?

I've recently read a lot about best practices with JMS, Spring (and TIBCO EMS) around connections, sessions, consumers and producers.
When working within the Spring world, the prevailing wisdom seems to be
for consuming/incoming flows - to use an AbstractMessageListenerContainer with a number of consumers/threads.
for producing/publishing flows - to use a CachingConnectionFactory underneath a JmsTemplate to maintain a single connection to the broker and then cache sessions and producers.
For producing/publishing, this is what my (largeish) server application is now doing, where previously it was creating a new connection/session/producer for every single message it published (bad!) due to use of the raw connection factory under JmsTemplate. The old behaviour would sometimes lead to thousands of connections being created and closed on the broker in short order during peak periods, even hitting socket/file handle limits as a result.
However, when switching to this model I am having trouble understanding what the performance limitations/considerations are with the use of a single TCP connection to the broker. I understand that the JMS provider is expected to ensure it can be used in a multi-threaded way, etc., but from a practical perspective:
it's just a single TCP connection
the JMS provider to some degree needs to coordinate writes down the pipe so they don't end up as an interleaved jumble, even if it has some chunking in its internal protocol
surely this involves some contention between threads/sessions using the single connection
with certain network semantics (high latency to broker? unstable throughput?) surely a single connection will not be ideal?
On the assumption that I'm somewhat on the right track:
Am I off base here and misunderstanding how the underlying connections work and are shared by a JMS provider?
Is any contention mitigated by having more connections, or does that just move the contention to the broker?
Does anyone have practical experience of hitting such a limit they could share, either with particular message or network throughput, or even caused by the number of threads/sessions sharing a connection in parallel?
Should one be concerned in a single-connection scenario about sessions that write very large messages blocking other sessions that write small messages?
Would appreciate any thoughts or pointers to more reading on the subject or experience even with other brokers.
When thinking about the bottleneck, keep in mind two facts:
TCP is a streaming protocol, and almost all JMS providers use a TCP-based protocol
lots of the actions from the TIBCO EMS client to the EMS server are in the form of request/reply. For example, when you publish a message, acknowledge a received message, or commit a transactional session, what happens under the hood is that some TCP packets are sent out from the client and the server responds with some packets as well. Because of the streaming nature of TCP, those actions have to be serialised if they are initiated on the same connection; otherwise, if one thread publishes a message at exactly the same time another thread commits a session, the packets get mixed on the wire and there is no way the server can reconstruct the right messages from them. [Note: the synchronisation is done at the EMS client library level, hence the user is free to share one connection among multiple threads/sessions/consumers/producers.]
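This is not EMS internals, just a Python sketch of the constraint being described: any framing protocol running over one shared TCP connection forces concurrent writers to take turns.

import socket
import threading

write_lock = threading.Lock()

def publish(conn: socket.socket, payload: bytes) -> None:
    # Length-prefixed framing: without the lock, frames written by two
    # sessions sharing the connection could interleave on the wire and
    # the broker could no longer parse them.
    frame = len(payload).to_bytes(4, "big") + payload
    with write_lock:
        conn.sendall(frame)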
My own experience is that multiple connections always outperform a single connection. In a lossy network situation, it is definitely a must to use multiple connections. Under the best network conditions, with multiple connections, a single client can nearly saturate the network bandwidth between client and server.
That said, it really depends on your clients' performance requirements; a single connection over a good network can already provide good enough performance.
Even if you use one connection and 100 sessions, you ultimately end up using 100 threads; it is the same as using 10 connections × 10 sessions = 100 threads. You are good until you reach your system's resource limits.

SocketIO scaling architecture and large rooms requirements

We are using Socket.IO in a large chat application.
At certain points we want to dispatch "presence" (user availability) updates to all other users.
io.in('room1').emit('availability:update', {userid: 'xxx', isAvailable: false});
room1 may contain a lot of users (500 max). We observe a significant rise in our NodeJS load when many availability updates are triggered.
The idea was to use something similar to the Redis store for Socket.IO, and have web browser clients connect to different NodeJS servers.
When we want to emit to a room, we dispatch the "emit to room1" payload to all other NodeJS processes using Redis Pub/Sub, ZeroMQ, or even RabbitMQ for persistence. Each process then calls its own io.in('room1').emit to target its subset of connected users.
One of the concerns with this setup is that the inter-process communication may become quite busy, and I was wondering if it may become a problem in the future.
Here is the architecture I have in mind.
Could you batch changes and only distribute them every 5 seconds or so? In other words, on each node server, simply take a 'snapshot' every X seconds of the current state of all users (e.g. 'connected', 'idle', etc.) and then send that to the other relevant servers in your cluster.
Each server then does the same: every 5 seconds or so it sends the same kind of message, containing only the changes in user state, as one batched object array to all connected clients.
Right now, I'm rather surprised you are attempting to send information about each user as a separate packet. Batching seems like it would solve your problem quite well, as it would also make better use of standard packet sizes that are normally transmitted via routers and switches.
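A rough sketch of that batching idea (the availability:batch event name and the broadcast callable are placeholders, not Socket.IO API):

import asyncio

# userid -> latest state change since the last flush; coalescing into a
# dict means rapid flip-flops collapse to a single entry per user.
pending = {}

def record_change(userid, is_available):
    pending[userid] = {"userid": userid, "isAvailable": is_available}

async def flush_loop(broadcast, interval=5.0):
    while True:
        await asyncio.sleep(interval)
        if pending:
            batch = list(pending.values())
            pending.clear()
            broadcast("availability:batch", batch)  # one message carries N changes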
You are looking for this library:
https://github.com/automattic/socket.io-redis
Which can be used with this emitter:
https://github.com/Automattic/socket.io-emitter
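Those two libraries are for Node; for comparison, the same pattern in Python's python-socketio looks roughly like this (assuming its Redis manager API; the URL is a placeholder):

import socketio  # third-party: pip install python-socketio

# Server processes share room broadcasts through Redis pub/sub.
mgr = socketio.AsyncRedisManager("redis://localhost:6379/0")
sio = socketio.AsyncServer(client_manager=mgr)

# A separate emitter-only process (analogous to socket.io-emitter)
# publishes to rooms without accepting any client connections itself.
emitter = socketio.RedisManager("redis://localhost:6379/0", write_only=True)
emitter.emit("availability:update", {"userid": "xxx", "isAvailable": False}, room="room1")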
Regarding the available-users function, I think there are two alternatives: you can create a "users queue" containing the "public data" of connected users, or you can use the exchanges' binding information to show which users are connected. If you use a users queue, it will be the same for each room, and you could update it when a user leaves by popping their state message from the queue (although you would have to reorganize all the queue messages to do so).
Nevertheless, I think that RabbitMQ is designed for asynchronous communication, and it is not a very good fit for keeping a register of user presence. I think it is better suited to applications where you don't know when the user will receive the message or their "real availability" ("fire and forget" architectures). ZeroMQ requires more work from scratch, but you could implement something more specific to your situation with better performance.
A publish/subscribe example from the RabbitMQ site could be a good starting point for a design like yours, where a message is sent to several users at the same time. In summary, I would create two queues per user (for receiving and sending messages) and use a specific exchange for each chat room, using the exchange bindings to track which users are in each room. You always have two queues per user, and you create exchanges to bind them to one or more chat rooms.
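A minimal sketch of that layout using the pika client (the exchange and queue names are illustrative):

import pika  # third-party: pip install pika

# One fanout exchange per chat room; each user binds a private queue to
# the rooms they are in, so a single publish reaches every member.
conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.exchange_declare(exchange="room1", exchange_type="fanout")

# Per-user receive queue (server-named, deleted when the user disconnects).
q = ch.queue_declare(queue="", exclusive=True).method.queue
ch.queue_bind(exchange="room1", queue=q)

ch.basic_publish(exchange="room1", routing_key="", body=b"user xxx is offline")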
I hope this answer is useful to you; sorry for my bad English.
This is the common approach for sharing data across several Socket.IO processes. You have done well so far with a single process and a single thread. I would assume that you could pick any of the mentioned technologies for communicating shared data without hitting performance issues.
If all you need is IPC, you could perhaps have a look at Faye. If, however, you need to have some data persisted, you could start a Redis cluster with as many Redis masters as you have CPUs, though this will add minor networking noise for Pub/Sub.
