Right now I have a Node.js-based application (A) that is connected to another application (B) over a TCP socket in order to receive data.
The Node.js application also has some validation and processing logic that runs right after the data arrives on the socket. Finally, it writes a result to a database (asynchronously).
So far everything worked fine and performance was good: Application B sent up to 20 messages per second. But Application B now has a new feature, and there may be up to 2000 messages per second. As soon as Application B sends that many messages, Application A almost freezes (memory also climbs from 30 MB to 100 MB).
After analyzing the log files, it is very likely that this is caused by the sheer number of messages and the corresponding number of calls to the socket's data callback. I only need one message from Application B every 20 ms, but filtering the messages right after receiving them does not help either. I did some performance measurements:
- Processing the incoming message takes about 12 ms; maybe I can optimize this part a little.
- The filtering mentioned above took only about 30 microseconds.
Right now I am thinking about a scenario like this:
B -> A (socket part, one process) -> C (filters and buffers messages, one process) <- A (processing part, one process)
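For illustration, the same decoupling can be sketched in a single process first: keep the socket's data handler down at the cheap ~30 µs filter and run the expensive 12 ms processing from a 20 ms timer, dropping the messages in between. In the sketch below, parseMessage, isRelevant, and processAndStore are placeholders for the existing logic, and the host/port are made up:

const net = require('net');

let latest = null; // most recent relevant message; anything older is dropped

const socket = net.connect({ host: 'app-b.local', port: 4000 }); // placeholder address
socket.on('data', (chunk) => {
  // Keep this handler as cheap as possible: framing/parsing and the ~30µs
  // filter only. The 12ms processing must never run here.
  const msg = parseMessage(chunk);   // placeholder: your message framing/parsing
  if (isRelevant(msg)) latest = msg; // placeholder: your filter
});

// Sample every 20ms and run the expensive work on the latest message only.
setInterval(() => {
  if (latest === null) return;
  const msg = latest;
  latest = null;
  processAndStore(msg); // placeholder: your 12ms processing + async DB write
}, 20);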
I read about Redis pub/sub, but I do not really know whether it fits my requirements here. The sequence of the messages is critical.
How would you solve this problem? Is there maybe a point I am missing?
Related
I have around 5700 messages (each message is a 100x100 image as a Base64 string) which I emit from the server to the client from within a for-loop, pretty fast:
[a pretty big array].forEach((imgAsBase64) => {
  io.emit('newImgFromServer', imgAsBase64);
});
The client only receives between 1700 and 3000 of them in total before I get a:
disconnected due to = transport error
socket connected
Once the socket re-connects (and the for-loop has not ended), the emission of new messages from within the loop resumes, but I have lost the previous ones forever.
How can I make sure that the client receives all of the messages every time ?
This question is an interesting example of "starving the event loop". If you're in a tight for loop for some period of time with no await in the loop, then you don't let the event loop process any other events during the duration of the for loop. If some events need to be processed during that time for things to work properly, you get problems. Read on for how that applies to this case.
Both client and server need some occasional cycles to process the housekeeping pings and pongs in the socket.io protocol. If you firehose messages from one end to the other in a non-stop for loop, you can starve the ability to process those housekeeping messages, and one end will conclude that the connection has timed out (not receiving the housekeeping messages when expected is usually a sign of a lost or inoperative connection). In reality, the housekeeping messages are sitting in the event loop waiting to be processed, but if you never give the event loop a chance to process them, the timeout logic will think they never arrived.
So, you have to make sure you give both ends enough occasional cycles to process those housekeeping messages. The typical way to do that is to just make sure that you aren't fire hosing messages. Send N messages, then pause for a short period of time (enough time for the event loop to be able to service any incoming network events). Then send N more, pause, etc...
In addition, you could make this whole process a lot more efficient by combining a number of the Base64 strings into a single message. You can probably just batch them into arrays of 100, send one array at a time, and repeat until they are all sent. Then, obviously, change the client to expect an array of Base64 strings instead of a single one. This results in far fewer messages to send (which is more efficient), but you will still need to pause every so often to let the server process things in the event loop.
Exactly how many messages to send before pausing is something that could be figured out via trial and error, but if you put 100 images into a single message and send 10 of these larger messages (which sends 1,000 images) and then pause for even just 50ms, that should be enough time for the event loop to service any inbound ack messages from socket.io to avoid the timeout. Any sort of pause using setTimeout() makes the setTimeout() get in line behind most other messages that are waiting in the event loop so even a short pause with setTimeout() tends to accomplish the goal of letting the event loop process the things that were waiting to be run.
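A hedged sketch of that batch-and-pause pattern (the event name, batch sizes, and pause length are illustrative, taken from the numbers above; the client would listen for the array-carrying event instead of the original one):

const BATCH = 100;   // images per socket.io message
const GROUP = 10;    // messages to send before pausing (1,000 images)
const PAUSE_MS = 50; // time for the event loop to service pings/acks

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function sendAllImages(io, images) {
  for (let i = 0; i < images.length; i += BATCH * GROUP) {
    const groupEnd = Math.min(i + BATCH * GROUP, images.length);
    for (let j = i; j < groupEnd; j += BATCH) {
      // One message now carries up to 100 Base64 strings.
      io.emit('newImgBatchFromServer', images.slice(j, j + BATCH));
    }
    await sleep(PAUSE_MS); // let housekeeping messages be processed
  }
}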
If end-to-end time was super important, you could experiment with sending more messages at once and/or changing the pause time, but you don't want to end with a setting that is close to where you get a timeout (you want some safety factor).
I have a queue and 3 consumers bound to the queue. Each consumer has a prefetch_count of 250 (or, say, X), and manual acknowledgement is done every prefetch_count/2 (i.e. 125) messages, meaning a consumer acknowledges 125 messages in a single go (which reduces round trips and hence increases performance). Everything works fine as expected; the only issue arises when there are no new messages in the queue and a consumer has unacknowledged messages whose count is less than 125.
As the acknowledgement is only sent when the count reaches 125, these unacknowledged messages keep getting requeued. How can I solve this?
How can I know that my consumer has no new messages to process, so that it can acknowledge all the remaining messages waiting to be acknowledged?
If I understand your scenario correctly, it sounds as though you have a series of messages that get published all at once, and then you process them in batches until you have none left. The problem is that if the total number of messages is not divisible by 125, your final batch never gets acknowledged. Clearly this is a logical problem, but it sounds like you are wondering whether there is an easy way to deal with it.
Your question "How can I know that my consumer has no new messages to process?" is based upon a premise which RabbitMQ does not support -- namely, the "end" of a sequence of messages. RabbitMQ consumers expect to continue to receive messages indefinitely, so from their perspective, there is no such thing as "done."
Thus, any such concept must be implemented elsewhere, higher up in your application logic. Here are some options for you to consider:
1. If you know in advance how many messages will be processed, then send that count first and store it. Send the final ack once you have processed that number (assuming no duplicates were processed).
2. Monitor the in-memory collection at the consumer (all prefetched messages reside there until they are actually processed). When it drops below 125, you know you are on a final batch smaller than that.
3. Similar to #1, send a special "last message" that the consumer can recognize and acknowledge everything upon receipt (see the sketch after the caveat below).
Caveat: I would argue that there is a deeper design problem that is leading you down a path where this would ever be desirable in the first place. Each message should be 100% independent of any other message. If that assumption is violated, you will have a very fragile system.
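For what it's worth, a hedged sketch of option #3 in Node using amqplib (the queue name, the sentinel convention, and processMessage are all made up for illustration):

const amqp = require('amqplib');

async function consume() {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.prefetch(250);

  let unacked = 0;
  await ch.consume('work-queue', (msg) => { // 'work-queue' is a placeholder
    const isSentinel = msg.properties.type === 'end-of-batch'; // invented convention
    if (!isSentinel) processMessage(msg); // placeholder for your handler
    unacked += 1;
    // Multi-ack every 125 messages, or as soon as the sentinel arrives.
    if (unacked >= 125 || isSentinel) {
      ch.ack(msg, true); // allUpTo=true acks this message and everything before it
      unacked = 0;
    }
  }, { noAck: false });
}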
I'm working on what's basically a highly available, distributed message-passing system. The system receives messages from someplace over HTTP or TCP, performs various transformations on them, and then sends them to one or more destinations (also using TCP/HTTP).
The system has a requirement that all messages sent to a given destination are in-order, because some messages build on the content of previous ones. This limits us to processing the messages sequentially, which takes about 750ms per message. So if someone sends us, for example, one message every 250ms, we're forced to queue the messages behind each other. This eventually introduces intolerable delay in message processing under high load, as each message may have to wait for hundreds of other messages to be processed before it gets its turn.
In order to solve this problem, I want to be able to parallelize our message processing without breaking the requirement that we send them in-order.
We can easily scale our processing horizontally. The missing piece is a way to ensure that, even if messages are processed out-of-order, they are "resequenced" and sent to the destinations in the order in which they were received. I'm trying to find the best way to achieve that.
Apache Camel has a thing called a Resequencer that does this, and it includes a nice diagram (which I don't have enough rep to embed directly). This is exactly what I want: something that takes out-of-order messages and puts them in-order.
But I don't want it to be written in Java, and I need the solution to be highly available (i.e. resistant to typical system failures like crashes or system restarts), which I don't think Apache Camel offers.
Our application is written in Node.js, with Redis and PostgreSQL for data persistence. We use the Kue library for our message queues. Although Kue offers priority queueing, the feature set is too limited for the use case described above, so I think we need an alternative technology to work in tandem with Kue to resequence our messages.
I was trying to research this topic online, and I can't find as much information as I expected. It seems like the type of distributed architecture pattern that would have articles and implementations galore, but I don't see that many. Searching for things like "message resequencing", "out of order processing", "parallelizing message processing", etc. turns up solutions that mostly just relax the "in-order" requirement based on partitions or topics or whatnot, or that talk about parallelization on a single machine. I need a solution that:
Can process multiple messages simultaneously, in any order.
Will always send messages in the order in which they arrived in the system, no matter what order they were processed in.
Is usable from Node.js.
Can operate in an HA environment (i.e. multiple instances of it running against the same message queue at once without inconsistencies).
Our current plan, which makes sense to me but which I cannot find described anywhere online, is to use Redis to maintain sets of in-progress and ready-to-send messages, sorted by their arrival time. Roughly, it works like this:
When a message is received, it is added to the in-progress set.
When message processing finishes, the message is added to the ready-to-send set.
Whenever the same message is at the front of both the in-progress and ready-to-send sets, that message can be sent, and it will be in order.
I would write a small Node library that implements this behavior with a priority-queue-esque API using atomic Redis transactions. But this is just something I came up with myself, so I am wondering: Are there other technologies (ideally using the Node/Redis stack we're already on) that are out there for solving the problem of resequencing out-of-order messages? Or is there some other term for this problem that I can use as a keyword for research? Thanks for your help!
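For concreteness, a minimal sketch of that plan using the ioredis client (key names, the send function, and the message-id scheme are all mine; with multiple instances, the head-check-and-pop below would need to be made atomic, e.g. with a Lua script):

const Redis = require('ioredis');
const redis = new Redis();

// On arrival: record the message id with its arrival time as the score.
async function onArrival(id, arrivalMs) {
  await redis.zadd('in-progress', arrivalMs, id);
}

// When processing finishes: stash the result, mark it ready, try to drain.
async function onProcessed(id, payload) {
  await redis.hset('payloads', id, payload);
  const score = await redis.zscore('in-progress', id);
  await redis.zadd('ready', score, id);
  await drain();
}

// While the head of both sets is the same message, it is safe to send it.
async function drain() {
  for (;;) {
    const [headInProgress] = await redis.zrange('in-progress', 0, 0);
    const [headReady] = await redis.zrange('ready', 0, 0);
    if (!headInProgress || headInProgress !== headReady) return;
    await send(await redis.hget('payloads', headInProgress)); // send() is yours
    await redis.multi()
      .zrem('in-progress', headInProgress)
      .zrem('ready', headInProgress)
      .hdel('payloads', headInProgress)
      .exec();
  }
}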
This is a common problem, so there are surely many solutions available. This is also quite a simple problem, and a good learning opportunity in the field of distributed systems. I would suggest writing your own.
You're going to have a few problems building this, namely:
1: Guaranteed order of messages
2: Exactly-once delivery
You've found number 1, and you're solving it by resequencing the messages in Redis, which is an OK solution. Number 2, however, is not solved.
It looks like your architecture is not geared towards fault tolerance, so currently, if a server crashes, you restart it and continue with your life. This works fine when processing all requests sequentially, because then you know exactly where you crashed, based on what the last successfully completed request was.
What you need is either a strategy for finding out what requests you actually completed, and which ones failed, or a well-written apology letter to send to your customers when something crashes.
If Redis is not sharded, it is strongly consistent. It will fail, and possibly lose all data, if that single node crashes, but you will not have any problems with out-of-order data or data popping in and out of existence. A single Redis node can thus uphold the guarantee that if a message is inserted into the to-process set and then into the done set, no node will see the message in the done set without it also being in the to-process set.
How I would do it
Using Redis seems like too much fuss, assuming the messages are not huge, that losing them is acceptable if a process crashes, and that running them more than once, or even running multiple copies of a single request at the same time, is not a problem.
I would recommend setting up a supervisor server that takes incoming requests, dispatches each to a randomly chosen slave, stores the responses, and puts them back in order before sending them on. You said you expect processing to take about 750ms. If a slave hasn't responded within, say, 2 seconds, dispatch the request again to another randomly chosen node after a random 0-1 second delay. The first one responding is the one we're going to use. Beware of duplicate responses.
If the retried request also fails, double the maximum wait time. After 5 failures or so, each waiting up to twice as long as the previous one (or any multiple greater than one), we probably have a permanent error, so we should ask for human intervention. This algorithm is called exponential backoff, and it prevents a sudden spike in requests from taking down the entire cluster. Without the random interval, retrying after a fixed n seconds would probably cause a self-inflicted DoS attack every n seconds until the cluster dies, if it ever gets a big enough load spike.
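A hedged sketch of that supervisor retry loop (withTimeout, worker.process, and the worker list are invented names; the 2-second initial window and 0-1 second jitter come from the description above, and for simplicity the slow attempt is abandoned rather than raced against the retry):

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

function withTimeout(promise, ms) {
  return Promise.race([
    promise,
    sleep(ms).then(() => { throw new Error('timeout'); }),
  ]);
}

async function dispatchWithRetry(request, workers, maxAttempts = 5) {
  let window = 2000; // initial response window: 2 seconds
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const worker = workers[Math.floor(Math.random() * workers.length)];
    try {
      return await withTimeout(worker.process(request), window);
    } catch (err) {
      window *= 2;                       // exponential backoff: double the window
      await sleep(Math.random() * 1000); // 0-1s jitter avoids synchronized retries
    }
  }
  throw new Error('permanent failure: ask for human intervention');
}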
There are many ways this could fail, so make sure this system is not the only place data is stored. However, this will probably work 99+% of the time, it's probably at least as good as your current system, and you can implement it in a few hundred lines of code. Just make sure your supervisor is using asynchronous requests so that you can handle retries and timeouts. JavaScript is by nature single-threaded, so this is slightly trickier than normal, but I'm confident you can do it.
I have a business requirement where I have to process messages at two priorities, say priority1 and priority2.
We have decided to use 2 JMS queues where priority1 messages will be sent to priority1Queue and priority2 messages will be sent to priority2Queue.
The response-time requirement for priority1Queue messages is that the moment a message is in the queue, I need to read it, process it, and send the response back to, say, another queue within 1 second. Hundreds of such messages will be coming in per second on priority1Queue, so I will definitely need multiple concurrent consumers on this queue so that each message is consumed and processed within 1 second of arriving.
The response-time requirement for priority2Queue messages is that I need to read, process, and send the response back within 1 minute. So priority2 messages are less urgent than priority1 messages, but I still need to respond within a minute.
Can you suggest the best possible approach for this, so that I can concurrently read messages from both queues and give higher priority to priority1 messages, ensuring each priority1 message is read and processed within 1 second? In particular, how can a message be read and handed off to a processor so that the next message can be read, and so on?
I need to write a Java-based component that does the reading and processing.
I also need to ensure this component is highly available and doesn't run into OutOfMemoryError. I will have this component running across multiple JVMs and multiple application servers, so I can have multiple clusters running this Java component.
First off, the requirement to process within 1 second is not going to depend on your messaging approach, but on the actual processing of the message and the raw CPUs available. Picking up hundreds of messages per second from a queue is child's play; the JMS provider is most likely not the issue. Depending on your deployment platform (Tomcat, Mule, JEE, whatever), there should be a way to have n listeners so you can scale up appropriately. Because the messages stay on the queue until you pick them up, it's doubtful you'll run out of memory. I've built these apps and processed many more messages without problems.
Second, there are a number of strategies for prioritizing messages that don't necessarily require different queues, using message priorities instead. I'm leaning towards message priorities and message selectors, where one group of listeners takes care of the highest-priority messages and another listener filters off the lower-priority ones but makes sure it does enough to get them out within a minute.
You could also have a lower-priority message rewritten back to the same queue with a higher priority, based on how close to the 1-minute deadline it is. I know that sounds wrong, but reading from and writing to JMS has very little overhead (at least compared to the equivalent column-driven database transactions), and the listener for lower-priority messages could simply keep raising the priority until the message has to be processed.
Or, simpler, just have more listeners on the high-priority queue/messages than on the lower-priority ones; that imbalance in the number of processors might be all it needs.
Lots of possibilities, time for a PoC.
The title says it all, but here's a more in-depth explanation:
I made a chat server for some friends and me, but one of the last issues I need to iron out is that when one of them disconnects, there's no indication of this to the others connected to the server. I'm planning to start a separate thread that makes sure some specific data is sent to the server every minute or so (the sending is likewise automated on the client side) to keep tabs on each client. If a client did not send data for a certain amount of time, it would be marked as "disconnected."
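A minimal sketch of that bookkeeping (shown in JavaScript purely to illustrate the idea, since the question includes no code; broadcastDisconnect stands in for however the server would notify the other clients):

const lastSeen = new Map(); // clientId -> timestamp of the last received data

function onData(clientId) {
  lastSeen.set(clientId, Date.now()); // any traffic counts as a heartbeat
}

const TIMEOUT_MS = 2 * 60 * 1000; // silent for 2 minutes => treat as disconnected

setInterval(() => {
  const now = Date.now();
  for (const [clientId, ts] of lastSeen) {
    if (now - ts > TIMEOUT_MS) {
      lastSeen.delete(clientId);
      broadcastDisconnect(clientId); // stand-in: tell the remaining clients
    }
  }
}, 30 * 1000);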
The problem is that, the way my program is set up, it would be impossible to discern whether both threads were receiving the data without dismantling most of the code already there.
Help is greatly appreciated,
~P
Two recv() threads, non-blocking, same socket: do both receive a sent buffer?
No: a given buffer is delivered to only one of the two recv() calls. But both would receive the EOS indication (a return value of zero).