I have around 5700 messages (each message is a 100x100 image as a Base64 string) which I emit from the server to the client from within a for-loop, pretty fast:
[a pretty big array].forEach((imgAsBase64) => {
  io.emit('newImgFromServer', imgAsBase64);
});
The client only receives between 1700 and 3000 of them in total before I get a:
disconnected due to = transport error
socket connected
Once the socket re-connects (and the for-loop has not ended), the emission of new messages from within the loop resumes, but I have lost the previous ones forever.
How can I make sure that the client receives all of the messages every time?
This question is an interesting example of "starving the event loop". If you're in a tight for loop for some period of time with no await in the loop, then you don't let the event loop process any other events during the duration of the for loop. If some events need to be processed during that time for things to work properly, you get problems. Read on for how that applies to this case.
Both client and server need some occasional cycles to process housekeeping pings and pongs in the socket.io protocol. If you firehose messages from one end to the other in a non-stop for loop, you can starve the ability to process those housekeeping messages, and each end will think the connection has timed out (it has not received the housekeeping messages when it should have, which is usually a sign of a lost or inoperative connection). In reality, the housekeeping messages are sitting in the event loop waiting to be processed, but if you never give the event loop a chance to run them, the timeout logic will conclude that they never arrived.
So, you have to make sure you give both ends enough occasional cycles to process those housekeeping messages. The typical way to do that is to make sure you aren't firehosing messages: send N messages, then pause for a short period of time (enough time for the event loop to service any incoming network events), then send N more, pause, and so on.
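As a minimal sketch of that pacing (the batch size of 100 and the 50ms pause are illustrative, not required values):

// pace the original loop: emit a chunk of messages, then yield to the event loop
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function sendAll(images) {
  for (let i = 0; i < images.length; i++) {
    io.emit('newImgFromServer', images[i]);
    if (i % 100 === 99) {
      await delay(50); // let the event loop service socket.io housekeeping traffic
    }
  }
}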
In addition, you could make this whole process a lot more efficient by combining a number of the Base64 strings into a single message. You can probably batch them into arrays of 100, send one array at a time, and repeat until they are all sent. Then, obviously, change the client to expect an array of Base64 strings instead of a single one. This results in far fewer messages to send (which is more efficient), but you will still need to pause every so often to let the server process things in the event loop.
Exactly how many messages to send before pausing is something that could be figured out via trial and error. But if you put 100 images into a single message, send 10 of these larger messages (which sends 1,000 images), and then pause for even just 50ms, that should be enough time for the event loop to service any inbound ack messages from socket.io and avoid the timeout. A pause implemented with setTimeout() puts its callback in line behind most other events already waiting in the event loop, so even a short setTimeout() pause tends to accomplish the goal of letting the event loop process the things that were waiting to run.
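A hedged sketch combining both ideas: 100 images per message, and a 50ms pause after every 10 such messages (the batched event name 'newImgBatchFromServer' is made up for the example):

const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms)); // same helper as above

async function sendAllBatched(images) {
  for (let i = 0; i < images.length; i += 100) {
    io.emit('newImgBatchFromServer', images.slice(i, i + 100)); // 100 images per message
    if ((i / 100) % 10 === 9) {
      await delay(50); // pause after every 10 batches (1,000 images)
    }
  }
}

The client would then register a matching handler that iterates over the received array.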
If end-to-end time is super important, you could experiment with sending more messages at once and/or shortening the pause, but you don't want to end up with a setting that is close to where you get a timeout (you want some safety factor).
Related
Can the execution of an expressJS method be delayed for 30 days or more just by using setTimeout?
Let's say I want to create an endpoint /sendMessage that sends a message to my other app after a timeout of 30 days. Will my expressJS method execution last long enough to fire this message after this delay?
If your server runs continuously for 30 days or more, timers can do that, though note that a single setTimeout() delay is capped at 2^31 - 1 ms (about 24.8 days), so a 30-day delay has to be chained across more than one setTimeout(). And it is probably not smart to rely on the fact that your server never, ever has to restart.
There are 3rd party programs/modules designed explicitly for this. If you don't want to use one of them, then what I have done in the past is write each future firing time into a JSON file and set a timer for it with setTimeout(). If the timer successfully fires, I remove that time from the JSON file.
So, at any point in time, the JSON file always contains a list of times in the future that I want timers to fire for. Any timer that fires is immediately removed from the JSON file.
Anytime my server starts up, I read the times from the JSON file and reconfigure the setTimeout() for each one.
This way, even if my server restarts, I won't lose any of the timers.
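A minimal sketch of that approach (the file name, the addTimer() helper, and the sendMessage() callback are made up for the example; note the chaining needed because one setTimeout() cannot span 30 days):

const fs = require('fs');

const FILE = './timers.json';      // hypothetical storage file
const MAX_DELAY = 2147483647;      // setTimeout() cap: 2^31 - 1 ms (~24.8 days)

function loadTimes() {
  try { return JSON.parse(fs.readFileSync(FILE, 'utf8')); }
  catch { return []; }             // no file yet means no pending timers
}

function saveTimes(times) {
  fs.writeFileSync(FILE, JSON.stringify(times));
}

function schedule(fireTime, fn) {
  const delay = fireTime - Date.now();
  if (delay > MAX_DELAY) {
    // a 30-day delay will not fit in one setTimeout(), so chain them
    setTimeout(() => schedule(fireTime, fn), MAX_DELAY);
    return;
  }
  setTimeout(() => {
    // the timer successfully fired: remove its time from the JSON file
    saveTimes(loadTimes().filter((t) => t !== fireTime));
    fn();
  }, Math.max(0, delay));
}

function addTimer(fireTime, fn) {
  saveTimes([...loadTimes(), fireTime]); // persist first, then arm the timer
  schedule(fireTime, fn);
}

// on server startup, re-arm every time still listed in the file
for (const t of loadTimes()) schedule(t, sendMessage); // sendMessage is hypothetical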
In case you were wondering, the way nodejs creates timers, it costs you essentially nothing to have a bunch of future timers configured. Nodejs keeps the timers in a sorted linked list, and the event loop just checks the time for the next timer to fire: the one at the front of the sorted list. The rest of the timers are not looked at until they get to the front of that list. This means the only time it costs anything to have lots of future timers is when inserting a new timer into the sorted list; there is no recurring cost in the event loop to having lots of pending timers present.
I've edited a library (ddp-client) to make use of a heartbeat timer, which sends out a ping every X seconds. However, I'm also doing some work with the bluetooth hardware, which I believe is responsible for pings sometimes not returning in time (because the bluetooth seems to block the event loop temporarily). Is there a way to prioritise a certain function on the event loop, so it will always be executed before others? I don't think setImmediate would be suitable here, since I don't know exactly when the response message from the server would arrive.
The implementation of the timer is roughly as follows:
let pingOutstanding = false;

setInterval(() => {
  if (pingOutstanding) {
    // previous ping did not resolve in time
    closeConnection();
  } else {
    pingOutstanding = true;
    sendPing(); // the pong handler resets pingOutstanding to false
  }
}, X * 1000); // every X seconds
This works perfectly fine if I run it without the bluetooth module. When I enable the bluetooth module, pings sometimes do not get resolved because a bluetooth scan can take longer than the interval of the timer, leading to a disconnect even though the connection is actually still alive.
Is there a way to prioritise a certain function on the event loop, so it will always be executed before others?
No. node.js does not have a way for one piece of code to pre-empt another and always have priority. Any code that "hogs" the CPU or otherwise blocks the event loop for a while should either be fixed to not do that, or it can be moved into its own child process that you communicate with via any one of the many interprocess communication schemes.
Or, alternatively, if the ping timer is really, really important to run on time, then maybe it should be in its own child process where it can always just run as scheduled with no chance of something else interrupting it.
Implementing precise timers like this is one thing that node.js is just not good at. Because it runs all your Javascript in a single thread, keeping a server instantly responsive or keeping timers running precisely on time requires that nobody ever blocks the event loop or hogs the CPU for longer than your timing threshold. The usual work-around is to move things into their own child process where they get their own priority with the CPU.
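As a hedged sketch of that child-process approach (the file names and IPC message strings are made up; sendPing() and closeConnection() are assumed to already exist in the parent):

// timer.js: a child process dedicated to keeping the heartbeat schedule,
// running in its own event loop so nothing in the parent can delay it
let pingOutstanding = false;

setInterval(() => {
  if (pingOutstanding) {
    process.send('timeout');   // the last ping was never answered
  } else {
    pingOutstanding = true;
    process.send('send-ping'); // ask the parent to send a ping
  }
}, 5000);

process.on('message', (msg) => {
  if (msg === 'pong') pingOutstanding = false;
});

// main.js: fork the timer child and bridge it to the connection code
const { fork } = require('child_process');

const timer = fork('./timer.js');
timer.on('message', (msg) => {
  if (msg === 'send-ping') sendPing();      // assumed existing function
  if (msg === 'timeout') closeConnection(); // assumed existing function
});
// and from wherever the pong arrives: timer.send('pong')

One caveat: the child keeps time reliably, but the parent still has to be unblocked long enough to relay the ping/pong messages, so this mainly helps when the blocking is intermittent.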
Right now I have a node.js based application (A) that is connected to another application (B) with a tcp socket in order to receive data.
The node.js application has also some validation and processing logic that is executed right after receiving the data from the socket. Finally it writes a result to a database (async).
So far everything worked fine and performance was good. Application B sent up to 20 messages per second. But since Application B now has a new feature, there may be up to 2000 messages per second. As soon as Application B sends a lot of messages, Application A almost freezes (memory also goes up from 30 MByte to 100 MByte).
After analyzing the log files, it seems very likely that this is caused by the sheer number of messages and the number of calls to the socket's data callback. I only need a message from Application B every 20ms, but filtering the messages right after receiving them does not work either (a sketch of that filter follows the list below). I did some performance measuring:
Processing an incoming message takes about 12ms; maybe I can optimize this part a little.
The filtering from above took only about 30 microseconds.
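For reference, a minimal sketch of the kind of time-based filter described above (assuming a Node net socket where each 'data' event carries one message, and a hypothetical processMessage() function):

let lastAccepted = 0;

socket.on('data', (chunk) => {
  const now = Date.now();
  if (now - lastAccepted < 20) return; // drop everything inside the 20ms window
  lastAccepted = now;
  processMessage(chunk);               // the ~12ms validation/processing step
});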
Right now I am thinking about a scenario like this:
B -> A (socket part, one process) -> C (filters and buffers messages, one process) <- A (processing part, one process)
I read about Redis pub/sub, but I do not really know if it fits my requirements here. The sequence of the messages is critical.
How would you solve this problem? Is there maybe a point I am missing?
I have a business requirement where I have to process messages with certain priorities, say priority1 and priority2.
We have decided to use 2 JMS queues where priority1 messages will be sent to priority1Queue and priority2 messages will be sent to priority2Queue.
The response-time requirement for priority1Queue messages is that the moment a message is in the queue, I need to read it, process it, and send the response back to, say, another queue within 1 second. This means I should process these messages immediately when they arrive on priority1Queue. I will have hundreds of such messages coming in per second, so I will definitely need multiple concurrent consumers on this queue so that messages can be consumed and processed within 1 second.
The response-time requirement for priority2Queue messages is that I need to read, process, and send the response back to, say, another queue within 1 minute. So the requirement for priority2 messages is less strict than for priority1; however, I still need to respond within a minute.
Can you suggest the best possible approach for this, so that I can concurrently read messages from both queues and give higher priority to priority1 messages, ensuring each priority1 message can be read and processed within 1 second?
Mainly, how can a message be read and fed to a processor so that the next message can be read, and so on?
I need to write a Java-based component that does the reading and processing.
I also need to ensure this component is highly available and doesn't run out of memory. I will have this component running across multiple JVMs and multiple application servers, so I can have multiple clusters running this Java component.
First off, the requirement to process within 1 second is not going to depend on your messaging approach, but more on the actual processing of the message and the raw CPUs available. Picking up 100s of messages per second from a queue is child's play; the JMS provider is most likely not the issue. Depending on your deployment platform (Tomcat, Mule, JEE, whatever), there should be a way to have n listeners to scale up appropriately. Because the messages exist on the queue until you pick them up, it's doubtful you'll run out of memory. I've done these apps and processed many more messages without problems.
Second, there are a number of strategies for prioritizing messages that don't necessarily require different queues, using JMS message priorities. I'm leaning towards using message priorities and message selectors, where one group of listeners takes care of the highest-priority messages and another listener filters for lower-priority ones but makes sure it does enough to get them out within a minute.
You could also do something where a lower-priority message gets rewritten back to the same queue with a higher priority, based on how close to the 1-minute deadline you are. I know that sounds wrong, but reading/writing from JMS has very little overhead (at least compared to the equivalent column-driven database transactions), and the listener for lower-priority messages could just continually increase the priority until the message has to be processed.
Or, simpler, just have more listeners on the high-priority queue/messages than on the lower-priority ones; an imbalance in the number of processes per priority might be all it needs.
Lots of possibilities, time for a PoC.
I'm pretty new to ZMQ and I'm working with the NodeJS binding. I have an application that uses PUSH/PULL sockets. On one side I PUSH data to some nodes that receive and process it through the PULL socket. Sometimes I have to kill one or more nodes of my application, and it can happen that these nodes still have some data in the PULL socket left to be processed. I don't want to lose this data, so I was wondering if there is a way to access ZMQ's PULL socket queue to check if there are still messages to be processed.
I actually couldn't find anything in the specs of ZMQ and the NodeJS binding, so maybe I'm getting the whole concept wrong.
If you kill a process, then any data in that process's buffers will be lost.
Instead of killing the process forcefully, you should always find a way to allow processes to shut down gracefully. Here, you can send a "KILL" message to the PULL socket; the process can then read that and exit when it receives it. If you can flush the socket buffer (it depends on whether other processes are still sending to it), you can do that and then exit when there are no more messages to read.
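A minimal sketch of that graceful-shutdown pattern, assuming the classic callback-style Node binding (the zeromq v5 API) and a single PUSH feeding this PULL socket (the address and processData() are made up):

const zmq = require('zeromq'); // classic v5-style binding
const sock = zmq.socket('pull');

sock.connect('tcp://127.0.0.1:3000'); // example address

sock.on('message', (msg) => {
  if (msg.toString() === 'KILL') {
    // everything the sender queued before the KILL has already been
    // delivered to this handler, since messages arrive in order per sender
    sock.close();
    process.exit(0);
  }
  processData(msg); // hypothetical processing function
});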
I'm posting the solution I found. It's not really a solution, as I'm not using the ZMQ socket to check that there are no more messages in the queue; it's just a workaround/hack that came to mind to make things work. I don't have time to write the queue handling myself, so here's how I solved the problem:
Whenever a process receives a message to process, it stores a timestamp via new Date().getTime(). Whenever a process needs to be killed, a kill message is sent to it. When the process receives that message, it starts a check with setInterval. Every x seconds (I used 10; it can be more or less) the interval fires a function that checks whether the last received message is old enough: it takes a fresh timestamp, subtracts the saved one, and if the result is greater than y (100 seconds in my case) the last message counts as old enough. If it is, it means no more messages have been received (no more messages in the queue), so it kills the process; otherwise it does nothing.
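A minimal sketch of that workaround (the 10-second check interval and 100-second idle threshold follow the description above; sock and processData() are hypothetical names):

let lastMessageAt = new Date().getTime();

sock.on('message', (msg) => {
  lastMessageAt = new Date().getTime();
  if (msg.toString() === 'KILL') {
    startDrainCheck(); // begin watching for the queue to run dry
    return;
  }
  processData(msg);
});

function startDrainCheck() {
  setInterval(() => {
    const idleMs = new Date().getTime() - lastMessageAt;
    if (idleMs > 100 * 1000) { // nothing received for 100 seconds: queue is drained
      process.exit(0);
    }
  }, 10 * 1000); // check every 10 seconds
}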