What is the use of heartbeat in stomp protocol? - node.js

Currently I am using the STOMP protocol to send messages to ActiveMQ and to listen for messages. This is done in Node.js using the stompit library.
When the application is under high CPU or memory usage, it stops sending heartbeats to the broker. The broker then redelivers the message that is currently being processed, leading to the same message being processed repeatedly.
With heartbeats disabled, the application seems to work fine, but I am unsure what further issues disabling them might cause. Even when the broker is stopped while messages are being sent, the behaviour seems to be the same with or without heartbeats.
I have read that it is an optional parameter, but I am unable to find its exact use cases.
Can anyone mention scenarios where having no heartbeat can cause issues for the application?

Regarding the purpose of heart-beating, the STOMP 1.2 specification just says:
Heart-beating can optionally be used to test the healthiness of the underlying TCP connection and to make sure that the remote end is alive and kicking.
Heart-beats potentially flow both from the client to the server and from the server to the client so the "remote end" referenced in the spec here could be the client or the server.
For the server, heart-beating is useful to ensure that server-side resources are cleaned up in a timely manner to avoid excessive strain. Server-side resources are maintained for every client connection, and heart-beating helps the broker quickly detect when those connections fail (i.e. heart-beats stop arriving) so it can clean up those resources. If heart-beating is disabled, it's possible that a dead connection would go undetected and the server would keep maintaining its resources for that dead connection in vain.
For a client, heart-beating is useful to avoid message loss when performing asynchronous sends. Messages are often sent asynchronously by clients (i.e. fire and forget). If there were no mechanism to detect connection loss, the client could continue sending messages asynchronously on a dead connection. Those messages would be lost since they would never reach the broker. Heart-beating mitigates this situation.
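For reference, here is a minimal sketch of how heart-beats are typically negotiated with stompit; the host, port and credentials are placeholders for your own broker settings, and '5000,5000' requests a heart-beat in each direction roughly every 5 seconds:

```javascript
// Minimal sketch: connecting with stompit and negotiating heart-beats.
const stompit = require('stompit');

const connectOptions = {
  host: 'localhost',
  port: 61613,
  connectHeaders: {
    host: '/',
    login: 'admin',
    passcode: 'admin',
    // "cx,cy" in milliseconds: we can send a heart-beat every cx ms,
    // and we would like to receive one from the broker at least every cy ms.
    'heart-beat': '5000,5000'
  }
};

stompit.connect(connectOptions, (error, client) => {
  if (error) {
    console.error('connect error:', error.message);
    return;
  }
  // subscribe/send as usual; the library exchanges heart-beat frames in the background
});
```

Setting '0,0' (or omitting the header) disables heart-beating, which is what the question describes.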

Related

How to measure Websocket backpressure or network buffer from client

I am using the ws Node.js package to create a simple WebSocket client connection to a server that is sending hundreds of messages per second. Even with a simple onMessage handler that just console.logs incoming messages, the client cannot keep up. My understanding is that this is referred to as backpressure: incoming messages may start piling up in a network buffer on the client side, or the server may throttle the connection or disconnect altogether.
How can I monitor backpressure, or the network buffer, from the client side? I've found several articles discussing this issue from the perspective of the server, but I have no control over the server and just need to know how slow my client is.
So you don't have control over the server and want to know how slow your client is (it seems like you have already read about backpressure). Then I can only think of using a load-testing tool like Artillery.
Check this blog post; it might help you set up a benchmarking scenario.
https://ma.ttias.be/benchmarking-websocket-server-performance-with-artillery/
Add timing metrics to your onMessage function to track how long it takes to process each message. You can also use RUM instrumentation from APM providers such as New Relic or AppDynamics (paid options), or the free tier of Google Analytics timing.
If you can, include a unique identifier in each message sent so it can be correlated between the client and the server.
Then, for a given window, you can correlate how long a message took to send from the server with how long it spent being processed by the client.
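A minimal sketch of that idea with the ws package might look like this; the URL, the JSON payload shape with its "id" field, and the 5 ms threshold are assumptions:

```javascript
// Rough sketch: time each onMessage handler and log the arrival rate.
const WebSocket = require('ws');

const ws = new WebSocket('wss://example.com/feed'); // placeholder URL

let received = 0;
setInterval(() => {
  console.log(`messages received in the last second: ${received}`);
  received = 0;
}, 1000);

ws.on('message', (data) => {
  received += 1;
  const start = process.hrtime.bigint();

  const msg = JSON.parse(data); // assumes JSON payloads
  handleMessage(msg);           // your real processing goes here

  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  if (elapsedMs > 5) {
    console.warn(`slow handler: ${elapsedMs.toFixed(1)} ms for message ${msg.id}`);
  }
});

function handleMessage(msg) {
  // placeholder for application logic
}
```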
You can't get directly at the network socket buffer associated with your WebSocket traffic, since you're inside the browser sandbox. I checked the WebSocket APIs and there are no properties that expose receive-buffer information.
If you don't have control over the server, you are limited. But you could try some client tricks to simulate throttling.
This heavily assumes you don't mind skipping messages.
One approach would be to open the socket, start receiving events, and buffer them in an in-memory queue/array with your own maximum size. Once the queue is full, turn off the socket. Process enough of the queue, then enable the socket again.
Disabling and re-enabling the socket has a high cost, and events are lost while it is off, but at least your client will not crash.
Once your client is no longer crashing, you can add some extra counters on timestamps and the queue size to determine the threshold at which the client starts crashing.
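A rough sketch of that bounded-queue approach with the ws package, assuming a placeholder URL and illustrative limits:

```javascript
// Sketch of the bounded-queue idea above. Closing and reopening the socket is
// expensive and drops whatever is sent while disconnected; the URL, queue limits
// and batch size are illustrative values.
const WebSocket = require('ws');

const URL = 'wss://example.com/feed';
const MAX_QUEUE = 10000;  // close the socket when the backlog reaches this size
const RESUME_AT = 1000;   // reopen it once the backlog has drained below this size

const queue = [];
let ws = null;

function connect() {
  ws = new WebSocket(URL);
  ws.on('message', (data) => {
    queue.push(data);
    if (queue.length >= MAX_QUEUE) {
      ws.close(); // stop receiving until we have caught up
    }
  });
}

function drain() {
  const batch = queue.splice(0, 100); // process a chunk per tick
  for (const data of batch) {
    // handleMessage(data): application logic goes here
  }
  if (ws.readyState === WebSocket.CLOSED && queue.length <= RESUME_AT) {
    connect(); // re-enable the socket once the backlog is manageable
  }
  setTimeout(drain, 10);
}

connect();
drain();
```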

How to inform my application (producer) that it is blocked by ActiveMQ server?

I am using ActiveMQ Artemis for my application's internal asynchronous processes.
All the connection logic is handled by Spring Integration.
I've encountered a low-disk-space scenario on the Artemis server. This resulted in the Artemis server blocking my message producers without any warning (except a warning in the Artemis server log). However, it could be any other blocking scenario.
The application continued to produce messages without being aware that they were not being written to the queue.
How can my application (the producer) be informed about such an infrastructure issue, so that I can throw an exception or log an error that will be visible on my application's side?
If your application sends messages asynchronously then there's no way for it to know about problems sending the message (except for problems that happen specifically on the client). Sending messages async is "fire-and-forget"; the client just sends them and doesn't really care about what happens to them. You'd need to send them synchronously in order to get any indication of a problem on the broker.
Like ActiveMQ, the Artemis server supports producer flow control (I've personally never used it). The ActiveMQ documentation explicitly states that it also applies to async producers provided you set the Producer Window Size on the connection factory, whereas the Artemis documentation says nothing about it. But the windowing concept is the same, so you should probably give it a shot.

AMQP NodeJS Connection

I have a Node application that will use RabbitMQ and I am using amqplib to access it. I understand that TCP/IP connections to RabbitMQ are expensive while channels are cheap, so the advice is to create one connection and then multiple channels.
What I am slightly confused about is: how do I create that one connection so that it can be used across the application? Most tutorials seem to indicate that a connection is opened for a purpose and then closed again, only to be opened again when next required.
I would think that this would result in multiple connections if multiple users were attempting an action that required RabbitMQ access at the same time.
I suppose your users only connect to your server, which is different from each user connecting directly to RabbitMQ. In that case, server-side, you can keep just one connection to RabbitMQ.
For that purpose, I would recommend creating a module that keeps track of your connection across your application.
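A minimal sketch of such a module with amqplib (the file name and broker URL are placeholders):

```javascript
// Sketch of a small connection-holder module (e.g. rabbit.js) built on amqplib.
// It lazily opens one connection for the whole process and hands out channels on demand.
const amqp = require('amqplib');

const URL = 'amqp://localhost'; // placeholder for your broker URL

let connectionPromise = null;

function getConnection() {
  if (!connectionPromise) {
    connectionPromise = amqp.connect(URL);
  }
  return connectionPromise; // every caller shares the same pending/opened connection
}

async function createChannel() {
  const connection = await getConnection();
  return connection.createChannel();
}

module.exports = { getConnection, createChannel };
```

Any part of the application can then require this module and call createChannel() without opening a new TCP connection each time.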
Note that AMQP has a heartbeat feature:
AMQP 0-9-1 offers a heartbeat feature to ensure that the application layer promptly finds out about disrupted connections (and also completely unresponsive peers). Heartbeats also defend against certain network equipment which may terminate "idle" TCP connections.
By default, amqplib's heartbeat is set to 0, meaning no heartbeat.
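For example, a heartbeat can be requested when connecting (the 30-second value here is just an illustration):

```javascript
const amqp = require('amqplib');

// Assumed example: request a 30-second heartbeat via the connection URL
// (amqplib also accepts a heartbeat field when you pass connection details as an object).
amqp.connect('amqp://localhost?heartbeat=30')
  .then((connection) => {
    // reuse this connection application-wide, as described above
  });
```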

Socket.IO confirmed delivery

Before I dive into the code, can someone tell me if there is any documentation available for confirmed delivery in Socket.IO?
Here's what I've been able to glean so far:
A callback can be provided to be invoked when and if a message is acknowledged
There is a special mode "volatile" that does not guarantee delivery
There is a default mode that is not "volatile"
This leaves me with some questions:
If a message is not volatile, how is it handled? Will it be buffered indefinitely?
Is there any way to be notified if a message can't be delivered within a reasonable amount of time?
Is there any way to unbuffer a message if I want to give up?
I'm at a bit of a loss as to how Socket.IO can be used in a time sensitive application without falling back to volatile mode and using an external ACK layer that can provide failure events and some level of configurability. Or am I missing something?
TL;DR You can't have reliable confirmed delivery unless you're willing to wait until the universe dies.
The delivery confirmation you seek is related to the theoretical Two Generals Problem, which is also discussed in this SO answer.
TCP manages the reliability problem by guaranteeing delivery after infinite retries. We live in a finite universe, so the word "guarantee" is theoretically dubious :-)
Theory aside, consider this: engine.io, the underpinnings of socket.io 1.x, uses the following transports:
WebSocket
FlashSocket
XHR polling
JSONP polling
Each of those transports is based upon TCP, and TCP is reliable. So as long as connections stay connected and transports don't change, each individual socket.io message or event should be reliable. However, two things can happen on the fly:
engine.io can change transports
socket.io can reconnect in case the underlying transport disconnects
So what happens when a client or your server squirts off a few messages while the plumbing is being fiddled with like that? It doesn't say in either the engine.io protocol or the socket.io protocol (at versions 3 and 4, respectively, as of this writing).
As you suggest in your comments, there is some acknowledgement logic in the implementation. But even simple digital communication has nontrivial behavior, so I do not trust an unsupervised socket.io connection for reliable delivery in mission- or safety-critical operations. That won't change until reliable delivery is part of their protocol and their methods have been independently and formally verified.
You're welcome to adopt my policies:
Number my messages
Ask for a resend when in doubt
Do not mutate my state - client or server - unless I know I'm ready
In Short:
Guaranteed message delivery acknowledgement is proven impossible, but TCP guarantees delivery and order given "infinite" retries. I'm less confident about socket.io messages, but they're really powerful and easy to use so I just use them with care.
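As a rough client-side illustration of those policies, assuming the server numbers each message with a seq field and understands a hypothetical 'resend' event (neither is part of Socket.IO itself):

```javascript
// Receiving-side sketch of "number my messages / ask for a resend when in doubt".
const { io } = require('socket.io-client');

const socket = io('https://example.com'); // placeholder server

let nextExpectedSeq = 0;

socket.on('data', (msg) => {
  if (msg.seq > nextExpectedSeq) {
    // A gap means something was lost in transit or during a reconnect:
    // ask the server to resend everything from the last sequence number we saw.
    socket.emit('resend', { from: nextExpectedSeq });
    return;
  }
  if (msg.seq < nextExpectedSeq) {
    return; // duplicate we have already applied; do not mutate state twice
  }
  applyToState(msg); // only mutate client state once we know we are ready
  nextExpectedSeq = msg.seq + 1;
});

function applyToState(msg) {
  // placeholder for application logic
}
```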
I ensured delivery using a few different strategies:
I send data over the socket, including a nonce in the message to prevent duplicate-message errors.
The other party sends a confirmation of the received message, or I resend after x seconds.
The client also makes a REST call every 30 seconds to request all new messages sent by the server, to catch any messages dropped in transport.
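A sketch of that nonce-plus-acknowledgement strategy on the sending side (the event name, retry limit and 5-second window are assumptions):

```javascript
// Sending-side sketch of the nonce + confirmation + resend strategy described above.
const { io } = require('socket.io-client');
const crypto = require('crypto');

const socket = io('https://example.com'); // placeholder server

function sendReliably(payload) {
  const message = { nonce: crypto.randomUUID(), ...payload };
  let attempts = 0;

  const trySend = () => {
    attempts += 1;
    let acked = false;

    // The callback argument is Socket.IO's built-in acknowledgement mechanism:
    // it runs when (and only when) the other side acknowledges this message.
    socket.emit('chat-message', message, () => { acked = true; });

    setTimeout(() => {
      if (!acked && attempts < 5) {
        trySend(); // resend the same nonce so the receiver can discard duplicates
      }
    }, 5000);
  };

  trySend();
}
```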

Advantage/disadvantage of using socketio heartbeats

Socket.io allows you to use heartbeats to "check the health of Socket.IO connections." What exactly are heartbeats and why should or shouldn't I use them?
A heartbeat is a small message sent from a client to a server (or from a server to a client and back to the server) at periodic intervals to confirm that the client is still around and active.
For example, if you have a Node.js app serving a chat room and a user doesn't say anything for many minutes, there's no way to tell whether they're really still connected. By sending a heartbeat at a predetermined interval (say, every 15 seconds), the client informs the server that it's still there. If it has been, say, 20 seconds since the server last got a heartbeat from a client, that client has likely disconnected.
This is necessary because you cannot be guaranteed a clean connection termination over TCP: if a client crashes, or something else happens, you won't receive the termination packets from the client, and the server won't know that the client has disconnected. Furthermore, Socket.IO supports various mechanisms other than TCP sockets to transfer data, and in those cases the client won't (or can't) send a termination message to the server.
By default, a Socket.IO client will send a heartbeat to the server every 15 seconds (heartbeat interval), and if the server hasn't heard from the client in 20 seconds (heartbeat timeout) it will consider the client disconnected.
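Those values can be tuned when creating the server. The option names have changed over the years; in current Socket.IO releases the rough equivalents are pingInterval and pingTimeout (in milliseconds), as in this sketch:

```javascript
// Sketch for a recent Socket.IO release; older 0.9.x versions used
// io.set('heartbeat interval', ...) and io.set('heartbeat timeout', ...) instead.
const { Server } = require('socket.io');

const io = new Server(3000, {
  pingInterval: 15000, // how often a heartbeat/ping is sent
  pingTimeout: 20000   // how long to wait for a reply before considering the client gone
});

io.on('connection', (socket) => {
  console.log('client connected:', socket.id);
});
```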
I can't think of many typical use cases where you wouldn't want to use heartbeats.
