Delay message delivery for ActiveMQ AMQP 1.0, _without_ JMS - node.js

To clarify the title: I am using ActiveMQ 5.15.15 (NOT the Artemis engine), and I am using AMQP 1.0 without the official JMS libraries. To be more specific, I am using the Amazon MQ version of this, which will soon be upgraded to 5.16.2. I could force the upgrade if needed.
I'm using an AMQP 1.0 compatible library (rhea) that has served us well so far, but I'm not finding any documentation on how to get ActiveMQ's redelivery plugin to work with it. The library maintainers are also unaware of how this is exposed via ActiveMQ.
I've not been able to get the redelivery plugin to work, despite trying to add various headers, delivery annotations, message annotations, and application properties. I do have schedulerSupport="true" on the broker element in the server config.
These are the keys I've tried; the values are numeric, e.g. 30000 for 30 seconds before a consumer/subscriber is allowed to see the message in the queue. I saw them in various docs, and it didn't hurt to try them.
AMQ_SCHEDULED_DELAY
x-opt-delivery-delay
_AMQ_SCHED_DELIVERY
I have also released the message from the client, meaning it failed to deliver (also passing a value that signals the failure to the broker and increases the attempted-delivery count). While the number of delivery attempts increased, the delay and exponential backoff do not seem to be applied at the broker level.
I see that the STOMP protocol allows headers when publishing, which expose these options a bit more clearly. However, I don't want to switch everything over unless it makes sense to do so.
I also saw that a delayed message can be sent to a topic via the REST API, but I'm not sure whether that was intended as a production use case.
So right now, I'm either looking at:
holding the message in memory for a bit and attempting to republish or release it after a delay (roughly sketched below), or
investigating STOMP to see if the redelivery plugin works with that.
But I'm hoping someone knows where this should be implemented.
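For reference, the first option would look roughly like this with rhea. This is only a sketch: the host, port, queue name, retry delay, and the handle() function are placeholders, and error/connection handling is omitted.
// Sketch of option 1: hold a failed message in memory and republish it
// after a delay, instead of relying on the broker's redelivery plugin.
// rhea auto-accepts the original delivery by default, so only the
// republish is shown here.
const container = require('rhea');
const RETRY_DELAY_MS = 30000; // arbitrary example delay

let sender;
container.on('sendable', (context) => { sender = context.sender; });

container.on('message', (context) => {
  try {
    handle(context.message); // hypothetical message handler
  } catch (err) {
    const msg = context.message;
    setTimeout(() => {
      if (sender && sender.sendable()) sender.send(msg); // republish later
    }, RETRY_DELAY_MS);
  }
});

const connection = container.connect({ host: 'localhost', port: 5672 });
connection.open_receiver('my-queue');
connection.open_sender('my-queue');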
My redeliveryPolicy is basic:
<!--
  The Redelivery plugin extends the capabilities of destination policies with respect to message redelivery.
  For more information, see http://activemq.apache.org/message-redelivery-and-dlq-handling.html
-->
<redeliveryPlugin fallbackToDeadLetter="true" sendToDlqIfMaxRetriesExceeded="true">
  <redeliveryPolicyMap>
    <redeliveryPolicyMap>
      <redeliveryPolicyEntries>
        <!--<redeliveryPolicy maximumRedeliveries="4" queue="SpecialQueue" redeliveryDelay="10000"/>-->
      </redeliveryPolicyEntries>
      <defaultEntry>
        <!-- 5s -> 15s -> 45s -> 135s -> 405s -->
        <redeliveryPolicy backOffMultiplier="3" initialRedeliveryDelay="5000" maximumRedeliveries="5"/>
      </defaultEntry>
    </redeliveryPolicyMap>
  </redeliveryPolicyMap>
</redeliveryPlugin>
Update
I am using the auth plugin, and there's an entry that seems to be for a built-in process. I think it came from a sample/default config; there doesn't appear to be much documentation around it from a quick search. I can try opening access up to other users, but each update/restart can take up to 15 minutes with the current setup.
<authorizationEntry admin="administrators" queue="SchedulingProcessor.>" write="scheduling-processor"/>
Comment Clarifications
My main objective is to delay redeliveries, so that consumers don't see a failed message that was placed back into the queue until n seconds have passed.
I started with no special headers/properties/annotations + the redelivery plugin, which also didn't work.

There is a distinction between message delivery delay and message redelivery delay that I think you are confusing, or that the question is at least mixing up.
A sender can request that a message be delivered after some delay from an AMQP client by using the message annotations section of the sent message and adding an 'x-opt-delivery-delay' or 'x-opt-delivery-time' annotation, assuming the broker has scheduled deliveries enabled. Some examples of this can be found in the ActiveMQ unit tests. The delay annotation indicates a relative delay in milliseconds from the time of receipt, while the delivery-time annotation indicates a time in UTC at which to deliver the message.
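With rhea, attaching that annotation on send might look roughly like the sketch below (the host, port, and queue name are placeholders, not anything prescribed by the broker):
// Minimal rhea sender sketch: ask the broker to withhold the message
// for ~30 seconds using the x-opt-delivery-delay message annotation.
const container = require('rhea');

container.once('sendable', (context) => {
  context.sender.send({
    durable: true,
    message_annotations: {
      'x-opt-delivery-delay': 30000 // relative delay in milliseconds
      // or: 'x-opt-delivery-time': Date.now() + 30000
    },
    body: 'delayed hello'
  });
});

const connection = container.connect({ host: 'localhost', port: 5672 });
connection.open_sender('my-queue');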
The ActiveMQ 5 redelivery policy affects messages that have been explicitly tagged as not deliverable by the client, so the AMQP Released outcome is not the right choice to trigger this behavior: it simply indicates that the client isn't going to process the message and that the remote should consider it undelivered and send it elsewhere. You would need to use either Rejected or Modified(undeliverableHere=true) to "poison" the message and trigger the redelivery policy. If things go right, this should trigger a redelivery after some delay, although since ActiveMQ 5 has a relatively basic AMQP protocol head it will likely resend to the same consumer even if you've explicitly set the undeliverable-here flag. I don't know how much that bit has been tested, if at all, so your mileage may vary.
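On the rhea side, sending a Modified (rather than Released) disposition might look roughly like this sketch (the queue name and the handleMessage() function are placeholders; rhea's modified()/accept() helpers on the delivery are assumed here):
// Sketch: settle a failed message with the Modified outcome so the
// broker's redelivery policy applies, instead of simply releasing it.
const container = require('rhea');

container.on('message', (context) => {
  try {
    handleMessage(context.message); // hypothetical message handler
    context.delivery.accept();
  } catch (err) {
    // delivery_failed=true increments the delivery count;
    // undeliverable_here=true asks the broker not to resend to this consumer.
    context.delivery.modified({ delivery_failed: true, undeliverable_here: true });
    // A plain context.delivery.release() would NOT trigger the redelivery policy.
  }
});

container.connect({ host: 'localhost', port: 5672 }).open_receiver({
  source: 'my-queue',
  autoaccept: false // settle deliveries explicitly
});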

Related

Message persistence in Spring Integration Aggregator without MessageStore by using AMQP?

I would like to know if I can have persistence in my Spring Integration setup when I use an aggregator that is not backed by a MessageStore, by leveraging the persistence of AMQP (RabbitMQ) queues before and after the aggregator.
I imagine that this would use acks: the aggregator won't ack a message before it has collected all the parts and sent out the resulting message.
Additionally, I would like to know if this is ever a good idea :)
I am new to working with queues and am trying to get a good feel for patterns to use.
My business logic for this is as follows:
I receive messages on one queue.
Each message must result in two unrelated webservice calls (preferably in parallel).
The results of these two calls must be combined with details from the original message.
The combination must then be sent out as a new message on a queue.
Messages are important, so they must not be lost.
I was/am hoping to use only one 'persistent' system, namely RabbitMQ, and not have to add a database as well.
I've tried to keep the question specific, but any other suggestions on how to approach this are greatly appreciated :)
What you would like to do reminds me of the Scatter-Gather EI pattern.
So, you get a message from AMQP, send it into the scatter-gather endpoint, and wait for the aggregated reply. That's enough to stick with the default acknowledge mode.
Right, the scatterChannel can be a PublishSubscribeChannel with an executor to call the web services in parallel. In any case, the gatherer will wait for replies according to the release strategy and will block the original AMQP listener, so the message is not acked prematurely.

How to enforce the order of messages passed to an IoT device over MQTT via a cloud-based system (API design issue)

Suppose I have an IoT device which I'm about to control (let's say switch on/off) and monitor (e.g. collect temperature readings). It seems MQTT could be the right fit: I could publish messages to the device to control it, and the device could publish messages to a broker to report temperature readings. So far so good.
The problems start to occur when I try to design the API to control the device.
Let's say the device subscribes to two topics:
/device-id/control/on
/device-id/control/off
Then I publish messages to these topics in some order. But given that messaging is typically an asynchronous process, there are no guarantees on the order in which the messages are received by the device.
So in case two messages are published in the following order:
/device-id/control/on
/device-id/control/off
they could be received in the reverse order, leaving the device turned on, which can have dramatic consequences depending on the context.
Of course the API could be designed in some other way; for example, there could be just one topic
/device-id/control
and the payload of each message would carry its meaning (on/off). So if messages are published to this topic in a given order, they are expected to be received in exactly the same order on the device.
But what if the order of publishes to individual topics cannot be guaranteed? Suppose the following architecture of a system for IoT devices:
                       / control service \
application -> broker -> control service -> broker -> IoT device
                       \ control service /
The components of the system are:
an application which effectively controls the device by publishing messages to a broker
a typical message broker
a control service with some business logic
The important part is that, as in most modern distributed systems, the control service is a distributed, multi-instance entity capable of processing multiple control messages from the application at a time. Therefore the order of the messages published by the application can end up completely mixed up by the time they are delivered to the IoT device.
Now, given that most MQTT brokers only implement QoS 0 and QoS 1 but not QoS 2, it gets even more interesting, as such control messages could potentially be delivered multiple times (assuming QoS 1 - see https://stackoverflow.com/a/30959058/1776942).
My point is that separate topics for control messages are a bad idea. The same goes for a single topic. In both cases there are no message-delivery-order guarantees.
The only solution to this particular issue that comes to my mind is message versioning, so that old (outdated) messages can simply be skipped when they are delivered after a message with a more recent version property.
Am I missing something?
Is message versioning the only solution to this problem?
Am I missing something?
Most definitely. The example you brought up is a generic control system attached to some message-oriented scheme. There are a number of patterns that can be used in a message-based architecture. This article by Microsoft categorizes message patterns into two primary classes:
Commands and
Events
The most generic pattern of command behavior is to issue a command, then measure the state of the system to verify the command was carried out. If you forget to verify, your system has an open loop. Such open loops are (unfortunately) common in IT systems (because it's easy to forget), and often result in bugs and other bad behaviors such as the one described above. So, the proper way to handle a command is:
Issue the command
Inquire as to the state of the system
Evaluate next action
Events, on the other hand, are simply fired off. As the publisher of an event, it is not my business to worry about who receives the event, in what order, etc. Now, it should also be pointed out that any decent message broker (e.g. RabbitMQ) generally carries strong guarantees that messages will be delivered in the order in which they were originally published. Note that this does not mean they will be processed in order.
So, if you treat a command as an event, your system is guaranteed to act up sooner or later.
Is message versioning the only solution to this problem?
Message versioning typically refers to a property of the message class itself, rather than a particular instance of the class. It is often used when multiple versions of a message-based API exist and must be backwards-compatible with one another.
What you are instead referring to is unique message identifiers. GUIDs are particularly handy for making sure that each message gets its own unique id. However, I would argue that de-duplication in message-based architectures is an anti-pattern. One of the consequences of using messaging is that duplicates are possible, so you should try to design your system behaviors to be stateless and idempotent. If that is not possible, consider that messaging may not be the right communication solution for the need.
Using the command-event dichotomy as an example, you could perform the following transaction:
The controller issues the command, assigning a unique identifier to the command.
The control system receives the command and turns on.
The control system publishes the "light on" event notification, containing the unique id of the command that was used to turn on the light.
The controller receives the notification and correlates it to the original command.
In the event that the controller doesn't receive a notification after some timeout, the controller can retry the command. Note that "light on" is an idempotent command, in that multiple calls to it have the same effect.
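A sketch of that correlation pattern over MQTT, using the Node.js mqtt package (the package choice, topic names, payload shape, and timeout are illustrative assumptions, not part of the answer):
// Sketch: issue a command with a unique id, wait for the matching event
// notification, and retry once if no confirmation arrives in time.
const mqtt = require('mqtt');
const { randomUUID } = require('crypto');

const client = mqtt.connect('mqtt://broker.example.com'); // placeholder broker
const commandId = randomUUID();
const command = JSON.stringify({ action: 'on', id: commandId });

client.on('connect', () => {
  client.subscribe('device-id/events/light-on', { qos: 1 });
  client.publish('device-id/control', command, { qos: 1 });

  // If no correlated event arrives, retry; "light on" is idempotent.
  const retry = setTimeout(() => client.publish('device-id/control', command, { qos: 1 }), 5000);

  client.on('message', (topic, payload) => {
    const event = JSON.parse(payload.toString());
    if (event.id === commandId) {
      clearTimeout(retry); // the device confirmed this exact command
    }
  });
});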
When the state changes, send the new state immediately and after that periodically, every x seconds. With this solution your system gets into the desired state after some time, even when it temporarily disconnects from the network (e.g. low battery).
BTW: You did not miss anything.
Apart from the comment that most brokers don't support QoS 2 (I suspect you mean that a number of broker-as-a-service offerings, such as Amazon's AWS IoT service, don't support QoS 2), you have covered most of the major points.
If message order really is that important then you will have to include some form of ordering marker in the message payload, be it a counter or a timestamp.
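For instance, with a monotonically increasing sequence number the device can simply drop anything older than the last command it applied. A rough sketch with the Node.js mqtt package (topic, payload shape, and the applyState() handler are assumptions for illustration):
// Sketch: the publisher attaches a sequence number; the device ignores
// any command whose sequence is not newer than the last one applied.
const mqtt = require('mqtt');

// Publisher side
const pub = mqtt.connect('mqtt://broker.example.com'); // placeholder broker
let seq = 0;
function sendCommand(state) {
  pub.publish('device-id/control', JSON.stringify({ state, seq: ++seq }), { qos: 1 });
}

// Device side
const dev = mqtt.connect('mqtt://broker.example.com');
let lastSeq = 0;
dev.on('connect', () => dev.subscribe('device-id/control', { qos: 1 }));
dev.on('message', (topic, payload) => {
  const cmd = JSON.parse(payload.toString());
  if (cmd.seq <= lastSeq) return; // stale or duplicate command, skip it
  lastSeq = cmd.seq;
  applyState(cmd.state); // hypothetical device-side handler
});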

Mqtt paho using spring integration stops processing messages on topic over certain load requests

I am using Spring Integration with mqtt-paho version 4.0.4 for receiving MQTT messages on a specified topic.
When the application is under heavy load, I found that it sometimes drops the connection to IMA (MQTT); this happened three times over a span of 1 lakh (100,000) records.
But it regained connectivity and started consuming the messages received thereafter. There were no issues with IMA re-connectivity.
There is another issue I faced during this testing.
Under continuous load, at some point the application stops receiving messages, and we see one message logged:
May 04, 2015 2:45:29 PM org.eclipse.paho.client.mqttv3.internal.ClientState checkForActivity
SEVERE: gvjIpONtSpP: Timed out as no activity, keepAlive=60,000 lastOutboundActivity=1,430,730,869,017 lastInboundActivity=1,430,730,929,151
After this, no messages are received by the application even though continuous load is still being pushed through the utility.
I observed this behavior three times:
At around 40K.
At around 90K.
At around 145K.
There is no consistent point or figure at which the application actually stops receiving messages.
Please let me know if anybody has faced and solved this before.
We had the same issue during MQTT Paho client performance/durability testing before moving to production. The issue was on the broker side; after adjusting the settings, the IMA broker was able to consume millions of messages with no rejections.
Please look into the max buffer parameter in the IMA configuration web console, and the overlimit behavior policy (what to do with messages published over the specified threshold): reject, rollover, etc.

Spring Integration Aggregator: meaning of "send-timeout" XML configuration

I would appreciate some clarification about the meaning of the "send-timeout" configuration parameter for an aggregator. Based on the Spring documentation, this configuration is "the timeout interval for sending the aggregated messages to the output or reply channel. Optional."
Now, based on my understanding, the aggregator is a passive component and only decides whether to send a message based on the result of the release strategy after the receipt of a message; it won't release messages based on timeout events; for that, a separate reaper component is needed. Is this correct?
Assuming the send-timeout is the maximum amount of time the aggregator can spend sending the completed group of messages to the output channel, what would happen if time runs out (because of this parameter) while sending a message? How will the aggregator handle a message group that was ready for release and started to be sent but never finished? Will it be marked complete?
Thanks
This is a fairly commonly misunderstood attribute. In many places (but not all) we have explained it clearly in the XSD and docs.
The bottom line is that it rarely applies. It only applies when the output channel can block. For example, if the output-channel is a QueueChannel with a capacity and the queue is full, it is the time we will wait to send the message to the channel.
If the output channel is, for example, a DirectChannel, it never applies.
If it does apply, the exception will be thrown back to the caller and the group will remain. Attempts to re-release such groups will occur if you configure a MessageGroupStoreReaper; if the group is still eligible for release, the reaper will again attempt to send the group to the output channel.
The "stuck" group will also be released if new messages arrives for the same group and the release strategy still considers the group eligible (e.g. it uses size >= n rather than size == n).
BTW, while the aggregator is generally a passive component, we did introduce the group-timeout (and group-timeout-expression) in 4.0 which allows partial groups to be released after a timeout, even without a reaper.
However, if such a release fails to happen because of the send-timeout, a new release attempt will only be made if a reaper is configured.

MQTT: what is the purpose or usage of Last Will and Testament?

I'm surely missing something about how the whole MQTT protocol works, as I can't grasp the usage pattern of Last Will Testament messages: what's their purpose?
One example I often see is about informing that a device has gone offline. It doesn't make very much sense to me, since it's obvious that if a device isn't publishing any data it may be offline or there could be some network problems.
So, what are some practical usages of the LWT? What was it invented for?
LWT messages are not really concerned about detecting whether a client has gone offline or not (that task is handled by keepAlive messages).
LWT messages are about what happens after the client has gone offline.
The analogy is that of a real last will:
A person can draw up a last will and testament, in which she declares what actions should be taken after she has passed away. When she dies, an executor will heed those wishes and carry them out on her behalf.
The analogy in the MQTT world is that a client can formulate a testament, in which it declares what message should be sent on its behalf by the broker after it has gone offline.
A fictitious example:
I have a sensor, which sends crucial data, but very infrequently.
It has formulated a last will in the form of [topic: '/node/gone-offline', message: ':id'], with :id being a unique id for the sensor. I also have an emergency subscriber for the topic '/node/gone-offline', which will send an SMS to my phone every time a message is published on that channel.
During normal operation, the sensor will keep the connection to the MQTT-broker open by sending periodic keepAlive messages interspersed with the actual sensor readings. If the sensor goes offline, the connection to the broker will time out, due to the lack of keepAlives.
This is where LWT comes in: if no LWT is specified, the broker doesn't care and just closes the connection. In our case, however, the broker will execute the sensor's last will and publish the LWT message '/node/gone-offline: :id'. The message will then be consumed by my emergency subscriber, and I will be notified of the sensor's ID via SMS so that I can check up on what's going on.
In short:
Instead of just closing the connection after a client has gone offline, LWT messages can be leveraged to define a message to be published by the broker on behalf of the client, since the client is offline and cannot publish anymore.
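Registering such a will with the Node.js mqtt package might look roughly like this (the package, broker address, sensor id, and notifyBySms() are illustrative assumptions; the LWT itself is just a connect-time option):
// Sketch: the sensor registers an LWT at connect time; the broker publishes
// it only if the sensor disconnects ungracefully (e.g. keep-alive timeout).
const mqtt = require('mqtt');
const SENSOR_ID = 'sensor-42'; // placeholder id

const sensor = mqtt.connect('mqtt://broker.example.com', {
  keepalive: 60,
  will: {
    topic: '/node/gone-offline',
    payload: SENSOR_ID,
    qos: 1,
    retain: false
  }
});

// Emergency subscriber: reacts whenever any node's will is published.
const watcher = mqtt.connect('mqtt://broker.example.com');
watcher.on('connect', () => watcher.subscribe('/node/gone-offline', { qos: 1 }));
watcher.on('message', (topic, payload) => {
  notifyBySms(`Sensor ${payload.toString()} went offline ungracefully`); // hypothetical alert
});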
Just because a device is not publishing does not mean it is not online or there is a network problem.
Take for example a sensor that monitors a value that only changes very infrequently. Good design says that the sensor should only publish changes, to help reduce bandwidth usage, since periodically publishing the same value is wasteful. If the value is published as a retained message, then any new subscriber will always get the current value without having to wait for the sensor value to change and be published again.
In this case the LWT is used to publish a message when the sensor fails (or there is a network problem), so we know of the problem as soon as the client's keep-alive times out.
An in-depth article about Last Will and Testament messages is available in the MQTT Essentials blog post series: http://www.hivemq.com/mqtt-essentials-part-9-last-will-and-testament/.
To summarize the blog post:
The Last Will and Testament feature is used in MQTT to notify other clients about an ungracefully disconnected client.
MQTT is often used in scenarios where unreliable networks are very common. It is therefore assumed that some clients will disconnect ungracefully from time to time, because they lost the connection, the battery is empty, or any other imaginable case. It would be good to know whether a connected client disconnected gracefully (i.e. with an MQTT DISCONNECT message) or not, in order to take appropriate action.
