I'm using the stomp-client library and I want to know whether it's possible to tell if a message was delivered to the queue. I'm implementing a Java service to dequeue the messages and a Node.js service to send them to the queue. The code below shows how I send a message to the queue.
this._stompClient.publish('/queue/MessagesQueue', messageToPublish, { })
When you send a SEND frame (i.e. publish a message) you can add a receipt header and then when you receive the RECEIPT frame from the broker you know it has successfully received the message. The STOMP specification says this about the receipt header:
Any client frame other than CONNECT MAY specify a receipt header with an arbitrary value. This will cause the server to acknowledge receipt of the frame with a RECEIPT frame which contains the value of this header as the value of the receipt-id header in the RECEIPT frame.
However, looking at the documentation for stomp-client I don't see any mention of how to receive RECEIPT frames. I actually would expect the ability to specify a callback on the publish method which was called when the RECEIPT frame is received. It doesn't appear that stomp-client supports working with receipts. Unfortunately that means there's no real way to confirm the message was received by the broker.
I recommend you find a more mature STOMP client implementation that supports receipts. For example stomp-js supports receipts.
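The receipt mechanism can be sketched at the protocol level, independent of any particular client library. This is an illustrative sketch, not stomp-client's API: the `pendingReceipts` map and the `trackSend`/`handleFrame` helpers are made-up names, and frame handling follows the STOMP 1.2 spec quoted above.

```javascript
// Illustrative sketch of STOMP receipt tracking (not part of any
// client library's API). The broker answers a frame carrying a
// "receipt" header with a RECEIPT frame whose "receipt-id" echoes it.
const pendingReceipts = new Map(); // receipt id -> callback

let nextReceiptId = 0;
function trackSend(headers, onReceipt) {
  // Attach a receipt header before sending; remember the callback.
  const id = `msg-${nextReceiptId++}`;
  headers.receipt = id;
  pendingReceipts.set(id, onReceipt);
  return headers;
}

function handleFrame(frame) {
  // Call this for every frame received from the broker.
  if (frame.command === 'RECEIPT') {
    const cb = pendingReceipts.get(frame.headers['receipt-id']);
    if (cb) {
      pendingReceipts.delete(frame.headers['receipt-id']);
      cb();
    }
  }
}
```

A client that exposes raw frames could confirm delivery by calling the callback registered for the matching receipt-id.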
Intro
We're developing a system to support multiple real-time messages (chat) and updates (activity notifications).
That is, user A can receive via WebSocket messages for:
receiving new chat messages
receiving updates for some activity, for example if someone likes their photo
and more.
We use one single WebSocket connection to send all these different messages to the client.
However, we also allow the user to keep multiple applications/clients open at the same time
(i.e. user A connects from their web browser and from their mobile app simultaneously).
Architecture
We have a "Hub" that stores a map of UserId to a list of active websocket sessions.
(user:123 -> listOf(session#1, session#2))
Each client, once its WebSocket connection is established, has its own Consumer which subscribes to a Pulsar topic per userId (e.g. the user:123 topic).
If user A is connected on both mobile and web, each client has its own Consumer on topic user:A.
When user A sends a new message from session #1 to user B, the flow is :
user makes a REST POST request to send a message.
service stores a new message to DB.
service sends a Pulsar message to topic user:B and user:A.
return 200 status code + created Message response.
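The flow above can be sketched in plain Node.js, with an in-memory map standing in for both the hub and Pulsar (the helper names and session shape are illustrative, not the actual service's code):

```javascript
// In-memory sketch of the hub: userId -> list of active sessions.
// Pulsar is simulated by delivering straight to each user's sessions.
const hub = new Map();

function addSession(userId, session) {
  if (!hub.has(userId)) hub.set(userId, []);
  hub.get(userId).push(session);
}

function publishToUser(userId, message) {
  // Stand-in for producing to the Pulsar topic "user:<id>":
  // every session subscribed for that user receives the message.
  for (const session of hub.get(userId) || []) {
    session.received.push(message);
  }
}

function sendMessage(fromUser, toUser, text) {
  // 1. REST POST arrives  2. store message in DB (omitted here)
  const message = { from: fromUser, to: toUser, text };
  // 3. publish to both users' topics
  publishToUser(toUser, message);
  publishToUser(fromUser, message);
  // 4. return 200 + created message
  return message;
}
```

Note that in this sketch session #1 of user A receives the message too, which is exactly the duplication described next.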
Problem
If user A has two sessions open (two clients/websockets) and they send a message from session #1, how can we make sure only session #2 gets the message?
Since user A has already received the 200 response with the created message in session #1, there's no need to deliver the message again through his Consumer.
I'm not sure if it's a Pulsar configuration, or perhaps our architecture is wrong.
how can we make sure only session #2 gets the message?
I'm going to address this at the app level.
Prepend a unique nonce (e.g. a GUID) to each message sent. Maintain a short list of recently sent nonces, aging them out so we never keep more than, say, half a dozen. Upon receiving a message, check whether we sent it, that is, whether its nonce is in the list. If so, silently discard it.
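A minimal sketch of that nonce list, assuming a cap of half a dozen entries (the constant and helper names are arbitrary choices for illustration):

```javascript
// Nonce-based de-duplication: remember the nonces of recently sent
// messages and drop any inbound message carrying one of them.
const MAX_NONCES = 6;
const recentNonces = [];

function tagOutgoing(message) {
  const nonce = `${Date.now()}-${Math.random().toString(16).slice(2)}`;
  recentNonces.push(nonce);
  if (recentNonces.length > MAX_NONCES) recentNonces.shift(); // age out oldest
  return { nonce, ...message };
}

function shouldDeliver(message) {
  // False for messages this client sent itself.
  return !recentNonces.includes(message.nonce);
}
```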
Equivalently, name each connection. You could roll a GUID just once when a new websocket is opened, or incorporate some of the websocket's addressing bits into the name. Prepend the connection name to each outbound message, and discard any received message which has a "sender" of "self".
With this de-dup'ing approach there's still some wasted network bandwidth; we can quibble about that if you wish. When the K-th websocket is created, we could instead create K topics, each excluding a different endpoint. Sounds like more work than it's worth!
The way I see it, every unimportant action ('start_typing', 'finish_typing', for example) is sent by the client directly to the socket channel. Sending a message, however, should be a REST POST that performs custom validation, logic, and persistence; after all of that is done, it publishes the message to the socket channel.
Is it correct way to do this? Or I should rather just send message to socket channel from client?
I did the following successfully:
Creating a hono tenant and registering a device for it.
Connecting a simple python based edge-device to hono.
Connecting hono to ditto.
Creating a twin for the above edge-device.
Sending telemetry data from the edge-device to ditto through hono works perfectly.
I also send a pseudo event every second from the edge-device to ditto through hono as follows:
# sending an event from edge-device to ditto through hono-mqtt-adapter
# (assumes mqtt_client is an already-connected paho-mqtt client)
import json
import random
import string

correlation_id = ''.join(random.choices(string.ascii_uppercase + string.digits, k=10))
data = {
    "topic": "de.iot1/dev1/things/live/messages/fire",
    "headers": {
        "content-type": "text/plain",
        "correlation-id": correlation_id
    },
    "path": "/outbox/messages/fire",
    "value": "Fire detected"
}
payload = json.dumps(data)
mqtt_client.publish("event", payload=payload, qos=1)
On the other side I wrote a simple ditto-amqp-client which just receives all of Ditto's incoming messages. I receive all incoming telemetry messages at their correct interval - i.e. every second. In the case of event messages, they seem to be buffered by Ditto and sent to the amqp-client every couple of seconds, not at the time they are sent from the device! Why?
As far as I understood from the Ditto docs, Ditto offers two communication channels: a twin channel for communicating with the twin through commands and events, and a live channel for communicating with the device directly through messages. But in the protocol-topics section,
the channel can be either twin or live for both events and commands, which is confusing.
I would like to know what is the recommended way to send an event from the device to Ditto?
Should one send it through live channels using events or messages (outbox)?
Is it better to define the event as a feature in the twin and send normal command/modify to its value?
Thanks in advance for any suggestion!
I receive all incoming telemetry messages at their correct interval - i.e. every second. In the case of event messages, they seem to be buffered by Ditto and sent to the amqp-client every couple of seconds, not at the time they are sent from the device! Why?
As Ditto does not do a "store-and-forward", but publishes applied events immediately, I can't really explain from Ditto side. Maybe by sending events (which in Hono are persisted using an AMQP broker), those are consumed with some delay (after persisted by the broker), however I can't explain the "every couple of seconds".
You could enable DEBUG logging in Ditto's connectivity service by setting the environment variable LOG_LEVEL_APPLICATION to DEBUG - that way you'll see when the messages are received and when they are forwarded again to your AMQP endpoint.
I would like to know what is the recommended way to send an event from the device to Ditto?
Should one send it through live channels using events or messages (outbox)?
When you talk about the "fire detected" event (containing a payload which shall not be stored in a twin managed by Ditto), I would send it as live message from the device (using Hono event channel with "QoS 1" - at least once - semantics).
However, an application connected to Ditto (e.g. via Websocket or via HTTP webhook) must be consuming the live message and acknowledging that it received the message.
Events in Ditto are the entity which is persisted as part of the CQRS architecture style. After persisting, interested parties can be notified about them.
So live events are meant to be in the same Ditto event format and must not be confused with Hono events.
Is it better to define the event as a feature in the twin and send normal command/modify to its value?
It depends on your requirements.
You could also make the "fireDetected" a boolean property in a feature and subscribe to changes of this property. But then this should be only settable via the device (using an appropriate Ditto Policy) - and the "QoS 1" guarantees you get by combining Hono and Ditto cannot be used any longer.
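Under that approach the device would send a twin modify command instead of a live message. A sketch of the Ditto protocol envelope, reusing the thing id from the example above (the feature name "smokeDetector" is made up for illustration):

```json
{
  "topic": "de.iot1/dev1/things/twin/commands/modify",
  "headers": { "content-type": "application/json" },
  "path": "/features/smokeDetector/properties/fireDetected",
  "value": true
}
```

The application would then subscribe to twin change notifications for that property rather than consuming live messages.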
Is there any way to implement a request-response pattern with the Mosca MQTT broker, i.e. to check the reply from the client and republish if I don't receive the expected reply within the expected time?
I believe this is possible in MQTT 5, but as of now I have to use the Mosca broker with QoS 1 (which only supports up to MQTT 3.1.1).
I am looking for a Node.js workaround to achieve this.
As per my comment you can implement a request-response pattern with any MQTT broker but, prior to v5, you need to implement this yourself (either have a single reply-to topic and a message ID, or include a specific reply-to topic within each message).
Because MQTT 3.1.1 itself does not provide this functionality directly, and there is no standard format for the MQTT payload (just some bytes!), it's not possible to come up with a generic implementation (a unique id of some kind is needed within the request). This is resolved in MQTT v5 through the ability to include properties, including Response Topic and Correlation Data. For earlier versions you are stuck with adding some extra information into the payload (using whatever encoding mechanism you choose).
There are a few Stack Overflow questions that might provide some insight:
MQTT topic names for request/response
RPC style request with MQTT
Other articles:
Eclipse Kura
Stock Explorer
IoT Application Development Using Request-Response Pattern with MQTT (Academic article - purchase needed to read whole thing).
Amazon device shadow MQTT topics (e.g. send message to $aws/things/thingName/shadow/get and AWS IoT responds on /get/accepted or /get/rejected).
Here are a few node packages (note: these have not been updated for some time and I have not reviewed the code):
replyer
resmetry
Even with MQTT v5 you would need to implement the idle-timeout bit yourself. If you are using QoS 1/2 then the broker will take care of resending the message (until it receives a PUBACK/PUBCOMP), so resending the message yourself may be counterproductive (lots of identical messages queued up while the comms link is down).
A summary of the workflow I implemented:
Adding a "Correlation Id" to each message.
The expected reply is stored in Redis with the request's Correlation Id as the key, to compare against the response from the client.
The entry is removed from Redis if the received message matches the expected response topic and payload.
Timeouts use node cron jobs for each response from the client to the server.
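The timeout-and-republish step can be sketched transport-agnostically. This is an illustrative sketch, not the questioner's actual code: `sendFn` stands in for the Mosca publish, and calling the returned function stands in for the Redis lookup matching the expected reply.

```javascript
// Resend a request on a timer until a matching reply arrives or the
// retry budget is exhausted.
function publishWithRetry(sendFn, { retries = 3, timeoutMs = 1000 } = {}) {
  let timer = null;
  let attempts = 0;
  let settled = false;

  const trySend = () => {
    if (settled) return;
    if (attempts++ >= retries) return; // give up after `retries` sends
    sendFn();
    timer = setTimeout(trySend, timeoutMs);
  };

  trySend();

  // Call this when the expected reply is observed; it stops resending.
  return function onExpectedReply() {
    settled = true;
    clearTimeout(timer);
  };
}
```

As noted above, with QoS 1/2 the broker already retries undelivered publishes, so an application-level loop like this is only worth it when you are waiting on an application-level reply rather than a protocol-level ack.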
I have an issue using the npm mqtt package with Node.js. The server subscribes to topic 'alert/userId' to receive data published by the client, then the server unsubscribes from this topic. After each subscribe/unsubscribe cycle the messages are duplicated: the client sends 1 message; the server receives more than 1 message.
How did you publish the message? Did you set the retained flag to true?
If so this message will be delivered every time the client connects to the broker until it is cleared (by sending a null payload message to the same topic)
Publishing with QoS 1 means the message will be delivered at least once. Any subscribers could then receive the same message more than once.
You probably want to use QoS 2 if you want the message to be delivered exactly once.