Is there any way to implement a request-response pattern with the Mosca MQTT broker, i.e. check the reply from the client and re-publish if I don't receive the expected reply within the expected time?
I believe this is possible in MQTT 5, but for now I have to use the Mosca broker with QoS 1 (Mosca supports only up to MQTT 3.1.1).
I am looking for a Node.js workaround to achieve this.
As per my comment, you can implement a request-response pattern with any MQTT broker, but prior to v5 you need to implement it yourself (either have a single reply-to topic and a message ID, or include a specific reply-to topic within each message).
Because MQTT 3.1.1 itself does not provide this functionality directly, and there is no standard format for the MQTT payload (just some bytes!), it's not possible to come up with a generic implementation (a unique ID of some kind is needed within the request). This is resolved in MQTT v5 through the ability to include properties, including Response Topic and Correlation Data. For earlier versions you are stuck with adding some extra information to the payload (using whatever encoding mechanism you choose).
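For illustration only, here is a minimal sketch of one way to do this with the mqtt npm package on MQTT 3.1.1, embedding a correlation ID and reply-to topic in a JSON payload. The topic names and payload shape are my own assumptions, not a standard:

// Sketch: correlation ID + reply-to topic carried inside a JSON payload (MQTT 3.1.1).
// Topic names and payload shape are arbitrary choices for illustration.
const mqtt = require('mqtt');
const { randomUUID } = require('crypto');

const client = mqtt.connect('mqtt://localhost:1883');
const pending = new Map(); // correlationId -> resolve callback

client.on('connect', () => {
  client.subscribe('devices/+/reply', { qos: 1 });
});

client.on('message', (topic, message) => {
  const { correlationId, payload } = JSON.parse(message.toString());
  const resolve = pending.get(correlationId);
  if (resolve) {
    pending.delete(correlationId);
    resolve(payload);
  }
});

function request(deviceId, payload) {
  return new Promise((resolve) => {
    const correlationId = randomUUID();
    pending.set(correlationId, resolve);
    client.publish(
      'devices/' + deviceId + '/request',
      JSON.stringify({ correlationId, replyTo: 'devices/' + deviceId + '/reply', payload }),
      { qos: 1 }
    );
  });
}

The responding client would subscribe to its request topic and publish its answer to the replyTo topic, echoing back the correlation ID.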
There are a few Stack Overflow questions that might provide some insight:
MQTT topic names for request/response
RPC style request with MQTT
Other articles:
Eclipse Kura
Stock Explorer
IoT Application Development Using Request-Response Pattern with MQTT (Academic article - purchase needed to read whole thing).
Amazon device shadow MQTT topics (e.g. send message to $aws/things/thingName/shadow/get and AWS IoT responds on /get/accepted or /get/rejected).
Here are a few node packages (note: these have not been updated for some time and I have not reviewed the code):
replyer
resmetry
Even with MQTT v5 you would need to implement the idle-timeout part yourself. If you are using QoS 1/2 then the broker will take care of resending the message (until it receives a PUBACK/PUBCOMP), so resending the message yourself may be counterproductive (lots of identical messages queued up while the comms link is down).
A summary of the workflow I have implemented (a rough sketch follows the list):
Add a "Correlation Id" to each message.
Store the expected reply in Redis, keyed by the request's Correlation Id, so the client's response can be compared against it.
Remove the entry from Redis once the received message matches the expected response topic and payload.
Handle the timeout with node cron jobs, one for each response expected from the client.
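A very rough sketch of that workflow, assuming the mqtt package and node-redis v4 (the key names are illustrative, and a setTimeout stands in for the cron job):

// Rough sketch only: expected reply kept in Redis under the correlation ID,
// republish if no matching reply arrives within the timeout.
const mqtt = require('mqtt');
const { createClient } = require('redis');

const mqttClient = mqtt.connect('mqtt://localhost:1883');
const redisClient = createClient();
redisClient.connect(); // node-redis v4 requires an explicit connect

async function publishWithRetry(topic, payload, correlationId, timeoutMs) {
  // Remember which reply we expect for this correlation ID.
  await redisClient.set('pending:' + correlationId, JSON.stringify(payload));
  mqttClient.publish(topic, JSON.stringify({ correlationId, payload }), { qos: 1 });

  setTimeout(async () => {
    // If the key is still there, no matching reply arrived in time: publish again.
    const stillPending = await redisClient.get('pending:' + correlationId);
    if (stillPending) publishWithRetry(topic, payload, correlationId, timeoutMs);
  }, timeoutMs);
}

// When a reply arrives that matches the expected response topic and payload,
// delete the key so the retry loop stops.
async function onExpectedReply(correlationId) {
  await redisClient.del('pending:' + correlationId);
}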
Related
I'm trying to understand how to do two-way communication with Google Pub/Sub with the following architecture.
EDIT: I meant to say subscribers instead of consumers
I'm trying to support the following workflow:
The UI sends a request to an API service to start an async process.
The API service publishes the request to a topic to begin the process kick-off.
The consumer picks up the message and runs the async process service.
Once the async process service is done, it publishes to a process-complete topic.
Here is where I want the UI to pick up the process-complete message, and I'm trying to figure out the best approach.
So two questions:
Are multiple topics the preferred approach when wanting to do two-way communication back to the client? Or is there a way to do this with a single topic with multiple subscriptions?
How should the consumer of the process-complete topic get the response back to the UI? Should the UI be the consumer of the subscription? Or should I send it back to the API service and publish a websocket message? Both approaches seem to have trade-offs.
Multiple topics are going to be preferred in this situation: one for messages going to the asynchronous processors and one for the responses that go back. Otherwise, your asynchronous processors will needlessly receive the response messages and have to ack them immediately, which is unnecessary extra delivery of messages.
With regard to getting the response back to the UI, the UI should not be the consumer of the subscription. In order to do that, you'd need every running instance of the UI to have its own subscription because otherwise, they would load balance messages across them and you couldn't guarantee that the particular client that sent the request would actually receive the response. The same would be true if you have multiple API servers that need to receive particular responses based on the requests that transmitted through them. Cloud Pub/Sub isn't really designed for topics and subscriptions to be ephemeral in this way; it is best when these are created once and all of the data is transmitted across them.
Additionally, having the UI act as a subscriber means that you'd have to have the credentials in the UI to subscribe, which could be a security issue.
You might also consider not using a topic for the asynchronous response. Instead, you could encode as part of the message the address or socket of the client or API server that expects the response. Then, the asynchronous processor could receive a message, process it, send a response to the address specified in the message, and then ack the message it received. This would ensure responses are routed to where they need to go and minimize the delivery of messages that subscribers just ack that they don't need to process, e.g., messages that were intended for a different API server.
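As a non-authoritative sketch of the two-topic approach using the Node.js @google-cloud/pubsub client: the topic, subscription, and attribute names, plus the doAsyncWork/notifyUi helpers, are hypothetical, and it assumes each API server has its own completion subscription (or a suitable filter) as discussed above.

// Sketch only: request topic + response topic, with attributes carrying routing info.
const { PubSub } = require('@google-cloud/pubsub');
const pubsub = new PubSub();

// API service: publish the request, tagging it so the response can be routed back.
async function publishRequest(payload, requestId) {
  await pubsub.topic('process-requests').publishMessage({
    data: Buffer.from(JSON.stringify(payload)),
    attributes: { requestId, apiServer: 'api-1' }, // illustrative attribute names
  });
}

// Asynchronous processor: consume requests, do the work, publish to the response topic.
pubsub.subscription('process-requests-sub').on('message', async (message) => {
  const result = await doAsyncWork(JSON.parse(message.data.toString())); // hypothetical helper
  await pubsub.topic('process-complete').publishMessage({
    data: Buffer.from(JSON.stringify(result)),
    attributes: message.attributes, // carry requestId / apiServer through
  });
  message.ack();
});

// API service: consume completions and push the result to the right UI client,
// e.g. over a websocket it already holds for that request.
pubsub.subscription('process-complete-sub').on('message', (message) => {
  notifyUi(message.attributes.requestId, JSON.parse(message.data.toString())); // hypothetical helper
  message.ack();
});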
I'm using the stomp-client library and I want to know whether it is possible to tell if a message was delivered to the queue, because I'm implementing a Java service to dequeue the messages and a Node.js service to send the messages to the queue. The code below shows how I send a message to the queue.
this._stompClient.publish('/queue/MessagesQueue', messageToPublish, { })
When you send a SEND frame (i.e. publish a message) you can add a receipt header and then when you receive the RECEIPT frame from the broker you know it has successfully received the message. The STOMP specification says this about the receipt header:
Any client frame other than CONNECT MAY specify a receipt header with an arbitrary value. This will cause the server to acknowledge receipt of the frame with a RECEIPT frame which contains the value of this header as the value of the receipt-id header in the RECEIPT frame.
However, looking at the documentation for stomp-client I don't see any mention of how to receive RECEIPT frames. I actually would expect the ability to specify a callback on the publish method which was called when the RECEIPT frame is received. It doesn't appear that stomp-client supports working with receipts. Unfortunately that means there's no real way to confirm the message was received by the broker.
I recommend you find a more mature STOMP client implementation that supports receipts. For example stomp-js supports receipts.
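For example, a rough sketch with @stomp/stompjs might look like the following (the broker URL, the WebSocket transport, and the receipt ID scheme are assumptions):

// Sketch: publish with a receipt header and get a callback when the RECEIPT frame arrives.
const { Client } = require('@stomp/stompjs');
const WebSocket = require('ws'); // @stomp/stompjs needs a WebSocket implementation in Node

const client = new Client({
  webSocketFactory: () => new WebSocket('ws://localhost:61614/stomp'), // assumed broker URL
});

client.onConnect = () => {
  const messageToPublish = JSON.stringify({ hello: 'world' }); // example body
  const receiptId = 'msg-' + Date.now();

  // Called when the broker sends the matching RECEIPT frame.
  client.watchForReceipt(receiptId, () => {
    console.log('Broker confirmed receipt of', receiptId);
  });

  client.publish({
    destination: '/queue/MessagesQueue',
    body: messageToPublish,
    headers: { receipt: receiptId },
  });
};

client.activate();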
How do I send a message that is published to a Redis channel only to the server that the subscriber is connected to, and not to my other servers (where the required subscriber isn't connected)?
I'm using Socket.IO and Redis server.
Have you read the documentation?
Senders (publishers) are not programmed to send their messages to specific receivers (subscribers). Rather, published messages are characterized into channels, without knowledge of what (if any) subscribers there may be.
In other words, you cannot target a specific subscriber.
Depending on what you are trying to achieve, you can consider using multiple channels, with each consumer using its own.
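A minimal sketch of that multi-channel approach with node-redis (v4), where the channel naming and consumer ID are made up for illustration:

// Each consumer subscribes to its own channel, so a publisher can target one consumer
// by publishing to that consumer's channel only.
const { createClient } = require('redis');

async function main() {
  const publisher = createClient();
  const subscriber = createClient();
  await publisher.connect();
  await subscriber.connect();

  const consumerId = 'consumer-42'; // each consumer uses its own channel
  await subscriber.subscribe('notifications:' + consumerId, (message) => {
    console.log('got targeted message:', message);
  });

  // Only the subscriber listening on this channel receives the message.
  await publisher.publish('notifications:' + consumerId, 'hello, just you');
}

main();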
I am building a specific device based on Node and Cylon, and am publishing events to an MQTT broker. I'd like to know how to perform a certain action once a certain MQTT message arrives at the device. Can anybody point me in the right direction? I'm a bit lost in the matter ;)
I use this to publish data:
mqtt.publish(thingTopic, JSON.stringify(data));
I'd like to create something like this:
if a certain message arrives at the broker -> do a POST or GET request to an internal URL.
The question is a bit vague, I must admit...
You would probably need to build your own custom MQTT broker to achieve what you are looking for which is not really the point of the pub/sub message paradigm. Instead of customizing the MQTT broker, look into creating your own subscribing application that will react to messages being received from the MQTT Broker.
Hopefully the following sequence diagram will help you understand.
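As an illustration only, a small subscribing application with the mqtt package could look like this (the broker URL, topic, the "certain message" check, and the internal endpoint are placeholders; global fetch assumes Node 18+):

// Sketch of a separate subscriber that reacts to a specific message by calling an internal URL.
const mqtt = require('mqtt');

const client = mqtt.connect('mqtt://broker.local:1883'); // placeholder broker

client.on('connect', () => client.subscribe('things/+/events'));

client.on('message', async (topic, payload) => {
  const data = JSON.parse(payload.toString());
  if (data.event === 'door-opened') { // the "certain message" condition
    await fetch('http://internal.service.local/api/notify', { // placeholder internal URL
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ topic, data }),
    });
  }
});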
I am trying to build a generic publish/subscribe server with Node.js and node_redis that receives requests from a browser containing a channel name and responds with any data that has been published to that channel. To do this, I am using long-polling requests from the browser and dealing with these requests by sending a response when a message is received on the channel.
For each new request, an object is created for subscribing to the channel (if and only if it does not already exist).
var clients = {};
// when a request comes in for a channel:
clients[channel] = redis.createClient();
clients[channel].subscribe(channel);
Is this the best way to deal with the subscription channels, or is there some other more intuitive way?
I don't know what your design is, but you can subscribe to multiple channels with one Redis client (note that once a client has issued SUBSCRIBE, it can only subscribe to further channels or unsubscribe within that connection: http://redis.io/commands/subscribe). When you receive a message you get full information about which channel it came from, so you can then distribute the message to all interested clients yourself.
This helped me a little, because I could encode the type of message in the channel name and then dynamically choose the action for each message in a small function, instead of generating a separate subscription with separate logic for each channel.
Inside my node.js server I have only 2 Redis clients (a minimal sketch follows below):
a simple client for all standard actions - lpush, sadd and so on
a subscribe client - which listens for messages on the subscribed channels; these messages are then distributed to all sessions (stored as sets for each channel type) using the first Redis client
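Here is a minimal sketch of that two-client setup with node-redis (v4); the channel naming and the sessions map are illustrative:

// One ordinary client for commands (lpush, sadd, ...), one dedicated subscriber connection.
const { createClient } = require('redis');

async function main() {
  const commands = createClient();
  const subscriber = commands.duplicate(); // a connection in subscriber mode can only (un)subscribe
  await commands.connect();
  await subscriber.connect();

  const sessionsByChannel = new Map(); // channel -> set of waiting long-poll responses

  await subscriber.pSubscribe('updates:*', (message, channel) => {
    // The channel name tells us which waiting clients should receive this message.
    for (const res of sessionsByChannel.get(channel) || []) {
      res.end(message); // answer the long-poll request
    }
    sessionsByChannel.delete(channel);
  });
}

main();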
I would like to point you to my post about pub/sub using Socket.IO together with Redis. Socket.IO is a very good library =>
How to use redis PUBLISH/SUBSCRIBE with nodejs to notify clients when data values change?
I think the design is very simple and it should also be very scalable.
That seems like a pretty reasonable solution to me. What don't you like about it?
Something to keep in mind is that you can have multiple subscriptions on each Redis connection. This might end up complicating your logic, which is the opposite of what you are asking for. However, at scale this might be necessary. Each Redis connection is relatively inexpensive, but it does require a file descriptor and some memory.
Complete Redis Pub/Sub Example (Real-time Chat using Hapi.js & Socket.io)
We were trying to understand Redis Publish/Subscribe ("Pub/Sub") and all the existing examples were either outdated, too simple or had no tests.
So we wrote a Complete Real-time Chat using Hapi.js + Socket.io + Redis Pub/Sub Example with End-to-End Tests!
https://github.com/dwyl/hapi-socketio-redis-chat-example
The Pub/Sub component is only a few lines of node.js code:
https://github.com/dwyl/hapi-socketio-redis-chat-example/blob/master/lib/chat.js#L33-L40
Rather than pasting it here (without any context), we encourage you to check out and try the example.
We built it using Hapi.js, but the chat.js file is decoupled from Hapi and can easily be used with a basic Node.js HTTP server or Express (etc.)