I'm using Redis as the transport for microservices communication.
Each @MessagePattern in a controller creates two Redis channels: _ack for publishing the message and _res for the answer to that message.
A client that sends a message publishes it into the _ack channel and waits for the response by listening on the _res channel. When the answer arrives via the _res channel, the packet contains "disposed: true".
Question:
If I create 3 microservices (on different servers), two of them will have the same @MessagePattern and the third will send a message on this pattern. This is something like broadcasting messages.
Will the sender correctly process both answers from the _res channel?
Is there a right way to use the NestJS transport for broadcasting?
I see it like this: every unimportant action is sent by the client directly to the socket channel ('start_typing' and 'finish_typing', for example). But sending a message should be a POST REST call, which performs custom validation, logic, and persistence; after all of that is done, it sends the message to the socket channel.
Is this the correct way to do it? Or should I rather just send the message to the socket channel from the client?
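A rough sketch of the flow described above (the REST endpoint validates and persists, then pushes to the socket channel), assuming Express and Socket.IO; the route, event names and the saveMessage stub are illustrative:

const express = require('express');
const http = require('http');

const app = express();
app.use(express.json());
const server = http.createServer(app);
const io = require('socket.io')(server);

// illustrative persistence stub - replace with real validation/DB logic
async function saveMessage(text) {
  return { text, createdAt: Date.now() };
}

// important actions go through REST: validate, persist, then notify sockets
app.post('/messages', async (req, res) => {
  const text = (req.body.text || '').trim();
  if (!text) return res.status(400).json({ error: 'empty message' });

  const message = await saveMessage(text);
  io.emit('new_message', message); // fan out to connected clients only after persisting
  res.status(201).json(message);
});

// unimportant actions (typing indicators) can go straight over the socket
io.on('connection', socket => {
  socket.on('start_typing', data => socket.broadcast.emit('start_typing', data));
});

server.listen(3000);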
Is there any way to implement a request-response pattern with the mosca MQTT broker, i.e. to "check the reply from the client and re-publish if I don't receive the expected reply within the expected time"?
I believe this is possible in MQTT 5, but as of now I have to use the Mosca broker with QoS 1 (which only supports up to MQTT 3.1.1).
I am looking for a Node.js workaround to achieve this.
As per my comment you can implement a request-response pattern with any MQTT broker but, prior to v5, you need to implement this yourself (either have a single reply-to topic and a message ID, or include a specific reply-to topic within each message).
Because MQTT 3.1.1 itself does not provide this functionality directly and there is no standard format for the MQTT payload (it's just some bytes!), it's not possible to come up with a generic implementation (a unique ID of some kind is needed within the request). This is resolved in MQTT v5 through the ability to include properties such as Response Topic and Correlation Data. For earlier versions you are stuck with adding some extra information to the payload (using whatever encoding mechanism you choose).
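As a rough illustration of the pre-v5 approach (a reply-to topic plus a correlation ID carried in the payload), something along these lines can be done with the mqtt.js client; the topic names and the JSON envelope are just one possible convention:

const mqtt = require('mqtt');
const { randomUUID } = require('crypto');

const REPLY_TOPIC = 'myapp/replies/client-1'; // this client's private reply-to topic
const client = mqtt.connect('mqtt://localhost:1883');
const pending = new Map(); // correlationId -> callback waiting for the reply

client.on('connect', () => client.subscribe(REPLY_TOPIC, { qos: 1 }));

client.on('message', (topic, payload) => {
  const msg = JSON.parse(payload.toString());
  const waiter = pending.get(msg.correlationId);
  if (waiter) {
    pending.delete(msg.correlationId);
    waiter(msg);
  }
});

// publish a request and wait (with a timeout) for the matching reply
function request(topic, data, timeoutMs = 5000) {
  return new Promise((resolve, reject) => {
    const correlationId = randomUUID();
    const timer = setTimeout(() => {
      pending.delete(correlationId);
      reject(new Error('no reply within ' + timeoutMs + 'ms'));
    }, timeoutMs);
    pending.set(correlationId, reply => { clearTimeout(timer); resolve(reply); });

    // the responder is expected to echo correlationId back and publish to replyTo
    client.publish(topic, JSON.stringify({ correlationId, replyTo: REPLY_TOPIC, data }), { qos: 1 });
  });
}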
There are a few Stack Overflow questions that might provide some insight:
MQTT topic names for request/response
RPC style request with MQTT
Other articles:
Eclipse Kura
Stock Explorer
IoT Application Development Using Request-Response Pattern with MQTT (Academic article - purchase needed to read whole thing).
Amazon device shadow MQTT topics (e.g. send message to $aws/things/thingName/shadow/get and AWS IoT responds on /get/accepted or /get/rejected).
Here are a few node packages (note: these have not been updated for some time and I have not reviewed the code):
replyer
resmetry
Even with MQTT v5 you would need to implement the idle-timeout bit yourself. If you are using QoS 1/2 then the broker will take care of resending the message (until it receives a PUBACK/PUBCOMP), so resending the message yourself may be counterproductive (lots of identical messages queued up while the comms link is down).
The summary of the workflow I have done (a rough sketch follows the list):
Adding a "Correlation Id" to each message.
The expected reply is stored in Redis as the request payload (with the Correlation Id as the key), to compare against the response from the client.
The entry is removed from Redis once the received message matches the expected response topic and payload.
Timeouts use node cron jobs for each response from the client to the server.
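A rough sketch of that workflow (Correlation Id, expected reply stored in Redis, a cron sweep for timeouts), assuming the mqtt, redis (v3 callback API) and node-cron packages; the topic names, key prefix and 30-second threshold are assumptions:

const mqtt = require('mqtt');
const redis = require('redis');
const cron = require('node-cron');
const { randomUUID } = require('crypto');

const mqttClient = mqtt.connect('mqtt://localhost:1883');
const store = redis.createClient();

// 1. publish a request tagged with a Correlation Id and remember the expected reply in Redis
function sendTracked(topic, payload, expectedReply) {
  const correlationId = randomUUID();
  const entry = JSON.stringify({ topic, payload, expectedReply, sentAt: Date.now() });
  store.set('pending:' + correlationId, entry);
  mqttClient.publish(topic, JSON.stringify({ correlationId, payload }), { qos: 1 });
}

// 2. when a response arrives, drop the pending entry if it matches what was expected
mqttClient.on('connect', () => mqttClient.subscribe('responses/#', { qos: 1 }));
mqttClient.on('message', (topic, raw) => {
  const msg = JSON.parse(raw.toString());
  store.get('pending:' + msg.correlationId, (err, entry) => {
    if (!entry) return;
    const expected = JSON.parse(entry);
    if (msg.payload === expected.expectedReply) {
      store.del('pending:' + msg.correlationId); // satisfied, stop tracking
    }
  });
});

// 3. cron sweep every minute: re-publish any request that has waited too long for its reply
cron.schedule('* * * * *', () => {
  store.keys('pending:*', (err, keys) => {
    (keys || []).forEach(key => {
      store.get(key, (err, entry) => {
        if (!entry) return;
        const { topic, payload, sentAt } = JSON.parse(entry);
        if (Date.now() - sentAt > 30000) {
          const correlationId = key.slice('pending:'.length);
          mqttClient.publish(topic, JSON.stringify({ correlationId, payload }), { qos: 1 });
        }
      });
    });
  });
});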
How do I send a message that is published to a Redis channel only to the subscribed server (the one the required subscriber is connected to) and not to my other servers (where the required subscriber isn't connected)?
I'm using Socket.IO and a Redis server.
Have you read the documentation?
Senders (publishers) are not programmed to send their messages to specific receivers (subscribers). Rather, published messages are characterized into channels, without knowledge of what (if any) subscribers there may be.
In other words, you cannot target a specific subscriber.
Depending on what you are trying to achieve, you can consider using multiple channels, with each consumer using its own.
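One minimal sketch of that idea with the callback-style redis client: each server subscribes only to its own channel, and the publisher looks up which server a given subscriber lives on (the channel naming and the user-to-server hash are assumptions):

const redis = require('redis');

// each server listens only on its own channel
const SERVER_ID = process.env.SERVER_ID || 'server-1';
const sub = redis.createClient();
sub.subscribe('events:' + SERVER_ID);
sub.on('message', (channel, message) => {
  // deliver only to sockets connected to this server
  console.log('received on', channel, message);
});

const pub = redis.createClient();

// hypothetical lookup: which server is this user connected to?
// (kept in a Redis hash that servers update on connect/disconnect)
function lookupServerFor(userId, cb) {
  pub.hget('user-servers', userId, (err, serverId) => cb(serverId));
}

// publish only to the channel of the server that holds the target subscriber
function sendToUser(userId, payload) {
  lookupServerFor(userId, serverId => {
    if (serverId) pub.publish('events:' + serverId, JSON.stringify({ userId, payload }));
  });
}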
I am developing a chat application in Node.js using the MQTT protocol, mosca (node module), and MongoDB as the database.
I am facing the problem of how to delete a published message and remove it from all subscribers in the app.
At the MQTT level you can't: once a message has been published it will be delivered by the broker to all connected clients with a matching subscription (and queued for disconnected clients).
The only thing that is possible is to clear a retained message, to prevent the same payload being re-delivered each time a client connects. You do this by publishing a message with a null payload (and the retained bit set).
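For example, with the mqtt.js client (the topic name here is illustrative), clearing a retained message looks roughly like this:

const mqtt = require('mqtt');
const client = mqtt.connect('mqtt://localhost:1883');

client.on('connect', () => {
  // an empty payload with the retain flag set clears the retained message on
  // this topic, so newly connecting subscribers no longer receive the old one
  client.publish('chat/room1/lastMessage', '', { retain: true, qos: 1 }, () => client.end());
});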
If you want to delete messages at the chat level you will have to implement this yourself within the application.
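A minimal sketch of doing it at the application level, assuming the mqtt.js client and the official MongoDB driver; the collection name, control topic and message format are assumptions:

const mqtt = require('mqtt');
const { MongoClient, ObjectId } = require('mongodb');

const client = mqtt.connect('mqtt://localhost:1883');

async function deleteChatMessage(messageId) {
  const mongo = await MongoClient.connect('mongodb://localhost:27017');
  const db = mongo.db('chat');

  // remove the message from persistent history
  await db.collection('messages').deleteOne({ _id: new ObjectId(messageId) });

  // tell every subscriber to drop it from their local view
  client.publish('chat/control', JSON.stringify({ type: 'delete', messageId }), { qos: 1 });

  await mongo.close();
}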
The Socket.io API has the ability to send messages to all clients.
With one server and all sockets in memory, I understand how that server can send a message to all its clients; that's pretty obvious. But what about with multiple servers using Redis to store the sockets?
If I have client a connected to server y and client b connected to server z (and a Redis box for the store) and I do socket.broadcast.emit on one server, the client on the other server will receive this message. How?
How do the clients that are actually connected to the other server get that message?
Is one server telling the other server to send a message to its connected client?
Is the server establishing its own connection to the client to send that message?
Socket.io uses MemoryStore by default, so all the connected clients are stored in memory, making it impossible (well, not quite, but more on that later) to send and receive events from clients connected to a different socket.io server.
One way to make all the socket.io servers receive all the events is to have all servers use Redis's pub/sub. So, instead of using socket.emit, one can publish to Redis.
const redis_client = require('redis').createClient();
// publish the event to Redis instead of emitting it only to locally connected sockets
redis_client.publish('channelName', data);
And all the socket servers subscribe to that channel through redis and upon receiving a message emit it to clients connected to them.
const redis_sub = require('redis').createClient();
// every socket.io server subscribes to the shared channels...
redis_sub.subscribe('channelName', 'moreChannels');

// ...and relays anything published there to its own connected clients
redis_sub.on("message", function (channel, message) {
  socket.emit(channel, message);
});
Complicated stuff! But wait, it turns out you don't actually need this sort of code to achieve the goal. Socket.io has a RedisStore which essentially does what the code above is supposed to do, in a nicer way, so that you can write Socket.io code as you would for a single server and events will still get propagated to the other socket.io servers through Redis.
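For reference, the RedisStore configuration from the old Socket.IO 0.9 API looked roughly like this (in Socket.IO 1.x and later this was replaced by the socket.io-redis adapter, configured via io.adapter()):

const io = require('socket.io').listen(server);
const RedisStore = require('socket.io/lib/stores/redis');
const redis = require('socket.io/node_modules/redis');

// all three clients point at the same Redis instance; pub/sub is handled internally
io.set('store', new RedisStore({
  redisPub: redis.createClient(),
  redisSub: redis.createClient(),
  redisClient: redis.createClient()
}));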
To summarise: socket.io sends messages across multiple servers by using Redis as the channel instead of memory.
There are a few ways you can do this. More info in this question. A good explanation of how pub/sub in Redis works is here, in Redis' docs. An explanation of how the paradigm works in general is here, on Wikipedia.
Quoting the Redis docs:
SUBSCRIBE, UNSUBSCRIBE and PUBLISH implement the Publish/Subscribe messaging paradigm where (citing Wikipedia) senders (publishers) are not programmed to send their messages to specific receivers (subscribers). Rather, published messages are characterized into channels, without knowledge of what (if any) subscribers there may be. Subscribers express interest in one or more channels, and only receive messages that are of interest, without knowledge of what (if any) publishers there are. This decoupling of publishers and subscribers can allow for greater scalability and a more dynamic network topology.