I am developing a chat application in Node.js using the MQTT protocol, Mosca (a Node module) and MongoDB as the database.
I am facing the problem of how to delete a published message and remove it from all subscribers in the app.
At the MQTT level you can't: once a message has been published, it will be delivered by the broker to every connected client with a matching subscription (and queued for disconnected clients).
The only thing that is possible is to clear a retained message, to prevent the same payload being re-delivered each time a client connects. You do this by publishing a message with a null payload and the retained bit set.
If you want to delete messages at the chat level, you will have to implement this yourself within the application.
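For the retained-message case, a minimal sketch of the clearing call (with mqtt.js the real call would be client.publish(topic, '', { retain: true }) on a connected client; a stub client and an illustrative topic name are used here so the snippet runs without a broker):

```javascript
// Clearing a retained MQTT message: publishing an empty payload with the
// retain flag set tells the broker to discard the stored retained message
// for that topic.
function clearRetained(client, topic) {
  // Empty payload + retain bit = "delete the retained message"
  client.publish(topic, '', { qos: 0, retain: true });
}

// Stub standing in for require('mqtt').connect('mqtt://broker.example.com')
const published = [];
const stubClient = {
  publish: (topic, payload, opts) => published.push({ topic, payload, opts }),
};

clearRetained(stubClient, 'chat/room1');
console.log(published[0]); // { topic: 'chat/room1', payload: '', opts: { qos: 0, retain: true } }
```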
Related
I am creating a chat application with a REST API and a Socket.IO server. The user sends messages to the REST API, which persists them in the database and then pushes them to a RabbitMQ queue. RabbitMQ forwards each message to Socket.IO if the receiving user is online; otherwise the message stays in the queue, and when the user comes back online they retrieve it from the queue. I want to implement this the way WhatsApp does: each message is for a particular user, and only that user should receive it. In other words, I don't want to broadcast messages; only the intended recipient should receive them.
Chat should be a near-real-time application, and there are multiple ways of modeling such a thing. You can use HTTP polling or HTTP long polling, but some time ago a new application-level protocol was introduced: WebSocket. It would suit you well, along with STOMP messages. Here you can check a brief example. Sending messages to specific users is also supported out of the box (example).
To send messages to specific sockets you can use rooms: io.to(room).emit(event, msg). Every socket is automatically a member of a room whose name is the socket's own id.
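A runnable sketch of how room-based targeting works conceptually (the tiny registry below models what Socket.IO does internally; real code would simply call io.to(room).emit(...)):

```javascript
// Minimal model of Socket.IO rooms: each socket automatically joins a room
// named after its own id, so emitting to that room reaches exactly one client.
const rooms = new Map(); // room name -> Set of sockets

function join(socket, room) {
  if (!rooms.has(room)) rooms.set(room, new Set());
  rooms.get(room).add(socket);
}

function emitTo(room, event, msg) {
  // Deliver only to sockets in the given room, like io.to(room).emit(event, msg)
  for (const socket of rooms.get(room) || []) socket.received.push({ event, msg });
}

// Two fake sockets; each joins a room named after its id (Socket.IO does this for you)
const a = { id: 'sock-a', received: [] };
const b = { id: 'sock-b', received: [] };
join(a, a.id);
join(b, b.id);

emitTo('sock-a', 'private', 'hello A'); // only socket a receives this
console.log(a.received.length, b.received.length); // 1 0
```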
I wouldn't wait for the message to be written to the database before sending it out through Socket.IO; your API can do both at once. When a user connects, they can retrieve their stored messages from the database and then listen for new ones on their socket.
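A sketch of the "do both at once" idea, with stub database and socket objects standing in for the real ones (function and field names are illustrative):

```javascript
// Persist the message and push it to the recipient without serializing the
// two steps: the socket emit does not wait for the database write to finish.
function handleIncomingMessage(db, socket, msg) {
  const writeDone = db.save(msg);     // kick off the write (returns a promise)
  socket.emitToUser(msg.to, msg);     // emit immediately, not after the write
  return writeDone;                   // caller can still await persistence
}

// Stubs standing in for a real database client and Socket.IO server
const saved = [];
const pushed = [];
const db = { save: (m) => { saved.push(m); return Promise.resolve(); } };
const socket = { emitToUser: (user, m) => pushed.push({ user, m }) };

handleIncomingMessage(db, socket, { to: 'alice', text: 'hi' });
console.log(saved.length, pushed.length); // 1 1
```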
How do I send a message published to a Redis channel only to the server where the intended subscriber is actually connected, and not to my other servers (where that subscriber isn't connected)?
I'm using Socket.IO and Redis server.
Have you read the documentation?
[Senders (publishers) are] not programmed to send their messages to specific receivers (subscribers). Rather, published messages are characterized into channels, without knowledge of what (if any) subscribers there may be.
In other words, you cannot target a specific subscriber.
Depending on what you are trying to achieve, you can consider using multiple channels, with each consumer using its own.
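One common pattern is a channel per user, e.g. user:&lt;id&gt;, so each server subscribes only to the channels of the users connected to it. A runnable sketch with an in-memory stand-in for Redis pub/sub (with node-redis the equivalent calls would be subscribe and publish on a client):

```javascript
// In-memory stand-in for Redis pub/sub, illustrating per-user channels.
const subscriptions = new Map(); // channel -> array of handlers

function subscribe(channel, handler) {
  if (!subscriptions.has(channel)) subscriptions.set(channel, []);
  subscriptions.get(channel).push(handler);
}

function publish(channel, message) {
  // Like Redis, deliver only to subscribers of this exact channel
  for (const handler of subscriptions.get(channel) || []) handler(message);
}

// Server Y has user 42 connected; server Z has user 99 connected.
const deliveredOnY = [];
const deliveredOnZ = [];
subscribe('user:42', (m) => deliveredOnY.push(m));
subscribe('user:99', (m) => deliveredOnZ.push(m));

publish('user:42', 'hello 42'); // reaches only server Y's handler
console.log(deliveredOnY, deliveredOnZ); // [ 'hello 42' ] []
```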
I'm using an MQTT publisher, RabbitMQ and an MQTT subscriber. I have installed the RabbitMQ plugin that labels messages with a timestamp (rabbitmq_message_timestamp).
I have built an AMQP publisher, an AMQP subscriber and an MQTT subscriber using Node.js, and an MQTT publisher using Node-RED (the MQTT out node), setting the topic to "test", the server URL, the username and password of a RabbitMQ user, retain=true and no QoS.
1st PROBLEM) When I use an AMQP publisher and an AMQP subscriber, I can retrieve (on the subscriber side) RabbitMQ's timestamp by reading the field at the path msg.properties.timestamp. But when I use an MQTT publisher and an MQTT subscriber, if I try to retrieve the value of msg.properties.timestamp, the Node.js console says that the field "properties" is undefined.
2nd PROBLEM) When I publish a message with my Node-RED MQTT publisher (with topic "test"), if an MQTT subscriber is running on the test queue it receives the messages, but if there are no subscribers on the test queue, the RabbitMQ console says that the test queue is empty. After stopping the MQTT publisher, if I connect the MQTT subscriber to the test queue, it receives only the last message.
Can anyone help me to solve these problems?
There is nowhere in an MQTT message to store additional metadata properties (such as the timestamp you mention).
MQTT message headers hold little more than the topic, the QoS level and a retained flag.
So if you subscribe with the Node-RED MQTT client node, that is the only metadata that will be available.
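Since an MQTT (3.1.1) message carries no property fields, a common workaround is to put the timestamp in the payload itself. A sketch, assuming a JSON payload (field names are illustrative):

```javascript
// Wrap the application payload in JSON with a timestamp on the publish side...
function wrapWithTimestamp(payload) {
  return JSON.stringify({ ts: Date.now(), body: payload });
}

// ...and unwrap it on the subscribe side.
function unwrap(message) {
  const { ts, body } = JSON.parse(message);
  return { ts, body };
}

const wire = wrapWithTimestamp('hello');
const { ts, body } = unwrap(wire);
console.log(body, typeof ts); // hello number
```

Note this records the publisher's clock, not the broker's, which is the best available when the transport itself has no timestamp field.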
I have a PUB/SUB program using a ZeroMQ broker (Node.js).
The subscriber doesn't receive messages published while the subscriber is being restarted, even though the publisher keeps publishing. PUB/SUB works fine when both the publisher and subscriber services are started normally. The reason for this is unknown to me.
What could possibly be wrong?
While it is impossible to cover exactly the case of an undisclosed design (no MCVE) with a PUB socket on the publisher's side and an unspecified number of SUB sockets on the subscribers' side, there is one important fact.
The Design
Yes, the Design. Having read the API, the user will be sure that:
ZeroMQ does not guarantee message delivery.
A ZeroMQ PUB-lisher does not wait: it publishes a message to all currently connected SUB-scribers, without waiting for late joiners and without providing any queueing or persistence for SUB-scribers that are not connected, and it discards all messages it is asked to PUB-lish while no SUB-s are connected.
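The consequence is exactly the restart symptom described above: a subscriber that (re)connects misses everything published while it was away. A minimal model of that fire-and-forget delivery (an in-memory sketch, not the zeromq API):

```javascript
// Model of ZeroMQ PUB fire-and-forget: messages go only to subscribers
// connected at publish time; nothing is queued for absent ones.
const subscribers = new Set();

function publishMsg(msg) {
  if (subscribers.size === 0) return; // no SUBs connected: message is dropped
  for (const sub of subscribers) sub.inbox.push(msg);
}

const sub = { inbox: [] };

publishMsg('m1');        // published before the subscriber connects: lost
subscribers.add(sub);    // subscriber (re)connects
publishMsg('m2');        // delivered

subscribers.delete(sub); // subscriber restarts (disconnects)...
publishMsg('m3');        // ...and this message is lost too
subscribers.add(sub);

console.log(sub.inbox); // [ 'm2' ]
```

If you need messages to survive a subscriber restart, you have to add persistence or acknowledgments at the application level, or use a broker that queues (e.g. RabbitMQ).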
The Socket.io API has the ability to send messages to all clients.
With one server and all sockets in memory, I understand how that one server can send a message to all its clients; that's pretty obvious. But what about with multiple servers using Redis to store the sockets?
If I have client a connected to server y and client b connected to server z (and a Redis box for the store) and I do socket.broadcast.emit on one server, the client on the other server will receive this message. How?
How do the clients that are actually connected to the other server get that message?
Is one server telling the other server to send a message to its connected client?
Is the server establishing its own connection to the client to send that message?
Socket.io uses MemoryStore by default, so all the connected clients are stored in memory, making it impossible (well, not quite, but more on that later) to send and receive events from clients connected to a different socket.io server.
One way to make all the socket.io servers receive all the events is for every server to use Redis's pub/sub. So, instead of using socket.emit, one can publish to Redis:
const redis_client = require('redis').createClient();
redis_client.publish('channelName', data);
And all the socket servers subscribe to that channel through redis and upon receiving a message emit it to clients connected to them.
const redis_sub = require('redis').createClient();
redis_sub.subscribe('channelName', 'moreChannels');
redis_sub.on('message', function (channel, message) {
    socket.emit(channel, message);
});
Complicated stuff! But wait: it turns out you don't actually need this sort of code to achieve the goal. Socket.io has RedisStore, which essentially does what the code above does in a nicer way, so that you can write Socket.io code as you would for a single server and it will still be propagated to the other socket.io servers through Redis.
To summarise: socket.io sends messages across multiple servers by using Redis as the channel instead of memory.
There are a few ways you can do this. More info in this question. A good explanation of how pub/sub in Redis works is here, in Redis' docs. An explanation of how the paradigm works in general is here, on Wikipedia.
Quoting the Redis docs:
SUBSCRIBE, UNSUBSCRIBE and PUBLISH implement the Publish/Subscribe messaging paradigm where (citing Wikipedia) senders (publishers) are not programmed to send their messages to specific receivers (subscribers). Rather, published messages are characterized into channels, without knowledge of what (if any) subscribers there may be. Subscribers express interest in one or more channels, and only receive messages that are of interest, without knowledge of what (if any) publishers there are. This decoupling of publishers and subscribers can allow for greater scalability and a more dynamic network topology.