I am using mqtt-node to receive subscribed messages. The problem is that the list of topics to subscribe to gets appended to through an API, but the MQTT connection does not pick up the newly appended topics while it is already subscribed to the others. Please advise or suggest a suitable way to solve this issue.
There is no topic list.
The only way to discover what topics are in use is to either maintain a list external to the broker or to subscribe to a wildcard and see what messages are published.
It's important to remember that topics only really exist at the moment a message is published to one. Subscribers supply a list of patterns (which can include wildcards like + or #) to match against those published topics, and any matching messages are delivered.
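For example, with the mqtt package in Node you can subscribe to a wildcard and log every topic you see (the broker URL here is a placeholder):
const mqtt = require("mqtt");
const client = mqtt.connect("mqtt://broker.example.com");

client.on("connect", () => {
  // "#" matches every topic; "sensors/+" would match a single level
  client.subscribe("#");
});

client.on("message", (topic, message) => {
  console.log("saw a message on:", topic);
});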
You maintain an array of Topics:
var topics = [
  "test/1",
  "test/2",
  "test/3"
]
When a new Topic arrives via the API, you will need to first unsubscribe from the existing Topics:
client.unsubscribe(topics)
then add the new Topic:
topics.push(newTopic)
then re-subscribe:
client.subscribe(topics)
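Putting those steps together, a minimal sketch (addTopic is an illustrative name; client is the connected MQTT.js client and topics is the array above):
function addTopic(newTopic) {
  client.unsubscribe(topics, (err) => {
    if (err) return console.error("unsubscribe failed:", err);
    topics.push(newTopic);
    client.subscribe(topics, (err) => {
      if (err) console.error("subscribe failed:", err);
    });
  });
}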
This is what worked best for me when I have this use case.
Keep in mind that in the time between unsubscribing and re-subscribing, messages could be published that your client would not see, due to not being subscribed at the time. This is easy to overcome if you can use the RETAIN flag on your Publishers....but in some use cases, this isn't practical.
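For reference, setting that flag with MQTT.js is just a publish option (topic and payload here are placeholders):
// retained messages are re-delivered to late subscribers
client.publish("test/1", "last known value", { retain: true });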
Problem
I am facing what I assume is a common problem: I have a publisher that publishes on both high-volume and low-volume topics (i.e. topics where dropping is okay when the subscriber or network is slow, because messages are high-frequency, and topics where updates are infrequent and I never want to drop messages).
Possible Solutions
Unbounded queue on Publisher side
This might work, but the high-volume topics may be so fast that they flood memory. It is desirable for high-volume topics to be dropped and low-volume topics to be protected.
One PUB socket per high-volume topic, for all high-volume topics, or for every topic
Based on my understanding of ZeroMQ's queueing model, with a queue for each connection on both the publisher side and the subscriber side, the only way to protect my low-volume topics from being dropped and pushed out by the high-volume topics is to create a separate PUB socket for each high-volume topic (or one for all of them) and somehow communicate that to subscribers, who will then need to connect to multiple endpoints on the same node.
This complicates things on the subscriber side: subscribers now need prior knowledge of the mapping between ports and topics, or a way to request an index from the publisher node. That in turn requires either that the port numbers are fixed, or that every time a subscriber has a connection issue it checks the index to see if a port changed.
Publisher- and Subscriber-Side Topic Queues Per Connection
The only solution I can see at the moment is to create a queue for each subscriber and topic on the publisher side and on the subscriber side, hidden away inside a library so neither side needs to think about it. When messages for a specific topic overflow, they can still be dropped without pushing out messages for other topics. A separate ordered dictionary would maintain the queue pull order for a worker feeding messages into the PUB socket, or pulling events out on the subscriber side. A rough sketch of the structure follows.
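To illustrate (sketched in JavaScript; this is not any ZeroMQ API, and the class name and size limit are made up):
// Each topic gets its own bounded queue, so a flood on one topic can
// only drop that topic's own messages. The Map doubles as the ordered
// dictionary, since it iterates in insertion order.
class PerTopicBuffer {
  constructor(maxPerTopic = 1000) {
    this.maxPerTopic = maxPerTopic;
    this.queues = new Map();
  }
  push(topic, message) {
    let q = this.queues.get(topic);
    if (!q) this.queues.set(topic, (q = []));
    if (q.length >= this.maxPerTopic) q.shift(); // drop oldest, this topic only
    q.push(message);
  }
  // A worker would loop over this, feeding one message per topic at a
  // time into the PUB socket (or handing events to the application on
  // the subscriber side).
  *drain() {
    for (const [topic, q] of this.queues) {
      if (q.length > 0) yield [topic, q.shift()];
    }
  }
}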
This solution only works if I know when the ZeroMQ queue on the publisher side is in a mute state and will drop messages, so I know to hold off "publishing" the next message, which would probably be lost. This can be done with the ZMQ_XPUB_NODROP option (http://api.zeromq.org/master:zmq-setsockopt).
This will probably work, but it is non-trivial, probably slower than it could be because of the language I use (Python), and the kind of thing I would have expected a messaging library to handle for me.
I'm trying to figure out what would be the best way to remove all the messages from a Pulsar topic (either logically or physically), so that they are no longer consumable by subscriptions?
I know we can simply do $ pulsar-admin persistent delete persistent://tenant/namespace/topic.
But this solution has some drawbacks: it removes the topic completely (so we have to recreate it later), and it requires that no active clients (i.e. subscriptions or producers) are connected to it.
Alternatively, is there a way to programmatically make all messages between two MessageIds unavailable to the subscriptions?
Thanks
There are a couple of options you can choose from.
You can use topics skip to skip N messages for a specific subscription of a given topic. https://pulsar.apache.org/docs/en/admin-api-persistent-topics/#skip-messages
You can use topics skip-all to skip all the old messages for a specific subscription for a given topic. https://pulsar.apache.org/docs/en/admin-api-persistent-topics/#skip-all-messages
You can use topics clear-backlog to clear the backlog of a specific subscription. It is the same as topics skip-all.
You can also use topics reset-cursor to move the subscription cursor back to a specific message ID or timestamp.
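For example (tenant, namespace, topic, and subscription names are placeholders; run pulsar-admin topics --help to confirm the flags in your version):
$ pulsar-admin topics clear-backlog --subscription my-subscription persistent://tenant/namespace/topic
$ pulsar-admin topics reset-cursor --subscription my-subscription --time 10m persistent://tenant/namespace/topic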
From Sijie Guo's answer, I tried skip-all, but got:
Expected a command, got skip-all
Invalid command, please use pulsar-admin --help to check out how to use
I retried with clear-backlog, and it succeeded.
https://github.com/apache/pulsar/issues/5685#issuecomment-664751216
The doc is updated here:
https://pulsar.apache.org/docs/pulsar-admin/#list-5
but not here:
https://pulsar.apache.org/docs/admin-api-topics#skip-all-messages
So it's confusing.
I'm creating a consumer of an Azure Service Bus topic (subscription) that does nothing but store some statistics. The messages sent to the topic contain a rather large body, which is handled by another consumer on a second subscription on the same topic.
Since the statistics consumer can handle a large number of messages in one go, I was wondering if it is possible to receive a lot of messages but leave out the body to improve performance when communicating with Service Bus and to receive even more messages in one go.
I'm currently doing this:
this.messageReceiver = new MessageReceiver(conn, path);
...
await messageReceiver.ReceiveAsync(10, TimeSpan.FromSeconds(5));
It works pretty sweet but it would be nice to be able to receive 100 or more messages, without having to worry about moving large messages over the network.
Before anyone suggests it, I already know that I can ask for a count, etc. on a topic subscription. I still need the Message object since that contains an entry in the UserProperties dictionary that is used to calculate the stats.
Not possible. You can peek, but that brings the whole payload and headers without incrementing the DeliveryCount of the message. You could request it as a broker feature here.
I am trying to design the strategy that my organization will employ to create topics, and which messages will go to which one. I am looking at either creating a separate topic for each event, or a single topic to hold messages from all events, and then to triage with filters. I am convinced that using a separate topic for every event is better because:
Filters will be less complex and thus more performant, since each event is already separated into its own topic.
There will be less chance of message congestion in any given topic.
Messages are less likely to be needlessly copied into any given subscription.
More topics means more messaging stores, which means better message retrieval and sending.
From a risk management perspective, having more topics seems better. If I only used a single topic, an outage would affect all subscribers for all messages. If I use many topics, then perhaps outages would only affect some topics and leave the others operational.
I get 12 more shared access keys per topic. It's easier to have more granular control over which topics are exposed to which client apps, since I can add or revoke access by adding or revoking the shared access key for each app on a per-topic basis.
Any thoughts would be appreciated.
Like Sean already mentioned, there is really no one answer, but here are some details about topics that could help you.
Topics are designed for a large number of recipients, sending messages to multiple (up to 2,000) subscriptions, which are what actually hold the filters
Topics don't really store messages; subscriptions do
For outages, unless you have topics across regions, I'm not sure having more of them would help as such
The limit of 12 is on shared access authorization rules per entity. You should be using one of these rules to generate SAS keys for your clients.
Also, chaining Service Bus entities with auto-forwarding is something you could consider as needed.
I would appreciate your thoughts on this.
Node app 1 sends data to a RabbitMQ queue. The data contains a unique ID.
Node app 2 requests data with a specific ID from the RabbitMQ queue.
So as you can see, I need to be able to select specific messages from the queue, rather than just the next available message.
Is this possible? How can I do it?
Thanks.
Yes. You can use either a headers or a topic exchange; look for Exchanges and Exchange Types here. For topic exchanges there is also a tutorial here.
Not directly from a single queue.
If you have 3 messages in a queue, those messages will come out of that queue in order: first in, first out.
The "selective consumer" pattern, retrieving a message by some value from a queue, is an anti-pattern in RabbitMQ.
To accomplish what you want, you need to create an exchange / queue / binding setup that routes each message to a specific queue so that your specific consumer can handle it.
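A minimal sketch with amqplib, using a direct exchange and the unique ID as the routing key (the same idea works with a topic or headers exchange; the exchange name and ID here are made up):
const amqp = require("amqplib");

async function main() {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertExchange("by-id", "direct", { durable: false });

  // Node app 2: bind a queue only to the ID it cares about
  const { queue } = await ch.assertQueue("", { exclusive: true });
  await ch.bindQueue(queue, "by-id", "id-42");
  ch.consume(queue, (msg) => {
    console.log("got:", msg.content.toString());
    ch.ack(msg);
  });

  // Node app 1: publish with the unique ID as the routing key
  ch.publish("by-id", "id-42", Buffer.from("payload for id-42"));
}

main().catch(console.error);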