I am getting some unexpected behavior when playing around with the initial position of a consumer. From the docs:
"latest" = LIFO queue
"earliest" = FIFO queue.
The thing I am seeing is:
"Latest" ignores all messages before a subscription is created. It then becomes a FIFO queue from, the point the subscription is created. See the output below when using "latest" on a consumer, see that the IDs and publish timestamps show FIFO behavior.
receiving message
<pulsar.Message object at 0x7f011ec92220>
Received message 'b'Hello-10-lifo-v3'' id='(15537,40,-1,-1)' time='0' publish_time='1631809207044'
receiving message
<pulsar.Message object at 0x7f011ebfa310>
Received message 'b'Hello-9-lifo-v3'' id='(15537,41,-1,-1)' time='0' publish_time='1631809207165'
receiving message
<pulsar.Message object at 0x7f011ec92220>
Received message 'b'Hello-8-lifo-v3'' id='(15537,42,-1,-1)' time='0' publish_time='1631809207256'
receiving message
<pulsar.Message object at 0x7f011ebfa310>
Received message 'b'Hello-7-lifo-v3'' id='(15537,43,-1,-1)' time='0' publish_time='1631809207307'
receiving message
<pulsar.Message object at 0x7f011ec92220>
Received message 'b'Hello-6-lifo-v3'' id='(15537,44,-1,-1)' time='0' publish_time='1631809207396'
receiving message
<pulsar.Message object at 0x7f011ebfa310>
Received message 'b'Hello-5-lifo-v3'' id='(15537,45,-1,-1)' time='0' publish_time='1631809207463'
receiving message
<pulsar.Message object at 0x7f011ec92220>
Received message 'b'Hello-4-lifo-v3'' id='(15537,46,-1,-1)' time='0' publish_time='1631809207512'
receiving message
<pulsar.Message object at 0x7f011ebfa310>
Received message 'b'Hello-3-lifo-v3'' id='(15537,47,-1,-1)' time='0' publish_time='1631809207608'
receiving message
<pulsar.Message object at 0x7f011ec92220>
Received message 'b'Hello-2-lifo-v3'' id='(15537,48,-1,-1)' time='0' publish_time='1631809207675'
receiving message
<pulsar.Message object at 0x7f011ebfa310>
Received message 'b'Hello-1-lifo-v3'' id='(15537,49,-1,-1)' time='0' publish_time='1631809207723'
When using "earliest" I am getting true FIFO queue, where all messages since the start of the topic are received in FIFO. Output is basically the same as above in terms of id and publish timestamp.
Is this the expected behavior?
Thanks!
Pulsar only has FIFO behaviour. "Earliest" and "Latest" only refer to where you start consuming in the queue: do you want to get all past unacknowledged messages, or only new incoming ones? See https://pulsar.apache.org/docs/en/concepts-clients/ for more details.
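For illustration, a minimal sketch with the Python client showing how the starting position is chosen (the broker URL and the topic/subscription names are placeholders):

import pulsar

client = pulsar.Client('pulsar://localhost:6650')

# initial_position only controls where the cursor starts when the
# subscription is first created; delivery order is still FIFO.
consumer = client.subscribe(
    'persistent://public/default/my-topic',            # illustrative topic
    subscription_name='my-subscription',
    initial_position=pulsar.InitialPosition.Earliest,  # or .Latest (the default)
)

msg = consumer.receive()
print(msg.message_id(), msg.publish_timestamp(), msg.data())
consumer.acknowledge(msg)
client.close()

With Earliest the cursor starts at the beginning of the topic backlog; with Latest it starts at the end, so only messages published after the subscription exists are delivered, but in both cases they arrive in FIFO order.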
Related
I've recently been having problems with my Service Bus queue. Random messages (one can pass and the next not) are placed on the dead-letter queue with an error message saying:
"DeadLetterReason": "Moved because of Unable to get Message content There was an error deserializing the object of type System.String. The input source is not correctly formatted."
"DeadLetterErrorDescription": "Des"
This happens even before my consumer has the chance to receive the message from the queue.
The weird part is that when I requeue the message through Service Bus Explorer it passes and is successfully received and handled by my consumer.
I am using the same version of the Service Bus client for both sending and receiving the messages:
Azure.Messaging.ServiceBus, version: 7.2.1
My message is being sent like this:
await using var client = new ServiceBusClient(connString);
var sender = client.CreateSender(endpointName);
var message = new ServiceBusMessage(serializedMessage);
await sender.SendMessageAsync(message).ConfigureAwait(true);
The solution I have for now for the described issue is a retry policy for messages that land on the dead-letter queue: the message is cloned from the DLQ and added again to the Service Bus queue, and on the second attempt there are no problems and the message completes successfully. I suppose this happens because of some odd performance issues I might have in the Azure infrastructure, but this approach bought me some time to investigate further.
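For anyone wanting to try the same workaround, here is a rough sketch of the idea using the azure-servicebus Python SDK (the code above is C#; the connection string and queue name are placeholders, and the body is assumed to be a serialized string as in the question):

from azure.servicebus import ServiceBusClient, ServiceBusMessage, ServiceBusSubQueue

CONN_STR = "<connection string>"  # placeholder
QUEUE = "my-queue"                # placeholder

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    dlq_receiver = client.get_queue_receiver(queue_name=QUEUE, sub_queue=ServiceBusSubQueue.DEAD_LETTER)
    sender = client.get_queue_sender(queue_name=QUEUE)
    with dlq_receiver, sender:
        for dlq_msg in dlq_receiver.receive_messages(max_message_count=10, max_wait_time=5):
            # Re-create the message from the dead-lettered body and resubmit it.
            sender.send_messages(ServiceBusMessage(str(dlq_msg)))
            # Remove the original from the DLQ once the resubmit succeeded.
            dlq_receiver.complete_message(dlq_msg)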
I am trying to subscribe to a topic in ActiveMQ running on localhost, using stompest to connect to the broker. Please refer to the code below:
import os
import json
from stompest.sync import Stomp
from stompest.config import StompConfig

CONFIG = StompConfig(uri=os.environ['MQ_URL'],
                     login=os.environ['MQ_UID'],
                     passcode=os.environ['MQ_DWP'],
                     version="1.2")
topic = '/topic/SAMPLE.TOPIC'
msg = {'refresh': True}

client = Stomp(CONFIG)
client.connect()
client.send(topic, json.dumps(msg).encode())
client.disconnect()

client = Stomp(CONFIG)
client.connect(heartBeats=(0, 10000))
token = client.subscribe(topic, {
    "ack": "client",
    "id": '0'
})
frame = client.receiveFrame()
if frame and frame.body:
    print(f"Frame received from MQ: {frame.info()}")
client.disconnect()
Although I see an active connection in the ActiveMQ web console, no message is received in the code. The flow of control seems to pause at frame = client.receiveFrame().
I didn't find any reliable resource or documentation regarding this.
Am I doing anything wrong here?
This is the expected behavior since you're using a topic (i.e. pub/sub semantics). When you send a message to a topic it will be delivered to all existing subscribers. If no subscribers are connected then the message is discarded.
You send your message before any subscribers are connected, which means the broker will discard the message. Once the subscriber connects there are no messages to receive, so receiveFrame() will simply block waiting for a frame, as the stompest documentation notes:
Keep in mind that this method will block forever if there are no frames incoming on the wire.
Try either sending a message to a queue and then receiving it, or creating an asynchronous client first and then sending your message.
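For example, sticking with the synchronous client, one option is to open the subscription before publishing, so that a subscriber exists when the message reaches the topic (same placeholder connection settings as in the question):

import os
import json
from stompest.config import StompConfig
from stompest.sync import Stomp

CONFIG = StompConfig(uri=os.environ['MQ_URL'],
                     login=os.environ['MQ_UID'],
                     passcode=os.environ['MQ_DWP'],
                     version="1.2")
topic = '/topic/SAMPLE.TOPIC'

client = Stomp(CONFIG)
client.connect(heartBeats=(0, 10000))
client.subscribe(topic, {"ack": "client", "id": '0'})        # subscribe first

client.send(topic, json.dumps({'refresh': True}).encode())   # then publish

frame = client.receiveFrame()                                 # now a frame arrives
print(f"Frame received from MQ: {frame.info()}")
client.ack(frame)
client.disconnect()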
C# .NET Core 2.2, Azure Service Bus 3.4.0
I have 3 queues in Azure Service Bus with the same properties. When I send messages to these queues, the messages sent to one of the queues always end up in its dead-letter queue, while the other 2 queues receive them as active messages.
I have tried playing with the properties (increasing the TTL, the maximum delivery count, etc.). The properties of all 3 queues are the same; the only difference is the name of the queues.
I have used this tutorial - https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-dotnet-get-started-with-queues
queue properties image
static async Task SendMessagesAsync(int numberOfMessagesToSend)
{
    try
    {
        for (var i = 0; i < numberOfMessagesToSend; i++)
        {
            // Create a new message to send to the queue.
            string messageBody = $"Message {i}";
            var message = new Message(Encoding.UTF8.GetBytes(messageBody));
            Console.WriteLine($"Sending message: {messageBody}");

            // Send the message to the queue.
            await queueClient.SendAsync(message);
        }
    }
    catch (Exception exception)
    {
        Console.WriteLine($"{DateTime.Now} :: Exception: {exception.Message}");
    }
}
How do I prevent messages from going to Dead Letter Queue? Why does it happen with only 1 queue, not the other 2?
When messages are dead-lettered, user properties describing the reason are added to them. Check those properties to see the reason and troubleshoot accordingly. Specifically, check the DeadLetterReason and DeadLetterErrorDescription custom properties.
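If it helps, this is roughly how those properties can be read off the DLQ, sketched with the azure-servicebus Python SDK (the question uses the .NET client; connection string and queue name are placeholders):

from azure.servicebus import ServiceBusClient, ServiceBusSubQueue

CONN_STR = "<connection string>"  # placeholder
QUEUE = "my-queue"                # placeholder

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    receiver = client.get_queue_receiver(queue_name=QUEUE, sub_queue=ServiceBusSubQueue.DEAD_LETTER)
    with receiver:
        # Peek leaves the messages in the DLQ untouched.
        for msg in receiver.peek_messages(max_message_count=10):
            print(msg.message_id, msg.dead_letter_reason, msg.dead_letter_error_description)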
The common reasons for a message to be dead-lettered are
Maximum transfer hop count exceeded
Session Id Is Null
TTLExpiredException
HeaderSizeExceeded
The messages might also have been dead-lettered due to errors while receiving them from the queue. As Sean Feldman mentioned, looking into the DeadLetterReason and DeadLetterErrorDescription properties will help you diagnose the error reason clearly.
Also try increasing or setting the time to live of the message you send if the DeadLetterReason is TTLExpiredException, because if you set the time to live of the message to a lower value it overrides the time to live property of the queue.
Also check whether the queue where the messages are getting dead-lettered is a session-enabled queue and, if so, whether the messages sent have the SessionId value set.
Without seeing your app / messages it's hard to help, but probably there's an error in the application that is trying to consume the message. As the message could not be completed, it goes to the dead-letter queue.
Log the messages from this particular queue and see if any required properties are missing. Sometimes you're trying to deserialize to an incompatible type.
The purpose of the dead-letter queue is to hold messages that cannot be delivered to any receiver, or messages that could not be processed.
I have a topic subscription which sends some messages to its dead-letter queue. I am creating another WebJob which listens with a ServiceBusTrigger on the dead-letter queue. The purpose is to resubmit the dead-letter messages to the original queue to be re-processed.
When a new message arrives on the dead-letter queue, I clone the message and send it back to the original topic subscription to re-process it.
I expect the cloned message to be sent to the original queue and stay there until it is processed, but it turns out the message is completed as soon as originalTopicClient.Send(cloneMessage) is called.
Am I missing anything?
public void ProcessQueueMessage([ServiceBusTrigger(topicName, dlqSubscription)] string message, TextWriter log)
{
    MessagingFactory factory = MessagingFactory.Create(ServiceURI, tokenProvider);
    string deadLetterQueuePath = SubscriptionClient.FormatDeadLetterPath(topicName, SubscriptionName);
    SubscriptionClient dlqSubscriptionClient = factory.CreateSubscriptionClient(topicName, "mytopicsubscription/$DeadLetterQueue");
    QueueClient deadletterQueueClient = factory.CreateQueueClient(deadLetterQueuePath);
    BrokeredMessage cloneMessage;
    Microsoft.ServiceBus.Messaging.TopicClient originalTopicClient = Microsoft.ServiceBus.Messaging.TopicClient.CreateFromConnectionString(serviceBusEndpoint, topicName);
    BrokeredMessage dlqMessage;

    if ((dlqMessage = dlqSubscriptionClient.Receive()) != null)
    {
        cloneMessage = dlqMessage.Clone();
        cloneMessage.Properties.Remove("DeadLetterReason");
        cloneMessage.Properties.Remove("DeadLetterErrorDescription");
        originalTopicClient.Send(cloneMessage); // resend the cloned message to the original queue
        dlqMessage.Complete();                  // dead-letter queue message completed
    }
}
I faced a similar issue, and after quite a lot of research I found that it works fine in 3 ways.
The first is to receive the message in ReceiveAndDelete mode. This deletes the message from the DLQ, so we can safely send it to the main queue without issues. The second way is to receive the message in PeekLock mode, Complete the message, and then Send it. This is no different from the first method except that we are completing rather than the Service Bus client doing it. The third way is to create a new BrokeredMessage, copy the necessary payload from the received message, and send it as an entirely new message to the main queue.
While the reason for the Send not working in your method is still unknown, I see that Clone and Send work fine without Complete, so it seems the clone is still referring to the original message.
I have a specific requirement to swallow some unwanted message types on the reply channel of a TcpOutboundGateway and continue waiting until a valid message arrives.
I send a message on the output channel of the TcpOutboundGateway and get an acknowledgment on the reply channel. However, there is a chance that I get an invalid acknowledgment message for the sent message. In that case I should ignore the invalid message received on the reply channel and keep waiting for a valid message to arrive.
How to handle this?
The AbstractConnectionFactory can be supplied with an interceptor factory chain via:
public void setInterceptorFactoryChain(TcpConnectionInterceptorFactoryChain interceptorFactoryChain) {
The TcpConnectionInterceptorFactoryChain should be supplied with a custom TcpConnectionInterceptorFactory that produces a custom TcpConnectionInterceptorSupport. In its overridden public boolean onMessage(Message<?> message) you can intercept the unwanted message and not pass it on to super.onMessage().
See more info in the Reference Manual.