I have an Azure Service Bus topic back2gq. I've configured EnableDeadLetteringOnMessageExpiration and forwarding to a queue called dlq. The messages on dlq are consumed by a monitoring service. My intention was to detect when a client has stopped processing messages (due to crash or disconnect).
My messages have a TTL of four seconds. The default TTL for the subscription is ten seconds. However, the messages don't get forwarded to the dlq after ten seconds.
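For reference, the setup described above corresponds roughly to the following sketch using the older Microsoft.ServiceBus SDK; the subscription name "monitor" and the connection string are placeholders I've assumed, while the TTL and dead-letter settings are the values mentioned above:

    using System;
    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;

    var connectionString = "<namespace connection string>"; // placeholder
    var ns = NamespaceManager.CreateFromConnectionString(connectionString);

    // Subscription with a default TTL of 10 seconds; expired messages are
    // dead-lettered and the dead-letter queue is forwarded to "dlq".
    ns.CreateSubscription(new SubscriptionDescription("back2gq", "monitor")
    {
        DefaultMessageTimeToLive = TimeSpan.FromSeconds(10),
        EnableDeadLetteringOnMessageExpiration = true,
        ForwardDeadLetteredMessagesTo = "dlq"
    });

    // Individual messages carry the shorter four-second TTL.
    var message = new BrokeredMessage("heartbeat") { TimeToLive = TimeSpan.FromSeconds(4) };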
I've used the Azure Service Bus Explorer to see what is going on:
The messages start piling up if there are no clients actively listening on the topic. All messages stay marked as 'active' and don't expire. After approximately 120 seconds all messages are flushed to the dlq in one go...
I had expected to see an incremental increase either in the DeadLetter count of back2gq or in the message count of dlq.
The Service Bus documentation states (emphasis mine):
The expiration for any individual message can be controlled by setting the TimeToLive system property, which specifies a relative duration. The expiration becomes an absolute instant when the message is enqueued into the entity. At that time, the ExpiresAtUtc property takes on the value (EnqueuedTimeUtc + TimeToLive). The time-to-live (TTL) setting on a brokered message is not enforced when there are no clients actively listening.
In the next paragraph it goes on to say:
All messages sent into a queue or topic are subject to a default expiration that is set at the entity level with the defaultMessageTimeToLive property and which can also be set in the portal during creation and adjusted later.
I interpret this to mean that messages should still expire after defaultMessageTimeToLive is exceeded, even if there aren't any active clients. Is the observed behavior correct? Did I misunderstand the docs? I'm on the Standard plan; is this perhaps a Premium feature? ;)
Expired messages will eventually be moved to the dead-letter queue (and subsequently forwarded, if configured accordingly), but you cannot depend on exactly when that happens.
The chapter about message browsing mentions (emphasis mine):
Consumed and expired messages are cleaned up by an asynchronous "garbage collection" run and not necessarily exactly when messages expire, and therefore Peek may indeed return messages that have already expired and will be removed or dead-lettered when a receive operation is next invoked on the queue or subscription.
This periodic cleanup behavior matches my observation.
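In line with that quoted passage, a receive call on the subscription, even one that comes back empty, prompts the broker to evaluate expired messages instead of waiting for the periodic cleanup. A minimal sketch with the older Microsoft.ServiceBus SDK (the subscription name and connection string are the same placeholders as in the earlier sketch):

    using System;
    using Microsoft.ServiceBus.Messaging;

    var connectionString = "<namespace connection string>"; // placeholder
    var client = SubscriptionClient.CreateFromConnectionString(connectionString, "back2gq", "monitor");

    // Returns null if nothing is active, but per the documentation quoted above
    // it still triggers expiration handling on this subscription.
    var received = client.Receive(TimeSpan.FromSeconds(5));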
Related
We have a use case where we need to schedule jobs, which are sent as messages from an Azure Web API service to an Azure Service Bus queue. As we need to schedule them for a later point in time, one solution is to use scheduled delivery via ScheduledEnqueueTimeUtc.
What I understand is that the message only gets enqueued once the specified time arrives. My concern is what happens if the Web API crashes or undergoes an upgrade in the meantime.
1. Will the messages be lost, as they are not enqueued yet?
2. Where are these messages stored in the intermediate time?
A second solution is to use the visibilityTimeOut of a storage queue, where messages are enqueued immediately and will not be impacted by the Web API.
From a stability and scalability perspective, which would be the better option?
The message is sent to Service Bus right away, but it only becomes enqueued (available to receive) according to the schedule. So, to answer your queries:
1. Nope.
2. In the queue, just not available to receive.
visibilityTimeOut is for storage queues. Refer to the comparison doc when making the decision.
Note that while you cannot receive scheduled messages, you can peek them.
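A minimal sketch of the scheduled-delivery option with the older Microsoft.ServiceBus SDK (the queue name and connection string are placeholders):

    using System;
    using Microsoft.ServiceBus.Messaging;

    var connectionString = "<namespace connection string>"; // placeholder
    var client = QueueClient.CreateFromConnectionString(connectionString, "jobs");

    // The message is persisted by Service Bus as soon as Send returns; it just
    // stays in a scheduled state until the enqueue time, so a Web API crash or
    // upgrade after this point does not lose it.
    var message = new BrokeredMessage("run-job-42")
    {
        ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddHours(1)
    };
    client.Send(message);

    // Scheduled messages cannot be received yet, but Peek will show them.
    var scheduled = client.Peek();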
I know that the messages in an event hub expire after a certain period of time, depending on how we configure it, but is there any way we can delete the events received in an event hub through code or through configuration in the Azure portal as soon as we receive them?
Please go through this documentation for sending and receiving messages in Event Hubs:
https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-java-get-started-send
https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-dotnet-framework-getstarted-send
At this time, there is no mechanism to delete all messages. Messages expire automatically beyond their 24-hour retention period. If you care about messages only from the time you subscribe, you can do a one-time subscription with SubscribeRecency of newest (check the Java SDK for the exact value).
Subscriptions are durable: if you disconnect and reconnect, you will see the newest messages only the first time you subscribe, not each time. That is, message delivery will commence from the newest messages upon the first subscription, and subsequent reconnects will resume delivery of messages published since you last connected.
If you want messages deleted once you have received them, you might want to consider Azure Service Bus Queues - they support exactly that.
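To illustrate the newest-only approach above, here is a minimal sketch using the .NET Azure.Messaging.EventHubs consumer (my assumption; the answer above refers to the Java SDK, and the connection string and hub name are placeholders). Starting at EventPosition.Latest skips everything already in the hub, so only events published after the reader connects are delivered:

    using System;
    using System.Linq;
    using System.Threading;
    using Azure.Messaging.EventHubs.Consumer;

    var consumer = new EventHubConsumerClient(
        EventHubConsumerClient.DefaultConsumerGroupName,
        "<event hubs connection string>", // placeholder
        "myhub");                         // placeholder

    var firstPartition = (await consumer.GetPartitionIdsAsync()).First();

    // Read only events that arrive from now on; older events are never surfaced.
    await foreach (var partitionEvent in consumer.ReadEventsFromPartitionAsync(
        firstPartition, EventPosition.Latest, CancellationToken.None))
    {
        Console.WriteLine(partitionEvent.Data.EventBody.ToString());
    }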
Event Hubs provide an immutable append-only log, in other words, events are not supposed to be changed once they are created. Think an EH message as an event at a point in time therefore you cannot go past in time and change an event.
If you need mutable messaging then consider Azure Service Bus.
I am using Azure Service Bus to implement message communication between separate bounded contexts. I am curious about what techniques people use to ensure that domain events raised in one bc are guaranteed to be received by another consuming bc.
For example, say the "orders" bc raises an "orderPlaced" event, how can I ensure that this event is received by a "shipping" bc? I understand that two-phase commit is not advisable in the cloud, so what is the alternative? How do I mitigate against the order being placed but the message failing to be sent to the service bus in the event of a network failure?
Thoughts would be welcomed. Thanks.
If you send a BrokeredMessage to a Service Bus Queue and receive an acknowledgement, the message has been successfully stored in the queue. You don't have to worry about the message dying in transit due to a network error after you've been told it is persisted.
What you can worry about is a Service Bus Queue falling offline for a period of time and being unavailable. During an outage, your orderPlaced message wouldn't be able to get into the queue in the first place, and your shipping logic wouldn't be able to receive orders that are already persisted in your queue.
Note that Service Bus Queue outages are transient and the Queue recovers and returns to normal service. At that time, your shipping app could drain the queue of existing messages, and your ordering app could once again insert orderPlaced messages. I don't actually recall the last time I've seen one of my Service Bus Queues go down - it's a rare event.
If you are super-concerned about never ever ever EVER dropping a message, look at paired namespaces. Basically, this allows for failover to standby queues so that you can insert messages while your primary is down. Automatic detection checks to see when your primary queue comes back online. And a siphon process sucks messages that were inserted into the failover queue during the outage back into the primary once the primary comes back online.
Edit: When sending, there is still the chance that even though you had a valid Service Bus Queue connection in your QueueClient or MessagingFactory, the underlying Service Bus Queue just went down like a glass-jawed prizefighter. The vast majority of the time, these errors are transient. To handle them, set the RetryPolicy property of your MessagingFactory or QueueClient. Off the top of my head, I think that the only policy currently available is the RetryExponential policy. This will perform a back-off that will retry sending the message until the specified number of attempts are exhausted. This is the easy-peasy way to handle transient errors that pop up in your Service Bus Queue connection.
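As a rough sketch of that retry setup with the older Microsoft.ServiceBus SDK (the queue name, back-off values, and connection string are placeholders):

    using System;
    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;

    var connectionString = "<namespace connection string>"; // placeholder
    var factory = MessagingFactory.CreateFromConnectionString(connectionString);

    // Retry transient send failures with exponential back-off:
    // 1 second minimum, 30 seconds maximum, up to 5 attempts.
    factory.RetryPolicy = new RetryExponential(
        TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(30), 5);

    var client = factory.CreateQueueClient("orders");
    client.Send(new BrokeredMessage("orderPlaced"));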
We are trying to use Azure topics/subscriptions to distribute changes in one domain to other services that need to update their local cache. So we have one publisher sending a message and a bunch of listeners, not knowing of each other, listening on this topic.
I might have misunderstood the idea of the Azure TopicDescription.DefaultMessageTimeToLive, but I thought it indicated that as long as the message is still within this timeout, it will be delivered, regardless of whether the subscriber is "online" at the time of publishing.
But this does not seem to be the case?
What I want to accomplish is that if I have a DefaultMessageTimeToLive set to 10 minutes, all subscribers are guaranteed to get all published messages if they have a downtime lower than 10 minutes.
When I try it, I do not receive messages unless I am listening at the time of publishing. (Added remark: Each receiving queue has its own unique name)
Have I got it wrong or is there a configuration I missed?
If you want a topic available to N subscribers (listeners), then you need N subscriptions. Subscriptions are "virtual" FIFO queues. If a subscription has more than one listener, they "compete" for the next message. This pattern is often termed the “competing consumer” pattern. Read more at Service Bus Queues, Topics, and Subscriptions and How to Use Service Bus Topics/Subscriptions.
The TopicDescription.DefaultMessageTimeToLive just defines how long a message is retained before it expires; whether an expired message is deleted or moved to the dead-letter queue depends on the subscription's EnableDeadLetteringOnMessageExpiration setting.
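As a sketch of the subscription-per-consumer setup (older Microsoft.ServiceBus SDK; the topic name, subscription name, and connection string are placeholders): each consuming service owns its own named, durable subscription, so messages published while that service is offline wait in its subscription until they expire per the TTL.

    using System;
    using Microsoft.ServiceBus;
    using Microsoft.ServiceBus.Messaging;

    var connectionString = "<namespace connection string>"; // placeholder
    var ns = NamespaceManager.CreateFromConnectionString(connectionString);

    // Each consuming service gets its own named, durable subscription.
    if (!ns.SubscriptionExists("cache-updates", "inventory-service"))
    {
        ns.CreateSubscription(new SubscriptionDescription("cache-updates", "inventory-service")
        {
            DefaultMessageTimeToLive = TimeSpan.FromMinutes(10)
        });
    }

    var client = SubscriptionClient.CreateFromConnectionString(
        connectionString, "cache-updates", "inventory-service");

    // Messages are auto-completed by default after the callback returns.
    client.OnMessage(message => Console.WriteLine(message.MessageId));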
You can subscribe to asynchronous updates from Azure topics and queues by using SubscriptionClient/QueueClient's .OnMessage call which will presumably create a separate thread polling the topic/queue with default settings and calling a defined callback if it receives anything.
The Azure website says that receiving a message is a billable action, which is understandable. However, it isn't clear whether each of those poll requests is considered billable even when it does not return anything, i.e. when the queue in question has no pending messages.
Based on the Azure Service Bus Pricing FAQ - the answer to your question is yes
In general, management operations and "control messages," such as completes and deferrals, are not counted as billable messages. There are two exceptions:
Null messages delivered by the Service Bus in response to requests against an empty queue, subscription, or message buffer, are also billable. Thus, applications that poll against Service Bus entities will effectively be charged one message per poll.
Setting and getting state on a MessageSession will also result in billable messages, using the same message size-based calculation described above.
Given the price is $0.01 per 10,000 messages, I don't think you should worry too much about that.
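If the per-poll charge does matter for you, one mitigation (a sketch of my own, not something from the FAQ; queue name and connection string are placeholders) is to use long-polling receives, so an idle queue produces far fewer empty, billable responses:

    using System;
    using Microsoft.ServiceBus.Messaging;

    var connectionString = "<namespace connection string>"; // placeholder
    var client = QueueClient.CreateFromConnectionString(connectionString, "myqueue");

    // A single Receive call with a long server-side wait keeps the request
    // pending for up to 60 seconds; if nothing arrives it returns null, which
    // per the FAQ counts as one billable "null message" instead of many.
    var message = client.Receive(TimeSpan.FromSeconds(60));
    if (message != null)
    {
        // process the message, then settle it
        message.Complete();
    }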