I want to use the auto-forwarding feature of Azure Service Bus. I have a topic called "trip" that has a subscription called "test".
I have enabled auto-forwarding on the subscription and set it to forward messages to another topic called "trip_elaborated". This is working fine, but it does not wait for the message to complete processing before auto-forwarding it to the other topic.
E.g. the "test" subscription takes 30 seconds to process a message, and the message is forwarded to the "trip_elaborated" topic before that processing completes. I want this operation to happen synchronously.
Is there any configuration needed? Or any other way to achieve this kind of scenario?
I would prefer to manage this using Service Bus Explorer (without doing it explicitly in the consumer code).
When auto-forwarding is enabled on an entity, messages are forwarded automatically and cannot be processed from the entity they were originally sent to. If you want to process the message and then forward it synchronously, you need to do it in your processor. Azure Service Bus forwards a message from the subscription straight to the destination the moment the message arriving at the topic meets the filter criteria.
To achieve processing followed by forwarding, you can process the incoming message in a transactional manner, which Azure Service Bus supports. See the documentation for more details.
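If you can handle it in the consumer after all, here is a minimal sketch of that transactional receive-process-forward flow, assuming the Azure.Messaging.ServiceBus .NET SDK (the connection string is a placeholder; the entity names are taken from the question):

```csharp
using System.Transactions;
using Azure.Messaging.ServiceBus;

// Cross-entity transactions must be enabled on the client.
var options = new ServiceBusClientOptions { EnableCrossEntityTransactions = true };
await using var client = new ServiceBusClient("<connection-string>", options);

ServiceBusReceiver receiver = client.CreateReceiver("trip", "test");
ServiceBusSender sender = client.CreateSender("trip_elaborated");

ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync();

// Do the (30-second) processing here, before anything is forwarded.

// Complete and forward atomically: either both happen or neither does.
using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    await receiver.CompleteMessageAsync(message);
    await sender.SendMessageAsync(new ServiceBusMessage(message.Body));
    scope.Complete();
}
```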
In case you can tolerate processing and forwarding happening in parallel, you'd have two subscriptions: one for processing and another solely for auto-forwarding.
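A sketch of that two-subscription layout, using the Azure.Messaging.ServiceBus.Administration API (the "forward-only" subscription name is made up; the connection string is a placeholder):

```csharp
using Azure.Messaging.ServiceBus.Administration;

var admin = new ServiceBusAdministrationClient("<connection-string>");

// Subscription the consumer actually processes from.
await admin.CreateSubscriptionAsync("trip", "test");

// A second subscription that does nothing but auto-forward its copy of each message.
await admin.CreateSubscriptionAsync(new CreateSubscriptionOptions("trip", "forward-only")
{
    ForwardTo = "trip_elaborated"
});
```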
I did not find much in the way of troubleshooting a lost-events scenario in Azure Event Grid.
Hence I am asking a question in relation to the following scenario:
Our code publishes the events to the domain.
The events are delivered to the configured web hook in the subscription.
This works for a while.
The consumer (who owns the webhook endpoint) complains that he is not receiving some events, though most are coming through.
We look in the configured dead-letter queue and find no events there. It has been more than a day, so all retries have already been exhausted.
Hence we assume that all events are being delivered, because there are no failed-delivery events in the metrics.
We also make sure that we did indeed submit these mysterious events to the grid.
But the consumer insists there is a problem and demonstrates that nothing is wrong on his side.
Now we need to figure out whether some of these events are being swallowed by Event Grid.
How do I go about troubleshooting this scenario?
The current version of AEG is not integrated with the Diagnostic settings feature, which would otherwise help a great deal with streaming metrics and logs.
For your scenario, which is based on Event Domains (still in public preview, see limits), the Azure Monitor REST API can help: it lets you see all metrics of your specific Event Domain.
The valid metrics are:
PublishSuccessCount, PublishFailCount, PublishSuccessLatencyInMs, MatchedEventCount, DeliveryAttemptFailCount, DeliverySuccessCount, DestinationProcessingDurationInMs, DroppedEventCount, DeadLetteredCount
The following example is a REST GET request that obtains all metric values within your event domain for a specific timespan and interval:
https://management.azure.com/subscriptions/{mySubId}/resourceGroups/{myRG}/providers/Microsoft.EventGrid/domains/{myDomain}/providers/Microsoft.Insights/metrics?api-version=2018-01-01&interval=PT1H&aggregation=count,total&timespan=2019-02-06T07:58:12Z/2019-02-07T08:58:12Z&metricnames=PublishSuccessCount,PublishFailCount,PublishSuccessLatencyInMs,MatchedEventCount,DeliveryAttemptFailCount,DeliverySuccessCount,DestinationProcessingDurationInMs,DroppedEventCount,DeadLetteredCount
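A sketch of issuing that request from code, assuming the Azure.Identity package for the management-plane token (the IDs in the URL are placeholders, and the metric list is shortened for readability):

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using Azure.Core;
using Azure.Identity;

// Acquire a management-plane token (works with Azure CLI login, managed identity, etc.).
var credential = new DefaultAzureCredential();
AccessToken token = await credential.GetTokenAsync(
    new TokenRequestContext(new[] { "https://management.azure.com/.default" }));

using var http = new HttpClient();
http.DefaultRequestHeaders.Authorization =
    new AuthenticationHeaderValue("Bearer", token.Token);

// Substitute your own subscription id, resource group, and domain name.
string url =
    "https://management.azure.com/subscriptions/<mySubId>/resourceGroups/<myRG>" +
    "/providers/Microsoft.EventGrid/domains/<myDomain>" +
    "/providers/Microsoft.Insights/metrics?api-version=2018-01-01" +
    "&interval=PT1H&aggregation=count,total" +
    "&timespan=2019-02-06T07:58:12Z/2019-02-07T08:58:12Z" +
    "&metricnames=PublishSuccessCount,DeliverySuccessCount,DroppedEventCount,DeadLetteredCount";

Console.WriteLine(await http.GetStringAsync(url));
```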
Based on the response values, you can see metrics of AEG's behavior on the publisher side and of the event delivery to the subscriber. For your production version, I recommend using a polling technique to obtain all metrics from AEG and push them to an Event Hub for stream analysis, alerting, etc. Based on the query parameters (such as timespan, interval, etc.), it can be close to real time. Once Diagnostic settings are supported by AEG, this polling and publishing of metrics becomes obsolete, and the analyzing stream job can continue with only a small modification.
The other point is to extend your eventing model with an auditing part. I recommend the following:
Add a domain-scope subscription to capture all events in the event domain and push them to an Event Hub for streaming purposes. Note that every event published within that event domain should then appear in this published-stream pipeline.
Add a storage subscription for dead-letter messages and push those to the same Event Hub for streaming purposes.
(optional) Add the Diagnostic settings (some metrics) of the dead-letter storage to the same Event Hub for streaming purposes. Note that a dead-letter message is dropped after 4 hours of failed attempts to store it in the blob container. There is no log message for that failed process, only a metric counter.
On the consumer side, I recommend that each subscriber create a log message (aeg headers + event message) for auditing and troubleshooting purposes. It should be stored in a blob container or locally and then uploaded, etc. The point is that this reference can be very useful for the analyzing stream job to quickly figure out where the problem is.
In addition, your publisher should periodically (for instance, once per hour) probe the event domain endpoint by sending a probe event message to a dedicated probe topic. The event subscription for that probe topic should be configured with a dead-lettering option, and its subscriber webhook handler should always fail with HttpStatusCode.BadRequest, so that no retrying takes place. Note that there is a 300-second delay before a dead-lettered message is stored in the storage; in other words, the dead-lettered message should be in the stream pipeline within roughly five minutes of the probe event. This probe scenario exercises AEG functionality from both the publishing and the delivery points of view.
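A minimal ASP.NET Core sketch of that subscriber-side audit log plus the always-failing probe handler; the route paths are made up, and the Event Grid subscription-validation handshake is omitted for brevity:

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Audit every delivery: log the aeg-* headers plus the raw event payload.
app.MapPost("/events", async (HttpRequest request) =>
{
    string body = await new StreamReader(request.Body).ReadToEndAsync();
    var aegHeaders = request.Headers
        .Where(h => h.Key.StartsWith("aeg-", StringComparison.OrdinalIgnoreCase))
        .Select(h => $"{h.Key}={h.Value}");

    // In production, persist this to a blob container instead of the log.
    app.Logger.LogInformation("{Headers} | {Body}", string.Join(";", aegHeaders), body);
    return Results.Ok();
});

// Probe endpoint: always fail with 400 so Event Grid dead-letters the probe without retrying.
app.MapPost("/probe", () => Results.BadRequest("probe event forced to dead-letter"));

app.Run();
```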
In Azure, we have two separate messaging technologies, and it's not very well documented when to use which. While Event Grid is really cool, I have not come across guidance on when to use Event Grid (scenarios) vs. a Storage/Service Bus queue. Can someone help?
E.g. if I have the following scenario:
The status of a flag changes, and based on that I want to trigger an algorithm that would do recalculations, a few inserts/updates, etc. in the database.
For implementing this, I can use either Event Grid or a Storage queue. How do we figure out what to use in such a scenario? I was looking for some kind of guidance.
Basically, Azure Event Grid handles events and Azure Service Bus handles messages. A message is raw data produced by a service to be consumed or stored. Events are also messages (lightweight ones), but they don't generally convey a publisher's intent other than to inform.
1) If the purpose is just to store the information, Service Bus can be used.
2) If the information received is used to trigger another service, Azure Event Grid can be used.
Find more info here:
https://learn.microsoft.com/en-us/azure/event-grid/compare-messaging-services
https://azure.microsoft.com/en-us/blog/events-data-points-and-messages-choosing-the-right-azure-messaging-service-for-your-data/
Events are like notifications from a service to inform the world that something happened in the publisher's domain (similar to an email notification). There is no expectation from the publisher that any action will be taken. A message is a command you send to a specific receiver with the expectation that it will be processed (like an asynchronous POST request).
Events work in a pub/sub pattern, and multiple subscribers can be configured for the same events. The service that needs to react to an event gets notified by Event Grid when the event occurs (an HTTP call from Event Grid to the receiver). The event remains in Event Grid until deletion (cleanup), and there is no guarantee that the original order is kept (no FIFO).
On the other hand, messages are added to a queue and deleted once the "message processor" is done with them. Messages in the queue keep their original order (FIFO), and the message processor has to pull messages from the queue.
In your scenario, you could use a combination of both. Service A sends a "StatusChanged" event; you then configure a subscription to that event which puts a message on a queue, and have your logic process that message. This ends up as a fully async communication pattern, which is ideal for scenarios where your processor is down or too busy: incoming messages simply accumulate in the queue and are eventually processed once the service is back up and running, without affecting the original service that sent the "StatusChanged" event.
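A sketch of the queue-consuming half of that pattern, assuming an Azure Storage queue (Azure.Storage.Queues SDK) with a made-up queue name that the Event Grid subscription delivers into:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Storage.Queues;
using Azure.Storage.Queues.Models;

var queue = new QueueClient("<storage-connection-string>", "status-changed");

while (true)
{
    // Pull model: anything that arrived while this processor was down is still here.
    QueueMessage[] messages = await queue.ReceiveMessagesAsync(maxMessages: 10);

    foreach (QueueMessage message in messages)
    {
        // Run the recalculations / inserts / updates here.
        Console.WriteLine($"Processing: {message.Body}");

        // Delete only after successful processing, so a crash means a retry, not a loss.
        await queue.DeleteMessageAsync(message.MessageId, message.PopReceipt);
    }

    await Task.Delay(TimeSpan.FromSeconds(5));
}
```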
I have the following requirement:
A message is published to the Topic/Queue.
Multiple consumers are subscribed to the Topic/Queue. Our requirement is that only one consumer should receive each message; no other consumer can get the same message.
I feel a queue would be the best fit, but our architect has advised me to check whether we can achieve this with Topics.
So could anybody please let me know whether we can achieve it through Topics, along with the pros and constraints?
Thanks.
Azure Service Bus Queue is a single message queue. You send it a message, and the message receiver gets the message and processes it accordingly. Each message is only handled once.
Azure Service Bus Topic is a more robust message queue than a Queue. With Topics, multiple Subscriptions can be configured to catch messages based on a Filter. If multiple Subscriptions have a Filter that matches an incoming message, each of those Subscriptions gets a copy of the message. With Topics it's up to you to configure the Subscription Filters according to your project's needs.
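A sketch of configuring such a filter with the Azure.Messaging.ServiceBus.Administration API (the topic, subscription, rule name, and the property being filtered on are all made up):

```csharp
using Azure.Messaging.ServiceBus.Administration;

var admin = new ServiceBusAdministrationClient("<connection-string>");

// Only messages whose application property "eventType" equals "order" land here.
await admin.CreateSubscriptionAsync(
    new CreateSubscriptionOptions("mytopic", "orders"),
    new CreateRuleOptions("orders-only", new SqlRuleFilter("eventType = 'order'")));
```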
If you know a message only needs to be handled once in your system, and the message queue is used by a single message-receiver application (whether one instance or multiple hosted instances), then Azure Service Bus Queue is likely the tool for the job.
We are trying to use Azure Topics/Subscriptions to distribute changes in one domain to other services that need to update their local caches. So we have one publisher sending a message and a bunch of listeners, not knowing of each other, subscribed to this topic.
I might have misunderstood the idea of the Azure TopicDescription.DefaultMessageTimeToLive, but I thought it indicated that as long as a message is still within this timeout, it will be delivered, regardless of whether the subscriber is "online" at the time of publishing.
But this does not seem to be the case?
What I want to accomplish is this: if DefaultMessageTimeToLive is set to 10 minutes, all subscribers are guaranteed to get all published messages as long as their downtime is less than 10 minutes.
When I try it, I do not receive messages unless I am listening at the time of publishing. (Added remark: each receiving queue has its own unique name.)
Have I got it wrong, or is there a configuration I missed?
If you want a topic available to N subscribers (listeners), then you need N subscriptions. Subscriptions are "virtual" FIFO queues; if a single subscription has more than one listener, those listeners "compete" for the next message. This pattern is often termed the "competing consumer" pattern. Read more at Service Bus Queues, Topics, and Subscriptions and How to Use Service Bus Topics/Subscriptions.
TopicDescription.DefaultMessageTimeToLive just defines how long a message is available before it expires: an expired message is removed from the subscription (or moved to the dead-letter queue, if dead-lettering on message expiration is enabled).
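So for the caching scenario above, each service needs its own durable subscription, created once up front rather than at listen time. A sketch, assuming the Azure.Messaging.ServiceBus.Administration API and made-up topic/subscription names:

```csharp
using Azure.Messaging.ServiceBus.Administration;

var admin = new ServiceBusAdministrationClient("<connection-string>");

// One durable subscription per cache-holding service.
foreach (string name in new[] { "cache-service-a", "cache-service-b" })
{
    bool exists = await admin.SubscriptionExistsAsync("domain-changes", name);
    if (!exists)
    {
        await admin.CreateSubscriptionAsync("domain-changes", name);
    }
}
```

Each subscription then receives its own copy of every message published while it exists (until the TTL expires), whether or not its service is listening at that moment.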
I'm looking to use Azure Service Bus with topics, but I need to handle the scenario where a subscriber might not be listening at the moment a message it's interested in arrives (e.g. the server is being rebooted). This is the typical durable subscriber pattern as described here: http://www.eaipatterns.com/DurableSubscription.html.
What I can't work out is how to apply this with Azure Service Bus, and I can't seem to find any examples or discussion of it in the documentation. Is this something Azure Service Bus provides, or should I start looking at alternatives?
This is built straight into Service Bus. As long as a subscription has been created, it is durable. You create a topic and then create one or more subscriptions. One or more consumers then listen on a subscription while they are active. If they go inactive, such as when the server is rebooted, the subscription stores the messages until a consumer comes back up and asks for them.
Service Bus would only be non-durable if you were creating and destroying subscriptions on the fly as each consumer became active or inactive. If there are no subscriptions, messages sent to a topic are lost. Once you create a subscription, any message sent to the topic (if it passes any applied filters) will be available on that subscription, regardless of whether any active consumers are using it. Subscriptions exist until you remove them or, if you have the idle-removal feature turned on, until they exceed the idle deletion time.
You can verify this with a simple console application, or by using LinqPad, with code that does the following (a sketch follows the list):
Create a topic.
Create a subscription on that topic (no filters)
Send a few messages to the topic.
In a different script or console app, create a MessageReceiver for that subscription and pull down the messages.
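A sketch of those steps, assuming the newer Azure.Messaging.ServiceBus SDK (MessageReceiver belongs to the older client library) and made-up entity names:

```csharp
using System;
using Azure.Messaging.ServiceBus;
using Azure.Messaging.ServiceBus.Administration;

string connectionString = "<connection-string>";

// Steps 1-2: create the topic and an unfiltered subscription.
var admin = new ServiceBusAdministrationClient(connectionString);
bool topicExists = await admin.TopicExistsAsync("demo-topic");
if (!topicExists)
    await admin.CreateTopicAsync("demo-topic");
bool subExists = await admin.SubscriptionExistsAsync("demo-topic", "demo-sub");
if (!subExists)
    await admin.CreateSubscriptionAsync("demo-topic", "demo-sub");

// Step 3: send a few messages while nothing is listening.
await using var client = new ServiceBusClient(connectionString);
ServiceBusSender sender = client.CreateSender("demo-topic");
for (int i = 0; i < 3; i++)
    await sender.SendMessageAsync(new ServiceBusMessage($"message {i}"));

// Step 4: later, even from another process, the messages are still there.
ServiceBusReceiver receiver = client.CreateReceiver("demo-topic", "demo-sub");
foreach (ServiceBusReceivedMessage msg in await receiver.ReceiveMessagesAsync(
             maxMessages: 3, maxWaitTime: TimeSpan.FromSeconds(5)))
{
    Console.WriteLine(msg.Body.ToString());
    await receiver.CompleteMessageAsync(msg);
}
```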
The messages within a subscription are durable for the life of that subscription: until they are processed (completed, etc.), forwarded somewhere else, or expired.
I am not sure where you looked for documentation; the following are good reads:
1) http://azure.microsoft.com/en-us/documentation/articles/service-bus-dotnet-how-to-use-topics-subscriptions/
2) http://code.msdn.microsoft.com/windowsazure/Simple-Publish-Subscribe-d406eb03