Diagnosing failures in Azure Event Grid?

I did not find much in the way of troubleshooting lost-event scenarios in Azure Event Grid, so I am asking about the following scenario:
Our code publishes the events to the domain.
The events are delivered to the webhook configured in the subscription.
This works for a while.
The consumer (who owns the webhook endpoint) complains that he is not receiving some events, although most are coming through.
We look in the configured dead-letter queue and find no events there. More than a day has passed, so all retries have already been exhausted.
Hence we assume that all events are being delivered because there are no failed delivery events in the metrics.
We also make sure that we indeed submitted these mysterious events to the grid.
But the consumer insists the problem exists and demonstrates that nothing is wrong on his side.
Now we need to figure out if some of these events are being swallowed by the event grid.
How do I go about troubleshooting this scenario?

The current version of AEG is not integrated with the Diagnostic settings feature, which would help very well with streaming metrics and logs.
For your scenario, which is based on Event Domains (still in public preview, see the limits), the Azure Monitor REST API can help: it lets you see all the metrics of your specific Event Domain.
The valid metrics are:
PublishSuccessCount, PublishFailCount, PublishSuccessLatencyInMs, MatchedEventCount, DeliveryAttemptFailCount, DeliverySuccessCount, DestinationProcessingDurationInMs, DroppedEventCount, DeadLetteredCount
The following example is a REST GET request that obtains all the metric values within your event domain for a specific timespan and interval:
https://management.azure.com/subscriptions/{mySubId}/resourceGroups/{myRG}/providers/Microsoft.EventGrid/domains/{myDomain}/providers/Microsoft.Insights/metrics?api-version=2018-01-01&interval=PT1H&aggregation=count,total&timespan=2019-02-06T07:58:12Z/2019-02-07T08:58:12Z&metricnames=PublishSuccessCount,PublishFailCount,PublishSuccessLatencyInMs,MatchedEventCount,DeliveryAttemptFailCount,DeliverySuccessCount,DestinationProcessingDurationInMs,DroppedEventCount,DeadLetteredCount
Based on the response values, you can see how AEG behaves on the publisher side and how events are delivered to the subscriber. For your production version, I recommend using a polling technique to obtain all the metrics from AEG and push them to an Event Hub for stream analysis, alerting, etc. Based on the query parameters (such as timespan, interval, etc.), this can be close to real time. Once Diagnostic settings are supported by AEG, this polling and publishing of metrics becomes obsolete, and the analyzing stream job can continue with only a small modification.
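A minimal sketch of that polling technique in Python (the subscription id, resource group, domain name, and bearer token are placeholders; it assumes a token for https://management.azure.com obtained via azure-identity or similar):

```python
import requests

# Placeholders: acquire the token e.g. via
#   from azure.identity import DefaultAzureCredential
#   TOKEN = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
TOKEN = "<bearer-token>"
URL = ("https://management.azure.com/subscriptions/mySubId/resourceGroups/myRG"
       "/providers/Microsoft.EventGrid/domains/myDomain"
       "/providers/Microsoft.Insights/metrics")

params = {
    "api-version": "2018-01-01",
    "interval": "PT1H",
    "aggregation": "count,total",
    "timespan": "2019-02-06T07:58:12Z/2019-02-07T08:58:12Z",
    "metricnames": "PublishSuccessCount,PublishFailCount,MatchedEventCount,"
                   "DeliveryAttemptFailCount,DeliverySuccessCount,"
                   "DroppedEventCount,DeadLetteredCount",
}

resp = requests.get(URL, params=params,
                    headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()

# Each metric carries a timeseries of data points for the requested interval.
for metric in resp.json()["value"]:
    name = metric["name"]["value"]
    for series in metric["timeseries"]:
        for point in series["data"]:
            print(name, point.get("timeStamp"), point.get("total"))
```

In production you would run this on a timer and forward each data point to the Event Hub instead of printing it.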
The other point is to extend your eventing model with an auditing part. I recommend the following:
Add a domain-scope subscription to capture all events in the event domain and push them to an Event Hub for streaming purposes. Note that every event published within that event domain should then appear in this stream pipeline.
Add a storage subscription for dead-letter messages and push those to the same Event Hub for streaming purposes.
(optional) Add the Diagnostic settings (some metrics) of the dead-letter storage to the same Event Hub for streaming purposes. Note that a dead-letter message is dropped after 4 hours of failed attempts to store it in the blob container. There is no log message for that failed process, only a metric counter.
On the consumer side, I recommend that each subscriber create a log message (aeg-* headers + event message) for auditing and troubleshooting purposes; a minimal handler sketch follows below. It can be stored in a blob container or locally and then uploaded, etc. The point is that this reference can be very useful for the analyzing stream job to quickly figure out where the problem is.
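As a sketch of that subscriber-side audit log (Flask is an assumption here; any webhook host works the same way), including the Event Grid subscription-validation handshake:

```python
import json
import logging
from flask import Flask, request, jsonify

app = Flask(__name__)
logging.basicConfig(filename="aeg-audit.log", level=logging.INFO)

@app.route("/api/events", methods=["POST"])
def handle_events():
    events = request.get_json()  # Event Grid delivers an array of events
    # Audit record: the aeg-* delivery headers plus the raw event payload.
    aeg_headers = {k: v for k, v in request.headers.items()
                   if k.lower().startswith("aeg-")}
    logging.info(json.dumps({"headers": aeg_headers, "events": events}))

    # Subscription-validation handshake required when the webhook is registered.
    for e in events:
        if e.get("eventType") == "Microsoft.EventGrid.SubscriptionValidationEvent":
            return jsonify({"validationResponse": e["data"]["validationCode"]})
    return "", 200

if __name__ == "__main__":
    app.run(port=8080)
```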
In addition to your eventing model, your publisher should periodically (for instance, once per hour) probe the event domain endpoint by sending a probe event message to a probe topic for test purposes (see the sketch below). The event subscription for that probe topic should be configured with the dead-lettering option. The subscriber webhook handler should always fail with error code = HttpStatusCode.BadRequest, so that no retrying takes place. Note that there is a delay of up to 300 seconds before the dead-letter message is stored in the storage. In other words, within about 5 minutes of the probe event, the dead-lettered message should be in the stream pipeline. This probe scenario tests the functionality of AEG from both the publishing and the delivery points of view.
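A sketch of the probe publisher (the domain endpoint, key, and the "probetopic" name are placeholders). The subscription on the probe topic is configured with dead-lettering, and its webhook handler deliberately returns 400 so the event dead-letters without retries:

```python
import datetime
import json
import uuid
import requests

# Placeholders: your event domain endpoint and access key.
DOMAIN_ENDPOINT = "https://mydomain.westus2-1.eventgrid.azure.net/api/events"
DOMAIN_KEY = "<domain-access-key>"

probe_event = [{
    "id": str(uuid.uuid4()),
    "topic": "probetopic",  # hypothetical domain topic reserved for probes
    "subject": "probe/healthcheck",
    "eventType": "Probe.HealthCheck",
    "eventTime": datetime.datetime.utcnow().isoformat() + "Z",
    "data": {"note": "synthetic probe event"},
    "dataVersion": "1.0",
}]

resp = requests.post(DOMAIN_ENDPOINT,
                     headers={"aeg-sas-key": DOMAIN_KEY,
                              "Content-Type": "application/json"},
                     data=json.dumps(probe_event))
resp.raise_for_status()  # AEG returns HTTP 200 on a successful publish
```

Within roughly 5 minutes of a successful run, the dead-lettered probe event should show up in the stream pipeline; if it does not, either publishing or delivery is broken.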
The solution described above is shown in the following screen snippet:

Related

Azure Service Bus: auto-forwarding does not wait for message completion

I want to use the auto-forwarding feature of Azure Service Bus. I have a topic called "trip" with a subscription called "test".
I have enabled auto-forwarding and set it to forward messages to another topic called "trip_elaborated". This works fine, but it does not wait for the message processing to complete before auto-forwarding to the other topic.
E.g. the "test" subscription takes 30 seconds to process a message, but the message is forwarded to the "trip_elaborated" topic before processing has completed. I want this operation to happen in sync.
Is there any configuration needed? Or any other way to achieve this kind of scenario?
I would prefer to manage this using Service Bus Explorer (without doing it explicitly in the consumer code).
When auto-forwarding is enabled on an entity, messages are forwarded automatically and cannot be processed from the entity they were originally sent to. If you want to process the message and then forward it in a synchronous manner, you need to do it in your processor. Azure Service Bus forwards the message from the subscription straight to the destination the moment a message arriving at the topic matches the filter criteria.
To achieve processing and forwarding, you can process the incoming message in a transactional manner, something Azure Service Bus supports; a minimal sketch follows. See the documentation for more details.
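Here is the process-then-forward pattern in the consumer, with the Python azure-servicebus SDK as an assumption (topic and subscription names taken from the question):

```python
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"  # placeholder

def handle(msg):
    ...  # the ~30-second processing step from the question (hypothetical)

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    receiver = client.get_subscription_receiver(topic_name="trip",
                                                subscription_name="test")
    sender = client.get_topic_sender(topic_name="trip_elaborated")
    with receiver, sender:
        for msg in receiver:  # PEEK_LOCK by default
            handle(msg)
            # Forward only after processing has finished, then settle the
            # original. For processing longer than the lock duration, renew
            # the lock (e.g. with azure.servicebus.AutoLockRenewer).
            sender.send_messages(ServiceBusMessage(str(msg)))
            receiver.complete_message(msg)
```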
In case you can tolerate processing and forwarding happening in parallel, you'd have two subscriptions: one for processing and another solely for auto-forwarding.

EventGrid vs EventHub

I am working on a Service Fabric application and want to publish a few events from this application and subscribe to or process those published events in another application.
I have tried the EventGrid concept and observed that there is a delay between publishing and processing the events. So now I am looking for other alternatives like EventHub or Queues, etc.
If anyone has already used EventGrid, EventHub, Queues, etc., please suggest which one gives more performance when dealing with many events.
Design Approach
We have migrated the tables from SQL Server to Service Fabric. There is a view in SQL Server, and we are planning to implement it as a service in Service Fabric.
The implementation logic is as follows.
Table 1 is implemented as a service, and we publish an event for each CRUD operation to EventGrid/EventHub.
Table 2 is implemented as a service, and we publish an event for each CRUD operation to EventGrid/EventHub.
We have created a view service that listens for events; when any event is sent to EventGrid/EventHub, it performs the required calculations and stores the results in the ViewService (it is a background job).
We are looking for a messaging service which gives more performance.
Have you seen this comparison and this one?
Anyway, can you clarify your requirements in terms of throughput and performance? It depends on a lot of factors including, but not limited to, the message size and the amount of messages.
Having used both Event Grid and Event Hub, I'd say Event Hub works very well for many messages per second, say data streams from IoT devices, but the performance of the downstream processing can be a bottleneck: you have to process events very fast in order to receive new ones. Then there are partitions and consumer groups that can help balance the load and allow different processors for the same data, each with a different view of the data stream (a fast processor for live display of sensor data and a slower one for storing the data for later analysis); see the sketch below.
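As an illustration of the consumer-group idea, a minimal sketch with the Python azure-eventhub SDK (connection string and hub name are placeholders); each client constructed with a different consumer_group reads the same stream independently:

```python
from azure.eventhub import EventHubConsumerClient

CONN_STR = "<event-hub-connection-string>"  # placeholder
HUB_NAME = "telemetry"                      # hypothetical hub name

def on_event(partition_context, event):
    # Fast path: e.g. push the reading to a live dashboard.
    print(partition_context.partition_id, event.body_as_str())

# A second client created with consumer_group="archive" would receive the same
# events independently, e.g. a slower processor storing data for analysis.
client = EventHubConsumerClient.from_connection_string(
    CONN_STR, consumer_group="$Default", eventhub_name=HUB_NAME)
with client:
    client.receive(on_event=on_event, starting_position="-1")  # from start
```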
If you're talking about a few events generated by an application that triggers other apps to start doing some work based on those events Event Grid is a good fit. I haven't experienced much delay in receiving those events.
But bottom line, I think all services (Event Grid, Event Hub, Service Bus etc) support different use cases and that should be your first decision point.
Can you describe your publisher, subscriber, etc. and show your metrics of the Azure Event Grid usage?
You can use the portal screen snippets on the topic (publisher) and subscription (subscriber).
The following screen snippets are from my tester, after a few events were fired manually.
Publisher side:
Subscriber side:
Metrics on the portal:
As you can see, the delivery destination processing time is ~1ms. The latency time on the publisher side (custom topic) is between 2-4ms.
Note that AEG is a PUSH->PUSH-ACK or PUSH->PULL-ACK loosely decoupled Pub/Sub eventing model, in contrast to the Event Hub model, which is based on a PUSH->PULL mechanism; in other words, Event Hub needs to host a listener and receiver for pulling events from a partition.

How to consume events delivered by Azure Event Grid to GCP

Basically, what I understood from a few Azure topics is as below:
Azure Event Hub - where data is received initially and converted into events
Service Bus - acting as a queue
Azure Event Grid - where the events converted in the hub are transferred
so the connection is like below:
Hub -> Service Bus -> Event Grid -> Pub Sub -> Storage
I understood this concept. My problem is that I want data to be pushed from Event Grid to GCP (subscriptions/topics). My questions are:
How can I establish this using PUSH method?
What do I need to develop exactly?
How can I push things from the grid to Pub/Sub topics/subscriptions?
I found this link where data is being published into Event Grid, but I want to push data from Event Grid to GCP. Can anybody explain where I am going wrong, or what exactly I should start with? I am new to this and it is very confusing, so I just need a little bit of guidance here.
I have below doubts:
Is there any direct subscriber option available with the Event Grid listener? I mean, can I directly link my Google storage account with this listener so that whenever an event is triggered, it is pushed directly to my GCP account? (I don't have an Azure account with me right now since an access issue is in progress, so I can't check it; that's why I am asking here.)
Suppose I have 20 columns in my data but I want only 16 columns to be pushed to GCP; is there any customization possible while sending data from Event Grid/Event Hub to Pub/Sub?
If I write custom connector code as per the links provided in the answers below, how can I run it? I mean, where can I deploy those scripts in the cloud so that they are triggered automatically whenever an event occurs?
Can I implement webhooks in this scenario (as an alternative to connectors)? If yes, how can I do it, and on which side do I need to create them?
Also, I read some articles and heard from a few people that they experienced data loss in this entire process. What is the possibility of that here, and how can it be avoided?
Can anybody explain where I am going wrong, or what exactly I should start with?
It's right here:
so the connection is like below:
Hub -> Service Bus -> Event Grid -> Pub Sub -> Storage
Although this might be the case, it sounds very much as if you're looking at one (very) specific scenario where data flows in this exact way.
Azure Event Hub, Azure Service Bus and Azure Event Grid can work together, but can also be used completely separate from each other.
Event Grid
The purpose of Event Grid is to enable Reactive programming. Use this when you want to react to (status) changes.
Event Hubs
Event Hubs facilitate a big data pipeline. Use this when you need telemetry and distributed data streaming.
Service Bus
The purpose of Service bus is to enable High-value enterprise messaging. Use this when you want to do something like Order processing and financial transactions.
In some cases, you use the services side by side to fulfill distinct roles. For example, an ecommerce site can use Service Bus to process the order, Event Hubs to capture site telemetry, and Event Grid to respond to events like an item was shipped.
In other cases, you link them together to form an event and data pipeline. You use Event Grid to respond to events in the other services. For an example of using Event Grid with Event Hubs to migrate data to a data warehouse, see Stream big data into a data warehouse.
Taken from the very interesting and important documentation article Choose between Azure messaging services - Event Grid, Event Hubs, and Service Bus
EDIT
My problem is that I want data to be pushed from Event Grid to GCP (subscriptions/topics). So how can I establish this using the PUSH method?
Possibly the simplest solution is to have an Event Grid Event trigger a webhook (which might run an Azure Function or a Google Cloud Function) which in turn puts the event/message on the GCP Topic.
Publishing messages is quite well documented. There are examples of how to do so with a REST call, the command line, C#, Go, Java, Node.js, PHP, Python, and Ruby. A sketch of the Azure Function approach follows.
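A minimal sketch of the webhook idea as a Python Azure Function with an Event Grid trigger that republishes to Cloud Pub/Sub (the GCP project and topic ids are placeholders; it assumes the google-cloud-pubsub package and GCP credentials are available to the function):

```python
import json

import azure.functions as func
from google.cloud import pubsub_v1

# Hypothetical GCP identifiers.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-gcp-project", "azure-events")

def main(event: func.EventGridEvent):
    # Re-publish the incoming Event Grid event to the Pub/Sub topic. This is
    # also the place to drop any fields/columns you don't want forwarded.
    payload = json.dumps({
        "id": event.id,
        "subject": event.subject,
        "eventType": event.event_type,
        "data": event.get_json(),
    }).encode("utf-8")
    publisher.publish(topic_path, payload).result()  # block until published
```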
EDIT 2
What you need to do is create an Event Grid Subscription to listen to and handle Event Grid Events.
Here's an example screenshot on how to listen for events for a specific Storage Account and call a WebHook whenever such an event occurs:
Pay attention to the "Endpoint Details": that's where you can specify to, for instance, call a webhook every time an event is triggered.
The easiest way to transfer the EventHub-generated events would probably be to create an EventHub event receiver in Node.js (which you mentioned in your comments), as described here, which receives events and publishes them to Cloud Pub/Sub directly, as described in the Cloud Pub/Sub publisher documentation for Node.js.

Can I create monitoring alerts for azure event grid domain topics?

I would like to set up the following alerts for domain topics, firing when:
Delivery Failed Events (at domain) exceed x in y amount of time
Delivery Failed Events (at domain topic 1) exceed x in y amount of time
Delivery Failed Events (at domain topic 2) exceed x in y amount of time
The reason why I want the domain-topic granularity is that the topic 1 customer may be fine while the topic 2 customer is having issues. Say the customer for topic 2 is currently down and in an extended outage period (which may last more than a day). I want to be able to disable the alert for topic 2 only, and re-enable it once the customer for topic 2 is up and running again. Meanwhile, I want all the other topic-level alerts to stay enabled.
I did not see a way to configure the above in the portal. Is it possible (or not) to configure this in any other way at this time? If so, can you please provide directions on how to achieve it?
The AEG provides durable delivery of each event message at least once to each subscriber, based on its subscription. More details can be found in the docs.
In the case when AEG cannot successfully deliver a message after retrying, the dead-lettering feature (configured for each subscriber) can be used for notification and/or analysis via storage eventing, where the dead-letter message is stored.
On the publisher side, the publisher receives a standard HTTP response from the event domain endpoint immediately after posting; see more details in the docs.
The current version of AEG is not integrated with the Diagnostic settings (as is done for Event Hubs, for instance), which would make it possible to push metrics and/or logs to a stream pipeline for analysis.
However, as a workaround, the Azure Monitor REST API can help you.
Using "Lists the metrics values for event domain", we can obtain the topic metrics such as Publish Succeeded, Publish Failed, and Unmatched.
The following is an example of the REST GET request:
https://management.azure.com/subscriptions/{myId}/resourceGroups/{myRG}/providers/Microsoft.EventGrid/domains/{myDomain}/providers/Microsoft.Insights/metrics?api-version=2018-01-01&interval=PT1M&aggregation=none&metricnames=PublishSuccessCount,PublishFailCount,PublishSuccessLatencyInMs,DroppedEventCount
Based on a polling technique, you can push the event domain metric values to a stream pipeline for analyzing, monitoring, alerting, etc., using an Azure Stream Analytics job. Your management requirements (for instance, publisher_topic1 is disabled, etc.) can be fed to the analyzing stream job as reference input; a small polling sketch follows.
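A sketch of such an alert check on top of the metrics REST call (the threshold and window stand in for the x and y from the question; the token and resource ids are placeholders):

```python
import requests

TOKEN = "<bearer-token>"  # placeholder, e.g. acquired via azure-identity
URL = ("https://management.azure.com/subscriptions/myId/resourceGroups/myRG"
       "/providers/Microsoft.EventGrid/domains/myDomain"
       "/providers/Microsoft.Insights/metrics")

THRESHOLD_X = 10   # alert when failed deliveries exceed x ...
WINDOW_Y = "PT1H"  # ... in y amount of time (ISO 8601 duration)

resp = requests.get(URL, params={
    "api-version": "2018-01-01",
    "interval": WINDOW_Y,
    "aggregation": "total",
    "metricnames": "DeliveryAttemptFailCount",
}, headers={"Authorization": f"Bearer {TOKEN}"})
resp.raise_for_status()

for metric in resp.json()["value"]:
    for series in metric["timeseries"]:
        for point in series["data"]:
            if (point.get("total") or 0) > THRESHOLD_X:
                print("ALERT:", point["total"], "failed deliveries at",
                      point["timeStamp"])
```

Keep in mind the caveat below: these metrics are at domain level, not per domain topic.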
Note that the event domain metrics do not give you topic granularity, and there is also no activity event log at that level. I recommend using the AEG feedback page to ask for these capabilities.

When to use EventGrid and when to use ServiceBus / Storage Queue?

In Azure, we have two separate messaging technologies, and it is not very well documented when to use which. While EventGrid is really cool, I did not come across guidance on when to use EventGrid (scenarios) vs. a Storage/ServiceBus queue. Can someone help?
E.g. if I have the following scenario:
The status of a flag changes, and based on that I want to trigger an algorithm that does recalculations, a few inserts/updates, etc. in the database.
For implementing this, I can use either EventGrid or a Storage Queue. How do we figure out what to use in such a scenario? I was looking for some kind of guidance.
Basically, Azure Event Grid handles events and Azure Service Bus handles messages. A message is raw data produced by a service to be consumed or stored. Events are also messages (lightweight), but they don't generally convey a publisher's intent, other than to inform.
1) If the purpose is just to store the information, ServiceBus can be used.
2) If the information received is used to trigger another service, Azure Event Grid can be used.
Find more info here:
https://learn.microsoft.com/en-us/azure/event-grid/compare-messaging-services
https://azure.microsoft.com/en-us/blog/events-data-points-and-messages-choosing-the-right-azure-messaging-service-for-your-data/
Events are like notifications from a service to inform the world that something happened in the publisher's domain (similar to an email notification). There is no expectation from the publisher that any action will be taken. A message is a command you send to a specific receiver with the expectation that the message will be processed (like an asynchronous POST request).
Events work in a pub/sub pattern, and multiple subscribers can be configured for the events. The service that needs to react to an event gets notified by Event Grid when the event occurs (an HTTP call from Event Grid to the receiver). The event remains in Event Grid until deletion (cleanup), and there is no guarantee of keeping the original order (no FIFO).
On the other hand, messages are added to a queue and deleted once the "message processor" is done with them. Messages in a queue keep the original order (FIFO). The message processor has to pull messages from the queue.
In your scenario, you could use a combination of both: Service A sends a "StatusChanged" event, you configure a subscription to that event which sends a message to a queue, and then your logic processes that message (the publishing side is sketched below). This ends up as a fully async communication pattern, which is ideal for scenarios where your processor is down or too busy: incoming messages simply accumulate in the queue and are eventually processed once the service is back up and running, without affecting the original service that sent the "StatusChanged" event.
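A minimal sketch of the publishing side of that pattern with the Python azure-eventgrid SDK (the topic endpoint, key, and event type are placeholders):

```python
from azure.core.credentials import AzureKeyCredential
from azure.eventgrid import EventGridPublisherClient, EventGridEvent

# Hypothetical custom-topic endpoint and access key.
client = EventGridPublisherClient(
    "https://mytopic.westus2-1.eventgrid.azure.net/api/events",
    AzureKeyCredential("<topic-access-key>"))

# Service A announces the state change; a subscription on this topic can then
# route the event into a Storage/Service Bus queue for the recalculation job.
client.send(EventGridEvent(
    subject="flags/recalculation",
    event_type="MyApp.StatusChanged",  # hypothetical event type
    data={"flag": "recalc", "value": True},
    data_version="1.0"))
```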
