Logging Azure Service Bus server exceptions

Is there a way to log Service Bus exceptions and errors if I don't have access to the client that submits the queue messages? As in, from the Service Bus queue itself?

Have a look at the Azure metrics REST API here:
https://msdn.microsoft.com/en-us/library/azure/dn163589.aspx
It is a bit fiddly to get up and running with, as you need to create and install a management certificate on Azure. The link below explains how:
https://msdn.microsoft.com/en-us/library/azure/gg551722.aspx
You start by hitting the supported metrics resource for the queue or topic you want to monitor - the URL below is a redacted example:
https://management.core.windows.net/[SubscriptionID]/services/servicebus/Namespaces/[Namespace]/queues/[Queue]/Metrics
It brings back a big chunk of JSON that includes links to the supported metrics resources. These include counts of failed requests and internal server errors.
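As a rough sketch of what that first request looks like in Python (assuming the management certificate has been exported to a single local PEM file containing the certificate and private key; the subscription ID, namespace, queue name and x-ms-version header below are placeholders/assumptions to verify against the documentation):

    import requests

    # Placeholders - substitute your own subscription, namespace and queue.
    SUBSCRIPTION_ID = "<subscription-id>"
    NAMESPACE = "<namespace>"
    QUEUE = "<queue>"

    url = (
        "https://management.core.windows.net/{0}/services/servicebus/"
        "Namespaces/{1}/queues/{2}/Metrics"
    ).format(SUBSCRIPTION_ID, NAMESPACE, QUEUE)

    # The management certificate is presented as a client certificate.
    response = requests.get(
        url,
        cert="management-cert.pem",
        headers={"x-ms-version": "2012-03-01", "Accept": "application/json"},
    )
    response.raise_for_status()

    # The response lists the supported metrics (failed requests, internal
    # server errors, etc.) together with links to their rollups and values.
    print(response.json())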
Beyond that, it's worth using a logging mechanism on your clients that lets you remotely aggregate more detailed messages. This will give you a fuller picture of why your clients might be failing to send and receive.

Related

Database/Cache in Azure Service Bus for jobs in queue completed elsewhere

I have an API (a python-flask app) running on an App Service in Azure and want to implement a queuing system using Azure Service Bus, such that requests from the API are sent to a simple FIFO queue managed/run by the Service Bus. Another resource in Azure will be pulling from this queue and running the jobs based on the contents of the JSON payload contained in the queued message.
When this element has been processed by the other resource, I want to record the job status/metadata (e.g. "finished", along with metadata such as the location where the resulting data was stored). I read about such a system that makes use of the lightweight database offered by Redis; however, I'm wondering if something like this lightweight database/cache system of job status/ids/metadata is available through Azure Service Bus? I'm aware that Redis can be run standalone on a VM in Azure, but if this could all be managed via the Service Bus that would be ideal. I couldn't find specifics on this being offered within Azure Service Bus, and due to how this job metadata is later accessed I cannot just push metadata messages to a new queue.
Does anyone have any insight on this or potential alternatives? If Redis can be run alongside Flask within the same App Service then that would be ideal, but again I wasn't able to find anything explicit on this and it doesn't seem possible to simultaneously run a flask server/app and Redis server at the same time on an App Service.
Thanks.
I'm wondering if something like this lightweight database/cache system
of job status/ids/metadata is available through Azure Service Bus?
Azure Service Bus is a fully managed enterprise message broker; Azure Redis is a NoSQL database on steroids. It also offers a queue mechanism and some other data structures.
it doesn't seem possible to simultaneously run a flask server/app and
Redis server at the same time on an App Service.
You can, but only inside containers.
Please check if this can help you: https://stackoverflow.com/a/39008342/1384539
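Independently of where Redis runs, a minimal worker-side sketch of the pattern in the question (Service Bus for the job queue, Redis for job status/metadata) could look like the following, assuming the azure-servicebus and redis Python packages and an Azure Cache for Redis instance reachable from the worker; all connection details, key names and the result location are placeholders:

    import json
    import redis
    from azure.servicebus import ServiceBusClient

    SERVICEBUS_CONN_STR = "<service-bus-connection-string>"
    QUEUE_NAME = "jobs"
    REDIS_HOST = "<your-cache>.redis.cache.windows.net"
    REDIS_KEY = "<redis-access-key>"

    r = redis.Redis(host=REDIS_HOST, port=6380, password=REDIS_KEY, ssl=True)

    with ServiceBusClient.from_connection_string(SERVICEBUS_CONN_STR) as client:
        with client.get_queue_receiver(queue_name=QUEUE_NAME) as receiver:
            for msg in receiver:
                job = json.loads(str(msg))
                # ... run the job based on the payload ...
                # Record status/metadata in Redis so the API can look it up later.
                r.hset("job:" + job["id"], mapping={
                    "status": "finished",
                    "result_location": "<where the resulting data was stored>",
                })
                receiver.complete_message(msg)

The Flask API can then read the same hash back (for example with r.hgetall("job:" + job_id)) to report the job status to callers.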

Can I see Http Queue Length inside Azure Application Insights?

My Azure Web App is having problems with many requests in the HTTP queue.
To analyse the reason and find out what the problem is, I would like to see the Http Queue Length inside Application Insights.
I wonder if it's possible to see the Http Queue Length inside Azure Application Insights?
If it is possible, then I would like to know how to see this value.
I have tried to find Http Queue Length in Azure Application Insights in the portal GUI.
I have also tried to find Http Queue Length in analytics.applicationinsights.io.
I have also tried to get Http Queue Length from the Azure REST API, but I did not succeed in getting the value.
If I had succeeded, I could add it as a custom event into Application Insights.
Thanks,
Henrik
Unfortunately, for Azure Web Apps the answer is no, as only a subset of performance counters is available to the web app process and, by extension, to Application Insights.
You've already been looking into this, but be sure you're trying to get the queue length for the App Service Plan and not the Web App instance via the REST APIs or the PowerShell cmdlet. The link below explains that it is only available for some tiers and only at the App Service Plan level.
https://learn.microsoft.com/en-us/azure/app-service/web-sites-monitor#understanding-quotas-and-metrics
Http Queue Length is not in Application Insights, but it can be seen in the metrics on the App Service Plan, since it is a VM-level statistic. See this answer.
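If you still want to pull the value programmatically, below is a rough sketch against the Azure Monitor metrics REST API; the HttpQueueLength metric name and api-version are worth double-checking, the bearer token is assumed to be obtained elsewhere (e.g. via azure-identity), and the resource path must point at the App Service Plan (serverfarm), not the Web App:

    import requests

    SUBSCRIPTION_ID = "<subscription-id>"
    RESOURCE_GROUP = "<resource-group>"
    PLAN_NAME = "<app-service-plan>"
    TOKEN = "<bearer-token>"  # e.g. from azure-identity's DefaultAzureCredential

    # App Service Plans are Microsoft.Web/serverfarms resources.
    resource = (
        "/subscriptions/{0}/resourceGroups/{1}/providers/"
        "Microsoft.Web/serverfarms/{2}"
    ).format(SUBSCRIPTION_ID, RESOURCE_GROUP, PLAN_NAME)

    resp = requests.get(
        "https://management.azure.com" + resource + "/providers/microsoft.insights/metrics",
        params={"metricnames": "HttpQueueLength", "api-version": "2018-01-01"},
        headers={"Authorization": "Bearer " + TOKEN},
    )
    resp.raise_for_status()
    print(resp.json())  # time series of HTTP queue length per time grain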

Scalable Request Response pattern using Azure Service Bus

We are evaluating Azure Service Bus for use between the web server and the app server in a request-response pattern. We are planning to have two queues:
Request Queue
Response Queue
The web server will push a message to the request queue and subscribe to the response queue.
By comparing the MessageID and CorrelationId, it can receive the matching response, which can then be sent back to the browser.
But in the cloud, with elastic scaling, we can increase/decrease the number of web server (and app server) instances.
We are wondering if this pattern will work here optimally.
To make this work, we will have to have one Request queue and multiple topics (one for each web server instance).
This will have two downsides:
Along with increasing/decreasing web server instances, we will have to create/delete topics as well.
Every message will be pushed to all the topics, so every message will be processed by all the web servers, which is not efficient.
Please share your thoughts.
Thanks In Advance
When you scale out your endpoint, you don't want to have an instance affinity. You want to rely on the competing consumers and not care which instance of your endpoint processes messages.
For example, if you receive a response and write it to a database, most likely you don't care which instance of an endpoint has written the data. But if you have some in-memory state or any other information only available to the endpoint that originated the request, and processing reply messages requires that information, then you have instance affinity and need to either remove it or use technology that allows you to address it. For example, something like SignalR with a backplane to communicate a reply message to all your web endpoint instances.
Note that ideally you should avoid instance affinity as much as you can.
I know this is old, but thought I should comment to complete this thread.
I agree with Sean.
In principle, do not design with instance affinity in mind.
Any design should work irrespective of the number of instances and whichever instance runs the code.
Microsoft does recommend the same when designing application architecture for running in the cloud.
In your case, I do not think you should plan to have one topic for each instance.
You should just put the request messages into one topic, with a subscription to allow your receiving app service to process those request messages.
When your receiving app service scales out, that's where your design needs to allow reading messages from the subscription from multiple receivers (multiple instances), which is described in the Competing consumers pattern.
https://learn.microsoft.com/en-us/azure/architecture/patterns/competing-consumers
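As a small sketch of the receiving side with the azure-servicebus Python package (the topic, subscription and connection string are placeholders): every scaled-out app server instance runs the same loop, and Service Bus delivers each message to only one of the competing receivers on the subscription.

    from azure.servicebus import ServiceBusClient

    CONN_STR = "<service-bus-connection-string>"
    TOPIC = "requests"
    SUBSCRIPTION = "app-server"

    def handle_request(msg):
        # Process the payload; msg.correlation_id lets a reply sent to the
        # response queue be matched back to the original request.
        print("processing", str(msg), msg.correlation_id)

    with ServiceBusClient.from_connection_string(CONN_STR) as client:
        receiver = client.get_subscription_receiver(
            topic_name=TOPIC, subscription_name=SUBSCRIPTION
        )
        with receiver:
            for msg in receiver:
                handle_request(msg)
                receiver.complete_message(msg)

The web servers just send to the request topic with a correlation_id set on the message; no per-instance topic is needed.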
Please post what you have finally implemented.

how can i detect and get email notification of traffic in azure api management

I have a question regarding Azure API Management again :).
I am using API Management as an API gateway that passes HTTPS requests directly to the Azure Storage REST API.
Is there any way I can get an email notification when there are massive numbers of requests or high response latency?
Thanx for reading : )
You can configure alert notifications either in the portal or via the REST API or .NET SDK to monitor for specific Azure Storage Metrics that you want.
See https://azure.microsoft.com/en-us/documentation/articles/insights-receive-alert-notifications/ for more details.
For massive requests, you might want to consider using "TotalRequests" or "TotalBillableRequests" in a specific time period.
For high response latency, you can track "AverageE2ELatency" or "AverageServerLatency" in a specific time period.
See https://azure.microsoft.com/en-us/documentation/articles/storage-monitoring-diagnosing-troubleshooting/#monitoring-performance for more details on these specific metrics and how they relate to performance monitoring.
Hope this helps.
Sriprasad's answer makes sense for configuration from the Storage side. From the API Management side, you cannot currently set a notification on any event other than the built-in ones (subscription requests, new subscriptions, application gallery requests, new issues/comments, approaching of user subscription quota limit).
You can use the Log-To-Eventhub policy to log a message to an event hub for every request and consume it in a custom or third-party solution like AppInsights/Runscope to fire an alert.
Refer
https://azure.microsoft.com/en-us/documentation/articles/api-management-log-to-eventhub-sample/
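A minimal sketch of the consuming side, assuming the azure-eventhub Python package and that your log-to-eventhub policy writes one record per request; connection details, names and the alerting threshold logic are placeholders:

    from azure.eventhub import EventHubConsumerClient

    CONN_STR = "<event-hub-namespace-connection-string>"
    EVENT_HUB = "<event-hub-name>"

    def on_event(partition_context, event):
        # Each event is whatever your log-to-eventhub policy emitted for a
        # request; parse it and apply your own thresholds (request rate,
        # latency, ...) before sending an email/alert from your own code.
        print("request logged:", event.body_as_str())

    client = EventHubConsumerClient.from_connection_string(
        CONN_STR, consumer_group="$Default", eventhub_name=EVENT_HUB
    )
    with client:
        client.receive(on_event=on_event, starting_position="-1")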
If your requirement is to get reports/metrics from API Management, you can use the management REST APIs for APIM.
https://msdn.microsoft.com/en-us/library/dn781421.aspx
Specifically, you might want to look at reportByAPI (which gives you useful metrics in the response, such as call counts and apiTimeAvg), based on which you can set up alerts/email notifications.
https://msdn.microsoft.com/en-us/library/dn781421.aspx#ReportByAPI
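For illustration, the general shape of that call from Python might look like the sketch below; the host, api-version, response field names and the shared access token format are assumptions that should be checked against the reference above:

    import requests

    SERVICE_NAME = "<apim-service-name>"
    SAS_TOKEN = "<shared-access-token>"  # generated for the APIM management API

    resp = requests.get(
        "https://{0}.management.azure-api.net/reports/byApi".format(SERVICE_NAME),
        params={"api-version": "2014-02-14-preview"},
        headers={"Authorization": "SharedAccessSignature " + SAS_TOKEN},
    )
    resp.raise_for_status()
    for row in resp.json().get("value", []):
        # Inspect call counts and average API time per API, then decide
        # whether to send a notification from your own code.
        print(row)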

Azure service bus statistics/Monitoring

I want to make a dashboard which shows the status of our Azure Service Bus queues and displays the history for "messages added to queue", "length of queue", "messages processed", etc. Using the Azure Management Portal, I can see most of these statistics manually for each queue.
Is there any way to get access to the data that is displayed in the Management Portal through one of the APIs? I want to combine the data from all of the queues that we use into a single interface. I have searched in vain, but I don't want to log my own statistics, as that seems like redoing a task that Microsoft already performs.
Currently, with the REST API, all I can see is how to get the current approximate count of messages in the queue.
There is an API for this now (wasn't back when the OP created the thread):
https://msdn.microsoft.com/en-gb/library/azure/dn163589.aspx (REST)
https://msdn.microsoft.com/en-us/library/mt348562.aspx (.NET)
Also, I believe it should be available via Azure Insights API:
https://msdn.microsoft.com/en-us/library/microsoft.azure.insights.aspx
It is possible to fetch the count of messages in a queue, incoming messages and outgoing messages with the help of the latest Azure Monitor Metrics, with which you can build your own dashboard. Or you can make use of Azure Monitor in the Azure portal, which allows you to configure dashboards and alerts.
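For example, a rough sketch of pulling those numbers for a namespace with the azure-identity and azure-mgmt-monitor packages; the metric names (IncomingMessages, OutgoingMessages, ActiveMessages) and the EntityName filter are the usual Azure Monitor ones for Service Bus but should be verified against your namespace, and the IDs are placeholders:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.monitor import MonitorManagementClient

    SUBSCRIPTION_ID = "<subscription-id>"
    RESOURCE_GROUP = "<resource-group>"
    NAMESPACE = "<servicebus-namespace>"

    resource_uri = (
        "/subscriptions/{0}/resourceGroups/{1}/providers/"
        "Microsoft.ServiceBus/namespaces/{2}"
    ).format(SUBSCRIPTION_ID, RESOURCE_GROUP, NAMESPACE)

    client = MonitorManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
    metrics = client.metrics.list(
        resource_uri,
        metricnames="IncomingMessages,OutgoingMessages,ActiveMessages",
        aggregation="Total,Average",
        filter="EntityName eq '<queue-name>'",  # limit to a single queue
    )
    for metric in metrics.value:
        for ts in metric.timeseries:
            for point in ts.data:
                print(metric.name.value, point.time_stamp, point.total, point.average)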
