Can I see Http Queue Length inside Azure Application Insights?

My Azure Web App is having a problem with many requests sitting in the HTTP queue.
To analyse the cause, I would like to see the Http Queue Length inside Application Insights.
Is it possible to see the Http Queue Length inside Azure Application Insights?
If it is, I would like to know how to see this value.
I have tried to find Http Queue Length in Application Insights in the Azure portal GUI.
I have also tried to find Http Queue Length in analytics.applicationinsights.io.
I have also tried to get Http Queue Length from the Azure REST API, but I did not succeed in getting the value.
If I could, I would add it as a custom event in Application Insights.
Thanks,
Henrik

Unfortunately, for Azure Web Apps the answer is no, as only a subset of performance counters is available to the web app process and, by extension, to Application Insights.
You've already been looking into this, but be sure you're trying to get the queue length for the App Service Plan and not the Web App instance via the REST APIs or the PowerShell cmdlet. The link below explains that the metric is only available for some tiers and only for the App Service Plan.
https://learn.microsoft.com/en-us/azure/app-service/web-sites-monitor#understanding-quotas-and-metrics

Http Queue Length is not available in Application Insights, but it can be seen in the metrics on the App Service Plan, since it is a VM-level statistic. See this answer.
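For completeness, here is a rough sketch of pulling the HttpQueueLength metric for the App Service Plan yourself via the Azure Monitor REST API, so the value could then be forwarded to Application Insights as custom telemetry. It assumes you already have an AAD access token for the management endpoint; the resource path, metric name and api-version are assumptions to verify against your subscription:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class HttpQueueLengthReader
{
    static async Task Main()
    {
        // Placeholder token; obtain a real one via ADAL/MSAL for https://management.azure.com/.
        var accessToken = "<AAD access token>";

        // Note: the App Service *plan* resource, not the web app itself.
        var planResourceId =
            "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Web/serverfarms/<plan-name>";

        var url = "https://management.azure.com" + planResourceId +
                  "/providers/microsoft.insights/metrics" +
                  "?metricnames=HttpQueueLength&api-version=2018-01-01";

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            // Returns a JSON time series you could re-log as a custom event/metric in App Insights.
            var json = await client.GetStringAsync(url);
            Console.WriteLine(json);
        }
    }
}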

Related

Reducing maximum concurrent requests on Azure Cloud Service

I have an Azure Cloud Service (classic) that performs a lengthy operation for each request. If traffic suddenly increases dramatically, I would rather the cloud service not be slowed to the speed of a snail by trying to serve all the requests at once. So the natural solution to me seems to be to simply reject new requests if the number of concurrent requests is too high (for my application it is better if some of the requests are rejected than if all of them are slowed down massively).
For ASP.NET there seems to be a capability to do this through <system.web>:
<system.web>
  <applicationPool
    maxConcurrentRequestsPerCPU="30"
    maxConcurrentThreadsPerCPU="0"
    requestQueueLimit="30"
  />
</system.web>
However, this doesn't seem to work for me: all requests then fail without a clear error message. Besides, Visual Studio tells me that the tag is not expected, so I would guess this isn't available for Azure Cloud Services.
What can I do to achieve this for an Azure Cloud Service? I would also be interested in something like limiting the maximum request time. The only thing I can think of is to actually count the requests in the C# code, but that definitely seems suboptimal.
I am using .NET 4.6.1, and when RDPing into the cloud service VM the IIS version appears to be 10.0 (judging from IIS Manager).
I have seen the answer to this question: Limit concurrent requests in Azure App Service. However, that is not what I want, as I do not want to block IP addresses at any stage.
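To illustrate the "count the requests in the C# code" fallback mentioned above, here is a minimal sketch of an ASP.NET module that rejects requests over a fixed limit with a 503; the module name and limit are placeholders, and the module would still need to be registered under system.webServer/modules in web.config:

using System.Threading;
using System.Web;

// Hypothetical module name; tune MaxConcurrentRequests to your workload.
public class ConcurrencyLimitModule : IHttpModule
{
    private static int _activeRequests;
    private const int MaxConcurrentRequests = 30;

    public void Init(HttpApplication app)
    {
        app.BeginRequest += (sender, e) =>
        {
            // Count the request in; if we're over the limit, short-circuit with 503.
            if (Interlocked.Increment(ref _activeRequests) > MaxConcurrentRequests)
            {
                var ctx = ((HttpApplication)sender).Context;
                ctx.Response.StatusCode = 503; // Service Unavailable
                app.CompleteRequest();         // skips straight to EndRequest, which decrements below
            }
        };

        // EndRequest fires for both served and rejected requests, so the counter stays balanced.
        app.EndRequest += (sender, e) => Interlocked.Decrement(ref _activeRequests);
    }

    public void Dispose() { }
}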

Can ApplicationInsights track events across many WebApps/LogicApps/etc?

I have the following resources
One Mobile/API app
One MVC app
Three Logic apps
One Azure function deployment with 5 functions
I want to have a single tracking number (correlation ID) to track across all instances at the same time. I'm looking at the Contoso Insurance sample, but I'm rebuilding it by hand (not using the Azure deployment scripts).
I've read the deployment code, but I'm not sure whether I can merge App Insights logs together, or whether that would be a hack of some sort.
Observations
When I right-click in Visual Studio, I can only associate with Application Insights instances that aren't already connected to a *app (web | mobile | api).
However, in the configuration I can give Application Insights a direct GUID, which might allow me to achieve the goal of one App Insights activity log for the entire process.
Question
Is it possible to have one App Insights log shared by all the Mobile/API/Logic/MVC sites?
Is there a way to have (or should I have) one standard App Insights instance per web app, plus a special dedicated shared App Insights instance for my code to call into and log?
What is Contoso Insurance doing with Azure App Insights?
Jeff from the Logic Apps team here -- the answer is yes, but there are some caveats. We are working to make the experience seamless and automatic, but for now it requires the following. A few things to know up front:
First, for Logic Apps we have what's called the client tracking ID -- this is a header you can set on an incoming HTTP request or Service Bus message to track and correlate events across actions. It will be passed to all steps (functions, connectors, etc.) as the x-ms-client-tracking-id header.
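As an illustration, this is roughly what passing that header looks like when firing a Logic App HTTP trigger from C#; the trigger URL and payload are placeholders:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class TriggerLogicApp
{
    static async Task Main()
    {
        // Any stable identifier works; Logic Apps propagates it to downstream actions
        // as the x-ms-client-tracking-id header.
        var correlationId = Guid.NewGuid().ToString();

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("x-ms-client-tracking-id", correlationId);

            var response = await client.PostAsync(
                "https://<logic-app-http-trigger-url>",                                    // placeholder trigger URL
                new StringContent("{\"orderId\":42}", Encoding.UTF8, "application/json")); // sample payload

            Console.WriteLine(response.StatusCode);
        }
    }
}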
Logic Apps emits all logs to Azure Monitor -- which unfortunately today only has sinks into Event Hubs, Storage, and Log Analytics -- not App Insights.
With all of that in mind, here's the architecture we see many people following:
Have your web apps emit to App Insights directly. Use a correlation ID as needed. When firing any Logic Apps, pass in the x-ms-client-tracking-id header so you can correlate events.
Log your events to App Insights in the Function app. This blog details some of how to do that, and a better experience is also being worked on.
In your Logic App, either write a Function to consume events off Azure Monitor and push them to App Insights, or write a Function that acts as an App Insights "logger" you can call in your workflow to also get the data into App Insights (see the sketch below).
This is how Contoso Insurance is leveraging App Insights, as far as I understand. We are working across all the teams (App Insights, Azure Monitor, Azure Functions, Logic Apps) to make this super simple and integrated in the coming weeks/months, but for now it is achievable with the above. Feel free to reach out with any questions.
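A minimal sketch of that App Insights "logger" idea, assuming the Microsoft.ApplicationInsights package and an instrumentation key app setting (the setting, event and property names here are placeholders):

using System;
using System.Collections.Generic;
using Microsoft.ApplicationInsights;

public static class WorkflowTelemetry
{
    private static readonly TelemetryClient Client = new TelemetryClient
    {
        InstrumentationKey = Environment.GetEnvironmentVariable("APPINSIGHTS_INSTRUMENTATIONKEY")
    };

    // Call this from a Function in your workflow, passing the x-ms-client-tracking-id value.
    public static void TrackWorkflowEvent(string eventName, string clientTrackingId)
    {
        // The tracking ID becomes a custom property, so events logged by different apps
        // with the same ID can be correlated in a single Analytics query.
        Client.TrackEvent(eventName, new Dictionary<string, string>
        {
            ["clientTrackingId"] = clientTrackingId
        });
    }
}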

Azure WebJobs for Aggregation

I'm trying to figure out a solution for recurring data aggregation of several thousand remote XML and JSON data files, using Azure queues and WebJobs to fetch the data.
Basically, an input endpoint URL of some sort would be called (with a data URL as a parameter) on an Azure website/app. It should trigger a WebJobs background job (or the job could run continuously and check the queue periodically for new work), fetch the data URL, and then call back an external endpoint URL on completion.
Now the main concern is the volume and its performance/scaling/pricing overhead. There will be around 10,000 URLs to be fetched every 10-60 minutes (most URLs will be fetched once every 60 minutes). With this scenario of recurring high-volume background jobs in mind, I have a couple of questions:
Is Azure WebJobs (or Workers?) the right option for background processing at this volume, and can it scale accordingly?
For this sort of volume, which Azure website tier will be most suitable (comparison at http://azure.microsoft.com/en-us/pricing/details/app-service/)? Or would only a Cloud Service or VM(s) work at this scale?
Any suggestions or tips are appreciated.
Yes, Azure WebJobs is an ideal solution for this. Azure WebJobs scale with your Web App (formerly Websites), so if you increase your web app instances, you also increase your WebJob instances. There are ways to prevent this, but that's the default behavior. You could also set up autoscale to automatically scale your web app based on CPU or other performance rules you specify.
It is also possible to scale your WebJob independently of your web front end (WFE) by deploying the WebJob to a web app separate from the one where your WFE is deployed. This has the benefit of not taking up machine resources (CPU, RAM) that your WFE is using, while giving you the flexibility to scale your WebJob instances to the appropriate level. I'm not saying this is what you should do; you will have to do some load testing to determine whether this strategy is right (or necessary) for your situation.
You should consider at least the Basic tier for your web app. That would allow you to scale out to 3 instances if you needed to and would also remove the CPU and network I/O limits that the Free and Shared plans have.
As for the queue, I would definitely suggest using the WebJobs SDK and letting the JobHost (from the SDK) invoke your WebJob function for you instead of polling the queue yourself. This is a really slick solution and frees you from having to write the infrastructure code to retrieve messages from the queue, manage message visibility, delete the message, and so on. For a working example and a quick start on building your WebJob like this, take a look at the sample code the Azure WebJobs SDK Queues template punches out for you.
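The shape of such a queue-triggered WebJob, as a minimal sketch against the classic WebJobs SDK (the queue name and the work inside the function are placeholders):

using System.IO;
using Microsoft.Azure.WebJobs;

public class Program
{
    public static void Main()
    {
        // The JobHost reads the AzureWebJobsStorage/AzureWebJobsDashboard connection strings
        // from app settings and dispatches queue messages to matching functions.
        var host = new JobHost();
        host.RunAndBlock();
    }
}

public class Functions
{
    // Invoked automatically when a message lands on the "fetch-requests" queue (name hypothetical).
    // The SDK handles dequeueing, visibility timeouts, retries and poison-message handling for you.
    public static void ProcessQueueMessage([QueueTrigger("fetch-requests")] string dataUrl, TextWriter log)
    {
        log.WriteLine("Fetching: " + dataUrl);
        // ... download the XML/JSON payload and call back the external endpoint here ...
    }
}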

Logging Azure Service Bus server exceptions

Is there a way to log Service Bus exceptions and errors if I don't have access to the client that submits the queue messages? As in, from the Service Bus queue itself?
Have a look at the Azure metrics REST API here:
https://msdn.microsoft.com/en-us/library/azure/dn163589.aspx
It is a bit fiddly to get up and running with, as you need to create and install a management certificate on Azure. The link below explains how:
https://msdn.microsoft.com/en-us/library/azure/gg551722.aspx
You start by hitting the supported-metrics resource for the queue or topic you want to monitor; the URL below is a redacted example:
https://management.core.windows.net/[SubscriptionID]/services/servicebus/Namespaces/[Namespace]/queues/[Queue]/Metrics
It brings back a big chunk of JSON that includes links to the supported metrics resources, including counts of failed requests and internal server errors.
Beyond that, it's worth using a logging mechanism on your clients that lets you remotely aggregate more detailed messages. This will give you a fuller picture of why your clients might be failing to send and receive.
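To give a feel for the call, here is a rough sketch of hitting that metrics resource with a management certificate; the certificate file, password and x-ms-version value are assumptions:

using System;
using System.IO;
using System.Net;
using System.Security.Cryptography.X509Certificates;

class QueueMetricsSample
{
    static void Main()
    {
        // Placeholders: fill in your subscription ID, namespace and queue name.
        var url = "https://management.core.windows.net/<subscription-id>" +
                  "/services/servicebus/Namespaces/<namespace>/queues/<queue>/Metrics";

        var request = (HttpWebRequest)WebRequest.Create(url);
        request.ClientCertificates.Add(
            new X509Certificate2("management-cert.pfx", "<pfx-password>")); // the installed management certificate
        request.Headers.Add("x-ms-version", "2013-08-01");                  // version value is an assumption
        request.Accept = "application/json";

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            // JSON listing the supported metrics, including failed requests and internal server errors.
            Console.WriteLine(reader.ReadToEnd());
        }
    }
}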

How to scale an Azure cloud service up and down from the REST API with Web API

I am creating a demo where I need to be able to scale my cloud service deployment up from a Web API.
I went over the REST API documentation and didn't seem to find what I need.
Is it possible to use the Service Management API to scale a cloud service up and down?
Alternatively, I will just enable autoscale on a queue and then post messages to the queue to get it to scale up :)
There is no dedicated scaling operation in the Service Management REST API. Since the number of instances for a particular role is stored in the service configuration file, what you need to do is read this configuration using the Get Deployment operation, locate the Instances node and change the value of its count attribute, and then call Change Deployment Configuration for the new instance count to take effect.
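A small sketch of that configuration edit, assuming the base64-encoded .cscfg returned by Get Deployment (the role name and helper name are placeholders; the result goes back in the Change Deployment Configuration body):

using System;
using System.Linq;
using System.Text;
using System.Xml.Linq;

static class DeploymentConfig
{
    // Decodes the <Configuration> value from Get Deployment, bumps the Instances count
    // for the given role, and re-encodes it for Change Deployment Configuration.
    public static string SetInstanceCount(string base64Config, string roleName, int instanceCount)
    {
        var xml = Encoding.UTF8.GetString(Convert.FromBase64String(base64Config));
        var doc = XDocument.Parse(xml);
        XNamespace ns = doc.Root.GetDefaultNamespace();

        var role = doc.Descendants(ns + "Role")
                      .First(r => (string)r.Attribute("name") == roleName);
        role.Element(ns + "Instances").SetAttributeValue("count", instanceCount);

        return Convert.ToBase64String(Encoding.UTF8.GetBytes(doc.ToString()));
    }
}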
