I scaled out an App Service Plan to 2 instances. How can I monitor each instance individually? The metrics blade has no option for this; it only shows the average across both instances.
I know it's a bit of an old thread, but I wanted to provide an update in case you are still looking for a response.
I believe your requirement is currently not a supported feature, so I would recommend raising a feature request in the Azure feedback forum / UserVoice. In general, the Azure feature team checks the feasibility of a feature request, prioritizes it against the existing backlog, adds it to the roadmap as appropriate, and announces and/or updates the related Azure documentation once the request is addressed.
You can see how many instances were created for each App Service in a given time window through App Insights with a Kusto query:
requests
// distinct instances observed per App Service
| summarize InstanceCount = dcount(cloud_RoleInstance) by cloud_RoleName
| order by cloud_RoleName desc
Instances per App Service:
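If you want per-instance numbers rather than just a count, a similar query can break the telemetry down by instance. A minimal sketch, assuming request telemetry is flowing to App Insights (the one-hour window is arbitrary):

// Per-instance request volume and latency over the last hour
requests
| where timestamp > ago(1h)
| summarize requestCount = count(), avgDurationMs = avg(duration) by cloud_RoleName, cloud_RoleInstance
| order by cloud_RoleName asc, requestCount desc

This gives you one row per instance, which is the closest substitute for the per-instance metrics the portal doesn't offer.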
This answer explains that App Insights (AI) and Log Analytics (LA) are being merged into one service. It also suggests that new AI resources can point at an LA workspace, so that all your data is in one place.
My question is: how can I query across LA and AI resources, given that both exist and you don't have the time or permissions to change the AI resource to point at LA?
Using Azure Workbooks I realised I can query multiple resources inside LA or AI, but I don't seem to be able to query across LA and AI in one cell (nor save results between cells).
At present the only ways I can think of to solve this are querying through the API or joining in a Power BI report, but both of these are massive overhead for exploratory querying. Is there an easier way, ideally while staying inside Kusto queries?
Azure Monitor is your one-stop shop for cross-resource queries.
Previously with Azure Monitor, you could only analyze data from within the current workspace, which limited your ability to query across multiple workspaces defined in your subscription. Additionally, you could only search telemetry items collected from your web-based application with Application Insights directly in Application Insights or from Visual Studio. This also made it a challenge to natively analyze operational and application data together.
Now you can query not only across multiple Log Analytics workspaces, but also data from a specific Application Insights app in the same resource group, another resource group, or another subscription. This provides you with a system-wide view of your data. You can only perform these types of queries in Log Analytics.
To reference another workspace in your query, use the workspace identifier, and for an app from Application Insights, use the app identifier.
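To make the identifiers concrete, here is a minimal sketch (the app name is hypothetical) of querying a single Application Insights app from inside Log Analytics:

// Query one AI app from a Log Analytics workspace; 'fabrikamapp' is a placeholder name
app('fabrikamapp').requests
| where timestamp > ago(1h)
| summarize count() by resultCode

The workspace() function works the same way for Log Analytics workspaces, and both accept a resource name, GUID, or full Azure resource ID.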
For example, you can query multiple resources from any of your resource instances; these can be workspaces and apps combined, as below.
// Union that scopes my Application Insights resources (can be saved as a function, e.g. applicationsScoping, for reuse)
union withsource= SourceApp
app('Contoso-app1').requests,
app('Contoso-app2').requests,
app('Contoso-app3').requests,
app('Contoso-app4').requests,
app('Contoso-app5').requests
Or like this, combining the current workspace with other workspaces:
union Update, workspace("contosoretail-it").Update, workspace("b459b4u5-912x-46d5-9cb1-p43069212nb4").Update
| where TimeGenerated >= ago(1h)
| where UpdateState == "Needed"
| summarize dcount(Computer) by Classification
Or like this, using a saved function (here named applicationsScoping) that wraps a union like the one above:
applicationsScoping
| where timestamp > ago(12h)
| where success == 'False'
| parse SourceApp with * '(' applicationName ')' *
| summarize count() by applicationName, bin(timestamp, 1h)
| render timechart
For details, refer to this.
We're new to Azure Function Apps. We heard that one of their great features is scalability, but how does Azure Functions scaling actually work? Does it scale automatically behind the scenes, or is there a mechanism we can set up, for example a maximum scale-out limit?
When we debug an Azure Function locally (we've tried ServiceBusTrigger, EventHubTrigger, QueueTrigger and CosmosDBTrigger), it seems like the same function instance is called multiple times over and over while we continue sending messages, which doesn't scale out or work in parallel as we expected. Is there a good way of debugging the scalability locally?
The scaling of Azure Functions is determined by the Scale Controller:
The Scale Controller only runs in the cloud, so it is not possible to test the scaling locally. Also, the inner workings of this controller are not disclosed.
The best way to test the scaling is to actually do a proof of concept in the cloud and make sure you configure Application Insights. Once you have load tested your function app you can do a Log Analytics query such as the following one to see if multiple instances of your function app have been provisioned:
requests
| where cloud_RoleName =~ 'FUNCTION_APP_NAME'
| project timestamp, id, operation_Id, operation_Name, duration, cloud_RoleName, cloud_RoleInstance
| order by timestamp desc
| take 100
The cloud_RoleInstance property has the ID of the resource that has been provisioned. When that column contains multiple values, you know that scaling has occurred.
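To see scaling happen over time rather than just spotting multiple IDs, you can count distinct instances per time bin. A sketch, with FUNCTION_APP_NAME again as a placeholder:

// Distinct function-app instances observed in each 5-minute bin
requests
| where cloud_RoleName =~ 'FUNCTION_APP_NAME'
| summarize instanceCount = dcount(cloud_RoleInstance) by bin(timestamp, 5m)
| render timechart

A rising instanceCount during your load test is direct evidence of scale-out.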
To be honest, testing whether Azure Functions autoscales should not be a primary concern for you, since it's the responsibility of Azure. You probably need the autoscaling to handle both small and large workloads, and you might have time constraints within which the processing should be finished. If that is your real concern, then you might be better off measuring the end-to-end performance/timings.
The scalability of an Azure Function depends on its hosting plan, and there are three types of hosting plan: the Consumption plan, the Premium plan (in preview at the time of writing, so we can ignore it for now), and the Dedicated (App Service) plan.
On the Consumption plan, the function scales automatically based on the number of incoming events.
On an App Service plan, you can manually scale out by adding more VM instances, or you can enable autoscale. For more details you can refer to this article.
And when you run it locally, without a hosting plan, you cannot see this behavior.
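As for capping the maximum scale-out on the Consumption plan, there is an app setting for this (WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT, in preview at the time of writing). A sketch using the Azure CLI; the function app and resource group names are placeholders:

# Cap Consumption-plan scale-out at 5 instances (names are placeholders)
az functionapp config appsettings set \
  --name my-function-app \
  --resource-group my-rg \
  --settings WEBSITE_MAX_DYNAMIC_APPLICATION_SCALE_OUT=5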
Hope this helps.
I'm trying to find documentation/advisory info on whether or not to use the same App Insights instance from multiple regions. I'm assuming that if I have an API App Service in East US, it's not recommended to use an App Insights instance from the West US region, as it would add latency.
I just got feedback from the Microsoft Application Insights team; the answer is that there is no performance issue:
Application Insights sends data to its backend asynchronously, so the actual network round-trip time should not matter. Also, even though the App Insights resource is in the West region, the 'ingest' endpoints are globally distributed, and telemetry is always sent to the nearest available 'ingest' point.
Details are here.
As for an official document, I'm asking for it (but I'm not sure whether they have one).
Hope it helps.
I have multiple services running on Service Fabric. I would like to add Application Insight for logging. I'm just wondering whether I have to add an Application Insight resource for each microservice or only one is common for all. What is the best practice?
There is no such thing as the best practice for this. It really depends. Some considerations:
Pricing: depending on the tier (Basic or Enterprise) you get an amount of data for free / included in the base price; see the docs. So in some cases, depending on the amount of traffic, you can reduce costs by having a dedicated AI resource per service. AI resources for services that send data below the threshold of the AI pricing plan are then (almost) free.
Querying: if you split services up into one AI resource each, getting an overview of the whole system is difficult, since at the moment you cannot create queries spanning multiple AI resources.
Responsibility: if you have multiple teams working on multiple services, it might be an option to have an AI resource per team, so each team has good insight into only the parts they are responsible for.
If you do decide to use a shared AI resource, there are options like custom telemetry initializers to include custom data that further identifies which ASF application or service is sending the data, if it is not included by default.
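For example, with a shared AI resource you can slice queries by the role that emitted the telemetry; for Service Fabric services the service name typically ends up in cloud_RoleName. A sketch (the service URI is hypothetical):

// Request volume for one service in a shared AI resource
requests
| where cloud_RoleName == 'fabric:/MyApp/MyService'
| summarize count() by bin(timestamp, 1h)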
See also Add Application Insight to a existing Azure Service Fabric cluster for more info about how to integrate AI.
Now, when it comes to bringing data together, you do have some additional options that may or may not need additional services or configuration. For example:
Power BI: you can visualize data from AI resources using dashboards, see https://learn.microsoft.com/en-us/azure/application-insights/app-insights-export-power-bi
OMS: Operations Management Suite, see https://blogs.technet.microsoft.com/msoms/2016/09/26/application-insights-connector-in-oms/. As Jesse mentions, you can link multiple AI resources.
Custom dashboards: using the REST API you can create your own solution that displays data for one or more AI resources.
Our service writes a lot of custom events and metrics to App Insights.
Using the AI portal we can make complex queries and see nice charts, but I would like to access the data from outside the portal.
The Azure Application Insights REST API page (https://dev.applicationinsights.io) claims that the API can be used for this task, but I am unable to make it work; again, I want to query custom events and metrics, not the standard ones.
Does anyone have any examples?
Here is, for example, one of our queries:
customEvents
| where name startswith "Monitor.Xxxxxxx"
| summarize count() by bin(timestamp, 1m)
| order by timestamp desc
It turned out I was using the wrong AppId/key; once I plugged in the correct ones, I was able to use the API Explorer.
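For anyone else hitting this: the same Kusto query can be run through the REST API's query endpoint once the AppId/key pair is correct. A sketch with curl; APP_ID and API_KEY are placeholders taken from the portal's API Access blade:

# Run a Kusto query against the Application Insights REST API
curl -G "https://api.applicationinsights.io/v1/apps/$APP_ID/query" \
  --header "x-api-key: $API_KEY" \
  --data-urlencode 'query=customEvents | summarize count() by bin(timestamp, 1m) | order by timestamp desc'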