Application Insights refresh when using the API - Azure

I am trying to use the Application Insights API to get some telemetry and use it in my website.
During my testing I navigate to some pages in the website and then call the API, but that doesn't return the latest pages I visited.
Is there any setting that should be done on Azure to make the API return the latest pages?

You should expect a delay between data generation and the data becoming available for querying. The latency is typically in the single-minutes range, and it is usually smaller for metrics than for raw events.
The SLA for Application Insights is 2 hours:
"Data Latency" is the number of minutes that data received from the instrumentation in Customer’s application is delayed from appearing in Application Insights service where the delay is greater than 2 hours.
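Rather than a setting, the usual approach is to tolerate the latency in the query itself: only query up to a cutoff a few minutes in the past, so results are stable. A minimal sketch against the Application Insights REST API query endpoint (the app id and API key values are placeholders):

```python
# Sketch: querying the Application Insights REST API while allowing for
# ingestion latency. APP_ID and API_KEY are placeholders.
from datetime import datetime, timedelta, timezone
import urllib.parse

APP_ID = "your-app-id"    # placeholder
API_KEY = "your-api-key"  # placeholder

def build_query_url(minutes_of_latency=5):
    """Build a query URL that deliberately excludes the most recent few
    minutes, since freshly generated telemetry may not be queryable yet."""
    cutoff = datetime.now(timezone.utc) - timedelta(minutes=minutes_of_latency)
    query = ("pageViews"
             " | where timestamp < datetime({})"
             " | order by timestamp desc").format(cutoff.strftime("%Y-%m-%dT%H:%M:%SZ"))
    return ("https://api.applicationinsights.io/v1/apps/{}/query?{}"
            .format(APP_ID, urllib.parse.urlencode({"query": query})))

# The actual request needs a valid app id and key, e.g.:
# req = urllib.request.Request(build_query_url(), headers={"x-api-key": API_KEY})
# rows = json.load(urllib.request.urlopen(req))
```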

Related

Is Azure Monitor a good store for custom application performance monitoring

We have legacy applications that currently write various runtime metrics (SQL call run times, API/HTTP request run times, etc.) to a local SQL DB.
Format: (source, event, data, executionduration)
We are moving away from storing those in the local SQL DB and are now publishing the same metrics to Azure Event Hubs.
We are looking for a good place to store those metrics for the purpose of monitoring the health of the application. A simple solution would be to store them in some DB and build a custom application to visualize the data in custom ways.
We are also considering using Azure Monitor for this purpose via the Data Collector API (https://learn.microsoft.com/en-us/azure/azure-monitor/platform/data-collector-api)
QUESTION: Are there any issues with Azure Monitor that would prevent us from achieving this type of health monitoring?
Details
each event is small (a few hundred characters)
expecting ~10 million events per day
retention of 1-2 days is enough
the ability to aggregate old events per source per event is important (to have historical run time information)
Thank you
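For the aggregation requirement in the details above, the per-source, per-event rollup can be sketched independently of the storage choice. Field names follow the format given in the question:

```python
# Sketch: the per-source, per-event aggregation described in the
# question, computing count and average execution duration from
# (source, event, data, executionduration) records.
from collections import defaultdict

def aggregate(events):
    """Return {(source, event): (count, avg_duration_ms)}."""
    totals = defaultdict(lambda: [0, 0.0])  # (source, event) -> [count, sum]
    for e in events:
        key = (e["source"], e["event"])
        totals[key][0] += 1
        totals[key][1] += e["executionduration"]
    return {k: (n, s / n) for k, (n, s) in totals.items()}

sample = [
    {"source": "api", "event": "GET /orders", "executionduration": 120},
    {"source": "api", "event": "GET /orders", "executionduration": 80},
    {"source": "sql", "event": "SELECT orders", "executionduration": 30},
]
print(aggregate(sample))
```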
You can create some simple graphs, and with the Log Analytics query language you can do just about any form of data analytics you need.
Here's a pretty good article on Monitor visualizations:
learn.microsoft.com/en-us/azure/azure-monitor/log-query/charts
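If you do go the Data Collector API route mentioned in the question, each POST must carry an HMAC-SHA256 signature built from the workspace shared key, following the signing scheme in the linked doc. A sketch of the signing step, using placeholder credentials:

```python
# Sketch: building the Authorization header for the Azure Monitor
# Data Collector API. The workspace id and shared key are placeholders.
import base64
import hashlib
import hmac

def build_signature(workspace_id, shared_key, date_rfc1123, content_length):
    """Return the Authorization header value for a POST to /api/logs."""
    string_to_hash = "POST\n{}\napplication/json\nx-ms-date:{}\n/api/logs".format(
        content_length, date_rfc1123)
    decoded_key = base64.b64decode(shared_key)
    signature = base64.b64encode(
        hmac.new(decoded_key, string_to_hash.encode("utf-8"),
                 hashlib.sha256).digest()).decode()
    return "SharedKey {}:{}".format(workspace_id, signature)

# Example with placeholder credentials:
auth = build_signature(
    workspace_id="00000000-0000-0000-0000-000000000000",
    shared_key=base64.b64encode(b"not-a-real-key").decode(),
    date_rfc1123="Mon, 01 Jan 2024 00:00:00 GMT",
    content_length=100)
```

The resulting value goes in the Authorization header, alongside an x-ms-date header carrying the same RFC 1123 timestamp used in the signature.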

Azure Application Insights Continuous Export Process

I am using the Application Insights APIs to get my customEvents data. If I enable Continuous Export, can old data, e.g. from a year ago, still be accessed through the Application Insights APIs, or will the APIs only show me 90 days?
As per the official doc
After Continuous Export copies your data to storage (where it can stay
for as long as you like), it's still available in Application Insights
for the usual retention period.
This means you can only get the usual retention period of 90 days from Application Insights, not older data such as events from a year ago. However, you can still get the data from your Azure storage account, download it, and write whatever code you need to process it.
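Exported blobs hold one JSON document per line, so reading year-old events back means parsing those lines yourself. A minimal sketch, where the record shape shown is illustrative rather than the exact export schema:

```python
# Sketch: reading Continuous Export output downloaded from blob storage.
# Each line is a standalone JSON document; the sample record below is an
# illustrative shape, not the exact export schema.
import json

def parse_export_lines(lines):
    """Yield the name of each custom event found in exported records."""
    for line in lines:
        record = json.loads(line)
        for ev in record.get("event", []):
            yield ev.get("name")

sample = ['{"event": [{"name": "ButtonClicked", "count": 1}]}']
print(list(parse_export_lines(sample)))
```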

Azure - Web API / App Service Plan - Performance Details not recorded

I have an App Service Plan which is hosting 2 Web APIs. The issue I am facing is that I am unable to view details such as CPU usage, memory percentage, requests, average response time, etc.
These can be found under the Overview tab for both the App Service and the App Service Plan, but no data is being recorded, even if I retrieve data for the whole week rather than just the last hour.
I have also confirmed that I am hitting the correct app hosted on the correct plan. Have I missed anything? Do I need to enable something?
I have also generated around 20k requests in the last few hours, so I expect something to show up.

Monitor when Azure Web App is unloaded?

What would be the best way to monitor when our Azure web app is being unloaded when no requests have been made to the web app for a certain amount of time?
Enabling Logstream for the web server doesn't seem to reveal anything of use.
Any hints much appreciated!
You can use Azure Application Insights to create a web test that will alert you when the site is no longer available. It will ping your site from the data centers you select and perform an action you choose (mail, webhook, etc.).
However, if you want your web app to stay online, you can upgrade its plan to at least Basic and enable Always On under settings.
In addition to kim's response:
If you are running your web app in the Standard pricing tier, Web Apps lets you monitor two endpoints from three geographic locations.
Endpoint monitoring configures web tests from geo-distributed locations that test response time and uptime of web URLs. The test performs an HTTP GET operation on the web URL to determine the response time and uptime from each location. Each configured location runs a test every five minutes.
Uptime is monitored using HTTP response codes, and response time is measured in milliseconds. A monitoring test fails if the HTTP response code is greater than or equal to 400 or if the response takes more than 30 seconds. An endpoint is considered available if its monitoring tests succeed from all the specified locations.
Web Apps also provides the ability to troubleshoot issues related to your web app by looking at HTTP logs, event logs, process dumps, and more. You can access all this information using the Support portal at http://<your-site-name>.scm.azurewebsites.net/Support
The Azure App Service support portal provides you with three separate tabs to support the three steps of a common troubleshooting scenario:
- Observe current behavior
- Analyze by collecting diagnostics information and running the built-in analyzers
- Mitigate
If the issue is happening right now, click Analyze > Diagnostics > Diagnose Now to create a diagnostic session, which collects HTTP logs, event viewer logs, memory dumps, PHP error logs, and a PHP process report.
Once the data is collected, the support portal runs an analysis on it and provides you with an HTML report.
If you want to download the data, it is stored by default in the D:\home\data\DaaS folder.
Hope this helps.

Azure WebJobs for Aggregation

I'm trying to figure out a solution for recurring data aggregation of several thousand remote XML and JSON data files, using Azure queues and WebJobs to fetch the data.
Basically, an input endpoint URL of some sort would be called (with a data URL as a parameter) on an Azure website/app. It should trigger a WebJobs background job (or can it run continuously and check the queue periodically for new work?), fetch the data URL, and then call back an external endpoint URL on completion.
Now the main concern is the volume and its performance/scaling/pricing overhead. There will be around 10,000 URLs to be fetched every 10-60 minutes (most URLs will be fetched once every 60 minutes). With regard to this scenario of recurring high-volume background jobs, I have a couple of questions:
Are Azure WebJobs (or Worker Roles?) the right option for background processing at this volume, and can they scale accordingly?
For this sort of volume, which Azure App Service tier would be most suitable (comparison at http://azure.microsoft.com/en-us/pricing/details/app-service/)? Or would only a Cloud Service or VM(s) work at this scale?
Any suggestions or tips are appreciated.
Yes, Azure WebJobs is an ideal solution to this. Azure WebJobs will scale with your Web App (formerly Websites). So, if you increase your web app instances, you will also increase your web job instances. There are ways to prevent this but that's the default behavior. You could also setup autoscale to automatically scale your web app based on CPU or other performance rules you specify.
It is also possible to scale your web job independently of your web front end (WFE) by deploying the web job to a web app separate from the web app where your WFE is deployed. This has the benefit of not taking up machine resources (CPU, RAM) that your WFE is using while giving you flexibility to scale your web job instances to the appropriate level. Not saying this is what you should do. You will have to do some load testing to determine if this strategy is right (or necessary) for your situation.
You should consider at least the Basic tier for your web app. That would allow you to scale out to 3 instances if you needed to and also removes the CPU and Network I/O limits that the Free and Shared plans have.
As for the queue, I would definitely suggest using the WebJobs SDK and letting the JobHost (from the SDK) invoke your web job function for you instead of polling the queue yourself. This is a really slick solution and frees you from having to write the infrastructure code to retrieve messages from the queue, manage message visibility, delete the message, etc. For a working example and a quick start on building your web job this way, take a look at the sample code the Azure WebJobs SDK Queues template generates for you.
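The WebJobs SDK itself is .NET, but the bookkeeping the JobHost takes off your hands (dequeue, process, re-deliver on failure, dead-letter poison messages) can be illustrated with a small language-agnostic sketch. Here an in-memory deque stands in for the Azure storage queue, and the function and parameter names are illustrative:

```python
# Sketch: the queue-handling loop the WebJobs SDK JobHost would
# otherwise run for you. An in-memory deque stands in for the Azure
# storage queue; each entry is (message, delivery_attempts_so_far).
from collections import deque

def fetch(url):
    """Stand-in for the 'fetch the data URL' step of the job."""
    if not url.startswith("http"):
        raise ValueError("unreachable data URL")
    # real code would download and parse the XML/JSON here

def drain_queue(queue, handler, max_dequeue=5):
    """Process messages until the queue is empty. A message that fails
    max_dequeue times is moved to a dead-letter list (poison message)."""
    dead_letter = []
    while queue:
        msg, attempts = queue.popleft()
        try:
            handler(msg)
        except Exception:
            if attempts + 1 >= max_dequeue:
                dead_letter.append(msg)            # give up on poison message
            else:
                queue.append((msg, attempts + 1))  # make visible again
    return dead_letter

q = deque([("http://example.com/data1.xml", 0), ("bad-url", 0)])
dead = drain_queue(q, fetch)
print(dead)  # ['bad-url']
```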
