I am using the Application Insights APIs to get my customEvents data. If I enable Continuous Export, can old data, such as data from a year ago, still be accessed through the Application Insights APIs, or will the APIs only show me the last 90 days?
As per the official doc:
After Continuous Export copies your data to storage (where it can stay for as long as you like), it's still available in Application Insights for the usual retention period.
This means Application Insights itself only keeps data for the usual retention period of 90 days, so data from a year ago is no longer queryable through the APIs. However, you can still get that data from your Azure storage account: download it and write whatever code you need to process it.
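For example, here is a minimal sketch of reading the exported telemetry back out of Blob Storage with Python. The container name, path prefix, and date are placeholders; Continuous Export writes line-delimited JSON blobs under paths like <appname>_<instrumentation key>/<telemetry type>/<date> in whatever container you configured.

```python
# Minimal sketch: read Continuous Export blobs back from Azure Storage.
# The container name and blob prefix below are assumptions; use your own.
# pip install azure-storage-blob
import json
import os

from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)
container = service.get_container_client("appinsights-export")

# Continuous Export writes one JSON telemetry item per line, grouped by
# telemetry type and date; filter on the prefix for the day you want.
for blob in container.list_blobs(name_starts_with="myapp_mykey/Event/2019-05-07"):
    text = container.download_blob(blob.name).readall().decode("utf-8")
    for line in text.splitlines():
        item = json.loads(line)
        print(item)
```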
About 10 days ago I created my first Azure SQL Database. I chose the Basic Plan (€4.21/month). This database is used only for testing purposes. Today I received an email from Microsoft Azure.
Subject of the email: Your services were disabled because you reached your spending limit
Body of the email: Keep building in Azure by adjusting your spending limit. Your services were disabled on May 7, 2020 because you’ve reached the monthly Azure spending limit provided by your Visual Studio subscription benefit. To keep using Azure, either:
1. Wait for your monthly spending limit to reset at the start of next month, or
2. Adjust your monthly limit for a specific month or for the life of your subscription—you only pay for the extra amount you use each month.
Why did Azure change the pricing plan of my database without notifying me? Could some action of mine have caused this?
I know that I did an Export Data-tier Application from Microsoft SQL Server Management Studio while connected to my Azure database (I made a backup from there), but I doubt that explains it.
UPDATE
As suggested by NillsF, I checked the deployment history and I can confirm I chose the Basic Plan when I created the database (see below). So I still have no clue what's happening to my database.
You can check the activity log on your subscription to see who initiated the switch from Basic to vCore. It seems strange that MSFT would have done this on your behalf.
You can also check the deployment history on your resource group to verify the tier you picked when you created the resource itself.
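Here is a minimal sketch of pulling those activity log entries with the Azure SDK for Python; the subscription ID, resource group name, and time window are placeholders:

```python
# Minimal sketch: list activity log events to see who changed the database
# tier. Subscription ID and resource group below are placeholders.
# pip install azure-identity azure-mgmt-monitor
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

client = MonitorManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Filter to the resource group and the period around the unexpected change.
events = client.activity_logs.list(
    filter="eventTimestamp ge '2020-05-01T00:00:00Z' "
           "and resourceGroupName eq 'my-resource-group'"
)
for e in events:
    print(e.event_timestamp, e.caller, e.operation_name.value)
```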
I have an App Service Plan hosting 2 Web APIs. The issue I am facing is that I am unable to view details such as CPU Usage, Memory Percentage, Requests, Average Response Time, etc.
These can be found under the Overview tab for both the App Service and the App Service Plan, but no data is being recorded, even when I retrieve data for the whole week rather than just the last hour.
I have also confirmed that I am hitting the correct App hosted on the correct Plan. Have I missed anything? Do I need to enable something?
I have also generated around 20k requests in the last few hours so I expect something to show up.
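One way to rule out a portal display issue is to query the same metrics straight from the Azure Monitor API. A minimal sketch, assuming the azure-monitor-query package and a placeholder resource ID (the metric names are standard App Service ones):

```python
# Minimal sketch: fetch App Service metrics via the Azure Monitor API to
# check whether any data exists at all. The resource ID is a placeholder.
# pip install azure-identity azure-monitor-query
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricAggregationType, MetricsQueryClient

resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Web/sites/<web-app-name>"
)
client = MetricsQueryClient(DefaultAzureCredential())

response = client.query_resource(
    resource_id,
    metric_names=["Requests", "AverageResponseTime"],
    timespan=timedelta(days=7),
    granularity=timedelta(hours=1),
    aggregations=[MetricAggregationType.TOTAL, MetricAggregationType.AVERAGE],
)
for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(metric.name, point.timestamp, point.total, point.average)
```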
I have an MVC-based web app running on Azure. Its CPU performance has been very predictable over the past five months. However, over the past 24 hours, and most recently from 1:00 pm to 1:30 pm Eastern time today (USA), I have had CPU spikes nearing 100%. The image below, which covers the past 7 days, shows this.
This CPU spike is not coming from my app or my users. There has not been an abnormal increase in users, user activity or queries. I also checked Google Analytics to see if perhaps my site was getting hammered by random users etc. It showed nothing out of the ordinary.
There also was a corresponding huge jump in data going out of my site, which is highly unusual. The second image shows data egress for the past week. However, as I said, I checked my Azure SQL Database Query Store and it shows absolutely nothing out of the ordinary. Furthermore, my DTU percentage never even neared 100% during this time, which it certainly would have if this much data was pulled from the database.
I have basically ruled out anything amiss on my end. Is there some way I can check to see if there were issues with Azure causing this?
If you suspect an underlying Azure platform issue, both Azure Service Health and Azure Resource Health are useful resources for determining whether you are being impacted by a platform issue.
Azure Service Health provides personalized service health information when Azure platform issues impact your resources.
https://learn.microsoft.com/en-us/azure/service-health/service-health-overview
Azure Resource Health provides visibility into whether your Azure resources are healthy or unhealthy.
https://learn.microsoft.com/en-us/azure/service-health/resource-health-overview
For a list of supported Azure resources, you can refer to this article which also describes the set of health checks being performed.
https://learn.microsoft.com/en-us/azure/service-health/resource-health-checks-resource-types
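If you'd rather check programmatically, here is a minimal sketch that asks Resource Health for a resource's current availability over the REST API; the resource ID and the api-version are assumptions based on the docs above:

```python
# Minimal sketch: query Resource Health's current availabilityStatus via REST.
# The resource ID and api-version are assumptions; check the docs for your type.
# pip install azure-identity requests
import requests
from azure.identity import DefaultAzureCredential

resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Web/sites/<web-app-name>"
)
token = DefaultAzureCredential().get_token("https://management.azure.com/.default")

url = (
    "https://management.azure.com" + resource_id +
    "/providers/Microsoft.ResourceHealth/availabilityStatuses/current"
)
resp = requests.get(
    url,
    params={"api-version": "2020-05-01"},  # assumed version; newer ones exist
    headers={"Authorization": "Bearer " + token.token},
)
resp.raise_for_status()
props = resp.json()["properties"]
print(props["availabilityState"], "-", props.get("summary"))
```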
I am trying to use the Application Insights API to get some telemetry and use it in my website.
During my testing I navigate to some pages in the website and then call the API, but it doesn't return the latest pages I visited.
Is there any setting that should be changed on Azure to make the API return the latest pages?
You should expect a delay between data generation and the data becoming available for querying. Typically the latency is in the single-minutes range, and it is typically smaller for metrics than for raw events.
The SLA for Application Insights is 2 hours:
"Data Latency" is the number of minutes that data received from the instrumentation in Customer’s application is delayed from appearing in Application Insights service where the delay is greater than 2 hours.
My 3-month Windows Azure trial account was disabled after only 3 weeks. How is that possible? I had nothing on my project, just a simple ASP.NET web page.
I don't think anybody knew about my page and made constant requests.
I can't find a statistics section in the Management Portal to check what my traffic was, etc.
Does anybody know where I can check my Hosted Service's statistics?
Thanks
The trial provides 750 compute hours monthly. Once you deploy your app, the meter is ticking. That is, as long as something is deployed, it's a metered resource. Whether consuming 0% or 100% cpu/network/memory, you pay hourly.
Now: If you deployed a single Small instance for your asp.net site, you shouldn't have consumed 750 hours in 3 weeks. Is it possible you deployed with Medium instances? Or deployed with 2 Small instances? Do you have more than one Role in your deployment (since each would have at least one instance)?
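As a quick back-of-the-envelope check (the size multiplier is an assumption for illustration: classic Azure billed larger instances as multiples of a Small instance):

```python
# Back-of-the-envelope: how fast a deployment left running burns through
# the trial's 750 monthly compute hours. The 2x Medium multiplier is an
# assumption for illustration.
HOURS_PER_DAY = 24
DAYS = 21  # roughly 3 weeks

def consumed_hours(instances: int, size_multiplier: int) -> int:
    """Small-instance hours consumed by a deployment left running."""
    return instances * size_multiplier * DAYS * HOURS_PER_DAY

print(consumed_hours(1, 1))  # 1 Small:   504 h, still under the 750 h cap
print(consumed_hours(2, 1))  # 2 Smalls: 1008 h, cap blown before month end
print(consumed_hours(1, 2))  # 1 Medium: 1008 h, same result
```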
One bit of advice I give, when doing dev work: at the end of the day, when you're not actively working on a project, delete the deployment (you can always re-deploy to the same place later). This helps save tremendously on consumed hours.
You aren't charged based on access to a hosted service (i.e. a compute instance). As long as you have something deployed there, you are charged; even if no one ever visits it, it still costs you money.
In the portal, you have to delete whatever hosted services you have there in order to preserve credits. For example, if you had both a Production and a Staging deployment up, you would be charged compute hours for both. You have to actually delete the instances to conserve compute hours on your bill.
As for stats, the only way I know of to get access statistics is through the Azure Diagnostics features. The bills used to have a lot more detail (in/out transfers, etc.), but they are a lot shorter now.