I'd like to know who created a resource/service in Azure. I know that I can open the Activity log of a service and check "Event initiated by" to get my answer. However, Activity logs are retained for 90 days only, so if the service has been running for a year I don't have access to that information.
In short: if I select a random service in Azure that is older than 90 days, how can I find out who created it?
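Within the 90-day retention window, the "Event initiated by" value can also be read programmatically as the caller field of an Activity Log event. A minimal sketch, assuming the azure-mgmt-monitor SDK's activity_logs.list operation and placeholder subscription/resource IDs:

```python
# pip install azure-identity azure-mgmt-monitor
# Sketch: list who initiated write operations on a resource, within the
# ~90-day Activity Log retention window. Subscription and resource IDs
# below are placeholders.
from datetime import datetime, timedelta

from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

subscription_id = "<subscription-id>"  # placeholder
resource_id = (
    "/subscriptions/<subscription-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Web/sites/<app-name>"  # placeholder resource
)

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

start = (datetime.utcnow() - timedelta(days=89)).strftime("%Y-%m-%dT%H:%M:%SZ")
odata_filter = f"eventTimestamp ge '{start}' and resourceId eq '{resource_id}'"

for event in client.activity_logs.list(filter=odata_filter):
    # 'caller' corresponds to "Event initiated by" in the portal
    if event.operation_name and "write" in event.operation_name.value.lower():
        print(event.event_timestamp, event.operation_name.value, event.caller)
```

For anything older than the retention window, the caller is only recoverable if the Activity Log was already being exported somewhere (for example via a diagnostic setting to a Log Analytics workspace or a storage account) before the events aged out.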
I have had an Azure SQL DB point-in-time restore running for two days. I want to cancel it as I think there is an issue. I can see the DB restoring in SSMS but can't find the deployment in the Azure Portal. Does anyone know how to cancel it? I have tried using the Azure CLI but I can't see the resource.
It's called an Azure Hiccup; it happened to me yesterday in the Switzerland West region between 10:20 and 10:40.
I re-ran the task and everything was fixed.
If I check the Activity Log I can see the error:
But if I browse Service Health, it says everything was fine:
What to do in case of Azure Hiccups:
FIX: Re-run the task; hopefully that fixes the issue, like hitting an old TV with your fist.
PREVENT: You can try to create an Activity Log alert, but once again it will be based on Service Health (which says everything is good) and not on the actual Activity Log, so you will probably miss issues like this and only discover the problem 24 hours later.
POST-MORTEM: You can take a screenshot of the failed task/service in the Activity Log, show it to Microsoft and ask for a refund if possible. For the future, you can check the current status of Azure on the official Status page and subscribe to its RSS feed, and you can browse the Azure Status History. But as I said, neither of the last two reports these Azure Hiccups, so the screenshot of the Activity Log is still the only proof that a tree fell in the forest yesterday.
As Microsoft's SLA says that high availability for Azure SQL Database and SQL Managed Instance is 99.99% of the year, you can start collecting those screenshots and open tickets with their support.
After I dropped the database this morning, the drop operation's status showed as unsuccessful, but the restore was finally canceled 8 hours after attempting to drop the database.
Found a solution: just create a new database with the same name. The restoring one will be replaced with the newly created database, which you can then delete.
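A sketch of that workaround using the azure-mgmt-sql SDK; the subscription, resource group, server, database name and region below are placeholders, and the same thing can of course be done from the portal or the Azure CLI:

```python
# pip install azure-identity azure-mgmt-sql
# Sketch of the workaround above: create an (empty) database with the same
# name so the stuck "restoring" copy is replaced, then delete it.
# All names/IDs are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.sql.models import Database

subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
server_name = "<sql-server-name>"
database_name = "<stuck-restoring-db-name>"

client = SqlManagementClient(DefaultAzureCredential(), subscription_id)

# Overwrite the stuck database with a fresh, empty one of the same name.
poller = client.databases.begin_create_or_update(
    resource_group, server_name, database_name,
    Database(location="<region>"),  # must match the server's region
)
poller.result()

# Once the create has completed, the replacement database can be deleted.
client.databases.begin_delete(resource_group, server_name, database_name).result()
```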
From the documentation it is unclear how Azure Application Insights pings back to the Azure cloud service.
The only documentation that gives a clue doesn't explain exactly how this works. What makes it a little more difficult to plan for is that the Azure Monitor service is one piece and the application's telemetry client, applied through code, is another, and together they make up the whole.
Here is the statement from this documentation:
Does the SDK create temporary local storage?
Yes, certain Telemetry Channels will persist data locally if an endpoint cannot be reached. Please review below to see which frameworks and telemetry channels are affected.
Telemetry channels that utilize local storage create temp files in the TEMP or APPDATA directories which are restricted to the specific account running your application. This may happen when an endpoint was temporarily unavailable or you hit the throttling limit. Once this issue is resolved, the telemetry channel will resume sending all the new and persisted data.
For our purposes the plan would be to use Azure Application Insights, but connectivity would be spotty or "planned" at best, i.e. every 12 or 24 hours.
Is there a way to plan when the service is actually pinged and used or is there a way to just submit "logs" at certain time intervals?
If not, what happens with spotty/intermittent connectivity in general?
The ServerTelemetryChannel, which is the one that persists data to local storage, will hold data for up to 24 hours and transmit it once it gets a connection. If there is no connectivity for longer than that it will keep trying, but once it does get a connection it won't transmit anything older than 24 hours. There is also a storage size limit of 50 MB. If you can reliably send data once every 12 hours you should be fine, but there isn't an easy way to tell when all the data has been transmitted, so the planned connection window needs to be long enough to ensure you can transmit all of it.
Another option would be to create your own ITelemetryChannel that implements a scheduled data transmission. Since the SDK is open source, you can use the ServerTelemetryChannel code to help guide you.
The final option would be to persist logs in local storage through some other means and then import them into Azure Monitor. The Data Collector API is in preview, but it would allow you to import data into Azure Monitor on whatever schedule you want, as long as it's sent with a schema that Azure Monitor understands.
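A minimal sketch of that last option, following the documented HTTP Data Collector API request signing; the workspace ID, shared key and custom log type name are placeholders:

```python
# pip install requests
# Sketch: push locally persisted log records to Azure Monitor with the
# HTTP Data Collector API (in preview at the time of writing).
# Workspace ID and shared key are placeholders from the Log Analytics
# workspace's agents management page.
import base64
import hashlib
import hmac
import json
from datetime import datetime

import requests

WORKSPACE_ID = "<workspace-id>"
SHARED_KEY = "<primary-or-secondary-key>"
LOG_TYPE = "SpottyConnectivityLogs"  # lands in the SpottyConnectivityLogs_CL table

def post_logs(records):
    body = json.dumps(records)
    rfc1123_date = datetime.utcnow().strftime("%a, %d %b %Y %H:%M:%S GMT")
    # Build the SharedKey signature exactly as the API documentation describes.
    string_to_sign = (
        f"POST\n{len(body)}\napplication/json\n"
        f"x-ms-date:{rfc1123_date}\n/api/logs"
    )
    signature = base64.b64encode(
        hmac.new(
            base64.b64decode(SHARED_KEY),
            string_to_sign.encode("utf-8"),
            hashlib.sha256,
        ).digest()
    ).decode()

    response = requests.post(
        f"https://{WORKSPACE_ID}.ods.opinsights.azure.com/api/logs"
        "?api-version=2016-04-01",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"SharedKey {WORKSPACE_ID}:{signature}",
            "Log-Type": LOG_TYPE,
            "x-ms-date": rfc1123_date,
        },
    )
    response.raise_for_status()

# Example: flush whatever was queued locally during the offline window.
post_logs([{"timestamp": "2020-01-01T00:00:00Z", "message": "queued while offline"}])
```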
I am using the Application Insights APIs to get my customEvents data. If I enable Continuous Export, can old data (e.g. from a year ago) still be accessed through the Application Insights APIs, or will the APIs only show me 90 days?
As per the official doc:
After Continuous Export copies your data to storage (where it can stay for as long as you like), it's still available in Application Insights for the usual retention period.
This means the Application Insights APIs only give you the usual retention period of 90 days, not old data from a year ago. However, you can still get that data from your Azure storage account: download it and write whatever code you need to process it.
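For example, a sketch that reads exported customEvents back out of blob storage with the azure-storage-blob SDK; the connection string, container name and folder prefix are assumptions that depend on how your export is configured (each exported blob holds one JSON document per line):

```python
# pip install azure-storage-blob
# Sketch: read customEvents that Continuous Export wrote to blob storage.
# Connection string, container name and folder prefix are placeholders.
import json

from azure.storage.blob import ContainerClient

container = ContainerClient.from_connection_string(
    "<storage-connection-string>",          # placeholder
    container_name="<export-container>",    # placeholder
)

for blob in container.list_blobs(name_starts_with="<app-name>_<ikey>/Event/"):
    data = container.download_blob(blob.name).readall().decode("utf-8")
    for line in data.splitlines():
        if line.strip():
            event = json.loads(line)
            # process the exported customEvent however you need
            print(event)
```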
We shut down our Azure VMs every day at 11 pm and start them at 7 am, except on weekends.
Is there a way to check the heartbeat only on weekdays between 7:30 am and 10:30 pm, to see if the servers are alive and working?
If so, how can I send an email for the servers that miss a heartbeat during that time?
There doesn't seem to be any support from any Azure service for creating monitoring or alerts that fire only between certain hours.
It seems Azure Monitor, Log Analytics and service-specific alerts (like those for a specific VM) all assume the alerts should be active at all times.
Some possible solutions I came up with that might work:
If you need to use Azure services, you could create an Azure Automation PowerShell runbook (see the sketch after this list)
Set it to trigger regularly (every hour?) and handle the time interval you are interested in in code
Also do the actual check of whether the service is up in code
In my opinion, it feels a bit hackish...
Use some external tool for monitoring your services, e.g. Nagios
See e.g. this link for how it could work
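A rough sketch of the runbook idea, written here as a Python runbook (Azure Automation also supports Python) rather than PowerShell. The Log Analytics workspace ID, the 10-minute Heartbeat threshold and the SMTP details are placeholders, and it assumes the runbook's clock and the VMs' business hours are in the same time zone:

```python
# pip install azure-identity azure-monitor-query
# Sketch: exit outside the weekday 07:30-22:30 window, query the Log
# Analytics Heartbeat table for VMs that stopped reporting, and email a list.
import smtplib
from datetime import datetime, timedelta
from email.message import EmailMessage

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "<log-analytics-workspace-id>"  # placeholder

now = datetime.now()  # assumes runbook time zone matches the VMs' business hours
in_window = now.weekday() < 5 and (7, 30) <= (now.hour, now.minute) <= (22, 30)
if not in_window:
    raise SystemExit(0)  # outside business hours, do nothing

client = LogsQueryClient(DefaultAzureCredential())
query = """
Heartbeat
| summarize LastBeat = max(TimeGenerated) by Computer
| where LastBeat < ago(10m)
"""
result = client.query_workspace(WORKSPACE_ID, query, timespan=timedelta(hours=1))
silent = [row[0] for table in result.tables for row in table.rows]

if silent:
    msg = EmailMessage()
    msg["Subject"] = f"Missing VM heartbeats: {', '.join(silent)}"
    msg["From"] = "monitor@example.com"   # placeholder addresses
    msg["To"] = "ops@example.com"
    msg.set_content("No heartbeat in the last 10 minutes from: " + ", ".join(silent))
    with smtplib.SMTP("smtp.example.com") as smtp:  # placeholder SMTP host
        smtp.send_message(msg)
```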
Is there any way to get Azure status updates only for the services and regions I am using? For example, I am using Cloud Services in West US. When this service is down in West US, I want to get an alert for it. I don't care about other services and other regions.
If you set up alert notifications for your application, you'll get notified when any of the underlying services you're using are not functioning properly, which lets you confirm whether your service is available and working.
https://azure.microsoft.com/en-us/documentation/articles/insights-receive-alert-notifications/
If you get an alert about a service issue, that's when I would first take a look at the Azure status dashboard, and then at your application logs to troubleshoot.
Another trick is to create simple URLs in your application that do a quick service test. For example, let's say you're using blob storage in the west datacenter. You could set up a page that does a test write/read to ensure that the service is working. This will give you a 100% accurate indication of whether there is a problem. Since the cloud is highly distributed and service statuses don't update immediately, I find this method highly preferable.
You would then point your alert monitoring at URLs like these:
http://yourapp.com/
http://yourapp.com/blobtest
http://yourapp.com/redistest
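A minimal sketch of what such a blobtest page could look like, here as a small Flask endpoint using the azure-storage-blob SDK; the connection string is a placeholder and the healthchecks container is assumed to already exist:

```python
# pip install flask azure-storage-blob
# Sketch: a tiny /blobtest endpoint that does a round-trip write/read against
# the storage account the app actually uses, so the alert monitor gets a
# truthful up/down signal. Connection string and container are placeholders.
from flask import Flask
from azure.storage.blob import BlobServiceClient

app = Flask(__name__)
blob_service = BlobServiceClient.from_connection_string("<storage-connection-string>")

@app.route("/blobtest")
def blobtest():
    try:
        blob = blob_service.get_blob_client("healthchecks", "probe.txt")
        blob.upload_blob(b"ping", overwrite=True)        # test write
        if blob.download_blob().readall() != b"ping":    # test read
            raise RuntimeError("read-back mismatch")
        return "OK", 200
    except Exception as exc:  # any failure should trip the alert monitor
        return f"Blob storage check failed: {exc}", 503

if __name__ == "__main__":
    app.run()
```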
The Azure Status website has the information you need for all Azure regions.
https://azure.microsoft.com/en-us/status/