Azure - 12 hour logging

So I'm trying to familiarise myself with Azure and have started work on a website which is currently deployed to Azure on each git commit. I decided I had to look at logging, so I turned on application diagnostics in the Azure portal. I logged via a trace statement in my code and, sure enough, it writes to a log file.
I noticed that hovering over the info icon beside the "Application logging (Filesystem)" toggle shows a note that it will be turned off after 12 hours. I presumed that meant diagnostic logging would be turned off after 12 hours, but over 20 hours later that seems not to be the case.
Does the 12 hours refer to the retention of log files after they are created, or will logging genuinely be switched off at some point?
From the little I've read, if I want durable logging I need to consider pushing log files to blob storage or Azure tables (possibly writing directly). Is my understanding of the 12-hour retention correct?
Thanks
Tim

This 12-hour limit applies to application logging to text file(s): if you use an ILogger instance to log data (e.g. logger.LogInformation(...)), then this feature will be disabled after 12 hours.
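For illustration, here is a minimal sketch of the kind of logging that falls under this limit, assuming an ASP.NET Core controller with a constructor-injected ILogger (the controller name and message are made up, not taken from the question):

```csharp
using System;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

public class HomeController : Controller
{
    private readonly ILogger<HomeController> _logger;

    // The framework's logging infrastructure injects the ILogger instance.
    public HomeController(ILogger<HomeController> logger)
    {
        _logger = logger;
    }

    public IActionResult Index()
    {
        // With "Application logging (Filesystem)" enabled, this message is written
        // to the App Service file system logs until the 12-hour window expires.
        _logger.LogInformation("Index page requested at {Time}", DateTime.UtcNow);
        return View();
    }
}
```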

Related

Azure Web App for Linux (Container) HTTP logs stop being provided

I've got an Azure Web App for Linux (Container) running, and I have set up Diagnostic Settings so that AppServiceHTTPLogs are exposed; I also set Application logging under App Service Logs to Filesystem. I can view and query them in Logs under the Monitoring section of the Web App settings.
This works fine for a few days, but then it just stops logging. At first I thought it was a space issue, so I changed the quota from 35 MB to 100 MB, and it started logging again. Then a day later it stopped logging again, so I changed the retention from 7 days to 1 day. It started logging again. Now it has stopped and I can't go higher than 100 MB or lower than 1 day. Additionally, when I look at the file system storage used, it's only sitting at a few megabytes.
I have no idea why it just stops logging. Has anyone experienced this?
EDIT:
As a wild experiment I just set the retention days back to 7 and, lo and behold, it started logging again. It's as if it's just seeking attention.

Blob trigger affecting Application Insights logging in Azure Functions

I have two Azure Functions that exist in the same function app and are both connected to the same instance of Application Insights:
TimerFunction uses a TimerTrigger, executes every 60 seconds and logs at each log level for testing purposes (a rough sketch of it is shown below).
BlobFunction uses a BlobTrigger and its functionality is irrelevant for this question.
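For reference, TimerFunction is along these lines (a simplified, illustrative sketch rather than the exact code; the schedule and messages are placeholders):

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class TimerFunction
{
    // Runs once every 60 seconds and writes one message per log level,
    // purely to see which of them show up in Application Insights.
    [FunctionName("TimerFunction")]
    public static void Run([TimerTrigger("0 * * * * *")] TimerInfo timer, ILogger log)
    {
        log.LogTrace("TimerFunction trace message");
        log.LogDebug("TimerFunction debug message");
        log.LogInformation("TimerFunction information message");
        log.LogWarning("TimerFunction warning message");
        log.LogError("TimerFunction error message");
        log.LogCritical("TimerFunction critical message");
    }
}
```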
It appears that when BlobFunction is enabled (it isn't being triggered, by the way), it clogs up Application Insights with polling, and I don't receive some of the log messages written by TimerFunction. If I disable BlobFunction, then the logs I see in the development tools monitor for TimerFunction are all there.
This is shown in the screenshot below. TimerFunction and BlobFunction were both running until I disabled BlobFunction at 20:24, at which point you can clearly see the logs working "normally". At 20:26 I re-enabled BlobFunction, and the logs written by TimerFunction are again intermittent and missing my own logged info.
Here is the sample telemetry from the live metrics tab:
Am I missing something glaringly obvious here? What is going on?
FYI: my host.json file does not set any log levels; I took them all out in the process of testing this, and it is currently a near-skeleton. I also changed BlobFunction to use an HttpTrigger instead, and the issue disappeared, so I'm 99% certain it's because of the BlobTrigger.
EDIT:
I tried to add an Event Grid trigger instead, as Peter Bons suggested, but my resource group shows no storage account for some reason. The approach the linked article shows, and the approach this video shows (https://www.youtube.com/watch?v=0sEzimJYhME&list=WL), just don't work for me. The options are simply different, as shown below:
It is normal behavior that the polling is cluttering your logs. You can of course set a log level in host.json to filter out those messages, though you might lose some valuable other logging as well.
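As a rough illustration, the filtering could look something like the host.json sketch below; the category names are placeholders to adapt to whatever categories appear in your logs, and the sampling block relates to the sampling point mentioned next:

```json
{
  "version": "2.0",
  "logging": {
    "logLevel": {
      "default": "Information",
      "Host.Results": "Error",
      "Function.BlobFunction": "Warning"
    },
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request"
      }
    }
  }
}
```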
As for possibly missing telemetry: it could very well be that some logs are dropped due to sampling, which is enabled by default. I would also not be surprised if some logging is simply not shown in the portal. I've personally experienced logging being delayed by up to 10 minutes, or not being available at all, on the Azure Functions log page in the portal. Try a direct query in App Insights as well.
Or you can go directly to the App Insights resource and create some queries yourself that filter out those messages using Search or Logs.
The other option is to not rely on polling with the BlobTrigger, but instead use an Event Grid trigger that invokes the function once a blob is added. Here is an example of calling a function when an image is uploaded to an Azure Storage blob container. Because there is no polling involved, this is a much more efficient way of reacting to storage events.
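A minimal sketch of such a function, assuming the in-process model with the Microsoft.Azure.WebJobs.Extensions.EventGrid binding (the function name and log message are made up), could look like this:

```csharp
using Azure.Messaging.EventGrid;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.Extensions.Logging;

public static class BlobCreatedHandler
{
    // Invoked by an Event Grid subscription on the storage account, filtered to
    // Microsoft.Storage.BlobCreated events, so no blob polling is involved.
    [FunctionName("BlobCreatedHandler")]
    public static void Run([EventGridTrigger] EventGridEvent eventGridEvent, ILogger log)
    {
        log.LogInformation("Blob created: {Subject}", eventGridEvent.Subject);
    }
}
```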

Azure Functions portal log / monitor isn't very accurate

I've been using Azure Functions for a while, and it seems the longer a function is around, the less accurate the portal logs are. For roughly the first 3 months of using my functions, everything monitoring/logging-wise was fine. Over time things started getting less accurate.
Now I see the real logs by going to Microsoft Azure Storage Explorer and checking the AzureWebJobsStorage account.
First, when I bring up the code/logs, the last log it shows isn't accurate. It will usually be from a few days ago, or the last error. When the function triggers, though, it does show the live feed. This isn't that big a deal; what is bad is the monitor being inactive and not being able to see the logs from it. I suppose I'll just use Azure Storage Explorer.
The Monitor invocation logs always seem a few days behind. They used to be accurate, but for the last month or so they have consistently lagged by a few days.
Dan,
The local, file-based logs exist primarily to support the portal experience, so the behavior you're observing in the log window is expected: the logs are not written by the runtime as part of the normal invocation process, but only when you're actively developing/testing in the portal.
The issue you're experiencing with the monitor is due to a regression that has been patched and should be fully rolled out today (you can see more details here).
We've been listening to feedback on our logging capabilities, and there has been a lot of investment in that area, resulting in the recently announced built-in integration with Application Insights. That integration addresses some of the pain points you've brought up, as well as other issues, so I'd strongly recommend trying it out. You can find more information about it here.

Does the Diagnostic Logging setting turn itself off by design?

I have enabled diagnostic logging (Error level only, to file system or blob) on my Azure website several times and confirmed that it is working. When I come back and check the next day, it is switched off. I can't seem to find any documentation suggesting that this is by design.
If you're logging to File System, then it does disable itself after 12 hours. You can see this if you click the help bubble:
The reason is that it could affect site performance due to excessive writing to the (slow) file system.
However, if you set it up for blob storage, it should never get turned off until you turn it off yourself.
If you turn on Application Logging to the File System, then yes, it will turn itself off after 12 hours. You can see this in the portal if you hover over the information icon for Application Logging (see below). This behavior is also documented here for reference.
The reason this is disabled after 12 hours has to do with the limited amount of storage you have on the local file system, which will be 1 GB to 250 GB depending on your App Service plan (size).
If you enable application logging to Azure Storage (blob), then you have up to 500 TB of potential storage. In this scenario, your logging should not be getting disabled after 12 hours.

Azure Websites automated and manual backups are not created

Whilst accepting that backups in Windows Azure Websites are a preview feature, I can't seem to get them working at all. My site is approximately 3 GB and on the standard tier. The settings are configured to move to a geo-redundant storage account with no other containers. There is no database selected; I'm only backing up the files.
In the admin portal, if I use the manual Backup Now button, a 0-byte file is created within the designated storage account, dated 01/01/0001 00:00:00. However, even after several days, it is not replaced with the 'actual' file.
If I use the automated backup scheduler, nothing happens at all - no errors, no 0 byte files.
Can anyone shed any light on this please?
The backup/restore feature is still in preview and officially supports only 2 GB of data. From the error message you posted ("backup is currently in progress") it seems you probably hit a bug that was fixed last week (the result of that bug was that some lingering backups blocked subsequent backups).
Please try it again; you should be able to invoke it now. If you find another error message in the operation logs, feel free to post it here (just leave the RequestId in it unscrambled; we can correlate using that) and we can take a look.
However, as I mentioned at the beginning, more than 2 GB is not fully supported yet (you might not be able to do, e.g., a round trip with your data: backup and then restore).
Thanks,
Petr
