Cleaning of logs in an Azure Web Job

I am using the default logging mechanism that an Azure web job provides; the logger type is 'TextWriter'. I have 3 functions in the same web job with extensive logging, so a large number of logs are generated every minute. With the default settings of an Azure web job, all the logs go into blobs in the storage account. I do not want my storage account to just keep growing with months and months of old logs.
I need a way of cleaning the logs on a periodic basis. Is there any setting/configuration that makes the logs get cleaned up periodically? Or should I write code to monitor the blob container 'azure-webjobs-hosts' and the files inside 'output-logs'? Is that the only place where the web job stores my application's logs by default?
I tried searching the web but couldn't find any related posts. Any pointers would be of great help.

Based on my experience, we can achieve this by defining the Azure Storage container name ourselves. We can use a weekly/monthly/daily name for the container, then use a timer-triggered function to delete the container. For example, if we need to delete weekly data, we write each week's data to its own container and delete that container the following week via a timer trigger, as in the sketch below.
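A minimal sketch of that approach, assuming the Azure.Storage.Blobs and Microsoft.Azure.WebJobs packages and an illustrative weekly naming scheme such as 'joblogs-2024-w05' (the function name, container prefix, and schedule are placeholders, not anything the SDK provides):

using System;
using System.Globalization;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class CleanupOldLogContainers
{
    // Runs every Monday at midnight (UTC) and deletes the container holding last week's logs.
    [FunctionName("CleanupOldLogContainers")]
    public static async Task Run([TimerTrigger("0 0 0 * * 1")] TimerInfo timer, ILogger log)
    {
        var service = new BlobServiceClient(Environment.GetEnvironmentVariable("AzureWebJobsStorage"));

        // Work out last week's container name from the current date.
        var lastWeek = DateTime.UtcNow.AddDays(-7);
        int week = CultureInfo.InvariantCulture.Calendar.GetWeekOfYear(lastWeek, CalendarWeekRule.FirstDay, DayOfWeek.Monday);
        string containerName = $"joblogs-{lastWeek:yyyy}-w{week:D2}";

        // DeleteIfExistsAsync avoids an exception if the container was already removed.
        await service.GetBlobContainerClient(containerName).DeleteIfExistsAsync();
        log.LogInformation("Deleted log container {ContainerName}", containerName);
    }
}

The same pattern works for daily or monthly containers; only the naming scheme and the CRON expression change.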

Related

How to periodically run a script on Azure App Service filesystem (logs deletion)

I know I can delete old files manually, but I need to automate the process. Some cron script would do the job, but as far as I know, when the app service is reprovisioned my changes will be lost.
The App Service runs Ubuntu.
Yes, by default, logs are not automatically deleted (with the exception of Application Logging (Filesystem)). To automatically delete logs, set the Retention Period (Days) field (that's one way to do it).
You could also automate the deletion by leveraging the Kudu Virtual File System (VFS) REST API. For a sample script, check out this discussion thread for a similar approach; a rough example is also sketched below.
Note that WebJobs are not yet supported for App Service on Linux. You could use Azure Functions for running scripts, if that fits your requirement.
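If you go the Kudu VFS route, something like the following could run from a scheduled function or any machine that can reach the SCM site. This is only a sketch: the site name, credentials, log folder, and 30-day cut-off are placeholders, so double-check the folder you point it at before deleting anything.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class KuduLogCleanup
{
    static async Task Main()
    {
        // Placeholders: replace with your app name and Kudu deployment credentials.
        string site = "<app-name>";
        string user = "<deployment-user>";
        string pass = "<deployment-password>";
        string folder = $"https://{site}.scm.azurewebsites.net/api/vfs/LogFiles/Application/";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes($"{user}:{pass}")));
        // "If-Match: *" lets the VFS API delete files without an ETag check.
        client.DefaultRequestHeaders.Add("If-Match", "*");

        // The VFS API returns a JSON array describing the folder's contents.
        string listing = await client.GetStringAsync(folder);
        using var doc = JsonDocument.Parse(listing);
        foreach (var entry in doc.RootElement.EnumerateArray())
        {
            string name = entry.GetProperty("name").GetString();
            DateTime modified = entry.GetProperty("mtime").GetDateTime();
            if (modified < DateTime.UtcNow.AddDays(-30))
            {
                await client.DeleteAsync(folder + name);
                Console.WriteLine($"Deleted {name}");
            }
        }
    }
}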

Best practices for Azure web app logging and automating a log-clearing policy (when the log size keeps increasing)

I'm setting up Azure Web App logging. My concern is that error logs are stored at the web app server level, and their size increases day by day because of ELMAH. Is there a best approach to maintaining the logs, both for storing them and for automating archiving or deletion?
My web development is based on Angular. Any suggestions on aggregating logs, and on what kinds of logs would be generated?
Yes, by default, logs are not automatically deleted (with the exception of Application Logging (Filesystem)). To automatically delete logs, set the Retention Period (Days) field. You could automate the deletion by leveraging the Kudu Virtual File System (VFS) REST API. For a sample script, check out this discussion thread for a similar approach:
How can you delete all log files from an Azure WebApp using powershell?
Just to highlight, these are the kinds of logging that you can capture on Web Apps:
• Detailed Error Logging
• Failed Request Tracing
• Web Server Logging
• Application logging - you can turn on the file system option temporarily for debugging purposes. This option turns off automatically in 12 hours. You can also turn on the blob storage option to select a blob container to write logs to.
For log directory information kindly refer to this document: https://learn.microsoft.com/azure/app-service/troubleshoot-diagnostic-logs

How can I find the source of my Hot LRS Write Operations on Azure Storage Account?

We are using an Azure Storage account to store some files that shall be downloaded by our app on the user's demand.
Even though there should be no write operations (at least none I could think of), we are exceeding the included write operations just some days into the billing period (see image).
Price-wise it's still within limits, but I'd still like to know whether this is normal and how I can analyze the matter. Besides the storage, we are using
Functions and
App Service (mobile app)
but none of them should cause that many write operations. I've checked the logs of our functions, and none of those that access the queues or the blobs have been active lately. There are some functions that run every now and then, but only once every few minutes, and those do not access the storage at all.
I don't know if this is related, but there is a kind of periodic ingress on our blob storage (see the image below). The period is roughly 1 h, but there is a baseline of 100 kB per 5 min.
Analyzing the metrics of the storage account further, I found that there is a constant stream of 1.90k transactions per hour for blobs and 1.3k transactions per hour for queues, which seems quite exceptional to me. (Please note that the resolution of this graph is 1 h, while the former has a resolution of 5 minutes.)
Is there anything else I can do to analyze where the write operations come from? It kind of bothers me, since it does not seem as if it's supposed to be like that.
I've had the exact same problem; after enabling Storage Analytics and inspecting the $logs container, I found many log entries indicating that upon every request towards my Azure Functions, these write operations occur against the following container object:
https://[function-name].blob.core.windows.net:443/azure-webjobs-hosts/locks/linkfunctions/host?comp=lease
In my Azure Functions code I do not explicitly write to any container or file as such, but I have the following two application settings configured:
AzureWebJobsDashboard
AzureWebJobsStorage
So I filed a support ticket with Azure with the following questions:
Are the write operations triggered by these application settings? I believe so, but could you please confirm.
Will the write operations stop if I delete these application settings?
Could you please describe, at a high level, in what context these operations occur (e.g. logging? resource locking? other?)
and I got the following answers from the Azure support team, respectively:
Yes, you are right. According to the log information, we can see "https://[function-name].blob.core.windows.net:443/azure-webjobs-hosts/locks/linkfunctions/host?comp=lease".
This azure-webjobs-hosts folder is associated with the function app and is created by default when the function app is created. When the function app is running, it records these logs in the storage account configured with AzureWebJobsStorage.
You can't stop the write operations, because these operations record logs that the Azure Functions runtime needs in the storage account it uses. Please do not remove the application setting AzureWebJobsStorage. The Azure Functions runtime uses this storage account connection string for all functions except HTTP-triggered functions. Removing this application setting will leave your function app unable to start. By the way, you can remove AzureWebJobsDashboard; that stops the Monitor feature rather than the operations above.
These operations record runtime logs of the function app. They occur when our backend allocates an instance for running the function app.
The best place to find information about storage usage is Storage Analytics, especially Storage Analytics Logging.
There's a special blob container called $logs in the same storage account which will have detailed information about every operation performed against that storage account. You can view the blobs in that blob container and find the information.
If you don't see this blob container in your storage account, then you will need to enable storage analytics on your storage account. However considering you can see the metrics data, my guess is that it is already enabled.
Regarding the source of these write operations, have you enabled diagnostics for your Functions and App Service? Those write diagnostics logs to blob storage. Storage Analytics itself also writes to the same account, and that will cause write operations as well; a rough way to inspect the $logs container is sketched below.
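As an illustration of what inspecting $logs can look like with the Azure.Storage.Blobs SDK (the connection string variable and the crude "Put"/"Delete" filter below are my own assumptions; the real entries are semicolon-delimited, as described in the Storage Analytics log format documentation):

using System;
using Azure.Storage.Blobs;

class DumpStorageAnalyticsLogs
{
    static void Main()
    {
        var service = new BlobServiceClient(Environment.GetEnvironmentVariable("STORAGE_CONNECTION_STRING"));
        var logsContainer = service.GetBlobContainerClient("$logs");

        // Each blob under $logs holds one line per storage operation performed against the account.
        foreach (var blobItem in logsContainer.GetBlobs())
        {
            string content = logsContainer.GetBlobClient(blobItem.Name).DownloadContent().Value.Content.ToString();
            foreach (var line in content.Split('\n'))
            {
                // Crude filter: show likely write operations such as PutBlob, PutMessage or DeleteBlob.
                if (line.Contains("Put") || line.Contains("Delete"))
                    Console.WriteLine(line);
            }
        }
    }
}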
In my case, I had an Azure Application Insights resource that generated 10K transactions per minute on its storage for Functions and App Services, even though there were only a few HTTP requests among them. I'm not sure what triggered them, but once I removed Application Insights, everything went back to normal.

Azure Web Jobs long history affecting app service plan performance

I have been running my web jobs for a few months now, and the history includes hundreds of thousands of records of runs, mainly TimerTriggers. When I go to the "Functions" view of web jobs logs in the portal, I have noticed that my app service plan shoots up to 100% CPU while I am sitting on that page. The page constantly says "Indexing..."
When I close the "Functions" view down the CPU goes straight back down to a few percent, it's normal range.
I assume it must be down to the fact that it has been running for so long and the number of records to search through is so vast. I cannot see any option to archive or remove old records of when jobs ran.
Is there a way I can reduce the history of the jobs? Or is there another explanation?
I'm not familiar with Azure Web Jobs, but I am familiar with Azure Functions, which is built on top of Web Jobs, so this might work.
In Azure Functions, each execution is stored in an Azure Storage table. There you can see all of the parameters that were passed in, as well as the result. I could go into the storage table and truncate the records I do not need, so you might be able to do the same with Web Jobs; a clean-up sketch follows after the link below.
Here is how to access this information:
Table Storage in markheath.net/post/three-ways-view-error-logs-azure-functions
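If you want to script that truncation, a hedged sketch with the Azure.Data.Tables SDK could look like this. The table name below is only an example of how older Functions hosts named their log tables; list the tables in your own storage account first to see what actually exists, and the 30-day cut-off is likewise arbitrary.

using System;
using Azure.Data.Tables;

class TruncateFunctionLogs
{
    static void Main()
    {
        var service = new TableServiceClient(Environment.GetEnvironmentVariable("AzureWebJobsStorage"));

        // List the tables first so you can see what the host actually created.
        foreach (var table in service.Query())
            Console.WriteLine(table.Name);

        // Example: delete entities older than 30 days from one log table (the name is illustrative).
        var client = service.GetTableClient("AzureWebJobsHostLogs201712");
        var cutoff = DateTimeOffset.UtcNow.AddDays(-30);
        string filter = TableClient.CreateQueryFilter($"Timestamp lt {cutoff}");
        foreach (var entity in client.Query<TableEntity>(filter))
        {
            client.DeleteEntity(entity.PartitionKey, entity.RowKey);
        }
    }
}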
Based on your description, I checked my web job and found where the related logs for the Azure WebJobs dashboard are stored.
Note: the list records for the Invocation Log (recently executed functions) are under azure-webjobs-dashboard\functions\recent\flat, and the detailed invocation logs are under azure-webjobs-dashboard\functions\instances. A sketch for pruning these blobs follows.
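A hedged sketch for pruning those dashboard blobs with Azure.Storage.Blobs; the 30-day cut-off is arbitrary, and be aware that deleting these blobs removes that invocation history from the dashboard:

using System;
using Azure.Storage.Blobs;

class PruneDashboardLogs
{
    static void Main()
    {
        // AzureWebJobsDashboard is the connection string the dashboard logs are written with.
        var service = new BlobServiceClient(Environment.GetEnvironmentVariable("AzureWebJobsDashboard"));
        var container = service.GetBlobContainerClient("azure-webjobs-dashboard");
        var cutoff = DateTimeOffset.UtcNow.AddDays(-30);

        // Prune both the flat "recent" index and the per-invocation detail blobs.
        foreach (var prefix in new[] { "functions/recent/flat/", "functions/instances/" })
        {
            foreach (var blob in container.GetBlobs(prefix: prefix))
            {
                if (blob.Properties.LastModified < cutoff)
                    container.DeleteBlobIfExists(blob.Name);
            }
        }
    }
}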

How to control and maintain Azure web jobs control panel logging?

I am using Azure web jobs in a system to process XML data received in real time via a web service; the data is queued for later processing. Some of the web job functions are invoked at quite a high frequency (hundreds of times per minute). When I first trialed the system, the logging seemed to perform well. However, now that several weeks' worth of log data has accumulated, it seems to stop updating and displays "indexing in process" fairly constantly.
How do I 'purge' or clear-out the logs?
Can I, and should I, turn off logging selectively for the frequently invoked jobs? How can this be achieved?
My web jobs are continuous and use the C# API. My question isn't the same as Azure webjobs output logs indexing taking very long, although that answer is also relevant - I was specifically asking how to purge the logs and turn off logging selectively.
You would have configured the AzureWebJobsStorage application setting or connection string in your WebJob. The logs are stored in the blob storage of this storage account; you should be able to clear them out there manually.
Assuming that you are using the Azure WebJobs SDK, you can plug in a custom logger:
JobHostConfiguration config = new JobHostConfiguration();
config.Tracing.Tracers.Add(new CustomTraceWriter(TraceLevel.Info));
JobHost host = new JobHost(config);
host.RunAndBlock();
CustomTraceWriter can wrap all writes in a check against an application setting (note that GetSetting returns a string, so it needs to be parsed):
if (bool.TryParse(CloudConfigurationManager.GetSetting("EnableWebJobLogging"), out var enabled) && enabled)
{
....
}
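For completeness, here is a rough sketch of what such a CustomTraceWriter could look like against the WebJobs SDK 2.x TraceWriter base class; the 'EnableWebJobLogging' app setting and the Console sink are assumptions, so forward to whatever sink you actually use:

using System;
using System.Diagnostics;
using Microsoft.Azure;
using Microsoft.Azure.WebJobs.Host;

public class CustomTraceWriter : TraceWriter
{
    private readonly bool _loggingEnabled;

    public CustomTraceWriter(TraceLevel level) : base(level)
    {
        // Read the switch once at start-up; "EnableWebJobLogging" is an app setting you define yourself.
        bool.TryParse(CloudConfigurationManager.GetSetting("EnableWebJobLogging"), out var enabled);
        _loggingEnabled = enabled;
    }

    public override void Trace(TraceEvent traceEvent)
    {
        if (!_loggingEnabled)
            return;

        // Console is just a placeholder sink; swap in blob storage, a file, or any logger you prefer.
        Console.WriteLine($"[{traceEvent.Level}] {traceEvent.Source}: {traceEvent.Message}");
    }
}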
