Azure WebJob logging is slow

I have a continuously running WebJob that pulls messages from a Service Bus queue, processes them, and persists data to a SQL database. The processing can sometimes be database intensive.
In trying to increase the performance of the WebJob, I noticed that one of the largest bottlenecks seems to be logging. I have logging to blob storage enabled, with the level set to Informational. When I turn off logging (via the portal), the message processing rate triples! Re-enabling the logging brings the performance back down.
Are there any tricks to get the logging performance up? I have checked the obvious things like setting up the storage account in the same location and resource group.

I believe that some improvements were made in WebJobs 2.0, which is still in pre-release (https://www.nuget.org/packages/Microsoft.Azure.WebJobs/2.0.0-beta1). Can you give that a shot to see if that helps?
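In the meantime, if the WebJob is built on the WebJobs SDK, it may also be worth checking how much the SDK itself is tracing. A rough sketch of turning the verbosity down and, optionally, disabling dashboard logging entirely; the Warning level and nulling out the dashboard connection are assumptions to adapt, not something from the original answer:

using System.Diagnostics;
using Microsoft.Azure.WebJobs;

JobHostConfiguration config = new JobHostConfiguration();

// Only surface warnings and errors instead of Informational-level traces.
config.Tracing.ConsoleLevel = TraceLevel.Warning;

// Optional: stop the SDK writing per-invocation dashboard records to storage.
config.DashboardConnectionString = null;

new JobHost(config).RunAndBlock();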

Related

What is the best way to document and keep a change log with Azure?

We're looking for a lightweight solution that keeps track of our reasons for, and execution of, changes made to our Azure tenant. There is no approval process necessary, but we would like a tracking system that lets us easily and quickly catch up on history and existing state.
By default, everything you do to an Azure resource is recorded in the Azure Activity Log. You can learn more about it from here. But I would recommend enabling Diagnostic Logging to your default Log Analytics workspace, which is now part of Azure Monitor Logs. Learn more about Diagnostic Logging from here.

How can I find the source of my Hot LRS Write Operations on Azure Storage Account?

We are using an Azure Storage account to store some files that are downloaded by our app on the user's demand.
Even though there should be no write operations (at least none I could think of), we are exceeding the included write operations just some days into the billing period (see image).
Regarding the price it's still within limits, but I'd still like to know whether this is normal and how I can analyze the matter. Besides the storage we are using
Functions and
App Service (mobile app)
but none of them should cause that many write operations. I've checked the logs of our functions, and none of those that access the queues or the blobs have been active lately. There are some functions that run every now and then, but only once every few minutes, and those do not access the storage at all.
I don't know if this is related, but there is a kind of periodic ingress on our blob storage (see the image below). The period is roughly 1 h, but there is a baseline of 100 kB per 5 min.
Analyzing the metrics of the storage account further, I found that there is a constant stream of 1.90k transactions per hour for blobs and 1.3k transactions per hour for queues, which seems quite exceptional to me. (Please note that the resolution of this graph is 1 h, while the former has a resolution of 5 minutes.)
Is there anything else I can do to analyze where the write operations come from? It kind of bothers me, since it does not seem as if it's supposed to be like that.
I've had the exact same problem; after enabling Storage Analytics and inspecting the $logs container, I found many log entries indicating that upon every request towards my Azure Functions, these write operations occur against the following container object:
https://[function-name].blob.core.windows.net:443/azure-webjobs-hosts/locks/linkfunctions/host?comp=lease
In my Azure Functions code I do not explicitly write to any container or file, but I have the following two application settings configured:
AzureWebJobsDashboard
AzureWebJobsStorage
So I filed a support ticket with Azure with the following questions:
Are the write operations triggered by these application settings? I believe so, but could you please confirm.
Will the write operations stop if I delete these application settings?
Could you please describe, at a high level, in what context these operations occur (e.g. logging, resource locking, other)?
and I got the following answers from Azure support team, respectively:
Yes, you are right. According to the log information, we can see “https://[function-name].blob.core.windows.net:443/azure-webjobs-hosts/locks/linkfunctions/host?comp=lease”.
This azure-webjobs-hosts folder is associated with the function app and is created by default when the function app is created. While the function app is running, it records these logs in the storage account configured with AzureWebJobsStorage.
You can't stop the write operations, because they record logs to the storage account that the Azure Functions runtime needs. Please do not remove the application setting AzureWebJobsStorage. The Azure Functions runtime uses this storage account connection string for all functions except HTTP-triggered functions. Removing this application setting will leave your function app unable to start. By the way, you can remove AzureWebJobsDashboard; that stops the monitoring dashboard, but not the operations above.
These operations record the runtime logs of the function app. They occur when our backend allocates an instance for running the function app.
The best place to find information about storage usage is Storage Analytics, especially Storage Analytics Logging.
There's a special blob container called $logs in the same storage account which will have detailed information about every operation performed against that storage account. You can view the blobs in that blob container and find the information.
If you don't see this blob container in your storage account, you will need to enable storage analytics on your storage account. However, considering you can see the metrics data, my guess is that it is already enabled.
Regarding the source of these write operations, have you enabled diagnostics for your Functions and App Service? These write diagnostic logs to blob storage. Storage Analytics itself also writes to the same account, which will cause additional write operations.
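If you'd rather pull those $logs entries down programmatically than browse them in a storage explorer, a small sketch along these lines might help (this assumes the classic Microsoft.WindowsAzure.Storage SDK, and the connection string is a placeholder):

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// Connect to the storage account whose write operations you are investigating.
CloudStorageAccount account = CloudStorageAccount.Parse("<storage-connection-string>");
CloudBlobClient blobClient = account.CreateCloudBlobClient();

// Storage Analytics writes its logs to the special $logs container.
CloudBlobContainer logsContainer = blobClient.GetContainerReference("$logs");

// A flat listing walks the blob/<year>/<month>/<day>/<hour> folder structure.
foreach (IListBlobItem item in logsContainer.ListBlobs(useFlatBlobListing: true))
{
    CloudBlockBlob logBlob = item as CloudBlockBlob;
    if (logBlob == null) continue;

    // Each line in a log blob describes one operation (PutBlob, LeaseBlob, ...).
    Console.WriteLine(logBlob.Name);
    Console.WriteLine(logBlob.DownloadText());
}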
In my case, I had an Azure Application Insights instance that generated around 10K transactions per minute on its storage account for Functions and App Services, even though there were only a few HTTPS requests among them. I'm not sure what triggered them, but once I removed Application Insights, everything returned to normal.

Microsoft Azure Reports "Your app experienced failure(s) due to a transient storage access issue."

I have an Azure Web App that continually reports "Your app experienced failure(s) due to a transient storage access issue." The suggested solution is "Explore Local Cache feature for your web app," but my web app exceeds the maximum storage (3 GB) for this option.
The problem mostly occurs between midnight and 6 am, when the site is LEAST active, but there seems to be an increasing number of occurrences during the day.
What are the underlying causes of this problem? Is this something to do with my web app, or is it the Azure infrastructure? In either case, how do I determine the underlying issue(s) and resolve them?
"Your app experienced failure(s) due to a transient storage access issue."
The Web Apps environment provides diagnostic functionality for logging information from both the web server and the web application. You could try enabling logging and checking the logs generated within that period of time.
According to the error, it seems that a temporary issue caused the app failure, and it suggests enabling Local Cache. You could follow the suggested solution and see whether it helps resolve the issue.
Besides that, you could try to scale your web app (which incurs additional charges) and check whether it mitigates the issue.
Updates:
As we know, App Service offers shared, persistent storage for the application. Something may be wrong with that shared storage when the instances in the farm access it, which may be the cause of the issue.
To determine the underlying issue, you may try to enable diagnostics logging for your web app. This should provide more information on what is happening at the storage level and what kind of activity is going on.

How to control and maintain Azure web jobs control panel logging?

I am using Azure WebJobs in a system to process XML data, received in real time via a web service, that is queued for later processing. Some of the WebJob functions are invoked at quite a high frequency (100s per minute). When I first trialed the system, the logging seemed to perform well. However, now that several weeks' worth of log data has accumulated, it seems to stop updating and displays "indexing in process" fairly constantly.
How do I 'purge' or clear out the logs?
Can I and should I turn off logging selectively for the frequently updated jobs? How can this be achieved?
My WebJobs are continuous and use the C# API. My question isn't the same as "Azure webjobs output logs indexing taking very long", although its answer is also relevant; I was specifically asking how to purge the logs and turn off logging selectively.
You would have configured the AzureWebJobsStorage application setting or connection string in your WebJob. The logs are stored in the blob storage of that storage account, so you should be able to clear them out manually there.
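If a lot has accumulated, a rough sketch like the following could delete old log blobs programmatically (the azure-webjobs-hosts container name and the 30-day cutoff are assumptions; check which containers in your storage account actually hold the logs):

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

CloudStorageAccount account = CloudStorageAccount.Parse("<AzureWebJobsStorage-connection-string>");
CloudBlobClient blobClient = account.CreateCloudBlobClient();

// Assumed container name; check your storage account for the containers the SDK writes to.
CloudBlobContainer container = blobClient.GetContainerReference("azure-webjobs-hosts");

// Delete log blobs older than 30 days (an arbitrary cutoff).
DateTimeOffset cutoff = DateTimeOffset.UtcNow.AddDays(-30);

foreach (IListBlobItem item in container.ListBlobs(useFlatBlobListing: true))
{
    CloudBlockBlob blob = item as CloudBlockBlob;
    if (blob != null && blob.Properties.LastModified < cutoff)
    {
        blob.DeleteIfExists();
    }
}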
Assuming that you are using the Azure WebJobs SDK, you can plug in a custom logger
// Register a custom TraceWriter so the WebJobs SDK routes its log output through it.
JobHostConfiguration config = new JobHostConfiguration();
config.Tracing.Tracers.Add(new CustomTraceWriter(TraceLevel.Info));

// Start the host and block while listening for triggers.
JobHost host = new JobHost(config);
host.RunAndBlock();
CustomTraceWriter can wrap all writes in a check against an application setting:
// CloudConfigurationManager.GetSetting returns a string, so parse it before testing.
bool loggingEnabled;
bool.TryParse(CloudConfigurationManager.GetSetting("EnableWebJobLogging"), out loggingEnabled);
if (loggingEnabled)
{
    ....
}
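For reference, a minimal sketch of what such a CustomTraceWriter might look like, assuming the WebJobs SDK's TraceWriter base class and using the console as a stand-in sink:

using System;
using System.Diagnostics;
using Microsoft.Azure;            // CloudConfigurationManager
using Microsoft.Azure.WebJobs.Host;

public class CustomTraceWriter : TraceWriter
{
    public CustomTraceWriter(TraceLevel level) : base(level)
    {
    }

    public override void Trace(TraceEvent traceEvent)
    {
        // Skip all output when logging is switched off via the app setting.
        bool loggingEnabled;
        bool.TryParse(CloudConfigurationManager.GetSetting("EnableWebJobLogging"), out loggingEnabled);
        if (!loggingEnabled)
        {
            return;
        }

        // Example sink: the console. Swap in blob or table storage as needed.
        Console.WriteLine("{0} [{1}] {2}", traceEvent.Timestamp, traceEvent.Level, traceEvent.Message);
    }
}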

Azure Storage Queue - Retrieving hidden messages

Is there a way to retrieve Azure Storage queue messages that are hidden? Background: I have been searching for an app/cmdlet/third-party tool that would let me back up the entire queue, including hidden messages (for troubleshooting purposes), but was unable to find one.
I have also considered writing a PowerShell script to download all messages, but couldn't find a way to retrieve hidden ones.
Help will be greatly appreciated!
While I don't know if such a tool exists for Azure Storage Queues, have you considered Azure Service Bus Topics and Subscriptions for your queueing system? Under a topic and subscription model, you can set up the following architecture:
[Topic] Place messages on this queue. They get replicated to each subscription.
[Subscription1] Your backup process reads this queue and persists messages.
[Subscription2] Your application reads from this queue for normal operation.
This has a few benefits:
It decouples your backup and production systems, making it less likely that, for example, a faulty backup script ends up impacting production behavior.
Locked ("hidden") messages apply only to the given subscription, so your backup queue will never have to deal with a message that is hidden or locked by the production queue.
Similar setups can certainly be achieved using storage queues, but Azure Service Bus has this sort of behavior built in.
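A rough sketch of that topology, assuming the classic Microsoft.ServiceBus.Messaging SDK and placeholder topic/subscription names:

using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

string connectionString = "<service-bus-connection-string>";
NamespaceManager namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

// One topic, two independent subscriptions: every message is delivered to both.
if (!namespaceManager.TopicExists("messages"))
    namespaceManager.CreateTopic("messages");
if (!namespaceManager.SubscriptionExists("messages", "backup"))
    namespaceManager.CreateSubscription("messages", "backup");
if (!namespaceManager.SubscriptionExists("messages", "app"))
    namespaceManager.CreateSubscription("messages", "app");

// The producer sends to the topic...
TopicClient topicClient = TopicClient.CreateFromConnectionString(connectionString, "messages");
topicClient.Send(new BrokeredMessage("payload"));

// ...and the backup process reads its own copy without touching the app's subscription.
SubscriptionClient backupClient = SubscriptionClient.CreateFromConnectionString(connectionString, "messages", "backup");
BrokeredMessage copy = backupClient.Receive();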
The simple answer is that you can't download all messages from a queue. Messages that are hidden are hidden from all other callers, including any third-party apps, so you can't read them other than from the application that made them hidden in the first place.
You mention that the reason for wanting to back up the queue is troubleshooting. Depending on where your issues lie, it might be worth taking a look at Azure Storage's analytics capabilities. The logging infrastructure allows you to log every single transaction and greatly simplifies many troubleshooting scenarios. Take a look here for more information: http://blogs.msdn.com/b/windowsazurestorage/archive/tags/analytics+2d00+logging+_2600_amp_3b00_+metrics/.
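If you prefer to turn that logging on from code rather than the portal, a hedged sketch using the classic storage SDK could look like this (the seven-day retention is an assumption):

using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;
using Microsoft.WindowsAzure.Storage.Shared.Protocol;

CloudStorageAccount account = CloudStorageAccount.Parse("<storage-connection-string>");
CloudQueueClient queueClient = account.CreateCloudQueueClient();

// Log every queue operation (read, write, delete) for later inspection in $logs.
ServiceProperties properties = queueClient.GetServiceProperties();
properties.Logging.LoggingOperations = LoggingOperations.All;
properties.Logging.RetentionDays = 7;   // assumption: keep a week of logs
properties.Logging.Version = "1.0";
queueClient.SetServiceProperties(properties);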
