I have been running my WebJobs for a few months now and the history includes hundreds of thousands of invocations, mainly from TimerTriggers. When I go to the "Functions" view of the WebJobs logs in the portal, I have noticed that my App Service plan shoots up to 100% CPU while I am sat on that page. The page constantly says "Indexing...."
When I close the "Functions" view down, the CPU goes straight back down to a few percent, its normal range.
I assume it must be down to the fact that it has been running for so long and the number of records to search through is so vast. I cannot see any option to archive or remove old records of when jobs ran.
Is there a way I can reduce the history of the jobs? Or is there another explanation?
I'm not familiar with Azure Web Jobs, but I am familiar with Azure Functions which is built on top of Web Jobs, so this might work.
In Azure Functions, each execution is stored in an Azure Storage table. There you can see all of the parameters that were passed in, as well as the result. I could go into that table and delete the records I do not need, so you might be able to do the same with Web Jobs.
Here is how to access this information:
See the "Table Storage" section of markheath.net/post/three-ways-view-error-logs-azure-functions.
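If it helps, here is a rough sketch of the kind of cleanup I mean, using the Azure.Data.Tables package. The table name and the 90-day cutoff are placeholders of mine, not something from the docs, so check which tables your dashboard storage account actually contains before running anything like this:

```csharp
// Rough sketch only: delete dashboard log entities older than 90 days.
// "AzureWebJobsHostLogsCommon" and the 90-day cutoff are placeholders -- inspect your
// storage account to see which tables the host really writes to.
using System;
using System.Linq;
using Azure.Data.Tables;

class PurgeOldInvocationLogs
{
    static void Main()
    {
        var connectionString = Environment.GetEnvironmentVariable("AzureWebJobsDashboard");
        var table = new TableClient(connectionString, "AzureWebJobsHostLogsCommon"); // placeholder table name
        var cutoff = DateTimeOffset.UtcNow.AddDays(-90);

        // Materialize first so we are not deleting rows while the query is still paging.
        var oldEntities = table.Query<TableEntity>()
                               .Where(e => e.Timestamp < cutoff)
                               .ToList();

        foreach (var entity in oldEntities)
        {
            table.DeleteEntity(entity.PartitionKey, entity.RowKey);
        }
    }
}
```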
Based on your description, I checked my WebJob and found the related logs for the Azure WebJobs dashboard in the azure-webjobs-dashboard blob container.
For the Invocation Log (recently executed functions), you can find the records as follows:
Note: the list records for the Invocation Log are under azure-webjobs-dashboard\functions\recent\flat, and the detailed invocation logs are under azure-webjobs-dashboard\functions\instances.
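If you want to prune that history yourself, a sketch along these lines might work. It assumes the Azure.Storage.Blobs package; the 90-day cutoff is an arbitrary choice of mine, so test against a copy of the container first:

```csharp
// Sketch: delete dashboard invocation records older than 90 days by removing blobs
// under the prefixes mentioned above. Blob names use forward slashes even though the
// paths are written with backslashes in the text.
using System;
using Azure.Storage.Blobs;

class PurgeDashboardBlobs
{
    static void Main()
    {
        var connectionString = Environment.GetEnvironmentVariable("AzureWebJobsDashboard");
        var container = new BlobContainerClient(connectionString, "azure-webjobs-dashboard");
        var cutoff = DateTimeOffset.UtcNow.AddDays(-90); // arbitrary retention window

        foreach (var prefix in new[] { "functions/recent/flat/", "functions/instances/" })
        {
            // GetBlobs with a prefix walks the virtual directory flat, so nested blobs are included.
            foreach (var blob in container.GetBlobs(prefix: prefix))
            {
                if (blob.Properties.LastModified < cutoff)
                {
                    container.DeleteBlob(blob.Name);
                }
            }
        }
    }
}
```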
I have a very simple ASP.NET Core hosted Blazor WASM application on an S1 App Service with 2 slots: Integration and Production. When I just publish to my Integration slot, that takes around 50 seconds. But when I swap the Integration slot with the Production slot, it takes over 7 minutes. Both slots are very slow after that, often taking over a minute before they react again. During the swap, neither of them responds at all.
There are only 2 settings and the connection string to change. And I don't have any manual warmup.
Is the swap functionality just not really meant to be used on such a low configuration, or can I adjust something in my configuration to speed things up?
Adding some information here regarding Azure App Service deployment slot swaps that might be helpful:
Some apps might require custom warm-up actions before the swap. The applicationInitialization configuration element in web.config lets you specify custom initialization actions. The swap operation waits for this custom warm-up to finish before swapping with the target slot.
During the swap operation the Web App's worker process may get restarted in order for some settings to take effect. Even though the swap does not proceed until the restarted worker process comes back online on every VM instance, that may still not be enough for the application to be completely ready to take on production traffic.
Try enabling the Application Initialization Module to completely warm up your application prior to swapping it into production.
A more detailed explanation of that process is available in "How to warm up Azure Web App during deployment slots swap".
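For reference, a minimal web.config sketch for the applicationInitialization element might look like the following. The initialization paths are just examples, so point them at whatever URLs exercise your app's startup work:

```xml
<!-- Sketch: ask IIS to hit these URLs before the swapped worker is considered warmed up. -->
<configuration>
  <system.webServer>
    <applicationInitialization>
      <!-- Example paths; replace them with pages that trigger your app's real startup cost. -->
      <add initializationPage="/" />
      <add initializationPage="/warmup" />
    </applicationInitialization>
  </system.webServer>
</configuration>
```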
To add to this: if the swap operation takes a long time to complete, you can also get information on it in the activity log. On your app's resource page in the portal, in the left pane, select Activity log. A swap operation appears in the log query as Swap Web App Slots. You can expand it and select one of the sub-operations or errors to see the details.
Please refer to the links below for more details:
https://learn.microsoft.com/en-us/azure/app-service/deploy-staging-slots#what-happens-during-a-swap
https://ruslany.net/2017/11/most-common-deployment-slot-swap-failures-and-how-to-fix-them/
https://ruslany.net/2019/06/azure-app-service-deployment-slots-tips-and-tricks/
There has been a previous question on this and the accepted answer was the Azure Elastic Job agent. The problem I have is that the feature is in preview and it still lacks a lot of functionality like diagnostics and alerting. I also find it to be very unreliable, as jobs get randomly cancelled because of service restarts.
Azure Automation Accounts also work, but they only allow an execution/running time of 3 hours. So if your maintenance takes more than 3 hours, this is not an option.
I have previously developed my own application for doing this, but the maintenance and management of this can become a headache.
Another alternative could be to just leverage Azure Data Factory perhaps, but this is a route I have not yet followed.
So what are people actually using to do long-running maintenance against Azure SQL databases that provides enough diagnostic information in case something goes wrong and has at least some level of alerting?
PS: The database I need to do maintenance on is not small.
I am using the default logging mechanism that an Azure WebJob provides, with a logger of type TextWriter. I have 3 functions in the same WebJob with extensive logging, and a large number of log entries are generated every minute. With the default WebJob settings, all the logs go into blobs in the storage account. I do not want my storage account to just keep on growing with months and months of old logs.
I need a way of cleaning the logs on a periodic basis. Is there any setting/configuration so that my logs get cleaned up periodically? Or should I write code to monitor the blob container 'azure-webjobs-hosts' and the files inside 'output-logs'? Is that the only place where the WebJob stores my application's logs by default?
I tried searching the web but couldn't find any related posts. Any pointers would be of great help.
Based on my experience, you can achieve this by defining the Azure Storage container name per period. Use a weekly/monthly/daily container name, then use a timer-triggered function to delete old containers. For example, if you need to delete weekly data, write that week's logs to a week-specific container and delete it the following week via the timer trigger, as sketched below.
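A sketch of that timer-triggered cleanup might look like the following. It assumes the WebJobs SDK timer extension and the Azure.Storage.Blobs package; the weekly "logs-{year}-w{week}" naming scheme and the CRON schedule are just examples, not anything your app already uses:

```csharp
// Sketch: every Monday, delete the container holding the previous week's logs.
// Assumes the JobHost is configured with the Timers extension (config.UseTimers()).
using System;
using System.Globalization;
using Azure.Storage.Blobs;
using Microsoft.Azure.WebJobs;

public class CleanupFunctions
{
    // Runs at 01:00 every Monday (CRON fields: sec min hour day month day-of-week).
    public static void DeleteLastWeeksLogs([TimerTrigger("0 0 1 * * 1")] TimerInfo timer)
    {
        var lastWeek = DateTime.UtcNow.AddDays(-7);
        var week = ISOWeek.GetWeekOfYear(lastWeek);
        var containerName = $"logs-{lastWeek.Year}-w{week}"; // example per-week naming scheme

        var service = new BlobServiceClient(Environment.GetEnvironmentVariable("AzureWebJobsStorage"));

        // Deleting the whole container drops all of that week's log blobs in one call.
        service.GetBlobContainerClient(containerName).DeleteIfExists();
    }
}
```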
I'm using the Azure WebJobs dashboard for monitoring my jobs. I'm not happy with how far I have to drill into the interface to determine what's happening. I'd like to leverage the "Status" field on the WebJob details page to show whether a particular invocation needs attention, and to mark an invocation as a failure in cases where I consider it one, even if it didn't blow up.
I've searched through the Azure WebJobs docs and the features of the Azure WebJobs SDK Extensions package with no luck (but I don't doubt I might have missed it). Is manually setting this field possible?
As far as I know, the Azure WebJobs dashboard does not let you set the Status field yourself. If you'd like to display WebJob run details without clicking through the interface, you could call the WebJobs API to get the job run history, retrieve output or error information from the logs by requesting output_url or error_url, and then create a custom dashboard populated with those output and error details.
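As an illustration, a sketch of calling the Kudu WebJobs API for a triggered job's run history might look like this. The site name, job name, and credentials are placeholders, and note this is at the job-run level, not per function invocation:

```csharp
// Sketch: query the Kudu WebJobs API for a triggered WebJob's run history and print
// each run's status. Site name, job name, and deployment credentials are placeholders.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

class WebJobHistory
{
    static async Task Main()
    {
        const string site = "mysite";                       // <site>.scm.azurewebsites.net (placeholder)
        const string jobName = "MyTriggeredJob";             // placeholder WebJob name
        const string user = "$mysite";                       // deployment user from the publish profile
        const string password = "<publish-profile-password>";

        using var client = new HttpClient();
        var token = Convert.ToBase64String(Encoding.ASCII.GetBytes($"{user}:{password}"));
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

        // Kudu endpoint that lists past runs of a triggered WebJob.
        var url = $"https://{site}.scm.azurewebsites.net/api/triggeredwebjobs/{jobName}/history";
        var json = await client.GetStringAsync(url);

        using var doc = JsonDocument.Parse(json);
        foreach (var run in doc.RootElement.GetProperty("runs").EnumerateArray())
        {
            // Each run exposes a status plus output_url / error_url for the full logs.
            Console.WriteLine($"{run.GetProperty("start_time")} {run.GetProperty("status")} {run.GetProperty("output_url")}");
        }
    }
}
```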
No, you can't set it yourself.
The Kudu APIs may not give you enough detail for individual function instances.
Consider putting a feature request on https://github.com/Azure/azure-webjobs-sdk/
There has been some more investment in exposing a logging API directly over the storage account.
I'm trying to figure out a solution for recurring data aggregation of several thousand remote XML and JSON data files, by using Azure queues and WebJobs to fetch the data.
Basically, an input endpoint URL of some sort would be called (with a data URL as a parameter) on an Azure website/app. This should trigger a WebJobs background job (or can it run continuously and check the queue periodically for new work?), fetch the data URL, and then call back an external endpoint URL on completion.
Now the main concern is the volume and its performance/scaling/pricing overhead. There will be around 10,000 URLs to be fetched every 10-60 minutes (most URLs will be fetched once every 60 minutes). With regards to this scenario of recurring high-volume background jobs, I have a couple of questions:
Is Azure WebJobs (or Workers?) the right option for background processing at this volume, and can it scale accordingly?
For this sort of volume, which Azure website tier will be most suitable (comparison at http://azure.microsoft.com/en-us/pricing/details/app-service/)? Or would only a Cloud Service or VM(s) work at this scale?
Any suggestions or tips are appreciated.
Yes, Azure WebJobs is an ideal solution to this. Azure WebJobs will scale with your Web App (formerly Websites). So, if you increase your web app instances, you will also increase your web job instances. There are ways to prevent this but that's the default behavior. You could also setup autoscale to automatically scale your web app based on CPU or other performance rules you specify.
It is also possible to scale your web job independently of your web front end (WFE) by deploying the web job to a web app separate from the web app where your WFE is deployed. This has the benefit of not taking up machine resources (CPU, RAM) that your WFE is using while giving you flexibility to scale your web job instances to the appropriate level. Not saying this is what you should do. You will have to do some load testing to determine if this strategy is right (or necessary) for your situation.
You should consider at least the Basic tier for your web app. That would allow you to scale out to 3 instances if you needed to and also removes the CPU and Network I/O limits that the Free and Shared plans have.
As for the queue, I would definitely suggest using the WebJobs SDK and let the JobHost (from the SDK) invoke your web job function for you instead of polling the queue. This is a really slick solution and frees you from having to write the infrastructure code to retrieve messages from the queue, manage message visibility, delete the message, etc. For a working example of this and a quick start on building your web job like this, take a look at the sample code the Azure WebJobs SDK Queues template punches out for you.
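For illustration, the scaffolding that template gives you looks roughly like this (a sketch only, in the SDK 1.x/2.x style; the "data-urls" queue name is just an example for your URL-fetching scenario):

```csharp
// Sketch of a queue-triggered WebJob: the JobHost polls the storage queue and invokes
// the function for each message, handling visibility, retries, and deletion for you.
using System;
using System.IO;
using Microsoft.Azure.WebJobs;

public class Functions
{
    // Invoked automatically whenever a new message lands on the "data-urls" queue.
    public static void ProcessQueueMessage([QueueTrigger("data-urls")] string dataUrl, TextWriter log)
    {
        log.WriteLine($"Fetching {dataUrl}");
        // TODO: download the XML/JSON payload and call back the external endpoint on completion.
    }
}

public class Program
{
    public static void Main()
    {
        // AzureWebJobsStorage / AzureWebJobsDashboard connection strings come from app settings.
        var host = new JobHost();
        host.RunAndBlock(); // keeps the continuous WebJob alive and listening on the queue
    }
}
```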