I am planning to deploy a website to Azure that will utilize WebJobs.
If the site is scaled up to run on multiple instances, should I expect the job to be started on all the instances as well (running concurrently), or can I expect there to be only one instance of the job running at a time? Can this be configured in Azure?
I assume you are using Continuous WebJobs and not Manual/Scheduled. In that case, it will run on all the instances at once. For this to happen correctly, you need to be running in Standard mode, and have the Always On setting enabled.
If you don't want it to run on all instances, you can set it to be in 'singleton' mode by creating a file called settings.job alongside your WebJob files. It should contain:
{ "is_singleton": true }
Note that when using the Kudu Process Explorer UI, you are connecting to a single instance, and won't see processes from other instances. Instead, use the Processes UI built in the Preview Portal (https://portal.azure.com/), which shows processes in all instances.
I have a small website that depends on Azure WebJobs for some background tasks.
Due to my Basic plan, the WebJob just stops working within 1-2 hours after deployment to Azure; to keep it running I would need to upgrade my plan.
In the meantime I run the task on my local machine, and it works fine unless there is a wireless/network failure; apparently it is not resilient to network failures, so it does not keep running once the network is back up.
I have the following basic setup:
hostBuilder.ConfigureWebJobs(webJobsBuilder =>
{
    webJobsBuilder
        .AddTimers()
        .AddAzureStorageCoreServices()
        .AddAzureStorage()
        .AddServiceBus();
});
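For reference, a timer-triggered function (enabled by the AddTimers() call above) looks roughly like this; it is a minimal sketch with a placeholder schedule, name, and body rather than my exact code:

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public class Functions
{
    // Placeholder timer function: the CRON schedule and the work done here are illustrative only.
    public static void SyncData([TimerTrigger("0 */10 * * * *")] TimerInfo timerInfo, ILogger logger)
    {
        logger.LogInformation("Timer fired at {time}", DateTime.UtcNow);
        // ... background work goes here ...
    }
}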
What I am thinking of doing is running another console app that monitors the WebJobs executable and restarts the process if it no longer exists. But before I do that, I want to learn whether there is anything else I can configure in the WebJobs app. I am considering Windows Task Scheduler as well, but I don't want to overcomplicate things.
I need an Azure VM (Ubuntu) to run a task (a Java application) every 10 minutes. Because the task usually lasts less than a minute, I would save money if I could start the machine every 10 minutes and stop it when the task completes. I learned that I can schedule start and stop times in an Automation account, but it would be more optimal to stop the VM at the very moment the task is completed. Is there a simple way to do that?
This really sounds like a job for Azure Batch. If you are looking for an IaaS solution, Azure Batch will do the job for you. Have a look at it: https://azure.microsoft.com/en-gb/services/batch/#overview.
It allows you to use VMs with your preferred OS (in Azure Batch a VM is called a node) and run a set of tasks. Once finished, the VM will be de-allocated.
So you create a pool of nodes, each job is associated with a pool, and each job contains tasks. A task can be, for example, a command line that runs a specific app: you could run example.exe 1 2 on Windows, or the equivalent command line on Ubuntu.
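To make that hierarchy concrete, here is a rough sketch of the one-time pool and job setup using the Microsoft.Azure.Batch .NET SDK (the account URL, key, IDs, VM size, and image are all placeholder values, and the same setup can also be done through the portal; in practice you would likely add an autoscale formula so the pool scales to zero nodes between runs):

using System.Threading.Tasks;
using Microsoft.Azure.Batch;
using Microsoft.Azure.Batch.Auth;

public static class BatchSetup
{
    // One-time setup: a pool of Ubuntu nodes and a job bound to that pool.
    public static async Task CreatePoolAndJobAsync()
    {
        var credentials = new BatchSharedKeyCredentials(
            "https://<account>.<region>.batch.azure.com", "<account>", "<key>");

        using (BatchClient batchClient = BatchClient.Open(credentials))
        {
            CloudPool pool = batchClient.PoolOperations.CreatePool(
                poolId: "java-task-pool",
                virtualMachineSize: "Standard_A1_v2",
                virtualMachineConfiguration: new VirtualMachineConfiguration(
                    new ImageReference(offer: "UbuntuServer", publisher: "Canonical", sku: "18.04-LTS"),
                    nodeAgentSkuId: "batch.node.ubuntu 18.04"),
                targetDedicatedComputeNodes: 1);
            await pool.CommitAsync();

            // Tasks added to this job will run on the pool's nodes.
            CloudJob job = batchClient.JobOperations.CreateJob(
                "java-task-job", new PoolInformation { PoolId = "java-task-pool" });
            await job.CommitAsync();
        }
    }
}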
The power here is that it allocates the tasks to run on the nodes when you add them to the job, and the VM is then disposed of once finished, so you only pay for the compute time.
The disadvantage is that it is a stateless VM, so anything you need installed or stored has to be handled another way. Azure Batch lets you pre-install a program (for example your Java application) each time it initializes. Also, if you are using files and/or expecting files to be created, you would need blob storage to support this: store the input files in blob storage, and have your program write its results back to blob storage.
Finally, the scheduler: this really depends on how you want to deal with it. If you have a local server, or a server on Azure that is already running 24/7, you can add a scheduled job that runs a program which adds the task to the Azure Batch job. Or, if you don't mind using Azure Functions, you can add a timer-triggered Azure Function that adds a task to the job. There are multiple ways of dealing with this, and you may already have an existing solution.
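For instance, with the Azure Functions route, a timer-triggered function could add a task to the Batch job roughly like this (a sketch only; it assumes the pool and job above already exist, and the schedule, account details, and command line are placeholders):

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Batch;
using Microsoft.Azure.Batch.Auth;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class EnqueueJavaTask
{
    // Fires every 10 minutes and queues one Batch task; the task's command line
    // runs the Java app on one of the pool's nodes.
    [FunctionName("EnqueueJavaTask")]
    public static async Task Run([TimerTrigger("0 */10 * * * *")] TimerInfo timer, ILogger log)
    {
        var credentials = new BatchSharedKeyCredentials(
            "https://<account>.<region>.batch.azure.com", "<account>", "<key>");

        using (BatchClient batchClient = BatchClient.Open(credentials))
        {
            var task = new CloudTask(
                $"run-{DateTime.UtcNow:yyyyMMddHHmmss}",
                "/bin/bash -c 'java -jar mytask.jar'");

            await batchClient.JobOperations.AddTaskAsync("java-task-job", task);
            log.LogInformation("Queued Batch task {id}", task.Id);
        }
    }
}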
Hope you find this useful!
I have an all-RESTful service built using .NET Core 1.1. No front end. It includes a couple of background tasks that run every other hour. These tasks get bootstrapped (invoked) in the Configure method of the Startup class.
For some reason, nothing gets called when I publish my app onto Azure. It seems that nothing gets run in the Startup class at all. I have to explicitly invoke a RESTful service to "start" it up, and then everything seems to run fine.
I believe I'm doing something wrong here. Is there a way to bootstrap my background tasks immediately when the application gets published onto Azure? I don't want to have to manually invoke a REST service just for the app to start up.
"tasks get bootstrapped (invoked) under the Configure method in the Startup class."
You could manage to make it work if the application runs on a single instance. All you have to do is ping it from an external service like Monitis.
However, we need to deploy two or more instances in Azure to avoid a single point of failure. That means we should not run background tasks inside those instances; otherwise we will end up with race conditions.
For background tasks, you might want to consider using:
- Azure Scheduler
- Azure WebJobs (a sketch of this option is below)
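For example, the every-other-hour tasks could be moved into a small console app deployed as a triggered (scheduled) WebJob, with the schedule kept in a settings.job file next to the executable (note that the WebJobs scheduler needs Always On to fire reliably). A minimal sketch, with placeholder names and a placeholder body:

using System;

public static class Program
{
    // Deployed as a triggered WebJob; each scheduled run executes Main once and exits.
    public static void Main()
    {
        Console.WriteLine($"Background task started at {DateTime.UtcNow:O}");
        // ... do the work that the Configure-based bootstrap used to do ...
    }
}

settings.job (next to the compiled executable), running the job every other hour:

{ "schedule": "0 0 */2 * * *" }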
I have a few continuous WebJobs set up on an Azure website scaled to two to three large instances (Standard mode with Always On). My jobs only ever run on one of the w3wp processes. I need these to scale out, but they won't. I've watched a few videos and read the docs. I have no settings.job file or anything else set that should be limiting these.
Here is the source to my job runner
"Note that when using the Kudu UI, you are connection to a single instance, and won't see processes from other instances. Try using the Processes UI built in the Preview Portal, which shows processes in all instances. I think you'll find that your WebJobs are running everywhere." #DavidEbbo
I have a task (a couple of C# functions) that updates the DB at a regular interval. How do I achieve this on Windows Azure (assuming that after deployment the DB would also move to SQL Azure)?
There are several options:
- use a 3rd party job scheduler to initiate the process remotely
- deploy a single "worker instance" that uses the Task Scheduler built into Server 2008 to schedule the processes (this will require startup tasks)
- deploy a timer process as part of another role; just make sure you put in a traffic cop or singleton-style pattern to prevent multiple instances from simultaneously trying to execute the same process (see the sketch after this list).
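A common way to implement that traffic-cop/singleton pattern is a blob lease: every instance tries to acquire a lease on a well-known blob, and only the instance that holds the lease runs the job. A rough sketch using the Azure.Storage.Blobs SDK (the connection string, container, and blob names are placeholders):

using System;
using System.Threading.Tasks;
using Azure;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Specialized;

public static class SingletonGuard
{
    // Returns true only for the instance that wins the lease, so the scheduled
    // DB update runs on exactly one instance at a time.
    public static async Task<bool> TryBecomeLeaderAsync(string connectionString)
    {
        var container = new BlobContainerClient(connectionString, "locks");
        await container.CreateIfNotExistsAsync();

        BlobClient blob = container.GetBlobClient("db-update-lock");
        if (!(await blob.ExistsAsync()).Value)
        {
            await blob.UploadAsync(BinaryData.FromString("lock"), overwrite: true);
        }

        try
        {
            // 60 seconds is the longest finite lease; renew it while the work runs.
            await blob.GetBlobLeaseClient().AcquireAsync(TimeSpan.FromSeconds(60));
            return true;
        }
        catch (RequestFailedException ex) when (ex.Status == 409)
        {
            return false; // another instance already holds the lease
        }
    }
}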
You can develop and deploy a Windows Azure Compute Worker Role. This would be the right tool for long running and background operations hosted in Azure. Depending on what your task is doing (how CPU intensive it is) you could choose a very small role size to minimize cost.
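If you go the worker role route, the role's Run loop can simply do the work on an interval; a minimal sketch (the interval and method names are placeholders):

using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    // Long-running loop; the role instance keeps this alive while deployed.
    public override void Run()
    {
        while (true)
        {
            UpdateDatabase();                       // your existing C# functions go here
            Thread.Sleep(TimeSpan.FromMinutes(30)); // placeholder interval
        }
    }

    private void UpdateDatabase()
    {
        // ... call the code that updates the SQL Azure database ...
    }
}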
You could probably also put such a task in a preexisting web or worker role (but that might not be a clean solution depending on what your task is doing and how reliably it should run).