Run a task repeatedly after some interval in Windows Azure - c#-4.0

I have a task (a couple of C# functions) that updates the DB at a regular interval. How do I achieve this on Windows Azure (assuming that after deployment the DB will also move to SQL Azure)?

There are several options:
- use a 3rd-party job scheduler to initiate the process remotely
- deploy a single "worker instance" that uses the Task Scheduler built into Windows Server 2008 to schedule the processes (this will require startup tasks)
- deploy a timer process as part of another role; just make sure you put in a traffic-cop or singleton-style pattern to prevent multiple instances from simultaneously trying to execute the same process.

You can develop and deploy a Windows Azure Compute Worker Role. This would be the right tool for long-running and background operations hosted in Azure. Depending on what your task is doing (how CPU-intensive it is), you could choose a very small role size to minimize cost.
You could probably also put such a task in a preexisting web or worker role (but that might not be a clean solution depending on what your task is doing and how reliably it should run).
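For illustration, here is a minimal sketch of a worker role's Run loop, assuming your existing C# functions are wrapped in a hypothetical UpdateDatabase() method and the interval is five minutes:

    using System;
    using System.Diagnostics;
    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WorkerRole : RoleEntryPoint
    {
        public override void Run()
        {
            while (true)
            {
                try
                {
                    UpdateDatabase();   // your existing C# functions hitting SQL Azure
                }
                catch (Exception ex)
                {
                    Trace.TraceError("Periodic update failed: {0}", ex);
                }
                Thread.Sleep(TimeSpan.FromMinutes(5));   // whatever interval you need
            }
        }

        private void UpdateDatabase()
        {
            // open a SqlConnection to SQL Azure and run your updates here
        }
    }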

Related

Deallocation of Azure VM on a task completion

I need an Azure VM (Ubuntu) to do some task (a Java application) every 10 minutes. Because the task usually lasts less than a minute, I would save money if I could start the machine every 10 minutes and stop it when the task completes. I learned that I can schedule start and stop times in an Automation account, but it would be more optimal to stop the VM the moment the task is completed. Is there a simple way to do that?
This really sounds like a job for Azure Batch. If you are looking for an IaaS solution, Azure Batch will do the job for you. Have a look at it: https://azure.microsoft.com/en-gb/services/batch/#overview.
It allows you to use VMs with your preferred OS (in Azure Batch a VM is called a node) and run a set of tasks. Once finished, the VM will be de-allocated.
A pool contains a set of nodes; you submit jobs against a pool, and each job contains tasks. A task can be, for example, a command line that runs a specific app. So for instance you could just run example.exe 1 2 on a Windows OS, or the equivalent command line on an Ubuntu OS.
The power here is that the tasks are allocated to run on the VMs when you add them to the job, and the VMs are disposed of once finished, so you only pay for the compute time.
The disadvantage is that it is a stateless VM, so anything you need installed or stored has to be handled another way. Azure Batch lets you pre-install a program (for example your Java application) each time it initializes. Also, if you are using files and/or expecting files to be created, you need blob storage to support this: store the input files in blob storage, and have your program write its output back to blob storage.
Finally, the scheduler: this really depends on how you want to handle it. If you have a local server, or a server on Azure that is already running 24/7, you can add a scheduled job that runs a program to add the task to the Azure Batch job. Or, if you don't mind using Azure Functions, you can add a timer-triggered Azure Function that adds a task to the job (a sketch of that call follows below). There are multiple ways of dealing with this; you may already have an existing solution.
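To make that concrete, adding a task to an existing Batch job with the Microsoft.Azure.Batch client library looks roughly like the sketch below; the account URL, key, job id, and command line are all placeholders you would replace with your own values.

    using System;
    using Microsoft.Azure.Batch;
    using Microsoft.Azure.Batch.Auth;

    class BatchTaskSubmitter
    {
        // Call this from your scheduler (a cron job, Task Scheduler entry, or timer-triggered Azure Function).
        public static void SubmitTask()
        {
            // Placeholder account values - substitute your own Batch account details.
            var credentials = new BatchSharedKeyCredentials(
                "https://mybatch.westeurope.batch.azure.com", "mybatch", "<account-key>");

            using (BatchClient batchClient = BatchClient.Open(credentials))
            {
                // The job (and the pool it targets) are assumed to already exist.
                var task = new CloudTask(
                    "run-" + DateTime.UtcNow.Ticks,
                    "/bin/bash -c 'java -jar myapp.jar'");
                batchClient.JobOperations.AddTask("my-job", task);
            }
        }
    }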
Hope you find this useful!

How to tell Azure not to remove particular server during scale down

I have a .NET app running on Azure App Service.
Auto-scale is set up, and it sometimes goes up to 10 instances and then back down to 3.
I have a background task (Hangfire) that runs every hour on one of the instances (I don't know which one; it is random).
Is there a way to tell Azure, during scale down, not to remove the server where the task is currently executing on?
You should never rely on such a thing; instead, design your background job processors to be able to shut down gracefully.
This is why you should use cancellation tokens in your jobs, and a job should be able to pick up from where it left off.
Hangfire has its own implementation for this; in other cases you can use the .NET CancellationToken.
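A minimal sketch of what that looks like with Hangfire (the recurring job id, schedule, and work-item helpers are made up; Hangfire injects its own IJobCancellationToken when the job runs):

    using Hangfire;

    public class HourlyJob
    {
        // Register once at startup; Hangfire supplies the real token at run time.
        public static void Schedule()
        {
            RecurringJob.AddOrUpdate("hourly-job", () => Run(JobCancellationToken.Null), Cron.Hourly());
        }

        public static void Run(IJobCancellationToken cancellationToken)
        {
            foreach (var item in LoadPendingWork())
            {
                // Throws when the server is shutting down (e.g. the instance is being scaled in),
                // so the job stops cleanly and Hangfire retries it on another instance.
                cancellationToken.ThrowIfCancellationRequested();
                Process(item);   // process each item idempotently so a retry can resume safely
            }
        }

        // Hypothetical helpers standing in for your own work-item logic.
        private static System.Collections.Generic.IEnumerable<object> LoadPendingWork() { yield break; }
        private static void Process(object item) { }
    }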

How to host long running process into Azure Cloud?

I have a C# console application which extracts a 15 GB Firebird database file at a server location into multiple files and loads the data from those files into a SQL Server database. The console application uses the System.Threading.Tasks.Parallel class to perform the data load from files to SQL Server in parallel.
It is a weekly process and it takes 6 hours to complete.
What is the best option for moving this (console application) process to the Azure cloud - a WebJob, a WorkerRole, or any other cloud service?
How can I reduce the execution time (6 hrs) after moving to the cloud?
How do I implement the suggested option? Please provide pointers or code samples etc.
Your help with detailed comments is very much appreciated.
Thanks
Bhanu.
Let me give some thoughts on this question of yours:
"What is the best option for moving this (console application) process to
the Azure cloud - a WebJob, a WorkerRole, or any other cloud service?"
First, you can achieve the task with both a WebJob and a WorkerRole, but I would suggest going with a WebJob.
Pros of a WebJob:
- Deployment time is quicker; you can turn your console app, without any change, into a continuously running WebJob within minutes (https://azure.microsoft.com/en-us/documentation/articles/web-sites-create-web-jobs/)
- Built-in timer support, whereas with a WorkerRole you would need to handle this on your own (see the sketch after this list)
- Fault tolerance: when your WebJob fails, there is built-in resume logic
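As an illustration of the timer support, a minimal scheduled WebJob using the Microsoft.Azure.WebJobs.Extensions package might look like the sketch below; the CRON expression and method names are just examples, and your existing extraction/load code would go inside the triggered method.

    using System;
    using System.IO;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.Timers;

    public class Program
    {
        static void Main()
        {
            var config = new JobHostConfiguration();
            config.UseTimers();                    // enable the TimerTrigger extension
            new JobHost(config).RunAndBlock();
        }

        // Six-field CRON (sec min hour day month day-of-week): every Sunday at 02:00.
        public static void WeeklyImport([TimerTrigger("0 0 2 * * 0")] TimerInfo timer, TextWriter log)
        {
            log.WriteLine("Starting weekly Firebird -> SQL Server import at {0}", DateTime.UtcNow);
            // call your existing extraction and parallel load code here
        }
    }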
You might want to check out Azure Functions. You pay only for the processing time you use and there doesn't appear to be a maximum run time (unlike AWS Lambda).
They can be set up on a schedule or kicked off from other events.
If you are already doing work in parallel, you could break out some of the parallel tasks into separate Azure Functions. Aside from that, how to speed things up would require specific knowledge of what you are trying to accomplish.
In the past when I've tried to speed up work like this, I would start by emitting log messages during the processing that contain the current time or that calculate the duration (using the Stopwatch class). Then find out which areas can be improved. The slowness may also be due to a slowdown on the SQL Server side. More investigation would be needed on your part. But the first step is always capturing metrics.
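For example, a minimal way to capture such timings around each stage (the stage methods here are hypothetical placeholders for your own code):

    using System;
    using System.Diagnostics;

    class TimingExample
    {
        static void Main()
        {
            var stopwatch = Stopwatch.StartNew();
            ExtractFromFirebird();                                 // hypothetical stage 1
            Console.WriteLine("Extract took {0}", stopwatch.Elapsed);

            stopwatch.Restart();
            LoadIntoSqlServer();                                   // hypothetical stage 2
            Console.WriteLine("Load took {0}", stopwatch.Elapsed);
        }

        static void ExtractFromFirebird() { /* your extraction code */ }
        static void LoadIntoSqlServer()   { /* your bulk-load code */ }
    }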
Since Azure Functions can scale out horizontally, you might want to first break out the data from the files into smaller chunks and let the functions handle each chunk. Then spin up multiple parallel processing of those chunks. Be sure not to spin up more than your SQL Server can handle.
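A rough sketch of that fan-out shape with a queue-triggered Azure Function (the queue name and the processing steps in the comments are hypothetical):

    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Host;

    public static class ChunkProcessor
    {
        // One queue message per chunk; the Functions runtime scales out and runs these in parallel.
        [FunctionName("ProcessChunk")]
        public static void Run([QueueTrigger("file-chunks")] string chunkBlobName, TraceWriter log)
        {
            log.Info("Loading chunk " + chunkBlobName);
            // Download the chunk from blob storage and bulk-insert it into SQL Server here.
            // Keep the batch size modest so the parallel functions don't overwhelm SQL Server.
        }
    }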

Will an Azure Web job run on multiple instances?

I am planning to deploy a website to Azure, that will utilize Web Jobs.
If the site is scaled up to run on multiple instances, should I expect the job to be started on all the instances as well (running concurrently), or can I expect there to only be one instance of the job running at a time ? Can this be configured in Azure ?
I assume you are using Continuous WebJobs and not Manual/Scheduled. In that case, it will run on all the instances at once. For this to happen correctly, you need to be running in Standard mode, and have the Always On setting enabled.
If you don't want it to run on all instances, you can set it to be in 'singleton' mode by creating a file called settings.job alongside your WebJob files. It should contain:
{ "is_singleton": true }
Note that when using the Kudu Process Explorer UI, you are connecting to a single instance and won't see processes from other instances. Instead, use the Processes UI built into the Preview Portal (https://portal.azure.com/), which shows processes in all instances.

Long running (or forever) task on Windows Azure

I need to write some data to a database every 50 seconds or so. It's similar to a Windows service that's running in the background and silently doing its job. Starting and stopping is not an option in my case, as I need a small amount of previously inserted data to be kept in memory. What's the best solution for this when using Windows Azure or AWS?
Thank you.
With Windows Azure, you can choose either a Web or Worker role (both basically Windows Server 2008 R2 or SP2) and have some type of timed event, as @Lucifure suggested. You could also run a scheduler, like Quartz.NET, or take advantage of Windows Azure queues or Service Bus queues to have messages show up at a certain time. However: you cannot have a "forever" task in a given role instance, in that your VM instances will periodically be rebooted (e.g. for host OS maintenance every month). With role shutdowns you'll get notice, which you can handle in Stopping() or OnStop(). If you have multiple instances, you can use a scheduler or queue to ensure your events still trigger every 50 seconds or so and get handled across multiple instances (but only by one instance at any given time).
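For instance, the Service Bus approach can schedule a message to become visible roughly 50 seconds out (this sketch uses the older Microsoft.ServiceBus.Messaging client; the connection string and queue name are placeholders):

    using System;
    using Microsoft.ServiceBus.Messaging;

    class ScheduledTick
    {
        static void SendNextTick(string connectionString)
        {
            var client = QueueClient.CreateFromConnectionString(connectionString, "timer-queue");
            var message = new BrokeredMessage("tick")
            {
                // The message only becomes visible to receivers at this time.
                ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddSeconds(50)
            };
            client.Send(message);
            // Whichever instance receives the message does the work and calls SendNextTick again.
        }
    }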
To preserve your in-memory information, one idea is to store that information in a cache. You have two choices:
- Distributed (shared) cache service, which has been around for some time now. It runs independently of your role instances.
- In-memory cache, just introduced in June 2012. Assuming you have more than one instance, the cache is spread across those instances. You can even run the cache inside the memory of your existing roles.
More information on caching is here.
There are a few StackOverflow answers regarding Quartz.net and Windows Azure, such as this one.
On Windows Azure, you can use a Worker Role, which can do this. It can be as simple as a while loop.
Try this article for an introduction.
http://www.c-sharpcorner.com/uploadfile/40e97e/windows-azu-creating-and-deploying-worker-role/
You could set up a System.Threading.Timer to fire every 50 seconds or so, and do your work whenever the event occurs.
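A minimal sketch of that inside a worker role, assuming a hypothetical WriteToDatabase() method holding your logic (data kept in instance fields stays in memory for the lifetime of the instance):

    using System;
    using System.Threading;
    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WorkerRole : RoleEntryPoint
    {
        private Timer _timer;

        public override void Run()
        {
            // Fire immediately, then every 50 seconds.
            _timer = new Timer(_ => WriteToDatabase(), null,
                               TimeSpan.Zero, TimeSpan.FromSeconds(50));
            Thread.Sleep(Timeout.Infinite);   // keep the role instance alive
        }

        private void WriteToDatabase()
        {
            // write the accumulated in-memory data to the database here
        }
    }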
