Make the Azure Batch job schedule not wait on the previous iteration

I have an Azure Batch service set up with a job schedule that runs every minute.
The job manager task creates 3-10 tasks within the same job.
The tasks are usually very fast, but occasionally one of them takes an extremely long time to complete.
When that happens, the next iteration of the job manager task does not begin: the schedule waits until all the tasks from the previous iteration have completed.
Is there a way to ensure that the job schedule keeps creating a version of the job every minute even if all the tasks from its previous iteration have not been completed?
I know one option is to make the job manager task create additional jobs instead of tasks. But preferably, I was hoping there is some configuration at the job schedule level that I can turn on that will allow the schedule to keep creating jobs without depending on the completion of the previous iteration.

This seems more like a design question. AFAIK, no, duplicate active jobs should not be doable from the Azure Batch perspective (I stand corrected if this is somehow doable).
To think the design through further, you can read the various design recommendations via the Azure Batch technical overview page, or posts like:
How to use Azure Batch in an event based design and terminate/cleanup finished jobs or
Add Tasks to a running Azure batch job and manually control termination
I think simplicity will serve you better, e.g. handling each iteration with a unique job name or something of that sort, but you will know your scenario best. Hope this helps.

Currently, a Job Schedule can have at most one active Job under it at any given time (link), so the behaviour you're seeing is expected.
We don't have any simple feature you can just "turn on" to achieve concurrent jobs from a single job schedule - but I do have a suggestion:
Instead of using the JobSchedule to run all the processing directly, use it to create "worker" jobs that do the processing.
E.g.
At 10:03 am, your job schedule triggers to create job processing-20191031-1003.
At 10:04 am, your job schedule triggers to create job processing-20191031-1004.
At 10:05 am, your job schedule triggers to create job processing-20191031-1005.
and so on
Because the only thing your job schedule does is create another job, it will finish very quickly, ensuring the next job is created on time.
Since your existing jobs already create a variable number of tasks (you said 3-10 tasks, above), I'm hoping this won't be a very complex change for your code.
Note that you will need to ensure your concurrent worker jobs don't step on each other's toes by trying to do the same work multiple times.
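To make the pattern concrete, here is a minimal sketch using the Azure Batch Java SDK (com.microsoft.azure.batch). The pool id, job id format, environment variable names, and command lines are all hypothetical; the point is that the schedule-triggered code only creates a uniquely named worker job and returns immediately.

import com.microsoft.azure.batch.BatchClient;
import com.microsoft.azure.batch.auth.BatchSharedKeyCredentials;
import com.microsoft.azure.batch.protocol.models.PoolInformation;
import com.microsoft.azure.batch.protocol.models.TaskAddParameter;

import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class SpawnWorkerJob {
    public static void main(String[] args) throws Exception {
        // Account details are placeholders read from the environment.
        BatchClient client = BatchClient.open(new BatchSharedKeyCredentials(
                System.getenv("AZ_BATCH_URL"),
                System.getenv("AZ_BATCH_ACCOUNT"),
                System.getenv("AZ_BATCH_KEY")));

        // Uniquely named worker job, e.g. processing-20191031-1003.
        String jobId = "processing-" + ZonedDateTime.now()
                .format(DateTimeFormatter.ofPattern("yyyyMMdd-HHmm"));
        client.jobOperations().createJob(jobId,
                new PoolInformation().withPoolId("worker-pool"));

        // The worker job carries the actual 3-10 processing tasks;
        // this schedule-triggered code itself finishes in seconds.
        for (int i = 0; i < 3; i++) {
            client.taskOperations().createTask(jobId, new TaskAddParameter()
                    .withId("task-" + i)
                    .withCommandLine("/bin/bash -c ./do-work.sh"));
        }
    }
}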

Related

Add Tasks to a running Azure batch job and manually control termination

We have an Azure-batch job that uses some quite large files which we are uploading to Azure Blob storage asynchronously so that we don't have to wait for all files to upload before starting our batch job made up of a collection of Tasks that will process each file and generate output. All good so far - this is working fine.
I'd like to be able to create an Azure Task and add it to an existing, running Azure Job, increasing the length of the Task list, but I can't find how to do this. It seems that Azure expects you to define ALL tasks for a Job before the Job starts, and then it runs until all tasks are complete and terminates the job (which makes sense in some scenarios - but not mine).
I would like to suppress this Job completion behavior and be able to queue up additional Azure Tasks for the same job. I could then monitor the Azure Job status (via the Tasks) and determine myself if the Job is complete.
Our issue is that uploads of multi-MB files takes time and we want Task processing to start as soon as the first file is available. If we have to wait until all files are available, then our processing start is delayed which is not what we need.
We 'could' create a job per task and manage it in our application, but that is a little 'messy' and I would like to use the encapsulating Azure Job entity and supporting functionality if I possibly can.
Has anyone done this and can offer some guidance? Many thanks!
You can add new tasks to an existing Azure Batch job in the active state. There is no running state for an Azure Batch job. You can find a list of Azure Batch job states here.
By default, Azure Batch jobs do not automatically complete by terminating when all of their tasks complete. You can view this related question on the subject.
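As a minimal sketch (again assuming the Azure Batch Java SDK, with invented job, task, and script names), adding a task to an already-active job, and later terminating the job yourself once your own monitoring decides all work is done, looks roughly like this:

import com.microsoft.azure.batch.BatchClient;
import com.microsoft.azure.batch.protocol.models.TaskAddParameter;

public class AddLateTask {
    // Call this each time a new blob finishes uploading; the job
    // "upload-job" must already exist and still be in the active state.
    static void addTaskForNewFile(BatchClient client, String blobName) throws Exception {
        client.taskOperations().createTask("upload-job", new TaskAddParameter()
                .withId("process-" + blobName)
                .withCommandLine("/bin/bash -c './process-file.sh " + blobName + "'"));
    }

    // When you decide all files have been processed, end the job
    // explicitly instead of relying on automatic termination.
    static void finish(BatchClient client) throws Exception {
        client.jobOperations().terminateJob("upload-job");
    }
}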

Does Azure start up another instance of a scheduled web job if it is already running?

I have an Azure web job that is scheduled to run every 5 minutes using the cron expression in the settings.job file. If the process doesn't finish within 5 minutes, will Azure kick off another instance of the job, or will it wait until the first one finishes?
I would like to make sure it waits until the first one finishes so it isn't running multiple instances.
When a scheduled webjob is started, Azure places a lock file. This lock file will remain until the scheduled webjob is completed. If there's an attempt to fire up another scheduled job, you'll get a ConflictException.
You can read about the lock file here. And the code that checks to see if another job is already running is here.
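For reference, the every-5-minutes schedule the question describes would be a settings.job along these lines (the cron expression has six fields, seconds first):

{
    "schedule": "0 */5 * * * *"
}

The lock file described above is what guarantees that only one instance of this job runs at a time, even if a run overlaps the next trigger.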

Do Azure Batch jobs need a watcher process?

We have a very long running operation (potentially days) that we would like to have triggered by a BLOB file written to Azure Storage. This job could be started once a year, never, or many times over a few days.
Azure Batch jobs look exactly like what we need, assuming there doesn't need to be a 'watcher' process on the batch job as it runs. For example, if we can have an Azure Function catch the BLOB event, fire up a Batch job, start the job in a "fire and forget" fashion, and then let the Function end, that is exactly what we need. We aren't really too worried about reporting progress of the job (we are using a SQL table for that), we just want to start the job then monitor it out of band.
Is there a way to start a batch job and let the instigator process disappear while the job continues to run in the background? If not, is there any way to do this without having to have a constantly running process (Worker Role or Fabric Worker)? We are trying to avoid having a process (Worker/Fabric Role, Function using the App Function Plan, etc.) running all the time when 99.9% of the time it isn't doing anything.
Short answer: No, you don't need a watcher process.
Azure Batch tasks are asynchronous in nature. When you add a task (under a job), your call against the Batch service returns immediately with the success or failure of the submission action itself (not whether the task completed successfully on a compute node). The Batch service takes care of scheduling the task among the available compute nodes in your pool, internally monitoring the progress of the task, updating stats, etc.
If you elect to do so, you can monitor the progress of your task independently of the submitting actor using any SDK, the REST API, or a client tool. Or you can opt to monitor it out-of-band yourself as you have described, if your task is updating an external monitor or data store. Or you can schedule a task and not monitor it at all; the service does not force you to monitor/watch the task.
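As a minimal sketch of this fire-and-forget pattern (using the same hypothetical Azure Batch Java SDK setup as in the first answer; job, task, and script names are invented), the submitting code returns as soon as the task is added, and any other process can query the task state later if it chooses to:

import com.microsoft.azure.batch.BatchClient;
import com.microsoft.azure.batch.protocol.models.CloudTask;
import com.microsoft.azure.batch.protocol.models.TaskAddParameter;
import com.microsoft.azure.batch.protocol.models.TaskState;

public class FireAndForget {
    // Called from e.g. an Azure Function on the blob event; returns immediately.
    static void startLongWork(BatchClient client, String blobName) throws Exception {
        client.taskOperations().createTask("long-running-job", new TaskAddParameter()
                .withId("work-" + blobName)
                .withCommandLine("/bin/bash -c './days-long-work.sh " + blobName + "'"));
        // Nothing to wait for: the Batch service owns the task from here on.
    }

    // Optional out-of-band check, run from anywhere, any time later.
    static boolean isDone(BatchClient client, String taskId) throws Exception {
        CloudTask task = client.taskOperations().getTask("long-running-job", taskId);
        return task.state() == TaskState.COMPLETED;
    }
}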

How to check whether a Timer Job has run

Is it possible to check whether a SharePoint (actually WSS 3.0) timer job has run when it was scheduled to?
The reason is that we have a few daily custom jobs and want to make sure they always run, even if the server has been down during the time slot for the jobs to run, so I'd like to check them and then run them.
And is it possible to add a setting when creating them similar to the one for standard Windows scheduled tasks: "Run task as soon as possible after a scheduled start is missed"?
You can check it on the job status page and then look at the logs in the 12 hive folder for further details:
Central Administration > Operations > Monitoring > Timer Jobs > Check Job Status
As far as restarting a job when it is missed, that is not possible with OOTB features. That makes sense as well: a lot of jobs execute at particular intervals, and if everything started at the same time, the load on the server would be very high.
You can look at the LastRunTime property of an SPJobDefinition to see when the job was actually executed. As far as I can see in Reflector, the value of this property is loaded from the database, so it should reflect the time the job was actually executed.

Eclipse RCP: Only one Job runs at a time?

The Jobs API in Eclipse RCP apparently works much differently than I expected. I thought that creating and scheduling multiple Jobs would actually cause multiple worker threads to be created, executing the Jobs in parallel unless there was an ISchedulingRule conflict.
I went back and read the documentation more closely, and also discovered this comment in the JobManager class:
/**
* Returns a running or blocked job whose scheduling rule conflicts with the
* scheduling rule of the given waiting job. Returns null if there are no
* conflicting jobs. A job can only run if there are no running jobs and no blocked
* jobs whose scheduling rule conflicts with its rule.
*/
Now it looks to me like the Job manager will only ever attempt to use one background worker thread. Am I completely wrong about this? If I'm right,
what is the point of scheduling rules and locks? If there is only one worker thread, Jobs can never preempt each other. Wouldn't these only ever be used in case a Job's sleep() method is called (e.g. sleeping while holding a Lock)?
does any part of the platform allow two Jobs to actually run concurrently, on multiple worker threads, thus making the above features useful somehow?
What am I missing here?
Take a look at the run method in the documentation, specifically this part:
Jobs can optionally finish their execution asynchronously (in another thread) by returning a result status of ASYNC_FINISH. Jobs that finish asynchronously must specify the execution thread by calling setThread, and must indicate when they are finished by calling the method done.
ASYNC_FINISH there looks interesting.
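Here is a minimal sketch of that mechanism (assuming the org.eclipse.core.jobs bundle is available; the job name and work are placeholders): run() hands the work to its own thread and returns ASYNC_FINISH, which frees the job manager's worker thread immediately while the job is still considered running.

import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.Status;
import org.eclipse.core.runtime.jobs.Job;

public class AsyncFinishExample {
    public static void scheduleAsyncJob() {
        Job job = new Job("async example") {
            @Override
            protected IStatus run(IProgressMonitor monitor) {
                Thread worker = new Thread(() -> {
                    try {
                        // ... long-running work goes here ...
                    } finally {
                        done(Status.OK_STATUS); // tell the job manager we are really finished
                    }
                });
                setThread(worker); // required for jobs that return ASYNC_FINISH
                worker.start();
                return Job.ASYNC_FINISH; // run() returns, but the job is still "running"
            }
        };
        job.schedule();
    }
}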
AFAIK, creating and scheduling multiple Jobs DOES actually cause multiple worker threads to be created and executed in parallel.
However if you specify an optional scheduling rule to your job (using the setRule() method) and if that rule conflicts with another job's scheduling rule then those two jobs can't run simultaneously.
This Eclipse Corner article provides a good description as well as a few code samples for the Eclipse Job API.
The IJobManager API is only needed for advanced job manipulation, e.g. when you need to use locks, synchronize between several jobs, terminate jobs, etc.
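To make the rule-conflict behaviour concrete, here is a minimal sketch of a mutex-style scheduling rule, a common pattern with this API (names are illustrative): jobs sharing the rule instance run one at a time, while unrelated jobs still run in parallel.

import org.eclipse.core.runtime.jobs.ISchedulingRule;

public class MutexRule implements ISchedulingRule {
    public boolean contains(ISchedulingRule rule) {
        return rule == this; // a rule must always contain itself
    }

    public boolean isConflicting(ISchedulingRule rule) {
        return rule == this; // two jobs holding the same instance conflict
    }
}

// Usage: jobs that share the rule are serialized, everything else stays parallel.
// MutexRule rule = new MutexRule();
// jobA.setRule(rule);
// jobB.setRule(rule);
// jobA.schedule();
// jobB.schedule(); // waits until jobA is done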
Note: Eclipse 4.5M4 now (Q4 2014) includes support for Job Groups with throttling.
See bug 432049:
Eclipse provides a simple Jobs API to perform different tasks in parallel and in asynchronous fashion. One limitation of the Eclipse Jobs is that there is no easy way to limit the number of worker threads being used to execute jobs.
This may lead to a thread pool explosion when many jobs are scheduled in quick succession. Due to that it’s easy to use Jobs to perform different unrelated tasks in parallel, but hard to implement thousands of Jobs co-operating to complete a single large task.
Eclipse currently supports the concept of Job Families, which provides one way of grouping with support for join, cancel, sleep, and wakeup operations on the whole family.
To address all these issues we would like to propose a simple way to group a set of Eclipse Jobs that are responsible for pieces of the same large task.
The API would support throttling, join, cancel, combined progress and error reporting for all of the jobs in the group and the job grouping functionality can be used to rewrite performance critical algorithms to use parallel execution of cooperating jobs.
You can see the implementation in this commit 26471fa
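Here is a minimal sketch of the resulting JobGroup API (available since Eclipse 4.5; the job names and counts are illustrative): the group throttles its member jobs to a fixed number of worker threads and supports joining on all of them at once.

import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.Status;
import org.eclipse.core.runtime.jobs.Job;
import org.eclipse.core.runtime.jobs.JobGroup;

public class JobGroupExample {
    public static void runThrottled() throws InterruptedException {
        // At most 4 of the 1000 jobs run concurrently.
        JobGroup group = new JobGroup("bulk work", 4, 1000);
        for (int i = 0; i < 1000; i++) {
            final int piece = i;
            Job job = new Job("piece " + piece) {
                @Override
                protected IStatus run(IProgressMonitor monitor) {
                    // ... process this piece of the large task ...
                    return Status.OK_STATUS;
                }
            };
            job.setJobGroup(group);
            job.schedule();
        }
        group.join(0, null); // wait for the whole group (0 = no timeout)
    }
}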
