TimerJob development: what does the EndSecond property of an SPMinuteSchedule mean? - sharepoint

I'm currently working on a project which uses some TimerJobs. One of the jobs should check the MySites of some special users about every 2 minutes, so I create an SPMinuteSchedule object and set the BeginSecond property to 0 and the Interval property to 2. The use of both properties seems obvious, but I'm not really sure how to interpret the EndSecond property.
If EndSecond is set to 30 and BeginSecond to 0, does it mean that the Timer Service will start the job somewhere within these 30 seconds and the job takes as long as it needs to execute its code? Or does it mean that the job can only run for 30 seconds? What happens if the code executed within the Execute() method needs more time to complete?
Whatever the answer might be, the property name "EndSecond" was not chosen very well.

Refer to this post for more details; to reiterate, below is the relevant extract from the post:
Notice how the schedule is set for the timer job. The SPMinuteSchedule.BeginSecond property and the SPMinuteSchedule.EndSecond property specify a start window of execution. The SharePoint Timer service starts the timer job at a random time between the BeginSecond property and the EndSecond property. This aspect of the timer service is designed for expensive jobs that execute on all servers in the farm. If all the jobs started at the same time, it could place an unwanted heavy load on the farm. The randomization helps spread the load out across the farm.
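For illustration, a minimal sketch of wiring up such a schedule (the MySiteCheckJob class and its name are hypothetical, and an SPWebApplication is assumed to be in scope):

using Microsoft.SharePoint.Administration;

SPMinuteSchedule schedule = new SPMinuteSchedule();
schedule.BeginSecond = 0;  // start window opens at second 0
schedule.EndSecond = 30;   // start window closes at second 30; the timer service picks a random start in between
schedule.Interval = 2;     // repeat every 2 minutes

MySiteCheckJob job = new MySiteCheckJob("MySiteCheck", webApplication); // hypothetical SPJobDefinition subclass
job.Schedule = schedule;
job.Update();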

Related

JMeter - Wait until the predefined time to start next thread

I configured Java based Selenium WebDriver test in Apache JMeter with the following setup:
Number of Threads (Users): 10
Ramp-up period (Second): 120
Loop Count: 1
I ticked the Delay Thread Creation until needed to save resources.
My expectation regarding the functionality:
I expected that with 10 users and a 120-second ramp-up time, the threads would start one after another, with JMeter waiting at least 12 seconds before starting the next thread.
The issue is:
The threads sometimes start within 11 seconds, sometimes within 12 seconds.
I don't know why this happens; I would like to see the threads start exactly 12 seconds after each other.
The question is
Is there any way to tell JMeter to wait exactly 12 seconds before starting the next thread?
Here is a picture of the started jobs with date-time stamps:
I don't think you will be able to achieve this level of precision using the ramp-up period approach of the normal Thread Group. A better idea would be to go for the Ultimate Thread Group (which can be installed using the JMeter Plugins Manager), which allows absolute flexibility in defining ramp-up, ramp-down, and the time to hold the load.
In order to get only one execution of the "job" per virtual user, you can use a Throughput Controller.
You can add a Flow Control Action to pause for an exact amount of time.
It allows pauses to be included without needing to generate a sample. For variable delays, set the pause time to zero and add a Timer as a child.

azure function max execution time

I would like to have a function called on a timer (every X minutes) but I want to ensure that only one instance of this function is running at a time. The work that is happening in the function shouldn't take long, but if for some reason it takes longer than the scheduled timer (X minutes) I don't want another instance to start and the processes to step on each other.
The simplest way that I can think of would be to set a maximum execution time on the function to also be X minutes. I would want to know how to accomplish this in both the App Service and Consumption plans, even if they are different approaches. I also want to be able to set this on an individual function level.
This type of feature is normally built-in to a FaaS environment, but I am having the hardest time google-binging it. Is this possible in the function.json? Or also are there different ways to make sure that this runs only once?
(PS. I know I could do this in my own code by wrapping the work in a thread with a timeout, but I was hoping for something more idiomatic.)
Timer functions already have this behavior - they take out a blob lease from the AzureWebJobsStorage storage account to ensure that only one instance is executing the timer function. Also, the timer will not execute while a previous scheduled execution is in flight.
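For example, a minimal timer-triggered function might look like this (a sketch assuming the C# attribute model of the WebJobs SDK; the function name and schedule are illustrative):

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

public static class CleanupJob
{
    [FunctionName("CleanupJob")]
    public static void Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer, TraceWriter log)
    {
        // The runtime takes a blob lease in the AzureWebJobsStorage account, so only one
        // instance executes, and a new occurrence won't start while this one is in flight.
        log.Info($"Timer fired at {DateTime.UtcNow:o}; past due: {timer.IsPastDue}");
    }
}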
Another roll-your-own possibility is to handle this with storage queues and a visibility timeout - when a message has finished processing, push a new queue message with a visibility timeout matching the desired schedule.
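A sketch of that pattern, assuming the Azure.Storage.Queues client (the queue name and message text are placeholders):

using System;
using System.Threading.Tasks;
using Azure.Storage.Queues;

static async Task ScheduleNextRun(string connectionString)
{
    QueueClient queue = new QueueClient(connectionString, "scheduled-work");
    // Hiding the message for 5 minutes makes it invisible to consumers until then,
    // which acts as the schedule for the next run.
    await queue.SendMessageAsync("run-again", visibilityTimeout: TimeSpan.FromMinutes(5));
}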
I want to mention that the functionTimeout host.json property will add a timeout to all of your functions, but it has the side effect that your function will fail with a timeout error and the function instance will restart, so I wouldn't rely on it in this case.
You can specify the 'functionTimeout' property in host.json:
https://github.com/Azure/azure-webjobs-sdk-script/wiki/host.json
// Value indicating the timeout duration for all functions.
// In Dynamic SKUs, the valid range is from 1 second to 10 minutes and the default value is 5 minutes.
// In Paid SKUs there is no limit and the default value is null (indicating no timeout).
"functionTimeout": "00:05:00"
There is a new Azure Functions plan called Premium (in public preview as of May 2019) that allows for unlimited execution duration:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-scale
It will probably end up being the go-to plan for most Enterprise scenarios.

How to let users define the frequency of a job in an application?

I have an application that has to launch jobs repeatedly. But (yes, that would have been too easy without a but...) I would like users to define their backup frequency in the application.
In the worst case, they would have to choose between:
weekly,
daily,
every 12 hours,
every 6 hours,
hourly
In the best case, they should be able to use crontab expressions (see the documentation for examples).
How to do this? Do I launch a job every minute that checks the last execution time and the frequency, and then launches another job if needed? Do I create a sort of queue that will be executed by a master job?
Any clues, ideas, opinions, best practices, and experiences are welcome!
EDIT: Solved this problem using the Akka scheduler. OK, this is a technical solution, not a design answer, but everything still works great.
Each user-defined repetition is an actor that sends a message every period to a new actor, which executes the actual job.
There may be two ways to do this depending on your requirements/architecture:
If you can only use Play:
The user creates the job and the frequency at which it will run (crontab, whatever).
On saving the job, you calculate the first time it will have to run. You then add an entry to a JOBS table with the execution time, job id, and any other required information. This is necessary because Play is stateless, so the information must be stored in the DB for later retrieval.
You have a job that queries the table for entries whose execution date is earlier than now. It retrieves the first one, runs it, removes it from the table, and adds a new entry for the next execution. You should keep some execution counter so that if a task fails (which means its entry is not removed from the DB) it won't block execution of the other tasks by being retried again and again.
This job is set to run every second. That way, while there is information in the table, tasks are executed about as often as required. As Play won't spawn a new job while the current one is working, if you have enough tasks this one job will serve them all. If not, it will be killed at some point and restored when required.
Of course, the users' crons will not be very precise, as you have to account for your own cron delays plus the execution delays of all the tasks in the queue, which run sequentially. This is not the best approach unless you somehow disallow crons that run every second, or, to be safe, more often than every minute. Checking the execution time of the crons and killing those that run over a certain amount of time would also be a good idea.
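To make the idea concrete, here is a self-contained C# sketch of that polling job, with an in-memory list standing in for the JOBS table and a fixed interval standing in for a parsed cron expression:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;

class Job
{
    public string Name;
    public TimeSpan Every;       // stand-in for a real cron expression
    public DateTime NextRunUtc;
}

class Scheduler
{
    static void Main()
    {
        var jobs = new List<Job> {
            new Job { Name = "backup", Every = TimeSpan.FromHours(6), NextRunUtc = DateTime.UtcNow }
        };
        while (true)
        {
            // Run everything whose execution date has passed, then schedule its next run.
            foreach (var job in jobs.Where(j => j.NextRunUtc <= DateTime.UtcNow).ToList())
            {
                Console.WriteLine($"{DateTime.UtcNow:o} running {job.Name}");
                job.NextRunUtc = DateTime.UtcNow + job.Every;
            }
            Thread.Sleep(1000); // poll every second, as described above
        }
    }
}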
If you can use more than Play:
The better alternative, I believe, is to use Quartz (see this) to create a future execution when the user creates the job, and to reprogram it once the execution is over.
There was a discussion on google-groups about it. As far as I remember, you must define a job which starts every 6 hours and checks which backups must be done. So you must remember when the last backup job finished and do that check yourself. I'm unsure whether Quartz can handle such a requirement.
I looked in the source code (always a good source ;-)) and found a method every, which I think should do what you want. However, I'm unsure if this is a clever design, because if you have 1000 users you will then have 1000 jobs. I'm unsure whether Play was built to handle such a large number of jobs.
[Update] For cron expressions you should have a look at JobPlugin.scheduleForCRON()
There are several ways to solve this.
If you don't have a really huge load of jobs, I'd just persist them to a table with the required flexibility. Then check all of them every hour (or the lowest interval you support) and run those that are eligible. Simple.
Or, if you prefer to use cron syntax anyway, just write (export) jobs to a user crontab using a wrapper which calls back to your running app, or starts the job in a standalone process if that's possible.

Active vs. Running Workflow

At SharePoint Saturday in Lisle, IL this weekend, Robert Bogue said there's a difference between active and running workflows. I've looked on the web, but can someone clarify?
If I can have up to millions of active workflows on the server, why can I only have 15 or so running workflows per server?
Yes, there is a difference:
"Running" Workflows are all which currently are doing something (i.e. executing an activity).
"Active" Workflows are simply all which are "running" but currently are not doing anything - e.g. waiting for OnItemChanged or DelayActivity.
The key to understanding this is WorkflowEventDeliveryThrottle (documented here for SP2007, because the documentation for 2010 doesn't exist). The standard value for this property is 15. That means only 15 workflows can run concurrently. After this limit is reached, workflows are queued to the OWSTimer, which executes them after some arbitrary time (I think the workflow timer job is set to every 5 minutes).
This throttle can be changed by using stsadm (AFAIK PowerShell doesn't work - you can of course change the property in code by setting SPWebService.WorkflowEventDeliveryThrottle):
stsadm -o setproperty -pn workflow-eventdelivery-throttle -pv "20"
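Or equivalently in code (a sketch; it must run with farm administrator rights):

using Microsoft.SharePoint.Administration;

SPWebService service = SPWebService.ContentService;
service.WorkflowEventDeliveryThrottle = 20; // same effect as the stsadm call above
service.Update();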
Now the maximum number of "running" workflows (better: the "maximum number of workflow events that can be processed simultaneously") would be 20. See some other SO post where someone plays with the parameter.
There is a nice technical blog post to understand Workflow Event Processing: About the “workflow-eventdelivery-throttle” parameter.
Similar to the throttle is WorkflowEventDeliveryBatchSize, which denotes the maximum number of workflow events that are processed in a batch.
Summary:
You can have thousands of active workflows, e.g. all waiting for the workflow item to be changed. They are not running, not finished - simply active.
There is a limited number of workflow events that can be processed at the same time (you called it "running" workflows)
You could also have thousands of workflows ready to run, e.g. all of them triggered by a delay activity set to 5 minutes, but only a limited number of them run simultaneously; the rest are queued for later execution.

Meaning of the workflow Event Log?

What is the meaning of the SharePoint 2007 log entry below?
Log:
The previous instance of the timer job 'Config Refresh', id '{xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx}' for service '{xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx}' is still running, so the current instance will be skipped. Consider increasing the interval between jobs.
It means a job is scheduled to run, say, every 30 minutes, but when the next instance was due to run the previous one was still running, so the new instance could not start and was skipped.
