Priority of Sitecore scheduled jobs - multithreading

I have two Sitecore agents that run as scheduled jobs. Agent A is a long-running task that has low priority. Agent B is a short-running task that has high priority. B runs with an interval that is shorter than A's interval.
The problem is that B never runs while A is already running.
I have implemented this to be able to run agents manually inside the Content Editor. When I do this, I am able to run B even though A is already running (despite setting them to the same thread priority in the custom dialog).
How can I specify in my configuration file that B has a higher priority than A? Or make my scheduled job setup multi-threaded so simultaneously running jobs are possible in Sitecore? Is there a standard way to do this?
I have tried something like this, where I set the thread priority inside the agent implementation, but that code is never invoked in B when A is already running. So the prioritization must somehow be done "before" the job implementations themselves.

As already mentioned in the other answer, Sitecore Scheduled Tasks are run sequentially, so each agent runs only after the previous one has finished.
However, tasks defined in the database can be run asynchronously, which means you can schedule multiple tasks to run in parallel.
You'll need to create a Command and define a Schedule:
Defining a Command
Commands are defined in the Sitecore client.
In Content Editor navigate to /sitecore/system/Tasks/Commands
Create a new item using the Command template
Specify the Type and Method fields.
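The class referenced in the Type field needs a method with the signature Sitecore's database agent invokes for task commands. A minimal sketch (the namespace, class, and assembly names are placeholders):

```csharp
using Sitecore.Data.Items;
using Sitecore.Tasks;

namespace MyProject.Tasks // hypothetical namespace
{
    public class CleanupCommand // hypothetical command class
    {
        // The database agent calls the method named in the Method field
        // with these three parameters.
        public void Execute(Item[] items, CommandItem command, ScheduleItem schedule)
        {
            foreach (Item item in items)
            {
                // work on each item resolved from the schedule's Items field
            }
        }
    }
}
```

The Type field would then contain something like `MyProject.Tasks.CleanupCommand, MyProject` and the Method field `Execute`.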
Defining a Schedule
The database agent will execute the command based on the settings in the schedule.
In Content Editor navigate to /sitecore/system/Tasks/Schedules
Create a new item using the Schedule template
For the Command field select the Command item you just created
If the task applies to specific items, you can identify those items in the Items field. This field supports a couple of formats.
For the Schedule field, you identify when the task should run. The value for this field is in a pipe-separated format.
On the schedule, check the field marked Async.
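Assuming the common format, the pipe-separated Schedule value is start date | end date | days-of-week bitmask | interval. For example:

```
20240101|20991231|127|00:15:00
```

This runs the command every 15 minutes on every day of the week (the bitmask uses Sunday = 1 through Saturday = 64, so 127 means all seven days) between the two dates.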
You can read more about the Database Scheduler in the Sitecore Community Docs.
Note that this may lead to performance issues if you schedule too many tasks to run in parallel.
The downside to running scheduled tasks from the Database is that you cannot pass in parameters like you can for tasks defined in config. If you cannot simply access config settings from code (and they need to be passed in), then for a scheduled task defined in config you could invoke a Sitecore Job from your scheduled-task code. Sitecore Jobs each run on their own thread, so they can run in parallel; just make sure the Job names are unique (jobs of the same name will queue up).
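Starting a Sitecore Job from a config-defined scheduled task can be sketched roughly like this (the agent class, worker object, and job name are placeholders, not part of the Sitecore API):

```csharp
using Sitecore.Jobs;

public class ImportAgent // hypothetical agent class registered in config
{
    public void Run()
    {
        // Each Sitecore Job runs on its own thread. Unique names run in
        // parallel; jobs with the same name queue up behind each other.
        var options = new JobOptions(
            "MyUniqueImportJob",  // job name
            "Import",             // category shown in the Jobs viewer
            "shell",              // site context to run under
            new ImportWorker(),   // hypothetical object exposing the work method
            "Process");           // method on that object to invoke
        JobManager.Start(options);
    }
}
```

Because the scheduler only waits for `Run()` to return (the real work continues on the job's thread), the next agent in the sequence is not blocked.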

The reason is that Sitecore scheduled jobs run in sequence: while one job is executing, the others are not triggered until it finishes.
If I am not mistaken, Sitecore queues the other jobs that need to be executed after the currently running job.
Since you trigger the job using the Run Agent tool, it runs because you are forcing it to execute; it does not check whether another job is already running. The exception is publishing, which is queued because it transfers items from the source to the target database.
EDIT:
You can check the <job> entries in Web.config for Sitecore 7.x, or in Sitecore.config for Sitecore 8.x; there you will see the pipeline used for jobs. If I am not mistaken, you will need to check the code for the scheduler: the type is Sitecore.Tasks.Scheduler in Sitecore.Kernel.

As you might already understand from Hishaam's answer (I'm not going to repeat that good info), the Sitecore agents might not be the best solution for what you are trying to do. For a similar setup (tasks that need to perform import, export, or other queued work on an e-commerce site) I used an external scheduling engine (in my case Hangfire, which did the job fine, but others would work as well) that called services in my Sitecore solution. The services acted as a layer into Sitecore.
You can decide how far you want to go with these services (they could even start new threads), but they will be able to run next to each other. This way you will not run into issues with another process still running. I went for an architecture where the service was a very thin layer over the real business logic.
You might need to make sure that the code behind a service cannot be called while it is already running (I needed that when handling a queue), but those things are all possible in .NET code.
I find this setup more robust, especially for tasks that are important. It's also easier to configure when the tasks need to run.

I ended up with the following solution after realising that it was not a good idea to run my high-priority scheduled task, B, as a background agent in Sitecore. For the purpose of this answer I will now call B ExampleJob.
I created a new agent class called ExampleJobStarter. The purpose of this job was simply to start another thread that runs the actual job, ExampleJob:
using System.Threading;
using System.Threading.Tasks;

public class ExampleJobStarter
{
    // Invoked by the Sitecore scheduler via config; there is no base class,
    // so no override is needed.
    public void Run()
    {
        if (ExampleJob.RunTheExampleJob) return;
        ExampleJob.RunTheExampleJob = true;
        Task.Run(() => new ExampleJob().Run());
    }
}

public class ExampleJob
{
    // volatile so the worker thread always sees the latest value
    public static volatile bool RunTheExampleJob;

    public void Run()
    {
        while (RunTheExampleJob)
        {
            DoWork();
            Thread.Sleep(10000); // wait 10 seconds between runs
        }
    }

    private void DoWork()
    {
        ... // here I perform the actual work
    }
}
ExampleJobStarter was then registered in my Sitecore config file to run every 10 minutes. I also removed ExampleJob from the Sitecore config, so it no longer runs automatically (thus, it is no longer a Sitecore job per se). ExampleJobStarter simply ensures that ExampleJob is running on another thread. ExampleJob itself does its work every 10 seconds, without being interfered with by the low-priority job agent, which still runs as a normal background agent.
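The starter agent's registration might look like this in a config patch (the type and assembly names are placeholders for your own):

```xml
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <scheduling>
      <!-- Re-checks every 10 minutes that the background thread is running -->
      <agent type="MyProject.Tasks.ExampleJobStarter, MyProject"
             method="Run" interval="00:10:00" />
    </scheduling>
  </sitecore>
</configuration>
```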
Be aware of deadlock-issues if you go down this path (not an issue for the data I am working with in this case).

Related

Shopware 6 get context in scheduled task

I just wondered how to get the context in scheduled tasks. There is the static method
Context::createDefaultContext()
but it's marked as internal. I know that the context shouldn't be created but rather passed down to your service from higher services. But this doesn't seem possible in scheduled tasks.
I think I found the answer in a GitHub issue comment: https://github.com/shopware/platform/issues/1245#issuecomment-673537348
The only valid use case to create a default context is when you are in a CLI context, meaning when you write your own CLI command (or scheduled task in that regard). But you should be fully aware that you need to take care of handling translations and currency for example by yourself in that case.
Another valid use case for the default context is the usage in tests of course, that was the original intent behind that method.
In the docs the method is used to not bloat the code examples.
So it seems the way to go in scheduled tasks is to create the default context there.

Background job with a thread/process?

Technology used: EJB 3.1, Java EE 6, GlassFish 3.1.
I need to implement a background job that executes every 2 minutes to check the status of a list of servers. I have already implemented a timer, and my function updateStatus gets called every two minutes.
The problem is that I want to use a thread to do the update, because if the timer is triggered again while my function is not yet done, I would like to kill the thread and start a new one.
I understand I cannot use threads with EJB 3.1, so how should I do this? I don't really want to introduce JMS either.
You should simply use an EJB Timer for this.
When the job finishes, simply have the job reschedule itself. If you don't want the job to take more than some amount of time, monitor the system time in the process, and when it runs too long, stop the job and reschedule it.
The other thing you need to manage is the fact that if the job is running when the server goes down, it will restart automatically when the server comes back up. You would be wise to have a startup process that scans the current jobs in the Timer system and, if yours is not there, submits a new one. After that the job should take care of itself until your next deploy (which erases existing Timer jobs).
The only other issue is that if the job depends on initialization code that runs on server startup, it is quite possible that the job will start BEFORE that code runs while the server is firing up. So you may need to manage that startup race condition (or simply ensure that the job "fails fast" and resubmits itself).

Scheduled Tasks with Sql Azure?

I wonder if there's a way to use scheduled tasks with SQL Azure?
Every help is appreciated.
The point is, that I want to run a simple, single line statement every day and would like to prevent setting up a worker role.
There's no SQL Agent equivalent for SQL Azure today. You'd have to call your single-line statement from a background task. However, if you have a Web Role already, you can easily spawn a thread to handle this in your web role without having to create a Worker Role. I blogged about the concept here. To spawn a thread, you can either do it in the OnStart() event handler (where the Role instance is not yet added to the load balancer), or in the Run() method (where the Role instance has been added to the load balancer). Usually it's a good idea to do setup in the OnStart().
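Spawning the background thread from OnStart() can be sketched like this with the classic Web Role API (a sketch; the scheduling loop and statement method are placeholders):

```csharp
using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    private Thread _scheduler;

    public override bool OnStart()
    {
        // The instance is not yet in the load balancer rotation here,
        // so this is a good place for setup.
        _scheduler = new Thread(SchedulerLoop) { IsBackground = true };
        _scheduler.Start();
        return base.OnStart();
    }

    private void SchedulerLoop()
    {
        while (true)
        {
            RunDailyStatement();               // hypothetical work method
            Thread.Sleep(TimeSpan.FromHours(24));
        }
    }

    private void RunDailyStatement()
    {
        // execute the single-line statement against SQL Azure here (ADO.NET, etc.)
    }
}
```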
One caveat that might not be obvious, whether you execute this call in its own worker role or in a background thread of an existing Web Role: If you scale your Role to, say, two instances, you need to ensure that the daily call only occurs from one of the instances (otherwise you could end up with either duplicates, or a possibly-costly operation being performed multiple times). There are a few techniques you can use to avoid this, such as a table row-lock or an Azure Storage blob lease. With the former, you can use that row to store the timestamp of the last time the operation was executed. If you acquire the lock, you can check to see if the operation occurred within a set time window (maybe an hour?) to decide whether one of the other instances already executed it. If you fail to acquire the lock, you can assume another instance has the lock and is executing the command. There are other techniques - this is just one idea.
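The blob-lease idea can be sketched with the classic Azure Storage client library (a sketch, not production code; the container and blob names are placeholders, and the lock blob is assumed to already exist):

```csharp
using System;
using Microsoft.WindowsAzure.Storage;       // classic storage client library
using Microsoft.WindowsAzure.Storage.Blob;

public static class DailyTaskGate
{
    // Returns true only for the instance that acquired the lease,
    // so the daily statement runs exactly once across all instances.
    public static bool TryAcquireLease(string connectionString, out string leaseId)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudBlockBlob blob = account.CreateCloudBlobClient()
            .GetContainerReference("locks")        // hypothetical container
            .GetBlockBlobReference("daily-task");  // hypothetical lock blob
        try
        {
            // A 60-second lease; only one holder at a time.
            leaseId = blob.AcquireLease(TimeSpan.FromSeconds(60), null);
            return true;
        }
        catch (StorageException)
        {
            // Another instance already holds the lease.
            leaseId = null;
            return false;
        }
    }
}
```

The winner would then record the run timestamp (for example in the blob's metadata or a table row) before releasing the lease, so late-starting instances can tell the work is done.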
In addition to David's answer, if you have a lot of scheduled tasks to do then it might be worth looking at:
lokad.cloud - which has good handling of periodic tasks - http://lokadcloud.codeplex.com/
quartz.net - which is a good all-round scheduling solution - http://quartznet.sourceforge.net/
(You could use quartz.net within the thread that David mentioned, but lokad.cloud would require a slightly bigger architectural change)
I hope it is acceptable to mention my own company here. We have a web-based service that allows you to do this. You can click this link to see more details on how to schedule execution of SQL Azure queries.
To overcome the issue of multiple roles executing the same task, you can check the role instance id and make sure that only the first instance executes it. (Note that this is fragile: it assumes instance ids end in a sequence number, and if instance 0 happens to be down during the scheduled window, the task is skipped.)

using Microsoft.WindowsAzure.ServiceRuntime;

String g = RoleEnvironment.CurrentRoleInstance.Id;
if (!g.EndsWith("0"))
{
    return;
}

How to check whether a Timer Job has run

Is it possible to check whether a SharePoint (actually WSS 3.0) timer job has run when it was scheduled to ?
Reason is we have a few daily custom jobs and want to make sure they're always run, even if the server has been down during the time slot for the jobs to run, so I'd like to check them and then run them
And is it possible to add a setting when creating them similar to the one for standard Windows scheduled tasks ... "Run task as soon as possible after a scheduled start is missed" ?
You can check it on the job status page, and then look at the logs in the 12 hive folder for further details:
Central Administration → Operations → Timer job status
As far as restarting a missed job is concerned, that is not possible with OOTB features. This makes sense as well: there are a lot of jobs executed at particular intervals, and if everything started at the same time after a restart, the load on the server would be very high.
You can look at the LastRunTime property of an SPJobDefinition to see when the job was actually executed. As far as I can see in Reflector, the value of this property is loaded from the database and hence it should reflect the time it was actually executed.
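Checking LastRunTime for your custom daily jobs might look like this (a sketch; the title filter and one-day threshold are assumptions to adapt to your own jobs):

```csharp
using System;
using Microsoft.SharePoint.Administration;

public static class JobMonitor
{
    // Flags jobs whose last recorded run is more than a day old,
    // so a missed slot (e.g. server downtime) can be detected.
    public static void CheckDailyJobs(SPWebApplication webApp)
    {
        foreach (SPJobDefinition job in webApp.JobDefinitions)
        {
            if (job.Title.StartsWith("MyCompany") &&              // hypothetical naming convention
                job.LastRunTime < DateTime.Now.AddDays(-1))
            {
                Console.WriteLine("Job '{0}' appears to have missed its last run.", job.Title);
                // here you could kick the job off manually
            }
        }
    }
}
```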

How do you instruct a SharePoint Farm to run a Timer Job on a specific server?

We have an SP timer job that was running fine for quite a while. Recently the admins enlisted another server into the farm, and consequently SharePoint decided to start running this timer job on this other server. The problem is the server does not have all the dependencies installed (i.e., Oracle) on it and so the job is failing. I'm just looking for the path of least resistance here. My question is there a way to force a timer job to run on the server you want it to?
[Edit]
If I can do it through code that works for me. I just need to know what the API is to do this if one does exist.
I apologize if I'm pushing for the obvious; I just haven't seen anyone drill down on it yet.
Constraining a custom timer job (that is, your own timer job class that derives from SPJobDefinition) is done by controlling constructor parameters.
Timer jobs typically run on the server where they are submitted (as indicated by vinny) assuming no target server is specified during the creation of the timer job. The two overloaded constructors for the SPJobDefinition type, though, accept an SPServer and an SPJobLockType as the third and fourth parameters, respectively. Using these two parameters properly will allow you to dictate where your job runs.
By specifying your target server as the SPServer and an SPJobLockType of "Job," you can constrain the timer job instance you create to run on the server of your choice.
For documentation on what I've described, see MSDN: http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.administration.spjobdefinition.spjobdefinition.aspx.
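Put together, a job constrained to one server might be declared like this (a sketch; the class name and title are placeholders):

```csharp
using System;
using Microsoft.SharePoint.Administration;

public class OracleDependentJob : SPJobDefinition // hypothetical job class
{
    public OracleDependentJob() : base() { } // required for serialization

    // Passing the target SPServer together with SPJobLockType.Job
    // constrains the job to run on that one server.
    public OracleDependentJob(string name, SPWebApplication webApp, SPServer server)
        : base(name, webApp, server, SPJobLockType.Job)
    {
        Title = "Oracle-dependent timer job";
    }

    public override void Execute(Guid targetInstanceId)
    {
        // work that requires the Oracle client goes here
    }
}
```

At Feature activation you would construct this with the SPServer that has the Oracle dependencies installed, then set its schedule and call Update().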
I don't know anything about the code you're running, but custom timer jobs are commonly setup during Feature activation. I got the sense that your codebase might not be your own (?); if so, you might want to look for the one or more types/classes that derive from SPFeatureReceiver. In the FeatureActivated method of such classes is where you might find the code that actually carries out the timer job instantiation.
Of course, you'll also want to look at the custom timer job class (or classes) themselves to see how they're being instantiated. Sometimes developers will build the instantiation of the class into the class itself (via Factory Method pattern, for example). Between the timer job class and SPFeatureReceiver implementations, though, you should be on the way towards finding what needs to change.
I hope that helps!
Servers in a farm need to be identical.
If you happen to use VMs for your web front ends, you can snap a server and provision copies so that you know they are all identical.
Timer jobs by definition run on all web front ends.
If you need scheduled logic to run on a specific server, you either need to code that specifically into the timer job, or use a "standard" NT Service instead.
I think a side effect of setting SPJobLockType to 'Job' is that it'll execute on the server where the job is submitted.
You could implement a Web Service with the business logic and deploy that Web Service to one machine. Then your timer job could trigger the web service periodically.
It should then not matter much where your timer job runs; SharePoint decides itself where to run the timer job.
