Shopware 6: get context in scheduled task

I just wondered how to get the context in scheduled tasks. There is the static method
Context::createDefaultContext()
but it's marked as internal. I know that the context shouldn't be created but rather passed down to your service from higher services. But this doesn't seem possible in scheduled tasks.

I think I found the answer in a GitHub issue comment: https://github.com/shopware/platform/issues/1245#issuecomment-673537348
The only valid use case to create a default context is when you are in a CLI context, meaning when you write your own CLI command (or scheduled task in that regard). But you should be fully aware that you need to take care of handling translations and currency for example by yourself in that case.
Another valid use case for the default context is the usage in tests of course, that was the original intent behind that method.
In the docs the method is used to not bloat the code examples.
So creating the default context yourself seems to be the accepted approach in scheduled tasks.

Related

Priority of Sitecore scheduled jobs

I have two Sitecore agents that run as scheduled jobs. Agent A is a long-running task that has low priority. Agent B is a short-running task that has high priority. B runs with an interval that is shorter than A's interval.
The problem is that B is never run, when A is already running.
I have implemented this to be able to run agents manually inside the content editor. When I do this, I am able to run B although A is already running (even though I set them to the same thread priority in the custom dialog).
How can I specify in my configuration file that B has a higher priority than A? Or make my scheduled job setup multi-threaded so simultaneously running jobs are possible in Sitecore? Is there a standard way to do this?
I have tried something like this where I set the thread priority inside the agent implementation, but this code is never invoked in B when A is already running. So the prioritizing should somehow be done "before" the job implementations themselves.
As already mentioned in the other answer, Sitecore Scheduled Tasks are run sequentially, so each agent runs only after the previous one has finished.
However, tasks defined in the database can be run asynchronously, which means you can schedule multiple tasks to run in parallel.
You'll need to create a Command and define a Schedule:
Defining a Command
Commands are defined in the Sitecore client.
In Content Editor navigate to /sitecore/system/Tasks/Commands
Create a new item using the Command template
Specify the Type and Method fields.
Defining a Schedule
The database agent will execute the command based on the settings in the schedule.
In Content Editor navigate to /sitecore/system/Tasks/Schedules
Create a new item using the Schedule template
For the Command field select the Command item you just created
If the task applies to specific items, you can identify those items in the Items field. This field supports a couple of formats.
For the Schedule field, you identify when the task should run. The value for this field is in a pipe-separated format.
On the schedule, check the field marked Async.
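To illustrate the Schedule field's pipe-separated value: it has four parts (start date, end date, a days-of-the-week bit pattern, and an interval). The concrete dates below are just an example:

```text
20240101T000000|20991231T235959|127|00:15:00
```

Here 127 is the bit pattern for all seven days of the week, and 00:15:00 makes the task eligible to run every 15 minutes.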
You can read more about the Database Scheduler in the Sitecore Community Docs.
Note that this may lead to performance issues if you schedule too many tasks to run in parallel.
The downside to running scheduled tasks from the database is that you cannot pass in parameters like you can for tasks defined in config. If you cannot simply access config settings from code (and they need to be passed in), then for a scheduled task defined in config you could invoke a Sitecore Job from your scheduled task code. Sitecore Jobs each run on their own thread, and each job spins up a new thread, so they can run in parallel; just make sure that job names are unique (jobs of the same name will queue up).
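As a minimal sketch of starting a Sitecore Job from inside a scheduled task: the JobOptions constructor shown here matches Sitecore 7/8, but the Importer class and its Run method are hypothetical placeholders for your own work.

```csharp
using Sitecore.Jobs;

public class ImportTask
{
    public void Execute()
    {
        // Job names must be unique; jobs sharing a name queue up behind each other.
        var options = new JobOptions(
            "MyModule.Import." + System.DateTime.UtcNow.Ticks, // unique job name
            "ImportTasks",         // category shown in the Jobs list
            "shell",               // site context to run under
            new Importer(),        // hypothetical object exposing the work
            "Run",                 // method to invoke on that object
            new object[0]);        // parameters passed to that method
        JobManager.Start(options); // runs on its own thread, in parallel
    }
}
```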
The reason is that Sitecore scheduled jobs run in sequence. So if a job is being executed, the other jobs will not be triggered until it finishes.
If I am not mistaken, Sitecore will queue the other jobs that need to be executed after the currently running job.
Since you trigger the job using the Run Agent tool, it runs because you are forcing it to execute. It does not check whether another job is already running, except for publishing, which is queued because it transfers items from the source to the target database.
EDIT:
You can check the <job> entries in Web.config for Sitecore v7.x, or Sitecore.config for Sitecore v8.x; there you will see the pipeline being used for the job. You may also want to check the code for the scheduler itself: the type is Sitecore.Tasks.Scheduler in Sitecore.Kernel.
Thanks
As you might already understand from the answer from Hishaam (not going to repeat that good info), using the Sitecore agents might not be the best solution for what you are trying to do. For a similar setup (tasks that need to perform import, export or other queued tasks on an e-commerce site) I used an external scheduling engine (in my case Hangfire which did the job fine, but you could use an other as well) that called services in my Sitecore solution. The services performed as a layer to get to Sitecore.
You can decide how far you want to go with these services (they could even start new threads), but they will be able to run next to each other. This way you will not bump into issues where another process is still running. I went for an architecture where the service was a very thin layer in front of the real business logic.
You might need to make sure though that the code behind one service cannot be called whilst already running (I needed to do that in case of handling a queue), but those things are all possible in .net code.
I find this setup more robust, especially for tasks that are important. It's also easier to configure when the tasks need to run.
I ended up with the following solution after realising that it was not a good solution to run my high priority scheduled task, B, as a background agent in Sitecore. For the purpose of this answer I will now call B: ExampleJob
I created a new agent class called ExampleJobStarter. The purpose of this job was simply to start another thread that runs the actual job, ExampleJob:
public class ExampleJobStarter
{
    public void Run()
    {
        // Only ever start one worker thread.
        if (ExampleJob.RunTheExampleJob) return;
        ExampleJob.RunTheExampleJob = true;
        Task.Run(() => new ExampleJob().Run());
    }
}

public class ExampleJob
{
    // volatile so the worker thread always sees updates to the flag
    public static volatile bool RunTheExampleJob;

    public void Run()
    {
        while (RunTheExampleJob)
        {
            DoWork();
            Thread.Sleep(10000);
        }
    }

    private void DoWork()
    {
        ... // here I perform the actual work
    }
}
ExampleJobStarter was now registered in my Sitecore config file to run every 10 minutes. I also removed ExampleJob from the Sitecore config, so it will not run automatically (thus, it is no longer a Sitecore job per se). ExampleJobStarter simply ensures that ExampleJob is running on another thread. ExampleJob itself does its work every 10 seconds, without being interfered with by the low-priority job agent, which still runs as a normal background agent.
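For reference, the starter agent's registration in a Sitecore config patch file looks something like this (the type name is a hypothetical placeholder):

```xml
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
  <sitecore>
    <scheduling>
      <!-- Starts the long-lived worker thread; polled every 10 minutes -->
      <agent type="MyModule.Tasks.ExampleJobStarter" method="Run" interval="00:10:00" />
    </scheduling>
  </sitecore>
</configuration>
```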
Be aware of deadlock-issues if you go down this path (not an issue for the data I am working with in this case).

Threaded Windows Service - Ninject Injected DbContext Instance Lifetime - Best Practice?

I'm currently building a Windows Service which creates a thread which runs every 30 minutes and updates a few fields on some entities which meet a criteria (and in the future, I want to add further threads to do other things, e.g. run one every 24 hours, one every week, etc.). The service will get and update its data using services which are already used by a web application and are injected using Ninject (along with the DbContext) - this all works fine using the InRequestScope() binding.
However, I've been thinking about how best to approach this using a Windows Service. Essentially, once the service starts and the thread is created, it runs continuously (aside from sleeping every 30 minutes) - and only ever stops if the Windows Service stops or the process gets terminated. My initial thought was to inject my DbContext and services using the InThreadScope() option of Ninject - so I'd have a DbContext for each thread. But is it generally good practice to have a DbContext with such a long lifetime? This DbContext could be sitting there for weeks without being disposed - and I have a sneaking suspicion that this isn't such a good idea (memory leaks, etc.).
But if not, what is the best way to handle this scenario. I could create a new DbContext each time the thread runs, but how would I set up the Ninject bindings? Currently they're defined as:
Bind<IDbContext>().To<EnterpriseDbContext>().InThreadScope();
Bind<IUserRepository>().To<UserRepository>().InThreadScope();
// Etc
So I'm not entirely sure how I would structure this to create a new one within the thread. I looked at InTransientScope(), but this is causing my services to use different versions of the context, so my changes never get saved.
This seems like it would be such a common scenario - so would be great to hear anyone else's view.
Instead of creating threads that sleep until the next run, you can use Quartz.Net as a scheduling mechanism. In combination with InCallScope from the NamedScope extension, you get the same behavior as InRequestScope in a web application. Additionally, you get far better scheduling than just spawning threads that run at intervals: you can specify when jobs shall run using cron syntax.
See:
ASP.Net MVC 3, Ninject and Quartz.Net - How to?
Quartz.NET, NH ISession & Ninject Scope
I particularly like @Remo's suggestions, and will be trying these on the next project I work on. However, I eventually solved the problem by using a custom InScope implementation, setting a new custom scope object each time the thread ran. This way the context was recreated on every run (and disposed once the scope object was disposed).
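A sketch of that idea, under the following assumptions: ProcessingScope is a hypothetical marker class whose lifetime defines one run of the worker thread, and InScope is the standard Ninject binding method. Ninject deactivates (and disposes) instances when their scope object is collected, or immediately if the scope object implements Ninject's INotifyWhenDisposed.

```csharp
using System;
using System.Threading;
using Ninject;

// Hypothetical marker object; one instance per "run" of the worker thread.
public class ProcessingScope
{
    public static ProcessingScope Current { get; private set; }

    public static void BeginNewScope()
    {
        Current = new ProcessingScope();
    }
}

public static class WorkerBindings
{
    public static void Configure(IKernel kernel)
    {
        // Everything resolved during one run shares a single DbContext.
        // kernel.Bind<IDbContext>().To<EnterpriseDbContext>()
        //       .InScope(ctx => ProcessingScope.Current);
    }
}

// Worker loop: start a fresh scope each cycle so a new DbContext is created.
// while (running)
// {
//     ProcessingScope.BeginNewScope();
//     var repository = kernel.Get<IUserRepository>();
//     repository.DoUpdates();
//     Thread.Sleep(TimeSpan.FromMinutes(30));
// }
```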

What is the best approach for long running Java code in an XPage?

So I have some Java code that takes some time to complete (about 2 minutes). Nothing I can do about that.
But I am wondering how best to approach this in the XPages UI so that the user may still have to wait but has more control/interaction while it is running (not just a spinning wheel).
So from what I can see I can do the following.
Java class called in XPage wrapped in a thread.
Java Agent called from XPage in a thread.
Java Agent called from an XPage, but waits for a document to be updated.
An Eclipse plugin (for running in the client) is activated. Not sure how it would talk back to the XPage though (via a document?).
Any other methods?
If you created the thread in the XPage, is that going to cause any problems at the server end? Will I have to avoid using Notes objects in the Java class?
I would suggest using the OSGi Tasklet service, a.k.a. DOTS. This approach allows Java tasks to be scheduled or bound to events, just like agents, but perform significantly more efficiently than agents. Perhaps most pertinent to your need is the additional ability to trigger DOTS tasks via the console, which would allow your XPages code to start the Java code merely by issuing a remote console command via the session object.
In addition, check out the technique used in the XSP Starter Kit to provide a serverScope variable. If your code is running in a DOTS task (or even an agent), it's running in a different Java application, so it can't talk directly to the standard scope variables. The serverScope approach would theoretically allow you to store objects that can be accessed from both the XPage and the triggered task. This could aid in using Mark's technique, as mentioned above by Per, to convey progress to the user while the task is running: you'd just be storing the progress information in serverScope instead of sessionScope.
A solution would be to have an agent react to the saving of new documents in the database, instead of kicking off the agent in your application and using threads (because threads can be very dangerous and could easily kill your HTTP task).
Another thing you could look into is why the code you want to execute takes 2 minutes to complete. What is the code for? Is it doing things in other databases, or connecting to other non-Notes resources?

Scheduled Tasks with Sql Azure?

I wonder if there's a way to use scheduled tasks with SQL Azure?
Any help is appreciated.
The point is that I want to run a simple, single-line statement every day, and I would like to avoid setting up a worker role.
There's no SQL Agent equivalent for SQL Azure today. You'd have to call your single-line statement from a background task. However, if you have a Web Role already, you can easily spawn a thread to handle this in your web role without having to create a Worker Role. I blogged about the concept here. To spawn a thread, you can either do it in the OnStart() event handler (where the Role instance is not yet added to the load balancer), or in the Run() method (where the Role instance has been added to the load balancer). Usually it's a good idea to do setup in the OnStart().
One caveat that might not be obvious, whether you execute this call in its own worker role or in a background thread of an existing Web Role: If you scale your Role to, say, two instances, you need to ensure that the daily call only occurs from one of the instances (otherwise you could end up with either duplicates, or a possibly-costly operation being performed multiple times). There are a few techniques you can use to avoid this, such as a table row-lock or an Azure Storage blob lease. With the former, you can use that row to store the timestamp of the last time the operation was executed. If you acquire the lock, you can check to see if the operation occurred within a set time window (maybe an hour?) to decide whether one of the other instances already executed it. If you fail to acquire the lock, you can assume another instance has the lock and is executing the command. There are other techniques - this is just one idea.
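A sketch of the table row-lock idea in T-SQL, assuming a hypothetical SchedulerLock table with one row per named task; whichever instance succeeds in moving the timestamp forward wins the claim and runs the command:

```sql
-- Try to claim the daily slot; only one caller can move LastRun forward.
UPDATE SchedulerLock
SET LastRun = GETUTCDATE()
WHERE TaskName = 'DailyCleanup'
  AND LastRun < DATEADD(HOUR, -23, GETUTCDATE());

-- If @@ROWCOUNT = 1, this instance won the claim and should run the task;
-- if 0, another instance already ran it within the window.
```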
In addition to David's answer, if you have a lot of scheduled tasks to do then it might be worth looking at:
lokad.cloud - which has good handling of periodic tasks - http://lokadcloud.codeplex.com/
quartz.net - which is a good all-round scheduling solution - http://quartznet.sourceforge.net/
(You could use quartz.net within the thread that David mentioned, but lokad.cloud would require a slightly bigger architectural change)
I hope it is acceptable to talk about my own company. We have a web-based service that allows you to do this. You can click this link to see more details on how to schedule execution of SQL Azure queries.
To overcome the issue of multiple role instances executing the same task, you can check the role instance ID and make sure that only the first instance will execute the task.
using Microsoft.WindowsAzure.ServiceRuntime;

// Only let the first role instance (ID ending in "0") run the task.
string instanceId = RoleEnvironment.CurrentRoleInstance.Id;
if (!instanceId.EndsWith("0"))
{
    return;
}

How do you instruct a SharePoint Farm to run a Timer Job on a specific server?

We have an SP timer job that was running fine for quite a while. Recently the admins enlisted another server into the farm, and consequently SharePoint decided to start running this timer job on this other server. The problem is the server does not have all the dependencies installed (i.e., Oracle) on it and so the job is failing. I'm just looking for the path of least resistance here. My question is there a way to force a timer job to run on the server you want it to?
[Edit]
If I can do it through code that works for me. I just need to know what the API is to do this if one does exist.
I apologize if I'm pushing for the obvious; I just haven't seen anyone drill down on it yet.
Constraining a custom timer job (that is, your own timer job class that derives from SPJobDefinition) is done by controlling constructor parameters.
Timer jobs typically run on the server where they are submitted (as indicated by vinny) assuming no target server is specified during the creation of the timer job. The two overloaded constructors for the SPJobDefinition type, though, accept an SPServer and an SPJobLockType as the third and fourth parameters, respectively. Using these two parameters properly will allow you to dictate where your job runs.
By specifying your target server as the SPServer and an SPJobLockType of "Job," you can constrain the timer job instance you create to run on the server of your choice.
For documentation on what I've described, see MSDN: http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.administration.spjobdefinition.spjobdefinition.aspx.
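A sketch of what that constructor call looks like, assuming a hypothetical OracleSyncJob class; the four-argument base constructor taking an SPServer and SPJobLockType is the documented overload described above:

```csharp
using System;
using Microsoft.SharePoint.Administration;

// Hypothetical custom timer job pinned to a specific server.
public class OracleSyncJob : SPJobDefinition
{
    public OracleSyncJob() : base() { } // required for serialization

    public OracleSyncJob(string name, SPWebApplication webApp, SPServer targetServer)
        // Passing an explicit SPServer together with SPJobLockType.Job
        // constrains the job to run only on that server.
        : base(name, webApp, targetServer, SPJobLockType.Job)
    {
        Title = "Oracle Sync Job";
    }

    public override void Execute(Guid targetInstanceId)
    {
        // work that requires the Oracle client goes here
    }
}
```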
I don't know anything about the code you're running, but custom timer jobs are commonly set up during Feature activation. I got the sense that your codebase might not be your own (?); if so, you might want to look for the one or more types/classes that derive from SPFeatureReceiver. In the FeatureActivated method of such classes is where you might find the code that actually carries out the timer job instantiation.
Of course, you'll also want to look at the custom timer job class (or classes) themselves to see how they're being instantiated. Sometimes developers will build the instantiation of the class into the class itself (via Factory Method pattern, for example). Between the timer job class and SPFeatureReceiver implementations, though, you should be on the way towards finding what needs to change.
I hope that helps!
Servers in a farm need to be identical.
If you happen to use VMs for your web front ends, you can snap a server and provision copies so that you know they are all identical.
Timer jobs by definition run on all web front ends.
If you need scheduled logic to run on a specific server, you either need to specifically code this in the timer job, or to use a "standard" NT Service instead.
I think a side effect of setting SPJobLockType to 'Job' is that it'll execute on the server where the job is submitted.
You could implement a Web Service with the business logic and deploy that Web Service to one machine. Then your timer job could trigger the web service periodically.
Then it should not be that important where your timer job runs, since SharePoint decides itself where to run the timer job.
