SharePoint Timer Job - Which server does the job execute on?

If I install a (timer job) feature to a SharePoint front-end server within a farm, which server executes the job? All of them?
The job is locked at the job level, and the Execute method calls a web service on one specific machine on the farm, which handles all the processing. My question is whether all the front end servers will try and run this job?
Our network guys want to provision a new server in the farm so that this job doesn't eat up the resources of the main server, but it sounds to me like we would just duplicate the execution of the job.
Confused. Anyone know the answer to this question?

The timer job can actually be deployed to a single instance (or to all of them if you like). This link provides a good answer:
Timer Job deployment via constructors

For SharePoint 2010, see How to: Run Code on All Web Servers:
MyTimerJob myTJ = new MyTimerJob(
    "contoso-job-add-mobile-adapter",
    webApp,
    null,
    SPJobLockType.None);
Note the following about this code:
The third parameter can be used to specify a particular server on which the job should run. This is null when the job should run on all front-end Web servers.
The fourth parameter determines whether the job executes on all front-end Web servers. Passing SPJobLockType.None ensures that it runs on all servers on which the Microsoft SharePoint Foundation Web Application service is running. By contrast, passing SPJobLockType.Job ensures that it runs only on the first available server on which the Microsoft SharePoint Foundation Web Application service is running. (There is a third possible value. For more information, see SPJobDefinition and the topics for its constructors and other members.)
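For illustration, here is a minimal sketch contrasting the two lock types, assuming a custom MyTimerJob class derived from SPJobDefinition (the job names are hypothetical):

// Runs on every server where the Web Application service is started:
MyTimerJob runEverywhere = new MyTimerJob(
    "contoso-job-run-everywhere",   // hypothetical job name
    webApp,                         // the SPWebApplication to attach to
    null,                           // no specific server
    SPJobLockType.None);            // no lock: every WFE executes it

// Runs once per scheduled occurrence, on the first available server:
MyTimerJob runOnce = new MyTimerJob(
    "contoso-job-run-once",
    webApp,
    null,
    SPJobLockType.Job);             // job-level lock: one execution per run

runOnce.Schedule = new SPDailySchedule { BeginHour = 2, EndHour = 3 };
runOnce.Update();                   // persist the job definition to the farm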

Related

Background tasks/timer jobs in SharePoint O365 with Azure provider-hosted app

I am building a provider-hosted app for SharePoint (O365) which is hosted in Azure. I do all of my logic through CSOM, more specifically using an MVC web project. At the moment, I have some branding logic being executed by the application after an AJAX call to a controller action.
If I have a lot of subsites in my hierarchy, this can take a very long time to execute, which is bad because, while the app will still process my request, leaving the page from which I called the action will prevent me from having any feedback concerning the completion of the task. This is of course because the state of the request is tied directly to the callback of that request in the calling page. This also means that someone could very well launch the request, refresh the page, and then launch it again, since I have no way to tell if a previous request is still executing. Furthermore, 2 different users could launch the same request, resulting in 2 simultaneous executions of that request's logic. Both situations can result in some nasty concurrent modification errors on server side artifacts.
So, what I need is to find a way to check if a certain request is already running, and if that is not the case, launch one that is stateful and asynchronous. The best example I can think of is simply SharePoint O365's own long-running task mechanics: time-intensive tasks (such as installing an app or creating a new site collection) can get launched from a page, and any subsequent refresh or access to that page will display the task as currently running, and even sometimes provide the possibility to cancel it (such as in an app install). The state will also get updated on its own (such as when the site collection creation finishes), which I am not sure is the result of client-side polling or some other mechanism I do not know about.
I have seen some solutions that seemed promising, like using Windows Services directly on Azure or this poor man's timer job, although none seem to fulfill all the requirements I listed above and/or seem complicated to implement for what I want to do. I have a feeling that Timer Jobs could potentially help, but I wanted to have your advice on the situation.
Thanks for your input
Try using an Azure Worker Role. Use CSOM and side-loading of a SharePoint provider-hosted app with Tenant Full Control permissions. The side-loading part enables your worker to read/write to SharePoint Online.
Side-loading is done via /_layouts/appregnew.aspx and /_layouts/appinv.aspx.
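As a rough illustration of the worker-role side, a minimal CSOM round-trip could look like the sketch below (the site URL and credentials are placeholders; a side-loaded provider-hosted app would normally authenticate with app-only OAuth tokens rather than a username and password):

// Requires: using System.Security; using Microsoft.SharePoint.Client;
var siteUrl = "https://contoso.sharepoint.com/sites/branding";  // placeholder
var password = new SecureString();
foreach (char c in "placeholder-password") password.AppendChar(c);

using (var ctx = new ClientContext(siteUrl))
{
    ctx.Credentials = new SharePointOnlineCredentials("admin@contoso.onmicrosoft.com", password);
    Web web = ctx.Web;
    ctx.Load(web, w => w.Title);
    ctx.ExecuteQuery();   // round-trip to SharePoint Online
    // ...run the long-running branding logic here, recording progress
    // somewhere durable (e.g. a status list) so the MVC app can poll it...
}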

WF4 Affinity on Windows Azure and other NLB environments

I'm using Windows Azure and WF4, and my workflow service is hosted in a web role (with N instances). My job now is to find out how to do affinity, in a way that I can send messages to the right workflow instance. To explain this scenario, my workflow (attached) starts with a "StartWorkflow" receive activity, creates 3 "Person" objects and, in a parallel-for-each, waits for the confirmation of these 3 people ("ConfirmCreation" receive activity).
I then started to research how affinity is handled in other NLB environments (I mainly looked for information about how this works on Windows Server AppFabric), but I didn't find a precise answer. So how is it done in other NLB environments?
My next task is to find out how I could implement a system to handle this affinity on Windows Azure and how much this solution would cost (in price, time, and amount of work), to see if it's viable or whether it's better to work with only one web-role instance while we wait for the WF4 host for the Azure AppFabric. The only way I found was to persist the workflow instance. Are there other ways of doing this?
My third, but not last, task is to find out how WF4 handles multiple messages received at the same time. In my scenario, this means how it would behave if the 3 people confirmed at the same time and the confirmation messages were also received at the same time. Since the most logical answer to this problem seems to be a queue, I started looking for information about queues in WF4 and found people speaking about MSMQ. But what is WF4's native message-handling system? Is this handler really a queue or is it another system? How is this concurrency handled?
You shouldn't need any affinity. In fact that's kinda the whole point of durable Workflows. Whilst your workflow is waiting for this confirmation it should be persisted and unloaded from any one server.
As far as persistence goes for Windows Azure you would either need to hack the standard SQL persistence scripts so that they work on SQL Azure or write your own InstanceStore implementation that sits on top of Azure Storage. We have done the latter for a workflow we're running in Azure, but I'm unable to share the code. On a scale of 1 to 10 for effort, I'd rank it around an 8.
As far as multiple messages go, they will be received and delivered to the workflow instance one message at a time. Now, it's possible that every one of those messages goes to the same server, or maybe each one goes to a different server. No matter how it happens, the workflow runtime will attempt to load the workflow from the instance store, see that it is currently locked, and block/retry until the workflow becomes available to process the next message. So you don't have to worry about concurrent access to the same workflow instance as long as you configure everything correctly and the InstanceStore implementation is doing its job.
Here are a few other suggestions:
Make sure you use the PersistBeforeSend option on your SendReply activities.
Configure the following workflow service options:
<workflowIdle timeToUnload="00:00:00" />
<sqlWorkflowInstanceStore ... instanceLockedExceptionAction="AggressiveRetry" />
Using the out of the box SQL instance store with SQL Azure is a bit of a problem at the moment with the Azure 1.3 SDK as each deployment, even if you made 0 code changes, results in a new service deployment meaning that already persisted workflows can't continue. That is a bug that will be solved but a PITA for now.
As Drew said your workflow instance should just move from server to server as needed, no need to pin it to a specific machine. And even if you could that would hurt scalability and reliability so something to be avoided.
Sending messages through MSMQ using the WCF NetMsmqBinding works just fine. Internally, WF uses a completely different mechanism called bookmarks that allows a workflow to stop and resume. Each Receive activity, as well as others like Delay, will create a bookmark and wait for it to be resumed. You can only resume existing bookmarks. Even resuming a bookmark is not a direct action but is put into an internal queue (not MSMQ) by the workflow scheduler and executed through a SynchronizationContext. You get no control over the scheduler, but you can replace the SynchronizationContext when using the WorkflowApplication and so get some control over how and where activities are executed.
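To make the bookmark mechanics concrete, here is a minimal self-hosted sketch (the activity, bookmark name, and payload are illustrative): a custom activity goes idle on a bookmark, and the host resumes it later, which is the same primitive a Receive activity relies on.

// Requires: using System; using System.Activities; using System.Threading;

// A custom activity that idles on a bookmark until the host resumes it.
public class WaitForConfirmation : NativeActivity<string>
{
    // Returning true lets the instance go idle (and persist) at the bookmark.
    protected override bool CanInduceIdle { get { return true; } }

    protected override void Execute(NativeActivityContext context)
    {
        context.CreateBookmark("Confirm", OnConfirmed);
    }

    void OnConfirmed(NativeActivityContext context, Bookmark bookmark, object value)
    {
        Result.Set(context, (string)value);
    }
}

class Program
{
    static void Main()
    {
        var done = new AutoResetEvent(false);
        var app = new WorkflowApplication(new WaitForConfirmation());
        app.Completed = e => done.Set();
        app.Run();

        // Later (after persistence, possibly from another server), the host
        // resumes the bookmark; the scheduler queues the resumption internally.
        app.ResumeBookmark("Confirm", "person-1 confirmed");
        done.WaitOne();
    }
}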

How to create a job in IIS capable of running an extended process

I have a web service app with one web service call that could take anything from 1 hour to 14 hours, depending on the data that needs to be processed and the time of the month.
Is there any way to create a job in IIS that is capable of running this extended process? I also need job management and reporting to be able to see if jobs are running, so that new jobs aren't created on top of others.
I will be working with IIS6 primarily. And would like to use C# code.
Right now I am using a web service call, but I don't like the idea of having web services run for such a long time, and due to the nature of the web service, I can't split the functionality any more.
IIS jobs would be awesome if they are available. Any ideas?
If I were you, I would make a command-line app that is kicked off by the web service. Running a command-line app is pretty straightforward, basically:
// Requires: using System.Diagnostics;
Process p = new Process();
p.StartInfo.UseShellExecute = false;   // launch the exe directly, without the shell
p.StartInfo.FileName = "appname.exe";  // the long-running worker process
p.Start();                             // returns immediately; the process runs on its own
There is a limited number of worker processes per machine, and they aren't really meant for long-running jobs.
One possibility, with a bit of setup cost, is to have your processing run as a Windows service that listens to a message queue (MSMQ or similar), and have your web service simply post the request onto the message queue to be handled by the processing service.
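A rough sketch of that pattern using the classic System.Messaging MSMQ API (the queue path and payload are hypothetical):

// Requires: using System.Messaging;
const string queuePath = @".\private$\LongJobs";   // hypothetical local queue

// Web service side: enqueue the request and return immediately.
if (!MessageQueue.Exists(queuePath))
    MessageQueue.Create(queuePath);
using (var queue = new MessageQueue(queuePath))
{
    queue.Send("process-month-end", "job request");   // body, label
}

// Windows service side: block until a request arrives, then do the long work.
using (var queue = new MessageQueue(queuePath))
{
    Message m = queue.Receive();   // blocks until a message is available
    m.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
    string payload = (string)m.Body;
    // ...run the 1-14 hour processing here, updating a status record
    // that the web service can query for job management and reporting...
}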
Monitoring jobs is more difficult; your web service would need to have a way of querying your processing service to find out its state. This is an IPC (interprocess communication) problem, which has many different solutions with various tradeoffs that depend on your environment and circumstances.
That said, for simple cases, Matt's solution is probably sufficient.

SharePoint timer jobs not getting invoked

I have a timer job which has been deployed to a server with multiple Web front ends.
This timer job reads its configuration from a Hierarchical Object Store.
This timer job is scheduled to run daily on the server.
But the problem is that this timer job is not getting invoked daily. I have implemented event logging in the timer job's Execute() method, but I don't see any logs being generated.
Any ideas as to what could cause a timer job to be not picked up for execution by the SharePoint Timer Service? How can I troubleshoot this problem?
Are there any "gotcha"s for running timer jobs on a farm with multiple front ends? Will the timer job get executed on all the web front ends, or on any one of them arbitrarily? How do I know which machine will have my event logs?
This might be a stupid question, but does having multiple front ends for load balancing affect the way Hierarchical Object Stores behave?
EDIT:
One of the commenters, Sean McDounough (thanks, Sean!), made a very good point:
"whether or not the timer job runs on all WFEs will be a function of the SPJobLockType enum value you specified in the constructor. Using a value of "None" means that the job will run on all WFEs."
Now, my timer job is responsible for sending periodic mails to a list of users. Currently it is marked as SPJobLockType.Job.
If I change this to SPJobLockType.None, does this mean that my timer job will be executed on all the WFEs separately? (This is not desired; it would spam all the users with multiple emails.)
Or does it mean that the timer job will execute on any one of the WFEs, arbitrarily?
Try restarting the SharePoint timer service from the command-line using NET STOP SPTIMERV3 followed by a NET START SPTIMERV3. My guess is that the timer service is running with an older version of your .NET assembly. The timer service does not automatically reload assemblies when you upgrade the WSP solution.
To do this through the Services console instead, follow these steps:
Click Start, point to Administrative Tools, and then click Services.
Right-click Windows SharePoint Services Timer, and then click Restart (or Stop, followed by Start).
This URL helped me.

How do you instruct a SharePoint Farm to run a Timer Job on a specific server?

We have an SP timer job that was running fine for quite a while. Recently the admins enlisted another server into the farm, and consequently SharePoint decided to start running this timer job on that other server. The problem is that the server does not have all the dependencies (i.e., Oracle) installed, and so the job is failing. I'm just looking for the path of least resistance here. My question: is there a way to force a timer job to run on the server you want it to?
[Edit]
If I can do it through code that works for me. I just need to know what the API is to do this if one does exist.
I apologize if I'm pushing for the obvious; I just haven't seen anyone drill down on it yet.
Constraining a custom timer job (that is, your own timer job class that derives from SPJobDefinition) is done by controlling constructor parameters.
Timer jobs typically run on the server where they are submitted (as indicated by vinny) assuming no target server is specified during the creation of the timer job. The two overloaded constructors for the SPJobDefinition type, though, accept an SPServer and an SPJobLockType as the third and fourth parameters, respectively. Using these two parameters properly will allow you to dictate where your job runs.
By specifying your target server as the SPServer and an SPJobLockType of "Job," you can constrain the timer job instance you create to run on the server of your choice.
For documentation on what I've described, see MSDN: http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.administration.spjobdefinition.spjobdefinition.aspx.
I don't know anything about the code you're running, but custom timer jobs are commonly setup during Feature activation. I got the sense that your codebase might not be your own (?); if so, you might want to look for the one or more types/classes that derive from SPFeatureReceiver. In the FeatureActivated method of such classes is where you might find the code that actually carries out the timer job instantiation.
Of course, you'll also want to look at the custom timer job class (or classes) themselves to see how they're being instantiated. Sometimes developers will build the instantiation of the class into the class itself (via Factory Method pattern, for example). Between the timer job class and SPFeatureReceiver implementations, though, you should be on the way towards finding what needs to change.
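Putting those pieces together, a feature receiver along these lines could re-register the job pinned to one server. This is a minimal sketch: MyTimerJob, the job name, and the server name are all hypothetical, and the feature is assumed to be scoped to the web application.

// Requires: using System.Linq; using Microsoft.SharePoint;
//           using Microsoft.SharePoint.Administration;
public class TimerJobFeatureReceiver : SPFeatureReceiver
{
    const string JobName = "contoso-oracle-job";   // hypothetical

    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        // For a web-application-scoped feature, Parent is the SPWebApplication.
        var webApp = (SPWebApplication)properties.Feature.Parent;

        // Delete any stale registration of the job first.
        SPJobDefinition existing =
            webApp.JobDefinitions.FirstOrDefault(j => j.Name == JobName);
        if (existing != null)
            existing.Delete();

        // Pin the job to the one server that has the Oracle client installed.
        SPServer target = SPFarm.Local.Servers["APPSERVER01"];   // hypothetical
        var job = new MyTimerJob(JobName, webApp, target, SPJobLockType.Job);
        job.Schedule = new SPDailySchedule { BeginHour = 1, EndHour = 2 };
        job.Update();
    }
}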
I hope that helps!
Servers in a farm need to be identical.
If you happen to use VMs for your web front ends, you can snap a server and provision copies so that you know they are all identical.
Timer jobs by definition run on all web front ends.
If you need scheduled logic to run on a specific server, you either need to code this specifically in the timer job, or use a "standard" NT Service instead.
I think a side effect of setting SPJobLockType to 'Job' is that it'll execute on the server where the job is submitted.
You could implement a web service with the business logic and deploy that web service to one machine. Then your timer job could trigger the web service periodically.
Then it shouldn't matter much where your timer job runs; SharePoint decides itself where to run the timer job.
