In Kentico 9, when I run the task "Delete inactive contacts", it never actually runs and the result is always "Rescheduled to delete more contacts in next off-peak period".
I've tried changing the settings to run once a week, and I've tried creating a custom IDeleteContacts implementation and setting the task to use that custom class, but I always get the same result.
Any ideas?
By default, Kentico runs its scheduled tasks in the tail of regular web requests. That's fine if you have traffic 24/7. If you don't, you can run into all kinds of nastiness, including the issue you're describing now, because scheduled tasks are not executing.
If you're running on a Windows server you can set up a service to trigger scheduled tasks. If that's not an option, you can set up monitoring to hit your site every couple of minutes, for example UptimeRobot or Application Insights. You'll get the added bonus of being notified whenever the site goes down.
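If you'd rather roll your own ping than use an external monitor, a console app as small as this (the URL is a placeholder), run from Windows Task Scheduler every few minutes, is enough to keep the task queue moving:

    using System.Net;

    class PingSite
    {
        static void Main()
        {
            // Any ordinary request will do: Kentico processes due scheduled
            // tasks at the tail end of regular page requests.
            using (var client = new WebClient())
            {
                client.DownloadString("https://www.example.com/");
            }
        }
    }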
If you really need to clean up the EMS contacts because the data is getting out of control, you can access the database directly and trigger the same stored procedure that the scheduled task uses. It's called [Proc_OM_Contact_MassDelete] and takes a where clause and a batch size. The where clause is where you specify the delete policy. For example:
ContactCreated < GETDATE()-60 AND ([ContactEmail] IS NULL OR [ContactEmail]='')
With this where clause the stored proc would process contacts that were created over 60 days ago and don't have an e-mail address yet.
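For illustration, here's a minimal sketch of calling the procedure from a .NET console app. The parameter names (@where, @batchSize) are assumptions on my part; check the actual definition of the procedure in your database before running anything like this:

    using System.Data;
    using System.Data.SqlClient;

    class PurgeInactiveContacts
    {
        static void Main()
        {
            const string deletePolicy =
                "ContactCreated < GETDATE()-60 AND ([ContactEmail] IS NULL OR [ContactEmail]='')";

            using (var conn = new SqlConnection("<your Kentico connection string>"))
            using (var cmd = new SqlCommand("Proc_OM_Contact_MassDelete", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.CommandTimeout = 600; // mass deletes can be slow on large EMS databases

                // Assumed parameter names -- verify against the stored procedure definition.
                cmd.Parameters.AddWithValue("@where", deletePolicy);
                cmd.Parameters.AddWithValue("@batchSize", 1000);

                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }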
Please be aware that large volumes of EMS data will require database index tuning for this procedure to run within an acceptable period of time. This is true for EMS in general when your site has a decent amount of traffic.
If the standard Kentico cleanup doesn't work, for example because the database is unable to deal with millions of contacts, we've written a script to purge all EMS data. Use with caution ;)
Have you applied the latest hotfix (9.0.50) to your project? There was a bug where, if the deletion of inactive contacts took longer than 1 minute, the next run of the "Delete inactive contacts" scheduled task was not set and the task did not execute again. You can download the package directly from this page: https://devnet.kentico.com/download/hotfixes
The "Delete inactive contacts" scheduled task only runs between 2am and 6am based on the servers time the site is running on. You can see this in the documentation. It only ever deletes a batch of 1000 contacts and never more. If you want to "trick" the site into running the scheduled task more, update the time on the server to 1:58am and restart the site.
I use Power Automate to refresh a Power BI dashboard when I update its input files on SharePoint.
Since the inputs are generated via another automated process, the updates follow each other very closely, which triggers the start of one Power Automate job per updated file.
This situation leads to failed jobs since they are all running at the same time. Even worse, the first job succeeds but the refresh then happens when not all files are updated yet. It is also a waste, since I would only need one to run.
To accommodate for that, I tried introducing a delay in the job. This makes sure that when the first job runs, refreshing Power BI will work. However, the subsequent runs all fail, so I would still like to find a way not to run them at all.
Can anyone guide me in the right direction?
For this requirement, you can set the SharePoint trigger to run only one instance at a time. Please refer to the steps below:
1. Click the "..." button on the trigger and click "Settings".
2. Enable the "Concurrency Control" limit and set the Degree of Parallelism to 1.
Then your logic app cannot run multiple instances at the same time.
I am developing an application using Azure Cloud Services and Web API. I would like to allow users who create a consultation session to change the price of that session; however, I would like to give all users 30 days to leave the session before the new price takes effect for the members currently signed up. My first thought is to use queue storage and set the visibility timeout to the 30-day limit, but the queue could grow really fast over time, especially if messages shouldn't be processed for 30 days; not to mention the ordering issues. I am looking at the task scheduler as well, but session pricing changes are not recurring; they happen at random. Is the queue idea a good approach, or is there a better and more efficient way to accomplish this?
The stuff you are trying to do should be done with a relational database. You can use timestamps to record when prices for session changed. I wouldn't use a queue at all for this. A queue is more for passing messages in a distributed system. Your problem is just about tracking what prices changed on what sessions and when. That data should be modeled in a database.
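As a hedged sketch of that modeling (all names here are hypothetical), you can record each price change with a timestamp and compute the effective price lazily, so no scheduler is needed at all:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Hypothetical model: every price change is recorded with when it was made.
    class PriceChange
    {
        public decimal NewPrice { get; set; }
        public DateTime ChangedAtUtc { get; set; }
    }

    class Session
    {
        public decimal OriginalPrice { get; set; }
        public List<PriceChange> Changes { get; } = new List<PriceChange>();

        // The effective price is the most recent change that is at least
        // 30 days old; newer changes haven't taken effect for existing members.
        public decimal EffectivePrice(DateTime nowUtc)
        {
            var applied = Changes
                .Where(c => nowUtc - c.ChangedAtUtc >= TimeSpan.FromDays(30))
                .OrderByDescending(c => c.ChangedAtUtc)
                .FirstOrDefault();
            return applied != null ? applied.NewPrice : OriginalPrice;
        }
    }

Members who leave within the 30-day window simply never see the new price; the database only stores facts (what changed and when), and the rule lives in one query.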
I think this scenario is more suitable for Azure Scheduler. Programmatically create a job with a one-time recurrence, with the start date set 30 days later, so it runs once. Once this job is triggered automatically by the scheduler, have its action call back to one of your APIs/services to do the price and other required updates, and also remove the job from the scheduler as part of that action to keep the jobs list clean. The premium plan of an Azure Scheduler job collection will give you an unlimited number of jobs to run.
Hope this is exactly what you were looking for...
I would consider using Azure WebJobs. A WebJob basically gives you the ability to run a .NET console application within the context of an Azure Web App. It can be run on demand, continuously, or on a recurring schedule. If your processing requirements are low enough to allow it, WebJobs can also run in the same process your Web App is running in, which saves you $$$ as they are free that way.
You could schedule the WebJob to run once or twice per day to examine the situation and react as appropriate. Since it's really just a .NET console application, you have ultimate flexibility.
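A scheduled WebJob is just a console app whose Main does the work and exits; a bare-bones sketch for the pricing scenario (the repository calls are hypothetical) would look like this:

    using System;

    class Program
    {
        static void Main()
        {
            Console.WriteLine("Price rollover check started at " + DateTime.UtcNow.ToString("O"));

            // Hypothetical: load pending price changes whose 30-day grace
            // period has elapsed and apply them to the affected sessions.
            // foreach (var change in repository.GetMaturedPriceChanges(DateTime.UtcNow))
            // {
            //     repository.ApplyPriceChange(change);
            // }

            Console.WriteLine("Price rollover check finished.");
        }
    }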
I've created a simple Azure WebJob that uses a QueueInput trigger. It deployed without any problems and I've scheduled it via the management portal so that it 'Runs continuously'.
Initial testing seemed fine, with the job triggering shortly after placing anything in the queue.
By chance I then left it about a day before placing anything else in the queue. This time the job hadn't triggered within a few minutes so I logged in to the portal to view the invocation logs - which showed that the job had just that moment been triggered.
That seemed too much of a coincidence so I left it another day before placing something in the queue. Again, the job didn't trigger. I left it overnight and by morning it still hadn't triggered.
When I logged in to the management portal this time I noticed that the job was marked as 'Aborted' on the WebJobs page. It was like that only for about 10 seconds before the status changed to 'Running'. And then the job immediately triggered from what was placed in the queue the night before, as expected.
As it's an alpha release I'm expecting glitches. Just wondering whether anyone else has had a similar experience.
For the WebJobs SDK, your job must be running in order to listen for triggers (new queue messages, new blobs, etc.). The Azure Websites free tier has quotas and will put your job to sleep, which means it's no longer listening for triggers. Using the site may cause it to come back to life and start listening for triggers again.
The SDK dashboard will show a warning icon next to functions if the hosting job is not running (it detects this via heartbeats).
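For reference, a continuous SDK job that listens on a queue looks roughly like this in later SDK versions (the alpha used QueueInput rather than QueueTrigger; the queue name here is an assumption):

    using System.IO;
    using Microsoft.Azure.WebJobs;

    class Program
    {
        static void Main()
        {
            // The JobHost keeps the process alive and listens for triggers;
            // if the host isn't running (e.g. the free-tier site was unloaded),
            // nothing fires until it wakes up again.
            var host = new JobHost();
            host.RunAndBlock();
        }
    }

    public class Functions
    {
        // Fires when a message lands on the (assumed) "orders" queue.
        public static void ProcessQueueMessage([QueueTrigger("orders")] string message, TextWriter log)
        {
            log.WriteLine("Processed: " + message);
        }
    }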
Make sure that your website is configured with the "Always On" setting enabled.
If your site contains continuously running jobs they may not perform reliably if this setting is disabled.
http://azure.microsoft.com/en-us/documentation/articles/web-sites-configure/
By default, web sites are unloaded if they have been idle for some period of time. This lets the system conserve resources. You can enable the Always On setting for a site in Standard mode if the site needs to be loaded all the time. Because continuous web jobs may not run reliably if Always On is disabled, you should enable Always On when you have continuous web jobs running on the site.
Is it possible to check whether a SharePoint (actually WSS 3.0) timer job has run when it was scheduled to?
The reason is that we have a few daily custom jobs, and we want to make sure they always run, even if the server has been down during the time slot for the jobs to run; so I'd like to check them and then run them.
And is it possible, when creating them, to add a setting similar to the one for standard Windows scheduled tasks: "Run task as soon as possible after a scheduled start is missed"?
Check it on the job status page, and then look at the logs in the 12 hive folder for further details:
Central Administration / Operations / Monitoring / Timer Jobs / Check Job Status
As far as restarting a job when it is missed is concerned, that would not be possible with OOTB features. It makes sense as well: there are a lot of jobs which execute at particular intervals, and if everything started at the same time, the load on the server would be very high.
You can look at the LastRunTime property of an SPJobDefinition to see when the job was actually executed. As far as I can see in Reflector, the value of this property is loaded from the database and hence it should reflect the time it was actually executed.
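A rough sketch of that check (the job name and web application URL are placeholders); note that SPJobDefinition.Execute runs the job synchronously in the current process rather than through the timer service, so treat this as a starting point only:

    using System;
    using Microsoft.SharePoint.Administration;

    class TimerJobCheck
    {
        static void Main()
        {
            // Assumed: the custom job is attached to this web application
            // and is expected to run at least once a day.
            var webApp = SPWebApplication.Lookup(new Uri("http://yourserver"));

            foreach (SPJobDefinition job in webApp.JobDefinitions)
            {
                if (job.Name == "MyCustomDailyJob" &&
                    job.LastRunTime < DateTime.Now.AddDays(-1))
                {
                    Console.WriteLine("Job missed its window (last ran " +
                        job.LastRunTime + "); running it now.");
                    // Runs the job in this process, not via OWSTIMER.
                    job.Execute(Guid.Empty);
                }
            }
        }
    }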
I've hit an issue with creating a timer job on demand from within an event handler. It works fine on my dev machine where the user is also the farm administrator. On the staging server (and production too), this user will be different. Apparently it needs to be a farm admin who creates/updates timer jobs as they have access to the configuration db.
I used a timer job to cope with the possibility that many items could be updated at once using the datasheet; if that happened, I wanted a rollup update to take place at a defined period after the edits.
I'm now thinking I may have to set up a recurring timer job instead of a "once" job and within the timer job, check for certain conditions being true before doing any work.
Any suggestions on how I could achieve my desired result of having a rollup function run after any updates, but not after every one?
The previous answer is not correct, or at least not correct for SharePoint 2010. You cannot create job definitions in 2010 this way, even with elevated privileges, as they must be created from Central Administration. I had a similar problem and this was my finding.
This is a blog I wrote about that.
I would suggest that you make an event receiver that waits with a delay of, say, 10 minutes (timer or thread sleep) and registers a flag in, say, the web property bag so that another instance will not run. This could solve the problem.
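A rough sketch of that idea (the key name and rollup routine are hypothetical); the property bag entry acts as a crude lock so only one pending rollup exists at a time:

    using System;
    using System.Threading;
    using Microsoft.SharePoint;

    public class RollupEventReceiver : SPItemEventReceiver
    {
        private const string PendingKey = "RollupPending"; // hypothetical key

        public override void ItemUpdated(SPItemEventProperties properties)
        {
            using (SPWeb web = properties.OpenWeb())
            {
                // If a rollup is already pending, let it cover this edit too.
                if (web.Properties[PendingKey] == "1")
                    return;

                web.Properties[PendingKey] = "1";
                web.Properties.Update();

                // Wait out the burst of datasheet edits, then roll up once.
                Thread.Sleep(TimeSpan.FromMinutes(10));
                try
                {
                    // DoRollup(web); // hypothetical rollup routine
                }
                finally
                {
                    web.Properties[PendingKey] = "0";
                    web.Properties.Update();
                }
            }
        }
    }

ItemUpdated is asynchronous by default, so the sleep at least stays off the user's request path, but it still ties up a worker thread; a short recurring check with persisted state would scale better.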
See http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.spsecurity.runwithelevatedprivileges.aspx to fix the permissions problem.
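The pattern from that link, for completeness (the job definition class and schedule are hypothetical, and note the caveat above that this is not enough for creating job definitions in SharePoint 2010):

    using System;
    using Microsoft.SharePoint;
    using Microsoft.SharePoint.Administration;

    class ElevatedJobCreation
    {
        static void CreateRollupJob(SPWebApplication webApplication)
        {
            // Runs the delegate under the application pool identity instead
            // of the current user, which may have the configuration database
            // rights the current user lacks -- but, per the answer above,
            // this is not sufficient in SharePoint 2010.
            SPSecurity.RunWithElevatedPrivileges(delegate()
            {
                // Hypothetical one-off job definition and schedule.
                // var job = new MyRollupJobDefinition("MyRollupJob", webApplication);
                // job.Schedule = new SPOneTimeSchedule(DateTime.Now.AddMinutes(10));
                // job.Update();
            });
        }
    }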