Testing Workflow History Autocleanup in SharePoint

I am facing a rather peculiar problem here. We have an OOTB approval workflow that logs all its workflow history into a workflow history list. This list gets purged every 60 days. To increase the period for which the workflow history is retained, I googled around and found that I have to run this code:
using (SPSite wfSite = new SPSite(siteUrl))
{
    using (SPWeb wfWeb = wfSite.OpenWeb(webName))
    {
        SPList wfList = wfWeb.Lists[listName];
        foreach (SPWorkflowAssociation a in wfList.WorkflowAssociations)
        {
            // Match the target association by its own name
            if ("approval 1" == a.Name.ToLowerInvariant())
            {
                a.AutoCleanupDays = newCleanupDays;
                // Persist the change for this association
                wfList.UpdateWorkflowAssociation(a);
                assoCounter++;
            }
        }
    }
}
The code works fine, in the sense that it does not throw any exceptions. So far so good. But my problem now is that I need to test whether the code actually works. So I set the newCleanupDays variable to 0, but I see that new workflow activities are still getting logged in the workflow history list. I can set the variable to 1, but that would mean waiting an entire day to see if the code works.
Is there any way in which I can test my scenario, so that I can set the autocleanup days to 1 and don't have to wait an entire day to see if my code works? Is there any way I can "fool" the system into thinking that a day has elapsed? I tried changing the system time and restarting the server and everything, but it didn't work for me.

Changing the system time should work, but you are going to have to kick off the timer job that initiates the workflows. Don't restart the server after the time change.
One warning: SharePoint really, really does not like travelling back in time. Any documents created in the "future" are going to have issues, so remember to test in a new web site that can be deleted when you roll back to "now".

Related

Cleaning up Netsuite scripts

Are there any suggestions for cleaning up unused scripts in NetSuite? We have an implementation that includes scripts and bundles from a third party, plus innumerable scripts (especially RESTlets and workflows) that have been written, changed, rewritten, and tossed aside by multiple developers. Many of the scripts were released without error logging or debug log statements, which is the only way I can think of to determine when, and how many times, a script is run.
I am looking for a way to determine just that - when and how often every script and/or deployment is run (hopefully without going into each script and adding log info) - so we can clean up before the new version is implemented.
Thanks!
In version 14.2 (coming soon), there is a script queue monitor tool that should tell you when scripts are running, which queue is being used, etc (SuiteCloud Plus customers). See the release notes for 14.2 for more detail.
The best way I can find is doing a Script Deployment search. You can condition on Is Deployed = Yes/No, Status anyOf/noneOf Released/Scheduled, and Execution Log: Date within last year.
These are only example conditions based on what you mentioned. The Yes/No and anyOf/noneOf depend on whether you want to see deployments that are inactive (to clean them up) or those that are active. The execution-log condition only works if the script errored (which does not require an nlapiLogExecution() call) or if there is a logExecution call in the script.
You can play with these conditions a bit based on what you know of your scripts. You can do a similar thing for Workflows with a workflow search.
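Before wiring the conditions above into an actual search, they can be sketched as plain data. This is a hypothetical illustration only - the field ids and operator spellings are assumptions to check against your account; SuiteScript 1.0 filters take the shape (fieldId, join, operator, value):

```javascript
// Hypothetical sketch: the filter tuples for the Script Deployment search
// described above. Each tuple would become an nlobjSearchFilter and be
// passed to nlapiSearchRecord('scriptdeployment', ...).
function buildStaleDeploymentFilters() {
  return [
    ['isdeployed', null, 'is', 'T'],                     // only active deployments
    ['status', null, 'anyof', ['RELEASED', 'SCHEDULED']]
    // plus an execution-log date condition, which (as noted above) only
    // matches scripts that errored or that call nlapiLogExecution()
  ];
}
```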
You could undeploy the scripts using a saved search. For example, to undeploy the script deployments created before the last fiscal year:
var start = function (request, response) {
    var filters = [];
    filters[0] = new nlobjSearchFilter('formuladate', null, 'before', 'lastfiscalyear');
    var columns = [];
    columns[0] = new nlobjSearchColumn('internalid');
    // nlapiSearchRecord returns null when there are no results
    var results = nlapiSearchRecord('scriptdeployment', null, filters, columns) || [];
    for (var i = 0; i < results.length; i++) {
        var rec = nlapiLoadRecord('scriptdeployment', results[i].getValue('internalid'));
        rec.setFieldValue('isdeployed', 'F'); // mark the deployment as undeployed
        nlapiSubmitRecord(rec, true);
    }
};

CRM 2011: ExecuteMultipleRequest Freezing

The scenario I have is that I have a plugin which needs to run a whole bunch of AddMembersTeamRequest and RemoveMembersTeamRequest (around 2000 of each)
I am having trouble with the following code:
var executeMultipleRequest = new ExecuteMultipleRequest();
executeMultipleRequest.Settings = new ExecuteMultipleSettings()
{
    ContinueOnError = false,
    ReturnResponses = false
};
var organizationRequestCollection = new OrganizationRequestCollection();
foreach (var sharedRecordsOwningTeam in sharedRecordsOwningTeams)
{
    organizationRequestCollection.Add(CreateAddMembersTeamRequest(userId, sharedRecordsOwningTeam.Id));
}
executeMultipleRequest.Requests = organizationRequestCollection;
service.Execute(executeMultipleRequest);
However it doesn't matter how many requests are part of that ExecuteMultipleRequest as it just seems to freeze the process (I have tried having just one request in the ExecuteMultipleRequest collection)
But the following code seems to work fine:
foreach (var sharedRecordsOwningTeam in sharedRecordsOwningTeams)
{
    service.Execute(CreateAddMembersTeamRequest(userId, sharedRecordsOwningTeam.Id));
}
As you can see, the problem with the second approach is that it executes around 2000+ separate requests, one at a time.
Would anyone know why using the ExecuteMultipleRequest freezes the process entirely? (Even when there is only 1 add/remove team member request in the request collection)
I think I figured it out.
I think it was freezing on me because I was trying to remove a user from the default team of their current Business Unit.
For some reason the request wasn't erroring and instead just sat there.
However I think I should also point out that using an ExecuteMultipleRequest wasn't any faster than running multiple AddMembersTeamRequest.
Even a giant AssociateRequest wasn't any faster.
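One more note on batching: ExecuteMultipleRequest batches are capped (1,000 requests per batch by default, though the exact limit depends on the deployment), so a 2000-request job has to be split into multiple batches anyway. A minimal sketch of the chunking logic, in JavaScript for illustration only - the C# version would slice the OrganizationRequestCollection the same way:

```javascript
// Split a flat list of requests into batches no larger than batchSize,
// so each batch can be sent in its own ExecuteMultipleRequest.
function chunkRequests(requests, batchSize) {
  var batches = [];
  for (var i = 0; i < requests.length; i += batchSize) {
    batches.push(requests.slice(i, i + batchSize));
  }
  return batches;
}
```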

How can I logout an administrator in SilverStripe 3.1.x after period of inactivity?

How do I expire the administrator session after a period of inactivity in SilverStripe 3.1.x? Is there a config option for this?
I searched and found the following code snippet which, when placed in the Page_Controller class, works for frontend users but is totally ineffective in the administration area.
public function init() {
    parent::init();
    self::logoutInactiveUser();
}

public static function logoutInactiveUser() {
    $inactivityLimit = 1; // in minutes - deliberately set to 1 minute for testing purposes
    $inactivityLimit = $inactivityLimit * 60; // converted to seconds
    $sessionStart = Session::get('session_start_time');
    if (isset($sessionStart)) {
        $elapsed_time = time() - $sessionStart;
        if ($elapsed_time >= $inactivityLimit) {
            $member = Member::currentUser();
            if ($member) $member->logOut();
            Session::clear_all();
            // $this is not available in a static method, so use the current controller
            Controller::curr()->redirect(Director::baseURL() . 'Security/login');
        }
    }
    Session::set('session_start_time', time());
}
After over 1 minute of inactivity, the admin user is still logged in and the session has not timed out.
For people like myself still searching for a solution to this, there's a much simpler alternative. As it turns out, the only good solution at the moment is indeed to disable LeftAndMain.session_keepalive_ping and simon_w's solution will not work precisely because of this ping. Also, disabling this ping should not cause data loss (at least not for SilverStripe 3.3+) because the user will be presented with an overlay when they attempt to submit their work. After validating their credentials, their data will be submitted to the server as usual.
Also, for anyone who (like myself) was looking for a solution on how to override the CMS ping via LeftAndMain.session_keepalive_ping using _config.yml keep reading.
Simple Fix: In your mysite/_config.php, simply add:
// Disable back-end AJAX calls to /Security/ping
Config::inst()->update('LeftAndMain', 'session_keepalive_ping', false);
This will prevent the CMS from refreshing the session, which will then naturally expire on its own behind the scenes (and will no longer be valid on the next request). That way, the session-timeout setting you may already have in _config.yml will actually be respected, allowing you to log out a user who's been inactive in the CMS. Again, data should not be lost, for the reasons mentioned in the first paragraph.
You can optionally manually override the session timeout value in mysite/_config/config.yml to help ensure it actually expires at some explicit time (e.g. 30min below):
# Set session timeout to 30min.
Session:
  timeout: 1800
You may ask: Why is this necessary?
Because, while the bug (or functionality?) that prevented you from overriding the LeftAndMain.session_keepalive_ping setting to false was supposedly fixed in framework PR #3272, it was actually reverted soon thereafter in PR #3275.
I hope this helps anyone else confused by this situation like I was!
This works, but would love to hear from the core devs as to whether or not this is best practice.
In mysite/code I created a file called MyLeftAndMainExtension.php with the following code:
<?php
class MyLeftAndMainExtension extends Extension {
    public function onAfterInit() {
        self::logoutInactiveUser();
    }

    public static function logoutInactiveUser() {
        $inactivityLimit = 1; // in Minutes - deliberately set to 1 minute for testing
        $inactivityLimit = $inactivityLimit * 60; // Converted to seconds
        $sessionStart = Session::get('session_start_time');
        if (isset($sessionStart)) {
            $elapsed_time = time() - Session::get('session_start_time');
            if ($elapsed_time >= $inactivityLimit) {
                $member = Member::currentUser();
                if ($member) $member->logOut();
                Session::clear_all();
                Controller::curr()->redirect(Director::baseURL() . 'Security/login');
            }
        }
        Session::set('session_start_time', time());
    }
}
Then I added the following line to mysite/_config.php
LeftAndMain::add_extension('MyLeftAndMainExtension');
That seemed to do the trick. If you prefer to do it through yml, you can add this to mysite/_config/config.yml :
LeftAndMain:
  extensions:
    - MyLeftAndMainExtension
The Session.timeout config option is available for setting an inactivity timeout for sessions. However, setting it to anything greater than 5 minutes isn't going to work in the CMS out of the box.
Having a timeout in the CMS isn't productive, and your content managers will end up ruing the timeout. This is because it is possible (and fairly common) to be active in the CMS, while appearing inactive from the server's perspective (say, you're writing a lengthy article). As such, the CMS is designed to send a ping back to the server every 5 minutes to ensure users are logged in. While you can stop this behaviour by setting the LeftAndMain.session_keepalive_ping config option to false, I strongly recommended against doing so.

Error registering SharePoint WebDeleting event receiver in some environments

I am trying to register a WebDeleting event receiver within SharePoint. This works fine in my development environment, but not in several staging environments. The error I get back is "Value does not fall within the expected range.". Here is the code I use:
SPSecurity.RunWithElevatedPrivileges(delegate()
{
    using (SPSite elevatedSite = new SPSite(web.Site.ID))
    {
        using (SPWeb elevatedWeb = elevatedSite.OpenWeb(web.ID))
        {
            try
            {
                elevatedWeb.AllowUnsafeUpdates = true;
                SPEventReceiverDefinition eventReceiver = elevatedWeb.EventReceivers.Add(new Guid(MyEventReciverId));
                eventReceiver.Type = SPEventReceiverType.WebDeleting;
                Type eventReceiverType = typeof(MyEventHandler);
                eventReceiver.Assembly = eventReceiverType.Assembly.FullName;
                eventReceiver.Class = eventReceiverType.FullName;
                eventReceiver.Update();
                elevatedWeb.AllowUnsafeUpdates = false;
            }
            catch (Exception ex)
            {
                // Do stuff...
            }
        }
    }
});
I realize that I can do this through a feature element file (I am trying that approach now), but would prefer to use the above approach.
The errors I consistently get in the ULS logs are:
03/11/2010 17:16:57.34 w3wp.exe (0x09FC) 0x0A88 Windows SharePoint Services Database 6f8g Unexpected Unexpected query execution failure, error code 3621. Additional error information from SQL Server is included below. "The statement has been terminated." Query text (if available): "{?=call proc_InsertEventReceiver(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)}"
03/11/2010 17:16:57.34 w3wp.exe (0x09FC) 0x0A88 Windows SharePoint Services General 8e2s Medium Unknown SPRequest error occurred. More information: 0x80070057
Any ideas?
UPDATE - Some interesting things I have learned...
I modified my code to do the following:
Called EventReceivers.Add() without the GUID, since most examples I see do not pass one
Gave the event receiver a Name and Sequence number, since most examples I see do that
I deployed this change along with some extra trace statements that go to the ULS logs and after doing enough iisresets and clearing the GAC of my assembly, I started to see my new trace statements in the ULS logs and I no longer got the error!
So, I started to go back towards my original code to see what change exactly helped. I finally ended up with the original version in source control and it still worked :-S.
So the answer is clearly that it is some caching issue. However, when I was originally trying to get it to work, I tried IISRESETs, restarting SharePoint services such as OWSTimer (which, I believe, runs the event handler, but probably isn't involved in the event registration where I am getting the error), and even a reboot to make sure no assembly caching was going on - and none of that helped before.
The only thing I have to go on is maybe following steps such as:
Clear the GAC of the assembly that contains the registration code and event handler class.
Do an IISRESET.
Uninstall the WSP.
Do an IISRESET.
Install the WSP.
Do an IISRESET.
To get it working I never did a reboot or restarted SharePoint services, but I had done those prior to getting it working (before changing my code).
I suppose I could dig more with Reflector to see what I can find, but I believe you hit a dead end (unmanaged code) pretty quickly. I wonder what could be holding on to the old DLL? I can't imagine that SQL Server would be. Even so, a reboot would have fixed that (the entire farm, including SQL Server, is on the same machine in this environment).
So, it appears that the whole problem was creating the event receiver by providing the GUID.
EventReceiverDefinition eventReceiver = elevatedWeb.EventReceivers.Add(new Guid(MyEventReciverId));
Now I am doing:
EventReceiverDefinition eventReceiver = elevatedWeb.EventReceivers.Add();
Unfortunately this means when I want to find out if the event is already registered, I must do something like the code below instead of a single one liner.
// Had to use the below instead of: web.EventReceivers[new Guid(MyEventReceiverId)]
private SPEventReceiverDefinition GetWebDeletingEventReceiver(SPWeb web)
{
    Type eventReceiverType = typeof(MyEventHandler);
    string eventReceiverAssembly = eventReceiverType.Assembly.FullName;
    string eventReceiverClass = eventReceiverType.FullName;

    foreach (SPEventReceiverDefinition eventReceiverIter in web.EventReceivers)
    {
        if (eventReceiverIter.Type == SPEventReceiverType.WebDeleting &&
            eventReceiverIter.Assembly == eventReceiverAssembly &&
            eventReceiverIter.Class == eventReceiverClass)
        {
            return eventReceiverIter;
        }
    }
    return null;
}
It's still not clear why things seemed to linger and require some cleanup (iisreset, reboots, etc.) so if anyone else has this problem keep that in mind.

Why does my SharePoint workflow fail when the client is running Vista or Windows 7?

I have a similar situation to this question.
I have a custom sequential SharePoint workflow, developed in Visual Studio 2008. It is associated with an InfoPath form submitted to a form library, and it is configured to start automatically when an item is created.
It works sometimes. Sometimes it just fails to start.
Just like the question linked above, I checked in the debugger, and the issue is that the InfoPath fields published as columns in the library are empty when the workflow fires. (I access the fields with workflowProperties.Item["fieldName"].) But there appears to be a race condition, as those fields actually show up in the library view, and if I terminate the failed workflow and restart it manually, it works fine!
After a lot of head-scratching and testing, I've determined that the workflow will start successfully if the user is running any version of IE on Windows XP, but it fails if the same user submits the same form data from a Vista or Windows 7 client machine.
Does anyone have any idea why this is happening?
I have used another solution, which waits only until the InfoPath property is available (or 60 seconds max):
public SPWorkflowActivationProperties workflowProperties =
    new SPWorkflowActivationProperties();

private void onOrderFormWorkflowActivated_Invoked(object sender, ExternalDataEventArgs e)
{
    SPListItem workflowItem;
    workflowItem = workflowProperties.List.GetItemById(workflowProperties.ItemId);
    int waited = 0;
    int maxWait = 60000; // max wait time in ms
    while (workflowItem["fieldName"] == null && (waited < maxWait))
    {
        System.Threading.Thread.Sleep(1);
        waited++;
        workflowItem = workflowProperties.List.GetItemById(workflowProperties.ItemId);
    }
    // For testing: write the delay time to a workflow history event
    SPWorkflow.CreateHistoryEvent(
        workflowProperties.Web,
        workflowProperties.WorkflowId,
        (int)SPWorkflowHistoryEventType.WorkflowComment,
        workflowProperties.OriginatorUser, TimeSpan.Zero,
        waited.ToString() + " ms", "Waiting time", "");
}
workflowProperties.Item will never pick up the InfoPath property in the code above; workflowProperties.List.GetItemById(workflowProperties.ItemId) will, after some delay.
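Stripped of the SharePoint specifics, the workaround above is just poll-until-non-null with a retry cap. A minimal sketch in JavaScript for illustration (the hypothetical fetch callback stands in for the GetItemById call, and the loop counter for the millisecond sleep):

```javascript
// Poll fetch() until it returns a non-null value or maxTries is exhausted.
// Returns the value (or null on timeout) plus how many retries were needed.
function waitForValue(fetch, maxTries) {
  var value = fetch();
  var tries = 0;
  while (value === null && tries < maxTries) {
    tries++;
    value = fetch();
  }
  return { value: value, tries: tries };
}
```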
This occurs because Vista/7 saves InfoPath forms through WebDAV, whereas XP uses another protocol (sorry, can't remember which at the moment). SharePoint catches the "ItemAdded" event before the file is actually uploaded (that is, the item is already created, but the file upload is still in progress).
As a workaround, you can add a delay activity and wait for 10 seconds as the first thing in your workflow (it will actually be longer than ten seconds due to the way workflows are scheduled in SharePoint). That way the upload will already have finished when you read the item. To inform users about what's happening, you can add a "logToHistoryList" activity before the delay.
