CRM 2011: ExecuteMultipleRequest Freezing

The scenario: I have a plugin which needs to run a whole batch of AddMembersTeamRequest and RemoveMembersTeamRequest operations (around 2000 of each).
I am having trouble with the following code:
var executeMultipleRequest = new ExecuteMultipleRequest();
executeMultipleRequest.Settings = new ExecuteMultipleSettings() { ContinueOnError = false, ReturnResponses = false };
var organizationRequestCollection = new OrganizationRequestCollection();
foreach (var sharedRecordsOwningTeam in sharedRecordsOwningTeams)
{
    organizationRequestCollection.Add(CreateAddMembersTeamRequest(userId, sharedRecordsOwningTeam.Id));
}
executeMultipleRequest.Requests = organizationRequestCollection;
service.Execute(executeMultipleRequest);
However, it doesn't matter how many requests are in the ExecuteMultipleRequest; it just seems to freeze the process (I have tried having just one request in the collection).
But the following code seems to work fine:
foreach (var sharedRecordsOwningTeam in sharedRecordsOwningTeams)
{
    service.Execute(CreateAddMembersTeamRequest(userId, sharedRecordsOwningTeam.Id));
}
As you can see, the problem with that second approach is that it executes the 2000+ requests one at a time.
Would anyone know why using ExecuteMultipleRequest freezes the process entirely, even when there is only one add/remove team member request in the collection?

I think I figured it out.
It was freezing because I was trying to remove a user from the default team of their current Business Unit. For some reason the request didn't error; it just sat there.
I should also point out that using an ExecuteMultipleRequest wasn't any faster than issuing the AddMembersTeamRequests individually. Even a giant AssociateRequest wasn't any faster.
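A minimal sketch of that workaround: skip each Business Unit's default team when building the batch, and turn on ReturnResponses so a faulting member request is at least visible. The isdefault check and the CreateRemoveMembersTeamRequest helper are assumptions here, mirroring the post's CreateAddMembersTeamRequest rather than taken from it.
var batch = new ExecuteMultipleRequest
{
    // ContinueOnError plus ReturnResponses surfaces per-request faults
    // instead of stopping (or silently hanging) the whole batch.
    Settings = new ExecuteMultipleSettings { ContinueOnError = true, ReturnResponses = true },
    Requests = new OrganizationRequestCollection()
};

foreach (var team in sharedRecordsOwningTeams)
{
    // Assumption: the teams were retrieved with the "isdefault" column, which
    // marks a Business Unit's default team; users cannot be removed from it.
    if (team.GetAttributeValue<bool>("isdefault"))
        continue;

    batch.Requests.Add(CreateRemoveMembersTeamRequest(userId, team.Id));
}

var response = (ExecuteMultipleResponse)service.Execute(batch);
foreach (var item in response.Responses)
{
    if (item.Fault != null)
    {
        // In a plugin, write this to ITracingService rather than the console.
        Console.WriteLine("Request {0} failed: {1}", item.RequestIndex, item.Fault.Message);
    }
}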

Related

Trouble Updating Observable Collection when Bound to Gridview - UWP C#

An ObservableCollection is bound to a GridView in a UWP project. If I try to clear and re-add data, it fails with an error because the collection can only be modified on the UI thread.
I have set up SQL Service Broker to notify the app when the data changes, and that part is working correctly. However, every time I try to clear and modify the ObservableCollection, an exception is thrown.
using (SqlDataReader dr = cmd.ExecuteReader())
{
    while (dr.Read())
    {
        EmployeeLists.Add(new Employee { Name = dr[0].ToString(), Loc = dr[2].ToString() });
    }
}
This is the code I use initially to populate the observable collection. Listening for changes works; but how do I apply those changes to the observable collection?
I have tried clearing the EmployeeLists collection and then adding everything again. That seems clunky, and it doesn't work anyway, because it says I cannot modify the collection from another thread. I have tried several solutions online, but I'm not that familiar with async programming. Can anyone point me in the right direction?
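One common direction, sketched below using the question's own names (EmployeeLists, Employee, cmd) with everything else illustrative: do the database read on the background thread, then marshal only the collection mutation onto the UI thread through CoreApplication.MainView.CoreDispatcher. The snippet assumes it lives in an async method running off the UI thread, e.g. the Service Broker change handler.
// Build the fresh list on the background thread; no UI-bound objects touched yet.
var latest = new List<Employee>();
using (SqlDataReader dr = cmd.ExecuteReader())
{
    while (dr.Read())
    {
        latest.Add(new Employee { Name = dr[0].ToString(), Loc = dr[2].ToString() });
    }
}

// Only the ObservableCollection mutation runs on the UI thread, which is the
// one place a collection bound to a GridView may be modified.
await Windows.ApplicationModel.Core.CoreApplication.MainView.CoreDispatcher
    .RunAsync(Windows.UI.Core.CoreDispatcherPriority.Normal, () =>
    {
        EmployeeLists.Clear();
        foreach (var employee in latest)
        {
            EmployeeLists.Add(employee);
        }
    });
If the handler lives in a page class, its own Dispatcher property works in place of CoreApplication.MainView.CoreDispatcher.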

range.address throws context-related errors

We've been developing with the Excel JavaScript API for quite a few months now. We keep running into context-related issues that previously resolved themselves for unknown reasons; we weren't able to replicate them and wondered how they got resolved. Recently, similar issues have started popping up again.
Error we consistently get:
property 'address' is not available. Before reading the property's value, call the load method on the containing object and call "context.sync()" on the associated request context.
We thought that, since we have multiple functions defined to modularise the code, the context might differ somewhere among these functions without us noticing. So we came up with a single-context solution implemented via the JavaScript module pattern.
var ContextManager = (function () {
    var xlContext; // single context for the entire project/application
    function loadContext() {
        xlContext = new Excel.RequestContext();
    }
    function sync(object) {
        return (object === undefined) ? xlContext.sync() : xlContext.sync(object);
    }
    function getWorksheetByName(name) {
        return xlContext.workbook.worksheets.getItem(name.toString());
    }
    // public
    return {
        loadContext: loadContext,
        sync: sync,
        getWorksheetByName: getWorksheetByName
    };
})();
NOTE: the above code is shortened; there are other methods that ensure the single context gets used throughout the application.
While implementing the single context this time round, we have been able to replicate the issue:
Office.initialize = function (reason) {
    $(document).ready(function () {
        ContextManager.loadContext();
        function loadRangeAddress(rng, index) {
            rng.load("address");
            ContextManager.sync().then(function () {
                console.log("Address: " + rng.address);
            }).catch(function (e) {
                console.log("Failed address for index: " + index);
            });
        }
        for (var i = 1; i <= 1000; i++) {
            var sheet = ContextManager.getWorksheetByName("Sheet1");
            loadRangeAddress(sheet.getRange("A" + i), i); // I expect to see A1 to A1000 in the console; order doesn't matter
        }
    });
}
In the above case, only "A1" gets printed as a range address to the console. I can't see any of the other addresses (A2 to A1000) being printed; only the catch block executes. Can anyone explain why this happens?
Although I've written a for loop above, that isn't my real use case. In the real use case, a range object in function a needs to load a range address while another function b also wants to load one. Functions a and b work asynchronously on separate tasks, such as one creating a table object (the table needs an address) and the other pasting data to a sheet (with a debug statement to see where the data was pasted).
This is something our team hasn't been able to figure out or find a solution for.
There is a lot packed into this code, but the issue you have is that you're calling sync a whole bunch of times without awaiting the previous sync.
There are several problems with this:
If you were using different contexts, you would actually see that there is a limit of ~50 simultaneous requests, after which you'll get errors.
In your case, you're running into a different (and almost opposite) problem. Given the async nature of the APIs, and the fact that you're not awaiting the sync-s, your first sync request (which you'd think is just for A1) will actually contain all of the load requests queued by the entire for loop. Once that first sync is dispatched, the action queue is cleared, which means your second, third, etc. sync calls see no pending work and no-op, executing before the first sync ever comes back with the values!
[This might be considered a bug, and I'll discuss with the team about fixing it. But it's still a very dangerous thing to not await the completion of a sync before moving on to the next batch of instructions that use the same context.]
The fix is to await the sync. This is far and away simplest in TypeScript 2.1 with its async/await feature; otherwise you need the async version of a for loop, which you can look up, but it's rather unintuitive (it requires building an uber-promise that keeps chaining a bunch of .then-s).
So, your modified TypeScript-ified code would be:
ContextManager.loadContext();

async function loadRangeAddress(rng, index) {
    rng.load("address");
    await ContextManager.sync().then(function () {
        console.log("Address: " + rng.address);
    }).catch(function (e) {
        OfficeHelpers.Utilities.log(e);
    });
}

async function loadAllAddresses() {
    for (var i = 1; i <= 1000; i++) {
        var sheet = ContextManager.getWorksheetByName("Sheet1");
        await loadRangeAddress(sheet.getRange("A" + i), i); // A1 to A1000, now in order
    }
}

loadAllAddresses();
Note the async in front of the loadRangeAddress function, the async wrapper around the for loop (await is only valid inside an async function), and the two await-s in front of ContextManager.sync() and loadRangeAddress.
Note that this code will also run quite slowly, as you're making an async round trip for each cell. That means you're not using batching, which is at the very core of the object model for the new APIs.
For completeness' sake, I should also note that creating a "raw" RequestContext instead of using Excel.run has some disadvantages. Excel.run does a number of useful things, the most important of which is automatic object tracking and un-tracking (not relevant here, since you're only reading data back, but it would be relevant if you were loading and then wanting to write back into the object).
Finally, if I may recommend (full disclosure: I am the author of the book), you will probably find a good bit of useful info about Office.js in the e-book "Building Office Add-ins using Office.js", available at https://leanpub.com/buildingofficeaddins. In particular, it has a very detailed (10-page) section on the internal workings of the object model ("Section 5.5: Implementation details, for those who want to know how it really works"). It also offers advice on using TypeScript, has a general Promise/async-await primer, describes what .run does, and has a bunch more info about the OM. Also, though not available yet, it will soon cover how to resume using the same context (a newer technique than the one originally described in "How can a range be used across different Word.run contexts?"). The book is a lean-published "evergreen" book, so once I write that topic in the coming weeks, an update will be available to all existing readers.
Hope this helps!

CRM 2011: SQL Timeout/Locking issue when processing AssociateRequest

I have a plugin that I want to run synchronously as a post process, because if any of the requests it makes fail, I'd like to roll everything back in one transaction. I've encountered an issue: if I run an AssociateRequest to associate another record to the record that triggered the plugin, it returns a SQL timeout. I know my code is correct, because I use the same code to associate other records (ones that don't fire the plugin) and those execute fine.
var referencedEntityRelationship = new Relationship(ReferencedEntityRelationshipName);
var referencedEntityEntities = new EntityReferenceCollection(new EntityReference[]
{
    new EntityReference(ReferencedEntityLogicalName, new Guid(receivedRequest.ReferencingEntityId))
});
var rtn = new AssociateRequest() { RelatedEntities = referencedEntityEntities, Relationship = referencedEntityRelationship };
rtn.Target = new EntityReference() { Id = responseId, LogicalName = dataEntityLogicalName };
service.Execute(rtn); // this call times out when the target is the record that fired the plugin
I know I can avoid the lock by running the plugin asynchronously, but then if something fails I'm unable to roll back the initial request that fired the plugin and notify the user that they need to fix something. Any advice on how to execute associate requests within a plugin against the record that fired the plugin is appreciated.

Cleaning up Netsuite scripts

Are there any suggestions for cleaning up unused scripts in NetSuite? We have an implementation that includes scripts and bundles from a third party, plus innumerable scripts (especially RESTlets and workflows) that have been written, changed, rewritten, and tossed aside by multiple developers. Many of the scripts were released with only error logging, or without any debug log statements at all, and log statements are the only way I can think of to determine when, and how many times, a script runs.
I am looking for a way to determine just that: when and how often every script and/or deployment runs (hopefully without going into each script to add logging), so we can clean up before the new version is implemented.
Thanks!
In version 14.2 (coming soon), there is a script queue monitor tool that should tell you when scripts are running, which queue is being used, etc. (for SuiteCloud Plus customers). See the 14.2 release notes for more detail.
The best way I can find is a Script Deployment search. You can filter on Deployed = Yes/No, Status any of / none of Released/Scheduled, and Execution Log: Date within last year.
I am only giving example conditions based on what you mentioned. The Yes/No and any-of/none-of choices depend on whether you want to see the inactive deployments (to clean them up) or the active ones. The execution-log condition only helps if the script errored (which does not require an nlapiLogExecution() call) or if there is a logExecution call in the script.
You could at least play with this a bit based on what you know of your scripts. You can do a similar thing for workflows with a workflow search.
You could also undeploy scripts using a saved search. For example, to undeploy the script deployments created before the last fiscal year:
var start = function(request, response)
{
    // Find script deployments whose date falls before the last fiscal year.
    var filters = new Array();
    var columns = new Array();
    filters[0] = new nlobjSearchFilter('formuladate', null, 'before', 'lastfiscalyear');
    columns[0] = new nlobjSearchColumn('internalid');
    var search = nlapiSearchRecord('scriptdeployment', null, filters, columns) || [];
    for (var i = 0; i < search.length; i++)
    {
        // Load each matching deployment and flag it as undeployed.
        var rec = nlapiLoadRecord('scriptdeployment', search[i].getValue('internalid'));
        rec.setFieldValue('isdeployed', 'F');
        nlapiSubmitRecord(rec, true);
    }
}

Testing Workflow History Autocleanup

I am facing a rather peculiar problem. We have an OOTB approval workflow, which logs all of its workflow history into a workflow history list. This list gets purged every 60 days. To increase the period for which the workflow history is retained, I googled around and found that I have to run this code:
using (SPSite wfSite = new SPSite(siteUrl))
{
    using (SPWeb wfWeb = wfSite.OpenWeb(webName))
    {
        SPList wfList = wfWeb.Lists[listName];
        SPWorkflowAssociation _wfAssociation = null;
        foreach (SPWorkflowAssociation a in wfList.WorkflowAssociations)
        {
            // Match on the association's name; the original condition compared the
            // literal "approval 1" against wfAssociationName and never used the
            // loop variable, so the same branch ran for every association.
            if (a.Name.ToLowerInvariant() == wfAssociationName.ToLowerInvariant())
            {
                a.AutoCleanupDays = newCleanupDays;
                _wfAssociation = a; // remember the matched association
            }
        }
        if (_wfAssociation != null)
        {
            wfList.UpdateWorkflowAssociation(_wfAssociation);
        }
    }
}
The code works fine, in the sense that it does not throw any exceptions. So far so good. My problem now is that I need to test whether the code actually works. I set the newCleanupDays variable to 0, but new workflow activity still gets logged in the workflow history list. I could set the variable to 1, but that would mean waiting an entire day to see the result.
Is there any way to test this scenario, with auto-cleanup set to 1 day, without waiting an entire day? Is there any way to "fool" the system into thinking that a day has elapsed? I tried changing the system time and restarting the server, but it didn't work for me.
Changing the system time should work, but you are going to have to kick off the timer job that initiates the workflows. Don't restart the server after the time change.
One warning: SharePoint really, really does not like travelling back in time. Any documents created "in the future" are going to have issues, so remember to test in a new web site that can be deleted when you roll back to "now".
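To shorten the feedback loop, one hedged suggestion: first confirm the new value actually persisted by reading the association back, then trigger the purge manually. SharePoint's out-of-the-box daily job for this is named "Workflow Auto Cleanup"; depending on the version it can be run on demand from Central Administration or via SPJobDefinition.RunNow(). Verify the job name in your farm. The sketch below only reuses the question's siteUrl, webName, and listName variables.
// Read the association back to verify AutoCleanupDays persisted, independent
// of whether the cleanup timer job has run yet.
using (SPSite site = new SPSite(siteUrl))
using (SPWeb web = site.OpenWeb(webName))
{
    SPList list = web.Lists[listName];
    foreach (SPWorkflowAssociation assoc in list.WorkflowAssociations)
    {
        Console.WriteLine("{0}: AutoCleanupDays = {1}", assoc.Name, assoc.AutoCleanupDays);
    }
}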
