Cleaning up NetSuite scripts

Are there any suggestions for cleaning up unused scripts in NetSuite? We have an implementation that includes scripts and bundles from a third party, plus innumerable scripts (especially RESTlets and workflows) that we have written, changed, rewritten, tossed aside, etc., by multiple developers. Many of the scripts were released with only error-level logging or don't have any debug log statements, which is the only way I can think of to determine when, and how many times, a script is run.
I am looking for a way to determine just that - when and how often every script and/or deployment is run (hopefully without going into each script and adding log statements) - so we can clean up before the new version is implemented.
Thanks!

In version 14.2 (coming soon), there is a script queue monitor tool that should tell you when scripts are running, which queue is being used, etc. (for SuiteCloud Plus customers). See the release notes for 14.2 for more detail.

The best way I can find is a Script Deployment search. You can condition on Is Deployed = Yes/No, Status is any of / none of Released/Scheduled, and Execution Log: Date within the last year.
I am only giving example conditions based on what you mentioned. The Yes/No and any of / none of choices depend on whether you want to see the deployments that are inactive (so you can clean them up) or those that are active. The Execution Log condition only works if the script either errored (which does not require an nlapiLogExecution() call) or actually contains an nlapiLogExecution() call.
You could at least experiment with this based on what you know of your scripts. You can do a similar thing for workflows with a Workflow search.
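For reference, here is a minimal SuiteScript 1.0 sketch of that Script Deployment search. The filter and column ids are my assumptions based on the standard Script Deployment record, and the Execution Log date condition is easiest to add as a joined filter in the saved-search UI rather than in code:
// Rough sketch: list deployments that are still deployed and released/scheduled,
// so you can compare them against what you actually see in the execution logs.
function findActiveDeployments() {
    var filters = [
        new nlobjSearchFilter('isdeployed', null, 'is', 'T'),
        new nlobjSearchFilter('status', null, 'anyof', ['RELEASED', 'SCHEDULED'])
    ];
    var columns = [
        new nlobjSearchColumn('script'),
        new nlobjSearchColumn('status'),
        new nlobjSearchColumn('isdeployed')
    ];
    // nlapiSearchRecord returns null when nothing matches and caps at 1000 rows.
    return nlapiSearchRecord('scriptdeployment', null, filters, columns) || [];
}
Flip the filters (Is Deployed = No, Status none of Released/Scheduled) to list the likely cleanup candidates instead.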

You could undeploy the scripts using a search. For example, say I want to undeploy the script deployments that were created before last fiscal year:
var start = function(request, response)
{
    // Filter on the deployment's created date: anything created before the
    // last fiscal year is treated as stale here ({created} is an assumption;
    // swap in whichever date field you actually want to test).
    var filters = new Array();
    var columns = new Array();
    filters[0] = new nlobjSearchFilter('formuladate', null, 'before', 'lastfiscalyear');
    filters[0].setFormula('{created}');
    columns[0] = new nlobjSearchColumn('internalid');

    // nlapiSearchRecord returns null when nothing matches and caps at 1000 rows.
    var results = nlapiSearchRecord('scriptdeployment', null, filters, columns) || [];
    for (var i = 0; i < results.length; i++)
    {
        var rec = nlapiLoadRecord('scriptdeployment', results[i].getValue('internalid'));
        rec.setFieldValue('isdeployed', 'F');
        nlapiSubmitRecord(rec, true);
    }
};

Related

WifiLock under-locked my_lock

I'm trying to download an offline map pack by reverse engineering the example project from the skobbler support website; however, when I try to start a download, the download manager crashes.
My use case: show a list of available countries (within the EUR continent), let the user select a single one, and download that country at that time. So far I have a list where those options are available, but upon selecting an item (and starting the download) it crashes.
For the sake of the question I commented out some things.
Relevant code:
// Get the information about where to obtain the files from
SKPackageURLInfo urlInfo = SKPackageManager.getInstance().getURLInfoForPackageWithCode(pack.packageCode);
// Steps: SKM, ZIP, TXG
List<SKToolsFileDownloadStep> downloadSteps = new ArrayList<>();
downloadSteps.add(new SKToolsFileDownloadStep(urlInfo.getMapURL(), pack.file, pack.skmsize)); // SKM
//downloadSteps.add(); // TODO ZIP
//downloadSteps.add(); // TODO TXG
List<SKToolsDownloadItem> downloadItems = new ArrayList<>(1);
downloadItems.add(new SKToolsDownloadItem(pack.packageCode, downloadSteps, SKToolsDownloadItem.QUEUED, true, true));
mDownloadManager.startDownload(downloadItems); // This is where the crash is
I can see a download running, since onDownloadProgress() (a callback from the manager) is getting triggered. However, the SKToolsDownloadItem it receives as a parameter says that the stepIndex starts at 0. I don't know how this can be, since I manually set that to (byte) 0, just like the example does.
Also, the logs show a warning from SingleClientConnManager, telling me:
Invalid use of SingleClientConnManager: connection still allocated.
This comes from code that gets called from within the manager somewhere. I am thinking there are some vital setup steps missing from the documentation and the example project.

CRM 2011: ExecuteMultipleRequest Freezing

The scenario is that I have a plugin which needs to run a whole bunch of AddMembersTeamRequest and RemoveMembersTeamRequest calls (around 2000 of each).
I am having trouble with the following code:
var executeMultipleRequest = new ExecuteMultipleRequest();
executeMultipleRequest.Settings = new ExecuteMultipleSettings() { ContinueOnError = false, ReturnResponses = false };
var organizationRequestCollection = new OrganizationRequestCollection();
foreach (var sharedRecordsOwningTeam in sharedRecordsOwningTeams)
{
    organizationRequestCollection.Add(CreateAddMembersTeamRequest(userId, sharedRecordsOwningTeam.Id));
}
executeMultipleRequest.Requests = organizationRequestCollection;
service.Execute(executeMultipleRequest);
However, it doesn't matter how many requests are part of that ExecuteMultipleRequest; it just seems to freeze the process (I have tried having just one request in the ExecuteMultipleRequest collection).
But the following code seems to work fine:
foreach (var sharedRecordsOwningTeam in sharedRecordsOwningTeams)
{
    service.Execute(CreateAddMembersTeamRequest(userId, sharedRecordsOwningTeam.Id));
}
As you can see, the problem with the code above is that it executes around 2000+ individual requests, one at a time.
Would anyone know why using the ExecuteMultipleRequest freezes the process entirely? (Even when there is only one add/remove team member request in the request collection.)
I think I figured it out.
I think it was freezing on me because I was trying to remove a user from the default team of their current Business Unit.
For some reason the request wasn't erroring and instead just sat there.
However, I should also point out that using an ExecuteMultipleRequest wasn't any faster than running multiple AddMembersTeamRequest calls.
Even a giant AssociateRequest wasn't any faster.
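For anyone debugging a similar silent hang, here is a minimal sketch (not the poster's code; TeamMembershipBatcher and the batch size of 500 are illustrative choices, the 500 simply staying under the platform's 1000-request limit per ExecuteMultiple). Turning on ReturnResponses and ContinueOnError makes a misbehaving request show up as a fault instead of an apparent freeze:
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.Crm.Sdk.Messages;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;

public static class TeamMembershipBatcher
{
    public static void AddUserToTeams(IOrganizationService service, Guid userId, IEnumerable<Guid> teamIds)
    {
        // Chunk the requests so no single ExecuteMultipleRequest exceeds the batch limit.
        foreach (var batch in teamIds.Select((id, index) => new { id, index }).GroupBy(x => x.index / 500))
        {
            var requests = new OrganizationRequestCollection();
            foreach (var entry in batch)
            {
                requests.Add(new AddMembersTeamRequest { TeamId = entry.id, MemberIds = new[] { userId } });
            }

            var executeMultiple = new ExecuteMultipleRequest
            {
                Settings = new ExecuteMultipleSettings { ContinueOnError = true, ReturnResponses = true },
                Requests = requests
            };

            var response = (ExecuteMultipleResponse)service.Execute(executeMultiple);

            // Surface any faulted request (e.g. touching a default team)
            // instead of letting it fail silently.
            foreach (var item in response.Responses.Where(r => r.Fault != null))
            {
                Console.WriteLine("Request {0} failed: {1}", item.RequestIndex, item.Fault.Message);
            }
        }
    }
}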

My program turns a spreadsheet into an Excel-file. But it only works for one user

I've made a larger (1000+ lines of code) Apps Script. It works great for me, but no other user can run it. I want them to be able to run it as well, and I cannot figure out why they can't.
The problem occurs on this part:
var id = 'A_correct_ID_of_a_Google_Spreadsheet';
var SSurl = 'https://docs.google.com/feeds/';
var doc = UrlFetchApp.fetch(SSurl+'download/spreadsheets/Export?key='+id+'&exportFormat=xls',googleOAuth_('docs',SSurl)).getBlob();
var spreadsheet = DocsList.createFile(doc);
The function (and structure) was published here: other thread
function googleOAuth_(name, scope) {
  var oAuthConfig = UrlFetchApp.addOAuthService(name);
  oAuthConfig.setRequestTokenUrl("https://www.google.com/accounts/OAuthGetRequestToken?scope="+scope);
  oAuthConfig.setAuthorizationUrl("https://www.google.com/accounts/OAuthAuthorizeToken");
  oAuthConfig.setAccessTokenUrl("https://www.google.com/accounts/OAuthGetAccessToken");
  oAuthConfig.setConsumerKey('anonymous');
  oAuthConfig.setConsumerSecret('anonymous');
  return {oAuthServiceName:name, oAuthUseToken:"always"};
}
I can't see any reason why the program would only run for one user. All the files are shared between all the users, and ownership has been swapped around.
When a script uses OAuth (googleOAuth_(name, scope) in your case) it needs to be authorized from the script editor, independently of the other authorization that the user grants with the normal procedure.
This has been the object of an enhancement request for quite a long time and has no valid workaround as far as I know.
So, depending on how your script is deployed (in a spreadsheet, a Doc, or as a web app) you might find a solution or not. If, as suggested in the first comment on your post, you run this from a web app, you can deploy it to run as yourself and allow anonymous access, and it will work easily; but in every other case your other users will have to open the script editor and run a function that triggers the OAuth authorization process, as sketched below.
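For example, a minimal sketch of such a one-time function, reusing the googleOAuth_() helper from the question (the name authorizeOnce is just an illustration):
// Each user opens the script editor, picks authorizeOnce in the function
// drop-down and runs it once; the oAuth grant dialog appears at that point.
function authorizeOnce() {
  var SSurl = 'https://docs.google.com/feeds/';
  // The fetch itself may fail - triggering the authorization prompt
  // for the 'docs' scope is the only point of this call.
  UrlFetchApp.fetch(SSurl, googleOAuth_('docs', SSurl));
}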

Fire Off an asynchronous thread and save data in cache

I have an ASP.NET MVC 3 (.NET 4) web application.
This app fetches data from an Oracle database and mixes some information with another Sql Database.
Many tables are joined together and lot of database reading is involved.
I have already optimized the best I could the fetching side and I don't have problems with that.
I've used caching to save information I don't need to fetch over and over.
Now I would like to build a responsive interface; my goal is to present the users with the filtered order headers and load the order lines in the background.
I want to do that because I need to manage all the order lines as a whole, for some calculations.
What I have done so far is using jQuery to make an Ajax call to my action where I fetch the order headers and save them in a cache (System.Web.Caching.Cache).
When the Ajax call has succeeded I fire off another Ajax call to fetch the lines (and, once again, save the result in a cache).
It works quite well.
Now I am trying to figure out whether I can move some of this logic from the client to the server.
When my action is called I want to fetch the order headers, start a new thread - responsible for fetching the order lines - and return the result to the client.
In a test app I tried both ThreadPool.QueueUserWorkItem and Task.Factory, but I want the spawned thread to be able to access my cache.
I've put together a test app and done something like this:
TEST 1
[HttpPost]
public JsonResult RunTasks01()
{
    var myCache = System.Web.HttpContext.Current.Cache;
    myCache.Remove("KEY1");
    ThreadPool.QueueUserWorkItem(o => MyFunc(1, 5000000, myCache));
    return (Json(true, JsonRequestBehavior.DenyGet));
}
TEST 2
[HttpPost]
public JsonResult RunTasks02()
{
    var myCache = System.Web.HttpContext.Current.Cache;
    myCache.Remove("KEY1");
    Task.Factory.StartNew(() =>
    {
        MyFunc(1, 5000000, myCache);
    });
    return (Json(true, JsonRequestBehavior.DenyGet));
}
MyFunc creates a list of items and saves the result in the cache; pretty silly, but it's just a test.
I would like to know if someone has a better solution, or knows of any implications of accessing the cache from a separate thread.
Is there anything I need to be aware of, should avoid, or could improve?
Thanks for your help.
One possible issue I can see with your approach is that System.Web.HttpContext.Current might not be available in a separate thread, since that thread could run later, after the request has finished. I would recommend using the classes in the System.Runtime.Caching namespace, introduced in .NET 4.0, instead of the old HttpContext.Cache.
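As a minimal sketch of that suggestion (OrderLineCache and the 20-minute expiration are illustrative, not from the post), MemoryCache.Default has no dependency on HttpContext, so it is safe to use from a background Task:
using System;
using System.Collections.Generic;
using System.Runtime.Caching;

public static class OrderLineCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static void Store(string key, IList<string> orderLines)
    {
        // Keep the lines around for 20 minutes; tune the policy to your refresh needs.
        var policy = new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(20) };
        Cache.Set(key, orderLines, policy);
    }

    public static IList<string> TryGet(string key)
    {
        // Returns null when the key is missing or expired.
        return Cache.Get(key) as IList<string>;
    }
}
A background call would then look something like Task.Factory.StartNew(() => OrderLineCache.Store("KEY1", FetchOrderLines())), where FetchOrderLines stands in for your order-line query.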

Testing Workflow History Autocleanup

I am facing a rather peculiar problem here. We have an OOTB approval workflow, which logs all the workflow history into a workflow history list. This list gets purged every 60 days. To increase the time period for which the workflow history is retained, I googled around and found that I have to run this code:
using (SPSite wfSite = new SPSite(siteUrl))
{
    using (SPWeb wfWeb = wfSite.OpenWeb(webName))
    {
        SPList wfList = wfWeb.Lists[listName];
        SPWorkflowAssociation _wfAssociation = null;
        foreach (SPWorkflowAssociation a in wfList.WorkflowAssociations)
        {
            // Match the association we want (e.g. "Approval 1") by name.
            if (a.Name.ToLowerInvariant() == wfAssociationName.ToLowerInvariant())
            {
                a.AutoCleanupDays = newCleanupDays;
                _wfAssociation = a;
                assoCounter++;
            }
            else
            {
                _wfAssociation = a;
            }
        }
        wfList.UpdateWorkflowAssociation(_wfAssociation);
    }
}
The code works fine, in the sense that it does not throw any exceptions. So far so good. But my problem now is that I need to test whether my code works. So I set the newCleanupDays variable to 0, but I see that new workflow activities are still getting logged in the workflow history list. I could set the variable to 1, but that would mean waiting an entire day to see if the code works.
Is there any way I can test my scenario, so that I can set the auto-cleanup days to 1 and not have to wait an entire day to see if my code works? Is there any way I can "fool" the system into thinking that a day has elapsed? I tried changing the system time and restarting the server and everything, but it didn't work for me.
Changing the system time should work, but you are going to have to kick off the timer job that initiates the workflows. Don't restart the server after the time change.
One warning is that SharePoint really really does not like travelling back in time. Any documents that are created in the "future" are going to have issues. So remember to test in a new web site that can be deleted when you roll back to "now".
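If it helps, here is a rough sketch of running the workflow-related timer jobs on demand after moving the clock forward (assuming the SharePoint 2010 object model, where SPJobDefinition.RunNow() is available; the title match and site URL are illustrative, so verify the exact job names under Central Administration > Monitoring > Review job definitions):
using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;

class RunWorkflowTimerJobs
{
    static void Main()
    {
        // Hypothetical site URL - point this at the throwaway test web application.
        using (SPSite site = new SPSite("http://yourserver/sites/wftest"))
        {
            SPWebApplication webApp = site.WebApplication;
            foreach (SPJobDefinition job in webApp.JobDefinitions)
            {
                // Covers both the "Workflow" job that advances workflows and the
                // "Workflow Auto Cleanup" job that purges the history list.
                if (job.Title != null && job.Title.IndexOf("Workflow", StringComparison.OrdinalIgnoreCase) >= 0)
                {
                    Console.WriteLine("Running job: " + job.Title);
                    job.RunNow();
                }
            }
        }
    }
}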
