I have a large nlobjSearchResultSet object with over 18,000 "results".
Each result is a pricing record for a customer. There may be multiple records for a single customer.
As mass changes to 18,000+ records are costly in governance points, I'm migrating to a parent (customer) record and child records (items) so I can make changes to the item pricing as a sublist.
As part of this migration, is there a simple command to select only the nlobjSearchResult objects within the big object which match certain criteria (i.e. the customer ID)?
This would allow me to migrate the data with only the one search, then only subsequent create/saves of the new record format.
In a related manner, is there a simple function call to return the number of records contained in a given NetSuite record? For % progress context.
Thanks in advance.
You can actually get the number of rows by running the search with an added aggregate column. A generic way to do this for a saved search that doesn't have any aggregate columns is shown below:
var res = nlapiSearchRecord('salesorder', 'customsearch29', null,
    [new nlobjSearchColumn('formulanumeric', null, 'sum').setFormula('1')]);
var rowCount = res[0].getValue('formulanumeric', null, 'sum');
console.log(rowCount);
To get the total number of records, the only way is to run a saved search; an ideal way to do such a search is with nlobjSearch.
Below is sample code for getting all search results beyond the 1,000-row limit, along with the number of records:
var search = nlapiLoadSearch(null, SAVED_SEARCH_ID).runSearch();
var res = [];
var currentRes;
var i = 0;
do {
    // Fetch the next block of up to 1,000 results
    currentRes = search.getResults(i, i + 1000) || [];
    res = res.concat(currentRes);
    i += currentRes.length;
} while (currentRes.length === 1000); // a short (or empty) block means we have reached the end
res.length or i will give you the total number of records and res will give you all the results.
Azure Search returns a maximum of 1,000 results at a time. For paging on the client, I want the total count of matches in order to be able to display the correct number of paging buttons at the bottom and in order to be able to tell the user how many results there are. However, if there are over a thousand, how do I get the actual count? All I know is that there were at least 1,000 matches.
I need to be able to do this from within the SDK.
If you want to get the total number of documents in an index, one thing you could do is set IncludeTotalResultCount to true in your search parameters. Once you do that, when you execute the query you will see the total count of matching documents in the Count property of the search results.
Here's a sample code for that:
var credentials = new SearchCredentials("account-key (query or admin key)");
var indexClient = new SearchIndexClient("account-name", "index-name", credentials);
var searchParameters = new SearchParameters()
{
    QueryType = QueryType.Full,
    IncludeTotalResultCount = true
};
var searchResults = await indexClient.Documents.SearchAsync("*", searchParameters);
// Prints the (approximate) total number of documents in the index
Console.WriteLine("Total documents in index (approx) = " + searchResults.Count.GetValueOrDefault());
Please note that:
This count will be approximate.
Getting the count is an expensive operation so you should only do it with the very first request when implementing pagination.
For REST clients using the POST API, just include "count": true in the payload. You get the count in @odata.count.
Ref: https://learn.microsoft.com/en-us/rest/api/searchservice/search-documents
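For illustration, a minimal POST request might look like this (the service name, index name, api-version and key are placeholders):
POST https://{service-name}.search.windows.net/indexes/{index-name}/docs/search?api-version={api-version}
Content-Type: application/json
api-key: {query-key}

{
  "search": "*",
  "count": true
}
The response then contains an @odata.count property alongside the value array.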
I've been trying to create a Suitelet that runs a saved search on a collection of item records in NetSuite using SuiteScript 1.0.
Pagination is quite easy everywhere else, but I can't get my head around how to do it in NetSuite.
For instance, we have 3,000 items and I'm trying to limit the results to 100 per page.
I'm struggling to understand how to apply a start row and a max row parameter so I can run the search and return just that slice of the records.
I've seen plenty of scripts that allow you to exceed the limit of 1,000 records, but I'm trying to throttle the amount shown on screen, and I'm at a loss to figure out how to do this.
Any tips greatly appreciated
function searchItems(request, response)
{
    var start = request.getParameter('start');
    var max = request.getParameter('max');

    if (!start)
    {
        start = 1;
    }
    if (!max)
    {
        max = 100;
    }

    var filters = [];
    filters.push(new nlobjSearchFilter('category', null, 'is', currentDeptID));

    var productList = nlapiSearchRecord('item', 'customsearch_product_search', filters);
    if (productList)
    {
        response.write('stuff here for the items');
    }
}
You can approach this a couple different ways. Either way, you will definitely need to sort your search results by something meaningful and consistent, like by internal ID. Make sure you've got your results sorted either in your saved search definition or by adding a search column in your script.
You can continue building your search exactly like you are, and then just use the native slice method on the productList Array, passing your start and end positions as the arguments to slice; see the sketch below.
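A minimal sketch of that approach, assuming start is 1-based and max is the page size (both arrive as strings from the request, so they are parsed first):
// Sort by internal ID so paging is consistent between requests
var sortCol = new nlobjSearchColumn('internalid');
sortCol.setSort();
var productList = nlapiSearchRecord('item', 'customsearch_product_search', filters, [sortCol]) || [];

// Convert the 1-based start parameter to a 0-based index for slice
var pageStart = parseInt(start, 10) - 1;
var pageSize = parseInt(max, 10);
var page = productList.slice(pageStart, pageStart + pageSize);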
Another approach is to use the nlobjSearch API. It will look similar to this:
var search = nlapiLoadSearch("item", "customsearch_product_search");
search.addFilter(new nlobjSearchFilter('category',null,'is',currentDeptID));
var productList = search.runSearch().getResults(start, end);
For more references on this approach, check out the NetSuite Help page titled "Search APIs" and the reference page for nlobjSearch.
For example, I have thousands of documents, all with the same structure:
{
"key_1":"value_1",
"key_2":"value_2",
"key_3":"value_3",
...
...
}
And I need to get, let's say, key_1, key_3 and key_23 from some set of documents with known IDs; for example, I need to process only 5 documents while my DB contains several thousand. Each time I have a different set of keys and document IDs. Is it possible to get that information in one request?
You can use a list function (see: this, this, and this).
Since you know the ids, you can then query _all_docs with the list function:
POST /{db}/_design/{ddoc}/_list/{func}/_all_docs?include_docs=true&columns=["key_1","key_2","key_3"]
Accept: application/json
Content-Length: {whatever}
{
    "keys": [
        "docid002",
        "docid005"
    ]
}
The list function needs to look at documents, and send the appropriate JSON for each one. Not tested:
(function (head, req) {
    send('{"total_rows":' + head.total_rows + ',"offset":' + head.offset + ',"rows":[');
    var columns = JSON.parse(req.query.columns);
    var delim = '';
    var row;
    while (row = getRow()) {
        var doc = {};
        // Copy only the requested fields from the full document
        for (var i = 0; i < columns.length; i++) {
            doc[columns[i]] = row.doc[columns[i]];
        }
        row.doc = doc;
        send(delim + toJSON(row));
        delim = ',';
    }
    send(']}');
})
Whether this is a good idea, I'm not sure. If your documents are big and bandwidth savings are important, it might be.
Yes, that’s possible. Your question can be broken up into two distinct problems:
Getting only a part of the document (in your example: key_1, key_3 and key_23). This can be done using a view. A view is saved into a design document. See the wiki for more info on how to create views.
Retrieving only certain documents, which are defined by their ID. When querying views, you can specify not only a single ID (or rather, key) but also an array of keys, which is what you would need here. Again, see the section on querying views in the wiki for explanations and examples.
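As a rough sketch (the design document, view and field names here are illustrative, not part of the question): a map function that emits the fields you need, keyed by document ID, which you then query with a keys array:
// Illustrative map function, saved e.g. as the view "fields" in _design/lookup
function (doc) {
    // Emit the subset of fields you care about, keyed by the document ID
    emit(doc._id, { key_1: doc.key_1, key_3: doc.key_3, key_23: doc.key_23 });
}
You would then POST to /{db}/_design/lookup/_view/fields with a body like {"keys": ["docid002", "docid005"]} to fetch just those documents' values in one request.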
Even though you only need a subset of values from a document, you may find that the system as a whole performs better if you just ask for the entire document then select the values you need from that result.
To get only the specific key/value pairs, you need to create a view whose entries have a multipart key consisting of the doc ID and the doc item name, with the value of the corresponding doc item.
So your map function would look something like:
function (doc) {
    // doc.keysInDoc is assumed to hold the number of key_N fields in the document
    for (var i = 1; i <= doc.keysInDoc; i++) {
        var k = "key_" + i;
        emit([doc._id, k], doc[k]);
    }
}
You can then use multi key lookup with each key being of the form ["docid12345", "key_1"], ["docid56789", "key_23"], etc.
So a query like:
http://host:5984/db/_design/design/_view/view?keys=[["docid002","key_8"],["docid005","key_7"]]
will return
{"total_rows":84,"offset":67,"rows":[
{"id":"docid002","key":["docid002","key_8"],"value":"value d2_k8"},
{"id":"docid005","key":["docid005","key_12"],"value":"value d5_k12"}
]}
I have to display a list of books that contains more than 50,000 books.
I want to display a paged list where, for each page, I invoke a method that gives me 20 books.
List<Books> Ebooks = Books.GetLibrary(index);
But using PagedList doesn't match what I want, because it creates a subset of the given collection of objects and accesses each subset by index. And, judging by the definition of its method, I would have to load the whole list from the beginning.
I also followed this article
var EBooks = from b in db.Books select b;
int pageSize = 20;
int pageNumber = (page ?? 1);
return View(EBooks.ToPagedList(pageNumber, pageSize));
But doing so, I have to invoke (var EBooks = from b in db.Books select b;) for each page index.
EDIT:
I'm looking for pointers on how to achieve this:
List<Books> Ebooks = Books.GetLibrary(index);
and of course I have the total number of books, so I know the number of pages.
So I'm looking for guidance that leads me to achieve it: for each index, I invoke GetLibrary(index).
Any suggestions?
Have you tried something like:
var pagedBooks = Books.GetLibrary().Skip(pageNumber * pageSize).Take(pageSize);
This assumes a 0-based pageNumber.
If that doesn't work, can you add a new method to the Books class that gets a paged set directly from the data source?
Something like "Books.GetPage(pageNumber, pageSize);" that way you don't get the entire collection every time.
Other than that, you may have to find a way to cache the initial result of Books.GetLibrary() somewhere.
I have a list that looks like:
Movie         Year
-----         ----
Fight Club    1999
The Matrix    1999
Pulp Fiction  1994
Using CAML and the SPQuery object I need to get a distinct list of items from the Year column which will populate a drop down control.
Searching around, there doesn't appear to be a way of doing this within the CAML query, so I'm wondering how people have gone about achieving this.
Another way to do this is to use the DataView.ToTable method; its first parameter is the one that makes the list distinct.
SPList movies = SPContext.Current.Web.Lists["Movies"];
SPQuery query = new SPQuery();
query.Query = "<OrderBy><FieldRef Name='Year' /></OrderBy>";
DataTable tempTbl = movies.GetItems(query).GetDataTable();
DataView v = new DataView(tempTbl);
String[] columns = {"Year"};
DataTable tbl = v.ToTable(true, columns);
You can then proceed using the DataTable tbl.
If you want to bind the distinct results to a DataSource (of, for example, a Repeater) and retain the actual item via the ItemDataBound event's e.Item.DataItem property, the DataTable approach is not going to work. Instead, and also when you don't want to bind to a DataSource at all, you can use LINQ to get the distinct values.
// Retrieve the list. NEVER use the Web.Lists["Movies"] option as in the other examples as this will enumerate every list in your SPWeb and may cause serious performance issues
var list = SPContext.Current.Web.Lists.TryGetList("Movies");
// Make sure the list was successfully retrieved
if(list == null) return;
// Retrieve all items in the list
var items = list.GetItems();
// Filter the items in the results to retain only the distinct Year values in an array
var distinctItems = (from SPListItem item in items select item["Year"]).Distinct().ToArray();
// Bind results to the repeater
Repeater.DataSource = distinctItems;
Repeater.DataBind();
Remember that since there is no CAML support for distinct queries, each sample provided on this page will retrieve ALL items from the SPList. This may be fine for smaller lists, but for lists with thousands of list items it will be a serious performance killer. Unfortunately there is no more optimized way of achieving the same result.
There is no DISTINCT in CAML. To populate your dropdown, try using something like:
foreach (SPListItem listItem in listItems)
{
    if (null == ddlYear.Items.FindByText(listItem["Year"].ToString()))
    {
        ListItem ThisItem = new ListItem();
        ThisItem.Text = listItem["Year"].ToString();
        ThisItem.Value = listItem["Year"].ToString();
        ddlYear.Items.Add(ThisItem);
    }
}
Assumes your dropdown is called ddlYear.
Can you switch from SPQuery to SPSiteDataQuery? You should be able to, without any problems.
After that, you can use standard ADO.NET behaviour:
SPSiteDataQuery query = new SPSiteDataQuery();
/// ... populate your query here. Make sure you add Year to the ViewFields.
DataTable table = SPContext.Current.Web.GetSiteData(query);
//create a new dataview for our table
DataView view = new DataView(table);
//and finally create a new datatable with unique values on the columns specified
DataTable tableUnique = view.ToTable(true, "Year");
After coming across post after post about how this was impossible, I've finally found a way. This has been tested in SharePoint Online. Here's a function that will get you all unique values for a column. It just requires you to pass in the list Id, View Id, internal list name, and a callback function.
function getUniqueColumnValues(listid, viewid, column, _callback){
var uniqueVals = [];
$.ajax({
url: _spPageContextInfo.webAbsoluteUrl + "/_layouts/15/filter.aspx?ListId={" + listid + "}&FieldInternalName=" + column + "&ViewId={" + viewid + "}&FilterOnly=1&Filter=1",
method: "GET",
headers: { "Accept": "application/json; odata=verbose" }
}).then(function(response) {
$(response).find('OPTION').each(function(a,b){
if ($(b)[0].value) {
uniqueVals.push($(b)[0].value);
}
});
_callback(true,uniqueVals);
},function(){
_callback(false,"Error retrieving unique column values");
});
}
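For example, a call might look like this (the GUIDs and column name below are placeholders for your own list ID, view ID and field internal name):
// Placeholders: substitute your own list GUID, view GUID and internal column name
getUniqueColumnValues(
    "00000000-0000-0000-0000-000000000000", // list Id
    "11111111-1111-1111-1111-111111111111", // view Id
    "Year",                                 // FieldInternalName
    function (success, values) {
        if (success) {
            console.log(values); // array of unique column values
        } else {
            console.log(values); // error message
        }
    }
);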
I was considering this problem earlier today, and the best solution I could think of uses the following algorithm (sorry, no code at the moment):
L is a list of known values (starts populated with the static Choice options when querying fill-in options, for example)
X is approximately the number of possible options
1. Create a query that excludes the items in L
2. Use the query to fetch X items from the list (ordered as randomly as possible)
3. Add unique items to L
4. Repeat 1-3 until the number of fetched items < X
This would reduce the total number of items returned significantly, at the cost of making more queries.
It doesn't much matter if X is entirely accurate, but the randomness is quite important. Essentially the first query is likely to include the most common options, so the second query will exclude these and is likely to include the next most common options and so on through the iterations.
In the best case, the first query includes all the options, then the second query will be empty. (X items retrieved in total, over 2 queries)
In the worst case (e.g. the query is ordered by the options we're looking for, and there are more than X items with each option) we'll make as many queries as there are options. Returning approximately X * X items in total.
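For illustration only, a rough JavaScript sketch of that loop; fetchItemsExcluding is a hypothetical helper standing in for the query from step 1, returning up to batchSize items whose option value is not already known:
// Rough sketch. fetchItemsExcluding(known, batchSize) is a hypothetical helper that
// queries the list, excluding items whose option value is already in 'known', and
// returns an array of up to batchSize items (in as random an order as possible).
function collectUniqueOptions(staticChoices, batchSize) {
    var known = staticChoices.slice(); // L: the list of known values
    var fetched;
    do {
        fetched = fetchItemsExcluding(known, batchSize); // steps 1-2
        for (var i = 0; i < fetched.length; i++) {       // step 3
            var value = fetched[i].optionValue;          // hypothetical field holding the option
            if (known.indexOf(value) === -1) {
                known.push(value);
            }
        }
    } while (fetched.length >= batchSize);               // step 4: stop when fewer than X come back
    return known;
}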