I've been trying to create a Suitelet that runs a saved search against a collection of item records in NetSuite using SuiteScript 1.0.
Pagination is quite easy everywhere else, but I can't get my head around how to do it in NetSuite.
For instance, we have 3,000 items and I'm trying to limit the results to 100 per page.
I'm struggling to understand how to apply a start row and a max row parameter as a filter so that the search returns only that slice of records.
I've seen plenty of scripts that let you exceed the 1,000-record search limit, but I'm trying to throttle the number shown on screen, and I'm at a loss as to how to do this.
Any tips greatly appreciated.
function searchItems(request, response) {
    // Read paging parameters from the request, falling back to page 1 / 100 rows
    var start = parseInt(request.getParameter('start'), 10) || 1;
    var max = parseInt(request.getParameter('max'), 10) || 100;

    var filters = [];
    // currentDeptID is assumed to be resolved elsewhere in the Suitelet
    filters.push(new nlobjSearchFilter('category', null, 'is', currentDeptID));

    var productList = nlapiSearchRecord('item', 'customsearch_product_search', filters);
    if (productList) {
        response.write('stuff here for the items');
    }
}
You can approach this a couple different ways. Either way, you will definitely need to sort your search results by something meaningful and consistent, like by internal ID. Make sure you've got your results sorted either in your saved search definition or by adding a search column in your script.
You can continue building your search exactly like you are, and then just use the native slice method on the productList Array, passing appropriate start and end indexes derived from your start and max parameters.
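For example, a minimal sketch of that approach, adding a sorted internal ID column for stable paging and then slicing (slice takes a 0-based start index and an exclusive end index; the itemid column is an assumption about your saved search):
// sort by internal ID so the paging window is stable across requests
var columns = [new nlobjSearchColumn('internalid').setSort()];
var productList = nlapiSearchRecord('item', 'customsearch_product_search', filters, columns);
if (productList) {
    // start is 1-based from the request; Array.prototype.slice is 0-based
    var startIndex = start - 1;
    var pageOfResults = productList.slice(startIndex, startIndex + max);
    for (var i = 0; i < pageOfResults.length; i++) {
        // assuming the saved search exposes an itemid column
        response.write(pageOfResults[i].getValue('itemid') + '<br/>');
    }
}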
Another approach is to use the async API for searches. It will look similar to this:
var search = nlapiLoadSearch('item', 'customsearch_product_search');
search.addFilter(new nlobjSearchFilter('category', null, 'is', currentDeptID));
// getResults takes a 0-based, inclusive start index and an exclusive end index,
// returning at most 1,000 rows per call
var productList = search.runSearch().getResults(start, end);
For more references on this approach, check out the NetSuite Help page titled "Search APIs" and the reference page for nlobjSearch.
Take a windowed virtual list that can load an arbitrary range of rows at any point in the list, as in the following example.
The virtual list provides a callback that is invoked whenever the user scrolls to rows that have not been fetched from the backend yet. It provides the start and stop indexes, so that with an offset-based pagination endpoint I can fetch the required items without fetching any unnecessary data.
const loadMoreItems = (startIndex, stopIndex) => {
  // offset-based pagination: the visible range maps directly to offset/limit
  fetch(`/items?offset=${startIndex}&limit=${stopIndex - startIndex}`);
};
I'd like to replace my offset based pagination with a cursor based one, but I can't figure out how to reproduce the above logic with it.
The main issue is that I feel like I will need to download all the items before startIndex in order to receive the cursor needed to fetch the items between startIndex and stopIndex.
What's the correct way to approach this?
After some investigation I found what seems to be the way MongoDB approaches the problem:
https://docs.mongodb.com/manual/reference/method/cursor.skip/#mongodb-method-cursor.skip
Obviously the same approach can be adopted by any other backend implementation.
They provide a skip method that allows you to skip an arbitrary number of items after the provided cursor.
This means my sample endpoint would look like the following:
/items?cursor=${cursor}&skip=${skip}&limit=${stopIndex - startIndex}
I then need to figure out the cursor and the skip values.
The following code could work to find the closest available cursor, given that I store cursors together with the items:
// Limit our search only to items before startIndex
const fragment = items.slice(0, startIndex);
// Find the index of the closest item that carries a cursor; reverse() mutates
// fragment, which is safe here because fragment is a fresh copy made by slice
const cursorIndex = fragment.length - 1 - fragment.reverse().findIndex(item => item.cursor != null);
// Get the cursor itself, not just the item that carries it
const cursor = items[cursorIndex].cursor;
And of course, I also have a way to know the skip value:
// Items to skip between the cursor and startIndex, assuming the endpoint
// resumes at the item immediately after the one carrying the cursor
const skip = startIndex - 1 - cursorIndex;
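Putting it together, the loadMoreItems callback from the question would become something like this (a sketch; findClosestCursor is a hypothetical helper wrapping the lookup logic above):
const loadMoreItems = (startIndex, stopIndex) => {
  // resolve the nearest stored cursor and the residual skip for this window
  const { cursor, skip } = findClosestCursor(items, startIndex);
  fetch(`/items?cursor=${cursor}&skip=${skip}&limit=${stopIndex - startIndex}`);
};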
Azure Search returns a maximum of 1,000 results at a time. For paging on the client, I want the total count of matches so that I can display the correct number of paging buttons at the bottom and tell the user how many results there are. However, if there are over a thousand, how do I get the actual count? All I know is that there were at least 1,000 matches.
I need to be able to do this from within the SDK.
If you want to get the total number of documents in an index, one thing you could do is set IncludeTotalResultCount to true in your search parameters. Once you do that, when you execute the query you will see the total document count in the Count property of the search results.
Here's a sample code for that:
var credentials = new SearchCredentials("account-key (query or admin key)");
var indexClient = new SearchIndexClient("account-name", "index-name", credentials);
var searchParameters = new SearchParameters()
{
    QueryType = QueryType.Full,
    IncludeTotalResultCount = true
};
var searchResults = await indexClient.Documents.SearchAsync("*", searchParameters);
// Prints the (approximate) total number of documents in the index
Console.WriteLine("Total documents in index (approx) = " + searchResults.Count.GetValueOrDefault());
Please note that:
This count will be approximate.
Getting the count is an expensive operation so you should only do it with the very first request when implementing pagination.
For REST clients using the POST API, just include "count": true in the payload. You get the count in @odata.count.
Ref: https://learn.microsoft.com/en-us/rest/api/searchservice/search-documents
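A minimal sketch of such a REST call from JavaScript (the service name, index name, api-version, and key are placeholders for your own values):
// POST the query with "count": true and read the total from @odata.count
fetch('https://<service-name>.search.windows.net/indexes/<index-name>/docs/search?api-version=2020-06-30', {
    method: 'POST',
    headers: {
        'Content-Type': 'application/json',
        'api-key': '<query-or-admin-key>'
    },
    body: JSON.stringify({ search: '*', count: true })
})
    .then(response => response.json())
    .then(results => console.log('Total matches (approx):', results['@odata.count']));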
I have to display a list of books that contains more than 50,000 books.
I want to display a paged list where, for each page, I invoke a method that gives me 20 books.
List<Books> Ebooks = Books.GetLibrary(index);
But using PagedList doesn't match what I want, because it creates a subset of the whole collection of objects and accesses each subset by index. Going by the definition of its method, I would have to load the whole list from the beginning.
I also followed this article
var EBooks = from b in db.Books select b;
int pageSize = 20;
int pageNumber = (page ?? 1);
return View(EBooks.ToPagedList(pageNumber, pageSize));
But doing so, I have to invoke (var EBooks = from b in db.Books select b;) for each index.
EDIT:
I'm searching for indications on how to achieve this:
List<Books> Ebooks = Books.GetLibrary(index);
And of course I have the total number of books, so I know the number of pages.
So I'm searching for indications that lead me to achieve it: for each index, I invoke GetLibrary(index).
Any suggestions?
Have you tried something like:
var pagedBooks = Books.GetLibrary().Skip(pageNumber * pageSize).Take(pageSize);
This assumes a 0-based pageNumber.
If that doesn't work, can you add a new method to the Books class that gets a paged set directly from the data source?
Something like "Books.GetPage(pageNumber, pageSize);" that way you don't get the entire collection every time.
Other than that, you may have to find a way to cache the initial result of Books.GetLibrary() somewhere.
I have a CouchDB database which contains about 200,000 tweets; the keys are tweet IDs. I have a query which needs to retrieve all documents to look for some information. I'm using LightCouch to work with CouchDB in a Java web app. If I run a query like this:
List<JsonObject> tweets = dbClient.view("_all_docs").query(JsonObject.class);
and then loop through tweets, using
JsonObject tweetJson = dbClient.find(JsonObject.class, tweet.get("id").toString().replaceAll("\"", ""));
to retrieve each tweet one by one, it takes an extremely long time for 200,000 documents. If I load all documents in one single query using includeDocs(true)
List<JsonObject> allTweets = dbClient.view("_all_docs").includeDocs(true).query(JsonObject.class);
it causes an out-of-memory exception, since the number of documents is too large. So how can I deal with this problem? I'm thinking about using limit(5000) to retrieve 5,000 documents at a time and looping through the whole database, but I don't know how to write the loop that continues with the next 5,000 after the first 5,000 docs. One possible solution is using startKey and endKey, but I'm confused about how to use them when the key is a tweet ID.
Use queryPage, but make sure to use a String as the key.
See: https://github.com/lightcouch/LightCouch/issues/26#event-122327174
0.1.6 still seems to show this behaviour.
A workaround that I found for this goes something like this:
String since = "0"; // or some stored sequence if you want an offset
int size = 1;
while (size > 0) {
    ChangesResult resultSet = dbClient.changes()
            .since(since)
            .includeDocs(true)
            .limit(40000)
            .getChanges();
    List<ChangesResult.Row> rowList = resultSet.getResults();
    for (ChangesResult.Row feed : rowList) {
        // each row carries the full document; instantiate your object via Gson here
        JsonObject doc = feed.getDoc();
    }
    // remember where we got to, so the next iteration continues from there
    since = resultSet.getLastSeq();
    size = rowList.size();
}
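For reference, the startkey approach from the question is also workable against CouchDB's HTTP API directly: request one row more than the page size and use the extra row's key as the startkey of the next request. A sketch in JavaScript (the database URL and name are placeholders):
// Page through _all_docs 5,000 rows at a time using startkey
async function readAllDocs() {
    let startkey = null;
    while (true) {
        const params = new URLSearchParams({ include_docs: 'true', limit: '5001' });
        if (startkey !== null) params.set('startkey', JSON.stringify(startkey));
        const res = await fetch(`http://localhost:5984/tweets/_all_docs?${params}`);
        const { rows } = await res.json();
        for (const row of rows.slice(0, 5000)) {
            // process each document here
            console.log(row.doc._id);
        }
        if (rows.length <= 5000) break; // no extra row means this was the last page
        startkey = rows[5000].key; // startkey is inclusive, so this row starts the next page
    }
}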
I am confused about how the search function works in the Spotify API. Their example is like this:
var sp = getSpotifyApi();
var models = sp.require('$api/models');
var search = new models.Search('Rihanna');
search.localResults = models.LOCALSEARCHRESULTS.APPEND;
var searchHTML = document.getElementById('results');
search.observe(models.EVENT.CHANGE, function() {
var results = search.tracks;
var fragment = document.createDocumentFragment();
for (var i=0; i<results.length; i++){
var link = document.createElement('li');
var a = document.createElement('a');
a.href = results[i].uri;
link.appendChild(a);
a.innerHTML = results[i].name;
fragment.appendChild(link);
}
searchHTML.appendChild(fragment);
});
search.appendNext();
So, I guess that calling appendNext() initiates the search, and the inner function is called when it has results? But the results are limited to a certain number (default 50) of the total. How do you get the rest? Do you call appendNext() again recursively from inside the callback? Also, does that mean that after you do that, your list includes the original results, or are the original results replaced? Anyone know of an example that searches through all available results?
Also they mention that if the search is running, appendNext() does nothing. So how do you gracefully wait until the current search is complete before getting the next 'page'?
Their documentation is terrible, IMHO. Say you have 1,000 search results total from the server, and I want to see results 900-1000. Have I got to keep calling appendNext() over and over until I get to 900?
Thanks
Bob
There is no pagination when using the Search functionality built into the Spotify Apps API. You can increase the number of results so it returns more than 50 (see the Search page in the documentation), although the amount is limited (it seems to be 200 tracks at the moment).
There is an alternative way, which is performing requests to the Web API instead.
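For example, the Web API's search endpoint accepts limit and offset parameters, so a specific window of results can be requested in one call. A sketch against the current Web API (accessToken is assumed to be obtained elsewhere via OAuth):
// Fetch results 900-949 of the 'Rihanna' track search in a single request
fetch('https://api.spotify.com/v1/search?q=Rihanna&type=track&limit=50&offset=900', {
    headers: { Authorization: 'Bearer ' + accessToken }
})
    .then(response => response.json())
    .then(data => {
        // data.tracks.total reports how many results exist overall
        data.tracks.items.forEach(track => console.log(track.name, track.uri));
    });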