Get actual count of matches in Azure Search

Azure Search returns a maximum of 1,000 results at a time. For paging on the client, I want the total count of matches in order to be able to display the correct number of paging buttons at the bottom and in order to be able to tell the user how many results there are. However, if there are over a thousand, how do I get the actual count? All I know is that there were at least 1,000 matches.
I need to be able to do this from within the SDK.

If you want the total number of documents matching your query, one thing you could do is set IncludeTotalResultCount to true in your search parameters. Once you do that and execute the query, the total count is available in the Count property of the search results.
Here's some sample code for that:
var credentials = new SearchCredentials("account-key (query or admin key)");
var indexClient = new SearchIndexClient("account-name", "index-name", credentials);
var searchParameters = new SearchParameters()
{
    QueryType = QueryType.Full,
    IncludeTotalResultCount = true
};
var searchResults = await indexClient.Documents.SearchAsync("*", searchParameters);
// Prints the (approximate) total number of documents matching the query
Console.WriteLine("Total documents in index (approx) = " + searchResults.Count.GetValueOrDefault());
Please note that:
The count will be approximate.
Getting the count is an expensive operation, so when implementing pagination you should only request it with the very first page (see the sketch below).
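For example, a paging loop might look like this (a sketch against the same Microsoft.Azure.Search-style client shown above; the query text and page size are placeholders):
var pageSize = 50;

// First page: request the count once so the client can render its paging buttons
var firstPageParameters = new SearchParameters()
{
    IncludeTotalResultCount = true,
    Top = pageSize,
    Skip = 0
};
var firstPage = await indexClient.Documents.SearchAsync("some query", firstPageParameters);
var totalMatches = firstPage.Count.GetValueOrDefault();   // approximate total across all pages

// Subsequent pages: skip the count and just move the window
var thirdPageParameters = new SearchParameters()
{
    Top = pageSize,
    Skip = 2 * pageSize   // rows 101-150 when pageSize is 50
};
var thirdPage = await indexClient.Documents.SearchAsync("some query", thirdPageParameters);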

For REST clients using the POST API, just include "count": true in the payload. You get the count back in @odata.count.
Ref: https://learn.microsoft.com/en-us/rest/api/searchservice/search-documents
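For instance, the request and the relevant part of the response look roughly like this (the service name, index name, api-version, and numbers are placeholders):
POST https://{service-name}.search.windows.net/indexes/{index-name}/docs/search?api-version={api-version}
api-key: {query-or-admin-key}
Content-Type: application/json

{
    "search": "mysearchterm",
    "count": true,
    "top": 50,
    "skip": 0
}
The response then carries the (approximate) total match count alongside the requested page of results:
{
    "@odata.count": 2345,
    "value": [ ... ]
}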

Related

GUIDEWIRE: How to display the query builder result in the UI?

I am using the query builder to return a number of search results from a database table. Now I would like to display only the first three rows of the result in the UI. How can I achieve this?
The Query API is lazy; when .toList(), .toTypedArray(), .toCollection(), .where(), etc. is called, the whole result set is retrieved (eagerly).
I recommend you use this:
var limit = 3
var rs = Query.make(entity.XXX)...select()
rs.setPageSize(limit)
var paginatedRS = com.google.common.collect.Iterables.limit(rs,limit)
The setPageSize method specifies how many rows will be fetched per "page".
Iterables.limit creates a new iterable that contains only the first (limit) rows, which you can then iterate as in the sketch below.
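For example, consuming the limited iterable could look like this (a hypothetical sketch; in a real PCF you would typically expose paginatedRS to whatever widget renders the rows):
for (row in paginatedRS) {
  print(row) // replace with whatever your UI binding needs per row
}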

Suitescript Pagination

I've been trying to create a Suitelet that allows a saved search to be run on a collection of item records in NetSuite using SuiteScript 1.0.
Pagination is quite easy everywhere else, but I can't get my head around how to do it in NetSuite.
For instance, we have 3,000 items and I'm trying to limit the results to 100 per page.
I'm struggling to understand how to apply a start row and a max row parameter as a filter so I can run the search and return just that range of records from my search.
I've seen plenty of scripts that allow you to exceed the limit of 1,000 records, but I'm trying to throttle the amount shown on screen, and I'm at a loss to figure out how to do this.
Any tips greatly appreciated.
function searchItems(request, response)
{
    var start = request.getParameter('start');
    var max = request.getParameter('max');
    if (!start)
    {
        start = 1;
    }
    if (!max)
    {
        max = 100;
    }
    var filters = [];
    filters.push(new nlobjSearchFilter('category', null, 'is', currentDeptID));
    var productList = nlapiSearchRecord('item', 'customsearch_product_search', filters);
    if (productList)
    {
        response.write('stuff here for the items');
    }
}
You can approach this a couple different ways. Either way, you will definitely need to sort your search results by something meaningful and consistent, like by internal ID. Make sure you've got your results sorted either in your saved search definition or by adding a search column in your script.
You can continue building your search exactly like you are, and then just use the native slice method on the productList array. You would pass your start and end parameters as the arguments to slice, as in the sketch below.
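For instance (a sketch reusing the start and max parameters already read in the Suitelet; note that nlapiSearchRecord itself returns at most 1,000 rows, so this only pages within that first batch):
// 'start' is 1-based in the script above, while Array.slice is 0-based
var pageStart = parseInt(start, 10) - 1;
var pageSize = parseInt(max, 10);
var page = productList.slice(pageStart, pageStart + pageSize);
// write 'page' to the response instead of the full productList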
Another approach is to use the async API for searches. It will look similar to this:
var search = nlapiLoadSearch("item", "customsearch_product_search");
search.addFilter(new nlobjSearchFilter('category',null,'is',currentDeptID));
var productList = search.runSearch().getResults(start, end);
For more references on this approach, check out the NetSuite Help page titled "Search APIs" and the reference page for nlobjSearch.
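To tie that call back to the Suitelet's parameters, the start and end indexes could be derived like this (a sketch; getResults uses a 0-based start and an exclusive end, and can return at most 1,000 rows per call):
var startIndex = (parseInt(start, 10) || 1) - 1;        // convert the 1-based 'start' parameter
var endIndex = startIndex + (parseInt(max, 10) || 100); // end index is exclusive
var productList = search.runSearch().getResults(startIndex, endIndex);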

Marklogic Node API - how to filter the results from a valuesBuilder

I want to retrieve all documents in my MarkLogic db that have month=November and also group them by name, and get the count of records per name. I know I can get the frequency per name by using valuesBuilder with a range index on the name field, but how can I filter this result so that I only get the count of records for the month of November?
Supposedly valuesBuilder.fromIndexes().where() can do the filtering, but I don't know what to pass here and examples online seem to be sparse.
According to the API doc, the where clause takes a queryBuilder.query. With that in mind, you should be able to do something like this (not tested):
var marklogic = require('marklogic');
var vb = marklogic.valuesBuilder;
var qb = marklogic.queryBuilder;
vb.fromIndexes('name')                    // 'name' is the range index mentioned in the question
  .where(qb.value('month', 'November'))
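To actually run the values query, the builder is passed to the client's values.read call, roughly like this (a sketch; the connection settings are placeholders for your environment):
var db = marklogic.createDatabaseClient({
  host: 'localhost',
  port: 8000,
  user: 'my-user',        // placeholder credentials
  password: 'my-password'
});

db.values.read(
  vb.fromIndexes('name')                    // the 'name' range index from the question
    .where(qb.value('month', 'November'))   // only documents where month = November
).result(function (response) {
  // each distinct name comes back with its frequency (count of matching records)
  console.log(JSON.stringify(response, null, 2));
});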

Select parts of a nlobjSearchResult

I have a large nlobjSearchResultSet object with over 18,000 "results".
Each result is a pricing record for a customer. There may be multiple records for a single customer.
As making mass changes to 18,000+ records is costly in governance points, I'm migrating to a parent (customer) record and child records (items) so I can make changes to the item pricing as a sublist.
As part of this migration, is there a simple command to select only the nlobjSearchResult objects within the big object that match certain criteria (i.e. the customer ID)?
This would allow me to migrate the data with only the one search, and then only the subsequent creates/saves of the new record format.
In a related manner, is there a simple function call to return the number of records contained in a given NetSuite record? For % progress context.
Thanks in advance.
You can actually get the number of rows by running the search with an added aggregate column. A generic way to do this for a saved search that doesn't have any aggregate columns is shown below:
var res = nlapiSearchRecord('salesorder', 'customsearch29', null,
[new nlobjSearchColumn('formulanumeric', null, 'sum').setFormula('1')]);
var rowCount = res[0].getValue('formulanumeric', null, 'sum');
console.log(rowCount);
To get the total number of records, the only way is to run a saved search; an ideal way to do that is with nlobjSearch.
Below is sample code for fetching an unlimited number of search results along with the record count:
var search = nlapiLoadSearch(null, SAVED_SEARCH_ID).runSearch();
var res = [];
var currentRes;
var i = 0;
do {
    // getResults() returns at most 1,000 rows per call, so keep paging until a short batch comes back
    currentRes = search.getResults(i, i + 1000) || [];
    res = res.concat(currentRes);
    i = i + currentRes.length;
} while (currentRes.length === 1000);
Either res.length or i will give you the total number of records, and res will give you all the results.

How to retrieve all documents in couchdb database without causing out of memory

I have a CouchDB database which contains about 200,000 tweets; the keys are tweet IDs. I have a query which needs to retrieve all documents to look for some information. I'm using LightCouch to work with CouchDB in a Java web app. If I run a query like this:
List<JsonObject> tweets = dbClient.view("_all_docs").query(JsonObject.class);
and then loop through tweets, using
JsonObject tweetJson = dbClient.find(JsonObject.class, tweet.get("id").toString().replaceAll("\"", ""));
for each JsonObject to retrieve the tweets one by one, it takes an extremely long time for 200,000 documents. If I load all documents in one single query using includeDocs(true):
List<JsonObject> allTweets = dbClient.view("_all_docs").includeDocs(true).query(JsonObject.class);
it causes an OutOfMemory exception since the number of documents is too large. So how can I deal with this problem? I'm thinking about using limit(5000) to retrieve 5,000 documents at a time and loop through the whole database, but I don't know how to write the loop so that it continues with the next 5,000 after the first 5,000 docs. One possible solution is using startKey and endKey, but I'm confused about how to use them when the key is a tweet ID.
Use queryPage but make sure to use a String as the Key
See: https://github.com/lightcouch/LightCouch/issues/26#event-122327174
0.1.6 still seems to show this behaviour.
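A rough sketch of that approach, going from memory of the 0.1.x View.queryPage/Page API (verify the method names against your LightCouch version):
String param = null;   // null requests the first page; afterwards pass Page#getNextParam()
Page<JsonObject> page;
do {
    page = dbClient.view("_all_docs")
                   .includeDocs(true)
                   .queryPage(5000, param, JsonObject.class);
    for (JsonObject tweet : page.getResultList()) {
        // process each tweet here instead of holding all 200,000 in memory
    }
    param = page.getNextParam();
} while (page.isHasNext());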
A workaround that I found for this goes something like this:
// Page through the _changes feed in batches, feeding the last sequence
// number of each batch back in as the starting point of the next one.
String since = null;          // null starts from the beginning of the feed (or pass an existing sequence for an offset)
int size = 1;
while (size > 0) {
    ChangesResult resultSet = dbClient.changes()
            .since(since)
            .includeDocs(true)
            .limit(40000)
            .getChanges();
    List<ChangesResult.Row> rowList = resultSet.getResults();
    for (ChangesResult.Row row : rowList) {
        JsonObject doc = row.getDoc();   // instantiate your own object via gson here
        // ... process the document ...
    }
    since = resultSet.getLastSeq();      // cursor for the next batch
    size = rowList.size();
}
