I am using the query builder to return a number of search results from a database table. Now I would like to display only the first three rows of the result in the UI. How can I achieve this?
The Query API is lazy; as soon as .toList(), .toTypedArray(), .toCollection(), .where(), etc. is called, the whole result set is retrieved (eagerly).
I recommend using this:
var limit = 3
var rs = Query.make(entity.XXX)...select()
rs.setPageSize(limit)
var paginatedRS = com.google.common.collect.Iterables.limit(rs,limit)
The setPageSize method specifies how many rows are fetched per page.
The limit method creates a new Iterable that yields only the first limit rows.
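To consume only those rows you can just iterate the limited Iterable; roughly like this (a sketch, the properties you read from each row depend on your entity XXX):
for (row in paginatedRS) {
  // row is an entity.XXX bean; access whatever columns you need, e.g. row.PublicID
  print(row)
}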
I am trying to re-create AutoQuery queries outside of a service request. I am doing this because I give the user the option to save a request and then use that data elsewhere. I save the query string data, so I am trying to create a query from the saved query string.
I need 2 things.
1) A query that returns the complete data, not limited by the default AutoQuery page size
2) A query that returns the count
I tried making the query like this:
IAutoQueryDb _autoQuery = HostContext.TryResolve<IAutoQueryDb>();
var dto = new MyQueryDbClass();
Dictionary<string, string> pars = GetParameters();
var query = _autoQuery.CreateQuery(dto, pars);
The problem with this is that the generated query has the table name of the response object and not the actual table, so it doesn't work. Also, I am unable to call ToCountStatement() on it. It is also limited by my default page size.
Is there a way to convert the AutoQuery query string to a SqlExpression so I can execute it and also get count statement?
The CreateQuery() API returns a populated SqlExpression<Table> similar to what you would have created if constructing the query manually yourself, e.g.:
SqlExpression<Table> query = _autoQuery.CreateQuery(dto, pars);
To clear the paging info you can call .Limit() without arguments, which will clear any populated Offset/Rows values:
query.Limit();
The Custom AutoQuery Implementations docs show an example of how AutoQuery executes the query behind the scenes; e.g. you can get the total with:
var total = Db.Count(query);
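Putting it together, a rough sketch using the OrmLite Db APIs as in the docs example (untested; names are illustrative):
SqlExpression<Table> query = _autoQuery.CreateQuery(dto, pars);
query.Limit();                 // clear AutoQuery's default Offset/Rows
var rows = Db.Select(query);   // the complete, un-paged results
var total = Db.Count(query);   // the matching row count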
I've been trying to create a suitelet that allows a saved search to be run on a collection of item records in NetSuite using SuiteScript 1.0.
Pagination is quite easy everywhere else, but I can't get my head around how to do it in NetSuite.
For instance, we have 3,000 items and I'm trying to limit the results to 100 per page.
I'm struggling to understand how to apply a start row and a max row parameter as a filter so I can run the search and return just that range of records.
I've seen plenty of scripts that allow you to exceed the limit of 1,000 records, but I'm trying to throttle the amount shown on screen, and I'm at a loss to figure out how to do this.
Any tips greatly appreciated.
function searchItems(request, response)
{
    var start = request.getParameter('start');
    var max = request.getParameter('max');

    if (!start)
    {
        start = 1;
    }
    if (!max)
    {
        max = 100;
    }

    var filters = [];
    filters.push(new nlobjSearchFilter('category', null, 'is', currentDeptID));

    var productList = nlapiSearchRecord('item', 'customsearch_product_search', filters);
    if (productList)
    {
        response.write('stuff here for the items');
    }
}
You can approach this a couple of different ways. Either way, you will definitely need to sort your search results by something meaningful and consistent, such as internal ID. Make sure you've got your results sorted either in your saved search definition or by adding a search column in your script.
You can continue building your search exactly as you are, and then just use the native slice method on the productList array. You would use your start and max parameters to compute the arguments to slice, as in the sketch below.
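Roughly like this (untested; parseInt is added because request parameters come back as strings, and start is treated as 1-based as in your code):
var startIndex = parseInt(start, 10) || 1;
var pageSize = parseInt(max, 10) || 100;
var pageResults = productList ? productList.slice(startIndex - 1, (startIndex - 1) + pageSize) : [];
Keep in mind that nlapiSearchRecord returns at most 1,000 rows, so this only pages within that first batch.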
Another approach is to use the async API for searches. It will look similar to this:
var search = nlapiLoadSearch("item", "customsearch_product_search");
search.addFilter(new nlobjSearchFilter('category',null,'is',currentDeptID));
var productList = search.runSearch().getResults(start, end);
For more references on this approach, check out the NetSuite Help page titled "Search APIs" and the reference page for nlobjSearch.
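If I remember correctly, getResults(start, end) is zero-based and returns at most 1,000 rows per call, so building on the search variable above, the suitelet's paging parameters map onto it roughly like this (a sketch):
var startIndex = (parseInt(start, 10) || 1) - 1;              // getResults is zero-based
var pageSize = Math.min(parseInt(max, 10) || 100, 1000);      // at most 1,000 rows per call
var productList = search.runSearch().getResults(startIndex, startIndex + pageSize);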
I am trying to perform a freetext search on all cq:Page and dam:Asset nodes, with the results ordered by last-modified date.
I have created the query for the search, which is as below:
1_group.p.or=true
1_group.1_type=cq:Page
1_group.2_type=dam:Asset
2_group.p.or=true
2_group.1_path=/content
2_group.2_path=/content/dam
fulltext=text
p.limit=-1
Now I need to sort the results based on last modified. But since cq:Page has the property jcr:content/cq:lastModified and dam:Asset has the property jcr:content/jcr:lastModified, I am unable to figure out which property I should use in the orderby field of the predicate. Is there any way to form a predicate which uses different property values for pages and assets during sorting? Please let me know if this can be achieved in a single query.
Regards,
Shailesh
This can be done by creating a custom AbstractPredicateEvaluator and overriding getOrderByComparator to supply your own comparator. Then you would register your custom predicate evaluator for your Query by calling registerPredicateEvaluator.
In the example below, you can use whatever you'd like for the customSort.sortby property. This can be useful if your comparator handles multiple types of sorting. You can get this information from the predicate via predicate.get("sortby").
QueryBuilder builder = resourceResolver.adaptTo(QueryBuilder.class);
Map<String, String> params = new HashMap<String, String>();
params.put("1_group.p.or", "true");
params.put("1_group.1_type", "cq:Page");
params.put("1_group.2_type", "dam:Asset");
params.put("2_group.p.or", "true");
params.put("2_group.1_path", "/content");
params.put("2_group.2_path", "/content/dam");
params.put("fulltext", text);
params.put("p.limit", "-1");
params.put("customSort.sortby", "last-modified");
PredicateGroup pg = PredicateGroup.create(params);
Query query = builder.createQuery(pg, resourceResolver.adaptTo(Session.class));
query.registerPredicateEvaluator("customSort", new CustomSortPredicateEvaluator());
SearchResult result = query.getResult();
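A hypothetical sketch of what CustomSortPredicateEvaluator could look like (the jcr:content property fallback between cq:lastModified and jcr:lastModified is an assumption about your content model; adjust as needed):
import java.util.Calendar;
import java.util.Comparator;

import javax.jcr.Node;
import javax.jcr.RepositoryException;
import javax.jcr.query.Row;

import com.day.cq.search.Predicate;
import com.day.cq.search.eval.AbstractPredicateEvaluator;
import com.day.cq.search.eval.EvaluationContext;

public class CustomSortPredicateEvaluator extends AbstractPredicateEvaluator {

    @Override
    public Comparator<Row> getOrderByComparator(Predicate predicate, final EvaluationContext context) {
        // "sortby" is the customSort.sortby parameter from the query, e.g. "last-modified"
        if (!"last-modified".equals(predicate.get("sortby"))) {
            return super.getOrderByComparator(predicate, context);
        }
        return new Comparator<Row>() {
            @Override
            public int compare(Row r1, Row r2) {
                return lastModified(context, r2).compareTo(lastModified(context, r1)); // newest first
            }
        };
    }

    private Calendar lastModified(EvaluationContext context, Row row) {
        try {
            Node content = context.getNode(row).getNode("jcr:content");
            if (content.hasProperty("cq:lastModified")) {        // cq:Page
                return content.getProperty("cq:lastModified").getDate();
            }
            if (content.hasProperty("jcr:lastModified")) {       // dam:Asset
                return content.getProperty("jcr:lastModified").getDate();
            }
        } catch (RepositoryException e) {
            // fall through and treat nodes without a date as oldest
        }
        Calendar oldest = Calendar.getInstance();
        oldest.setTimeInMillis(0);
        return oldest;
    }
}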
There is no way to do this in the query itself since they are different properties (cq:lastModified & jcr:lastModified). Even if you use SQL2 like this (SELECT p.* FROM [nt:base] AS p WHERE ISDESCENDANTNODE(p, '/content') AND (p.[jcr:primaryType] = 'dam:Asset' OR p.[jcr:primaryType] = 'cq:Page')), there is no way to use a single ORDER BY.
You have to use Java code for sorting.
Keep your query, collect the result into a List, and sort it with a Comparator.
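Roughly like this (a sketch; it assumes a QueryBuilder Query built from the predicates in the question, without any orderby, and the cq:lastModified / jcr:lastModified fallback is an assumption about the content):
List<Hit> hits = new ArrayList<Hit>(query.getResult().getHits());
Collections.sort(hits, new Comparator<Hit>() {
    public int compare(Hit a, Hit b) {
        return lastModified(b).compareTo(lastModified(a));            // newest first
    }

    private Calendar lastModified(Hit hit) {
        try {
            Resource content = hit.getResource().getChild("jcr:content");
            if (content != null) {
                ValueMap vm = content.getValueMap();
                Calendar cal = vm.get("cq:lastModified", Calendar.class);   // cq:Page
                if (cal == null) {
                    cal = vm.get("jcr:lastModified", Calendar.class);       // dam:Asset
                }
                if (cal != null) {
                    return cal;
                }
            }
        } catch (RepositoryException e) {
            // ignore and sort undated nodes last
        }
        Calendar oldest = Calendar.getInstance();
        oldest.setTimeInMillis(0);
        return oldest;
    }
});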
I have a CouchDB database which contains about 200,000 tweets; the keys are tweet IDs. I have a query which needs to retrieve all documents to look for some information. I'm using LightCouch to work with CouchDB in a Java web app. If I create a query like this:
List<JsonObject>tweets = dbClient.view("_all_docs").query(JsonObject.class);
and then loop through tweets and, for each JsonObject in tweets, use
JsonObject tweetJson = dbClient.find(JsonObject.class, tweet.get("id").toString().replaceAll("\"", ""));
to retrieve each tweet one by one, it takes an extremely long time for 200,000 documents. If I load all documents in one single query using includeDocs(true):
List<JsonObject>allTweets = dbClient.view("_all_docs").includeDocs(true).query(JsonObject.class);
it causes an OutOfMemory exception since the number of documents is too large. So how can I deal with this problem? I'm thinking about using limit(5000) to retrieve 5,000 documents at a time and loop through the whole database, but I don't know how to write the loop so it continues with the next 5,000 after the first 5,000 docs. One possible solution is using startKey and endKey, but I'm confused about how to use them when the key is a tweet ID.
Use queryPage, but make sure to use a String as the key.
See: https://github.com/lightcouch/LightCouch/issues/26#event-122327174
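Something like this should work (a sketch; the Page accessor names such as getResultList, isHasNext and getNextParam may differ between LightCouch versions, so verify against yours):
String param = null;                                   // null asks for the first page
do {
    Page<JsonObject> page = dbClient.view("_all_docs")
                                    .includeDocs(true)
                                    .queryPage(5000, param, JsonObject.class);
    for (JsonObject tweet : page.getResultList()) {
        // process each tweet here
    }
    param = page.isHasNext() ? page.getNextParam() : null;
} while (param != null);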
0.1.6 still seems to show this behaviour.
A workaround that I found for this goes something like this:
changes = dbClient.changes()
    .since(null)            // or .since(since) if you want to start from a known offset
    .includeDocs(true);

int size = 1;
getCursor("0");             // helper (not shown) that records the current sequence/offset

while (size > 0) {
    ChangesResult resultSet = changes.limit(40000).getChanges();
    List<ChangesResult.Row> rowList = resultSet.getResults();
    for (ChangesResult.Row feed : rowList) {
        // ... instantiate your object via Gson here ...
    }
    getCursor(resultSet.getLastSeq());
    size = rowList.size();
}
I have a list that looks like:
Movie Year
----- ----
Fight Club 1999
The Matrix 1999
Pulp Fiction 1994
Using CAML and the SPQuery object I need to get a distinct list of items from the Year column which will populate a drop down control.
Searching around, there doesn't appear to be a way of doing this within the CAML query. I'm wondering how people have gone about achieving this?
Another way to do this is to use the DataView.ToTable method; its first parameter is the one that makes the list distinct.
SPList movies = SPContext.Current.Web.Lists["Movies"];
SPQuery query = new SPQuery();
query.Query = "<OrderBy><FieldRef Name='Year' /></OrderBy>";
DataTable tempTbl = movies.GetItems(query).GetDataTable();
DataView v = new DataView(tempTbl);
String[] columns = {"Year"};
DataTable tbl = v.ToTable(true, columns);
You can then proceed using the DataTable tbl.
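For example, to populate a drop-down (the ddlYear name here is just illustrative):
ddlYear.DataSource = tbl;
ddlYear.DataTextField = "Year";
ddlYear.DataValueField = "Year";
ddlYear.DataBind();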
If you want to bind the distinct results to a DataSource (for example a Repeater) and retain the actual item via the ItemDataBound event's e.Item.DataItem property, the DataTable approach is not going to work. Instead (and also when you don't want to bind to a DataSource at all) you can use LINQ to get the distinct values.
// Retrieve the list. NEVER use the Web.Lists["Movies"] option as in the other examples as this will enumerate every list in your SPWeb and may cause serious performance issues
var list = SPContext.Current.Web.Lists.TryGetList("Movies");
// Make sure the list was successfully retrieved
if(list == null) return;
// Retrieve all items in the list
var items = list.GetItems();
// Filter the items in the results to retain only the distinct values in an array
var distinctItems = (from SPListItem item in items select item["Year"]).Distinct().ToArray();
// Bind results to the repeater
Repeater.DataSource = distinctItems;
Repeater.DataBind();
Remember that since there is no CAML support for distinct queries, each sample provided on this page will retrieve ALL items from the SPList. This may be fine for smaller lists, but for lists with thousands of list items it will be a serious performance killer. Unfortunately there is no more optimized way of achieving the same result.
There is no DISTINCT in CAML. To populate your dropdown, try using something like:
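// listItems here is assumed to be the SPListItemCollection returned by your SPQuery, e.g. list.GetItems(query)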
foreach (SPListItem listItem in listItems)
{
if ( null == ddlYear.Items.FindByText(listItem["Year"].ToString()) )
{
ListItem ThisItem = new ListItem();
ThisItem.Text = listItem["Year"].ToString();
ThisItem.Value = listItem["Year"].ToString();
ddlYear.Items.Add(ThisItem);
}
}
Assumes your dropdown is called ddlYear.
Can you switch from SPQuery to SPSiteDataQuery? You should be able to, without any problems.
After that, you can use standard ADO.NET behaviour:
SPSiteDataQuery query = new SPSiteDataQuery();
/// ... populate your query here. Make sure you add Year to the ViewFields.
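// For example (the list template, scope and field name are placeholders for your own setup):
query.Lists = "<Lists ServerTemplate='100' />";
query.ViewFields = "<FieldRef Name='Year' Nullable='TRUE' />";
query.Webs = "<Webs Scope='Recursive' />";
query.Query = "<OrderBy><FieldRef Name='Year' /></OrderBy>";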
DataTable table = SPContext.Current.Web.GetSiteData(query);
//create a new dataview for our table
DataView view = new DataView(table);
//and finally create a new datatable with unique values on the columns specified
DataTable tableUnique = view.ToTable(true, "Year");
After coming across post after post about how this was impossible, I've finally found a way. This has been tested in SharePoint Online. Here's a function that will get you all unique values for a column. It just requires you to pass in the list ID, view ID, internal field name, and a callback function.
function getUniqueColumnValues(listid, viewid, column, _callback){
var uniqueVals = [];
$.ajax({
url: _spPageContextInfo.webAbsoluteUrl + "/_layouts/15/filter.aspx?ListId={" + listid + "}&FieldInternalName=" + column + "&ViewId={" + viewid + "}&FilterOnly=1&Filter=1",
method: "GET",
headers: { "Accept": "application/json; odata=verbose" }
}).then(function(response) {
$(response).find('OPTION').each(function(a,b){
if ($(b)[0].value) {
uniqueVals.push($(b)[0].value);
}
});
_callback(true,uniqueVals);
},function(){
_callback(false,"Error retrieving unique column values");
});
}
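An example call might look like this (the GUIDs and field name are placeholders for your own list and view):
getUniqueColumnValues(
    "00000000-0000-0000-0000-000000000000",    // list GUID
    "11111111-1111-1111-1111-111111111111",    // view GUID
    "Year",                                     // internal field name
    function(success, values) {
        if (success) {
            console.log(values);                // distinct values for the column
        } else {
            console.error(values);              // error message
        }
    });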
I was considering this problem earlier today, and the best solution I could think of uses the following algorithm (sorry, no code at the moment):
L is a list of known values (starts populated with the static Choice options when querying fill-in options, for example)
X is approximately the number of possible options
1. Create a query that excludes the items in L
2. Use the query to fetch X items from the list (ordered as randomly as possible)
3. Add the unique values found to L
4. Repeat steps 1 - 3 until the number of fetched items is less than X
This would reduce the total number of items returned significantly, at the cost of making more queries.
It doesn't much matter if X is entirely accurate, but the randomness is quite important. Essentially the first query is likely to include the most common options, so the second query will exclude these and is likely to include the next most common options and so on through the iterations.
In the best case, the first query includes all the options and the second query comes back empty (X items retrieved in total, over 2 queries).
In the worst case (e.g. the query is ordered by the options we're looking for, and there are more than X items with each option) we'll make as many queries as there are options, returning approximately X * X items in total.
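A rough C# sketch of that loop, assuming a Text column named Year on the Movies list (the field name, value type and batch size X are placeholders, and this sketch does not attempt the random ordering recommended above):
SPList list = SPContext.Current.Web.Lists.TryGetList("Movies");
var known = new HashSet<string>();     // L: values found so far
const uint X = 200;                    // rough guess at the number of distinct options
int fetched;

do
{
    // Build a <Where> that excludes every known value: nested <And> of <Neq> nodes
    string where = "";
    foreach (string value in known)
    {
        string neq = "<Neq><FieldRef Name='Year' /><Value Type='Text'>" + value + "</Value></Neq>";
        where = (where == "") ? neq : "<And>" + where + neq + "</And>";
    }

    var query = new SPQuery { RowLimit = X };
    if (where != "")
        query.Query = "<Where>" + where + "</Where>";

    SPListItemCollection items = list.GetItems(query);
    fetched = items.Count;

    foreach (SPListItem item in items)
        known.Add(Convert.ToString(item["Year"]));
}
while (fetched >= X);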