Does the Cloudant DB API support pagination?

I have an app already using cloudant.synch.query.IndexManager to query the DB, but I need pagination. I can see Cloudant supports it via a bookmark, but I can only find documentation for using an HTTP POST to the URL path /db/_find, and the methods in IndexManager don't take a bookmark. Is there a Cloudant API I can use instead of doing the HTTP POST?
My app is an Android and iOS app that uses the IBM Bluemix MobileFirst service as the back end, and I'm using bms_samples_android_bluelist (https://github.com/ibm-bluemix-mobile-services/bms-samples-android-bluelist) as an example.

Use skip and limit for paging.
I got this from the Cloudant Sync doc: https://github.com/cloudant/sync-android/blob/master/doc/query.md
Skip and limit
Skip and limit let you retrieve subsets of the results; among other things, this is useful for pagination.
skip skips over a number of results from the result set.
limit defines the maximum number of results to return for the query.
To display the twenty-first to thirtieth results:
QueryResult result = im.find(query, 20, 10, fields, null);
To disable:
skip, pass 0 as the skip argument.
limit, pass 0 as the limit argument.
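As a sketch, paging with skip and limit just means computing the skip offset for each page. The helper below is plain JavaScript, and the find callback is a hypothetical stand-in for the im.find call above (same skip/limit semantics), not a real Cloudant API:

```javascript
// Compute skip/limit for a 0-based page index, mirroring the
// im.find(query, skip, limit, fields, sort) signature above.
function pageParams(pageIndex, pageSize) {
  return { skip: pageIndex * pageSize, limit: pageSize };
}

// `find` is a stand-in for the real IndexManager call; it must
// honour skip and limit the way im.find does (0 disables them).
function fetchPage(find, query, pageIndex, pageSize) {
  var p = pageParams(pageIndex, pageSize);
  return find(query, p.skip, p.limit);
}
```

With the doc's example, page 2 with a page size of 10 gives skip = 20 and limit = 10, which is exactly the im.find(query, 20, 10, fields, null) call shown.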

Yep! Cloudant's native mobile libraries for Android and iOS are dubbed "Cloudant Sync." Most of the resources for them are compiled on this page: https://cloudant.com/cloudant-sync-resources/

Related

How to do pagination in NeptuneDB to achieve high performance

Hi, I am building a website using AWS Neptune (Gremlin) with Node.js as the backend and Angular as the frontend. I'm facing a problem: I want pagination on my website, because without it a single query may load and display 5000 items. I know that in MySQL we can use something like
select * from mydb limit 0, 20;
to do pagination.
Can I achieve something similar in Neptune (or any graph DB)? I investigated for a while and found this:
How to perform pagination in Gremlin
According to the answer to that question, it seems we cannot avoid loading all query results into memory. Does that mean it makes no difference whether we paginate or not?
Or can I implement pagination between Node and Angular (I am just guessing)?
So, any ideas to improve performance?
It seems I can use something like
g.V('vertice').out('some_relationship').order().by().range(0,3)
where a stable order is guaranteed by order().
But the problem is:
In pagination, we need three parameters: currentPage, PageSize and TotalNumOfItems. currentPage and PageSize are passed from the frontend, but how can we get the total number before retrieving the items?
My approach is simply to run a count query before retrieving the items:
g.V('vertice').out('some_relationship').count()
g.V('vertice').out('some_relationship').order().by().range(0,3)
Will this work?
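The arithmetic behind the two-query approach above can be sketched in plain JavaScript; paginate is a hypothetical helper, with totalCount standing in for the result of the count() query:

```javascript
// Given the count() result, a 1-based currentPage and a pageSize,
// compute the bounds for range(low, high) plus the total page count.
function paginate(totalCount, currentPage, pageSize) {
  var totalPages = Math.ceil(totalCount / pageSize);
  var low = (currentPage - 1) * pageSize;          // index of first item on the page
  var high = Math.min(low + pageSize, totalCount); // one past the last item
  return { low: low, high: high, totalPages: totalPages };
}
```

The second traversal would then become g.V('vertice').out('some_relationship').order().by().range(low, high).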

Sitecore 8.1 : Steps for converting the Lucene Search to Solr

We just upgraded from Sitecore 7.2 to 8.1, which uses the Lucene search provider. The website relies heavily on Lucene for searching and indexing articles so that they can be displayed as a list.
We already have a Solr instance set up, and we need to convert from Lucene to Solr. I would appreciate direction on the following:
How do we convert the custom computed Lucene indexes and fields onto Solr?
Apart from configuring cores and endpoints, are there any code changes etc. that we need to be careful of?
How does the index rebuild event work with Solr? Do the CD servers all try to rebuild at once, in sequence, or does only one trigger the build?
UPDATE:
I switched to Solr. I can rebuild all the cores, and web_index shows 11K documents; however, the page doesn't return any results. Below is the code snippet. I'd appreciate help on what I'm doing wrong. This was working fine with Lucene:
public IEnumerable<Article> GetArticles(Sitecore.Data.ID categoryId)
{
    List<Article> articles = null;
    var home = _sitecoreService.GetItem<Sitecore.Data.Items.Item>(System.Guid.Parse(ItemIds.PageIds.Home));
    var index = ContentSearchManager.GetIndex(new SitecoreIndexableItem(home));
    using (var context = index.CreateSearchContext(SearchSecurityOptions.DisableSecurityCheck))
    {
        var query = context.GetQueryable<ArticleSearchResultItem>().Filter(item => item.Category == categoryId);
        var results = query.GetResults();
        articles = new List<Article>();
        foreach (var hit in results.Hits)
        {
            var article = _sitecoreService.GetItem<Article>(new Sitecore.Data.ID(hit.Document.Id).ToGuid());
            if (article != null && article.ArticlePage != null && !article.ArticlePage.HideInNavigation)
            {
                articles.Add(article);
            }
        }
    }
    return articles;
}
The actual code for the computed field would probably not change. You would need to test it to make sure, but because Sitecore abstracts away the Lucene and Solr code, as long as you are just using the Sitecore API it should work.
You will need to change the config. In the Lucene index you add the computed fields in the defaultLuceneIndexConfiguration section; this will need to change to the defaultSolrIndexConfiguration section.
Again, as long as you are using the Sitecore API exclusively and not using Lucene.net or SolrNet directly, most code should work fine. Some gotchas I have found:
Lucene is not case sensitive, but Solr is, so some queries that worked fine on Lucene may no longer work because of case sensitivity.
Be careful of queries that do not set a .Take() limit. Sitecore does have a default value for the maximum rows returned by a query, but on Solr an unbounded query can have a much bigger impact on query time than it does on Lucene because of the network round trips.
Another thing to think about with Solr is the number of searches that take place. With Lucene there is little impact in making many small calls to the index, as it's local and on disk, so very fast. With Solr those calls turn into network traffic, so a lot of micro calls to the index can have a big performance impact.
As mentioned by mikaelnet, Solr uses dynamic fields in the index, so each field has a suffix based on the field type. This shouldn't be a problem in most cases: the Sitecore API will automatically append the suffix to any IndexField attributes you have. But on occasion it can get that mapping wrong and you may have to code around it.
The index rebuild is set by your configuration. There are a few index update strategies that you can set:
manual: The index is only updated manually.
sync: The index is updated when items are modified, created or deleted. This should be the default for the master index on the content authoring server.
onPublishEndAsync: This updates the index after a publish job has been completed.
In a multi-server setup, for example one content authoring server and two content delivery servers, you should set up the content authoring server (or a dedicated indexing server) to perform the index updates. The delivery servers should have the update strategy set to manual for all indexes; this stops the indexes being rebuilt multiple times, once by each server.
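As a sketch, on the delivery servers the strategy for an index would be patched to manual roughly like this (the element and ref names follow the standard Sitecore 8.x Solr index configs such as Sitecore.ContentSearch.Solr.Index.Web.config; verify the exact paths against your version):

```xml
<index id="sitecore_web_index">
  <strategies hint="list:AddStrategy">
    <!-- CD servers: no automatic rebuilds; the authoring or dedicated
         indexing server keeps the shared Solr core up to date -->
    <strategy ref="contentSearch/indexConfigurations/indexUpdateStrategies/manual" />
  </strategies>
</index>
```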
There are some good articles out there about setting up SOLR with Sitecore. For reference:
* http://www.sequence.co.uk/blog/sitecore-8-and-solr/
That should give you an idea of the differences.

Alfresco webscript (js) and pagination

I have a question about the right way to use pagination with Alfresco.
I know the documentation (https://wiki.alfresco.com/wiki/4.0_JavaScript_API#Search_API)
and I use the query part successfully.
By that I mean that I use the maxItems and skipCount parameters, and they work the way I want.
This is an example of a query that I am doing:
var paging =
{
    maxItems: 100,
    skipCount: 0
};
var def =
{
    query: "cm:name:test*",
    page: paging
};
var results = search.query(def);
The problem is that, while I get the number of results I asked for (100, for example), I don't know how to get the total number of results for my query (the total amount Alfresco could give me for this query).
And I need this to :
know if there are more results
know how many pages of results remain
I'm using a workaround for the first need: I query for (maxItems + 1) and show only maxItems. If I get maxItems + 1 results, I know there are more. But this doesn't give me the total number of results.
Do you have any ideas?
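The maxItems + 1 workaround described above can be sketched like this; runQuery is a hypothetical stand-in for Alfresco's search.query, assumed to honour page.maxItems and page.skipCount:

```javascript
// Ask for one extra item so we can tell whether more results exist
// beyond this page, without a second query.
function queryPage(runQuery, luceneQuery, pageIndex, pageSize) {
  var results = runQuery({
    query: luceneQuery,
    page: { maxItems: pageSize + 1, skipCount: pageIndex * pageSize }
  });
  return {
    items: results.slice(0, pageSize), // show only maxItems
    hasMore: results.length > pageSize // the extra row means more pages exist
  };
}
```

This still leaves the total count unknown, which is exactly the limitation described in the question.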
With the JavaScript search object you can't know if there are more items. The object is backed by the class org.alfresco.repo.jscript.Search.java; as you can see, its query method only returns the query results without any extra information. Compare it with org.alfresco.repo.links.LinkServiceImpl, which gives you results wrapped in PagingResults.
So, as the JavaScript search object doesn't provide hasMoreItems info, you need to work around it, for instance by first querying without limits to get the total, and then applying pagination as desired.
You can find how many objects your query matched simply by calling
results.length
bearing in mind that queries usually have a configured maximum result set of 1000 entries to save resources.
You can change this value by editing the <alfresco>/tomcat/webapps/alfresco/WEB-INF/classes/alfresco/repository.properties file.
So, as an alternative to your solution, you can launch a query with no constraints and obtain the real total (or the configured maximum).
You can then use this value to work out how many pages are available, basing the calculation on the number of results per page.
Then dynamically pass the number of the current page to the builder of your query def, and the results variable will contain the corresponding chunk of data.
In this SO post you can find more information about pagination.

How to do "Not Equals" in couchdb?

Folks, I was wondering what the best way is to model documents and/or map functions so that I can run "not equals" queries.
For example, my documents are:
1. { name : 'George' }
2. { name : 'Carlin' }
I want to run a query that returns every document where name does not equal 'John'.
Note: I don't have all possible names beforehand, so the query parameter can be any text, like 'John' in my example.
In short: there is no easy solution.
You have four options:
sending a multi range query
filter the view response with a server-side list function
using a CouchDB plugin
use the mango query language
sending a multi range query
You can request the view with two ranges defined by startkey and endkey, choosing the ranges so that the key John is not requested.
Unfortunately, multi-range requests are not included in the official source: you would have to find the commit that exists somewhere and compile your CouchDB with it.
filter the view response with a server-side list function
It's not recommended, but you can use a list function and skip the row with the key John in your response, much as you would filter a JavaScript array.
using a CouchDB plugin
Create an additional index with, e.g., couchdb-lucene. The Lucene server has such query capabilities.
use the "mango" query language
It's included in the CouchDB 2.0 developer preview. It's not ready for production yet, but it will definitely be included in the stable release.
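For reference, the query from the question would look roughly like this in Mango, POSTed to /db/_find ($ne is a real Mango operator; note, though, that it is applied as an in-memory filter rather than served from an index, so for large databases it is best combined with an indexable condition):

```json
{
  "selector": {
    "name": { "$ne": "John" }
  }
}
```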

Venues/SuggestCompletion doesn't give any results

I know for a fact that there are at least 5-6 POIs within a 50-mile radius of this area. However, I don't get any results for this query.
https://api.foursquare.com/v2/venues/suggestCompletion?ll=-44.67,167.92&query=milford&radius=50000
I see results when I try the search API (it doesn't use query as mentioned in the documentation):
https://api.foursquare.com/v2/venues/search?ll=-44.67,167.92&intent=checkin&query=milford&radius=50000
I get no results with an intent of match on the search query.
I really like the suggestcompletion API (it's compact). Any suggestions/input would be great!
Thanks!
The suggestcompletion endpoint is used to suggest venues whose names start with the provided query; it is meant to provide autocomplete results for search input fields. It is not a general-purpose venue search; you should use the /venues/search endpoint for that purpose.
It looks like you have missed the API version param. You need to specify it by adding this to your request:
&v=20150826
suggestCompletion is included in the newer API version released on 20150826; the default version does not include the suggestCompletion feature.
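Appending the version param can be sketched like this; withVersion is a hypothetical helper, and the version value comes from the answer above:

```javascript
// Append the required v=YYYYMMDD version param to a Foursquare API
// request URL, using ? or & depending on whether a query string exists.
function withVersion(url, version) {
  return url + (url.indexOf('?') === -1 ? '?' : '&') + 'v=' + version;
}
```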
