What is the proper way to implement previous page navigation in Cassandra?

I've been playing with Cassandra for some time, and the one thing I'm least satisfied with is previous-page pagination.
As far as I understand, Cassandra has automatic paging support. All I have to supply is the page size and the PageState, and it returns the next set of rows.
I have no problem with the "Next" page link, since every time I query Cassandra it returns the next PageState.
However, I have no idea what the right way to implement a previous page link is. Since my project is a web app, a previous page link is essential.
At the moment the only way I can go back to the previous page is by storing all past PageStates in the session.
This is fine for a site with a few pages, but the reason I chose Cassandra is big data; I don't want to keep track of all past PageStates.
I don't want to expose the page state in the browser either, for security reasons. What is the proper way to implement paging with a proper previous page link?

Please take a look at the following: Backward paging in cassandra c# driver.
We have implemented something similar, though with encryption of the page state.
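That scheme can be sketched as follows; this is a minimal illustration in Python, assuming an opaque page-state token like the one the driver returns. The key, helper names, and token layout are all hypothetical, and the HMAC signing stands in for the encryption mentioned above:

```python
import base64
import hashlib
import hmac
from typing import Optional

SECRET = b"server-side-secret"  # hypothetical key; load from config in practice

def seal(state: bytes) -> str:
    """Sign an opaque paging state so the browser can hold it without
    being able to tamper with it (swap in real encryption if the raw
    state itself is sensitive)."""
    tag = hmac.new(SECRET, state, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(tag + state).decode()

def unseal(token: str) -> bytes:
    """Verify and unwrap a token received back from the browser."""
    raw = base64.urlsafe_b64decode(token.encode())
    tag, state = raw[:32], raw[32:]
    expected = hmac.new(SECRET, state, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("page state was tampered with")
    return state

def page_links(prev_state: Optional[bytes], next_state: Optional[bytes]) -> dict:
    # Each rendered page carries both sealed tokens, so "previous" needs
    # no server-side history (no stack of PageStates in the session).
    return {
        "prev": seal(prev_state) if prev_state is not None else None,
        "next": seal(next_state) if next_state is not None else None,
    }
```

The point of the sealing step is that the page state can safely live in the URL or the page markup, which removes the need to accumulate PageStates in the session.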

Related

Offset pagination vs Cursor pagination

I am studying pagination and I have some questions.
What is the difference between the two approaches?
What is the best use case for cursor-based pagination?
Can cursor-based pagination go to a specific page?
Can cursor-based pagination go back to the previous page?
Are there any performance differences between the two?
My thoughts
I think cursor-based pagination is much more complex, which makes offset-based pagination more desirable. Only systems centered on real-time data need cursor-based pagination.
Cursor pagination is most often used for real-time data due to the frequency with which new records are added, and because when reading data you often see the latest results first. There are different scenarios in which offset and cursor pagination make the most sense, so it will depend on the data itself and how often new records are added. When querying static data, the performance cost alone may not be enough for you to use a cursor, as the added complexity that comes with it may be more than you need.
Quoted from this awesome blog post, happy coding!
Also, check this out:
Pagination is a solution to this problem that ensures that the server only sends data in small chunks. Cursor-based pagination is our recommended approach over numbered pages, because it eliminates the possibility of skipping items and displaying the same item more than once. In cursor-based pagination, a constant pointer (or cursor) is used to keep track of where in the data set the next items should be fetched from.
This explanation is from the Apollo GraphQL docs.
This post explains the difference between them.
What is the difference between the two approaches?
The difference is big. One paginates using offsets, the other using cursors. Both approaches have multiple pros and cons. For example, offset pagination lets you jump to any page, while cursor-based pagination only lets you jump to the next/previous page.
There is also a substantial difference in the implementation underneath: with offsets, the database will very likely have to scan all the records from the first page up to the page you want (there are techniques to avoid this).
For more pros and cons, I suggest reading the article.
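To make the difference concrete, here is a toy sketch of both approaches over an in-memory, id-sorted list. It is illustrative only; in a real database the cursor variant becomes an indexed `WHERE id > :cursor LIMIT :n` query rather than a scan:

```python
# Pretend table of 100 rows, already sorted by id.
rows = [{"id": i, "msg": f"event {i}"} for i in range(1, 101)]

def offset_page(offset: int, limit: int):
    # Offset pagination: the store still has to walk past `offset` rows
    # before it can start returning anything.
    return rows[offset:offset + limit]

def cursor_page(after_id: int, limit: int):
    # Cursor pagination: seek to the first row past the cursor, then take
    # `limit` rows; the last id returned becomes the next cursor.
    page = [r for r in rows if r["id"] > after_id][:limit]
    next_cursor = page[-1]["id"] if page else None
    return page, next_cursor
```

Note that `offset_page(10, 5)` and `cursor_page(10, 5)` return the same rows here only because ids are dense; the cursor version stays correct even when rows are inserted or deleted in front of the page, which is exactly the duplicate/skip problem offsets suffer from.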
What is the best use case for cursor-based pagination?
If you're using a relational database and have millions of records, querying with a high offset will probably take a long time or time out, while cursor pagination will be more performant.
Can cursor-based pagination go to a specific page?
No; this is one of the disadvantages of the approach.
Can cursor-based pagination go back to the previous page?
It's a very common technique to return two cursors in the response, one pointing at the previous page and one at the next. If that's the case, you can go to the previous page; otherwise, you cannot.
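A minimal sketch of that two-cursor response shape, using an in-memory list of ids; the field names and semantics are illustrative, not taken from any particular API:

```python
rows = list(range(1, 21))  # stand-in: record ids already in sort order

def page(after=None, before=None, limit=5):
    """Fetch one page. `after` pages forward, `before` pages backward."""
    if before is not None:
        # The `limit` rows just before the cursor.
        items = [r for r in rows if r < before][-limit:]
    else:
        floor = after if after is not None else 0
        items = [r for r in rows if r > floor][:limit]
    return {
        "items": items,
        # First/last item double as cursors; None signals no page that way.
        "prev_cursor": items[0] if items and items[0] != rows[0] else None,
        "next_cursor": items[-1] if items and items[-1] != rows[-1] else None,
    }
```

The client never inspects the cursors; it just echoes `next_cursor` back as `after`, or `prev_cursor` back as `before`, to move in either direction.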
Are there any performance differences between the two?
Yes! See the answer above.

Best option for persisting a string value between pages in Razor CORE

I want to display the user's name on each page after they log in. Once I've retrieved their name from the database, what's the best way of storing that name in terms of speed and complexity of code? I've looked at sessions and cookies as options but wondered which is better or if some other way (like persisting a base ViewModel) is recommended.
I just don't want to have to go back to the database each page just to display some simple text.
In the end I went with putting all the "pretty" into cookies and reading them in the _Layout page.
Write once, read many: the values are stored on the end-user's computer, so the browser can pick them up locally rather than from the server, and if someone tampers with the values it doesn't matter, as they're only used for prettification.

Why can't ContinuationToken be used for paging in Azure Search API?

Reading the documentation for the Azure Search .NET SDK, I see that the ContinuationToken property is not supposed to be used for pagination (this is the same as the @odata.nextLink and @search.nextPageParameters properties in the REST API).
Note that this property is not meant to help you implement paging of search results. You can implement paging using the Top and Skip search parameters.
Source
Why can't I use it for pagination? I have a situation where I want to run a query and then step through a static copy of the results page by page. I don't want those query results to change beneath my feet as I navigate through them, as new documents are added to the underlying database. In my case, there could be hundreds or thousands of results added in the minute or two between submitting the initial query and navigating to another page. How could I accomplish this?
Your question can be addressed in two parts:
Why is it not recommended to use ContinuationToken to implement pagination?
How can pagination be implemented such that results remain completely stable from page to page?
These are actually unrelated questions, since nothing about ContinuationToken guarantees the stability of the search results. Azure Search makes no consistency guarantees around paging, whether you use $top and $skip or ContinuationToken.
For question #1, the reason ContinuationToken is not recommended for paging is that Azure Search controls when the token is returned, not your application code. If you make assumptions about how and when Azure Search decides to return you a token, there's a chance those assumptions may break with a future service update. The intent of ContinuationToken is to prevent requests for too many documents from overwhelming the service, so you should assume that it is entirely at the service's discretion whether it will return a token.
For question #2, since Azure Search doesn't provide consistency guarantees, you can't completely avoid issues like the same document showing up in multiple pages, missing documents, or documents that are deleted by the time they are seen in results. Even if you wanted to build your own snapshot of the results and page over them in your application code, building a consistent snapshot isn't possible in the first place. However, if your only concern is to avoid showing new documents in the results, you can include a created timestamp field in your index and filter on that in every search request.
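As a sketch, a request builder that pins every page to the same snapshot time might look like this. The `created` field name is an assumption about your index schema, and the helper itself is hypothetical, though `$filter`, `$top`, `$skip`, and `$orderby` are standard Azure Search query options:

```python
from datetime import datetime, timezone

def page_params(query: str, snapshot: datetime, page: int, page_size: int = 50) -> dict:
    """Build query parameters for one page of results, filtered so that
    documents created after `snapshot` never appear in any page."""
    return {
        "search": query,
        # OData filter: only documents that existed when the first query ran.
        "$filter": f"created le {snapshot.strftime('%Y-%m-%dT%H:%M:%SZ')}",
        "$top": page_size,
        "$skip": page * page_size,
        # A deterministic sort order keeps the pages disjoint.
        "$orderby": "created asc",
    }
```

You would capture `snapshot` once (e.g. `datetime.now(timezone.utc)`) when the user submits the initial query, then reuse it for every subsequent page request. This guards against new documents appearing, but not against updates or deletions, per the caveats above.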
Frankly, unless you're trying to export the entire contents of your index, I would question the need for such strong consistency guarantees around paging. Google and Bing make no such guarantees, so arguably user expectations are already set around this. If you are trying to export your data, this is unfortunately not easy with Azure Search today. In that case, please vote on this User Voice item to help the team prioritize this scenario.

XPages: Unite views from 'X' databases in one page

I am facing the following challenge in an XPage: there are three databases with exactly the same views in them. The goal is to unite these three views from the three databases in one XPage and one view component!
AFAIK, a view component can usually display just one view. Currently, I have a Java back end where the documents are fetched; they are then processed into HTML markup and made more attractive/functional using jQuery DataTables.
I see (at least) three disadvantages:
It is quite some code, and if you want to display another view from the databases you quickly run into boilerplate code...
It is not very fast, as it takes up to 30 seconds to fetch and display all records.
I can hardly imagine that my way is best practice.
Has anyone ever faced this challenge? I would like to reduce the Java code, make it faster, and use a standard component if possible.
Tim has good questions in his comment. With your current approach, make sure you use the ViewNavigator cache, which is the fastest way to retrieve view entries:
Notes/Domino Release 8.52 or greater
View.setAutoUpdate must be False
ViewNavigator cache must be enabled
ViewNavigator.getNext() (or getPrev) must be used
http://www-10.lotus.com/ldd/ddwiki.nsf/dx/Fast_Retrieval_of_View_Data_Using_the_ViewNavigator_Cache

Dojo JsonRest on one side and Mongodb on the other side: pagination/filtering?

I am experimenting with Dojo's dgrid (which is great!). I am using Node.js/Mongoose on the server side.
I want to write a "log browser": I have a big MongoDB collection containing lots of log entries; using dgrid, I want to be able to 1) filter by certain parameters and 2) paginate using dgrid's native pagination.
Here is the problem: Dojo's JsonRest stores will send a request like this:
Accept:application/javascript, application/json
Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3
...
Host:localhost:3000
Range:items=0-24
So it sends a range (that's all it can do, really) and displays things on the client side according to what it receives from the server.
It's unrealistic to expect a client-side JsonRest object to make requests other than ranges. However, I am aware that skip/limit doesn't play well with Mongoose:
What is the best way to do ajax pagination with MongoDb and Nodejs?
My idea was to render the dgrid, allow users to pick filters, and let them happily paginate through their logs. However, given that skip/limit is out of the question, I am in a bit of a pickle...
Any pearls of wisdom, other than ditching dgrid altogether and implementing pagination on my own without using Dojo stores?
Merc.
Front-end
The filtering isn't as feature-rich in dgrid as it is in Dojo's EnhancedGrid filter plugin, so you will probably need to implement that part yourself.
The good news is that you get the paging simply by mixing in "dgrid/OnDemandGrid" when you create your grid.
Back-end
The docs seem to indicate that your best bet for performance is to do some tricks with indices and query based on those to get your ranges.
You are probably already referencing these, but here they are:
http://mongoosejs.com/docs/api.html#query_Query-skip
http://docs.mongodb.org/manual/reference/method/cursor.skip/
Since log data is usually sequential and rarely modified, you could probably just use a monotonically increasing index for each row of log data and query using those to get the right offset into and count of the rows.
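That scheme can be sketched like this, with an in-memory stand-in for the collection; in Mongoose it would map to something like `Model.find({seq: {$gt: lastSeen}}).sort('seq').limit(n)`, which uses an index on `seq` instead of skipping rows:

```python
# Each log row carries a monotonically increasing `seq`; a page is then a
# range query (seq > last_seen, take n) rather than skip/limit.
logs = [{"seq": i, "line": f"log {i}"} for i in range(1, 1001)]

def log_page(last_seen: int, n: int = 25):
    """Return the next `n` rows after `last_seen`, plus the new cursor."""
    page = [row for row in logs if row["seq"] > last_seen][:n]
    last = page[-1]["seq"] if page else last_seen
    return page, last
```

Mapping dgrid's `Range: items=0-24` header onto this does require translating the requested offset into a `seq` cursor, which works cleanly as long as the user pages sequentially rather than jumping to an arbitrary page.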
