ajaxProgressiveLoad="load", initialFilter and ajaxURLGenerator - Tabulator

I'm using ajaxProgressiveLoad="load" successfully, but initialFilter doesn't seem to get applied during the load, as all the rows are displayed. Also, the calculation of the last_page response value on the server is quite expensive (and will get more so!), so I was trying to use ajaxURLGenerator to include a last_page=getPageMax() request parameter to tell my server that it has already calculated last_page and should just return that value. But getPageMax() returns false, which, as detailed in the docs, indicates that pagination is not being used.
So at the moment I'm under the impression that these two features are not available under progressive loading? If so, is there another way to do this?
Thanks

If you are using progressive loading then I would suggest that you use the ajaxFiltering option to pass the filter information back to the server and filter server side, to reduce the amount of data sent in the response.
ajaxFiltering: true
The getPageMax function is only available when pagination is being used explicitly, not when progressive loading is being used
Importantly, the last_page value is primarily used in this instance to let Tabulator know that there are still more pages to load. You could effectively always return a value one or two above the current page while there is still data available, and set it to the current page when you have reached the last set of records; that way Tabulator should continue to try to load data without the overhead of the final page calculation.
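Putting both suggestions together, a minimal sketch of the table setup (Tabulator 4.x style) might look like the following; the URL, field names and filter values are illustrative assumptions, not from the original question:

var table = new Tabulator("#example-table", {
    ajaxURL: "/api/rows",            // hypothetical endpoint
    ajaxProgressiveLoad: "load",
    ajaxFiltering: true,             // filters are sent to the server as request parameters
    initialFilter: [
        { field: "status", type: "=", value: "open" }  // illustrative filter
    ]
});

On the server, the response for each page would then set last_page to the current page plus one while more rows remain, and to the current page once the data is exhausted, avoiding the expensive exact page count.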

Related

Caching an API response using react-query when changing routes, while still being able to receive data when reloading the page

I have prepared a simple demo with react-router-dom 6 and React Query.
I have a couple of routes and a fetch call that takes place on the first route (Home).
What I want to achieve is navigating to the About page, or any other page, without performing another request for a certain time period (maybe never again), but if I refresh the page I want to be able to re-trigger the request to get the data.
I have tried using staleTime, but if I refresh the page I get no results, just a blank page. refetchInterval works on refresh but does not keep the data when I change routes.
I have also tried the pattern in this article, but it still doesn't get the job done.
It may be that there is something I don't understand, but the question is: how do I avoid making the same request over and over again, performing it only once, while still being able to get the data if I refresh the page when navigating between different routes?
Demo
The solution to the problem eventually came from one of the maintainers on the official GitHub repo, and involves adding placeholderData as an empty array instead of initialData, and setting staleTime to Infinity in my case, as I only want to perform the request once.
Setting placeholderData normally gives you the opportunity to show some dummy data until you fetch the real data, but in my case it does the job. There is more to read about this at this source.
const { isFetching, data: catImages } = useQuery({
    queryKey: ["catImages"],
    queryFn: getCatImages,
    placeholderData: [], // shown until the real data arrives
    staleTime: Infinity  // the data never goes stale, so the request runs only once per session
});
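For completeness, a fetcher like the getCatImages referenced above might look like this; the endpoint is an assumption for illustration only:

async function getCatImages() {
    // Hypothetical endpoint; any JSON-returning API works the same way.
    const res = await fetch("https://api.thecatapi.com/v1/images/search?limit=10");
    if (!res.ok) throw new Error(`Request failed: ${res.status}`);
    return res.json();
}

React Query caches whatever this resolves to under the ["catImages"] key, so route changes reuse the cached data, while a full page reload starts with an empty cache and fetches again.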

Is it possible to add row data via a callback to Tabulator

I need a table that shows about 2.5 million rows from an array that has already been created in memory. When I create the table and assign the array to the 'data' property, the browser engine runs out of memory after some (significant) time. I assume that Tabulator not only creates objects for the current virtual DOM part, but for each entry in the array in advance.
So my question: is it possible to not provide the entire array, but only the count of rows, and let Tabulator ask for the content of each row via a callback only when it is needed for rendering? Of course this only makes sense if Tabulator does not keep any data for rows that have scrolled out of view.
I know that this might conflict with some column calculation features or others, but that would be fine for my use case.
The same use case works with canvas-datagrid, which I tried before.
If you can use Ajax to get the data, there is Progressive Ajax Loading, which uses the pagination module to make a series of requests for parts of the data set, one at a time, appending the rows to the table as they arrive.
Doc is here: http://tabulator.info/docs/4.3/data#ajax-alter
Progressive loading is an option, but you are still going to run into the issue of having two copies of the data in memory. That will happen automatically in 'load' mode, or gradually in 'scroll' mode as you scroll through the table. The best option would seem to be to load the data in batches via, say, a button that calls either setData() or replaceData(); the user can then fetch the next or previous set of data on demand.
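A minimal sketch of that batching idea, assuming the 2.5-million-row array already exists in memory as allRows and the table instance is called table (both names are illustrative):

var batchSize = 100000; // illustrative batch size
var offset = 0;

function showBatch() {
    // replaceData() swaps the table contents, so earlier batches are released
    table.replaceData(allRows.slice(offset, offset + batchSize));
}

document.getElementById("next-batch").addEventListener("click", function () {
    if (offset + batchSize < allRows.length) {
        offset += batchSize;
        showBatch();
    }
});

document.getElementById("prev-batch").addEventListener("click", function () {
    offset = Math.max(offset - batchSize, 0);
    showBatch();
});

Because replaceData() discards the previous rows, only one batch of row components exists in the table at a time, although the source array itself of course stays in memory.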

Pagination in MarkLogic while using the Search API

I have around 5,300,000 documents in MarkLogic Server and I am building a simple search application. The user enters a search term, MarkLogic searches for that term in all the nodes of all the documents, and the matching documents are returned as the result. I have implemented custom paging to show 10 results per page.
I am using the Search API for this:
import module namespace search = "http://marklogic.com/appservices/search"
    at "/MarkLogic/appservices/search/search.xqy";

declare variable $options :=
    <options xmlns="http://marklogic.com/appservices/search">
        <transform-results apply="raw"/>
    </options>;

search:search($p, $options, $noRecFrom, 10)/search:result
where $p is the input from the user and $noRecFrom is the number indicating where the displayed records should start. For example, for page 1 $noRecFrom is 1, for page 2 it is 11, for page 3 it is 21, and so on. For paging there are hyperlinks to go to the First, Next, Prev and Last pages.
To calculate the total number of records returned I am using:
for $x in search:search($p, $options)
return $x/@total
While the First, Next and Prev hyperlinks work perfectly, if someone clicks Last the application stops responding and the query does not show any output. Is it due to the large number of documents in the database, or am I implementing it wrongly?
Is there an efficient way to paginate in MarkLogic (with search:search) so that the user can go to the last page without a delay in the query result for such a large database?
The way you've implemented it, you're running the search repeatedly in your for loop. And that would indeed be slow.
Instead, you should be calculating a $start parameter based on the @total value and the number of documents per page, and passing that in as an argument (I think it's the third one) to search:search.
I would also recommend making sure you can run in unfiltered mode. There is good information about optimizing for fast pagination (indexes, etc.) on the developer site; the idea is to resolve queries out of indexes to give very good, accurate unfiltered performance.
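The $start arithmetic itself is simple; as a hedged illustration in JavaScript (on the server this would be the same calculation in XQuery, and the function name here is made up):

// Compute the 1-based start offset of the last page, given the @total
// attribute from a previous search:search response and a page size of 10.
function lastPageStart(total, pageSize) {
    const lastPage = Math.max(1, Math.ceil(total / pageSize));
    return (lastPage - 1) * pageSize + 1;
}
// e.g. lastPageStart(5300000, 10) === 5299991

Since @total comes back with the first page of results, the Last hyperlink can be built once from that value and reused, instead of re-running the search to find the end of the result set.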
There is a tutorial on paginated search at http://developer.marklogic.com/learn/2006-09-paginated-search
Once you have resolved the issues mentioned by cwhit above, if you still want to get to the last page of data in a faster manner, you could make your code smart enough to reverse the sort order and pull the correct offset of records.
Here's another tip:
To get better insight into what MarkLogic is doing with search:search, call
search:get-default-options()
to see the starting point for common search applications.

WCF Data Service Paging Behavior

In my sample project, I set the entity page size to 20. Then I have an entity set whose result count is divisible by the page size, for example the Categories set, which has 100 items. When I go to:
http://localhost/Sample.svc/Categories?$skiptoken=80
I get the 81st to 100th categories, and the page has the "next" link
http://localhost/Sample.svc/Categories?$skiptoken=100
I tried to go to that page and it returns nothing.
What's the explanation for that?
The paging simply takes the next PageSize items. If it finds fewer than that, then it's clear there are no more items to return, so you don't get a next link. If the query returns the requested number of items, the runtime doesn't try to figure out whether this is the last page or not; it simply returns a next link. It might happen that such a link returns no results.
In fact a next link is not guaranteed to return any results, but as long as the response contains another next link, there are potentially more results. The standard built-in paging will return pages of the predefined size (except for the last one), but services are free to use any other kind of paging, which might return different sizes for each page (including empty pages).
To directly answer your question "Why is the last page empty?":
The runtime doesn't "look ahead" so it can't tell if a given page is the last one except for when it gets less than the expected number of results. Looking ahead would be both costly (asking for more than necessary) and potentially wrong (what if the extra result causes an error...).
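This is why OData clients are written to follow next links until one stops appearing, rather than assuming every page is non-empty. A rough client-side sketch (the URL is the one from the question; the JSON property names follow the WCF Data Services verbose format, so treat them as assumptions for your service version):

async function fetchAllCategories() {
    let url = "http://localhost/Sample.svc/Categories?$format=json";
    const items = [];
    while (url) {
        const page = await (await fetch(url)).json();
        items.push(...(page.d.results || [])); // the last page may be empty
        url = page.d.__next || null;           // absent once paging is done
    }
    return items;
}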

What's the appropriate Response Code for a Pagination API using a GET Request with page parameters, where the parameters produce no records?

I have developed a web interface for a database. The database and web interface are for my own use in my hobby, running on my private intranet. Currently the database has 1800+ records, which is going to increase with usage.

Version 1 of the web interface listed all records (~2.5KB) on a single page, requiring a ton of scrolling. Version 2 introduced pagination, where records are grouped into a non-fixed size of roughly 100 records. On page load all 1800+ records are still transferred to the client, but only the first page is "visible"; the other 17 are hidden. I use a series of "non-submit" buttons with JS on-click functionality to hide the current page and make the selected page visible. Better, in that scrolling is limited to ~100 records.

Version 3 only transfers the first page and the paging buttons on page load. Now the on-click function uses the fetch() API to send a GET request with parameters to fetch the desired page, then swaps it into the DOM. The parameters specify starting and ending points for the page; these values come from the paging buttons supplied by the server on page load. It works well, with a significantly reduced data transfer size.

In version 4 I am generalizing the fetch() GET request parameters to let the user choose any page starting and ending point. (Note: the user cannot specify a page size directly.) So if the user selects a start and end point that no records fall into, my plan is to use HTTP response code 204 "No Content" to tell the JS code that there are no matching records and nothing to swap. Is this the appropriate response code? Should I be including any other header information in the response alongside the 204 code?
Take a look at what the RFC says about 204:
https://www.rfc-editor.org/rfc/rfc7231#section-6.3.5
It's really intended for PUT requests. I think for what you are doing, it's fine to return 200 and no body, with a Content-Length header of 0.
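On the client side, either status is easy to handle with fetch(); a small sketch, assuming a hypothetical /records endpoint and that the server sends an empty body (whether 200 or 204) when nothing matches:

async function fetchPage(start, end) {
    const res = await fetch(`/records?start=${start}&end=${end}`);
    if (res.status === 204) return null; // explicit "no content"
    if (!res.ok) throw new Error(`Request failed: ${res.status}`);
    const body = await res.text();
    return body.length === 0 ? null : body; // 200 with an empty body
}

A null return tells the caller there is nothing to swap into the DOM, so the currently displayed page can stay in place.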
