Progress 4GL - Pagination

In PHP and MySQL I can do pagination of data like this:
select * from customers LIMIT $start, $limit;
The result returns the page I'm requesting. Is it possible to do something like this using Progress 4GL?
I do not use -> select from customers
I use -> for each customers
But how can I set a limit and pages for that query?
Example of the pagination:
I have 20,000 customers in my database. For each search I make, I want to split up the result. The limit I want to send to the application is 100 rows out of, say, 1,000 matching rows. And when the user presses page 2, it returns the next 100 rows (but not the previous 100).
Does that make sense?
UPDATE
I'm using a technology from Adobe called Flex. Flex does not connect to the database directly; it depends on a back-end language to do that.
So I'm using Flex and Progress 4GL. My Flex application has a datagrid (like a browse in 4GL) to show the data retrieved from my 4GL database.
The problem is that the database is huge, so I need to paginate the data. Each time the user clicks on another page, the Flex application has to ask Progress 4GL for that page's data. But each click of the button is a separate call, so Progress has no knowledge of the previous query.
How can I go from the 1st page to the 7th using a query?

Check this KB article on batching records to a .NET client
http://knowledgebase.progress.com/articles/Article/000033965?q=query+batch&l=en_US&c=Product_Group%3AOpenEdge&type=Article__kav&fs=Search&pn=1
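The idea in that article carries over to any client, including Flex: every call opens the same sorted query on the server, jumps to the first row of the requested page, and sends back only that page of rows, so the server needs no memory of the previous call. Below is a minimal ABL sketch of that approach, assuming a Customer table with CustNum and Name fields (as in the Progress sports demo database); the procedure parameters and the temp-table are made-up names for illustration, not taken from the KB article.

DEFINE TEMP-TABLE ttCustomer NO-UNDO
    FIELD CustNum AS INTEGER
    FIELD Name    AS CHARACTER.

DEFINE INPUT  PARAMETER ipPage     AS INTEGER NO-UNDO.  /* 1-based page number, e.g. 7 */
DEFINE INPUT  PARAMETER ipPageSize AS INTEGER NO-UNDO.  /* rows per page, e.g. 100     */
DEFINE OUTPUT PARAMETER TABLE FOR ttCustomer.           /* the page handed back to Flex */

DEFINE VARIABLE iReturned AS INTEGER NO-UNDO.

DEFINE QUERY qCustomer FOR Customer SCROLLING.
OPEN QUERY qCustomer FOR EACH Customer NO-LOCK BY Customer.CustNum.

/* Position just before the first row of the requested page,
   so the next GET NEXT fetches that row (no bounds checking in this sketch). */
REPOSITION qCustomer TO ROW (ipPage - 1) * ipPageSize + 1.

DO WHILE iReturned < ipPageSize:
    GET NEXT qCustomer.
    IF NOT AVAILABLE Customer THEN LEAVE.  /* ran past the last record */
    CREATE ttCustomer.
    ASSIGN ttCustomer.CustNum = Customer.CustNum
           ttCustomer.Name    = Customer.Name.
    iReturned = iReturned + 1.
END.

CLOSE QUERY qCustomer.

Because the page number alone identifies the batch, each Flex click can simply pass ipPage = 7 in a fresh back-end call and get page 7 directly. If the table is sorted by a unique key, an alternative that scales better on very large tables is to have the client send back the last CustNum it received and open the query with WHERE Customer.CustNum > that value, so the server does not have to scroll past the earlier pages.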

Related

SharePoint view limitation

I have a document library in SharePoint Online that I keep adding documents to. SharePoint has a 5,000-item view limitation; the moment the library reaches that limit I can still upload documents, but they don't show up anywhere.
Eventually I end up creating a new view with a filter applied, and then the documents start showing up under the new view.
My question is: Is there a way to automatically create a view when the library reaches the 5,000-item limit and put the newly uploaded documents into the new view?
Yes, you can of course do this via MS Flow/workflows and server-side apps/scripts, but IMO it's not a good approach to the issue.
Have you indexed the columns? I just tested this on a document library with 20k documents and I'm able to filter. There are limitations you should look into (complex filtering); that's where compound indexes come in.
If you still have issues, I recommend you give the Highlighted Content web part a try. You can create custom search queries, and it looks similar to a document library if you configure the settings correctly. The only drawback of this approach is that there is a delay before search updates, from 15 minutes to 6 hours depending on how much data you have.

Cloudant/CouchDB pagination in search API - How to skip n records

I am building typical pagination that allows the user to click on a particular page number and view the results (similar to the Google search results view). I am using the Cloudant Search API for this. The Search API provides a limit option but no skip option. How can I skip n results if the user is on page 1 and clicks on page 4?
I can see that pagination is implemented using bookmarks. Does that mean I need to first get the bookmark for page 4 by sending three additional requests, one after another, to the Search API?
There are a couple of different ways of handling this - one is the one you already suggested, which is just to fetch the pages as needed to get the bookmarks. I'm not sure there are many alternatives for search results where we can't pre-calculate the results.
Another alternative, and this depends a bit on the details of what you are trying to do, is to create a view containing the data and use the keys to narrow down the view to the results you need. View outputs support the use of limit and skip, which would enable you to implement pagination.
There's also a good example of pagination in the docs: http://docs.couchdb.org/en/2.1.0/ddocs/views/pagination.html

Dojo DataGrid (8.5.3 UP1) Returning Blank Rows - based on Readers field

Trying out a Dojo DataGrid control on an alternate XPage (so as not to impact production) for an existing View, which utilizes Readers fields in the documents. I've got the REST service implemented (xe:viewItemFileService) and connected to the Dojo DataGrid just fine (from 8.5.3 UP1 controls).
I have two scenarios of user visibility (via Roles in the Readers field, assigned by NAB Group definition):
All documents visible (user A). User A can see all documents, everything works perfectly fine for this one.
User B can see some documents. The ViewPanel control works fine, but in the Dojo DataGrid it only has values for the documents User B should see; the remaining rows (the difference between the correctly visible count and the total document count) are populated with "..." (non-values).
Inspecting the REST service's output via the pathInfo yields only the correct documents for User B, which I take as a good sign and makes me think the Dojo DataGrid is what's misbehaving.
Actual Question:
How can I suppress the generation of the unnecessary rows?
I've tried to implement Marky Roden's approach, but got lost on how to control what the DataGrid looks at to generate the row count (he's talking about programmatic store definitions, whereas I'm using the xe:djxDataGrid control). The rowsPerPage attribute doesn't seem right, and I can't find an attribute on the xe:restService that makes sense for what I'm looking for.
Anyone know how to do this? I would love to get this to work. I've been loving the series by Brad Balassaitis and what XPages can do for us.
Setup:
Domino Server 8.5.3 UP1
NSF signed as Server ID
The grid gets the hint for the number of rows from ?ReadViewEntries, which reports the total number of entries, not just the number of documents User B can see. In any case, just romping through reader-protected views without designing for access speed has huge performance ramifications. If you can categorize the view by the combined reader/author fields and limit the grid to that category, both the performance problem and the empty rows will go away.
If you have multiple possible hits (username, role, group membership), you might want to use a REST service that returns data via some SSJS using a ViewNavigator.

Pagination in MarkLogic while using the Search API

I have around 5,300,000 documents in MarkLogic Server and I am building a simple search application. The user enters a search term, MarkLogic Server searches for that term in all the nodes of all the documents, and it returns the matching documents as the result. I have implemented custom paging, showing 10 results per page.
I am using the Search API for this:
import module namespace search="http://marklogic.com/appservices/search" at "/Marklogic/appservices/search/search.xqy";
declare variable $options:=
<options xmlns="http://marklogic.com/appservices/search">
<transform-results apply="raw"/>
</options>;
search:search($p, $options, $noRecFrom, 10)/search:result
where $p is the input from the user and $noRecFrom is the number indicating from which record to show results. For example, for page 1 $noRecFrom will be 1, for page 2 it will be 11, for page 3 it will be 21, and so on. For paging there are hyperlinks to go to the First, Next, Prev, and Last pages.
To calculate the total number of records returned I am using:
for $x in search:search($p, $options)
return $x//@total
The First, Next, and Prev hyperlinks work perfectly, but if someone clicks Last, the application stops responding and the query does not show any output. Is this due to the large number of documents in the database, or am I implementing it wrongly?
Is there an efficient way to paginate in MarkLogic (with search:search) so that the user can go to the Last page without a delay in the query result for such a large database?
The way you've implemented it, you're running the search repeatedly in your for loop, and that would indeed be slow.
Instead, you should be calculating a $start parameter based on the @total value and the number of documents per page, and passing that in as an argument (I think it's the third one) to search:search. For example, with 10 results per page and 5,327 total matches, the last page would start at record 5,321.
I would also recommend making sure you can run in unfiltered mode. There is good information about optimizing for fast pagination (indexes, etc) on the developer site; the idea is to resolve queries out of indexes to give very good, accurate unfiltered performance.
There is a tutorial on paginated search at http://developer.marklogic.com/learn/2006-09-paginated-search
Once you have resolved the issues mentioned by cwhit above, if you still want to get to the last page of data in a faster manner, you could make your code smart enough to reverse the sort order and pull the correct offset of records.
Here's another tip:
To get better insight into what MarkLogic is doing with search:search, call
search:get-default-options()
to see the starting point for common search applications.

SharePoint View Grouping - performance

If you add grouping to a list view, does it inherently improve performance when you navigate to the view in the SharePoint page? That is, if you use grouping, does SharePoint retrieve data when you click the [+] icon (using an Ajax call), or is the data already retrieved beforehand?
Many thanks.
When you use grouping, by default the documents in your groups are not loaded the first time. When you expand a group, it loads all of that group's documents.
After that, JavaScript just toggles the property between display:none and display:block and back again.
So in terms of performance, it loads all the documents the first time you expand a group, but it's quick.
This is the same for lists as well.
My experience is that using groupings loads all data at once. So if you have a list with 1000 items and you display a grouped view, you may experience very slow loading.
