Which one is better: LazyDataModel or LiveScroll? - jsf

I am using LazyDataModel to display data table records with pagination, filtering, and sorting. There can be around 2,500 records at most. I display 10 records per page, so a customer has to visit up to 250 pages if they don't know the search term. Now the customer doesn't want to visit all the pages; they want an implementation where they can do all of this from a single page.
The other option that comes to mind is live scroll, but when I tried to build a PoC I found that LazyDataModel and live scroll don't work together. So I created a demo page using live scroll, independent of LazyDataModel. I really like live scrolling when it comes to filtering records, as it is much faster. The only downside is having to drill down all the way to the end.
I have following questions:
How does live scroll work internally?
Does live scroll load all the data up front and serve it from the heap or a cache (e.g. with scrollRows="20")?
If live scroll does the job better, why use LazyDataModel at all?
Don't you think pagination is a thing of the past?
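For context on the first two questions: server-side, both pagination and live scroll reduce to the same contract. The client asks for a window of rows (an offset and a chunk size) and the backend returns only that slice, so neither mode needs the full dataset in memory at once. A minimal, framework-free sketch of that contract (the function name is hypothetical, not the PrimeFaces API):

```python
# Minimal sketch of the windowed-fetch contract behind both
# pagination and live scroll: the client sends an offset and a
# chunk size; the server returns only that slice plus the total.

def fetch_window(records, first, row_count):
    """Return one window of rows and the total count.

    `records` stands in for the data source; a real lazy backend
    would translate (first, row_count) into e.g. SQL LIMIT/OFFSET
    instead of slicing an in-memory list.
    """
    return records[first:first + row_count], len(records)

data = list(range(2500))

# Pagination: page 3 of size 10 -> first = 20.
page, total = fetch_window(data, 20, 10)

# Live scroll with scrollRows="20": each scroll event asks for
# the next 20 rows, i.e. first advances by 20 per request.
chunk1, _ = fetch_window(data, 0, 20)
chunk2, _ = fetch_window(data, 20, 20)
```

The difference is purely in the UI: pagination replaces the visible window on each click, while live scroll appends each chunk as the user nears the bottom of the viewport.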

Related

Sharepoint List View Threshold and Item Limit setting

I have read a lot about the list view threshold. I have indexed the appropriate meta data columns to help. I have placed mandatory web part filters on web pages. I think I am going to be able to control the view pretty well.
If a user tries to get an "All Items View", will the "Item Limit" in the view settings keep the view from exceeding the threshold? I could not find a straight or understandable answer.
Actually, I only know the answer for SharePoint 2010; it is not clear from your question which version you are using.
But no, the item limit in the view will not protect you from exceeding the threshold. The List View Threshold is a very, very complicated topic.
You should read this
https://support.office.com/en-us/article/manage-large-lists-and-libraries-in-sharepoint-b8588dae-9387-48c2-9248-c24122f07c59

XPages viewPanel expanding one twisty taking long time

I have a view categorized to one level with around 80,000+ documents, and the number is still increasing.
Initial loading of the view with all categories collapsed, with an Expand All/Collapse All pager control, works very fast, within a second.
But when I try to expand an individual category, just one by one, there is a delay of around 10 seconds. That is hugely slow for users.
Please help with this; is any fix available?
In 9.0.1 you can enable a new property that increases performance of categorized views.
See http://openntf.org/XSnippets.nsf/snippet.xsp?id=performant-view-navigation-for-notes-domino-9.0.1
You might want to revisit your UI pattern; categories and pagers don't match well. See http://www.wissel.net/blog/d6plinks/SHWL-7UDMQS and fix the parameters as Per suggested.

Dynamically load/populate data based on scrollbar handle position?

My PyQt application pulls data from third party API calls. The dataset returned usually contains in the neighborhood of hundreds of items. On occasion, the dataset returned contains in the tens of thousands of items. On those occasions, the interface is painfully slow to display - too slow to be useful.
To speed things up, I would like to load less of the data during the initial load. I would like to be able to populate the interface based on the scrollbar handle position. I would prefer that the scrollbar have the correct range as soon as the widget is displayed, but as the user scrolls, the data that they should be seeing is populated into the widget (a QTreeWidget in this case). This is to say that I'd rather the user didn't have to scroll to the bottom of the widget to load more data at the bottom & therefore change the range of the scroll bar.
I believe QSqlTableModel behaves this way out of the box, but because I'm not relying on SQL queries (and because some of the columns' data is calculated by the GUI), I don't believe I can use that module. Is this possible with QTreeWidget and without direct SQL calls?
There is built-in functionality for this in the Qt model-view framework: see QAbstractItemModel.canFetchMore() and QAbstractItemModel.fetchMore() here
Oh, I've just realised you aren't using the model-view framework but a stand-alone QTreeWidget instead. If you are dealing with large data and need this kind of functionality, switching to the model-view framework (e.g. a QTreeView with a custom model) may be the right thing to do.
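A framework-free sketch of the canFetchMore/fetchMore pattern the answer refers to (pure Python mirroring the Qt hook names; a real implementation would subclass QAbstractItemModel and wrap the insert in beginInsertRows/endInsertRows so attached views update their scrollbar range):

```python
class IncrementalModel:
    """Toy stand-in for a lazily-loading Qt item model.

    The view calls can_fetch_more() when the user scrolls near the
    end; if it returns True, the view calls fetch_more() to pull in
    the next batch. Qt's QAbstractItemModel uses the same two hooks
    (canFetchMore / fetchMore).
    """

    BATCH = 100

    def __init__(self, source):
        self._source = source  # full dataset (e.g. API results)
        self._rows = []        # rows currently exposed to the view

    def row_count(self):
        return len(self._rows)

    def can_fetch_more(self):
        return len(self._rows) < len(self._source)

    def fetch_more(self):
        start = len(self._rows)
        # In Qt, beginInsertRows/endInsertRows would bracket this.
        self._rows.extend(self._source[start:start + self.BATCH])

model = IncrementalModel(list(range(250)))
model.fetch_more()  # the view requests the first batch
```

With this shape, the widget only ever holds the rows fetched so far, which is what keeps the initial display fast for ten-thousand-item results.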

SharePoint loading time optimization and caching

We have a page in SharePoint that lists all the sites, the person who manages each site, their contact info, and the last modified date.
Currently, we are using a custom webpart that crawls through the sites and reads through the metadata, and then it displays all these in a list.
Opening this page takes about 10+ seconds.
We're looking at ways to cut this time to less than 3 seconds.
I'm thinking about some sort of timer job that caches the page, say every half hour, and when the page is requested, simply serves the cached version. The data in the page doesn't change that often, so staleness isn't really a big issue. Is this idea feasible? I'm fairly new to SharePoint, so what would be the steps to implement it?
Or if there are any other options/suggestions on how to reduce the load time, I'm all ears.
Here are some approaches that might work for you.
Extend your existing web part with a cache. The first user to visit the page will wait as long as with the existing solution, but they will fill the cache, so every subsequent request will be much faster:
http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.webpartpages.webpart.partcachewrite(v=office.15).aspx
Create a timer job that fills an extra SharePoint list with the fields you need, and render your web part from that data. Fetching the data from the list will be much faster than iterating over SPWeb or SPSite objects.
A lot of the data can already be fetched from the Search service, and you can extend the attributes the search engine crawls. Once the search attributes are extended, you can create a search-driven web part:
http://technet.microsoft.com/de-de/library/jj679900(v=office.15).aspx
Each of these solutions should work on SharePoint 2007/2010/2013.
If you need a quick win, then solution 1 is probably the best for you.
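The caching idea in solution 1 (the first visitor pays the cost of the crawl and fills a cache that later requests reuse) is language-agnostic. Here it is sketched in Python with a simple time-based expiry; in the actual web part this role is played by PartCacheWrite/PartCacheRead, and everything below (class and function names included) is a hypothetical simplification:

```python
import time

class TtlCache:
    """Tiny time-based cache: the first caller pays the full cost
    of building the data; everyone else gets the cached copy until
    it expires (30 minutes here, matching the timer-job idea)."""

    def __init__(self, ttl_seconds=1800):
        self._ttl = ttl_seconds
        self._value = None
        self._stored_at = None

    def get(self, build):
        now = time.monotonic()
        if self._value is None or now - self._stored_at > self._ttl:
            self._value = build()  # the slow crawl happens here
            self._stored_at = now
        return self._value

calls = []

def crawl_sites():
    calls.append(1)                # stands in for the 10+ second crawl
    return ["Site A", "Site B"]

cache = TtlCache()
first = cache.get(crawl_sites)     # slow: fills the cache
second = cache.get(crawl_sites)    # fast: served from the cache
```

The trade-off is that data can be up to one TTL stale, which the asker already accepted ("the data in the page itself doesn't change that often").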
Regards

Tracking the Scroll and managing huge data in EditorTreeGrid

We have a requirement to load around 4,000 records into two separate editor tree grids and highlight the differences in each record, after comparing values from a particular column in each tree. Everything is fine with a limited number of records, but at 4,000 records or more we have huge problems: the tree grid takes around 10 minutes to render, since that includes expanding all nodes, the calculations to construct the parent-child relations, and then the highlighting.
One solution I considered was an approach similar to Live Grid, but the highlighting logic needs all the records, as the third record in grid 'A' may match the 115th record in grid 'B', and Live Grid would not retain the previous selections when it brings in the next set of records.
Considering the above, what would be the best way of achieving this? Can I just keep adding new records to the store as I scroll down? I think it could be done by tracking the scroll position without using Live Grid, but I'm not sure how to achieve this, or even whether it's the right approach. Could anybody provide some sample code to add elements to the store when the user reaches the end of the vertical scroll in an EditorTreeGrid, or suggest a better way? My attempt to add a scroll listener somehow doesn't kick in.
Also, Live Grid uses a ListStore, whereas I use an EditorTreeGrid. How do I effectively populate the data into a TreeStore? I used to call getAllModels and populate the results into the TreeStore. Is that the right way to do it?
In the end we dumped the tree structure and overrode LiveGrid and LiveGridView to achieve this. LiveGrid does not load the complete data set into the UI; it tracks the scroll position and brings data in on demand.
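The core of the LiveGrid behaviour described here, tracking the scroll offset and fetching only the visible window, can be sketched independently of GXT (LiveGrid/LiveGridView are GXT's classes; the function below is a hypothetical simplification):

```python
def visible_window(scroll_px, viewport_px, row_height_px, total_rows):
    """Map a scrollbar position to the range of rows to fetch.

    A live grid sizes its scrollbar for `total_rows` up front, then
    on every scroll event converts the pixel offset into a row range
    and requests only those rows from the server/store.
    """
    first = scroll_px // row_height_px
    count = -(-viewport_px // row_height_px)  # ceiling division
    last = min(first + count, total_rows)
    return first, last

# 4000 rows of 25px each in a 500px-tall viewport, scrolled to 2875px:
first, last = visible_window(2875, 500, 25, 4000)
```

Note that this only solves rendering. The cross-grid highlighting comparison still needs access to both full datasets (record 3 in grid 'A' may match record 115 in grid 'B'), so that comparison has to happen outside the windowed view, which is consistent with the asker's observation.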
