Using dataCache property in view control in XPages

In the data source of a view control there is a dataCache property with options of Full, ID and NoData. From some sources I gather that:
Full - The entire view is persisted
ID - Minimal scalar data: ID and position. Access to column values during a POST is not available
NoData - Enough said – the entire view needs to be reconstructed
But how exactly does this property affect the performance of the XPage? Which methods/functionality can I use with each of these options? What is the suitability of each option?

I haven't tested, but I would presume the following are true. The more you persist in memory, the quicker it is to restore the view for expand/collapse etc. However, the more users and the bigger the view (number of columns rather than number of documents, because not all documents will get cached), the greater the risk of out-of-memory issues. The lack of access to column values during a POST means you may have problems using getColumnValues() in SSJS against the view.
XPages is pretty quick, so unless you have specific performance issues, the default should be sufficient.
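For reference, the property is set on the view's data source in the XPage source. A minimal sketch, assuming a hypothetical view name:
<xp:dominoView var="view1" viewName="AllDocuments" dataCache="id" />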

Related

How to limit the number of items per section in an NSFetchedResultsController?

For example, your FRC fetches a news feed and groups the articles into sections by date of publication.
And then you want to limit each section to at most 10 articles.
One option I’ve considered is having separate NSFetchedResultsControllers for each day and setting a fetch limit. But that seems unnecessary as the UI only really needs a single FRC (not to mention that the number of days is unbounded).
Edit:
I’m using a diffable data source snapshot.
If it were me, I'd leave the NSFetchedResultsController alone for this and handle it in the table view. Implement tableView(_:numberOfRowsInSection:) so that it never returns a value greater than 10. Then the table will never ask for more than 10 rows in a section, and your UI will be as you want.
Since I’m using a diffable data source snapshot, I am able to take the snapshot I receive in the FRC delegate callback and use it to create a new snapshot, keeping only the first K items in each section.
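The trimming step itself is simple. Below is a sketch of that logic in Java, purely to illustrate the idea; the real implementation would rebuild an NSDiffableDataSourceSnapshot in Swift, appending only the kept items per section:

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class SectionTrimmer {
    // Keep only the first k items of every section, preserving section order,
    // mirroring how a trimmed snapshot would be rebuilt from the FRC's snapshot.
    static <S, T> Map<S, List<T>> trim(Map<S, List<T>> sections, int k) {
        Map<S, List<T>> trimmed = new LinkedHashMap<>();
        for (Map.Entry<S, List<T>> e : sections.entrySet()) {
            List<T> items = e.getValue();
            trimmed.put(e.getKey(), items.subList(0, Math.min(k, items.size())));
        }
        return trimmed;
    }
}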

Pagination after reindex in Azure Search

I am new to Azure Search Service and I am not sure I understand one important thing about it.
Let's imagine the situation where I, as a client, am scrolling down through the results of my search query
"New Y". I have 1000 elements, and every page contains 10 of them. But during my scrolling a reindex operation starts, and some elements change their position because of updates in the data source (an Azure Table).
Will I see the next pages during my scrolling after the reindex, probably with some duplicated data, or will it still be the old "snapshot" of data I was scrolling through before?
You'll see the changes as you execute subsequent requests. To Azure Search each request is independent and it represents a new search (caching aside), which for paging scenarios just happens to have a different "skip" number.
This means that if your data is changing you might see an item more than once (if it moves across pages due to changes) or even skip one (if it moves from a page you didn't see yet to a page you already saw).
There's no way to get a strictly consistent view of search matches outside of a single result set. If you need to approximate this behavior you can request a larger page (using "top"), cache the results and present them in chunks. We find that in practice this is rarely needed for most search scenarios, but if search is backing a part of an app that needs consistency you might need to do something along those lines.
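As a sketch of the larger-page approach in Java (the service name, index name and API key handling are placeholders; $top and $skip are the documented paging parameters):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SearchPager {
    public static void main(String[] args) throws Exception {
        // Fetch one large page (here 50 results) in a single request, then
        // present it to the user in chunks of 10 without re-querying.
        String url = "https://myservice.search.windows.net/indexes/myindex/docs"
                + "?search=New%20Y&$top=50&$skip=0&api-version=2020-06-30";
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("api-key", System.getenv("SEARCH_API_KEY"))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // Cache response.body() and slice it into 10-item pages client-side;
        // only issue a new request (with $skip=50) when the cache runs out.
        System.out.println(response.body());
    }
}

Only when the cached page is exhausted do you issue the next request, accepting that items may have moved in the meantime.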

Loading of view takes too long in XPages

I have 4 databases and they contain more than 200,000 documents. A viewPanel which shows all documents of a database does not load correctly. It fails with an error after a little bit of waiting. If the view does not contain a lot of documents, no error occurs.
I could not find a solution for this situation :(
I added this line to the application properties, but it did not solve my problem:
xsp.domino.view.navigator=ByNoteId
Regards
Cumhur Ata
There are a number of performance "sins" you can commit on Domino. Unfortunately Domino is too forgiving and somehow still works even if you do them. The typical sins:
Using @Yesterday, @Today, @Now or @Tomorrow in a view selection formula or a sorted column in a view. I wrote an article about your options to mitigate that
Having code that does a view.refresh before opening a page
Using reader fields and accessing a view that is not categorized by that reader field. This hits only users who can see just a few documents. Check this article for possible remedies
Not having a fast temp location for view rebuilds. Typical errors are not enough disk I/O or having your transaction log on the same channel as your databases. Make sure you have a high-performance server
For Windows servers: not taking care of disk fragmentation - includes links to performance troubleshooting
Not using ODS51/52 and not having compression for data and design active. It takes a simple command to fix
That's what I can think of off the top of my head. Loading 200k documents into a panel in one go doesn't look like a good UX approach; paginate it instead (see the sketch below).
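As a sketch of the pagination idea (the view name and page size are made up), a ViewNavigator in Java can read one page of entries at a time instead of loading everything:

import lotus.domino.*;

public class ViewPager {
    // Read a single page of entries with a ViewNavigator instead of
    // pushing all documents into the panel at once.
    public static void printPage(Database db, int pageSize) throws NotesException {
        View view = db.getView("AllDocuments");
        ViewNavigator nav = view.createViewNav();
        nav.setBufferMaxEntries(pageSize); // read ahead one page at a time
        ViewEntry entry = nav.getFirst();
        int count = 0;
        while (entry != null && count < pageSize) {
            System.out.println(entry.getColumnValues());
            ViewEntry next = nav.getNext(entry);
            entry.recycle(); // free the backend handle promptly
            entry = next;
            count++;
        }
    }
}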

Most efficient way to create couchdb views

My CouchDB view indexes are being created more slowly than I would like. Writing the documents is not such a problem, but the users can edit them offline and then bulk update, which seems to slow things right down.
This answer helped, but I was just wondering whether it is better to separate out various views into different design documents (eg. 1) or to store them all in one (eg. 2).
Eg. 1
_design/posts/_view/id
_design/comments/_view/id
_design/tags/_view/id
Eg. 2
_design/webresources/_view/_id?key="posts"
_design/webresources/_view/_id?key="comments"
_design/webresources/_view/_id?key="tags"
*This example is just for illustration purposes. I am only concerned with the time it takes to build the indexes.
CouchDB views are updated and built at read time, so you will gain better performance if you read often. You can read the view every time a document updates to keep it hot*.
Or maybe listen to the changes feed and keep a track of documents updated. Once they reach a certain threshold value read a view.
Another option is to use the stale parameter.
If stale=ok is set, CouchDB will not refresh the view even if it is stale; the benefit is improved query latency. If stale=update_after is set, CouchDB will update the view after the stale result is returned.
Every design document is a separate Erlang process, so separating your views across different design documents will cause them to be built concurrently. However, each view will still be built in a blocking manner. That is, two views in different design documents can start updating at the same time, but the time it takes to update the individual views will be the same as if they were in the same design document.
*You don't necessarily have to care about the result. The goal here is to trick CouchDB into updating the view, so you can fire off a request in a separate async process and be done with it.
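To illustrate the stale parameter and the warming trick in Java (the database, design document and view names follow the examples above; the endpoint is a placeholder):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ViewWarmer {
    static final HttpClient CLIENT = HttpClient.newHttpClient();
    static final String BASE = "http://localhost:5984/webresources/_design/posts/_view/id";

    // Query without waiting for a rebuild; CouchDB returns the stale index
    // immediately and refreshes it afterwards.
    static String queryStale() throws Exception {
        HttpRequest req = HttpRequest.newBuilder(URI.create(BASE + "?stale=update_after")).build();
        return CLIENT.send(req, HttpResponse.BodyHandlers.ofString()).body();
    }

    // "Warm" the index after a bulk update: fire a cheap read and discard
    // the body so the next real query finds the view already built.
    static void warm() {
        HttpRequest req = HttpRequest.newBuilder(URI.create(BASE + "?limit=1")).build();
        CLIENT.sendAsync(req, HttpResponse.BodyHandlers.discarding());
    }
}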

Can you find the logical size of a single NotesDocument in a DAOS-enabled database or an uploaded file size?

I'm doing some feasibility work for an XPages application. One of the aspects is checking the amount of space used by users.
The database will be DAOS-enabled to minimise the size of the NSF. Is it possible to identify the logical size of a NotesDocument that has a DAOSed attachment? I know I can find the logical size of the overall database, but I need to break it down by user.
LotusScript or Java would be acceptable options.
The other option is to capture file sizes at upload time and store that information against the user. Is it possible to identify the attachment size at the point of upload and deletion? This would need to be captured before the attachment was moved to the DAOS store.
Paul,
As far as I know, from the client's point of view you can't see whether a database/document has been DAOS'ed or not. So this means that using LotusScript against the document would report the document size as if the attachment(s) were in the document. I haven't tested it myself to give you a 100% guarantee, but you could test it yourself very easily by enabling a database for DAOS and then creating 10 documents, all with the exact same attachment. If the documents report a size of around the attachment size when accessed via LotusScript, you will have your answer!
You could check the logical size of the database before and after saving the document. But unfortunately, you would have to guard the critical section of this code with a semaphore or some other mechanism that ensures only one instance can run at a time; otherwise two simultaneous saves would give you bad results.
Build a view with a column whose formula is @DocLength or @Sum(@AttachmentLengths). This will show the logical size of the docs as if DAOS was not active.
/Newbs
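As a sketch of the capture-at-upload option from the question (the "Body" item name is an assumption), Java can sum each attachment's logical size before DAOS externalizes it:

import lotus.domino.*;
import java.util.Vector;

public class AttachmentSizer {
    // Sum the logical size of all attachments on a document, e.g. in a save
    // event, and store it against the user before DAOS moves the files out.
    public static long attachmentBytes(Document doc) throws NotesException {
        long total = 0;
        RichTextItem body = (RichTextItem) doc.getFirstItem("Body");
        if (body != null) {
            Vector<?> objects = body.getEmbeddedObjects();
            for (Object o : objects) {
                EmbeddedObject eo = (EmbeddedObject) o;
                if (eo.getType() == EmbeddedObject.EMBED_ATTACHMENT) {
                    total += eo.getFileSize(); // logical size, DAOS or not
                }
            }
        }
        return total;
    }
}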
