I have several NotesViewEntryCollections that I want to merge into one collection and then sort on date. All the collections are gathered from the same view, so there won't be a conversion problem.
I have tried to Google this problem, but I can't seem to find any good solutions besides writing a bunch of for-loops.
Thanks in advance!
Assuming that you're using LotusScript and a recent version of Notes (8.0 or later), you can use the Merge method. The examples provided in the Designer help should get you started. Be aware of some caveats when using NotesViewEntryCollections, as reported by IBM.
A NotesViewEntryCollection gives you a sorted collection, and the Merge method will also give you a unique, sorted list of documents, unlike a regular NotesDocumentCollection, which is just an unsorted bucket.
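For illustration, a minimal LotusScript sketch, assuming both collections come from the same date-sorted view (the view name and keys here are made up):

Dim session As New NotesSession
Dim db As NotesDatabase
Dim view As NotesView
Dim colA As NotesViewEntryCollection
Dim colB As NotesViewEntryCollection

Set db = session.CurrentDatabase
Set view = db.GetView("ByDate") ' hypothetical view sorted on a date column
Set colA = view.GetAllEntriesByKey("KeyA", True)
Set colB = view.GetAllEntriesByKey("KeyB", True)

Call colA.Merge(colB) ' colA now holds the union, deduplicated, in view (date) order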
I found out recently that if you create a NotesViewEntryCollection from one view, you can only add entries that exist in that view. So you can't combine entries from two different views.
A possible way around this would be to use a java.util.TreeMap and push the entries into it with the date as the key. This may work, but you may need to convert the NotesViewEntry objects to your own non-Notes objects before adding them; this will definitely be the case if you want to store them in a managed bean of session or application scope. No matter how you store them, a TreeMap will carry a performance hit if you're dealing with a lot of entries.
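A minimal Java sketch of that idea (the holder class and its fields are assumptions about what you need from each entry):

import java.util.Date;
import java.util.TreeMap;

public class MergedEntries {

    // Plain holder so nothing keeps a reference to a recyclable Notes object.
    static class EntryHolder {
        final String noteId;
        final String title;
        EntryHolder(String noteId, String title) {
            this.noteId = noteId;
            this.title = title;
        }
    }

    // TreeMap keeps its keys sorted, so iterating it yields date order.
    private final TreeMap<Date, EntryHolder> byDate = new TreeMap<>();

    // Call once per NotesViewEntry from each collection you want to merge.
    public void add(Date entryDate, String noteId, String title) {
        byDate.put(entryDate, new EntryHolder(noteId, title));
    }

    public Iterable<EntryHolder> inDateOrder() {
        return byDate.values();
    }
}

Note that two entries with exactly the same date key will collide; if that can happen, use a TreeMap<Date, List<EntryHolder>> or add a tiebreaker to the key.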
If you are using Notes 8.0 or greater, there is a Merge method you can call to merge two collections together. Otherwise, you are correct that you'd have to loop through each collection and call AddEntry to add each entry one at a time.
This doesn't directly answer your question, but it might be possible to move all the documents into a (temporary) folder. The folder can then take care of the sorting and merging.
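A rough LotusScript sketch of that idea, assuming a folder named "MergeScratch" whose first column sorts on date (names and keys are made up, and db/view are set up as in the earlier example):

Dim colA As NotesDocumentCollection
Dim colB As NotesDocumentCollection
Dim mergedView As NotesView

Set colA = view.GetAllDocumentsByKey("KeyA", True)
Set colB = view.GetAllDocumentsByKey("KeyB", True)
Call colA.PutAllInFolder("MergeScratch")
Call colB.PutAllInFolder("MergeScratch")
Set mergedView = db.GetView("MergeScratch") ' folders can be read back like views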
I am still debating which way to go, and whether to store certain information in its own doc. For example, a customer can have addresses, where each address would be its own doc, and the customer doc would store an array of reference keys under addresses. The benefit would be that I could update these docs directly by key value, versus having to get the customer doc first, find the array index of the address, and then either modify the whole doc or use the sub-document API to replace the content of the array at that index.
Where I am stuck is how to retrieve those referenced sub-docs. Is N1QL the only way to go, or does the KV API offer a way to do this, short of retrieving the whole customer doc, looping through the address array, and retrieving each referenced doc that way? I know Ottoman offers something like that, but I am having an issue with the latest version of SDK 2.6 and Ottoman, as it's not very well maintained. Hopefully someone can share some insight on what the best way is and why.
If you want to rely on key/value, then you'll need to do the multiple lookups as you've described. I'm not very familiar with Ottoman: it might do this for you, but behind the scenes it will still be multiple key/value operations and/or N1QL.
With N1QL, you can perform JOINs, but again, behind the scenes it's going to eventually be pulling documents out by key/value. It just does those extra steps for you. Direct key/value is always going to be the fastest route.
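As a rough sketch of the pure key/value route with the Node.js SDK 2.x you mentioned (the bucket name, key formats, and field names are assumptions):

// Fetch only the address key array via the sub-document API, then
// batch-fetch the referenced docs with plain KV gets.
const couchbase = require('couchbase');
const cluster = new couchbase.Cluster('couchbase://localhost');
const bucket = cluster.openBucket('customers');

bucket.lookupIn('customer::123')
  .get('addresses')
  .execute(function (err, res) {
    if (err) throw err;
    const addressKeys = res.content('addresses'); // e.g. ['address::1', 'address::2']

    // getMulti issues the KV gets in parallel.
    bucket.getMulti(addressKeys, function (numFailed, results) {
      addressKeys.forEach(function (key) {
        if (!results[key].error) {
          console.log(key, results[key].value);
        }
      });
    });
  });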
If you are still in the process of deciding whether to split the data among multiple documents or "denormalize" the data into a single doc, one thing you should think about is how often you're going to access customer+addresses together and how often you're going to access them separately. If you're reading/writing customer+address together often, consider putting them in one document. Otherwise, consider putting them in multiple documents.
The third option is to store it in both places, or rather "cache" the address data in the customer document. This is tricky, because it could get out of sync if you're not careful. So make sure it's worth it before you go down that road.
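To make the trade-off concrete, here are the two layouts side by side (keys and field names are illustrative only; comments added for readability):

// Split: the customer doc holds only reference keys.
// key "customer::123"
{ "name": "Acme", "addresses": ["address::1", "address::2"] }
// key "address::1"
{ "street": "1 Main St", "city": "Springfield" }

// Denormalized: one doc, one read/write, but the whole doc is
// rewritten whenever an address changes.
// key "customer::123"
{ "name": "Acme", "addresses": [ { "street": "1 Main St", "city": "Springfield" } ] }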
I want to build a type-ahead function, but I need an alternative to the getAllEntriesByKey method, because the initial data collection seems to be too large for acceptable performance.
I would rather use the getEntryByKey method and then collect the next X documents in the view.
Is something like that possible? Just jump to a position in a view (matching a specified query) and collect the next X documents?
So far I have written most of it in SSJS.
You can use a combination of NotesView.GetEntryByKey and NotesView.CreateViewNavFrom. Note, however, that this means you will access the view twice, so I do not know whether you gain any performance improvement here.
An example (LotusScript) can be found here:
http://lpar.ath0.com/2011/09/19/notesviewentrycollection-vs-notesviewnavigator/
The LotusScript can easily be transformed into SSJS. I have used something similar before; I could write a blog post about it.
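A rough SSJS sketch of the same pattern (the view name, column index, and result limit are assumptions):

// Jump to the first entry matching the typed prefix, then walk
// the next maxResults entries with a view navigator.
var view:NotesView = database.getView("LookupView");
var results = [];
var entry:NotesViewEntry = view.getEntryByKey(searchString, false); // false = partial match
if (entry != null) {
    var nav:NotesViewNavigator = view.createViewNavFrom(entry);
    var ve:NotesViewEntry = nav.getFirst();
    var maxResults = 10;
    while (ve != null && results.length < maxResults) {
        results.push(ve.getColumnValues().elementAt(0)); // first sorted column
        var nextVe = nav.getNext(ve);
        ve.recycle();
        ve = nextVe;
    }
}
return results;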
I am using MarkLogic's search functionality to create a search page. As of right now, I'm running an XQuery to get search results through search:search. As a bare-bones example, see this code:
xquery version "1.0-ml";
import module namespace search = "http://marklogic.com/appservices/search"
at "/MarkLogic/appservices/search/search.xqy";
search:search('test',
<options xmlns='http://marklogic.com/appservices/search'></options>)
This searches all content in the database, which is fine in many cases. In other cases, I restrict the search to collections with cts:collection-query. The collections serve as great contexts for my searches.
Now, I would like to limit my search results based on a relationship of data in a "main" document. This "main" document has all the relationships in an object model. If that object model has a reference to a document, I want that document included in the search. Essentially, the "main"/model document is the context of the search.
I was trying to brainstorm some ideas about the best way to do this. Here's what I've come up with thus far, but I was hoping someone more familiar with MarkLogic (I've only been working with it for 6 months) could lead me in a good direction:
1. Add all documents referenced in the model document to a unique collection. Then search based on that collection. However, the collections would have to be updated as the model changed.
2. Load the model document into my code, get a list of all the references, and add them to a query by cts:document-query (or the like).
3. Restructure my concept of a "model" somehow in my XML documents.
Thanks for any input or suggestions.
I would start with (2) and see if the performance is good enough. That will depend on your use-case, but I expect it should be fine for thousands or even hundreds of thousands of references.
Be sure to use a single-term cts:document-query($list-of-references). That will be faster than cts:or-query(for $ref in $list-of-references return cts:document-query($ref)), because the index lookup can be a single pass instead of N separate lookups.
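A sketch of option (2), wiring the reference list into search:search via the additional-query option (the model document URI and the XPath to its references are assumptions about your structure):

xquery version "1.0-ml";
import module namespace search = "http://marklogic.com/appservices/search"
  at "/MarkLogic/appservices/search/search.xqy";

(: Gather the referenced URIs from the model document, then constrain
   the search to exactly those documents with one cts:document-query. :)
let $refs :=
  for $ref in fn:doc("/models/main.xml")//reference/@uri
  return fn:string($ref)
return
  search:search('test',
    <options xmlns="http://marklogic.com/appservices/search">
      <additional-query>{cts:document-query($refs)}</additional-query>
    </options>)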
All of these ideas would work fine. Deciding which to use depends on particulars of your application, such as how often the main document changes (and whether you are in control of it) and how hard it would be to remodel your XML.
Another thing to consider: you can set a trigger on document updates that performs the collection changes automatically.
-David Lee
Good day, everyone.
What is the effect of "recache" if I use it when updating data?
This is what I am trying to do, but I need to know what will happen if I replace the "nocache" with "recache".
I will explain the idea:
In this text box, the formula takes the last number in my database and adds 1. But I figured out that there is a problem with this: if multiple users are using the form, there is a possibility that the number will be duplicated.

@If(@IsNewDoc; @Elements(@DbColumn("":"nocache"; @DbName; "GPA"; 1)) + 1; @Return(GPnum))

My thought was to use @DbLookup to find out whether my number is duplicated, but I couldn't make it work.
Recache will not help you avoid duplicates.
You are trying to increment a counter in Lotus Notes to create a unique sequential identifier for documents. This is a problem that has been discussed many times, by many people, for at least 20 years. You can find good information here on StackOverflow and in various other forums, blogs, and articles. The approaches that work are:
1. Store the last counter value in a config doc, and use document locking to assure that you don't have two users accessing and updating it at the same time (a minimal sketch follows below).
2. Do not set the counter variable directly in user code. Write your code to put a "pending" value in the field, and rely on a scheduled or triggered background agent that runs on only one server to set the final value. (Since the Agent Manager guarantees that only one agent can run at a time in one database, you will not have conflicts.)
3. Don't use a sequential counter for your identifier at all. Use the @Unique function instead; documents will have a unique code instead of a unique number.
Please see this answer, and this answer, and this article.
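Here is that minimal LotusScript sketch of the first approach (the view and field names are made up, and document locking must be enabled on the database):

Dim session As New NotesSession
Dim db As NotesDatabase
Dim cfg As NotesDocument
Dim nextNum As Long

Set db = session.CurrentDatabase
Set cfg = db.GetView("(ConfigLookup)").GetFirstDocument

If cfg.Lock() Then ' blocks other users from incrementing at the same time
    nextNum = cfg.GetItemValue("LastNumber")(0) + 1
    Call cfg.ReplaceItemValue("LastNumber", nextNum)
    Call cfg.Save(True, False)
    Call cfg.UnLock()
End If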
I would like to know the best practices that you follow when it comes to accessing SharePoint list items / document libraries using the object model. To start, let me share a few things I have found:
Limit the number of items per container to 2,000 items.
Use the ProcessBatchData method of SPWeb to do updates/inserts of large numbers of items.
To completely answer your question would require a full blog post. There are several of these out there on the IntraWebs already.
Here are a few of the major points:
Avoid iterating through the entire list unless you need to see every item
If you do iterate through the list, use a foreach loop instead of a for loop
In all other cases use an SPQuery or an SPSiteDataQuery
Access columns by the internal name or the field ID
You should also take a look at Common Coding Issues When Using the SharePoint Object Model as it has some examples on how to avoid serious performance problems.
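For example, a minimal C# sketch of the SPQuery advice above (the site URL, list name, and field names are assumptions):

using System;
using Microsoft.SharePoint;

class QueryDemo
{
    static void Main()
    {
        using (SPSite site = new SPSite("http://server/sites/demo"))
        using (SPWeb web = site.OpenWeb())
        {
            SPList list = web.Lists["Documents"];

            // Ask the server for only the rows and columns you need,
            // instead of iterating the whole list.
            SPQuery query = new SPQuery();
            query.Query =
                "<Where><Eq><FieldRef Name='Status'/>" +
                "<Value Type='Text'>Open</Value></Eq></Where>";
            query.ViewFields = "<FieldRef Name='Title'/><FieldRef Name='Status'/>";
            query.RowLimit = 100;

            SPListItemCollection items = list.GetItems(query);
            foreach (SPListItem item in items) // foreach, not an indexed for loop
            {
                Console.WriteLine(item["Title"]);
            }
        }
    }
}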