Get view index sizes programmatically - lotus-notes

I'm familiar with the "Manage Views" tool in Domino Administrator, and wondered if anyone had come up with a way to access the view index size programmatically. Ideally I'd use LotusScript, but any other method would be great too.
I'm hoping to analyze which views are actually used across a large number of databases, without having to extract the info manually.

AFAIK this is not possible in LotusScript, but the index size can be retrieved with the C API function NIFGetCollectionData.
You need to open the database with NSFDbOpen, get the view with NIFFindView, open the view with NIFOpenCollection and finally get the size with NIFGetCollectionData. And of course you need to call NIFCloseCollection and NSFDbClose to clean up.

Maybe there is an easier way, but you'll probably have to develop the necessary tooling yourself. A small quest in 6 steps (a scripted version is sketched after the list):
open the log.nsf database for your server
open the view Usage/by Size
open a usage document for a fairly large database, e.g. mail/yourmail.nsf
the document shows all views and their (index) sizes
open the Document Properties box (Alt-Enter)
the field AllViewInfo (RichText) contains that data
Downsides:
AFAIK there is no ready-made LotusScript library for log.nsf.
those sizes are collected during the night, so they're not up to date
going through a rich-text's data can be daunting
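If you'd rather script those six steps than click through them, here is a minimal sketch using the Notes COM classes from Python (pywin32). It assumes a locally installed Notes client, that your server's log.nsf uses the standard template, and that the view and field names match the steps above; the cascaded view name and column layout may need adjusting to your log.nsf design.

    # Sketch: read the nightly view-size data from log.nsf via the Notes COM API.
    # Requires pywin32 and a local Notes client; server name and password are placeholders.
    import win32com.client

    session = win32com.client.Dispatch("Lotus.NotesSession")
    session.Initialize("your-notes-password")                # placeholder password

    log_db = session.GetDatabase("YourServer/YourOrg", "log.nsf")   # placeholder server
    usage_view = log_db.GetView("Usage\\by Size")            # cascaded design name; adjust to your log.nsf

    doc = usage_view.GetFirstDocument()
    while doc is not None:
        db_path = doc.ColumnValues[0]                        # first view column; typically the database
        info = doc.GetFirstItem("AllViewInfo")               # rich text item holding the per-view sizes
        if info is not None:
            print(db_path, info.Text[:200])                  # .Text flattens the rich text to plain text
        doc = usage_view.GetNextDocument(doc)

You would still have to parse the flattened AllViewInfo text yourself, which is the "daunting" part mentioned above.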
HTH

Sharepoint Folder listing takes time to load

I have around 1000 folders. When I select a document and click 'Move To', the folder listing window takes a long time to load before showing the data. How can we speed up the loading?
The first rule of SharePoint: you do not create folders!
I know it's too late for that now, but here is what you should do to fix this situation and avoid a bigger catastrophe:
create views that remove the need to drill down into sub-folders
views are your best friend in this scenario; you can improve loading times with better-tailored views combined with indexed columns
employ metadata; this goes hand in hand with tailored views
Final step:
Once you have deployed tailored views that handle navigation to your data, start removing the folders from the document library; from then on, users will find their information through views and metadata.
You should not use document libraries the way you use a hard disk, creating folders to organize everything. Store the documents in the library and provide views to help users find their information: views filter out the unwanted data and present what you want to see, while metadata helps you categorize, classify, and archive information. When all of this is combined, SharePoint indexes the data and serves it up quickly, and users who have been taught to use views will never wait through long loading times again!

Extract multiple Cognos report definitions

In COGNOS is there a way to get the definitions (filters, selected fields) from a number of reports in a folder?
I've inherited around 500 reports defined in a folder and they all need to be checked and fixed as they have business errors (not technical errors). If it was possible to get all their definitions in a single extract that would save an enormous amount of time having to click multiple times to get that information from each report one by one.
In ACCESS this can be done with VBA (for query definitions), but I'm not sure if there is a scripting language that can be used with COGNOS to achieve a similar result.
It sounds like you may want to "validate" each of these 500 reports (effectively equivalent to pressing the "validate" button on each individual report if it was open in the authoring studio).
Validation will ensure that a report specification XML is still syntactically correct, references a package which is still present in the content store, references only query items from that package which still exist, generates valid SQL against the underlying datasource, etc.
If that's what you're looking for, an easy way to do batch validation for all 500 reports would be to use MotioPI (it's a free admin tool for Cognos). Here's a short article which walks you through the process:
http://info.motio.com/Blog/bid/70357/Batch-Validation-of-Cognos-Reports
If you're wanting to retrieve the actual report specification (XML) for each of these 500 objects, then you'd need to write a program which utilizes the Cognos SDK to retrieve the specification XML from each of the 500 report objects. After that, you'd need to add logic which examines each of these 500 XML documents, looking for whatever it is you're looking for.
We solved this by exporting the XML of the reports using a SQL query on the content store.
The output is processed with a Python script to convert XML to table layout in CSV format.
This CSV file can easily be imported into Excel.
You might want to process the report XML directly in a SQL query with the xmltable function. In our situation this turned out to be a heavy process we didn't want to burden the content store database with, but for a small set of reports it works fine.
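As a rough illustration of the Python step described above, here is a sketch that walks a folder of exported report-spec XML files and writes one CSV row per filter and data item. The element names (filterExpression, detailFilter, dataItem) and the folder layout are assumptions based on typical Cognos report specifications, which are namespaced and version-dependent, so adapt the tag matching to your own exports.

    # Sketch: flatten exported Cognos report specification XML into a CSV.
    # Element names are assumptions; the real spec is namespaced and varies by version.
    import csv
    import glob
    import xml.etree.ElementTree as ET

    def local(tag):
        """Strip the XML namespace so matching is version-agnostic."""
        return tag.rsplit("}", 1)[-1]

    with open("report_definitions.csv", "w", newline="", encoding="utf-8") as out:
        writer = csv.writer(out)
        writer.writerow(["report_file", "element_type", "name", "expression"])
        for path in glob.glob("exported_specs/*.xml"):       # folder of exported specs (assumption)
            root = ET.parse(path).getroot()
            for elem in root.iter():
                kind = local(elem.tag)
                if kind in ("filterExpression", "detailFilter", "dataItem"):
                    name = elem.get("name", "")
                    expr = "".join(elem.itertext()).strip()
                    writer.writerow([path, kind, name, expr])

The resulting CSV opens directly in Excel, which makes it easy to scan 500 reports for the business errors you're hunting.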

Download Webi report from Excel

With the newly released Webi there's no way to manipulate reports with VBA as there was in the Deski era.
I'd like to know if there's a way for me to click a button with parameters in Excel sheet and get a report from the server?
I've been thinking of using the RESTful Web-services but it seems that there is a performance problem.
I also considered using a JAVA app in the middle using the SDK but it's not really satisfying as I add one layer.
Do you know if there's another way to download a Webi report from and to Excel?
For this type of requirement, you'd normally use the OpenDocument feature. There is one thing that it won't do however, at least not for Webi documents, and that is deliver the output in Excel format (HTML and PDF are the two possible formats for Webi). In all fairness, the export to Excel option is only about two or three clicks away, but I can understand that this wouldn't be an ideal solution.
Another option is the Java SDK, which I would not recommend, as the ReBEAN SDK (the part of the Java SDK you need to interface with Webi documents) is deprecated and replaced by the REST SDK.
The REST SDK would be the way to go if the OpenDocument feature is not sufficient. Keep in mind that this would involve quite a few steps, each time sending a command to the WACS server and then decoding the answer. The steps would be:
Authenticate and get a logon token
Refresh the document (if necessary pass prompt values)
Export the document to Excel
Close the document
The REST interface is only supported on the WACS server, which should run on your BI4 server (unless you have a customised landscape). If it's slow, I would suggest looking into the root cause of this performance issue, instead of discarding the SDK altogether.
If you're going to use the REST interface, I would recommend opting for JSON to communicate through REST instead of XML. It's easier to read and parse.
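To make those four steps concrete, here is a rough Python sketch against the RESTful SDK (the Raylight part of it), using JSON as suggested. The host, port, document id, Raylight paths, and the Accept type used for the Excel export are assumptions based on a typical BI4 install; verify them against your WACS configuration and SDK version, and note that prompt handling is omitted.

    # Sketch of the logon / refresh / export / close sequence against the BI4 REST SDK.
    # Endpoints and field names are assumptions to verify against your landscape.
    import requests

    BASE = "http://bi4-server:6405/biprws"        # WACS host and default REST port (assumption)
    DOC_ID = "12345"                              # id of the Webi document (placeholder)

    # 1. Authenticate and get a logon token
    logon = requests.post(
        f"{BASE}/logon/long",
        headers={"Content-Type": "application/json", "Accept": "application/json"},
        json={"userName": "myuser", "password": "mypassword", "auth": "secEnterprise"},
    )
    logon.raise_for_status()
    token = logon.headers["X-SAP-LogonToken"]     # some versions expect this wrapped in double quotes
    auth = {"X-SAP-LogonToken": token, "Accept": "application/json"}

    # 2. Refresh the document (pass prompt values here if the document has prompts)
    refresh = requests.put(f"{BASE}/raylight/v1/documents/{DOC_ID}/parameters", headers=auth)
    refresh.raise_for_status()

    # 3. Export the document to Excel
    export = requests.get(
        f"{BASE}/raylight/v1/documents/{DOC_ID}",
        headers={"X-SAP-LogonToken": token, "Accept": "application/xls"},
    )
    with open("report.xls", "wb") as f:
        f.write(export.content)

    # 4. Close the session to free the licence
    requests.post(f"{BASE}/logoff", headers=auth)

Excel itself could drive this by shelling out to a script like the above from a button macro, which keeps the VBA side trivially small.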
A last option, which I wouldn't recommend, is LiveOffice. This is a separate product which allows you to embed contents from Webi documents into Office documents (most notably Excel). LiveOffice has always had its share of problems and has not received much love from SAP regarding much needed updates.
One final thought: the report will never appear in the same sheet, at least not without an additional amount of coding. Whatever SDK you end up choosing, you will always end up with an Excel file. If you want to show the results in the Excel file you started from, you'll need to code the steps to open the generated file, grab the contents and then copy those to your worksheet.

Exporting data from Lotus Notes: C# interop, C, Java or LotusScript?

I'm about to export a lot of data from a Lotus Notes db, and I'm wondering if anyone can shed any light on how exactly I can move forward on this point.
Notes has some views (lists with custom templates?) of some kind - are these saved in .nsf files on the Domino server, or are the .nsf files for email only?
If the .nsf files are actually the database files, what would be the best language / development pack to use to pull data from them?
If you need full-time synchronization between an existing Notes infrastructure and a RDBMS, LEI (Lotus Enterprise Integrator) or a third-party tool like Notrix would be your best bet -- it's as simple as defining a job and a schedule/trigger to run it. If you need to occasionally pull (or push) a subset of the data, then NotesSQL is probably the easiest approach. If you're not afraid of learning the structure of the NSF (Notes Storage Facility), then the LotusScript/COM API or the Java/CORBA API would give you finer-grained control.
If what you really need is a one-time dump of everything, then exporting all of the data notes to DXL (Domino XML) would give you the most complete version of the data you're going to get, and in a way that would let you recover and convert formatted Notes Rich Text, file attachments, and so on in a way that would be incredibly difficult to achieve otherwise. DXL is verbose, so don't say I didn't warn you, but it is pretty comprehensive as well. (The Domino Designer Help entry on the NotesDXLExporter class has example code that is exactly on point.)
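If you go the DXL route, the sketch below shows the basic shape using the Notes COM classes from Python; the same calls exist in LotusScript, and the Designer Help example mentioned above is the authoritative version. Server, database, and output file names are placeholders.

    # Sketch: dump an entire database to DXL with NotesDXLExporter via COM.
    # Requires pywin32 and a local Notes client; names below are placeholders.
    import win32com.client

    session = win32com.client.Dispatch("Lotus.NotesSession")
    session.Initialize("your-notes-password")

    db = session.GetDatabase("YourServer/YourOrg", "apps/yourapp.nsf")

    stream = session.CreateStream()
    stream.Open("C:\\export\\yourapp.dxl")             # output file for the DXL

    exporter = session.CreateDXLExporter(db, stream)   # export the whole database to the stream
    exporter.ConvertNotesBitmapsToGIF = True           # optional: makes embedded images easier to reuse
    exporter.Process()                                  # runs the export; DXL is verbose, expect a big file

    stream.Close()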
It all depends on what language you're familiar with.
If you know LotusScript well, then that would be my first choice since it's the most integrated with the platform.
If you don't know LotusScript that well, but you know C#/Java/C really well, then you shouldn't have any trouble using any of those APIs (and they should all be able to get the job done equally well).
In Lotus Notes/Domino all the data is stored in the .nsf files. This is true for all Notes databases, not just email. The data is all stored in documents, which are basically collections of named fields containing values. The views are simply ways of indexing and displaying collections of documents based on specific criteria. The views can also calculate values based on the value of a field in the documents.
The Notes LotusScript and Java APIs are essentially identical and would be the simplest way to programmatically access the data. The C API is much lower level and probably overkill for this kind of thing.
You could look at NotesSQL, if you want to create an ODBC connection to an NSF file to pull data into SQL or Access. If all the data is contained within the view you could simply select all the documents and click Edit > Copy Selected As Table and paste into Excel.
To answer your other questions: Notes views are similar to SQL views - essentially a query on the data stored within the NSF. NSF files contain both the data and the structure of the application in one file.
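If you do try the NotesSQL route mentioned above, here is a small sketch that pulls a view's rows over ODBC with pyodbc and writes them to CSV. The DSN, credentials, and view name are placeholders: NotesSQL exposes the configured database's views and forms as tables, and the exact connection string depends on how you set up the driver.

    # Sketch: read a Notes view over ODBC using the NotesSQL driver and pyodbc.
    # "NotesSQL_DSN" and "MyView" are placeholders for your DSN and the view (exposed as a table).
    import csv
    import pyodbc

    conn = pyodbc.connect("DSN=NotesSQL_DSN;UID=YourNotesUser;PWD=YourPassword")
    cursor = conn.cursor()
    cursor.execute("SELECT * FROM MyView")              # views and forms appear as tables

    with open("export.csv", "w", newline="", encoding="utf-8") as out:
        writer = csv.writer(out)
        writer.writerow([col[0] for col in cursor.description])   # column names from the view
        for row in cursor.fetchall():
            writer.writerow(list(row))

    conn.close()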

When is SPFile.Properties != to SPFile.Item.Properties in SharePoint?

One of our customers has a problem that we cannot reproduce. We programmatically copy a document's properties to a destination file using SPFile.Properties. However, for some reason the file's properties do not match the meta data specified on the list the file is stored in.
Now, we can probably solve this by copying SPFile.Item.Properties (not tested yet), but I am just wondering under what circumstances SPFile.Properties is unequal to SPFile.Item.Properties.
Update: We have just received an update from our customer. Using SPFile.Item.Properties always returns the up to date information. However, we still would like to understand the original question.
There is a slight difference between SPFile.Properties and the SPFile.Item fields, and the first one is much, much slower to access.
You have most probably seen a Microsoft Office document's "properties" window (like this one: http://dradisframework.org/images/tutorial/custom_document_properties.png). These are the properties that are read when you access SPFile.Properties. Reading them is slow because there is code infrastructure that parses the binary DOC file to find the properties (it can take around 30 milliseconds for every property access). See more here: http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.spfile.properties.aspx
In SharePoint, every item is an SPListItem and its field values (and I don't use the word "properties" on purpose here) are stored in Sharepoint's content database. So, when you access SPFile.Item.Properties, you actually look at the SPListItem to which the file is attached and look at its properties from SharePoint's content database.
What happens behind the scenes, when you upload a file with some "Office properties" set, is that SharePoint copies them into identically named fields on the SPListItem. (Some information about it here: http://weblogs.asp.net/bsimser/archive/2004/11/22/267846.aspx)
This is why these properties typically have the same value, BUT that only happens if SharePoint knows how to read the metadata from your file and write it back. So if you put a .txt file in your SharePoint store, you will not get any SPFile.Properties back.
The user will always see the ListItem Properties and not the SPFile properties in a document library. So using the ListItem properties in the copy is the way to go.
I believe this issue is related to the SharePoint property promotion/demotion feature, which enables document properties to be embedded in the physical MS Office file and travel with it to the client, etc. However, this is currently only supported for Office file types (to my knowledge).
Jonathan
Trying to find "official documentation" for anything in SharePoint is pretty much undoable. :-D The online docs are poor; you are better off using blog entries etc.
P.S. I agree with Alex here. Although an SPFile never exists in a list without an accompanying SPListItem, the connection between the two can get corrupted (i.e. you can edit the list item but the file cannot be opened). This suggests to me that information about the two is stored in different locations in the content db. I have had this happen before.
