Extract multiple Cognos report definitions

In Cognos, is there a way to get the definitions (filters, selected fields) of a number of reports in a folder?
I've inherited around 500 reports defined in a folder, and they all need to be checked and fixed as they have business errors (not technical errors). If it were possible to get all their definitions in a single extract, it would save an enormous amount of time compared with clicking through each report one by one to get that information.
In Access this can be done with VBA (for query definitions), but I'm not sure if there is a scripting language that can be used with Cognos to achieve a similar result.

It sounds like you may want to "validate" each of these 500 reports (effectively equivalent to pressing the "validate" button on each individual report if it were open in the authoring studio).
Validation will ensure that a report specification XML is still syntactically correct, references a package which is still present in the content store, references only query items from that package which still exist, generates valid SQL against the underlying datasource, etc.
If that's what you're looking for, an easy way to do batch validation for all 500 reports would be to use MotioPI (it's a free admin tool for Cognos). Here's a short article which walks you through the process:
http://info.motio.com/Blog/bid/70357/Batch-Validation-of-Cognos-Reports
If you're wanting to retrieve the actual report specification (XML) for each of these 500 objects, then you'd need to write a program which utilizes the Cognos SDK to retrieve the specification XML from each of the 500 report objects. After that, you'd need to add logic which examines each of these 500 XML documents, looking for whatever it is you're looking for.

We solved this by exporting the XML of the reports using a SQL query on the content store.
The output is processed with a Python script that converts the XML to a table layout in CSV format.
This CSV file can easily be imported into Excel.
You might want to process the report XML directly in a SQL query with the XMLTABLE function. In our situation this turned out to be a heavy process that we didn't want to burden the content store database with, but for a small set of reports it works fine.
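For illustration, here is a minimal sketch of such a script, assuming each report specification has already been exported to its own .xml file. The element names (dataItem, expression, filterExpression) appear in common Report Studio specifications, but they vary by Cognos version, so treat them as assumptions:

    import csv
    import glob
    import xml.etree.ElementTree as ET

    def local(tag):
        # strip the XML namespace so matching works across spec versions
        return tag.rsplit('}', 1)[-1]

    with open('report_definitions.csv', 'w', newline='') as out:
        writer = csv.writer(out)
        writer.writerow(['report', 'kind', 'name', 'expression'])
        for path in glob.glob('specs/*.xml'):
            root = ET.parse(path).getroot()
            for el in root.iter():
                if local(el.tag) == 'dataItem':
                    # selected field: its name plus its expression child
                    expr = next((c.text or '' for c in el
                                 if local(c.tag) == 'expression'), '')
                    writer.writerow([path, 'field', el.get('name', ''), expr])
                elif local(el.tag) == 'filterExpression':
                    writer.writerow([path, 'filter', '', el.text or ''])

The resulting CSV has one row per field or filter, which makes it easy to scan all 500 definitions side by side in Excel.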

Related

Extract Keywords from Office Documents with Sharepoint Flow

I am trying to implement a document management system using Sharepoint. One major issue is that colleagues cannot find documents in the current setup (local fileserver). They have asked that we have a system that scans uploaded documents and automatically looks for keywords in them and then populates a "Meta" column.
I have had some sort of success with OCR on image files, but no success so far getting keywords out of Office documents (doc, xls, etc.).
Is there a way to set up a flow to do this task for me?
Any help is much appreciated.
I tried "Get file metadata" and Azure "Text analysis", but it seems to take the raw data of the files (XML, I assume) and returns that the document is too large to analyse.
There is something vague about this requirement - how is a keyword defined in a document?
Therefore, the first obvious solution would be to assign keywords to each file upon uploading it. You could create a process for this with Flow - tasks, reminders and so on.
Automating this with OCR instead means you need an OCR service that works with MS Flow, and there you have only one choice - ElasticOCR. Then, in your flow (a sketch of the analysis step follows this list):
- feed the document content to the ElasticOCR action
- keep in mind that OCR is not 100% accurate
- analyze the generated text content according to your keyword definition
- finally, write the meta back to the library in the corresponding columns
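As a minimal sketch of that analysis step (in Python rather than a Flow action, with a made-up keyword set standing in for your own definition):

    # hypothetical keyword set - replace with your own definition
    KEYWORDS = {'invoice', 'contract', 'tender'}

    def extract_keywords(text):
        # return a ';'-joined list of keywords found in the OCR'd text,
        # ready to be written into the "Meta" column
        words = {w.strip('.,;:()').lower() for w in text.split()}
        return ';'.join(sorted(KEYWORDS & words))

    print(extract_keywords('This contract covers the 2019 invoice cycle.'))
    # -> 'contract;invoice'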
Having worked on a similar requirement, we asked uploaders to publish their documents with a short abstract (a column from the content type). The assumption is that the abstract contains the keywords and is stored in a multi-line column, making it searchable site-wide.

Download Webi report from Excel

With the newly released Webi there's no way to manipulate reports with VBA like there was in the DESKI era.
I'd like to know if there's a way for me to click a button with parameters in an Excel sheet and get a report from the server?
I've been thinking of using the RESTful web services, but it seems that there is a performance problem.
I also considered using a Java app in the middle using the SDK, but it's not really satisfying as it adds a layer.
Do you know if there's another way to download a Webi report from and to Excel?
For this type of requirement, you'd normally use the OpenDocument feature. There is one thing that it won't do however, at least not for Webi documents, and that is deliver the output in Excel format (HTML and PDF are the two possible formats for Webi). In all fairness, the export to Excel option is only about two or three clicks away, but I can understand that this wouldn't be an ideal solution.
Another option is the Java SDK, which I would not recommend, as the ReBEAN SDK (the part of the Java SDK you need to interface with Webi documents) is deprecated and replaced by the REST SDK.
The REST SDK would be the way to go if the OpenDocument feature is not sufficient. Keep in mind that this would involve quite a few steps, each time sending a command to the WACS server and then decoding the answer (see the sketch below). The steps would be:
Authenticate and get a logon token
Refresh the document (if necessary pass prompt values)
Export the document to Excel
Close the document
The REST interface is only supported on the WACS server, which should run on your BI4 server (unless you have a customised landscape). If it's slow, I would suggest looking into the root cause of this performance issue, instead of discarding the SDK altogether.
If you're going to use the REST interface, I would recommend opting for JSON to communicate through REST instead of XML. It's easier to read and parse.
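A minimal sketch of those four steps in Python, using JSON as suggested. The host, credentials and document id are placeholders, and the Raylight paths and headers should be verified against the REST documentation for your BI4 version:

    import requests

    BASE = 'http://biserver:6405/biprws'    # hypothetical WACS host and port
    HDRS = {'Accept': 'application/json', 'Content-Type': 'application/json'}

    # 1. Authenticate and get a logon token
    r = requests.post(BASE + '/logon/long', headers=HDRS,
                      json={'userName': 'user', 'password': 'secret',
                            'auth': 'secEnterprise'})
    HDRS['X-SAP-LogonToken'] = r.headers['X-SAP-LogonToken']

    doc = BASE + '/raylight/v1/documents/1234'    # 1234 = document id

    # 2. Refresh the document (pass prompt values in the body if needed)
    requests.put(doc + '/parameters', headers=HDRS, json={'parameters': []})

    # 3. Export to Excel by asking for an Excel MIME type
    #    (check the supported Accept values for your version)
    xls = requests.get(doc, headers={**HDRS,
                                     'Accept': 'application/vnd.ms-excel'})
    with open('report.xls', 'wb') as f:
        f.write(xls.content)

    # 4. Log off to close the session and release server-side resources
    requests.post(BASE + '/logoff', headers=HDRS)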
A last option, which I wouldn't recommend, is LiveOffice. This is a separate product which allows you to embed contents from Webi documents into Office documents (most notably Excel). LiveOffice has always had its share of problems and has not received much love from SAP regarding much needed updates.
One final thought: the report will never appear in the same sheet, at least not without an additional amount of coding. Whatever SDK you end up choosing, you will always end up with an Excel file. If you want to show the results in the Excel file you started from, you'll need to code the steps to open the generated file, grab the contents and then copy those to your worksheet.

Though deprecated, is DAO still used for automation of Access databases from Excel?

I'm trying to wrap my head around it. I've checked other questions and nothing seems to be too similar.
The Office 2013 development centre contains extensive DAO examples and states it is one of the easiest ways to work with an Access file (it does not require an Access window), but DAO is a deprecated technology. (https://msdn.microsoft.com/en-us/library/office/ff834801.aspx)
I'm trying to write an Excel add-in (the eventual end point) that will grade Access assignments in .accdb format.
I can't just use ADODB to perform SQL queries to extract data, unless SQL can also do the following:
check a specified report has a specified title
check that specified tables, queries, forms, and reports exist
check specific fields in a table exist
check that specific fields have been set as the Primary Key
I also need to check that certain values exist in a table, but those I can solve with SQL.
So should I be using DAO or stick with ADODB? Remember, I'm using Excel, not programming in Access VBA.
The simplest way to work with Excel tables is to link them in Access, either manually or via the TransferSpreadsheet method in VBA. If you use a generic naming standard for the file names and tab names in Excel, you will not have to relink; rather, you can replace the Excel file and the Access link will read the new file, unless the layout has changed.
Once linked, you can use the query-by-example tool to write your queries, which can also be written into code (i.e. either embedded in the VBA [old school and not a best practice] or saved in an Access table, then looked up and assigned to a variable for use with the CurrentDb.Execute strSQL, dbFailOnError command).
I suggest manually linking the first go-round so you can manually define the individual column types rather than letting Access mess them up.
If you cannot use generic names and have to pick the file via a browse dialog, you can find a file-browse function via Google to allow you to programmatically link the file (or you can rename it to a generic name, which works even better).
So DAO vs. ADODB is moot from an Access perspective. I have supported 50 databases with 70,000+ lines of code and dozens of Excel source files over the past 4 years and never had to ponder the question.
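For what it's worth, the same link-then-query pattern can be driven from outside Access as well. A minimal sketch via COM automation from Python (pywin32 assumed installed; the file paths, table name and SQL are hypothetical):

    import win32com.client

    app = win32com.client.Dispatch('Access.Application')
    app.OpenCurrentDatabase(r'C:\grading\assignment1.accdb')

    # acLink = 2, acSpreadsheetTypeExcel12Xml = 10, True = first row has headers
    app.DoCmd.TransferSpreadsheet(2, 10, 'tblStudents',
                                  r'C:\grading\students.xlsx', True)

    # run a SQL statement against the linked table; 128 = dbFailOnError
    app.CurrentDb().Execute('UPDATE tblStudents SET Graded = True', 128)

    app.CloseCurrentDatabase()
    app.Quit()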
Microsoft originally deprecated DAO in favor of ADO, but more recently folded it into the Microsoft Access database engine (ACE) and is now pushing it as the preferred data access technology for Automation-supported environments (VBA, WSH, etc.).
In general, in your scenario I don't think it makes much of a difference which one you use. I suggest you read through this.
I don't think it's possible to read table/query/field information via SQL. However, this can be done either with the DAO.TableDefs and DAO.QueryDefs collection, or the ADO.OpenSchema method.
RE: forms/reports - I don't know if it's possible to read form/report information via DAO or ADO even without SQL, as these are actually UI objects, not data - unless we're talking about reading/parsing the system tables. You may have to open the database in a hidden Access instance and read the forms/reports that way.
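To make the DAO route concrete, here is a minimal sketch of those checks driven from outside Access (Python with pywin32, purely for illustration - the same collections exist verbatim in VBA; the ProgID, paths and object names are assumptions):

    import win32com.client

    engine = win32com.client.Dispatch('DAO.DBEngine.120')   # ACE DAO engine
    db = engine.OpenDatabase(r'C:\grading\assignment1.accdb')

    # do the specified tables and queries exist? (MSys* are system tables)
    tables = [t.Name for t in db.TableDefs if not t.Name.startswith('MSys')]
    queries = [q.Name for q in db.QueryDefs]
    print('Students' in tables, 'qryTotals' in queries)

    # do specific fields exist, and which of them form the primary key?
    tdef = db.TableDefs('Students')
    fields = [f.Name for f in tdef.Fields]
    pk = [f.Name for ix in tdef.Indexes if ix.Primary for f in ix.Fields]
    print(fields, pk)
    db.Close()

    # forms/reports are UI objects, so open a hidden Access instance instead
    app = win32com.client.Dispatch('Access.Application')
    app.OpenCurrentDatabase(r'C:\grading\assignment1.accdb')
    reports = [r.Name for r in app.CurrentProject.AllReports]
    forms = [f.Name for f in app.CurrentProject.AllForms]
    app.Quit()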

Exporting data from Lotus Notes: C# interop, C, Java or LotusScript?

I'm about to export a lot of data from a Lotus Notes db, and I'm wondering if anyone can shed any light on how exactly I can move forward on this point.
Notes has some views (lists with custom templates?) of some kind - are these saved in .nsf files on the Domino server, or are the .nsf files for email only?
If the .nsf files are actually the database files, what would be the best language / development pack to use to pull data from them?
If you need full-time synchronization between an existing Notes infrastructure and a RDBMS, LEI (Lotus Enterprise Integrator) or a third-party tool like Notrix would be your best bet -- it's as simple as defining a job and a schedule/trigger to run it. If you need to occasionally pull (or push) a subset of the data, then NotesSQL is probably the easiest approach. If you're not afraid of learning the structure of the NSF (Notes Storage Facility), then the LotusScript/COM API or the Java/CORBA API would give you finer-grained control.
If what you really need is a one-time dump of everything, then exporting all of the documents to DXL (Domino XML) would give you the most complete version of the data you're going to get, and in a way that would let you recover and convert formatted Notes rich text, file attachments, and so on in a way that would be incredibly difficult to achieve otherwise. DXL is verbose, so don't say I didn't warn you, but it is pretty comprehensive as well. (The Domino Designer Help entry on the NotesDXLExporter class has example code that is exactly on point.)
It all depends on what language you're familiar with.
If you know LotusScript well, then that would be my first choice since it's the most integrated with the platform.
If you don't know LotusScript that well, but you know C#/Java/C really well, then you shouldn't have any trouble using any of those APIs (and they should all be able to get the job done equally well).
In Lotus Notes/Domino all the data is stored in the .nsf files. This is true for all Notes databases, not just email. The data is all stored in documents, which are basically collections of named fields containing values. The views are simply ways of indexing and displaying collections of documents based on specific criteria. The views can also calculate values based on the value of a field in the documents.
The Notes LotusScript and Java APIs are essentially identical and would be the simplest way to programmatically access the data. The C API is much lower level and probably overkill for this kind of thing.
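The same classes are also exposed over COM, so a quick dump can even be scripted from Python. A minimal sketch, assuming pywin32 and a local Notes client, with placeholder server, database and view names:

    import win32com.client

    session = win32com.client.Dispatch('Lotus.NotesSession')
    session.Initialize()      # may prompt for the Notes ID password
    db = session.GetDatabase('SERVER/ORG', r'apps\crm.nsf')

    view = db.GetView('All Documents')    # hypothetical view name
    doc = view.GetFirstDocument()
    while doc is not None:
        # every document is a bag of named items (fields) with values
        for item in doc.Items:
            print(item.Name, '=', item.Text)
        doc = view.GetNextDocument(doc)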
You could look at NotesSQL, if you want to create an ODBC connection to an NSF file to pull data into SQL or Access. If all the data is contained within the view you could simply select all the documents and click Edit > Copy Selected As Table and paste into Excel.
To answer your other questions: Notes views are similar to SQL views - essentially a query on the data stored within the NSF. NSF files contain both the data and the structure of the application in one file.

When is SPFile.Properties != SPFile.Item.Properties in SharePoint?

One of our customers has a problem that we cannot reproduce. We programmatically copy a document's properties to a destination file using SPFile.Properties. However, for some reason the file's properties do not match the meta data specified on the list the file is stored in.
Now, we can probably solve this by copying SPFile.Item.Properties (not tested yet), but I am just wondering under what circumstances SPFile.Properties is unequal to SPFile.Item.Properties.
Update: We have just received an update from our customer. Using SPFile.Item.Properties always returns the up-to-date information. However, we would still like to understand the original question.
There is a slight difference between SPFile.Properties and SPFile.Item's fields, and the first one is much, much slower to call.
You have most probably seen a Microsoft Office document's "properties" window (this one: http://dradisframework.org/images/tutorial/custom_document_properties.png). These are the properties that are read when you access SPFile.Properties. Reading them is slow, since some code infrastructure has to parse the binary DOC file to find the properties (it can take around 30 milliseconds for every property access). See more here: http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.spfile.properties.aspx
In SharePoint, every item is an SPListItem, and its field values (I deliberately don't use the word "properties" here) are stored in SharePoint's content database. So when you access SPFile.Item.Properties, you actually look at the SPListItem to which the file is attached and read its properties from SharePoint's content database.
What happens behind the scenes, when you upload a file that has some "Office properties" set, is that SharePoint copies them to same-named fields in the SPListItem. (Some information about it here: http://weblogs.asp.net/bsimser/archive/2004/11/22/267846.aspx)
This is why these properties typically have the same value, BUT that only happens if SharePoint knows how to read the metadata from your file and write it back. So if you put a .txt file in your SharePoint store, you will not get any SPFile.Properties back.
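To see the file-embedded half of this mechanism in isolation, you can read the Office properties straight out of a file. A minimal sketch using the python-docx package (the file name is hypothetical):

    from docx import Document   # pip install python-docx

    doc = Document('specification.docx')
    props = doc.core_properties
    # these are the values SharePoint promotes into same-named list fields
    print(props.title, props.author, props.subject)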
The user will always see the ListItem Properties and not the SPFile properties in a document library. So using the ListItem properties in the copy is the way to go.
I believe this issue is related to the SharePoint property promotion/demotion feature, which enables document properties to be embedded in the physical MS Office file and travel with it to the client, etc. This, however, is currently only supported for Office file types (to my knowledge).
Jonathan
Trying to find "official documentation" for anything in SharePoint is pretty much undoable. :-D The online docs suck; you are better off using blog entries etc.
P.S. I agree with Alex here. Although an SPFile never exists in a list without an accompanying SPListItem, the connection between the two can get corrupted (i.e. the list item can be edited but the file cannot be opened). This suggests to me that information about the two is stored in different locations in the content db. I have had this happen before.
