Copy MIME attachments from document to document - XPages

The XPages application uses "template" documents containing two fields bound to a rich text editor and file upload/download controls.
Regular documents initially link to a template document, with one datasource for the common fields (title, category, readers/authors...) of the current document, and a second datasource that shows the read-only rich text + attachments from the template.
When the user decides to alter the rich text or attachments, the document is unlinked from the template by copying the rich text and attachments from the template into the current document.
The problem: the standard Java snippet for copying the rich text (and the attachments too) is:
session.setConvertMime(true);
RichTextItem rti = (RichTextItem)docTemplate.getFirstItem("Body");
rti.copyItemToDocument(docCurrent, "Body");
rti = (RichTextItem)docTemplate.getFirstItem("Files"); <====
rti.copyItemToDocument(docCurrent, "Files");
docCurrent.save(); //saves in RT format, next save via XPage converts to MIME
This always works for the Body field (although it alters the formatting a bit), but it rarely works for the attachments.
Resaving the template document in the Notes client converts the rich text from MIME to native RT format, and the code then works without problems.
"Not working" means:
an exception, java.lang.ClassCastException: lotus.domino.local.Item incompatible with lotus.domino.RichTextItem, at the line marked with the arrow;
a missing Files field (the Body item is created correctly, though).
For some attachments the code seems to work (a text file); for bigger or binary ones it fails (a 23 KB .doc, a 3 MB .pdf).
A LotusScript equivalent of the code above, called as an agent, does not help either.
The datasource property computeWithForm is deliberately not used.
Question: what is the proper technique for copying MIME attachment(s) between documents?

The quickest way would be to use Document.copyAllItems(Document doc, boolean replace) and then remove what is unnecessary.
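For illustration, a minimal Java sketch of that approach could look like the following (the item names Title, Category and TemplateOnlyField are assumptions for this example, not fields from the original application):

// Sketch: copy every item from the template, then restore/remove what
// should not come from the template. Assumes the same docTemplate and
// docCurrent handles as in the question.
void copyFromTemplate(lotus.domino.Document docTemplate, lotus.domino.Document docCurrent)
        throws lotus.domino.NotesException {
    // Preserve values that belong to the current document.
    String title = docCurrent.getItemValueString("Title");
    String category = docCurrent.getItemValueString("Category");

    // Copy everything from the template, replacing items of the same name.
    docTemplate.copyAllItems(docCurrent, true);

    // Remove items that are only meaningful on the template (hypothetical name).
    if (docCurrent.hasItem("TemplateOnlyField")) {
        docCurrent.removeItem("TemplateOnlyField");
    }

    // Restore the current document's own values.
    docCurrent.replaceItemValue("Title", title);
    docCurrent.replaceItemValue("Category", category);

    docCurrent.save(true, false);
}

A side benefit of this approach is that no cast to RichTextItem is needed at all, which is exactly where the ClassCastException was thrown when the item was stored as MIME.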

Related

How to add a secure and hidden attachment to a PDF document using iTextSharp

I want to attach a file to an existing PDF document using iTextSharp, and I am able to do it using the pdfStamper.AddFileAttachment(...) method. Now I want to make the attachment hidden/secure in such a way that no one is able to see the attachment or retrieve it directly from the PDF. It should only be retrievable from code.
I wouldn't store anything that has to be hidden in a File Attachment. That's a public, well-known mechanism that is understood and supported by multiple pieces of software (through UI).
If it has to be hidden and secure, I would protect the file by encrypting it in some way and then store all of it in a private CosStream somewhere in the document. The best way to do this would likely be the "Page-Piece Dictionaries" which provide a way to store product private data inside a PDF file. Private data can be attached to forms, pages or the document as a whole.
In my version of the PDF specification, this is paragraph 14.5, Page-Piece Dictionaries.
To address the concern of the OP and mkl's subsequent comment, there is no set expectation that Page-Piece data is encoded in any particular way. The Page-Piece Dictionary contains a "Private" key that can have anything as its value (so the value can be a string for smaller data, could be a dictionary containing multiple pieces of private information, or could be a stream that is compressed to keep it small).
From the PDF specification: "Private (key): (Optional) Any private data appropriate to the conforming product, typically in the form of a dictionary". Note the "typically" in the description. Further explanation in the PDF specification clarifies that the type of data stored may be anything you want.
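As a rough sketch of the "encrypt it in some way" step (shown in Java, iText's original language; the equivalent .NET classes live in System.Security.Cryptography), the payload could be encrypted before it is written into the private structure. Key management and the actual embedding into the Page-Piece dictionary are out of scope, and all names here are illustrative:

// Sketch only: encrypt the attachment bytes before writing them into the
// PDF's private data (e.g. the Private entry of a Page-Piece dictionary).
// AES-GCM is used as an example cipher; key management is not shown.
byte[] encryptPayload(byte[] plain, byte[] rawKey) throws Exception {
    javax.crypto.spec.SecretKeySpec key =
            new javax.crypto.spec.SecretKeySpec(rawKey, "AES"); // 16/24/32-byte key
    byte[] iv = new byte[12];
    new java.security.SecureRandom().nextBytes(iv);
    javax.crypto.Cipher cipher = javax.crypto.Cipher.getInstance("AES/GCM/NoPadding");
    cipher.init(javax.crypto.Cipher.ENCRYPT_MODE, key,
            new javax.crypto.spec.GCMParameterSpec(128, iv));
    byte[] cipherText = cipher.doFinal(plain);
    // Prepend the IV so the reading code can decrypt later.
    byte[] out = new byte[iv.length + cipherText.length];
    System.arraycopy(iv, 0, out, 0, iv.length);
    System.arraycopy(cipherText, 0, out, iv.length, cipherText.length);
    return out;
}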

Array of attachment type - how to get a filename for highlighted fragment?

I use ElasticSearch to index resources. I create a document for each indexed resource. Each resource can contain metadata and an array of binary files. I decided to handle these binary files with the attachment type. Metadata is mapped to simple fields of string type. Binary files are mapped to an array field of the attachment type (a field named attachments). Everything works fine - I can find my resources based on the contents of the binary files.
Another ElasticSearch feature I use is highlighting. I managed to successfully configure highlighting for both metadata and binary files, but...
When I ask for highlighted fragments of my attachments field, I only get fragments of these files without any information about the source of the fragment (there are many files in the attachments array field). I need a mapping between a highlighted fragment and the element of the attachments array - for instance the name of the file, or at least the index in the array.
What I get:
"attachments" => ["Fragment <em>number</em> one", "Fragment <em>number</em> two"]
What I need:
"attachments" => [("file_one.pdf", "Fragment <em>number</em> one"), ("file_two.pdf", "Fragment <em>number</em> two")]
Without such a mapping, the user of the application knows that a particular resource contains files with the keyword, but has no indication of the file names.
Is it possible to achieve what I need using ElasticSearch? How?
Thanks in advance.
So what you want here is to store the filename.
Did you send the filename in your JSON document? Something like:
{
  "my_attachment" : {
    "_content_type" : "application/pdf",
    "_name" : "resource/name/of/my.pdf",
    "content" : "... base64 encoded attachment ..."
  }
}
If so, you can probably ask for field my_attachment._name.
If that's not the right answer, can you refine your question a little and give a sample JSON document (without the base64 content) and your mapping, if any?
UPDATE:
When it comes from an array of attachments, you can't tell which file each fragment comes from, because everything is flattened behind the scenes. If you really need that, you may want to have a look at nested fields instead.
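For example, a nested mapping could pair each file with its name (just a sketch; the type and field names here are assumptions based on the question, and the attachment type requires the mapper-attachments plugin):

{
  "resource" : {
    "properties" : {
      "attachments" : {
        "type" : "nested",
        "properties" : {
          "filename" : { "type" : "string" },
          "file" : { "type" : "attachment" }
        }
      }
    }
  }
}

Each element of attachments then keeps its filename next to its content, so the application can report which file a highlighted fragment came from.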

Dynamic field binding inside a repeat control

I have a strange problem. I'm using dynamic field binding in a custom control.
The field binding is created like this.
XPage (Datasource "document" is placed here)
Custom Control (String passed in)
(to get errors if there are any)
Repeat (CompositeData is passed to a bean that returns the strings for Rows,columns)
Repeat (repeat 1 variable used for Columns)
Custom Control (fieldname is passed in)
The field binding is done like this:
#{document[compositeData.fieldName]}
The problem is that when I save the XPage, I get an error in the messages control:
Document has been saved by another user - Save created a new document as a response to that modified document.
And all fields are cleared.
Any ideas how to debug this or is there something I'm missing?
The "Document has been saved by another user" error is only tip of the iceberg - there are some really strange problems with reapeats that repeats fields that are bound and repeatControls property is set to false. The decoding part of xpages lifecycle cannot handle it properly - the controls will be losing data. You should use repeatControls set to true as Martin suggests.
"Repeat control variable doesn't exists" is probably caused by the property that removes repeats set to true. You can solve this by either changing it to false or by adding additional data context that will keep repeated value.
And finally for this to have add/remove functionality You can use Dynamic Content Control and show(null) hack to rebuild the repeat content.
To manage this complexity better I would advise You to stop using document data source and start creating some managed beans.
If You will follow my suggestions I guarantee that You will get the functionality You are looking for as I have few apps that works great and have this kind of complex data editors in them.
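A minimal sketch of such a managed bean (the class and property names are made up, and it would be registered in faces-config.xml with viewScope or a wider scope):

import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a bean that can back dynamic bindings such as
// #{docBean.values[compositeData.fieldName]} instead of a document datasource.
public class DocBean implements Serializable {
    private static final long serialVersionUID = 1L;

    private final Map<String, Object> values = new HashMap<String, Object>();

    public Map<String, Object> getValues() {
        return values; // EL map access makes each field readable and writable
    }

    // Loading from / saving to the back-end Notes document would go here.
}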
I don't know if it'll help you, but I pass both the document datasource and the field name as parameters to a DynamicField control, and use it like this:
compositeData.dataSource[compositeData.fieldName]
The type of the datasource parameter is com.ibm.xsp.model.DataSource; it's listed as dataInterface under Data Sources.
Do you have repeatControls="true" set for the repeat control?
It sounds like you've got the datasource defined multiple times on the XPage (plus custom controls). Either that, or the save button has save="true" but the code saves the document in the back end, or code in multiple places saves the same document. I've used the same method of passing the datasource down to the custom control, but that may just be because that was what I saw on a blog.

Cascaded ListBoxes using SPFieldMultiChoice - issue: defaults to the default content type

I wound up modifying the source from a publicly posted POC, http://datacogs.com/datablogs/archive/2007/08/26/641.aspx, which is a custom field definition for cascading drop-downs. The modifications were to allow parent-child list boxes where a user can multi-select for filtering and selecting the values to be written back to a SharePoint list.
I got the parent-child cascading behavior working, but the save operation only takes the default Content Type value.
I changed the base type for the custom field control from "SPFieldText" to "SPFieldMultiChoice", along with changing the FLD_TYPES field definition values from "Text" to "MultiChoice".
Steps Explained:
1. The custom field is created, derived from the ‘SPFieldMultiChoice’ class. The custom field allows multiple values to be selected.
2. A field based on the custom field is added to a custom content type that was created in the GUI and derived from the ‘Document’ content type.
3. The custom content type is added to the document library.
4. A document is uploaded, the custom content type is selected, and it is tagged on the document.
A. The correct content type gets tagged, with the correct metadata, if the document type is .xls, .doc, .txt, etc.
B. The default content type, i.e. the ‘Document’ content type, gets tagged if the document type is .xlsx or .docx.
Issue summary - point B is the problem: the correct content type is not tagged and the default content type gets tagged when the uploaded document is a .xlsx or .docx file.
However, the same content type and the same custom field work if the document type is .xls or .doc.
I would appreciate your input in this regard.
Thanks for taking the time to read through my post.
Cheers, ~Poonam
Not sure why this is happening; it might be a good idea to notify Microsoft of this behavior. The difference between .doc and .docx that you describe is very, very strange. Could you try setting the content type in an item event receiver, to force the ContentType or ContentTypeId field of the item to reflect the correct content type explicitly?
i.e.
item["ContentTypeId"] = new ContentTypeId("0x010100your_id_plus_the_part_added_by_list");
the_part_added_by_list is the extra GUID that is added when a content type is added to a list.
This is because content types in a list are basically children of the actual content type you added,
i.e. 0x010100 YOURGUID 00 LIST-SPECIFIC GUID.
You can get this full ID using a tool like Stramit CAMLViewer, or programmatically by looping through the SPList's ContentTypes collection.
(My guess would be to do this in the ItemUpdating / ItemUpdated event to see if there is a difference in content type during those calls.)

Link data in custom SQL db with document library

Environment:
I have a Windows network-shared desktop application written in C# that is backed by an MSSQL database. Windows SharePoint Services 3.0 is installed (default installation, single processor, default SQL Express content database, and so on) on the same Windows Server 2003 machine.
Scenario:
The application generates MS Word documents during processing (creating work orders) that need to be saved to SharePoint, and the result of the process must be linked to the corresponding document.
So, for each insert into dbo.WorkOrders (one work order), there is one MS Word document. I would need to save the document ID from the SharePoint library to my database so that, later on, manual corrections can be made to the related document. When a work order is deleted, the SharePoint document would also have to be deleted.
Also, there is a dbo.Jobs table which is parent to dbo.WorkOrders and can have several work orders.
I was thinking about making a custom list in SharePoint that would have two ID fields - one the document's ID and the other the AutoID of the document. I don't think this would be a good way performance-wise, and it requires too much upkeep, which makes it more error prone.
Another path I was contemplating is metadata. I could have an Identity field in dbo.WorkOrders that would be unique and auto incremented, and I could save that value as a file name (1.docx, 2.docx 3.docx ... n.docx where n would be the value in dbo.WorkOrder's identity field). In the metadata field of the Word document, I could save the job ID from dbo.Jobs.
I could also just increment the identity field in the WorkOrder (it would be a bigint), but then the file names would get ugly and maybe I'd overflow the ID range (since there could be a lot of documents).
There are other options also that I have considered and dismissed, since none of them satisfied the requirements (linked data sources, subfolder structures etc.). I'm not sure how to proceed. I'm new to sharepoint and it's still a bit of a mystery to me, as I don't understand all the inner workings of the system.
What do you suggest?
Edit:
I think I'll use GUIDs as file names and save those GUIDs in my database after sending the documents to SharePoint. What do you think of that?
All the documents in SharePoint under the same content database (SQL database) are stored in the same table; that said, you have a unique ID for files no matter where they are in the SharePoint structure.
When retrieving files by their UniqueId, the API only gives you the option to get them if you also know their SPWeb, so you could easily store, for each record you have in your external database (or your custom list), the SPFile GUID and the SPWeb GUID, retrieving them with:
using (SPWeb subweb = SPContext.Current.Site.OpenWeb(new Guid("{000...}")))
{
    SPFile file = subweb.GetFile(new Guid("{111...}"));
    // file logic
}
PS: As Colin pointed out, URL retrieval is possible but messy. I also changed the SPSite to the one from the context, since you are always under the same site collection in my example.
Like F.Aquino said, all items in SharePoint already have a UniqueId field (i.e. SPListItem.UniqueId and SPFile.UniqueId), which is a GUID. Save that to your database, along with your web's GUID. Then you can use the code provided by F.Aquino to get the file, or even the byte[] of its stream.
P.S. For F.Aquino: your code leaves the SPSite in memory; use this instead:
P.P.S. This is just a clarification; mark F.Aquino's as the answer.
using (SPSite site = new SPSite("http://url"))
{
    using (SPWeb subweb = site.OpenWeb(new Guid("{000...}")))
    {
        SPFile file = subweb.GetFile(new Guid("{111...}"));
        // file logic
    }
}
