pdfbox - How to incrementally sign updated document or changes - security

I have some questions regarding PDFBox.
I want to successively sign a document that is subject to changes, e.g. a) the original PDF is signed by person X, b) an image is added to the PDF and the result is signed by person Y, etc., such that all signatures remain valid. How can I achieve that with PDFBox, if it is possible at all? I tried several things (e.g. with saveIncremental), but they did not give the intended result.
Or do I need to define empty fields beforehand and allow them to be updated with images so that the signatures stay valid? Is this done with annotations, and if so, how could I realize it?
Any helpful tips or code references in the public domain would be much appreciated. Thanks.
....
contentStream = new PDPageContentStream(doc, page, PDPageContentStream.AppendMode.APPEND, true);
PDImageXObject pdImage = PDImageXObject.createFromFile("C:/logo.png", doc);
....
page.getResources().getCOSObject().getCOSDictionary(COSName.XOBJECT).setNeedToBeUpdated(true);
page.getCOSObject().setNeedToBeUpdated(true);
page.getResources().getCOSObject().setNeedToBeUpdated(true);
doc.getDocumentCatalog().getPages().getCOSObject().setNeedToBeUpdated(true);
doc.getDocumentCatalog().getCOSObject().setNeedToBeUpdated(true);
doc.saveIncremental(fos);

Only a small set of changes is allowed to signed documents, see here for some details. So indeed, you cannot change the page contents after signing, and whether you can fill form fields or even add arbitrary annotations depends on the signature type of the original signature.
If the signature(s) do allow adding annotations, though, i.e. if there are only approval (non-certification) signatures, or at most a certification signature with annotations, form fill-in, and digital signatures allowed, you can add images in annotations.
With PDFBox you can add an annotation showing an image in an incremental update like this:
PDDocument document = ...;
PDImageXObject image = ...;
OutputStream result = ...;

// Appearance stream drawing the image into a 1x1 unit box; the annotation
// rectangle below scales it to its final size on the page.
PDAppearanceStream appearanceStream = new PDAppearanceStream(document);
appearanceStream.setBBox(new PDRectangle(1, 1));
appearanceStream.setResources(new PDResources());
try (PDPageContentStream contentStream = new PDPageContentStream(document, appearanceStream)) {
    contentStream.drawImage(image, new Matrix());
}

PDAppearanceDictionary appearance = new PDAppearanceDictionary();
appearance.setNormalAppearance(appearanceStream);

// A locked, read-only rubber stamp annotation carrying the image.
PDAnnotationRubberStamp stamp = new PDAnnotationRubberStamp();
stamp.setLocked(true);
stamp.setLockedContents(true);
stamp.setPrinted(true);
stamp.setReadOnly(true);
stamp.setAppearance(appearance);
stamp.setIntent("StampImage");
stamp.setRectangle(new PDRectangle(200, 500, 100, 100));

PDPage page = document.getPage(0);
page.getAnnotations().add(stamp);

// Save as an incremental update, writing only the changed page object.
Set<COSDictionary> objectsToWrite = new HashSet<>();
objectsToWrite.add(page.getCOSObject());
document.saveIncremental(result, objectsToWrite);
(AddToSignedFile test testAddImageAnnotation)
I here used a feature available in the development head towards 3.0.0: the saveIncremental overload with a second argument that accepts a collection of dictionaries to save. If you are working with an earlier version, you instead have to mark a path of objects from the document catalog down to the page object for inclusion in the incremental update using setNeedToBeUpdated.
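For PDFBox 2.x, a minimal sketch of that marking (assuming a flat page tree; with nested page trees every intermediate node needs the flag as well, and depending on the exact version you may also have to flag the newly created annotation and appearance objects):
PDDocumentCatalog catalog = document.getDocumentCatalog();
catalog.getCOSObject().setNeedToBeUpdated(true);
// mark the page tree root and the page itself as updated
catalog.getPages().getCOSObject().setNeedToBeUpdated(true);
page.getCOSObject().setNeedToBeUpdated(true);
document.saveIncremental(result);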
If the signature(s) don't allow adding annotations but do allow form editing, i.e. if there is a certification signature with form fill-in and digital signatures allowed, you can at least fill in form fields. This in particular may include setting the appearance of a pushbutton to an image of your choice, as Adobe regularly misuses pushbuttons as pseudo image fields; a sketch follows below. This of course means that you need to have a field prepared for each desired later addition.
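For illustration, a sketch of such a fill-in, reusing an appearance stream built exactly like the one above; the field name "imageField1" is an assumption, and the incremental-save marking from before still applies:
PDAcroForm acroForm = document.getDocumentCatalog().getAcroForm();
PDPushButton button = (PDPushButton) acroForm.getField("imageField1"); // hypothetical field name
PDAnnotationWidget widget = button.getWidgets().get(0);

// Replace the widget's normal appearance with the image appearance stream.
PDAppearanceDictionary buttonAppearance = new PDAppearanceDictionary();
buttonAppearance.setNormalAppearance(appearanceStream);
widget.setAppearance(buttonAppearance);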
If the signatures don't even allow that, i.e. if there is a certification signature with no changes allowed, or if there are no open form fields available for your additions, you're out of luck.
As an aside, since PDF 2.0 the certification level can also be made stricter by an entry in the signature locking dictionary of an approval signature. You may have to consider this in the cases above...

Related

How to avoid changing property values in an NSBatchInsertRequest?

I have a simple Core Data entity Story that I occasionally update with the latest data from a network call. This network call sometimes updates many, many Story instances, so I run an NSBatchInsertRequest, shown below. (The other reason I'm using a batch insert is that many stories might need to be added to the persistent store.)
The problem is a user can have already marked a Story as a favorite. When they do that, I set story.isFavorite = true on the main thread and save viewContext.
However, when the batch insert occurs, it overwrites story.isFavorite, setting it back to false, even though I'm using NSMergeByPropertyObjectTrumpMergePolicy on both the batch insert and view contexts. I am not touching story.isFavorite in the batch insert handler either, so I don't expect that property to be overwritten.
I thought the benefit of a batch insert with this merge policy was to avoid first fetching + then manually updating changed properties + finally saving. What is the right way to avoid changing property values in an NSBatchInsertRequest?
Story
@objc(Story)
public class Story: NSManagedObject {
    @NSManaged public var title: String?
    @NSManaged public var storyURL: URL?
    @NSManaged public var updatedTime: Date?
    @NSManaged public var isFavorite: Bool // <- the problem property
}
Batch insert
container.viewContext.mergePolicy = NSMergeByPropertyObjectTrumpMergePolicy
container.viewContext.automaticallyMergesChangesFromParent = false

let context = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
context.parent = container.viewContext
context.mergePolicy = NSMergeByPropertyObjectTrumpMergePolicy

context.perform {
    var index = 0
    let batchInsert = NSBatchInsertRequest(entity: Story.entity(), managedObjectHandler: { managedObject in
        guard index < downloadedStories.count else { return true } // true = done
        let story = managedObject as! Story
        let storyResponse = downloadedStories[index]
        // Update story with latest response data BUT don't modify story.isFavorite.
        story.title = storyResponse.title
        story.storyURL = storyResponse.storyURL
        story.updatedTime = storyResponse.updatedTime
        // ...
        index += 1
        return false // false = ask for the next object
    })
    batchInsert.resultType = .objectIDs // required to get the inserted IDs back
    if let result = try? context.execute(batchInsert) as? NSBatchInsertResult,
       let insertedIDs = result.result as? [NSManagedObjectID] {
        // Merge changes into parent context. Skip save() because not needed for batch insert.
        NSManagedObjectContext.mergeChanges(fromRemoteContextSave: [NSInsertedObjectsKey: insertedIDs],
                                            into: [container.viewContext])
    }
}
Edit
The Story entity does have a unique value constraint using attribute storyURL.
Update after Michael Tsai's answer
By making the Story entity attribute isFavorite a non-optional Bool without a default value (it was marked Optional before, though I'm not sure that makes a difference here) and keeping the Use Scalar Type box checked, I can confirm that existing objects in the store will not be modified (at all) with this configuration of the batch insert context:
context.persistentStoreCoordinator = container.persistentStoreCoordinator
// HOWEVER, observe that regardless of the merge policy below,
// setting `context.parent = container.viewContext` will also
// overwrite the store data!
context.mergePolicy = NSMergeByPropertyStoreTrumpMergePolicy
// NSMergeByPropertyObjectTrumpMergePolicy ignores objects in the store
// (which have the same unique constraint value, here equal `storyURL`)
// and overwrites all properties.

// To confirm that the batch insert operation does not modify existing
// Story instances (at all), first delete all instances where
// isFavorite == false. Then load all the story data again and execute
// the NSBatchInsertRequest with this change to managedObjectHandler:
story.title = storyResponse.title + " (modified)"
You will see the missing stories get inserted back, this time with their titles carrying the suffix " (modified)"; but previously favorited stories do not get modified (basically, with this setup, the batch insert won't re-insert existing objects).
So the isFavorite property does not get overwritten, BUT neither do any properties that should be changed (because they received a new title, for example).
Therefore, if you don't want your existing objects to get updated, but you do want completely new objects to be inserted, you can use this approach.
However, if you are expecting your objects to require updates, here are some alternatives:
you may opt to run a separate update operation, maybe an NSBatchUpdateRequest, after you run your batch insert in this way;
or, after the batch insert, you can update certain properties in a simple loop in a (possibly background/child) context without a batch operation, which could be fine if there isn't tons of data;
lastly, you might be able to first batch insert new data to a temporary store, then somehow manually merge your choice of properties into the main store, and finally delete the temporary store.
A simpler approach: you could fetch all the properties you want to keep unchanged before you execute the batch insert (storing them in a dictionary keyed by your object's uniqueness constraint value), and then set those properties again during the batch insert, as sketched below.
For this approach, you will want to use a different merge policy such as NSMergeByPropertyObjectTrumpMergePolicy so that the updated object gets written to the store again (make sure to fetch every property that you don't want to lose in advance of the batch insert).
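A minimal sketch of that approach, assuming the Story entity and downloadedStories from above (untested, so treat it as an outline rather than a drop-in solution):
// 1. Fetch the values we must not lose, keyed by the unique constraint (storyURL).
let viewContext = container.viewContext
let favoritesRequest = NSFetchRequest<NSDictionary>(entityName: "Story")
favoritesRequest.resultType = .dictionaryResultType
favoritesRequest.propertiesToFetch = ["storyURL", "isFavorite"]
var favoriteByURL: [URL: Bool] = [:]
for row in (try? viewContext.fetch(favoritesRequest)) ?? [] {
    if let url = row["storyURL"] as? URL, let fav = row["isFavorite"] as? Bool {
        favoriteByURL[url] = fav
    }
}

// 2. In the batch insert handler, write the preserved value back explicitly,
//    since the full-row overwrite would otherwise reset it.
var index = 0
let batchInsert = NSBatchInsertRequest(entity: Story.entity()) { (managedObject: NSManagedObject) -> Bool in
    guard index < downloadedStories.count else { return true }
    let story = managedObject as! Story
    let storyResponse = downloadedStories[index]
    story.title = storyResponse.title
    story.storyURL = storyResponse.storyURL
    story.updatedTime = storyResponse.updatedTime
    story.isFavorite = favoriteByURL[storyResponse.storyURL] ?? false
    index += 1
    return false
}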
random idea: How to Save Data When Using One ManagedObjectContext and PersistentStoreCoordinator with Two Stores
I don't think it is actually possible to do a partial update with a batch insert request. It's hard to know for sure because I don't think any of this is documented except in WWDC sessions. When I first watched the 2019 session, I was excited because the presenter said:
Attributes that are optional or configured with default values can be omitted from the dictionary as well.
In the case of updating an object with unique constraint, the existing values will not be changed.
I took this to mean that:
You can omit values for new objects, and you'll get the defaults or NULL. That makes sense.
If there's an existing object and you omit a value, that value will not be changed. So you can purposely omit values to do a partial update, i.e. update other values while leaving your isFavorite alone.
But, after writing code to test this and looking at the output from com.apple.CoreData.SQLDebug, what actually seems to happen with NSMergeByPropertyObjectTrumpMergePolicy is:
If you omit a value that's required you get a validation error.
If you omit a value that's optional, it updates the row to NULL. For a Bool property in Swift, this will become false.
If you omit a value with a default value, it updates the row to the default.
This is a shame because it seems like partial updates could be implemented by having the ON CONFLICT clause only specify DO UPDATE SET for the attributes that you actually set. But (as of macOS 11) Core Data seems to always generate SQL to set all of the columns.
In summary, with batch inserts, NSMergeByPropertyObjectTrumpMergePolicy does not actually merge by property based on what's changed (like with a regular Core Data save). Rather, it either inserts a new row (if the object is absent) or overwrites all the columns but preserves the objectID (if the object was present).
NSMergeByPropertyStoreTrumpMergePolicy also doesn't merge by property. It just means to leave the stored object alone if it's already present.
Update (2021-06-24): I heard from DTS that Apple considers the current (iOS 14/macOS 11) behavior described above a bug, and that it should let you batch insert without changing omitted properties. The Radar number is 79747419.

Load product image everywhere in the Shopware 6 storefront from external URL at runtime without saving it in filesystem

I am changing the image of a product from an external URL at runtime on the saleschannel.product.load event. This all works fine, but when placing the order, it fails with:
SQLSTATE[23000]: Integrity constraint violation: 1452 Cannot add or update a child row: a foreign key constraint fails (`sv3_dev`.`order_line_item`, CONSTRAINT `fk.order_line_item.cover_id` FOREIGN KEY (`cover_id`) REFERENCES `media` (`id`) ON UPDATE CASCADE)
I am guessing this is because I am overwriting the media entity of the product with my custom implementation like this, so it does not find the media cover ID when inserting the order line item:
$pathInfo = pathinfo($url);

// Build a transient media entity from the external URL (never persisted).
$media = new MediaEntity();
$media->setId(Uuid::randomHex());
$media->setUrl($url);
$media->setMimeType(sprintf('image/%s', $pathInfo['extension']));
$media->setFileExtension($pathInfo['extension']);
$media->setFileName($pathInfo['filename']);

$productMediaEntity = new ProductMediaEntity();
$productMediaEntity->setId(Uuid::randomHex());
$productMediaEntity->setMedia($media);
$productMediaEntity->setPosition(0);

$mediaCollection = new ProductMediaCollection([$productMediaEntity]);
$entity->setMedia($mediaCollection);

if ($entity->getCover() === null) {
    $entity->setCover($productMediaEntity);
} else {
    $entity->getCover()->setMedia($productMediaEntity->getMedia());
}
Is there a way to dynamically change the image at run time everywhere in the storefront?
I cannot save the image / media in the filesystem due to some copyright clause which does not allow the images to be downloaded in the shop. We can only load it at runtime.
To anyone else who stumbles upon this: dynamically adding the media entity as described in the question works for the rest of the shop, but not when placing the order, because the order requires a persisted media ID due to the FK constraint. So what I did was create a media entity through the mediaRepository and use that ID as the reference for the order instead of the transient Uuid::randomHex(), without saving the actual image in the filesystem.
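A minimal sketch of that workaround, assuming an injected media.repository DAL service in $this->mediaRepository and a Context in $context; only a bare media record is persisted so the FK on order_line_item.cover_id can resolve, while the image itself still comes from the external URL at runtime:
// Persist an empty media record once and remember its ID.
$mediaId = Uuid::randomHex();
$this->mediaRepository->create([['id' => $mediaId]], $context);

// Use the persisted ID for the runtime entity instead of a throwaway one.
$media = new MediaEntity();
$media->setId($mediaId);
$media->setUrl($url); // still served from the external URL, nothing stored on disk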

Incremental loading in Azure Mobile Services

Given the following code:
listView.ItemsSource =
App.azureClient.GetTable<SomeTable>().ToIncrementalLoadingCollection();
We get incremental loading without further changes.
But what if we modify the read.js server side script to e.g. use mssql to query another table instead. What happens to the incremental loading? I'm assuming it breaks; if so, what's needed to support it again?
And what if the query used the untyped version instead, e.g.
App.azureClient.GetTable("SomeTable").ReadAsync(...)
Could incremental loading be somehow supported in this case, or must it be done "by hand" somehow?
Bonus points for insights on how Azure Mobile Services implements incremental loading between the server and the client.
The incremental loading collection works by sending the $top and $skip query parameters (those are also sent when you do a query by using the .Take and .Skip methods in the table). So if you want to modify the read script to do something other than the default behavior, while still maintaining the ability to use that table with an incremental loading collection, you need to take those values into account.
To do that, you can ask for the query components, which will contain the values, as shown below:
function read(query, user, request) {
    var queryComponents = query.getComponents();
    console.log('query components: ', queryComponents); // useful to see all information
    var top = queryComponents.take;
    var skip = queryComponents.skip;
    // do whatever you want with those values, then call request.respond(...)
}
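For example, a sketch of a read override that queries a different table while honoring the paging parameters (assuming the classic Azure Mobile Services script environment, where mssql and statusCodes are available as globals; OtherTable and its id column are placeholders):
function read(query, user, request) {
    var components = query.getComponents();
    var top = components.take || 50; // $top
    var skip = components.skip || 0; // $skip
    var sql = "SELECT * FROM OtherTable ORDER BY id " +
              "OFFSET ? ROWS FETCH NEXT ? ROWS ONLY";
    mssql.query(sql, [skip, top], {
        success: function (results) {
            request.respond(statusCodes.OK, results);
        }
    });
}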
The way it's implemented at the client is by using a class which implements the ISupportIncrementalLoading interface. You can see it (and the full source code for the client SDKs) in the GitHub repository, or more specifically the MobileServiceIncrementalLoadingCollection class (the method is added as an extension in the MobileServiceIncrementalLoadingCollectionExtensions class).
And the untyped table does not have that method - as you can see in the extension class, it's only added to the typed version of the table.

CRM 2011 JavaScript How to access data stored in an entity passed from a lookup control?

As the question suggests, I need to find out how to access entity data that has been passed into a JavaScript function via a lookup.
JavaScript Code Follows:
// function to generate the correct Weighting Value when these parameters change
function TypeAffectedOrRegionAffected_OnChanged(ExecutionContext, Type, Region, Weighting, Potential) {
var type = Xrm.Page.data.entity.attributes.get(Type).getValue();
var region = Xrm.Page.data.entity.attributes.get(Region).getValue();
// if we have values for both fields
if (type != null && region != null) {
// create the weighting variable
var weighting = type[0].name.substring(4) + "-" + region;
// recreate the Weighting Value
Xrm.Page.data.entity.attributes.get(Weighting).setValue(weighting);
}
}
As you can see in the following line, using the name property I can access my Type entity's Type field.
// create the weighting variable
var weighting = type[0].name.substring(4) + "-" + region;
I am now looking for a way to access the values stored inside my type object. It has the following fields: new_type, new_description, new_value and new_kind.
I guess I'm looking for something like this:
// use value of entity to assign to our form field
Xrm.Page.data.entity.attributes.get(Potential).setValue(type[0].getAttribute("new_value"));
Thanks in advance for any help.
Regards,
Comic
REST/OData calls are definitely the way to go in this case. You already have the id, and you just need to retrieve some additional values. Here is a sample to get you started. The hardest part of working with OData, IMHO, is creating the request URLs. There are a couple of tools you can find on CodePlex, but my favorite is actually LinqPad: just connect to your org's OData URL, and it'll retrieve all of your entities and let you write a LINQ statement that generates the URL for you, which you can test right in the browser.
For your instance, it'll look something like this (it is case sensitive, so double check that if it doesn't work):
"OdataRestURL/TypeSet(guid'" + type[0].id.replace(/{/gi, "").replace(/}/gi, "") + "')?$select=new_type,new_description,new_value,new_kind"
Replace OdataRestURL with whatever your OData REST endpoint is, and you should be all set.
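For completeness, a sketch of issuing that request and consuming the result (the entity set name new_typeSet and the CRM 2011 OData path are assumptions; adjust them to your organization):
function retrieveTypeValues(lookupId, onSuccess) {
    var serverUrl = Xrm.Page.context.getServerUrl();
    var id = lookupId.replace(/[{}]/g, "");
    var url = serverUrl +
        "/XRMServices/2011/OrganizationData.svc/new_typeSet(guid'" + id + "')" +
        "?$select=new_type,new_description,new_value,new_kind";
    var req = new XMLHttpRequest();
    req.open("GET", url, true);
    req.setRequestHeader("Accept", "application/json");
    req.onreadystatechange = function () {
        if (req.readyState === 4 && req.status === 200) {
            // The OData v2 payload wraps the record in a "d" property.
            onSuccess(JSON.parse(req.responseText).d);
        }
    };
    req.send();
}
// e.g. inside the onSuccess callback:
// Xrm.Page.data.entity.attributes.get(Potential).setValue(result.new_value);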
Yes, Guido Preite is right. You need to retrieve the entity by the id that comes from the lookup via REST, either synchronously or asynchronously, and then read the returned JSON object. To keep the returned object light, you can specify which fields should come back as part of the JSON. Then you can access the fields you want.

How to handle unused Managed Metadata Terms without a WssId?

The Problem
We upload (large amounts of) files to SharePoint using FrontPage RPC (the put documents call). As far as we've been able to find out, setting the value of taxonomy fields through this protocol requires their WssId.
The problem is that unless terms have been explicitly used before on a list item, they don't seem to have a WssId. This causes uploading documents with previously unused metadata terms to fail.
The Code
The call TaxonomyField.GetWssIdsOfTerm in the code snippet below simply doesn't return an ID for those terms.
SPSite site = new SPSite("http://some.site.com/foo/bar");
SPWeb web = site.OpenWeb();

TaxonomySession session = new TaxonomySession(site);
TermStore termStore = session.TermStores[new Guid("3ead46e7-6bb2-4a54-8cf5-497fc7229697")];
TermSet termSet = termStore.GetTermSet(new Guid("f21ac592-5e51-49d0-88a8-50be7682de55"));
Guid termId = new Guid("a40d53ed-a017-4fcd-a2f3-4c709272eee4");

int[] wssIds = TaxonomyField.GetWssIdsOfTerm(site, termStore.Id, termSet.Id, termId, false, 1);
foreach (int wssId in wssIds)
{
    Console.WriteLine(wssId);
}
We also tried querying the taxonomy hidden list directly, with similar results.
The Cry For Help
Both confirmation and advice on how to tackle this would be appreciated. I see three possible routes to a solution:
Change the way we are uploading, either by uploading the terms in a different way, or by switching to a different protocol.
Query for the metadata WssIds in a different way. One that works for unused terms.
Write/find a tool to preresolve WssIds for all terms. Suggestions on how to do this elegantly are most welcome.
Setting the WssId value to -1 should help you. I had a similar problem (copying documents containing metadata fields between two different web applications). I've spent many hours solving strange metadata issues; in the end, setting the value to -1 solved all of them. Even when GetWssIdsOfTerm returns a value, I use -1 and it works correctly.
Probably there is some background logic that takes care of the WssId.
Radek
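For illustration, a sketch of the resulting field value: taxonomy fields accept a text form "WssId;#Label|TermGuid", and with -1 as the WssId SharePoint resolves (or creates) the lookup item itself. The term lookup below reuses the objects from the question's snippet:
Term term = termSet.GetTerm(termId);
string fieldValue = string.Format("-1;#{0}|{1}", term.Name, term.Id);
// e.g. "-1;#SomeLabel|a40d53ed-a017-4fcd-a2f3-4c709272eee4"
// Send this string as the taxonomy field's value in the RPC call.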
