NSUndoManager does not seem to post NSManagedObjectContextDidSaveNotification on undo or redo - core-data

My app performs certain actions in a method that observes the NSManagedObjectContextDidSaveNotification notification (i.e. it runs on save). It also uses an NSUndoManager, which is undoing/redoing happily. I had expected the 'did save' notification to be posted whenever an undo or redo occurred (in cases where the undo/redo affected the Core Data store), but that doesn't seem to be happening.
Is it reasonable to expect the NSManagedObjectContextDidSaveNotification to be posted for undo/redo? If not, is there a way we can determine what was undone/redone after the fact (NSUndoManagerDidUndoChangeNotification does not appear to expose that info)?

It seems I was mistaken in my assumption that undo/redo should have anything to do with saving the NSManagedObjectContext.
Because my app auto-saves, my beleaguered brain thought that the undo/redo would auto-save too. Clearly this is not so.
I'm now observing the NSUndoManagerDidUndoChangeNotification and NSUndoManagerDidRedoChangeNotification notifications and manually saving there, which is (currently) giving me the behaviour I expect.
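For anyone finding this later, the manual save looks roughly like this (a Swift sketch; it assumes a managedObjectContext property whose attached undo manager is the one doing the undo/redo):

func observeUndoAndRedo() {
    let center = NotificationCenter.default
    let undoManager = managedObjectContext.undoManager
    center.addObserver(self, selector: #selector(undoOrRedoDidChange(_:)),
                       name: .NSUndoManagerDidUndoChange, object: undoManager)
    center.addObserver(self, selector: #selector(undoOrRedoDidChange(_:)),
                       name: .NSUndoManagerDidRedoChange, object: undoManager)
}

@objc func undoOrRedoDidChange(_ note: Notification) {
    // Saving here makes Core Data post NSManagedObjectContextDidSaveNotification,
    // so the existing did-save handling also runs after each undo/redo.
    guard managedObjectContext.hasChanges else { return }
    do {
        try managedObjectContext.save()
    } catch {
        print("Save after undo/redo failed: \(error)")
    }
}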

Related

How do I Update the value of a Characteristic from a Homebridge Plugin?

I have a plugin for Homebridge that operates shades, i.e. WindowCovering Services. It basically imitates the remote for the shades. The remote has 16 channels, and one of them, 0, operates all shades. Each shade/channel is an Accessory on a Dynamic Platform. As the shades move, I am updating the CurrentPosition and PositionState Characteristics. This seems to work fine now. However, some updates never seem to reach HomeKit.
When multiple shades/channels are moving at the same time, this shows in the Home app as "Opening" or "Closing". When the PositionState is updated to Stopped, the icons show the current percentage open. However, the updates for some shades get lost.
I thought perhaps a delay between update calls was required, so I implemented a scheme that prevents calls from being made too close together, with a configurable delay. That seemed to improve things, but updates are still lost and I don't really know whether the delay is actually required.
All PositionState updates go through this code. I have been debugging this issue for quite a while and am convinced the code is executed, but I can't figure out why the Home app does not see the Stop.
updateStateCB() {
  // Push the latest PositionState value to HomeKit via the characteristic.
  this.service
    .getCharacteristic(this.platform.Characteristic.PositionState)
    .updateValue(this.positionState);
  this.logTimeCh('Update state:' + this.positionState);
}
Where might I be going wrong here? Is the delay between calls required? Is there a bug in HomeKit somewhere?
Thanks

Calling Save Changes Multiple Times

I was wondering if anyone has done any perf tests around the effect calling EF Core's SaveChangesAsync() has on performance when there are no changes to be saved.
Essentially I am assuming it's basically nothing and therefore isn't a big deal to call it "just in case"?
(I am trying to track user activity in middleware in ASP.NET Core, and essentially on the way out I want to make sure SaveChanges was called so the activity is persisted to the database. There is a chance that it has already been called on the context, depending on what the user did, and in that case I don't want to incur the cost of a second operation when the activity could be persisted as part of the normal transaction/round trip.)
As you can see in the implementation, if there are no changes, nothing is done. How much that costs in practice I don't know, but of course calling SaveChanges or SaveChangesAsync with no pending changes still has some overhead compared to not calling them at all.
That's the same behaviour EF6 has too.

After saving Core Data, binding of NSManagedObject does not work

After saving the context, the UITableViewCell's data is not being refreshed, because binding is not triggered for the UITextFields contained in the cell. Have you seen any similar side effect so far?
The problem was that I executed a very large number of operations/queries on the persistent store from observeValueForKeyPath:, and observeValueForKeyPath: was triggered too many times. The system in the background may simply have become overloaded, and I think this produced the strange side effects.

CQRS - When to send confirmation message?

Example: Business rules states that the customer should get a confirmation message (email or similar) when an order has been placed.
Let's say that a NewOrderRegisteredEvent is dispatched from the domain and is picked up by an event listener that sends off the confirmation message. When that is done, some other event handler throws an exception or something else goes wrong and the unit of work is rolled back. We've now sent the user a confirmation message for something that was rolled back.
What is the "cqrs" way of solving problems like this where you want to do something after a unit of work has been committed? Another complicating factor is replaying of events. I don't want old confirmation messages to be re-sent whenever I replay recorded events in order to build a new view / projection.
My best theory so far: I've just started to look into the fascinating world of CQRS and was wondering whether this is something that would be implemented as a saga? If a saga is like a state machine where each transition can only take place a single time, then I guess that would solve this problem? I just have a hard time visualizing how this fits together with the command bus and domain events.
An Event should only occur after the transaction has been completed. If anything goes wrong and there's a rollback, then the event didn't occur from an external point of view. Therefore it shouldn't be published at all. Though an OrderRegistrationFailed event could be published if necessary.
You wouldn't want the mail to be sent unless the command has successfully been executed.
First, a few reasons why the command handler -- as proposed in another answer -- would be the wrong place: under some circumstances the command handler wouldn't be able to tell whether the command will eventually succeed or not. Having the command handler invoke the mail sending would also put process knowledge inside the command handler, which would break the SRP and couple business rules too tightly to the application layer.
The mail should be sent after the fact, i.e. from an event handler.
To prevent this handler from firing during replay, you can just not register it. This works similar to how you test your application. You only register the handlers that you actually need.
Production system -> register all event handlers
Tests -> register only the tested event handlers
Replay -> register only the projection/denormalization handlers
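To illustrate the registration idea above, here is a rough Swift sketch; every name in it (the bus, the event, the handlers, the run modes) is invented for the example and not taken from any particular framework:

import Foundation

struct NewOrderRegisteredEvent {
    let orderID: UUID
    let customerEmail: String
}

final class EventBus {
    private var handlers: [(NewOrderRegisteredEvent) -> Void] = []
    func register(_ handler: @escaping (NewOrderRegisteredEvent) -> Void) {
        handlers.append(handler)
    }
    func publish(_ event: NewOrderRegisteredEvent) {
        handlers.forEach { $0(event) }
    }
}

func updateOrderListProjection(with event: NewOrderRegisteredEvent) {
    // Update the read model / denormalized view here.
}

func sendConfirmationMail(to address: String) {
    // Hand the message off to the mail subsystem.
}

enum RunMode { case production, test, replay }

func makeBus(for mode: RunMode) -> EventBus {
    let bus = EventBus()
    // Projection/denormalization handlers are safe in every mode, including replay.
    bus.register { updateOrderListProjection(with: $0) }
    if mode == .production {
        // The side-effecting handler is only registered in production, so replaying
        // recorded events never re-sends old confirmation messages.
        bus.register { sendConfirmationMail(to: $0.customerEmail) }
    }
    return bus
}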
Another - even more loosely coupled, though a bit more complex - possibility would be to have a Saga handle the NewOrderRegisteredEvent and issue a SendMail command to the appropriate bounded context (thanks, Yves Reynhout, for pointing this out in the question's comments).
There are two likely solutions:
1) The publishing of the event and the handling of the event (i.e. the email) are part of a single transaction. In this case, your transaction framework takes care of it for you. If the email fails, then the event is rolled back. You'll likely retry the command. This is conceptually clean and easy to think about. No event is finished publishing until everyone that has something to say about it has had their say. However practically speaking, this can be painful, as it typically involves distributed transactions. These are hard to come by. Can your email client enroll in the same transaction as the database which is holding your events?
2) The publishing of the event is transactional, but the event handlers each deal with transactions in their own way. The event handler which sends emails could keep track of which events it had seen. If it crashed, it would request old events and process them. You could make a business decision as to how big a deal it would be if people had missing or duplicate emails. (For money-related transactions, the answer is probably you shouldn't allow it.)
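A rough Swift sketch of the bookkeeping in option (2); the types and names are invented for illustration:

import Foundation

struct OrderEvent {
    let id: UUID              // every event carries a stable identifier
    let customerEmail: String
}

final class ConfirmationMailHandler {
    // In real code this set would be persisted with the handler's state so it
    // survives a crash and a subsequent re-delivery of old events.
    private var processedEventIDs = Set<UUID>()

    func handle(_ event: OrderEvent) {
        // Skip events already handled; duplicates can arrive after a crash,
        // a recovery or a replay.
        guard !processedEventIDs.contains(event.id) else { return }
        sendMail(to: event.customerEmail)
        processedEventIDs.insert(event.id)
    }

    private func sendMail(to address: String) {
        // Hand off to the actual mail system / request queue.
    }
}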
Solution (2) is typically what you see promoted in DDD/CQRS circles as it's the more loosely coupled solution. Solution (1) is quite practical in a small system where the event store and the projections are in a single database and the projections don't change often. Solution (2) allows a diversity of event handlers to work in their own way. Solution (1) can cause lots of non-overlapping concerns to become entangled. In this case your order business rules don't complete until the many bizarre things that happen in emailing are taken care of. For one thing, it may slow you down quite a bit.
If the sending of the email were more interesting than "saw the event, sent the email", then you're right, you might have a saga or workflow on your hands. Email in large operations is often a complex system in its own right which you're unlikely to have to implement much of. You just need to be sure you put your email into a request queue of some sort (using approach (2)), and the email system is likely to do retries/batching/spam avoidance/working overnight/etc.

Core Data: awakeFromFetch Not Getting Called For Unsaved Contexts

First, let me illustrate the steps to reproduce the 'bug'.
Create a new NSManagedObject.
Fault the managed object using refreshObject:mergeChanges:NO - At this time, the didTurnIntoFault notification is received by the object.
'Unfault' the object again by using willAccessValueForKey:nil - At this time, the awakeFromFetch notification is supposed to be received BUT NO NOTIFICATION COMES. All code relying on it firing fails, and the bread in the toaster burns :)
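In Swift terms the three steps look roughly like this (Note is a hypothetical entity subclass, and context stands for the managed object context):

import CoreData

final class Note: NSManagedObject {
    @NSManaged var title: String?
    override func awakeFromFetch() { super.awakeFromFetch(); print("awakeFromFetch") }
    override func didTurnIntoFault() { super.didTurnIntoFault(); print("didTurnIntoFault") }
}

// 1. Create a new managed object (never saved).
let note = Note(context: context)
note.title = "Unsaved"

// 2. Turn it into a fault; didTurnIntoFault fires here.
context.refresh(note, mergeChanges: false)

// 3. Fire the fault again; awakeFromFetch is expected here, but for an object
//    that was not saved before step 2 it never arrives.
note.willAccessValue(forKey: nil)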
The interesting thing is that if I 'save' the managed object context before performing step 2, everything works okay and the awakeFromFetch notification comes as expected.
Currently the workaround I am using is 'saving' the context at regular intervals, but that is more of a hack, since we actually only need to save the context once (when the application terminates).
Googling has so far returned nothing concrete, except a gentleman here that seems to have run into the same problem.
So my question is twofold - is this really a bug, and if it is, what other walkarounds (sic) do you suggest?
EDIT: THIS IS NOT A BUG BUT THAT WAS JUST ME BEING STUPID. See, if I turn an object into a fault without saving it, then there is no history of the object to maintain. So in this case (i.e. for an unsaved object) there is no logical concept of awakeFromFetch (since it was never saved). Please do let me know if I am still getting it all mixed up.
Anyways, turns out my 'actual' problem was somewhere else - hidden well behind two gotchas:
If you use refreshObject:mergeChanges:NO to turn an object into a fault in order to break any retain cycles that Core Data might have established, you have to do the same for the child objects as well - each child object that might have got involved in a retain cycle with something else has to be faulted manually too. What I had (wrongly) assumed was that faulting the parent would automatically break the cycles amongst the children.
The reverseTransform function of your custom transformers will NOT be called when such an object (i.e. one which has been forcefully faulted) is resurrected by firing a fault on it. This in my eyes IS a bug, since there is no other way for me to know when the object is alive again. Anyways, the workaround in this case was to set the staleness interval to an arbitrarily low value so that Core Data skips its cache and always calls the reverseTransform function to resurrect the object. Better suggestions are welcome.
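For what it's worth, the two workarounds translate roughly into this (a Swift sketch; Parent, children and context are placeholders for the real model and context):

import CoreData

// Stand-in for the real entity; children represents a to-many relationship.
final class Parent: NSManagedObject {
    @NSManaged var children: Set<NSManagedObject>
}

func faultOutAndInvalidateCache(for parent: Parent, in context: NSManagedObjectContext) {
    // Gotcha 1: manually fault each child that may be caught in a retain cycle,
    // then fault the parent; faulting only the parent is not enough.
    for child in parent.children {
        context.refresh(child, mergeChanges: false)
    }
    context.refresh(parent, mergeChanges: false)

    // Gotcha 2: make cached row data go stale immediately, so firing a fault later
    // goes back through the persistent store (and the custom transformer) instead
    // of reusing the cached snapshot.
    context.stalenessInterval = 0.0
}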
it really has been one of those days :)
