When an orphaned object is deleted, are deletion rules applied? - core-data

Let's say I have a class called Directory and another called File, and they have a one-to-many relationship. (A Directory has many Files).
In the restkit docs for RKManagedObjectRequestOperation:
An NSOperation subclass that sends an HTTP request and performs object mapping on the parsed response body to create NSManagedObject instances, establishes relationships between objects using RKConnectionDescription objects, and cleans up orphaned objects that no longer exist in the remote backend system.
My question is whether the deletion rules are applied by RestKit when an orphaned object is found and deleted. In this case, would all the Files get deleted (if I use the Cascade deletion rule)?

Yes. The deletion rules are part of Core Data, not RestKit, so they will be applied.
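For illustration, a minimal sketch (the method is hypothetical; it assumes "files" is Directory's to-many relationship with its delete rule set to Cascade in the managed object model) of how Core Data applies the rule when the deletion is committed, no matter which framework initiated the delete:

#import <CoreData/CoreData.h>

// Sketch: the Cascade rule is evaluated by Core Data itself at save time.
- (void)deleteOrphanedDirectory:(NSManagedObject *)directory
                      inContext:(NSManagedObjectContext *)context
{
    [context deleteObject:directory];     // marks the Directory for deletion

    NSError *error = nil;
    if (![context save:&error]) {         // the Cascade rule runs here:
        NSLog(@"Save failed: %@", error); // every related File is deleted too
    }
}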

How to enforce no-cycle constraint in domain-driven-design?

I am still new to DDD. In the domain model, I have an aggregate for a Component, which can either consist of sub-components or itself be a sub-component of another parent Component. In other words, I have a hierarchy of components.
Two actions are needed: AddSubComponent() and AddToParent(). How do I enforce the invariant that no cycle shall exist in the hierarchy when executing these two actions? Should it be enforced via domain events? Or should the aggregate hold references to its parents and children, and recursively check the nested Components within the aggregate itself to see whether the operation would create a cycle?
Note: I am using C# and persisting the domain model using EF Core.
Edit
I solved the problem by treating each component as an aggregate. Each has a reference to its parents and children.
AddSubComponent(Component c)
Returns early if c is already a sub-component; otherwise it adds c as a sub-component after checking against the component's parents and the children of c. It then fires a domain event, which is handled by the Application layer to verify that the whole graph contains no cycles and to call c.AddToParent().
AddToParent(Component c)
Applies the same logic as the action above, but only adds to the parent; it also fires a domain event, handled at the Application layer as well, whose only job is to call parent.AddSubComponent().
This way, calling either method ensures atomic consistency for the aggregate itself, while eventual consistency of the full hierarchy is ensured by the application layer. All of the operations above are also wrapped in a transaction, so only valid hierarchies are eventually persisted.
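A minimal, single-parent sketch of the recursive check described above (the type and member names are illustrative, not the poster's actual model):

using System;
using System.Collections.Generic;

// Sketch: walk the parent chain to reject any add that would create a cycle.
public class Component
{
    private readonly List<Component> _children = new List<Component>();
    public Component Parent { get; private set; }

    public void AddSubComponent(Component c)
    {
        if (_children.Contains(c)) return;   // already a direct child
        if (c == this || IsAncestor(c))      // adding self or an ancestor
            throw new InvalidOperationException("Operation would create a cycle.");

        _children.Add(c);
        c.Parent = this;
        // The real model would also raise a domain event here so the
        // Application layer can re-validate the whole graph eventually.
    }

    // True if 'candidate' appears anywhere up the parent chain.
    private bool IsAncestor(Component candidate)
    {
        for (var p = Parent; p != null; p = p.Parent)
            if (p == candidate) return true;
        return false;
    }
}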
I would rather implement a one-way relationship in this case. When using events, like you mentioned, it would be trivial, as you can emit an event after executing AddSubComponent, so AddToParent will happen asynchronously, as a reaction. However, producing events in a system that just persists state is not trivial, as you must avoid unreliable two-phase commits by using an outbox or CDC.
If you only maintain the list of children for a parent, that won't be necessary. It might, of course, not fit your context, as cases like ensuring that a component has only one parent won't be possible inside the model itself; they would require a domain service that queries the component's parents to see how many there are.
You can enforce it right in the methods themselves.
AddSubComponent
In the AddSubComponent method, check whether the component you are adding already exists as a child, or as a child of a child.
AddToParent
Call the "AddSubComponent" of the parent itself and the check will be done as described above.

Data Repository using MemoryCache

I built a homebrew data entity repository with a factory that defines a retention policy by type (e.g. absolute or sliding expiration). The policy also specifies the cache type: HttpContext request, session, or application. A MemoryCache is maintained by a caching proxy for all three cache types. Anyhow, I have a data entity service tied to the repository which does the load and save for our primary data entity. The idea is that you use the entity repository and don't need to care whether the entity is cached or retrieved from its data source (a db in this case).
An obvious assumption is that you need to synchronise the load/save events, as you need to save the cached entity before loading the entity from its data source.
So I was investigating a data integrity issue in production today... :)
Today I read that there can be a good long gap between the entity being removed from the MemoryCache and the CacheItemRemovedCallback event firing (by default up to 20 seconds). The simple lock I had around the load and save data ops was insufficient. Furthermore, the CacheItemRemovedCallback runs in its own context outside of HttpContext, which makes things interesting. It meant I needed to make the callback function static, as I was potentially assigning a disposed instance to the event.
So I realised that this gap, where my data entity no longer exists in the cache but might not yet have been saved to its data source, might explain the 3 corrupt orders out of 5000. While filling out a long form, it would be easy to keep working beyond the policy's 20-minute sliding expiration on the primary data entity. That means if the user happens to submit at the very moment of expiration, an interesting race condition emerges between the load (via the request context) and the save (via the cache-expired callback).
With a simple lock it was a roll of the dice: would the save or the load win? Clearly we need the save to complete before the next load from the data source (db). Ideally, when an item expires from the cache it would be atomically written to its data source. With the entity gone from the cache but the expired callback not yet fired, a load operation can slip in. In that case the entity will not be found in the cache, so the repository defaults to loading from the data source. However, the save operation may not have commenced, resulting in data integrity corruption, and the stale load will likely clobber your now-saved cached data.
To accomplish synchronisation I needed a named signalling lock, so I settled on EventWaitHandle. A named lock is created per user, of which there are fewer than 5000. This allows the Load to wait on a signal from the expired event, which Saves the entity (and whose thread exists in its own context outside HttpContext). So in the save it is easy to grab the existing named handle and signal the Load to continue once the Save is complete.
I also have a redundancy where the wait times out, logging every 10 seconds that the save operation blocks. As I said, the default is meant to be up to 20 seconds between an entity being removed from the MemoryCache and the cache noticing and firing the event, which in turn saves the entity.
Thank you to anyone who followed my ramblings through all that. Given the nature of the sync requirements, was the EventWaitHandle lock the best solution?
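For reference, a minimal sketch of the named-EventWaitHandle coordination described above (all names are illustrative, not the production code):

using System;
using System.Threading;

// Sketch: a per-user named event gates Loads while an expiry-Save is running.
static class EntitySync
{
    static EventWaitHandle HandleFor(string userId) =>
        new EventWaitHandle(true, EventResetMode.ManualReset,
                            @"Global\EntitySave_" + userId);

    // Called from the cache-expired callback thread (outside HttpContext).
    public static void SaveOnExpiry(string userId, Action save)
    {
        var handle = HandleFor(userId);
        handle.Reset();            // loads will now wait
        try { save(); }
        finally { handle.Set(); }  // signal: safe to load from the db again
    }

    // Called from the request thread before loading from the data source.
    // Returns false on timeout; the real code logs each 10-second block.
    public static bool WaitForPendingSave(string userId) =>
        HandleFor(userId).WaitOne(TimeSpan.FromSeconds(10));
}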
For completeness I wanted to post what I did to address the issue. I made multiple changes to the design to create a tidier solution which did not require a named sync object and allowed me to use a simple lock instead.
First, the data entity repository is a singleton which was stored in the request cache. This front end of the repository is detached from the caches themselves. I changed it to reside in the session cache instead, which becomes important below.
Second, I changed the expired-entity event to route through the data entity repository above.
Third, I changed the MemoryCache event from RemovedCallback to UpdateCallback**.
Last, we tie it all together with a regular lock in the data entity repository, which lives in the user's session; the gap-less expiry event routes through the same repository, allowing the lock to cover both the load and save (expire) operations.
** These events are funny in that A) you can't subscribe to both, and B) UpdateCallback is called before the item is removed from the cache, but it is not called when you explicitly remove the item (i.e. myCache.Remove(entity) will fire RemovedCallback but not UpdateCallback). We made the decision that if the item was being forcefully removed from the cache, we didn't care. This happens when the user changes company or clears their shopping list. Since these scenarios won't fire the event, the entity may never be saved to the DB's cache tables. While that might have been nice for debugging purposes, it wasn't worth dealing with the limbo state of an entity's existence to use the RemovedCallback, which had 100% coverage.
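A condensed sketch of that wiring with System.Runtime.Caching (the key and the save helper are hypothetical):

using System;
using System.Runtime.Caching;

static class EntityCache
{
    // Route expiry through UpdateCallback, which fires *before* the entry
    // leaves the cache, closing the removal/callback gap described above.
    public static void CacheEntity(string key, object entity)
    {
        var policy = new CacheItemPolicy
        {
            SlidingExpiration = TimeSpan.FromMinutes(20),
            // A CacheItemPolicy may set RemovedCallback or UpdateCallback,
            // but not both.
            UpdateCallback = args =>
            {
                SaveEntityToDatabase(args.Key); // hypothetical: persist under
                                                // the same session-scoped lock
                args.UpdatedCacheItem = null;   // null: let the entry go
            }
        };
        MemoryCache.Default.Set(key, entity, policy);
    }

    static void SaveEntityToDatabase(string key) { /* hypothetical save */ }
}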

RestKit two consecutive enqueueBatchOfObjectRequestOperations not directly mapped to core data

I'm using RestKit 0.20.2 in combination with MagicalRecord (important for any eventual context problems that you can think of).
My app saves tickets (entity) with items (entity), and each item has a tax (entity). My use case is this: I need to sync Core Data with my web server when the iPad reconnects to the internet after a long period of being unable to send the data (for whatever reason).
My problem is syncing a lot of objects (it can go from 100 to 1,000 or even more). To be able to post that many objects without timeouts, I set the RestKit concurrency:
[RKObjectManager sharedManager].operationQueue.maxConcurrentOperationCount = 3;
Now this is working absolutely fine. But my problem is that I have a lot of redundant entities that sync with every item.
For instance, each item has a tax, but I only have two taxes in my model that need to be synced with the web service and then sent as a relationship of the item (I only put the id of the tax). So to circumvent that problem, for each postItem I check if the related Tax has an ID: if yes, I can post the item directly with the tax relationship in it; if not, I need to sync the tax first, then the item with the returned taxID.
The workaround is working as expected too. But again there is a problem: RestKit isn't saving the newly created taxID between two requests, so instead of sending the tax once, it sends it every time it encounters it inside an item, and only when all the operations are done does it save the newly created taxIDs.
To improve that, I dug a bit into RestKit and found:
- (void)enqueueBatchOfObjectRequestOperations:(NSArray *)operations
                                     progress:(void (^)(NSUInteger numberOfFinishedOperations, NSUInteger totalNumberOfOperations))progress
                                   completion:(void (^)(NSArray *operations))completion
So now I'm building RKManagedObjectRequestOperations for my tax entities and batching them.
Then I sync the items. It's more efficient, and I don't need to set a dependency between operations (because I need them to be executed in a certain order: taxes, then items, then the whole ticket).
PROBLEM: Between the two enqueueBatchOfObjectRequestOperations calls, RestKit doesn't map the result of the first batch immediately, even if I explicitly call
[[RKManagedObjectStore defaultStore].mainQueueManagedObjectContext saveToPersistentStore:&error]
It isn't mapped because after the first batch of taxes I send all the items, and I can see that the taxIDs aren't set; but after all the batches are done, I can clearly see them mapped correctly in my Core Data file. So I thought it was a context issue, but then I dug into RestKit, and more specifically into
appropriateObjectRequestOperationWithObject:(id)object
                                     method:(RKRequestMethod)method
                                       path:(NSString *)path
                                 parameters:(NSDictionary *)parameters
I can see line 580:
NSManagedObjectContext *managedObjectContext = [object respondsToSelector:@selector(managedObjectContext)] ? [object managedObjectContext] : self.managedObjectStore.mainQueueManagedObjectContext;
That sets the mainQueueManagedObjectContext (and not the object's context) for the operations (I've checked with breakpoints), so calling save or saveToPersistentStore should propagate the changes from child contexts to the main queue, and... that's where I lost hope and turned to Stack Overflow ;)
As it usually happens, I found the solution after posting on SO :)
The problem was that the RKManagedObjectRequestOperations were all created before RestKit actually sent the information. So the context was the same for all the requests (as mentioned in the appropriateObjectRequestOperationWithObject method), and the changes weren't propagated because the context reference was the "old" reference.
To have the information available for the next requests, I just build the RKManagedObjectRequestOperations in the enqueueBatchOfObjectRequestOperations completion block, and now everything is working fine with the newly created taxID ;)
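A condensed sketch of that fix (the -taxOperations and -itemOperations helpers are hypothetical stand-ins for building the RKManagedObjectRequestOperations):

// Build the second batch only in the completion block of the first, so the
// item operations are created after the tax IDs have been mapped and saved.
- (void)syncTaxesThenItems
{
    RKObjectManager *manager = [RKObjectManager sharedManager];
    [manager enqueueBatchOfObjectRequestOperations:[self taxOperations]
                                          progress:nil
                                        completion:^(NSArray *operations) {
        // Created only now: these operations see the freshly saved tax IDs.
        [manager enqueueBatchOfObjectRequestOperations:[self itemOperations]
                                              progress:nil
                                            completion:^(NSArray *itemOps) {
            NSLog(@"Taxes, items and tickets synced");
        }];
    }];
}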

Core Data: Deleting causes 'NSObjectInaccessibleException' from NSOperation with a reference to a deleted object

My application has NSOperation subclasses that fetch and operate on managed objects. My application also periodically purges rows from the database, which can result in the following race condition:
A background operation fetches a bunch of objects (from a thread-specific context). It will iterate over these objects and do something with their properties.
A bunch of rows are deleted in the main managed object context.
The background operation accesses a property on an object that was deleted from the main context. This results in an 'NSObjectInaccessibleException', reason: 'CoreData could not fulfill a fault'
Ideally, the objects that are fetched by the NSOperation can be operated on even if one is deleted in the main context. The best ways I can think of to achieve this are to either:
Call [request setReturnsObjectsAsFaults:NO] to ensure that Core Data won't try to fulfill a fault for an object that no longer exists in the main context. The problem here is I may need to access the object's relationships, which (to my understanding) will still be faulted.
Iterate through the managed objects up front and copy the properties I will need into separate non-managed objects. The problem here is that (I think) I will need to synchronize/lock this part, in case an object is deleted in the main context before I can finish copying.
Am I missing something obvious? It doesn't seem like what I'm trying to accomplish is too out of the ordinary. Thanks for your help.
You said each thread has its own context. That's good. However, they also need to stay synchronized with changes to each other (how depends on their hierarchy).
Are they all assigned to the same persistent store coordinator, or do they have parent/child relationships?
Siblings should monitor NSManagedObjectContextObjectsDidChangeNotification from other siblings. Parents will automatically get notified when a child context saves.
I ended up mitigating this by performing both fetches and deletes on the same queue.
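A minimal sketch of that mitigation (the queue name and block contents are illustrative):

// Serialise fetch work and purges on one queue so a purge can never run
// while a background operation is still iterating its fetched objects.
NSOperationQueue *dataQueue = [[NSOperationQueue alloc] init];
dataQueue.maxConcurrentOperationCount = 1; // serial: operations run in order

[dataQueue addOperationWithBlock:^{
    // fetch objects and operate on their properties
}];
[dataQueue addOperationWithBlock:^{
    // purge rows; runs only after the fetch work above has finished
}];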
Great question; I can only provide a partial answer and would really like to know this too. Unfortunately, your own solution is more of a workaround than a real answer. What if the background operation is very long and you can't resort to running it on the main thread?
One thing I can say is that you don't have to call [request setReturnsObjectsAsFaults:NO], since the fetch request will load the data into the row cache and will not go back to the database when a fault fires for one of the fetched objects (see Apple's documentation for NSFetchRequest). This doesn't help with relationships, though.
I've tried the following:
On NSManagedObjectContextWillSave notification, wait for the current background task to finish and prevent new tasks from starting with something like
- (void)contextWillSave:(NSNotification *)notification
{
    // Wait for the background queue to drain, then stop new tasks starting.
    dispatch_sync(self.backgroundQueue, ^{
        self.suspendBackgroundOperation = YES;
    });
}
Unset suspendBackgroundOperation on the NSManagedObjectContextDidSave notification.
However, the dispatch_sync call introduces possible deadlocks, so this doesn't really work either (see my related question). Plus, it would still block the main thread until a potentially lengthy background operation finishes.

Resolving an NSManagedObject conflict with multiple threads, relationships, and pointers

I'm having a conflict when saving a bunch of NSManagedObjects via an outside thread. For starters, I can tell you the following:
I'm using a separate MOC for each thread.
The MOCs share the same persistent store coordinator.
It's likely that an outside thread is modifying one or many of the records that I'm saving.
OK, so with that out of the way, here's what I'm doing.
In my outside thread, I'm doing some computation and updating a single value in a bunch of managed objects. I do this by looking up the object in the persistent store by my primary key, modifying the single decimal property, and then calling save on the bunch all at once.
In the meantime, I believe the main thread is doing some updating of its own.
When my outside thread does its big save on its managed object context, I get an exception thrown stating a large number of conflicts. All of the conflicts seem to be centered around a single relationship on each record. Though the managed object in the persistent store and my outside thread share the same ObjectID for this relationship, they don't share the same pointer. Based on what I see, that's the only thing that's different between the objects in my NSMergeConflict debug output.
It makes sense to me why the two objects have relationships with different pointers -- they're in different threads. However, as I understand it from Apple's documentation, the only things cached when an object is first retrieved from the persistent store are the global IDs. So one would think that when I run save on the outside thread's MOC, it compares the ObjectIDs, sees they're the same, and lets it all through.
So, can anyone tell me why I'm getting a conflict?
Per the documentation in the Concurrency with Core Data chapter of The Core Data Programming Guide, the recommended configuration is for the contexts to share the same persistent store coordinator, not just the same persistent store.
Also, the section Track Changes in Other Threads Using Notifications of the same chapter states that if you're tracking updates with NSManagedObjectContextDidSaveNotification, you send -mergeChangesFromContextDidSaveNotification: to the main thread's context so it can merge the changes. But if you're tracking with NSManagedObjectContextDidChangeNotification, the external thread should send the object IDs of the modified objects to the main thread, which will then send -refreshObject:mergeChanges: to its context for each modified object.
And really, you should know if the main thread is also performing updates through its controller, and propagate its changes in like manner but in the opposite direction.
You need to have all your contexts listening for NSManagedObjectContextDidSaveNotification from any context that makes changes. Otherwise, only the front context will be aware of changes made on the background threads but the background context won't be aware of changes on the front thread.
So, if you have three threads and three contexts, each of which makes changes, all three contexts must register for notifications from the other two.
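For reference, a minimal sketch of that registration (the two context variables are hypothetical):

// Each context that makes changes observes did-save notifications from the
// others and merges them in, so saves aren't made against stale data.
[[NSNotificationCenter defaultCenter]
    addObserverForName:NSManagedObjectContextDidSaveNotification
                object:backgroundContext             // the context that saved
                 queue:[NSOperationQueue mainQueue]  // deliver on the main thread
            usingBlock:^(NSNotification *note) {
        [mainContext mergeChangesFromContextDidSaveNotification:note];
    }];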
Unfortunately, it seems this bug was actually being caused by something else -- I was running the operation that causes the error more than once concurrently, when I shouldn't have been. Although this doesn't answer the initial question of why pointers matter in conflicts, updating my code to prevent this situation has resolved my issue.
