I am implementing "duplicate" functionality in my iOS application. I'm using the following workflow:
- present a list of managed objects from the initial context in the root view controller
- when the user taps a row, create a new context and pass it to the "detail" view controller along with a duplicated managed object ([[DetailController alloc] initWithObject:clonedObject inContext:newContext]).
However, I am struggling with how to reassign relationships from the source object to the cloned one, since their managed object contexts differ. What would be the correct approach here?
- Should I just reassign the pointer value and not worry about the MOC, or...
- Should I refetch the values in the new context based on their unique identifiers?
- Is there any other option I did not think of?
P.S. The contexts use the same persistent store coordinator.
Managed object IDs are thread-safe. As such, you can pass a managed object ID to the MOC in your view controller, retrieve the object via existingObjectWithID:error:, and then perform the duplication in that context. This way, the objects never cross MOC boundaries.
Related
I've created an NSManagedObject class that matches the corresponding Core Data entity. The class has an initializer so I can pass in property values and assign them.
Once the NSManagedObject class is initialized and ready to be saved to Core Data, how exactly do you save it?
The examples I've seen all start by creating a new instance through NSManagedObjectContext. I don't want to go that route, since I'm creating the object like any other class.
Is there some way to pass this object to an NSManagedObjectContext and call its save() method?
It sounds like you're probably not initializing your managed objects properly. It's not enough to assign property values in an initializer; you have to use the designated initializer. The examples you've seen all use an NSManagedObjectContext because the designated initializer for a managed object requires one. If you don't provide one, you're not using the designated initializer, and you won't be able to save your objects with Core Data.
This is one of the base requirements of Core Data. You must use managed objects, which must be initialized properly, and doing this requires a context.
You don't save a managed object; you tell a context to save any changes it knows about, which includes changes to any of its managed objects. You can make that more fine-grained by creating a new context that only knows about one new object. But saving an object on one context doesn't automatically let other contexts know, so you end up adding some complexity to keep changes synced.
Apple's Core Data Programming Guide covers this in detail with sample code.
What should Repositories return from service calls?
An Entity (or Collection of Entities), or instead a reference to itself, which could then be used to access a property that holds a collection of Entities, for example?
Take this sample code:
$user = $userRepository->findById(1);
or
$users = $userRepository->findAll();
I think in most code, a User Entity object, or Users Collection Entity would be returned from a call like this.
It seems a bit strange to me that from one direction, a Repository will return objects directly, yet from the other end, it will hold them in state before acting upon them. Take this sample code as an example:
$user = $factory->make('user');
$user->setName($array_data['name']);
$repo->add($user);
$repo->save();
Is this just how it's done?
I think I am expecting to see something a bit more like this, in terms of retrieval:
$users = $userRepository->findAll(); // Returns $userRepository reference
foreach($users->collection() as $user) {
// Do some operations, or whatever
}
$users->save();
or perhaps, for read only needs:
$users = $userRepository->findAll();
$users = $users->collection(); // Returns User Entities held in state
Clarification as to why it's done one way or another would be much appreciated.
Where does the Factory belong inside the Domain?
Should it be injected as a dependency of the Mapper object? It seems like there must also be Factory access from the controlling code/service layer, for creating Entities to submit to the Repository.
That leads into my next question...
What is the preferred way to create new Entities from the controlling class/service layer?
I have seen Factory objects being used, like this:
$user = $factory->make('user');
$user->setName($array_data['name']);
$repo->add($user);
As well as built-in Repository methods, like this:
$repo->saveFromArray($array_data);
In the second example, the $array_data would be forwarded through the repository, to the Mapper, which will then perform the save. Of course the data source would be checked for overlapping records beforehand, in either example.
I assume the first method is preferred? It seems to be a more object-oriented approach.
You have many questions...
What should Repositories return from service calls?
Always aggregate roots (ARs). AR design is very important, but it's none of the repository's concern. The repository methods return one or many objects, as needed by the Domain. There is no Users Collection Entity; there's a list of Users (which in PHP is probably an array). Don't complicate things.
The Domain repositories should be used only for Domain needs (read or write). The repository doesn't return pieces of an AR; it returns the whole AR. Once again, AR design is very important.
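To make that concrete, here is a minimal sketch (in Java, with hypothetical User and UserRepository types, since the idea is language-agnostic even though the question's snippets are PHP) of a repository that hands back whole aggregates rather than a reference to itself:

import java.util.List;
import java.util.Optional;

// Hypothetical aggregate root; the repository always returns it whole.
class User {
    private final long id;
    private String name;

    User(long id, String name) { this.id = id; this.name = name; }

    long id() { return id; }
    String name() { return name; }
    void rename(String newName) { this.name = newName; }
}

// The repository speaks in domain terms and returns plain aggregates,
// never a reference to itself holding them "in state".
interface UserRepository {
    Optional<User> findById(long id);
    List<User> findAll(); // a plain list of Users, not a "Users Collection Entity"
    void add(User user);
}

The caller iterates the returned list directly and hands any modified aggregate back through add; there is no intermediate collection() accessor to unwrap.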
Where does the Factory belong inside the Domain?
Where it's needed. I don't use a factory; at most I have a factory method, and even that is for restoring purposes (if I'm using a memento). You don't have to use a factory to create your domain objects.
What is the preferred way to create new Entities from the controlling class/service layer?
The simplest way possible. In probably 99% of cases, you'll be using the "new" operator. Use factories only if they give you a concrete benefit for specific entities.
The Mapper never performs saves, because it's a mapper. Only repositories do persistence work. Mappers convert/copy data from one model to another; you can use them to map a domain object to some data model for persistence, and back.
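Building on the hypothetical User sketch above, here is a hedged illustration of that split: the mapper only copies data between the domain model and a storage model, while the repository alone touches the data source (an in-memory map below stands in for a real one):

import java.util.HashMap;
import java.util.Map;

// Storage-side shape of a User (hypothetical data model).
class UserRecord {
    long id;
    String name;
}

// The mapper only converts/copies between models; it never saves anything.
class UserMapper {
    UserRecord toRecord(User user) {
        UserRecord record = new UserRecord();
        record.id = user.id();
        record.name = user.name();
        return record;
    }

    User toDomain(UserRecord record) {
        return new User(record.id, record.name);
    }
}

// Only the repository does persistence work, delegating conversion to the mapper.
class InMemoryUserRepository {
    private final UserMapper mapper = new UserMapper();
    private final Map<Long, UserRecord> storage = new HashMap<>(); // stand-in for a real data source

    void add(User user) {
        UserRecord record = mapper.toRecord(user);
        storage.put(record.id, record);
    }

    User findById(long id) {
        UserRecord record = storage.get(id);
        return record == null ? null : mapper.toDomain(record);
    }
}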
Please share your views on the approach I'm currently testing out:
- have a JPA entity inside my JSF managed bean
- bind the entity's properties to JSF form elements like input texts, combos, or even a datatable for the entity's list of detail objects
- have the entity processed by a service object, meaning I pass the entity object itself, perhaps along with some other simple variables/objects
- the service does some basic validation or simple processing, then delivers the entity object to the DAO layer to be persisted
- the JSF view then reflects the detached entity
Is this kind of solution, passing entities between tiers, OK?
Forgive my inexperience in this matter, since I'm used to playing with 'variables' in web apps (using map-based form beans in Struts 1). I've read about transforming the entity objects into some other format, but I'm not sure what that is for.
If the relations between entities are defined, we can bind them to JSF components, rendering from and populating the entity's properties.
Yes, this is perfectly fine and in fact the recommended way to do it nowadays.
This "transforming the entity objects into some other format" refers probably to the Data Transfer Object pattern, which was necessary in the bad old days before annotations, when entity classes usually had to inherit from some framework-specific base class, undergo bytecode manipulation or were implemented as proxy objects by an EJB container.
Such entity objects were either impossible to serialize or contained much more state than the actual entity data and therefore would waste a lot of space when serialized. So if you wanted to have a separate app server tier, you had to use the DTO pattern to have it communicate efficiently with the web tier.
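As a rough sketch of that recommended flow (all names here are hypothetical), the backing bean exposes the JPA entity directly to the form, and the service validates and hands the very same object on to the persistence layer:

import javax.ejb.EJB;
import javax.faces.bean.ManagedBean;
import javax.faces.bean.ViewScoped;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

// The JPA entity itself is bound to the form; no DTO layer in between.
@Entity
class Customer {
    @Id @GeneratedValue
    private Long id;
    private String name;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

// Hypothetical service boundary: validates, then delegates to the DAO/EntityManager.
interface CustomerService {
    void save(Customer customer);
}

@ManagedBean
@ViewScoped
public class CustomerBean {
    @EJB
    private CustomerService service;

    // Bound in the facelet, e.g. <h:inputText value="#{customerBean.customer.name}"/>
    private Customer customer = new Customer();

    public Customer getCustomer() { return customer; }

    public String save() {
        service.save(customer); // the detached entity travels through the tiers as-is
        return "success";
    }
}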
I've got a simple POJO named "Parent" which contains a collection of "Child" objects.
In Hibernate/JPA, it's simply a one-to-many association; the children do not know their parent. These Child objects can have different types of Parent, so it is easier for them not to know their parent (think of a Child that represents a Tag, where the parents can be different object types that have tags).
Now, I send my Parent object to the client view of my web site to allow the user to modify it.
For this, I use Hibernate/GWT/Gilead.
The user makes some changes and clicks the save button (Ajax), which sends my Parent object back to the server. Fields of the Parent have been modified, but more importantly, some Child objects have been added to or deleted from the collection.
To summarize, when the Parent object comes back to the server, its collection now contains:
- new "Child" objects whose id is null and which need to be persisted
- modified "Child" objects whose id is not null and which need to be merged
- potentially hacked "Child" objects whose id is not null but which were not originally owned by the Parent
- missing (deleted) "Child" objects, which need to be removed
How do you save the parent object (and its collection)? Do you load the parent's collection from the database and compare each object of the modified collection against it, to check that there are no hacked items?
Do you clear the old collection (to remove orphans) and re-add the new children (even though some of the Child objects have not been modified)?
Thanks.
PS: sorry for my English, I hope you understood the concept ;)
Something in your stack has to supply the logic you are talking about, and given your circumstances it is probably you. You will have to get the current persisted state of the object by reading from your data source so you can do the comparison. Bear in mind that if several legitimate actions can update your parent object and its collection simultaneously, you will have to take great care over defining your transaction grain and the thread safety of your code.
This is not a simple problem by any means, and there may well be framework features that can assist, but I have yet to find something which has solved this for any real-world implementation I have encountered, especially where there is logic which tries to distinguish between legitimate and "hacked" data.
You may consider altering your architecture so that the parent and children are persisted in separate actions. It may not be appropriate in your case, but splitting up the persistence actions might give you a finer transaction grain and let you provide child-oriented security, which makes your hacking problem a little more manageable.
Good luck. I recommend you draw a detailed flow chart of your logic before you do too much coding.
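For the comparison itself, here is a hedged sketch (hypothetical Parent/Child stand-ins with JPA mappings elided) of classifying each incoming child against the freshly loaded persisted state:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.persistence.EntityManager;

// Minimal stand-ins for the entities in the question (mappings elided).
class Parent {
    Long id;
    List<Child> children = new ArrayList<>();
    Long getId() { return id; }
    List<Child> getChildren() { return children; }
}

class Child {
    Long id;
    Long getId() { return id; }
}

class ParentSaveService {

    // Reconcile the client's modified collection against the persisted one.
    void saveParent(EntityManager em, Parent incoming) {
        Parent persisted = em.find(Parent.class, incoming.getId());

        // Index the currently persisted children by id.
        Map<Long, Child> existing = new HashMap<>();
        for (Child child : persisted.getChildren()) {
            existing.put(child.getId(), child);
        }

        for (Child child : incoming.getChildren()) {
            if (child.getId() == null) {
                em.persist(child);                       // genuinely new child
            } else if (existing.remove(child.getId()) != null) {
                em.merge(child);                         // legitimate modification
            } else {
                // id is set but was never owned by this parent: treat as tampering
                throw new SecurityException("Child " + child.getId()
                        + " does not belong to parent " + incoming.getId());
            }
        }

        // Anything the client did not send back was deleted on the client side.
        for (Child orphan : existing.values()) {
            em.remove(orphan);
        }

        em.merge(incoming); // finally merge the parent's own field changes
    }
}

The whole method should run inside a single transaction, so the loaded state cannot change between the comparison and the save.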
The best solution I've found is to use a manually created DTO. The DTO sends only the needed data to the client. For each field I want to be read-only, I compute a signature based on a secret key and send it to the client along with the DTO.
When the DTO comes back to the server, I check the signature to be sure that the read-only fields have not changed (I recompute the signature from the returned fields and compare it to the signature that came back with the DTO).
This lets me mark specific fields as read-only and be sure that my objects have not been tampered with.
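A minimal sketch of that signing scheme, assuming HMAC-SHA256 over the concatenated read-only fields (class and method names are hypothetical):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

final class ReadOnlyFieldSigner {
    private final SecretKeySpec key;

    ReadOnlyFieldSigner(byte[] secret) {
        this.key = new SecretKeySpec(secret, "HmacSHA256");
    }

    // Compute the signature that is sent to the client alongside the DTO.
    String sign(String... readOnlyFields) throws Exception {
        return Base64.getEncoder().encodeToString(hmac(readOnlyFields));
    }

    // On the way back in, recompute from the returned fields and compare.
    boolean verify(String signature, String... readOnlyFields) throws Exception {
        return MessageDigest.isEqual(hmac(readOnlyFields),
                Base64.getDecoder().decode(signature));
    }

    private byte[] hmac(String... fields) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(key);
        for (String field : fields) {
            mac.update(field.getBytes(StandardCharsets.UTF_8));
            mac.update((byte) 0); // field separator, so ("ab","c") != ("a","bc")
        }
        return mac.doFinal();
    }
}

If verify returns false, a read-only field was modified on the client and the request can be rejected.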
In the project I'm working on, we have an aggregate domain object. A factory object handles the creation of the unique id for the object, but there is a separate import process which initially creates the same object without the id. To add the imported object to the system, we are currently forced to do a field-by-field copy to a new object, since we can't just set the id on it, for obvious reasons. Could anyone suggest a better way of handling this situation?
Possibilities:
If the import process allows it, inject your domain object at creation time, so the import actually populates your object.
Have your object's implementation be a wrapper around the one created by the import process, and change your factory accordingly.
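A hedged sketch of the second possibility (all type names hypothetical): the domain works against an interface, and the factory wraps the import's already-populated object while supplying the generated id, so no field-by-field copy is needed:

import java.util.UUID;

// What the rest of the domain works with.
interface Account {
    String id();
    String name();
}

// What the import process produces: populated data, but no id yet.
class ImportedAccount {
    final String name;
    ImportedAccount(String name) { this.name = name; }
}

// Wrapper that supplies the id without copying field by field.
class ImportedAccountAdapter implements Account {
    private final String id;
    private final ImportedAccount delegate;

    ImportedAccountAdapter(String id, ImportedAccount delegate) {
        this.id = id;
        this.delegate = delegate;
    }

    public String id() { return id; }
    public String name() { return delegate.name; }
}

class AccountFactory {
    // Import path: wrap the already-populated object and attach a fresh id.
    Account fromImport(ImportedAccount imported) {
        return new ImportedAccountAdapter(UUID.randomUUID().toString(), imported);
    }
}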