Even though I use a specific ORM framework, Bold for Delphi, I'm more interested in a framework-agnostic, theoretical view of the problem.
So the question is about having a persistent object with a transient attribute that carries an initial value tag.
The initial value tag specifies the value the attribute gets when an instance of the owning object is created.
However, when the object is subsequently loaded from persistence, what should the value of the transient attribute be?
Should the initial value tag be applied again? Logically it should be, otherwise the attribute is left unassigned (null).
I couldn't find any specs on this particular case in any of the docs.
We can't create the object from the DB record alone, because we would lose all transient attributes. So a persistent object can only be loaded into an already created instance, and there is no way to instantiate one other than through the object's constructor, which sets the initial values. Of course, some language could provide a workaround for that, but why would it?
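A minimal, framework-agnostic sketch of that point (in Java; the class, attribute, and loader names are invented for illustration): loading goes through the normal constructor, so the transient attribute keeps its initial value unless the loader explicitly overwrites it.

```java
// Hypothetical, framework-agnostic illustration (all names are made up).
public class Document {
    // Persistent attribute: stored in the database.
    private String title;

    // Transient attribute with an "initial value" tag: never persisted.
    // The initial value is applied by the constructor.
    private boolean dirty = false;

    public Document() {
        // Every instance, whether brand new or about to be loaded from
        // persistence, goes through this constructor, so 'dirty' starts false.
    }

    // Called by the (imaginary) persistence layer after construction.
    // Only persistent attributes are populated; 'dirty' is untouched and
    // therefore still carries its initial value after a load.
    void populateFromRecord(java.sql.ResultSet row) throws java.sql.SQLException {
        this.title = row.getString("title");
    }
}
```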
I've created an NSManagedObject class that matches the corresponding Core Data entity. The class has an initializer so I can pass in property values and assign them.
Once the NSManagedObject class is initialized and ready to be saved to Core Data, how exactly do you save it?
The examples I've seen all start by creating a new object through NSManagedObjectContext. I don't want to go that route, since I'm creating the object like any other class.
Is there some way to pass this object to NSManagedObjectContext and call its save() method?
It sounds like you're probably not properly initializing your managed objects. It's not enough to assign property values in an initializer-- you have to use the designated initializer. The examples you've seen all use an NSManagedObjectContext because the designated initializer for a managed object requires one. If you don't provide one, you're not using the designated initializer, and you won't be able to save your objects in Core Data.
This is one of the base requirements of Core Data. You must use managed objects, which must be initialized properly, and doing this requires a context.
You don't save a managed object-- you tell a context to save any changes it knows about, which includes changes to any of its managed objects. You can make that more fine-grained by creating a new context that only knows about one new object. But saving an object on one context doesn't automatically let other contexts know, so you end up adding some complexity to keep changes synced.
Apple's Core Data Programming Guide covers this in detail with sample code.
Recently I was thinking about some issues I had in the past while attempting to design a particular domain model, let's just say Address, that can be editable within a given context but non-editable within another. My current way of thinking is that I would have both a Value Object version of the Address and an Entity for the Address, perhaps attached to something like a customer account in order to derive its identity.
Now I'm realizing that if I'm ever creating a new Address, such as when one is entered by the user, I most likely also need to be able to continue editing that Address, and perhaps edit any preexisting Addresses as well, within that same bounded context. For this reason I could assume that within this context Address should be modeled as an Entity and not a Value Object. This leads me to my main question: if you should always be using entities when modifying an existing set of data or creating new data, does it ever make sense to have a Factory for creating any Value Object?
The rule that is beginning to emerge for me as I follow this line of thinking is that value objects should only be created to represent things that are either static to the application or things that have already been persisted to the database, but not things that are transient within the current domain context. So the only place I should do any creation of value objects would be when they are re-hydrated/materialized within, or on behalf of, aggregate root repositories for persistent values, or within a service in the case of static values. This is beginning to seem pretty clear to me, however it concerns me that I haven't read anywhere else where somebody draws the same conclusions. Either way, I'm hoping that somebody can confirm my conclusions or set me straight.
that can be editable within a given context but non-editable within another
Differences in mutability settings for an entity within different contexts can also be represented in the application tier. It is an operational concern, possibly involving authentication and authorization, and an application service is a convenient location for this logic. The distinction between a VO and an entity does not address these concerns directly. Just because a VO should be immutable does not mean that an entity cannot change the value of a VO that it references. For instance, a user can reference an immutable address value; an edit operation, however, can update the user to reference a new value. Allowing a user to edit an address in one context and not in another can be represented as permission values associated with the corresponding context.
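A small sketch of that idea in Java (class and method names are invented): the Address value stays immutable, the User entity changes by swapping the reference, and an application service decides per context whether the edit is allowed.

```java
// Illustrative sketch only; all collaborators are hypothetical.
public final class Address {                       // immutable value object
    private final String street;
    private final String city;

    public Address(String street, String city) {
        this.street = street;
        this.city = city;
    }
}

class User {                                       // entity
    private Address address;

    void changeAddress(Address newAddress) {       // mutate by replacing the VO reference
        this.address = newAddress;
    }
}

class UserApplicationService {
    private final PermissionChecker permissions;   // hypothetical collaborator
    private final UserRepository users;            // hypothetical repository

    UserApplicationService(PermissionChecker permissions, UserRepository users) {
        this.permissions = permissions;
        this.users = users;
    }

    void editAddress(String userId, Address newAddress, String context) {
        if (!permissions.canEditAddress(userId, context)) {
            throw new SecurityException("Address is read-only in context " + context);
        }
        User user = users.byId(userId);
        user.changeAddress(newAddress);            // the VO never mutates; the entity does
        users.save(user);
    }
}

interface PermissionChecker { boolean canEditAddress(String userId, String context); }
interface UserRepository { User byId(String id); void save(User user); }
```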
This leads me to my main question: if you should always be using entities when modifying an existing set of data or creating new data, does it ever make sense to have a Factory for creating any Value Object?
It can certainly make sense to have a factory for creating VO instances. This can be a static method on the VO class or a dedicated object, depending on your preference. However, a VO should not be used to address mutability requirements of the domain model. Instead, as specified above, this should be handled at the application tier.
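As an example, a static factory method on an Address value object might look like this (a sketch; the validation rules shown are invented):

```java
// Sketch: a static factory on a value object; validation/normalization is illustrative.
public final class Address {
    private final String street;
    private final String city;

    private Address(String street, String city) {      // constructor stays private
        this.street = street;
        this.city = city;
    }

    // Factory: validates and normalizes input before the immutable value exists.
    public static Address of(String street, String city) {
        if (street == null || street.isBlank() || city == null || city.isBlank()) {
            throw new IllegalArgumentException("Street and city are required");
        }
        return new Address(street.trim(), city.trim());
    }
}
```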
My question here is whether I should place a RowVersion [TimeStamp] property on every entity in my domain model.
For example: I have an Order class and an OrderDetails navigation/reference property; should I use a RowVersion property on both entities, or is it enough to have it on the parent object?
These classes are POCOs meant to be used with the Entity Framework Code First approach.
Thank you.
The answer, as often, is "it depends".
Since it will almost always be possible to have an Order without any OrderDetails, you're right that the parent object should have a RowVersion property.
Is it possible to modify an OrderDetail without also modifying the Order? Should it be?
If it isn't possible and shouldn't be, a RowVersion property at the detail level doesn't add anything. You already catch all possible modifications by checking the Order's RowVersion. In that case, only add the property at the top level, and stop reading here.
Otherwise, if two independent contexts load the same order and details, both modify a different OrderDetail, and both try to save, do you want to treat this as a conflict? In some cases, this makes sense. In other cases, it doesn't. To treat it as a conflict, the simplest solution is to actually mark the Order as modified too if it is unchanged (using ObjectStateEntry.SetModified, not ObjectStateEntry.ChangeState). EF will check and cause an update to the Order's RowVersion property, and complain if anyone else made any modifications.
If you do want to allow two independent contexts to modify two different OrderDetails of the same Order, yes, you need a RowVersion attribute at the detail level.
That said: if you load an Order and its OrderDetails into the same context, modify an OrderDetail, and save your changes, Entity Framework may also check and update the Order's RowVersion, even if you don't actually change the Order, causing bogus concurrency exceptions. This has been labelled a bug, and a hotfix is available, or you can install .NET Framework 4.5 (currently available in release candidate form), which fixes it even if your application uses .NET 4.0.
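For the concept only (this is not EF code): the same optimistic-concurrency idea expressed as a rough JPA analog in Java, with a version column on the parent and, optionally, on the detail. Entity and field names are illustrative.

```java
// Rough JPA analog of the RowVersion idea; names are illustrative, not EF code.
import jakarta.persistence.*;
import java.util.List;

@Entity
public class Order {
    @Id @GeneratedValue
    private Long id;

    @Version                       // optimistic-lock column on the parent
    private long version;

    @OneToMany(mappedBy = "order")
    private List<OrderDetail> details;
}

@Entity
class OrderDetail {
    @Id @GeneratedValue
    private Long id;

    @Version                       // add this only if details may be edited independently
    private long version;

    @ManyToOne
    private Order order;
}
```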
Please share your view on this kind of thing I'm currently testing out:
- Have a JPA entity inside my JSF managed bean
- Bind the entity's properties to JSF form elements such as input text, combo boxes, or even a datatable for the entity's list of detail objects
- Have the entity processed by a service object, meaning the entity object itself is passed, perhaps along with some other simple variables/objects
- The service does some basic validation or simple processing and hands the entity object to the DAO layer to be persisted
- The JSF view then reflects the detached entity
Is this kind of solution, passing the entities between tiers, OK?
Forgive me for my inexperience in this matter, since I was used to playing with 'variables' in a webapp (using a map-based form bean in Struts 1). I've read about transforming the entity objects into some other format, but I'm not sure what that is for.
If the relations between entities are defined, we can bind them to JSF components and therefore render from, and populate, the entity's properties.
Yes, this is perfectly fine and in fact the recommended way to do it nowadays.
This "transforming the entity objects into some other format" refers probably to the Data Transfer Object pattern, which was necessary in the bad old days before annotations, when entity classes usually had to inherit from some framework-specific base class, undergo bytecode manipulation or were implemented as proxy objects by an EJB container.
Such entity objects were either impossible to serialize or contained much more state than the actual entity data and therefore would waste a lot of space when serialized. So if you wanted to have a separate app server tier, you had to use the DTO pattern to have it communicate efficiently with the web tier.
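A minimal sketch of the approach the question describes (bean, entity, and service names are invented; in practice these would live in separate files and the entity would be mapped to a real table):

```java
// Sketch: a JPA entity bound directly to a JSF view through a managed bean.
import jakarta.persistence.*;
import jakarta.inject.Named;
import jakarta.inject.Inject;
import jakarta.faces.view.ViewScoped;
import java.io.Serializable;

@Entity
public class Customer implements Serializable {
    @Id @GeneratedValue
    private Long id;
    private String name;

    // Bound from the view, e.g. <h:inputText value="#{customerBean.customer.name}"/>
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

@Named
@ViewScoped
class CustomerBean implements Serializable {
    @Inject
    private CustomerService service;             // hypothetical service/DAO facade

    private Customer customer = new Customer();  // detached entity backing the form

    public Customer getCustomer() { return customer; }

    public String save() {
        customer = service.save(customer);       // service validates, then persists via the DAO
        return "customerList?faces-redirect=true";
    }
}

interface CustomerService { Customer save(Customer customer); }
```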
I've got a simple POJO named "Parent" which contains a collection of "Child" objects.
In Hibernate/JPA it's simply a one-to-many association; children do not know their parent. These Child objects can have different types of Parent, so it's easier for them not to know the parent (think of a Child that represents a tag, where parents can be different object types that have tags).
Now, I send my Parent object to the client view of my web site to allow the user to modify it.
For this, I use Hibernate/GWT/Gilead.
My user makes some changes and clicks the save button (Ajax), which sends my Parent object to the server. Fields of my Parent have been modified, but more importantly, some Child objects have been added to or deleted from the collection.
To summarize, when the Parent object comes back to the server, its collection now contains:
- new "Child" objects where id is null and need to be persist
- modified "Child" objects where id is not null and need to be merge
- potentially hacked "Child" objects where id is not null but are not originally owned by the Parent
- Child objects missing (deleted): need to be deleted
How do you save the Parent object (and its collection)? Do you load the Parent's collection from the database and compare it against each object of the modified collection to make sure there are no hacked items?
Do you clear the old collection (to remove orphans) and re-add the new children (even though some Child objects have not been modified)?
Thanks.
PS: sorry for my English, I hope you understood the concept ;)
Something in your stack has to supply the logic you are talking about, and given your circumstances it is probably you. You will have to get the current persisted state of the object by reading from your datasource so you can do the comparison. Bear in mind that if several legitimate actions can update your parent object and its collection simultaneously, you will have to take great care over defining your transaction grain and the thread-safe nature of your code.
This is not a simple problem by any means and there may well be framework features that can assist, but I have yet to find something which solved this for any real-world implementation I have encountered, especially where I have logic which tries to distinguish between legitimate and "hacked" data.
You may consider altering your architecture so that the parent and children are persisted in separate actions. It may not be appropriate in your case, but you might be able to get a finer transaction grain by splitting up the persistence actions and providing child-oriented security, which makes your hacking problem a little more manageable.
Good luck. I recommend you draw a detailed flow chart of your logic before you do too much coding.
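As a sketch of the comparison described above (JPA API in Java; the Parent/Child accessors, cascade/orphanRemoval settings, and surrounding transaction are assumed, not taken from the question): reload the persisted Parent, then walk the incoming collection to decide what to persist, merge, reject, or delete.

```java
// Illustrative merge of an incoming detached Parent against the persisted state.
import jakarta.persistence.EntityManager;
import java.util.HashSet;
import java.util.Set;

public class ParentMergeService {

    public void saveParent(Parent incoming, EntityManager em) {
        Parent persisted = em.find(Parent.class, incoming.getId());   // current DB state

        Set<Long> persistedChildIds = new HashSet<>();
        for (Child child : persisted.getChildren()) {
            persistedChildIds.add(child.getId());
        }

        Set<Long> keptChildIds = new HashSet<>();
        for (Child child : incoming.getChildren()) {
            if (child.getId() == null) {
                em.persist(child);                                    // new child
                persisted.getChildren().add(child);
            } else if (persistedChildIds.contains(child.getId())) {
                em.merge(child);                                      // legitimate update
                keptChildIds.add(child.getId());
            } else {
                // "hacked" child: not originally owned by this Parent
                throw new IllegalArgumentException("Child " + child.getId()
                        + " does not belong to Parent " + incoming.getId());
            }
        }

        // Children the user removed: drop them from the managed collection.
        // With orphanRemoval (or an explicit em.remove per orphan) the rows are deleted.
        persisted.getChildren().removeIf(
                c -> c.getId() != null && !keptChildIds.contains(c.getId()));

        // Finally, copy the Parent's own editable fields onto the managed instance
        // ('name' stands in for whatever fields the form actually edits).
        persisted.setName(incoming.getName());
    }
}
```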
The best solution I've found is to use a manually created DTO. The DTO sends only the needed data to the client. For each field I want to make read-only, I calculate a signature based on a secret key, and I send that signature to the client along with my DTO.
When my DTO comes back to the server, I check the signature to be sure that my read-only fields have not changed (I recalculate the signature from the returned fields and compare it to the signature that came back with the DTO).
This lets me specify read-only fields and be sure that my objects have not been hacked.
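A sketch of such a signature scheme using a standard HMAC (the class name, field handling, and key management are illustrative, not the poster's actual code):

```java
// Illustrative HMAC signature over a DTO's read-only fields.
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.MessageDigest;
import java.util.Base64;

public class ReadOnlyFieldSigner {
    private final byte[] secretKey;                      // server-side secret, never sent to the client

    public ReadOnlyFieldSigner(byte[] secretKey) {
        this.secretKey = secretKey;
    }

    // Concatenate the read-only fields in a fixed order and sign them.
    public String sign(String... readOnlyFields) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secretKey, "HmacSHA256"));
            for (String field : readOnlyFields) {
                mac.update(field.getBytes(StandardCharsets.UTF_8));
                mac.update((byte) 0);                    // separator so "ab","c" != "a","bc"
            }
            return Base64.getEncoder().encodeToString(mac.doFinal());
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // On the way back: recompute from the returned fields and compare with
    // the signature the client sent along with the DTO.
    public boolean verify(String returnedSignature, String... readOnlyFields) {
        return MessageDigest.isEqual(
                sign(readOnlyFields).getBytes(StandardCharsets.UTF_8),
                returnedSignature.getBytes(StandardCharsets.UTF_8));
    }
}
```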