Please share your view on this approach I'm currently testing out:
Have a JPA entity inside my JSF managed bean.
Bind the entity's properties to JSF form elements such as input texts, combo boxes, or even a datatable for the entity's list of detail objects.
Have the entity processed by a service object; that is, pass the entity object itself, perhaps along with some other simple variables/objects.
The service does some basic validation or simple processing, then delivers the entity object to the DAO layer to be persisted.
The JSF view then reflects the detached entity.
Is this kind of solution, passing entities between tiers, OK?
Forgive my inexperience in this matter; I'm used to playing with 'variables' in web apps (using a map-based form bean in Struts 1). I've read about transforming entity objects into some other format, but I'm not sure what that is for.
If the relations between entities are defined, we can bind them to JSF components, and therefore render and populate forms based on the entity's properties.
Yes, this is perfectly fine and, in fact, the recommended way to do it nowadays.
This "transforming the entity objects into some other format" probably refers to the Data Transfer Object (DTO) pattern, which was necessary in the bad old days before annotations, when entity classes usually had to inherit from some framework-specific base class, undergo bytecode manipulation, or be implemented as proxy objects by an EJB container.
Such entity objects were either impossible to serialize or contained much more state than the actual entity data, and therefore wasted a lot of space when serialized. So if you wanted to have a separate app server tier, you had to use the DTO pattern to have it communicate efficiently with the web tier.
Related
I'm trying to build a RESTEasy service with Hibernate. Is it good to have both Hibernate and JAXB annotations on the same class? Or should there be two different classes: one for the Hibernate data object with its annotations, and another similar class for the REST request and response with JAXB annotations?
The question is basically whether you need extra transfer objects next to your entities.
If you don't, the structure of your transfer data (JSON, XML, whatever) will be more or less dictated by how your entities are structured. You can achieve a lot with annotations, but you'll still be somewhat bound. As a consequence, changes to the entities may need to be propagated to your outer interfaces: if you change your entities and/or your database schema, you may also need to change the structure of the JSON returned by your REST interface.
Having separate DTOs is safer when you need to guarantee the stability of your interfaces. The downside is that you'll need mapping code to convert between DTOs and entities.
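That mapping code can be as small as a hand-written converter. A rough sketch, with all class names (User, UserDto, UserMapper) invented for illustration, not taken from any framework:

```java
// Hypothetical DTO <-> entity mapping; the entity can change shape freely,
// and only the converter must be updated to keep the external interface stable.

class User {
    private Long id;
    private String firstName;
    private String lastName;
    private String passwordHash; // internal state the external interface should not expose

    User(Long id, String firstName, String lastName, String passwordHash) {
        this.id = id;
        this.firstName = firstName;
        this.lastName = lastName;
        this.passwordHash = passwordHash;
    }

    Long getId() { return id; }
    String getFirstName() { return firstName; }
    String getLastName() { return lastName; }
}

class UserDto {
    final Long id;
    final String firstName;
    final String lastName; // only the fields the external interface promises

    UserDto(Long id, String firstName, String lastName) {
        this.id = id;
        this.firstName = firstName;
        this.lastName = lastName;
    }
}

class UserMapper {
    static UserDto toDto(User user) {
        return new UserDto(user.getId(), user.getFirstName(), user.getLastName());
    }
}
```

The cost is exactly this kind of boilerplate; the benefit is that the JSON/XML shape is decoupled from the entity.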
From my experience, you can get away with just entities most of the time.
From what I've read and implemented, a DTO is an object that holds a subset of the values from a data model; in most cases these are immutable objects.
What about the case where I need to pass new values or changes back to the database?
Should I work directly with the data model/actual entity from my DAL in my Presentation layer?
Or should I create a DTO that can be passed from the presentation layer to the business layer, then converted to an entity and updated in the DB via an ORM call? Is this writing too much code? I'm assuming this is needed if the presentation layer has no concept of the data model. If we go with this approach, should I fetch the object again at the BLL layer before committing the change?
A few thoughts:
DTO is a loaded term, but since it stands for Data Transfer Object, I see it as a purely technical, potentially serializable container used to get data from one point to another, usually across tiers or layers. Inside a layer that deals with business concerns, such as the Domain layer in DDD, these little circulating data structures tend to be called Value Objects instead, because they have a business meaning and are part of the domain's Ubiquitous Language. There are all sorts of subtle differences between DTOs and Value Objects; for instance, you usually don't need to compare DTOs, while comparison and equality are important concerns for VOs (two VOs are equal if their encapsulated data is equal).
DDD has an emphasis on the idea of a rich domain model. That means you usually don't simply map DTOs one-to-one to domain entities, but try to model business actions as intention-revealing methods on your entities. For instance, you wouldn't use setters to modify a User's street, city, and zip code, but would rather call a moveTo(Address newAddress) method, Address being a Value Object declared in the Domain layer.
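A minimal sketch of both ideas: Address as a Value Object compared by its data, and User exposing an intention-revealing moveTo method instead of individual setters. All names are illustrative, not from any particular codebase:

```java
import java.util.Objects;

// Value Object: identity is the encapsulated data, so equals/hashCode
// compare field by field (two equal addresses are interchangeable).
final class Address {
    final String street;
    final String city;
    final String zipCode;

    Address(String street, String city, String zipCode) {
        this.street = street;
        this.city = city;
        this.zipCode = zipCode;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Address)) return false;
        Address a = (Address) o;
        return street.equals(a.street) && city.equals(a.city) && zipCode.equals(a.zipCode);
    }

    @Override
    public int hashCode() {
        return Objects.hash(street, city, zipCode);
    }
}

class User {
    private Address address;

    // A business action in the Ubiquitous Language ("move"), not a bag of setters.
    void moveTo(Address newAddress) {
        this.address = Objects.requireNonNull(newAddress, "a user must move somewhere");
    }

    Address address() { return address; }
}
```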
DTOs usually don't reach the Domain layer but go through the filter of an Application layer, whether that is controllers or dedicated Application Services. It's the Application layer objects that know how to turn the DTOs they get from the client into the correct calls to Domain layer entities (usually Aggregate Roots loaded from Repositories). A further level of refinement above that is to build task-based UIs, where the user doesn't send data-centric DTOs but Commands that reflect their end goal.
So, mapping DTOs to entities is not really the DDD way of doing things; it denotes more of a CRUD-oriented approach.
Should I work directly with the data model/actual entity from my DAL in my Presentation layer?
This is okay for small to medium projects. But when you have a large project with more than five developers, where different layers are assigned to different teams, the project benefits from using DTOs to separate the data layer from the presentation layer.
With a DTO in the middle, changes in the presentation layer won't affect the data layer (and vice versa).
Or should I create a DTO that can be passed from the presentation layer to the business layer, then converted to an entity and updated in the DB via an ORM call? Is this writing too much code? I'm assuming this is needed if the presentation layer has no concept of the data model. If we go with this approach, should I fetch the object again at the BLL layer before committing the change?
For creating a new entity, that is the usual way to go (for example, "new user"). For updating an existing entity, you don't convert the DTO to an entity; rather, you fetch the existing entity, map the new values onto it, and then initiate an ORM update:
UpdateUser(UserDto userDto)
{
    // Fetch the existing entity
    User user = userRepository.GetById(userDto.ID);

    // Map the new values onto it
    user.FirstName = userDto.FirstName;
    user.LastName = userDto.LastName;

    // ORM update
    userRepository.Update(user);
    userRepository.Commit();
}
For large projects with many developers, the disadvantage of writing too much code is minimal compared to the huge advantage of decoupling it provides.
See my post about Why use a DTO
My opinion is that DTOs represent the contracts (or messages, if you will) that form the basis for interaction between an Aggregate Root and the outside world. They are defined in the domain, and the AR needs to be able to both handle incoming instances and provide outgoing instances. (Note that in most cases, the DTO instances will be either provided by the AR or handled by the AR, but not both, because having one DTO that flows both ways is usually a violation of separation of concerns.)
At the same time, the AR is responsible for providing the business logic through which the data contained in the DTOs are processed. The presentation layer (or any other actor including the data access layer, for that matter) is free to put whatever gibberish it wants into a DTO and request that the AR process it, and the AR must be able to interpret the contents of the DTO as gibberish and raise an exception.
Because of this requirement, it is never appropriate to simply map a DTO back to its Entity counterpart.
The DTO must always be processed through the logic in the AR in order to effect changes in the Entity that may bring it to the state described by the DTO.
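As a rough illustration of that principle, with all names invented for the example: the aggregate root interprets the DTO through its own rules and rejects gibberish, rather than being overwritten field by field:

```java
// Hypothetical command-style DTO and aggregate root; the AR owns the
// business logic, so invalid input raises an exception instead of
// silently corrupting entity state.

class RenameUserDto {
    final String newName;
    RenameUserDto(String newName) { this.newName = newName; }
}

class UserAggregate {
    private String name = "initial";

    void rename(RenameUserDto dto) {
        // The AR validates the DTO's contents before applying any change.
        if (dto.newName == null || dto.newName.trim().isEmpty()) {
            throw new IllegalArgumentException("name must not be blank");
        }
        this.name = dto.newName.trim();
    }

    String name() { return name; }
}
```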
I might be an idiot here, but I have a question regarding XPages and managed beans. I'm trying to separate logic and presentation by moving logic into a bean corresponding to an entity (more or less a document). I have a data-provider class fetching and setting data. This is fine with one XPage, but as the application gets more advanced, with relations and multiple XPages, I run into a problem (I'm looking at http://blog.mindoo.com/web/blog.nsf/dx/18.03.2011104725KLEDH8.htm?opendocument&comments#anc1 for inspiration).
If I'm not wrong, I can't assign different managed beans to different XPages, so setting different data-provider classes and business-logic beans for different XPages cannot be done in faces-config.xml. I might be going about this the wrong way, but any pointers are most appreciated.
Best regards,
Olof
You can't assign managed beans (as in, define them in faces-config) to specific XPages, as far as I know; they are application-specific. I think you are looking for something like the factory pattern (also called the creator pattern). These are design patterns used to create instances of a specific class. For more info, see the Wikipedia articles on the factory method pattern and creational patterns.
When you build, for instance, a pizzeria website, you could have a factory create specific types of pizzas depending on the button being pressed. Each pizza is then created in memory (a bean) and used as the data source of your custom control. When the customer finishes ordering, the pizza is saved to a Notes document (saved state) and transformed, together with all the other products ordered, into an order for that customer.
Whenever you want to retrieve that particular pizza again (for instance, to check which pizza the customer ordered), you only need to ask the factory for the pizza with a given number/ID, and the factory will return that pizza from the Notes document. Build once, use many.
So basically, you don't have several managed beans per page but per application, and you use them across your application wherever you need them.
Look at beans as "global variables": you can serve different functions by defining different names, for example "invoice", "customer", "order", "orderItem", and so on. It's up to you.
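To make the factory idea above concrete, here is a plain-Java sketch; the class names are made up, and an in-memory map stands in for the Notes document persistence described in the answer:

```java
import java.util.HashMap;
import java.util.Map;

// One application-scoped factory hands out pizza instances by type
// and can retrieve them again by ID: build once, use many.

class Pizza {
    final String id;
    final String type;
    Pizza(String id, String type) { this.id = id; this.type = type; }
}

class PizzaFactory {
    private final Map<String, Pizza> store = new HashMap<>();
    private int counter = 0;

    // Create a pizza of the requested type, as if a button were pressed in the UI.
    Pizza create(String type) {
        Pizza pizza = new Pizza("P" + (++counter), type);
        store.put(pizza.id, pizza); // stands in for saving to a Notes document
        return pizza;
    }

    // Retrieve a previously ordered pizza by its ID.
    Pizza byId(String id) {
        return store.get(id);
    }
}
```

In an XPages application, the factory itself would be the application-scoped managed bean, and each XPage would ask it for the instance it needs.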
I'm not sure how to maintain a bi-directional relationship between my Core Data entities and some objects that are instantiated when the entities are created and committed to the database.
I have many subclassed MKAnnotation objects with one-to-one relationships to the entities. Every time my fetchedResultsController executes a new fetch, I assume the results from the previous fetch are released and the fetched NSManagedObjects are remapped in memory, so my one-to-one relationships are broken. If I could save a pointer to the MKAnnotation objects in Core Data, that would fix half of the problem (the relationship in one direction). Does this make sense? How would you do this?
I delete all of the core data content when the application is restarted, so long term persistence of the relationship information is not a concern that I have.
Mixing pointers and managed objects is usually futile, because Core Data has so many optimizations in place that direct memory management is all but impossible; e.g., an object may revert to a fault.
You're really going about this the wrong way. Core Data isn't primarily a persistence API; it's a data modeling API intended to provide the complete model layer of a Model-View-Controller app. As such, you can use it without saving anything at all. If you are using Core Data and you have data such as map annotations, the annotations should be modeled in Core Data. Doing so will simplify everything.
Since there is no MKAnnotation class but merely an MKAnnotation protocol, the simplest solution in this case is to create an NSManagedObject subclass that implements the MKAnnotation protocol. You can either convert location data like CLLocationCoordinate2D into NSValues or, better yet, just make attributes for the coordinate components. Since the class implements the protocol, you can pass the managed objects anywhere you would pass any object conforming to the protocol.
I am writing a JSF 2.0 form to edit a JPA @Entity object. I have a backing bean with a getter for the entity, which it fetches from the EntityManager. So far, so good.
The question is: does the entity object being edited by the user get accessed by other parts of the application? In other words, if someone else calls up that record, do they see the field changes before I merge the record back into the database via the EntityManager? Or do they get a different instance?
The reason this is important is that the user can enter all sorts of bad data. The validation phase done by the backing bean will not call merge() until all the errors are cleared, but what about before then?
If this is a common instance, how do I avoid this problem?
The question is: does the entity object being edited by the user get accessed by other parts of the application? In other words, if someone else calls up that record, do they see the field changes before I merge the record back into the database via the EntityManager? Or do they get a different instance?
The entity instance used by JSF will be a detached entity instance. It is not one that belongs to the persistence context. Each client/user will also receive its own instance of the detached entity.
The reason this is important is that the user can enter all sorts of bad data. The validation phase done by the backing bean will not call merge() until all the errors are cleared, but what about before then?
The merging of any invalid data will only occur when you invoke EntityManager.merge to merge the contents of the detached entity into the persistence context. If you never invoke merge, the modified contents of the entity will never make it into the persistence context.
If this is a common instance, how do I avoid this problem?
You can always avoid this by validating the state of the entity before merging it into the persistence context. You could employ Bean Validation in both JSF and JPA to prevent this scenario, although you will typically do it in only one layer, to avoid redundant checks. However, if you have specified validation groups for your Bean Validation constraints to distinguish between presentation and persistence constraints, then you ought to be employing Bean Validation in both layers. Keep in mind that once the contents of the bean have been merged successfully into the persistence context, there isn't a lot you can do to undo the change, short of a transaction rollback or a refresh/clear of the persistence context.
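A minimal, framework-free illustration of "validate before merge": the service checks the detached entity's state and only then would hand it to EntityManager.merge. In a real application this check would typically be a Bean Validation constraint; the classes and the email rule here are invented for the example:

```java
// Hypothetical detached entity and a guard that runs before any merge call.

class UserEntity {
    String email;
    UserEntity(String email) { this.email = email; }
}

class UserService {
    // Returns true when the detached entity may be merged into the persistence context.
    static boolean isMergeable(UserEntity detached) {
        return detached.email != null && detached.email.contains("@");
    }
}
```

The point is only the ordering: the check happens while the bad data still lives exclusively in the detached instance, where it is harmless.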
Adding to the correct answer of Vineet:
You could possibly have an attached entity returned by your backing bean, for instance if you used a stateful session bean (EJB) with an extended persistence context.
In this case however you would still not risk concurrency issues, since every instance of a persistence context returns unique instances of attached entities (unique: instance not shared with other existing persistence contexts).
Furthermore, JSF will not push changes into the model (the attached JPA entity in this case) if any kind of validation error occurs. So as long as you have your validation set up correctly (bean validation or regular JSF validation), there will be no risk of 'tainting' the entity.
Additionally, note that in the attached case you would not have to call merge(), as the merge happens automatically when the context closes; instead, you 'close' the stateful bean.
That said, the common case is the one Vineet describes where you get a detached entity.