What's the difference between the Jalo layer and the service layer in the hybris Commerce Suite? I would really appreciate an example as well. I know the Jalo layer has been deprecated, but if I still have to specify which layer to use in my platform, where and how do I tell hybris to use a specific layer?
I think it's best if you read up on the quite good hybris wiki regarding both:
Jalo: https://wiki.hybris.com/display/release5/Jalo+Layer
Service layer: https://wiki.hybris.com/display/release5/ServiceLayer
You won't have to specify which one you use (they are both always running), and if you start a new project you basically must (or at least really, really should!) use the service layer exclusively, as Jalo will go away (so they have been saying for quite some time) in one of the next major releases.
In a nutshell, Jalo is the old persistence mechanism, while the service layer was introduced to address various problems the Jalo layer had (performance/caching, extensibility, etc.).
So if you will only or mostly be working on new projects, you probably won't have to acquire too much knowledge about the Jalo layer, but if you plan on becoming a hybris consultant or working on legacy hybris code, you will have to deal with Jalo more.
A small example:
In your items.xml files (where you declare your data model) you can specify a jaloclass attribute, which makes the platform create a Java class for you.
E.g.: core-items.xml has Product declared with jaloclass="de.hybris.platform.jalo.product.Product".
The platform also automatically creates the respective service layer class (always called *Model.java, e.g. de.hybris.platform.core.model.product.ProductModel).
One limitation of the Jalo layer is that if you want to extend the Product item type in one of your own extensions with some attribute, the newly created attribute will not end up on the Product Jalo class (as it resides in the platform and is generated only once); instead, it will be available on your extension's Manager class, which is unintuitive and cumbersome. The service layer generates all of its model classes only after analyzing and merging all registered extensions, and is therefore able to add that attribute to the actual ProductModel class.
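For illustration, here is a rough items.xml sketch of adding an attribute to Product from a custom extension (the extension and attribute names are invented); in the service layer the attribute shows up directly on ProductModel, while the Jalo accessors end up on the extension's Manager class:

<!-- myextension-items.xml (hypothetical custom extension) -->
<items>
  <itemtypes>
    <!-- redeclare the existing Product type to add an attribute -->
    <itemtype code="Product" autocreate="false" generate="false">
      <attributes>
        <attribute qualifier="seasonCode" type="java.lang.String">
          <persistence type="property"/>
        </attribute>
      </attributes>
    </itemtype>
  </itemtypes>
</items>
<!-- service layer: ProductModel.getSeasonCode() / setSeasonCode(...) -->
<!-- Jalo layer: accessors generated on MyextensionManager instead of on Product -->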
There are many more differences, so if you have more concrete questions feel free to ask them :)
In the past, persistence and business logic were written in the Jalo layer. After the introduction of the service layer, the existing business logic in the Jalo layer is being moved to the service layer. The first goal of this migration is that Jalo-related classes should no longer contain any code.
As the Jalo layer should no longer contain business logic, its public API will be much smaller in the future. It will mainly consist of the means to run flexible search queries and a generic way to save and remove data. This functionality is already provided in the service layer by adapter services like FlexibleSearchService and ModelService, so any access to the Jalo layer is no longer encouraged. The second goal is to eliminate all Jalo access in existing classes of the service layer.
Source: https://wiki.hybris.com/pages/viewpage.action?spaceKey=release5&title=Transitioning+to+the+ServiceLayer
In the first hybris versions, logic was attached to generated item type classes through the Jalo (Jakarta Logic) layer. In order to be more flexible, hybris is now moving everything to the service layer approach (not finished yet; promotions are a good example of legacy Jalo-layer code).
After reading the above answers and doing one exercise based on the first answer, my conclusion is the following:
Yes, Jalo's non-abstract class implementation is moved to the corresponding *Model.java class, where the more specific business logic is written, as explained well in the first two answers.
Cheers,
Related
When using MVC5 I add a domain layer and storage layer using dependency injection for all the business data. But I have always left the ApplicationDbContext in the main MVC application layer project.
I've read a great many posts on SO and see many people recommend moving the ApplicationDbContext out of the MVC project. I'd like to understand why ApplicationDbContext should or should not be moved. Are there any reasons to not move this context?
On the one hand, ApplicationDbContext uses a data model, which suggests it should be moved to the storage layer, but that would require large DTOs. On the other hand, ApplicationDbContext relates primarily to application access, and I have separate roles/permissions functionality for the business data anyway. Several SO posts also suggested checking roles in the application layer, not the domain layer, but that seems suspect; don't we want to check business roles in the domain layer?
So I'm confused and before I go to the work to move the ApplicationDbContext to a different layer, I'd like to make sure I'm making a sound and informed decision.
It is not a must, but it is advisable if you are doing DDD and your project tends to grow large.
Although DDD is not about implementation details, it asks for a clear separation of concerns, so you should separate domain logic from infrastructure.
You could achieve that separation in many ways. One way would be to have a single project with folders for Domain, Application and Infrastructure, and have your DbContext reside in the Infrastructure folder. This is suitable for small projects.
In large projects, however, I would recommend evaluating Clean Architecture, which takes this separation to the project level.
I'd like to understand why ApplicationDbContext should or should not be moved. Are there any reasons to not move this context?
It can be moved, there's no rule against it. But the tooling will then require you to specify both startup and database projects as parameters, like this:
dotnet ef database update --project <path> --startup-project <path>
But since you're using MVC5, you're probably not using EF Core. With EF 6 or lower, you'll be using the Package Manager Console (PMC) to manage migrations and database updates, which makes your life easier in that regard, since you can mark your MVC project as the startup project from the context menu and select the target project from a dropdown control in the PMC.
Several SO posts also suggested checking roles in the application layer, not the domain layer, but that seems suspect; don't we want to check business roles in the domain layer?
Yes, roles are related to permissions, which are business rules. People probably recommend checking that in the application layer because you need to pull this data from the database, but you could do it like this:
Use the Specification pattern to represent roles and permissions as a specification in the Domain Layer. The IRepository interfaces would best be defined in the Application Layer, as they represent infrastructure (which will be concretely implemented in the Infrastructure Layer). So you would start the role checking in the Application Layer by using repositories to retrieve data, but the actual permission validation would be done by a specification in the Domain Layer.
That would be one way to do it.
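For illustration, a rough C# sketch of that split (all type names here are invented, not from the original posts):

using System;
using System.Collections.Generic;

// Domain layer: the business rule itself, expressed as a specification.
public class User
{
    public Guid Id { get; set; }
    public ISet<string> Permissions { get; } = new HashSet<string>();
}

public interface ISpecification<T>
{
    bool IsSatisfiedBy(T candidate);
}

public class CanApproveOrdersSpecification : ISpecification<User>
{
    public bool IsSatisfiedBy(User user) => user.Permissions.Contains("ApproveOrders");
}

// Application layer: retrieves data through a repository interface and
// delegates the actual decision to the domain specification.
public interface IUserRepository
{
    User GetById(Guid id);
}

public class OrderApprovalService
{
    private readonly IUserRepository _users;

    public OrderApprovalService(IUserRepository users)
    {
        _users = users;
    }

    public bool CanApprove(Guid userId) =>
        new CanApproveOrdersSpecification().IsSatisfiedBy(_users.GetById(userId));
}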
While studying DDD I'm wondering why the domain model needs to define interfaces for the infrastructure layer.
From my reading I gathered that a higher level (domain model, application service, domain service) defines interfaces that need to be implemented by a lower layer (infrastructure). Simple.
From this idea, I think it makes sense that the application level (a higher level) defines interfaces for a lower one (infrastructure), because the application level will use infrastructure components (a repository is a typical example) but doesn't want to be tied to any particular implementation.
However, it is confusing when I see some books have the domain level define infrastructure interfaces, because the domain model will never use a repository directly; we want our domain model to stay "pure".
Am I missing something?
While studying DDD I'm wondering why the Domain model need to define interfaces for the Infrastructure layer.
It doesn't really -- that's part of the point.
The domain model defines the interfaces / contracts that it needs to do work, with the promise of happily working with any implementation that conforms to the contract.
So you can choose to implement the interface in your application component, or in the infrastructure component, or wherever makes sense.
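A small, hypothetical C# illustration of that (the names are made up):

// Domain component: declares only the contract it needs to do its work.
public interface IExchangeRateProvider
{
    decimal GetRate(string fromCurrency, string toCurrency);
}

public class PriceCalculator
{
    private readonly IExchangeRateProvider _rates;

    public PriceCalculator(IExchangeRateProvider rates)
    {
        _rates = rates;
    }

    public decimal Convert(decimal amount, string from, string to) =>
        amount * _rates.GetRate(from, to);
}

// Application or infrastructure component: any conforming implementation works.
public class FixedRateProvider : IExchangeRateProvider
{
    // stub rate; a real implementation might call a web service or a cache
    public decimal GetRate(string fromCurrency, string toCurrency) => 1.1m;
}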
Note the shift in language from "layer" to "component". Layers may be too simplistic to work -- see Udi Dahan 2007.
I came across the same question myself. From my understanding, the reason behind this is that you want the application to only consume objects/interfaces defined in the Domain Model. It also helps to keep one repository per aggregate since they sit next to each other in the same layer.
The fact that the Application layer has a reference to the Infrastructure layer is only for the purpose of dependency injection. Your Application layer services should call these interfaces exposed in the Domain layer, get domain objects (POCOs), do things, and possibly call interfaces again, for example to commit a transaction. This is why, for example, the Unit of Work pattern exposes its actions through a Domain layer interface.
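As a rough sketch of that flow (hypothetical names; assumes the interfaces are defined in the domain layer and implemented in the infrastructure layer):

using System;

// Domain layer: POCO plus the interfaces the rest of the system codes against.
public class Order
{
    public Guid Id { get; set; }
    public bool Cancelled { get; private set; }
    public void Cancel() => Cancelled = true;
}

public interface IOrderRepository
{
    Order GetById(Guid id);
}

public interface IUnitOfWork
{
    void Commit();
}

// Application layer: orchestrates, but only ever sees the domain interfaces.
public class CancelOrderService
{
    private readonly IOrderRepository _orders;
    private readonly IUnitOfWork _unitOfWork;

    public CancelOrderService(IOrderRepository orders, IUnitOfWork unitOfWork)
    {
        _orders = orders;
        _unitOfWork = unitOfWork;
    }

    public void Cancel(Guid orderId)
    {
        var order = _orders.GetById(orderId); // POCO domain object
        order.Cancel();                       // domain behaviour
        _unitOfWork.Commit();                 // commit the transaction via the abstraction
    }
}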
I am getting my feet wet with DDD (in .Net) for the first time, as I am re-architecting some core components of a legacy enterprise application.
Something I want to clear up is, how do we implement persistence in a proper DDD architecture?
I realize that the domains themselves are persistence ignorant, and should be designed using the "ubiquitous language" and certainly not forced into the constraints of the DAC of the month or even the physical database.
Am I correct that the Repository Interfaces live within the Domain assembly, but the Repository Implementations exist within the persistence layer? The persistence layer contains a reference to the Domain layer, never vice versa?
Where are my actual repository methods (CRUD) being called from?
Am I correct that the Repository Interfaces live within the Domain
assembly, but the Repository Implementations exist within the
persistence layer? The persistence layer contains a reference to the
Domain layer, never vice versa?
Yes, this is a very good approach.
Where are my actual repository methods (CRUD) being called from?
It might be a good idea not to think in CRUD terms, because that is too data-centric and may lead you into the generic repository trap. A repository helps to manage the middle and end of life of domain objects; factories are often responsible for the beginning. Keep in mind that when an object is restored from the database, it is in its mid-life stage from a DDD perspective. This is what the code can look like:
// beginning of life: a factory creates the object, the repository stores it
Customer preferredCustomer = CustomerFactory.CreatePreferred();
customersRepository.Add(preferredCustomer);
// middle of life: the repository restores objects from the database
IList<Customer> valuedCustomers = customersRepository.FindPreferred();
// end of life: the repository archives the object
customersRepository.Archive(preferredCustomer);
You can call this code directly from your application. It may be worth downloading and looking at Evans' DDD sample. The Unit of Work pattern is usually employed to deal with transactions and to abstract away your ORM of choice.
Check out what Steve Bohlen has to say on the subject. The code for the presentation can be found here.
I was at the presentation and found the information on how to model repositories helpful.
Am I correct that the Repository Interfaces live within the Domain
assembly, but the Repository Implementations exist within the
persistence layer? The persistence layer contains a reference to the
Domain layer, never vice versa?
I disagree here. Let's say a system comprises the following layers:
Presentation layer (WinForms, Web Forms, ASP.NET MVC, WPF, PHP, Qt, Java, iOS, Android, etc.)
Business layer (sometimes called managers or services; logic goes here)
Resource access layer (hand-written data access or an ORM)
Resource/storage (RDBMS, NoSQL, etc.)
The assumption here is that the higher you go, the more volatile the layer is (the highest being presentation and the lowest being resource/storage). Because of this, you don't want the resource access layer referencing the business layer; it is the other way around! The business layer references the resource access layer: you call DOWN, not UP!
You put the interfaces/contracts in their own assembly instead; they have no purpose in the business layer at all.
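A hypothetical C# sketch of that layout (assembly/namespace names invented for illustration):

// Contracts assembly: interfaces and simple DTOs only.
namespace MyApp.Contracts
{
    public class CustomerDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public interface ICustomerStore
    {
        CustomerDto GetById(int id);
    }
}

// Business layer assembly: calls DOWN through the contract, never up.
namespace MyApp.Business
{
    using MyApp.Contracts;

    public class CustomerService
    {
        private readonly ICustomerStore _store;

        public CustomerService(ICustomerStore store)
        {
            _store = store;
        }

        public string GetDisplayName(int id) => _store.GetById(id).Name;
    }
}

// Resource access assembly: implements the contract against the actual storage.
namespace MyApp.DataAccess
{
    using MyApp.Contracts;

    public class SqlCustomerStore : ICustomerStore
    {
        public CustomerDto GetById(int id) =>
            new CustomerDto { Id = id, Name = "stub" }; // real code would query the database
    }
}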
Without getting into all of the gory details, I am trying to design a service-based solution that will be consumed by several client applications. The solution allows admins to create and modify document templates which are used by regular users to perform data entry. It is my intent to make the application a learning tool for best practices, techniques, etc.
And, at the same time, I have to accommodate a schizophrenic environment because the 'powers that be' cannot ever stick to their decisions regarding technologies and tools. For example, I am using LINQ to SQL today because they aren't ready to go to EF4, but there is also discussion about switching over to NHibernate. So I have to make the code as persistence-ignorant as possible to minimize the work required should we change OR/M tools.
At this point, I am also limited to using the partial class approach to extend the Linq-to-SQL classes so they implement interfaces defined in my business layer. I cannot go with POCOs because management insists that we leverage all built-in tooling, etc. so I must support the Linq-to-SQL designer.
That said, my service interface has a StartSession method that accepts a template identifier in its signature. The operation flows like this:
If a session already exists in the database for the current user and specified template, update the record to show the current activity. If not, create a new session object.
The session is associated with an instance of the template, call it the "form". So if the session is new, I need to retrieve the template information to create the new "form", associate it with the session then save the session to the database. On the other hand, if the session already existed, then I need to also load the "form" with the data entered by the user and stored in the session previously.
Finally, the session (with form definition and data) is returned to the caller.
My first objective is to create clean separation between the logical layers of my application. The second is to maintain persistence ignorance (as mentioned above). Third, I have to be able to test everything so all dependencies must be externalized for easy mocking. I am using Unity as an IoC tool to help in this area.
To accomplish this, I have defined my service class and data contracts as needed to support the service interface. The service class will have a dependency injected from the business layer that actually performs the work. And here's where it has gotten messy for me.
I've been trying to go the Unit of Work and Repository route to help with persistence ignorance. I have an ITemplateRepository and an ISessionRepository, which I can access from my IUnitOfWork implementation. The service class gets an instance of my SessionManager class (in my BLL) injected. The SessionManager receives the IUnitOfWork implementation through constructor injection and delegates all persistence to the UoW, but I find myself playing a shell game with the various logic.
Should all of the logic described above be in the SessionManager class or perhaps the UoW implementation? I want as little logic as possible in the repository implementations because changing the data access platform could result in unwanted changes to the application logic. Since my repository is working against an interface, how do I best go about creating the new session (keeping in mind that a valid session has a reference to the template, er, form being used)? Would it be better to still use POCOs even though I have to support the designer and use a tool like AutoMapper inside the repository implementation to handle translating the objects?
Ugh!
I know I am just stuck in analysis paralysis, so a little nudge is probably all I need. What would be ideal is if someone could provide an example of how you would solve the problem given the business rules and architectural constraints I've defined.
If you don't use POCOs then you're not really going to be data-store agnostic. And using POCOs will allow you to get your system up and running with memory-based repositories, which are what you'll likely want to use for your unit tests anyhow.
AutoMapper sounds nice, but I wouldn't consider it a deal breaker. Mapping POCOs to EF4, LINQ to SQL, or NHibernate isn't that time-consuming unless you have hundreds of tables. When/if your POCOs begin to diverge from your persistence layer, you might find that AutoMapper won't really fit the bill.
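To illustrate the memory-based-repository point, a rough sketch (types invented for the example, not the poster's actual model):

using System;
using System.Collections.Generic;
using System.Linq;

// Persistence-ignorant POCO and repository interface.
public class Session
{
    public Guid Id { get; set; }
    public string UserName { get; set; }
    public string TemplateId { get; set; }
}

public interface ISessionRepository
{
    Session FindByUserAndTemplate(string userName, string templateId);
    void Add(Session session);
}

// In-memory implementation: enough to unit test SessionManager-style logic
// without touching LINQ to SQL, EF, or NHibernate.
public class InMemorySessionRepository : ISessionRepository
{
    private readonly List<Session> _sessions = new List<Session>();

    public Session FindByUserAndTemplate(string userName, string templateId) =>
        _sessions.FirstOrDefault(s => s.UserName == userName && s.TemplateId == templateId);

    public void Add(Session session) => _sessions.Add(session);
}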
We have a composite application built using the Composite UI Application Block (CAB)/Smart Client Software Factory (SCSF). To date, each module in our composite app has used its own set of DTO's, and business logic has been duplicated throughout the module, both in the UI layer and the Service layer. I would like to pursue more Domain-Driven approach in order to encapsulate business logic in a domain layer that can be distributed to the UI tier and the Service tier, and (ideally) across modules.
We have multiple modules in our composite application under development at one time, and we need to be able to deploy them in any order. Ideally, I would like for our modules to share a common domain model, but I'm afraid that when we deploy a new version of the domain model along with a module, that we will need to regression test the other modules against the domain model.
The alternative seems to be duplicating the domain model in each module, but all that code duplication smells funny to me. Has the industry developed any best practices for this type of situation?
I've used a single domain model, but one that allows versioning of every individual definition. Code generation provides both the per-service interfaces and mapping code that can cross service and version boundaries.