I am trying to understand the concept of layered architecture correctly. I think I understand most of the concepts, but there are one or two things that are still unclear to me...
So I understand the purpose of each individual layer: the presentation layer is for UI and user interaction, the application layer coordinates application logic and domain objects, the domain layer is where all of the business logic goes, and the infrastructure layer is for persistence and access to external services such as network capabilities etc.
I understand that each layer should only depend on the layers below it, and that layers should be separated by interfaces. What I have a hard time understanding, though, is the interaction between the domain layer and the infrastructure layer...
I am developing a database-centric desktop application. My initial approach was to pull all of the information out of the database into an in-memory domain model. The model would then live there for the duration of the application and be persisted to the database only once the user saves. I am having my doubts about this approach, though... From what I have read, it seems that I should only pull data out of the database as needed. In other words, pull information into an object, edit the object, and push the object back into the database once done.

What confuses me, though, is that the objects become disconnected from the database once they leave it. So let's say I pull object A out of the database and edit it. Before I commit it back to the database, another part of the application also pulls object A out of the database. Two copies of the same object are now living in my domain layer, with different references and different states. This just seems like a nightmare to me. Am I missing something? Should I only allow one transaction at a time to avoid this issue?
Was my initial approach of pulling all of the data into an in-memory model wrong? My application could have as many as 10 000 records, so I suppose it could lead to memory issues?
A side note: I am not using any ORM at this stage.
Any help or insight would be appreciated!
> the infrastructure layer is for persistence and access to external services such as network capabilities etc.
The persistence layer is for persistence; the infrastructure layer is usually a cross-cutting layer (i.e. it spans vertically across the other layers) which may include things like logging, or the external services you mentioned.
Your concern about data updates is a classic concurrency problem, not something specific to a layered architecture. There are many solutions; a typical one is optimistic concurrency control, which checks that the old values haven't been modified before updating them with the new values. A similar approach is to compare an auto-incrementing version number instead of the old values.
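For instance, here is a minimal sketch of that version check in plain JDBC (no ORM, matching your setup; the work_item table and column names are made up):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class WorkItemDao {

    // The update only succeeds if nobody bumped the version since we read it.
    public void update(Connection conn, long id, String newName, int readVersion)
            throws SQLException {
        String sql = "UPDATE work_item SET name = ?, version = version + 1 "
                   + "WHERE id = ? AND version = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, newName);
            ps.setLong(2, id);
            ps.setInt(3, readVersion);
            if (ps.executeUpdate() == 0) {
                // Zero rows matched: someone saved this row after we loaded it.
                throw new OptimisticLockException(
                        "work_item " + id + " was modified concurrently");
            }
        }
    }
}

class OptimisticLockException extends RuntimeException {
    OptimisticLockException(String message) { super(message); }
}
```

In your object A scenario, whichever editor commits second gets the exception and can reload, merge, or ask the user; nothing is silently overwritten.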
Related
I have a desktop app using Onion Architecture where the user should be able to create and edit local files on disk.
I have the following layers:
Presentation -> Application -> Domain
Infrastructure -> Application/Domain (implementing their interfaces)
I want to be able to present issues when working with files to the user, so the user can take some action, like choosing whether to overwrite a file if it already exists. But as I understand DDD, the application layer should have no knowledge of the persistence specifics in the Infrastructure layer. My thought was that an exception (e.g. FileExistsException) could be part of the interface contract in the Application layer and be thrown from the implementation in the Infrastructure layer, but then the Application layer would know about the storage type.
Is this perhaps OK, since working with files is part of the Application scope?
My question is mainly about exception handling, but I see that there could be other information to be shared as well.
Update:
To expand the question a bit and be more specific: the user works with data models that are saved as JSON files, so I have those data models in the domain, and the concept of a File is only used in the infrastructure layer when actually persisting/changing the file.
If, in the future, I would like to give the user the option to change the storage from local disk to a database, where they would get completely different types of exceptions to handle, would the necessary database-specific info also be added to the domain?
In other words, can implementation details be added to the domain if it is necessary for the user to interact with them, even though they are not necessarily part of the actual business?
In my mind, the way the user stores information is an implementation detail and should stay out of the domain?
Since File is a concept in your Domain, the FileExistsException should be inside your domain.
But the actual persistence mechanism should be in your infrastructure layer.
You can achieve that using a repository: in the Domain layer you define a FileRepository, an interface with some methods, and in the Infrastructure layer you define the actual implementation, for example a LocalDriveFileRepository.
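A rough sketch of that split, reusing the names above (everything else, like the DocumentFile type, is an assumption about your model):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical domain object holding the JSON data model.
record DocumentFile(String name, byte[] content) {}

// Domain layer: the contract only, no knowledge of disks or paths.
interface FileRepository {
    void save(DocumentFile file);
    boolean exists(String name);
}

// Infrastructure layer: the concrete persistence mechanism.
class LocalDriveFileRepository implements FileRepository {
    private final Path root;

    LocalDriveFileRepository(Path root) { this.root = root; }

    @Override
    public void save(DocumentFile file) {
        try {
            Files.write(root.resolve(file.name()), file.content());
        } catch (IOException e) {
            throw new UncheckedIOException("Could not persist " + file.name(), e);
        }
    }

    @Override
    public boolean exists(String name) {
        return Files.exists(root.resolve(name));
    }
}
```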
Update:
How you persist your data matters only in the infrastructure layer, so your application layer cannot handle an exception of type FileExistsException, because it should not have knowledge of anything outside the domain and application layers.
You should remap your infrastructure exceptions into domain exceptions.
For example, a FileExistsException could be remapped to a UserAlreadyPresentException, or some other exception that has meaning in your domain.
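A sketch of that remapping at the infrastructure boundary (the DocumentAlreadyExistsException name is made up; pick whatever has meaning in your domain):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Domain layer: an exception with business meaning, no mention of disks.
class DocumentAlreadyExistsException extends RuntimeException {
    DocumentAlreadyExistsException(String name) {
        super("A document named '" + name + "' already exists");
    }
}

class LocalDriveSaver {
    private final Path root;

    LocalDriveSaver(Path root) { this.root = root; }

    // Infrastructure layer: catch the storage-specific failure and remap it.
    void save(String name, byte[] content) {
        try {
            Files.write(root.resolve(name), content, StandardOpenOption.CREATE_NEW);
        } catch (FileAlreadyExistsException e) {
            throw new DocumentAlreadyExistsException(name);
        } catch (IOException e) {
            throw new UncheckedIOException("Could not persist " + name, e);
        }
    }
}
```

The application layer can then catch DocumentAlreadyExistsException and let the user decide whether to overwrite, without ever learning that the storage is a local disk.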
So I'll explain the problem through an example, as it makes everything more concrete and hopefully will reduce ambiguity.
The architecture is pretty simple:
1 microservice <=> 1 Aggregate <=> transactional boundary
Each microservice will be using the CQRS/ES design pattern, which implies:

- Each microservice will have its own Aggregate mapping the domain of a real-world problem
- The state of the aggregate will be rebuilt from an event store (a minimal sketch follows after this list)
- Each event will signify a state change within the aggregate and will be transmitted to any service interested in the change via a message broker
- Each microservice will be transactional within its own domain
- Each microservice will be eventually consistent with other domains
- Each microservice will build its own view models from events emitted by other microservices
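To make the "rebuilt from an event store" point concrete, here is a minimal sketch; the event names are made up:

```java
import java.math.BigDecimal;
import java.util.List;

// Hypothetical events emitted by the current-account aggregate.
sealed interface AccountEvent permits Deposited, Withdrawn {}
record Deposited(BigDecimal amount) implements AccountEvent {}
record Withdrawn(BigDecimal amount) implements AccountEvent {}

class CurrentAccount {
    private BigDecimal balance = BigDecimal.ZERO;

    // State is never stored directly; it is replayed from the event stream.
    static CurrentAccount rehydrate(List<AccountEvent> history) {
        CurrentAccount account = new CurrentAccount();
        history.forEach(account::apply);
        return account;
    }

    private void apply(AccountEvent event) {
        if (event instanceof Deposited d) {
            balance = balance.add(d.amount());
        } else if (event instanceof Withdrawn w) {
            balance = balance.subtract(w.amount());
        }
    }
}
```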
So for the example, let's say we have a banking system:

- current-account microservice is responsible for mapping the Customer's Current Account ... withdrawals, deposits
- rewards microservice will be responsible for inventory and stock-taking of any rewards being served by the bank
- air-miles microservice will be responsible for monitoring all the transactions coming from current-account and, in doing so, awarding the Customer rewards from our rewards microservice
So the problem is this: should the air-miles microservice make decisions based on its own view model, which is being updated from events coming from current-account, and similarly when picking which reward it should give out to the Customer?
Drawbacks of making decisions on local view models:

- Replicating domain logic on how to maintain these views
- Bugs within the view might cause the wrong rewards to be given out
- State changes (aka events emitted) based on corrupted view models could have consequences in other services that are making their own decisions on those events
Advantages of making decisions on local view models:

- The system doesn't need to constantly query the microservice owning the domain
- The system should be faster and less resource-intensive
Or should it use the events coming from the service to trigger queries to the Aggregate owning the domain? In doing so, we accept the fact that view models might get corrupted, but the final decision is always checked with the aggregate owning the domain.
Please note that the above problem is simply my understanding of the architecture, and the aim of this post is to get different views on how one might use this architecture effectively in a microservice environment, to keep each service decoupled yet avoid cascading corruption scenarios without too much chatter between the services.
> So the problem is this: should the air-miles microservice make decisions based on its own view model, which is being updated from events coming from current-account, and similarly when picking which reward it should give out to the Customer?
Yes. In fact, you should revise your architecture and even create more microservices. What I mean is that, this being an event-driven architecture (and an event-sourced one), your microservices have two responsibilities: they need to keep two different models, the write model and the read model.
So, for each Aggregate there should be a microservice that keeps only the write model, that is, one that only processes Commands, without also building a read model.
Then, for each read/query use case you should have a microservice that builds the perfect read model. This is required if you want to keep the Aggregate microservice clean (as you should), because in general the read models need data from multiple Aggregate types/bounded contexts. Read models may cross bounded-context boundaries; Aggregates may not. So you see, you don't really have a choice if you want to fully respect DDD.
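As a rough sketch of such a query-side microservice (all names here are invented for illustration), a projection that folds events from two different bounded contexts into one flat view:

```java
import java.math.BigDecimal;
import java.util.HashMap;
import java.util.Map;

// Hypothetical events from two different bounded contexts.
record TransactionRecorded(String customerId, BigDecimal amount) {}
record RewardRedeemed(String customerId, int cost) {}

// Flat read model: it exists only to answer queries, never to run commands.
class CustomerRewardsView {
    int miles;
}

// The query-side microservice folds both streams into one view.
class CustomerRewardsProjection {
    private final Map<String, CustomerRewardsView> store = new HashMap<>();

    void on(TransactionRecorded e) {   // event from current-account
        view(e.customerId()).miles += e.amount().intValue(); // toy miles rule
    }

    void on(RewardRedeemed e) {        // event from rewards
        view(e.customerId()).miles -= e.cost();
    }

    private CustomerRewardsView view(String id) {
        return store.computeIfAbsent(id, k -> new CustomerRewardsView());
    }
}
```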
Some say that domain events should be hidden, local only to the owning microservice. I disagree. In an event-driven architecture the domain events are first-class citizens; they are allowed to reach other microservices. This gives the other microservices the chance to build their own interpretation of the system state. Otherwise, the emitting microservice would have the impossible additional responsibility of building a state that matches every possible need of every microservice that will ever exist(!). For example, maybe a microservice wants to look up a deleted remote entity's title; how could it do that if the emitting microservice keeps only the list of not-yet-deleted entities? You may say: but then it will keep all the entities, deleted or not. But maybe someone needs the date an entity was deleted; you may say: but then I keep the deletedDate too. You see what you are doing? You break the Open/Closed Principle: every time you create a microservice, you need to modify the emitting microservice.
There is also the resilience of the microservices to consider. In The Art of Scalability, the authors talk about swim lanes: a strategy that separates the components of a system into lanes of failure, so that a failure in one lane does not propagate to the others. Our microservices are such lanes. Components in a lane are not allowed to access any component from another lane; one downed microservice should not bring the others down. This is not a matter of speed/optimisation, it's a matter of resilience. Domain events are the perfect means of keeping two remote systems synchronized. They also emphasize the fact that the data is eventually consistent; the events travel at a limited speed (from nanoseconds to even days). When a system is designed with that in mind, no other microservice can bring it down.
Yes, there will be some code duplication. And yes, although I said that you don't have a choice, you do have one: in order to reduce the code duplication at the cost of lower resilience, you can have some canonical read models that build a normal flat state, which other microservices can query. This is dangerous in most cases, as it breaks the swim-lane concept: should the canonical microservice go down, all dependent microservices go down with it. Canonical microservices work best for CRUD-like bounded contexts.
There are however valid cases when you may have some internal events that you don't want to expose. In other words, you are not required to publish all domain events.
> So the problem is this: should the air-miles microservice make decisions based on its own view model, which is being updated from events coming from current-account, and similarly when picking which reward it should give out to the Customer?
Each consumer uses a local replica of a representation computed by the producer.
So if air-miles needs information from current-account, it should be looking at a local replica of a view calculated by the current-account service.
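A small sketch of what that could look like (the summary event here is hypothetical); the point is that the consumer stores what the producer computed, it never re-derives it:

```java
import java.util.HashMap;
import java.util.Map;

// current-account publishes a ready-made summary of its own data.
record AccountSummaryPublished(String accountId, int transactionCount) {}

// air-miles only caches the producer's representation.
class AirMilesReplica {
    private final Map<String, Integer> summaries = new HashMap<>();

    // The replica is overwritten wholesale with whatever the producer sent.
    void on(AccountSummaryPublished event) {
        summaries.put(event.accountId(), event.transactionCount());
    }

    int transactionCountFor(String accountId) {
        return summaries.getOrDefault(accountId, 0);
    }
}
```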
The key idea is this: microservices are supposed to be isolated from one another; you should be able to redesign and deploy one without impacting the others.
So try this thought experiment: suppose we had these three microservices, but all saving snapshots of current state rather than events. Everything works; then imagine that the current-account maintainer discovers that an event-sourced implementation would better serve the business.
Should the change to the current-account require a matching change in the air-miles service? If so, can we really claim that these services are isolated from one another?
> Advantages of making decisions on local view models
I don't particularly like these "advantages": first, they are dominated by the performance axis (please recall that the second rule of performance optimization is "not yet"). And second, they assume that the service boundaries are correctly drawn; maybe the performance issue is evidence that the separation of responsibilities needs review.
So I started my second developer job after spending 10 years at my first company, never really feeling I had earned the title of senior developer. It was Java development, but we were working with an anemic domain model, and the application, in my opinion, was a huge, difficult-to-test mess. Unfortunately the code base I'm working with now is exactly the same, and I recently had another interview where the interviewer described their Hibernate model as being lightweight and just containing setters and getters as well. So it appears this is quite common across the industry.
There are plenty of articles describing the anemic domain model as an anti-pattern, and also some where it is described as perfectly fine for simple systems. But I haven't seen any examples of making the best of working with a large enterprise system with an ADM.
Has anyone had experience with this? Are there any best practices for creating a loosely coupled system that contains unit tests that are readable and actually valuable? I really want to take pride in my work but am losing hope.
Edit:
To the developers that advocate business logic being contained in services:
- How do you limit the dependencies on other services within each service? I.e. OrderCancelService needs CustomerAccountService and PaymentService and RefundCalculatorService and RewardsAdjustmentService, etc. This tends to lead to several mock objects in tests, making the tests way too tied to the implementation.
- How do you limit the number of parameters in each service's method? Since everything needs to be passed around and objects don't operate on their own data, this seems to lead to very large and confusing method signatures.
- Do you apply the tell-don't-ask principle to service objects? I see a lot of services that return values which are then used by the calling service to make decisions about execution flow (see the sketch below).
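As a sketch of the tell-don't-ask contrast (the refund rules here are invented purely for illustration):

```java
import java.math.BigDecimal;

enum CustomerTier { GOLD, STANDARD }

class Order {
    private BigDecimal total;
    private int daysSincePurchase;

    // Ask style would be: a service pulls getTotal()/getDaysSincePurchase()
    // out and computes the refund itself, spreading the policy across services.

    // Tell style: the object that owns the data makes the decision.
    BigDecimal refundFor(CustomerTier tier) {
        BigDecimal rate;
        if (daysSincePurchase <= 30) {
            rate = BigDecimal.ONE;            // full refund within 30 days
        } else if (tier == CustomerTier.GOLD) {
            rate = new BigDecimal("0.5");     // gold customers get half
        } else {
            rate = BigDecimal.ZERO;
        }
        return total.multiply(rate);
    }
}
```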
You may consider your persistence model, which you currently think of as the anemic domain model, as what it really is: the persistence model for your domain model's state.
If you do that, you can probably create a real domain model that has its state stored inside the persistence model objects (the State pattern). Then you can have your logic inside this new domain model. Reading your comment above, I would say you can convert your "manager/service" classes into state machines; you can call them aggregates if they match your transaction boundaries, and keep their state in POJOs persisted by Hibernate, as is done now.
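A minimal sketch of that idea, assuming an Order example (the names and statuses are made up; OrderState stands for your existing Hibernate-mapped POJO):

```java
// Existing anemic entity, left exactly as Hibernate maps it today.
class OrderState {
    String status;                  // e.g. "NEW", "SHIPPED", "CANCELLED"
    java.math.BigDecimal total;
}

// New rich domain object: the behavior lives here, the state lives in the POJO.
class Order {
    private final OrderState state;

    Order(OrderState state) { this.state = state; }

    void cancel() {
        if ("SHIPPED".equals(state.status)) {
            throw new IllegalStateException("Shipped orders cannot be cancelled");
        }
        state.status = "CANCELLED"; // the mutation flows through to persistence
    }
}
```

The Hibernate mapping stays untouched; tests can now exercise Order.cancel() against a plain in-memory OrderState.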
I am developing an application that displays a list of current work-orders to users. This list is 'live' in that it should automatically update whenever changes are made behind the scenes.
I am at the point where I need to implement the synchronization logic to keep the data in the list in sync. I am abstracting away the actual mechanism driving synchronization (e.g. polling, event-driven, etc.) so we can change approaches as needed, but I am stuck determining whether this logic belongs in the domain layer or the data layer.
Should data synchronization as described be 'hidden' in the data layer, or is it a domain concern that belongs in that layer?
Not the domain layer, in my personal experience, because it's highly coupled with the UI. Would you still need this mechanism if the work-orders list didn't need to be 'live'? Domain models should be relatively stable (unless the domain changes), not driven by the UI and applications.
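One way to keep it out of the domain is a data-layer contract like this sketch (all names assumed), so polling can later be swapped for an event-driven feed without touching the layers above:

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

record WorkOrder(long id, String title) {}

interface WorkOrderDao {            // assumed existing DAO
    List<WorkOrder> findAll();
}

// Data-layer contract: the layers above only see "the list changed".
interface WorkOrderFeed {
    void subscribe(Consumer<List<WorkOrder>> onChange);
}

// One interchangeable mechanism; an event-driven implementation could
// replace it without touching anything above the data layer.
class PollingWorkOrderFeed implements WorkOrderFeed {
    private final WorkOrderDao dao;
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    PollingWorkOrderFeed(WorkOrderDao dao) { this.dao = dao; }

    @Override
    public void subscribe(Consumer<List<WorkOrder>> onChange) {
        // Push a fresh snapshot every five seconds (interval chosen arbitrarily).
        scheduler.scheduleAtFixedRate(
                () -> onChange.accept(dao.findAll()),
                0, 5, TimeUnit.SECONDS);
    }
}
```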
If I am developing an application using DDD, where do the infrastructure and behavior components go? For example: user management, user-specific configuration, permissions, application menuing, etc.
These components really have nothing to do with the business requirements being fulfilled by my domain, but they are still required elements of my application. Many of them also require persistence.
It's pretty normal to have non-domain components alongside the domain in your project; after all, not everything is business-domain oriented. Where they belong actually depends on how you structure your solution. In most cases I tend to follow the Onion Architecture, so all of my logic is provided by Application Services, regardless of whether it's domain or non-domain oriented.
Well, if you find that your use cases rarely demand information from your core domain joined with application-specific data, you can probably split that off into a separate database. Access this information through the Application Service layer, since that layer is supposed to serve your application's needs. If that includes user-profile persistence etc., that's fine.
But remember that if you have an infrastructure failure and want to do a rollback using transaction logs or database backups, you'd probably want all persisted data to be rolled back together. In that case it's easier to have these domains share a database. Pros and cons; it's always a compromise...
If I knew that this application would have minor interaction with its environment, I would put everything in one database and let the application service layer interact with the clients.
If I knew that there would be several applications/clients, I might consider splitting the database so that web-application user specifics are stored in a separate database. It's very hard to say, since I have no overview of all the requirements.
/Magnus