The whole question was too long, so this is a shortened version. One of my aggregate roots handles projects and members in service A; another aggregate handles budgets and transactions in service B. To add a transaction, the user must be a member of the given project. Can I build a simplified Member aggregate in service B from the domain events (MemberAdded/MemberRemoved) published by service A, and ask it whether the user is a member before modifying the Transaction aggregate? This simplified Member aggregate would function as a domain service. Is this the DDD way of doing it, and is it OK with CQRS to call memberAggregate.IsMember() and then proceed to modify the TransactionAggregate within a single command?
My business model tolerates a delay between the real state of the Member aggregate in service A and its recreated, simplified version in service B.
From what I understand:
if a domain service is injected into an aggregate root, I lose domain logic purity
if a domain service is used in the command handler, I lose completeness, because part of the domain logic sits outside the domain layer
if I remodel my domain so that members and transactions live in a single aggregate root, I lose performance
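A minimal sketch of the idea described above, assuming the simplified membership state in service B is just a projection kept up to date from service A's MemberAdded/MemberRemoved events, consulted by the command handler before the Transaction aggregate is touched. All class and method names here are illustrative, not from any framework:

```python
class MembershipProjection:
    """Eventually consistent copy of project membership inside service B."""
    def __init__(self):
        self._members = {}  # project_id -> set of user_ids

    def apply_member_added(self, project_id, user_id):
        self._members.setdefault(project_id, set()).add(user_id)

    def apply_member_removed(self, project_id, user_id):
        self._members.get(project_id, set()).discard(user_id)

    def is_member(self, project_id, user_id):
        return user_id in self._members.get(project_id, set())


class AddTransactionHandler:
    """Command handler: checks the projection, then modifies the aggregate."""
    def __init__(self, membership, transactions):
        self._membership = membership
        self._transactions = transactions  # repository of Transaction aggregates

    def handle(self, project_id, user_id, amount):
        if not self._membership.is_member(project_id, user_id):
            raise PermissionError("user is not a member of this project")
        self._transactions.add(project_id, user_id, amount)
```

Note the check uses eventually consistent data, which is fine given the stated tolerance for delay; the invariant is enforced "as of the last event seen", not transactionally across services.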
Related
Let's say we have a business rule that states that once a product is purchased, you should update the inventory and generate an invoice. Which of the following should you use?
Should you use application services where you call something like UpdateInventory() followed by GenerateInvoice()? Should you call the same methods but put them in a domain service instead? Or should you just call UpdateInventory() which raises a domain event to generate the invoice?
What are the pros and cons of each?
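The third option (a domain event) can be sketched roughly like this; the EventBus, event class, and handler wiring are all assumptions made up for illustration, not any particular framework's API:

```python
class EventBus:
    """Toy in-process publish/subscribe bus keyed by event class name."""
    def __init__(self):
        self._handlers = {}

    def subscribe(self, event_type, handler):
        self._handlers.setdefault(event_type, []).append(handler)

    def publish(self, event):
        for handler in self._handlers.get(type(event).__name__, []):
            handler(event)


class ProductPurchased:
    def __init__(self, product_id, quantity):
        self.product_id = product_id
        self.quantity = quantity


class Inventory:
    """Updating inventory raises the event; invoicing is a separate handler."""
    def __init__(self, bus, stock):
        self._bus = bus
        self._stock = stock  # product_id -> units on hand

    def record_purchase(self, product_id, quantity):
        self._stock[product_id] -= quantity
        self._bus.publish(ProductPurchased(product_id, quantity))


invoices = []
bus = EventBus()
bus.subscribe("ProductPurchased", lambda e: invoices.append((e.product_id, e.quantity)))
```

The trade-off this makes visible: the inventory code never mentions invoicing (loose coupling), but the "purchase implies invoice" rule is now implicit in the subscription wiring rather than readable in one place, which is the usual argument against the event route.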
I have been assigned to design a layered microservices architecture for Azure Service Fabric, but since my experience has mostly been with monolithic architectures, I can't come up with a specific solution.
What I have come up with so far is this:
Data Layer - This is where all the Code First entities reside, along with the DbContext.
Business Layer - This is where the Service Managers perform and enforce the business logic, i.e. UserManager (IUserManager), OrderManager (IOrderManager), InvoiceManager (IInvoiceManager), etc.
WebAPI (Self-Hosted Inside Service Fabric) - Although this WebAPI runs inside Service Fabric, it does nothing except receive requests and call the respective services in Service Fabric. The WebAPI layer also handles authentication and authorization (ASP.NET Identity) before passing the call on to the other services.
Service Fabric Services - UserService, OrderService, InvoiceService. These services are invoked from the WebAPI layer and have the Business Layer (IUserManager, IOrderManager, IInvoiceManager) injected via DI to perform their operations.
Do you think this is okay to proceed with?
One theoretical issue, though: while reading several microservices architecture resources, I found that all of them suggest keeping the business logic inside the service so that the specific service can be scaled independently. So I believe I'm violating a basic aspect of microservices.
I'm doing this because the customer requires this Business Layer to be reused across several projects, such as batch jobs (Azure WebJobs) and a backend dashboard for internal employees (ASP.NET MVC). If I don't keep the Business Layer shared, I have to write the same business logic again for the WebJobs and the backend dashboard, which I don't think is a good idea: a simple change in business logic would then require code changes in several places.
One more concern: in that case I have to use service-to-service communication for ACID transactions. For example, while creating an Order, both the Order and the Invoice must be created. So I thought of using event-driven programming, i.e. the Order Service emits an event which the Invoice Service subscribes to, creating the invoice when an order is created. But the complication is: if the Invoice Service fails to create the invoice, it can either keep retrying indefinitely (which I think is a bad idea) or emit another event that the Order Service subscribes to in order to roll back the order. There can be a lot of confusion with this.
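One common middle ground between "retry forever" and "give up immediately" is a bounded retry followed by a compensating event, which is essentially a tiny saga. A hedged sketch of that idea, with all names invented for illustration:

```python
def handle_order_created(order, create_invoice, publish, max_attempts=3):
    """Try to create the invoice; after bounded retries, ask for a rollback.

    `create_invoice` and `publish` are injected stand-ins for the real
    invoice logic and the real message bus.
    """
    for attempt in range(max_attempts):
        try:
            create_invoice(order)
            return True
        except Exception:
            continue  # in a real system: back off (and log) before retrying
    # Final failure: publish a compensating event instead of retrying forever.
    # The Order Service subscribes to this and cancels the order.
    publish({"type": "InvoiceCreationFailed", "order_id": order["id"]})
    return False
```

This gives eventual consistency rather than an ACID transaction across services, which is the usual trade when the order and invoice live in different services; with your current single database, a plain local transaction would still cover both writes.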
Also, I should mention that we are using a single database as of now.
So my questions are...
What issues do you see with my approach? Is it okay?
If not, please suggest a better approach. Pointers to resources for implementation or conceptual details are welcome too.
NOTE: The client's requirement is to be able to scale specific modules on demand. For example, UserService might not be used much, as there won't be many signups or profile changes daily, but OrderService may need to scale out, as there can be lots of orders coming in every day.
I'd be glad to learn, as this is my first chance to get my hands on designing a microservices architecture.
First of all, why does the customer want to use Service Fabric and a microservices architecture when, at the same time, it sounds like there are other parts of the solution (WebJobs etc.) that will not be part of that architecture but rather live in their own ecosystem (yet share logic)? I think it would be good for you to first understand the underlying requirements that should guide the architecture. What is most important?
Scalability? Flexibility?
Development and deployment? Maintainability?
Modularity in ability to compose new solutions based on autonomous microservices?
The list could go on. Until you figure this out, there is really no point in designing further, as you don't know what you are designing for...
As for sharing business logic with WebJobs, there is nothing preventing you from sharing code packages containing the same BL; it doesn't have to be a shared service, and it doesn't have to be packaged the same way in relation to its interface or persistence. Another thing to consider: why do you want to run WebJobs at all when you can build similar functionality in SF services?
For the sake of the question, let's say I have 2 microservices.
Identity management
Accounting
I know that microservices should not be tightly coupled, and that each should have its own database.
Let's say that Accounting has invoices, and each invoice has an issuing agent.
An agent in Accounting also exists as a User in the Identity microservice.
If I understood correctly, data from Identity Management (users) should be copied to Accounting (agents), copying only the data needed in that bounded context (first and last name), so that the invoice can have a proper issuingAgentId.
Is this the correct way to keep data consistent and shared between contexts?
Each time a user is created in the Identity microservice, a "UserCreated" event is published, and Accounting (or any other service interested in this event) listens and processes it by adding the corresponding agent?
Same goes for updating user information.
This is one way to handle it, yes, and usually the preferred method. You keep a local cache in your service that holds copies of the data from the other service. In an event-driven system, this involves listening to events of interest and using them to update your local cache. The cache can be in-memory or persisted. For your use case: when raising an invoice, the Accounting context would look in its local cache for a user/agent id before creating the invoice.
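A minimal sketch of that local cache, assuming event payloads that carry just the fields Accounting cares about (the event shapes and names here are assumptions, not a real schema):

```python
class AgentCache:
    """Accounting's local, eventually consistent copy of Identity users."""
    def __init__(self):
        self._agents = {}  # user_id -> {"first_name": ..., "last_name": ...}

    def on_user_created(self, event):
        self._agents[event["user_id"]] = {
            "first_name": event["first_name"],
            "last_name": event["last_name"],
        }

    def on_user_updated(self, event):
        self.on_user_created(event)  # same projection logic for updates

    def agent_id_for(self, user_id):
        return user_id if user_id in self._agents else None


def raise_invoice(cache, user_id, amount):
    """Look up the agent in the local cache before creating the invoice."""
    agent_id = cache.agent_id_for(user_id)
    if agent_id is None:
        raise LookupError("unknown agent; the cache may be behind")
    return {"issuing_agent_id": agent_id, "amount": amount}
```

The `LookupError` branch is the honest part of the trade-off: the cache can lag behind Identity, so Accounting needs a policy for users it hasn't seen yet (reject, queue, or fall back to an RPC lookup).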
Other options:
Shared database
I know it is frowned upon (for good reason), but you can always share a database schema. For example, the Identity context writes to a user table, and the Accounting context reads from it when it needs an AgentId to put in an invoice. The trade-off is that you are coupling at the database level and introducing a single point of failure.
RPC
You can make an RPC call to another service when you need information. In your example, the Accounting context would call the Identity Management context for the AgentId/User information before raising an invoice. The trade-off with this approach is, again, coupling to the other service. What do you do when it is not available? You cannot raise an invoice.
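The availability trade-off can be made explicit in code. In this sketch, `fetch_user` stands in for a real HTTP client call (with a timeout) and is purely an assumption of the example:

```python
def get_agent_via_rpc(user_id, fetch_user):
    """Synchronously ask Identity Management for the user.

    Returns the agent info, or None when the remote service is down,
    forcing the caller to decide what "cannot raise an invoice" means.
    """
    try:
        user = fetch_user(user_id)  # e.g. GET /users/{id} with a short timeout
    except ConnectionError:
        # The coupling bites here: no Identity service, no agent lookup.
        return None
    return {"agent_id": user["id"], "name": user["name"]}
```

Compared with the event-driven cache, this gives you fresh data but ties Accounting's uptime (for invoicing) to Identity's uptime.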
Reporting domain
Another option is to have a completely separate service that listens for data from other services and maintains view models for UIs. This keeps your other services ignorant of each other's concerns. In an event-driven system, you'd be listening for events from other services that allow you to build a view model for the UI. This is usually a good option if all you are doing is viewing the data.
An Application Service fulfills the commands issued by clients (i.e. the presentation layer) by making and coordinating calls to workflows, infrastructure services, domain services, and domain entities.
Is it common practice to also have a few Domain Services that do a similar job to Application Services, meaning they also make and coordinate calls, the only difference being that they do it at a more fine-grained level (i.e. they only make and coordinate calls to other Domain Services and Domain Objects)?
If yes, any ideas how fine-grained should these Domain Services be?
Thank you
Domain Services contain domain logic that doesn't particularly fit into any Entity, or that spans across several entities.
One often quoted example is a FundsTransferService. Transferring funds doesn't seem like the responsibility of the BankAccount entity, because that would mean the source account can modify the target account's balance (or the other way around) which seems awkward and might be dangerous. A dedicated TransferFunds() method in a FundsTransferService allows for better separation of concerns and channels all funds movements in a single place.
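The FundsTransferService example can be sketched in a few lines. The entity keeps its own invariants (no overdraft), while the domain service owns the cross-entity rule of moving money; the names are illustrative:

```python
class BankAccount:
    """Entity: guards its own invariant (balance never goes negative)."""
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

    def deposit(self, amount):
        self.balance += amount


class FundsTransferService:
    """Domain service: the single place where both accounts move together."""
    def transfer_funds(self, source, target, amount):
        source.withdraw(amount)
        target.deposit(amount)
```

Neither account ever touches the other's balance directly, which is exactly the awkwardness the domain service is there to remove.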
In that regard, you could say Domain Services coordinate calls on entities and other services, but not in the same sense that Application layer services do IMO. Oftentimes, Application layer services are just boilerplate procedural code while Domain services contain real business rules.
only difference being they do it at a more fine-grained level
I wouldn't say Domain services are more fine-grained than Application services. They are simply in different layers. It's like saying that a Repository is finer-grained than a Controller... granularity usually measures to what extent one cohesive operation is split into smaller parts versus kept as one big procedure.
I have 3 separate services using different databases, with REST interfaces:
First service: Information about Customers
Second service: Information about Customer Trades
Third service: Information about Customer Documentation
Problem:
Every customer has a Status that should be evaluated based on their trades and documents.
Which service should be responsible for this evaluation and how should I implement the orchestration between the other services?
If you can, I'd create a 4th service. That way you have a service that returns exactly what you need, avoiding the problem (and the chattiness) of calling two services and merging the result sets. If you can't create a 4th service, maybe write a proxy service that, through one call, calls the other two services and caches data where possible, to help cut down on repeated calls for commonly queried customers.
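The shape of that 4th service is just a composition over the other two. In this sketch, the REST calls are represented by injected callables, and the status rule itself is invented purely for illustration:

```python
def evaluate_status(customer_id, get_trades, get_documents):
    """Compose Trades and Documentation data into one customer status.

    `get_trades` / `get_documents` stand in for the REST clients of the
    two existing services (e.g. GET /trades?customer=..., with caching
    applied inside them if desired).
    """
    trades = get_trades(customer_id)
    documents = get_documents(customer_id)
    # Hypothetical rule: no documents -> incomplete; otherwise status
    # depends on whether the customer has any trades.
    if not documents:
        return "incomplete"
    return "active" if trades else "inactive"
```

The caller makes one request to this service instead of two, and the evaluation rule lives in exactly one place, which is the main point of introducing it.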