I wonder if you can help. I am writing an order system and currently have implemented an order microservice which takes care of placing an order. I am using DDD with event sourcing and CQRS.
The order service takes in commands that produce events, and it also listens to its own events to build a read model (the idea being CQRS: commands for writes, queries for reads).
After implementing the above, I ran into a problem, and it's probably just that I am not fully understanding the correct way of doing this.
An order actually has dependents: an order needs a customer and one or more products. So I will have two additional microservices, for customers and products.
To keep things simple, I would like to concentrate on the customer (although I have exactly the same issue with products, my thinking is that fixing the customer issue automatically fixes the other one too).
So back to the problem at hand. To create an order, the order needs a customer (and products). I currently have the customerId on the client, so when sending a command down to the order service I can pass in the customerId.
I would like to save the name and address of the customer with the order. How do I get the name and address for that customerId from the Customer Service in the Order Service?
I suppose, to summarize: when one service needs data from another service, how do I get that data?
Would it be a case of the order service creating an event to request a customer record? That is going to introduce a lot of complexity (more events) into the system.
The microservices are NOT coupled so the order service can't just call into the read model of the customer.
Anybody able to help me on this ?
If you are using DDD, first of all please read about bounded contexts. Forget microservices; they are just an implementation strategy.
Now back to your problem. Publish these events from the Customer aggregate (in your case, the Customer microservice): CustomerRegistered, CustomerInfoUpdated, CustomerAccountRemoved, CustomerAddressChanged, etc. Then subscribe your Order service (again, in your case, the application service inside the Order microservice) to all of the above events. Okay, not all of them, just the ones the order needs.
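As a sketch of that subscription, with an in-memory read model and illustrative event shapes (all names here are assumptions, not the poster's actual code):

```python
# The Order service keeps a local, denormalized copy of customer data
# by handling events published by the Customer service.
customers_read_model = {}  # stand-in for a local table inside the Order service

def handle_customer_registered(event):
    """CustomerRegistered: store only the fields orders actually need."""
    customers_read_model[event["customer_id"]] = {
        "name": event["name"],
        "address": event["address"],
    }

def handle_customer_address_changed(event):
    """CustomerAddressChanged: keep the local copy eventually consistent."""
    customers_read_model[event["customer_id"]]["address"] = event["new_address"]

def place_order(customer_id, product_ids):
    # No network call to the Customer service: the data is already local.
    customer = customers_read_model[customer_id]
    return {
        "customer_id": customer_id,
        "customer_name": customer["name"],
        "shipping_address": customer["address"],
        "product_ids": product_ids,
    }
```

The point of the sketch: by the time an order command arrives, the customer's name and address are already sitting in the Order service's own store.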
Now you may have a question: what if some or even most of my customers never place orders? My order service will be full of unnecessary data. Is this a good approach?
Well, the answer may vary. I would say that disk space is cheaper than memory, and from a performance perspective a local database query is faster than a network call. If your database host (or your server) is limited, then you should not go with microservices in the first place. Moreover, I would build some business ideas on top of that unused customer data, e.g. list all the customers who never ordered anything and send them offers to grow the business. Just kidding. Don't feel bothered by unused data in microservices.
My suggestion would be to gather the required data on the front-end and pass it along. The relevant customer details that you want to denormalize into the order would be a value object. The same goes for the product data (e.g. id, description) related to the order line.
It isn't impossible to have the systems interact to retrieve data, but that couples them at a lower level than seems necessary.
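A minimal sketch of the value-object approach described above, with all type and field names assumed for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: value objects are immutable
class CustomerDetails:
    customer_id: str
    name: str
    address: str

@dataclass(frozen=True)
class OrderLine:
    product_id: str
    description: str
    quantity: int

@dataclass
class Order:
    order_id: str
    customer: CustomerDetails  # denormalized copy, captured at order time
    lines: list

# The SPA already displayed the customer, so it can send these fields
# down with the PlaceOrder command:
order = Order(
    order_id="o-1",
    customer=CustomerDetails("c-42", "Ada Lovelace", "1 Main St"),
    lines=[OrderLine("p-7", "Widget", 1)],
)
```

Because the value object is a snapshot, later changes to the customer record do not rewrite history on existing orders, which is usually what you want for shipping addresses.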
When data from one service needs data from another service, how am I able to get this data?
You copy it.
So somewhere in your design there needs to be a message that carries the data from where it is to where it needs to be.
That could mean that the order service is subscribing to events that are published by the customer service, and storing a copy of the information that it needs. Or it could be that the order service queries some API that has direct access to the data stored by the customer service.
Queries for the additional data that you need could be synchronous or asynchronous - maybe the work can be deferred until you have all of the data you need.
Another possibility is that you redesign your system so that the business capability you need is with the data, either moving the capability or moving the data. Why does ordering need customer data? Can the customer service do the work instead? Should ordering own the data?
There's a certain amount of complexity inherent in your decision to distribute the work across multiple services. Distributing your system means weighing various trade-offs.
Let's say you have an application where you can create a bet on a coin toss. Your account has a balance that was funded with your credit card.
The sequence of events is the following:
POST /coin_toss_bets { amount: 5 USD }
Start transaction/acquire locks inside the Bet subdomain useCase
Does the user have enough balance? (check the accounting aggregate's balance projection of the user's deposits)
Debit the user's account for 5 USD
Create bet/flip the coin to get a result
Pay out the user if they bet on the correct side
Commit transaction
UI layer is given the bet and displays an animation
My question is how this can be modeled with two separate bounded contexts (betting/accounting). It's said that database transactions should not cross a bounded context, since the contexts can be located on different machines/microservices, but in this scenario the use case of creating a bet relies heavily on a non-dirty read of the user's projected account balance (strong consistency).
There is also no way to perform a compensating action if the account is over-debited, since the UI layer requires that the bet is created atomically.
Is there any way to do this with CQRS/Event Sourcing that doesn't require asking for the user's account balance inside the betting subdomain? Or would you always have to ensure that the balance projection is correct inside this transaction (i.e. they must be deployed together)?
Ensuring that the account has sufficient balance for a transaction seems to be an invariant business rule in your case. So let us assume that it cannot be violated.
Then the question is simply about how to handle "transactions" that span bounded contexts.
DDD does say that transactions (invariant boundaries) should not cross a Bounded Context (BC). The rule applies even at the level of aggregates. But the correct way to read it is: a transaction as part of a single request.
The best way to deal with this scenario is simply to accept the request from the UI to place a bet and return a "202 Accepted" status, along with a unique job tracker ID. The only database interaction during request processing should be persisting the data into a "Jobs" table and probably triggering a "BET_PLACED" domain event.
You would then process the bet asynchronously. Yes, the processing would still involve calling the Accounts bounded context, but through its published API. Since you are no longer in the context of a request, the processing time need not fit the usual constraints.
Once the processing is complete, either the UI can refresh the page at regular intervals to update the user, or you can send a push notification to the browser.
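A compressed sketch of that flow, using an in-memory stand-in for the "Jobs" table and a fake Accounts API; every name here (jobs, process_bet, debit) is an assumption for illustration:

```python
import uuid

jobs = {}  # stand-in for the persisted "Jobs" table

def accept_bet_request(user_id, amount):
    """Request handling: persist the job, emit BET_PLACED, return 202."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "PENDING", "user_id": user_id, "amount": amount}
    # publish_event("BET_PLACED", job_id)  # picked up by an async worker
    return 202, {"job_id": job_id}

def process_bet(job_id, accounts_api):
    """Async worker: talks to the Accounts BC only through its published API."""
    job = jobs[job_id]
    if accounts_api.debit(job["user_id"], job["amount"]):
        job["status"] = "BET_CREATED"
    else:
        job["status"] = "REJECTED_INSUFFICIENT_FUNDS"

class FakeAccountsApi:
    """Test double for the Accounts bounded context."""
    def __init__(self, balance):
        self.balance = balance
    def debit(self, user_id, amount):
        if self.balance >= amount:
            self.balance -= amount
            return True
        return False
```

Note that the insufficient-funds case no longer needs a compensating action: the bet simply never comes into existence, and the UI learns the outcome when it polls the job ID.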
Here's an abstract question with real world implications.
I have two microservices; let's call them the CreditCardsService and the SubscriptionsService.
I also have a SPA that is supposed to use the SubscriptionsService so that customers can subscribe. To do that, the SubscriptionsService has an endpoint where you can POST a subscription model to create a subscription, and in that model is a creditCardId that points to a credit card that should pay for the subscription. There are certain business rules that determine whether you can use a given credit card for the subscription (the expiration is more than 12 months away, it's a VISA, etc.). These specific business rules are tied to the SubscriptionsService.
The problem is that the team working on the SPA wants a /CreditCards endpoint in the SubscriptionsService that returns all of the user's valid credit cards that can be used in the subscription model. They don't want to re-implement in the SPA the same business validation rules that already live in the SubscriptionsService.
To me this seems to go against the SOLID principles that are central to microservice design, specifically separation of concerns. I also ask myself: what precedent is this going to set? Are we going to have to add a /CreditCards endpoint to the OrdersService or any other service that might use creditCardId as a property of its model?
So the main question is this: What is the best way to design this? Should the business validation logic be duplicated between the frontend and the backend? Should this new endpoint be added to the SubscriptionsService? Should we try to simplify the business logic?
It is a completely fair request, and you should offer that endpoint. If you define the rules for which credit cards are valid for your service, then you should offer any and all help dealing with them too.
Logic should not be repeated. That tends to make systems unmaintainable.
This has less to do with SOLID, although the SRP would also say that if you are responsible for something, then any related logic belongs to you as well. This concern cannot be separated from your service, since it is defined there.
As a solution option, I would perhaps look into whether you can get away with linking to the CC Service, since you already have one. Could you redirect the client, perhaps with a constructed query, to the CC Service to get all relevant CCs, without the Subscription Service actually knowing them?
What is the best way to design this? Should the business validation logic be duplicated between the frontend and the backend? Should this new endpoint be added to the SubscriptionsService? Should we try to simplify the business logic?
From my point of view, I would integrate the "Subscription BC" (S-BC) with the "CreditCards BC" (CC-BC). CC-BC is upstream and S-BC is downstream. You could do it with a REST API in the CC-BC, or with a message queue.
But what I would validate is the operation done with a CC, not the CC itself, i.e. validate "is this CC valid for a subscription". And that validation lives in the S-BC.
If the SPA wants to retrieve "the CCs a user can use for a subscription", that is functionality of the S-BC.
The client (SPA) should call the S-BC API to use that functionality, and the S-BC performs it by getting the CCs from the CC-BC and doing the validation.
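A sketch of how the S-BC could own the validation while fetching cards from the upstream CC-BC. The rule itself (VISA, more than 12 months to expiry) comes from the question; all function and field names are assumptions:

```python
from datetime import date

def is_valid_for_subscription(card, today=None):
    """Business rule owned by the Subscriptions BC, not by the CC-BC."""
    today = today or date.today()
    months_left = ((card["expires"].year - today.year) * 12
                   + (card["expires"].month - today.month))
    return card["scheme"] == "VISA" and months_left >= 12

def credit_cards_for_subscription(user_id, cc_bc_client, today=None):
    """What a GET /CreditCards endpoint inside the S-BC would do."""
    all_cards = cc_bc_client.cards_for_user(user_id)  # upstream CC-BC API
    return [c for c in all_cards if is_valid_for_subscription(c, today)]
```

The SPA then hits one endpoint and gets back only usable cards, without ever seeing the rule.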
In microservices and DDD, the subscriptions service should have a credit cards endpoint if that data is relevant to the bounded context of subscriptions.
The creditcards endpoint might serve a slightly different model of the data than you would find in the credit cards service itself, because in the context of subscriptions a credit card might look or behave differently. The subscriptions service would probably have a creditcards table or backing store to hold its own schema of credit cards, and would refer to some source of truth to keep that data in good shape (for example, messages about card events on a bus, or some other mechanism).
This enables three things. First, the subscriptions service won't be completely knocked out if the cards service goes down for a while; it can refer to its own table and keep working. Second, your domain code will be more focused, as it only has to deal with the properties of credit cards that really matter to the problem at hand. Finally, your cards store can even have extra domain-specific properties that are computed and materialized on store.
Obligatory Fowler link: Bounded Context Pattern
Even when the source of truth is the domain model and ultimately you must have validation at the domain model level, validation can still be handled at both the domain model level (server side) and the UI (client side).
Client-side validation is a great convenience for users. It saves time they would otherwise spend waiting for a round trip to the server that might return validation errors. In business terms, even a few fractions of a second multiplied hundreds of times each day adds up to a lot of time, expense, and frustration. Straightforward and immediate validation enables users to work more efficiently and produce better quality input and output.
Just as the view model and the domain model are different, view model validation and domain model validation might be similar but serve a different purpose. If you are concerned about DRY (the Don't Repeat Yourself principle), consider that in this case code reuse might also mean coupling, and in enterprise applications it is more important not to couple the server side to the client side than to follow the DRY principle. (from the .NET Microservices: Architecture for Containerized .NET Applications book)
I am developing an application using the microservices approach with the MEAN stack. I am running into situations where data needs to be shared between multiple microservices. For example, let's say I have user, video, and message (sending/receiving, inbox, etc.) services. The video and message records belong to an account record: as users create videos and send/receive messages, a foreign key (userId) has to be associated with the video and message records they create.
I have scenarios where I need to display, for example, the first, middle and last name associated with each video. Now say that on the front end a user is scrolling through a list of videos uploaded to the system, 50 at a time. In the worst case, each of those 50 videos is tied to a unique user.
There seem to be two approaches to this issue:
One: I make an API call to the user service to get each user tied to each video in the list. This seems inefficient, as it could get really chatty if I make one call per video. In a second variant of the API-call approach, I would get the list of videos and send a distinct list of user foreign keys in a single query to fetch each user tied to each video. This seems more efficient, but it still feels like I am losing performance putting everything back together for display or whatever manipulation is needed.
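The batched variant of approach one can be sketched like this; the service interface and field names are assumptions for illustration:

```python
def videos_with_authors(videos, user_service):
    """Join 50 videos to their authors with ONE batched call, not 50."""
    distinct_ids = {v["userId"] for v in videos}   # at most 50 unique IDs
    users = user_service.get_users(distinct_ids)   # single batch request
    by_id = {u["id"]: u for u in users}
    return [
        {**v, "author": "{firstName} {middleName} {lastName}".format(**by_id[v["userId"]])}
        for v in videos
    ]
```

The in-memory join is cheap; the cost that matters is the one network round trip instead of fifty.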
Two: whenever a new user is created, the account service sends a message with the user information each other service needs to a fanout queue, and then it is the responsibility of the individual services to add the new user to a table in their own database, thus maintaining loose coupling. The extreme downside here is the data duplication, plus needing the fanout queue to handle updates to ensure eventual consistency. Though, in the long run, this approach seems like it would be the most efficient from a performance perspective.
I am torn between these two approaches, as they both have their share of tradeoffs. Which approach makes the most sense to implement and why?
I'm also interested in this question.
First of all, the scenario you described is very common. Users, videos and messages are definitely three different microservices. There is no issue with how you broke the system down into pieces.
Secondly, there are multiple options for solving the data-sharing problem. Take a look at this great article from Auth0: https://auth0.com/blog/introduction-to-microservices-part-4-dependencies/
Don't restrict your design decision to the two options you've outlined. The hardest thing about microservices is getting your head around what a service is and how to cut your application into chunks/services that make sense to implement as "microservices".
Just because you have those three entities (user, video and message) doesn't mean you have to implement three services. If your actual use case shows that these services (or entities) depend heavily on each other to fulfil a simple request from the front end, then that's a clear signal that your cut was not right.
From what I see in your example, I'd design one microservice that fulfills the request. Remember that one of the design fundamentals of a microservice is to be as independent as possible.
There's no need to overcomplicate services; it's not SOA.
https://martinfowler.com/articles/microservices.html -> great read!
Regards,
Lars
Our application sends/receives a lot of data to/from a third party we work with.
Our domain model is mainly populated with that data.
The 'problem' we're having is identifying a 'good' candidate for the domain identity of the aggregate.
It seems like we have 3 options:
Generate a domain identity (UUID or DB-sequence...);
Use the External-ID that comes along with all data from the external source as the domain identity;
Use an internal domain identity AND the External-ID as a separate id that might be used for retrieval operations; the internal id is always leading.
About the External-ID:
It is 100% guaranteed that the ID will never change
The ID is always managed by the external source
Other domains in our system might use the external-id for retrieval operations
Especially the last point convinced us that the external-id is not an infrastructural concern but really belongs to the domain.
Which option should we choose?
** UPDATE **
Maybe I was not clear about the term '3rd party'.
Actually, the external source is our client, who is active in the car industry. Our application uses the client's master data to complete several 'things'. We have several Bounded Contexts (BCs) like 'Client management', 'Survey', 'Appointment', 'Maintenance', etc.
Our client sends us 'Tasks' that describe something that needs to be done.
That 'something' might be:
'let client X complete survey Y'
'schedule/cancel appointment for client X'
'car X for client Y is scheduled for maintenance at position XYZ'
Those 'Tasks' always have a 'task-id' that is guaranteed to be unique.
We store all incoming 'Tasks' in our database (active-record style). Every possible action on a task maps to a domain event. (Multiple BCs might be interested in the same task.)
Every BC contains one or more aggregates that distribute some domain events to other BCs. For instance, when an appointment is canceled, a domain event is triggered; maintenance listens to that event to get some things done.
However, our client expects a message after every action related to a Task. Therefore we always need to use the 'task-id'.
To summarize things:
Tasks have a task-id
Tasks might be related to multiple BCs
Every BC sends some 'result message' to the client with the related task-id
Task-ids are distributed by domain events
We keep every (internally) persisted task up-to-date
Hopefully, I was clear enough about the use of the external-id (= task-id) and our different BCs.
My gut feeling would be to manage your own identity and not rely on a third-party service for this, so option 3 above. It is difficult to say without context, though. What is the 3rd-party system? What is your domain?
Would you ever switch the 3rd-party service?
You say other parts of your domain might use the external id for querying: what are they querying? Your internal systems or the 3rd-party service?
[Update]
Based on the new information, it sounds like a correlation ID. I'd store it alongside the other information relevant to the aggregates.
As a general rule, I would veto using a DB-sequence number as an identifier; the domain model should be independent of the choice of persistence. The domain model writes the identifier to the database, rather than the other way around (if the DB wants to track a sequence number for its own purposes, that's fine).
I'm reluctant to use the external identifier, although it can make sense in some circumstances. A given entity, like "Customer" might have representations in a number of different bounded contexts - it might make sense to use the same identifier for all of them.
My default: I would reach for a name-based UUID, using the external ID as part of the seed, which gives a simple mapping from external to internal.
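A sketch of that mapping with Python's name-based (version 5) UUIDs; the namespace seed is an assumption:

```python
import uuid

# Fixed namespace for this integration; any stable seed works.
EXTERNAL_NS = uuid.uuid5(uuid.NAMESPACE_DNS, "tasks.example.com")

def internal_id(external_id: str) -> uuid.UUID:
    """Deterministic: the same external task-id always maps to the
    same internal identity, with no lookup table required."""
    return uuid.uuid5(EXTERNAL_NS, external_id)
```

Because the function is deterministic, any BC that receives a task-id can recompute the internal identity independently, which keeps the external id out of the aggregates themselves.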
I have a Meeting Object:
Meeting{id, name, time, CreatedBy, UpdatedBy}
and a
MeetingAssignee{id, MeetingID, EmployeeId, CreatedBy, UpdatedBy}
Meeting, as the aggregate root, has a method AssignEmployee.
I was about to pass the current user to the Meeting object when calling AssignEmployee, so that it can update its audit fields accordingly.
But this doesn't seem right. Is it? Obviously I could keep the audit fields public and change them later, perhaps at the service level.
What is everyone else's preferred method for updating these fields?
Please note: we are not using NHibernate, but a custom ORM that has nothing automatic in place.
Thanks.
Auditing and logging are fun, as they are usually needed everywhere in the application and they are both requirements (logging is a requirement from the Ops guys).
Without knowing much about your model, and since auditing must be a requirement, I would pass the current user to AssignEmployee, and instead of having a line there that says AuditBlahBlahBlah, I would raise an event (maybe MeetingUpdated or AssigneeAdded... you'll find a good name) that gets dispatched to the class that does the auditing. This way the Meeting class has no clue about auditing and simply dispatches business events for auditing purposes (which, in my view, is very DDDish).
I wonder what other people might say (hopefully I can learn something new!).
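A minimal sketch of that idea; the dispatcher and the AssigneeAdded event shape are assumptions, not a prescribed design:

```python
audit_log = []  # where the audit handler records events

# A trivial in-process dispatcher: event type -> list of handlers.
handlers = {"AssigneeAdded": [lambda e: audit_log.append(e)]}

def dispatch(event):
    for handler in handlers.get(event["type"], []):
        handler(event)

class Meeting:
    def __init__(self, meeting_id, name):
        self.id, self.name = meeting_id, name
        self.assignees = []

    def assign_employee(self, employee_id, current_user):
        self.assignees.append(employee_id)
        # The aggregate knows nothing about auditing; it just announces
        # what happened, including who did it.
        dispatch({"type": "AssigneeAdded", "meeting_id": self.id,
                  "employee_id": employee_id, "by": current_user})
```

The audit fields never appear on the Meeting itself; the handler can persist created-by/updated-by in whatever schema the Ops guys want.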
Consider using domain events.
Everything interesting in your domain model should raise an event shouting aloud about what has just happened. From the outside, just attach a log handler that dumps events into the db or somewhere else.
That way you don't need to mess up your domain with some kind of IAuditService.
Even better, the domain model can use eventing as a way to communicate within itself.
To show why that's a good idea, visualize that we are describing a domain model of morning, sunrise and flowers.
Is it the responsibility of the sun to tell all the flowers that they should open? Not really. The sun just needs to shine brightly enough (raise an event), light must travel down to earth (there must be some kind of infrastructure that makes eventing possible), and the flowers must react themselves when receiving light (other domain models should handle events).
Another analogy: it is the responsibility of the driver to see the color of the traffic lights.
You could make the call to the audit service from the service layer when persisting or updating the entities, with the audit service injected into any services that require audit functionality, and persist the newly created entities as quickly as possible.
I see how it could be hard to work out how and when to audit, especially if your entities exist as usable entities in the system for some time before being persisted. Even then, maybe you could create in-memory audit data containing the details of their creation and persist that when the entities are eventually persisted. Or have the created-by, created-on, modified-by, modified-on, etc. data set as private fields in the entity and written out to an audit log when the entity is persisted?
I'd be interested in what the trade-offs would be.
I think the auditing properties are not a concern of your domain model. If all the use cases in your application service layer use the domain model to make changes to the system, and the aggregate roots publish everything that happens as domain events, then later on you can implement a handler for any event and save the events in whatever audit-log format you need.