CQRS and CRUD screens

One of the basic tenets of CQRS, as I understand it, is that commands should be behaviour-centric and have value in the business or the ubiquitous language, rather than data-centric, i.e., CRUD. Instead of focusing on updating a customer, we have commands like CustomerHasMoved. What if you have CRUD screens whose purpose is to correct certain data? For example, we need to change the name of a customer because it is misspelled. That doesn't really have much value in the business. Should it just fall under the umbrella of an UpdateCustomer command?

I just want to put a comment on this quickly as it popped up.
It is important to note that some objects actually are CRUD, and that's OK. I may not really care why a name is changing in my domain where I ship products to people and only need that data to print mailing labels. The trick is in making behavior the default and THEN reverting to a CRUD interface once you are sure you really don't care about the reasons, as opposed to vice versa.
Greg

Actually, there could be various reasons to update the name of a customer. As you were saying, it could be misspelled, or... you could get married and change your name to your husband's.
If you had only an UpdateCustomer command, you would lose the original intent and you would not be able to have different behaviours for each of them. If the name was misspelled it could be as simple as updating the database, whereas if your customer got married you might need to notify the marketing department so that they can offer a discount.
In the case that your entity is purely CRUD, that is, there is no intent you can associate with modifying its properties, then it's OK to have an UpdateEntityCommand. You can then transition slowly to something more task-based.
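To make the contrast concrete, here is a minimal sketch in TypeScript, assuming hypothetical command names (CorrectCustomerName and RecordCustomerMarriage are illustrative, not taken from the discussion above):

    // Generic, data-centric command: the reason for the change is lost.
    interface UpdateCustomer {
      type: "UpdateCustomer";
      customerId: string;
      name: string;
    }

    // Intent-revealing, task-based commands: each can trigger different behaviour.
    interface CorrectCustomerName {
      type: "CorrectCustomerName";      // fixing a typo; just update the data
      customerId: string;
      correctedName: string;
    }

    interface RecordCustomerMarriage {
      type: "RecordCustomerMarriage";   // a business fact; marketing may want to react
      customerId: string;
      newName: string;
    }

    type CustomerCommand = UpdateCustomer | CorrectCustomerName | RecordCustomerMarriage;

    function handle(cmd: CustomerCommand): void {
      switch (cmd.type) {
        case "CorrectCustomerName":
          // simple data correction, no side effects
          break;
        case "RecordCustomerMarriage":
          // also notify the marketing department so they can offer a discount
          break;
        case "UpdateCustomer":
          // intent unknown; all we can do is overwrite the data
          break;
      }
    }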

CustomerHasMoved is the event that is fired after you have updated the customer's location. This event updates the read/cache databases. The command from the GUI should be MoveCustomer or something like that. I think I would put the update of the customer name in a command like UpdateCustomer.
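As a small illustration of that command/event split (a sketch only; the handler signature and validation step are assumptions):

    // Command: imperative, expresses what the user wants to happen.
    interface MoveCustomer {
      customerId: string;
      newAddress: string;
    }

    // Event: past tense, records what has happened; read models subscribe to it.
    interface CustomerHasMoved {
      customerId: string;
      newAddress: string;
      movedAt: Date;
    }

    // Hypothetical handler: validate the command, then emit the event.
    function handleMoveCustomer(cmd: MoveCustomer, publish: (e: CustomerHasMoved) => void): void {
      // ...domain validation would go here...
      publish({ customerId: cmd.customerId, newAddress: cmd.newAddress, movedAt: new Date() });
    }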


How to ensure data consistency between two different aggregates in an event-driven architecture?

I will try to keep this as generic as possible using the “order” and “product” example, to try and help others that come across this question.
The Structure:
In the application we have three different services: two that follow the event sourcing pattern and one that is read-only, giving us the separation between our read and write views:
- Order service (write)
- Product service (write)
- Order details service (Read)
The Background:
We are currently storing the relationship between the order and the product in only one of the write services. For example, within Order we have a property called 'productItems', which contains a list of the aggregate IDs from Product for the products that have been added to the order. Each product added to an order is emitted onto Kafka, where the read service updates the view and forms the relationships between the data.
 
The Problem:
Because we pull back the order and the product by aggregate ID in order to update them, if a product is deleted there is no way to disassociate the product from the order on the write side.
 
This in turn means we have an inconsistency: the order holds a reference to a product that no longer exists within the product service.
The Ideas:
1. Master the relationship on both sides, so that when the product is deleted we can look at the associated orders and trigger an update to remove it from each order (this would duplicate the reference).
2. Create another view of the data that shows the relationships and use a saga to do the clean-up: when a delete is triggered, it looks up the view database, sees the relationships within the data and then triggers an update for each of the orders that have the product associated (see the sketch after this list).
3. Does it really matter having the inconsistency if the Order details service shows the correct information? Because the view database will consume the product-deleted event, it will be able to safely remove the relationship, which means clients will get the correct view of the data even if the write models appear inconsistent. Based on the order of the events, the state will always appear correct in the read view.
4. Another thought: as the aggregate ID is deleted, it should never be reused, which means checks on the aggregate such as "is this product already in the order?" will never trigger, so the inconsistency should not cause an issue when running commands in the future.
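A minimal sketch of the saga-style clean-up in idea 2, assuming hypothetical event and command names (ProductDeleted, RemoveProductFromOrder) and a relationship view keyed by product ID:

    // Hypothetical shapes; the real messages and view will differ.
    interface ProductDeleted { productId: string; }
    interface RemoveProductFromOrder { orderId: string; productId: string; }

    interface RelationshipView {
      // returns the IDs of all orders that reference the product
      ordersContaining(productId: string): Promise<string[]>;
    }

    interface CommandBus {
      send(cmd: RemoveProductFromOrder): Promise<void>;
    }

    // The saga reacts to ProductDeleted, looks up the relationship view,
    // and issues one clean-up command per affected order.
    async function onProductDeleted(
      event: ProductDeleted,
      view: RelationshipView,
      bus: CommandBus,
    ): Promise<void> {
      const orderIds = await view.ordersContaining(event.productId);
      for (const orderId of orderIds) {
        await bus.send({ orderId, productId: event.productId });
      }
    }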
Sorry for the long read, but these are all the ideas we have thought of so far, and I am keen to get some insight from the community to make sure we are on the right track, or to hear whether there is another approach we should consider.
 
Thank you in advance for your help.
Event sourcing suits human, and specifically human-paced, processes very well. It helps a lot to imagine that every event in an event-sourced system is delivered by some clerk, printed on a sheet of paper. Then it becomes much easier to figure out a suitable solution.
What's the purpose of an order? So that your back-office personnel can secure the necessary units at a warehouse, then the customer makes a payment and you start the shipping process.
So, I guess, after an order is placed, some back-office system can process it and confirm that it can be taken into work and invoicing. Or it can return the order with remarks that this or that line is no longer available, so that the customer can agree to the reduced order or pick other options.
Another option, since the probability of a customer ordering a discontinued item is low, is simply not to do this check. If it still surfaces at shipping time, issue a refund and a coupon for the inconvenience. Why is the probability low? Because the goods are added from an online catalogue, which reflects the current state, and the availability check can be done on the 'Submit' button click. So an inconsistency can only occur if an item is discontinued in the same minute (or second) the order is submitted. And usually the actual decision to discontinue is made well before the information is updated in the Product service, for external reasons.
Hence, I suggest using eventual consistency: an event-sourced entity should only be responsible for its own consistency and not try to take on someone else's responsibility.
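A rough sketch of that compensate-instead-of-coordinate idea, with hypothetical interfaces (the catalogue check, refund and coupon calls are all assumptions for illustration):

    // Hypothetical shapes; the real services and amounts will differ.
    interface OrderLine { productId: string; quantity: number; price: number; }

    interface ProductCatalogue {
      isDiscontinued(productId: string): Promise<boolean>;
    }

    interface Payments {
      refund(orderId: string, amount: number): Promise<void>;
      issueCoupon(orderId: string, amount: number): Promise<void>;
    }

    // At shipping time, compensate for discontinued items instead of trying to keep
    // the Order and Product aggregates transactionally consistent with each other.
    async function prepareShipment(
      orderId: string,
      lines: OrderLine[],
      catalogue: ProductCatalogue,
      payments: Payments,
    ): Promise<OrderLine[]> {
      const shippable: OrderLine[] = [];
      for (const line of lines) {
        if (await catalogue.isDiscontinued(line.productId)) {
          await payments.refund(orderId, line.price * line.quantity);
          await payments.issueCoupon(orderId, 5); // arbitrary goodwill amount
        } else {
          shippable.push(line);
        }
      }
      return shippable;
    }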

Inferring the user intention from the event stream in an event store. Is this even a correct thing to do?

We are using an event store that stores a single aggregate - a user's order (imagine an Amazon order that can be updated at any moment by either the client or someone in the e-commerce company before it actually gets dispatched).
For the first time we're going to allow our company's employees to see the order's history, as until now they could only see its current state.
We are now realizing that the events that make up the aggregate root don't really show the intent or what the user actually did. They only serve to build the current state of the order when applied sequentially to an empty order. The question is: should they?
Imagine a user that initially had one copy of book X and then removed it and added 2 again. Should we consider this as an event "User added 1 book" or events "User removed 1 book" + "User added 2 books" (we seem to have followed this approach)?
In some cases we have one initial event that is then followed by other events. I, the developer, know for sure that all these events were triggered by a single command, but it seems incredibly brittle to make that kind of assumption when generating this "order history" on the fly for the user to see. But if I don't treat them as a single action, at least in the order history feature, it will look like there were lots of order amendments when in fact there was just one big one.
Should I have "macro" events that contain "micro" events inside? Should I just attach the command's ID to the events so I can easily infer which events happened at the same time and which did not (an alternative would be relying on timestamps... but that's disgusting)?
What's the standard approach to dealing with this kind of situation? I would like to be able to look at the aggregate's history at any time and generate this report (I don't want to build the report incrementally every time the order is updated).
Thanks
Command names should ideally be descriptive of intent, which should mean it's possible to create event names that make the original intent clear. As a rule of thumb, the events in the event stream should be understandable to the relevant members of the business; it's a good rule of thumb. It should contain stuff like 'cartUpdated' etc.
Given the above, I would have expected that showing the event stream should be fine. But I totally get why it may not be ideal in some circumstances, i.e. it may be too detailed. In which case, maybe create a 'summariser' read model fed by the events.
It is common to include the command's ID in the resulting events' metadata, along with an optional correlation ID (useful for long-running processes). This then makes it easier to build the order history projection. Alternatively, you could just use the event timestamps to correlate batches in whatever way you want (perhaps you might only want one entry even for multiple commands, if they happened within a short window).
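As an illustration of the metadata approach (a sketch only; field names such as commandId and correlationId are assumptions, not a prescribed schema):

    // Each stored event carries metadata identifying the command that produced it.
    interface StoredEvent {
      type: string;               // e.g. "BookRemoved", "BookAdded"
      data: unknown;
      metadata: {
        commandId: string;        // the command that generated this event
        correlationId?: string;   // optional, for long-running processes
        timestamp: string;
      };
    }

    // Order-history projection: one history entry per command, not per event.
    function groupByCommand(events: StoredEvent[]): Map<string, StoredEvent[]> {
      const byCommand = new Map<string, StoredEvent[]>();
      for (const e of events) {
        const group = byCommand.get(e.metadata.commandId) ?? [];
        group.push(e);
        byCommand.set(e.metadata.commandId, group);
      }
      return byCommand;
    }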
Events (past tense) do not always capture the intent of a human (or system) user. Commands (imperative mood) do. As command data cannot always be easily retraced from the events it generated, keeping a structured log of commands looks like a good option here.

Use Case Diagram Website (UML)

I have a couple of questions about whether the following processes can be considered use cases.
Website where establishments can post events.
User can "follow" establishment, "attend" event.
etc...
On my index page, I have the following sections:
Recommended Events, Events recently created, Events from establishments that the user "follows", Top 10 establishments, Recent comments, Popular events and so on (all of which I am pulling from a database).
Would the index page be considered a use case? And would all the sections I named be individual use cases? Considering I already have a Consult Establishment and a Consult Event use case, would all the sections fall into that category?
On the establishment page I have a button the user can click to follow the establishment and receive notifications. All the button does once clicked is add the user to a table (User_Preferences), pretty much like a "like" or follow button.
Would this be considered a use case (an Add to Preferences use case)?
When I visit an establishment page, I am pulling data from many tables, such as: beverages, music, artists_attended, food, etc.
For the Consult Establishment use case, would I need to include every individual piece of information (consult_beverage, consult_music, consult_artist, consult_food all included in Consult Establishment)? Or are they already covered by Consult Establishment?
Finally, would every page I create (Index, Establishment, Events, UserProfile, etc.) be considered a use case: Consult Establishment, Consult Events, Manage Profile?
Thank you, any tips or help would be appreciated. I understand the concept of use cases, but I sometimes tend to overthink some of them. Thanks for the help.
The index page itself is not a use case. A use case represents some interaction between an actor and the system, but the page and its sections are part of the system design. If you were to replace the web browser with a custom-written GUI application, the use cases should be essentially the same.
In this case, you seem to be creating the use cases after you've designed the system, which is probably what's tripping you up -- use cases are usually determined before the system is designed.
"Add to Preferences" seems like a good use case. How much work goes into realizing the use case is normally of little importance; what matters is whether the interaction provides some value to the actor. The complete set of use cases describes what the user can do with the system, not how many engineering hours were spent constructing it.
You should not incorporate details on the stored data in your use cases. If you find yourself doing that you need to take a step back and try to think a little more abstractly. What does the use case do for the actor, what does the actor want? To get information about an establishment? Then that's enough, you don't need to specify the exact information stored in the system. The important thing is that the actor wants the information and that the system provides it.
Use cases are part of system analysis, not system design. As such, there is no problem with having the same design component (page) realize several use cases. So you could for instance have use cases for "see recommended events", "see coming events for 'followed' establishments", "see coming 'attended' events", all being realized in different sections on the same page.
A page is never a use case. A use case is what brings value to an actor. Simple as that. If you can name the value, then you have the name of the use case. If you can't name the value, then you don't have a use case.
E.g. for your first events page: I would assume that the use case behind it is Find Event. Similarly, you have to think through the other cases. By contrast, Login to Site is not a use case, because it does not bring any value to the actor.

CQRS aggregates

I'm new to the CQRS/ES world and I have a question. I'm working on an invoicing web application which uses event sourcing and CQRS.
My question is this - to my understanding, a new command coming into the system (let's say ChangeLineItemPrice) should pass through the domain model so it can be validated as a legal command (for example, to check if this line item actually exists, the price doesn't violate any business rules, etc). If all goes well (the command is not rejected) - then the appropriate event is created and stored (for example LineItemPriceChanged)
The thing I didn't quite get is how I keep this aggregate in memory to begin with, before trying to apply the command. If I have a million invoices in the system, should I play back the whole history every time I want to apply a command? Or do I always save the event without any validation and do the validation when constructing the view models / projections?
If I misunderstood any part of the process I would appreciate your feedback.
Thanks for your help!
You are not alone, this is a common misunderstanding. Let me answer the validation part first:
There are two types of validation which take place in this kind of system. The first is the kind where you look for valid email addresses, numeric-only or required fields. This type is done before the command is even issued; commands which contain these sorts of problems should not be raised at all (for belt and braces you can check on the domain side as well, but this is not a domain concern and you are better off just preventing the scenario).
The next type of validation is a domain concern. It could be the kind of thing you mention, where you check that prices are within a set of specified parameters. This is a domain concept the business people would understand, carry out and be able to articulate.
The next phase is for the domain to apply the state change and raise the associated events. These are then persisted and, on success, published for the rest of the app.
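A minimal sketch of what that domain-level check might look like inside the aggregate. The names ChangeLineItemPrice and LineItemPriceChanged follow the question's example, but the Invoice shape and the price rule are invented purely for illustration:

    // Hypothetical event shapes for the Invoice aggregate.
    type InvoiceEvent =
      | { type: "LineItemAdded"; lineItemId: string; price: number }
      | { type: "LineItemPriceChanged"; lineItemId: string; newPrice: number };

    class Invoice {
      private lineItemIds = new Set<string>();
      private uncommitted: InvoiceEvent[] = [];

      // Domain validation happens here, against the aggregate's current state.
      changeLineItemPrice(lineItemId: string, newPrice: number): void {
        if (!this.lineItemIds.has(lineItemId)) {
          throw new Error("Line item does not exist on this invoice");
        }
        if (newPrice <= 0) {                    // illustrative business rule only
          throw new Error("Price must be positive");
        }
        this.raise({ type: "LineItemPriceChanged", lineItemId, newPrice });
      }

      private raise(event: InvoiceEvent): void {
        this.apply(event);
        this.uncommitted.push(event);
      }

      // State is only mutated by applying events, so replaying history rebuilds it.
      apply(event: InvoiceEvent): void {
        if (event.type === "LineItemAdded") {
          this.lineItemIds.add(event.lineItemId);
        }
        // a LineItemPriceChanged handler would update a price map, elided here
      }

      getUncommittedEvents(): InvoiceEvent[] {
        return this.uncommitted;
      }
    }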
All of this can be done with the aggregate in memory. The actions are coordinated by a domain service which handles the command: it loads the aggregate, applies all its past events (or loads a snapshot), then issues the command. If the command succeeds, it requests all the new uncommitted events and tries to persist them; on success it publishes the new events.
As you can see, it only loads the events for that specific aggregate. Even with a lot of events this process is lightning fast. If performance is a problem, there are strategies you can apply, such as keeping aggregates in memory or snapshotting.
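And a sketch of that coordinating domain service, continuing the hypothetical Invoice from the previous snippet (the event store interface is assumed, not a specific product's API):

    // Assumed event store interface: events are stored per aggregate (stream) ID.
    interface EventStore {
      load(aggregateId: string): Promise<InvoiceEvent[]>;
      append(aggregateId: string, events: InvoiceEvent[]): Promise<void>;
    }

    async function handleChangeLineItemPrice(
      invoiceId: string,
      lineItemId: string,
      newPrice: number,
      store: EventStore,
      publish: (e: InvoiceEvent) => Promise<void>,
    ): Promise<void> {
      // 1. Rehydrate only this aggregate by replaying its own past events (or a snapshot).
      const invoice = new Invoice();
      for (const e of await store.load(invoiceId)) {
        invoice.apply(e);
      }

      // 2. Execute the command; the aggregate validates it and raises new events.
      invoice.changeLineItemPrice(lineItemId, newPrice);

      // 3. Persist the uncommitted events and, on success, publish them.
      const newEvents = invoice.getUncommittedEvents();
      await store.append(invoiceId, newEvents);
      for (const e of newEvents) {
        await publish(e);
      }
    }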
To your last point, about validating events: as they can only be generated by your aggregate, they are trustworthy.
If you want more detail check out my overview of CQRS and ES here. And take a look at my post about how to build aggregate roots here.
Good luck - I hope they help!
It is right that you have to replay the events to 'rehydrate' the domain aggregate, but you don't have to replay all events for all invoices. If you store the entity ID of the aggregate root in the events, you can select and replay only the events with the relevant ID.
Then, how do you find the relevant aggregate root id? One of the read repositories should contain the relevant information to get the id, based on a set of search criteria.
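For example (a sketch; the read-repository query is an assumption, and it reuses the EventStore and Invoice shapes from the snippets above):

    // Find the aggregate ID via a read model, then load only that stream's events.
    interface InvoiceReadRepository {
      findInvoiceIdByNumber(invoiceNumber: string): Promise<string | undefined>;
    }

    async function loadInvoiceByNumber(
      invoiceNumber: string,
      readRepo: InvoiceReadRepository,
      store: EventStore,
    ): Promise<Invoice | undefined> {
      const invoiceId = await readRepo.findInvoiceIdByNumber(invoiceNumber);
      if (!invoiceId) return undefined;

      const invoice = new Invoice();
      for (const e of await store.load(invoiceId)) {
        invoice.apply(e);       // replay only the events for this one aggregate
      }
      return invoice;
    }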

Creating a N:1 Relationship on Order Product Entity

I am trying to create an N:1 relationship from another entity to Order Product. It is not an option in the pick list. I then tried to go to Order Product and create a 1:N relationship, and it does not allow that either.
I am sure this is by design from Microsoft, but is there a way to achieve this? I prefer not to use a 1:N or N:N as a workaround, since it will create grids on the form (and that does not make much sense from a UI perspective when there will only be one record).
Thanks for the help!
I am going to add a single line of text field, format it as a URL, and then link it to the related entity by dynamically populating a URL to that record. It is a workaround, but of all the possible scenarios it's the best for my situation.
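A rough sketch of that approach using the classic Xrm.Page client API (CRM 2011-2015 era). The field name new_relatedorderproducturl and the way the Order Product GUID is obtained are assumptions for illustration only:

    // Populate a URL-formatted text field with a link to the related Order Product record.
    declare const Xrm: any;   // provided by the CRM form at runtime

    function setRelatedOrderProductLink(orderProductId: string): void {
      const clientUrl: string = Xrm.Page.context.getClientUrl();
      // Standard record URL: organisation URL plus entity logical name and record GUID.
      const recordUrl =
        clientUrl + "/main.aspx?etn=salesorderdetail&pagetype=entityrecord&id=" + orderProductId;
      Xrm.Page.getAttribute("new_relatedorderproducturl").setValue(recordUrl);
    }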
We faced the same problem while building a solution for a client. It was a heavy restriction, so in the end we just created our own order product entity and linked it to Order via a one-to-many relationship.
This gave us complete control over it, and we could add relationships as we wished.
This came at a cost, unfortunately, as you lose the automatic calculation on the order, for example. That wasn't an issue for us, however, as we didn't need it or any of the price list functionality.
If this is an option for you I'd recommend doing it this way.
I think that everybody has had to face this same problem at some point in their CRM life.
In CRM, entities such as salesorderproduct are used only to enumerate the products of the entity named in them, and you can do almost nothing with them. There is another problem, with a workaround, that I'll try to explain, just to see whether it could be a way to create relations with them, though I don't think so.
The problem is that you cannot use the 'Assign' functionality, as in other relations, to copy data from one entity product to the lower-level entity product when you create custom fields and want them copied through the entire sales workflow. In this case there is no option to open the 'Assign' window (I say 'Assign' because I have always worked in Spanish) and create the field mappings between them.
This could be done by finding the GUID of the 'Assign' window and copying it into one of the URLs of the 'Assign' window; the window then shows up and you can create your custom mappings.
I hope this helps. This question is quite old, but I hope others who arrive here will get to see more opinions :)
See you
