I'm new to domain-driven design. I have a web application where a user can save intermediate results of progress through a task, i.e. saving the data on a form as a draft and coming back to fill it in later. If the form represents an entity and it's the root of an aggregate, is it OK to save the entity in a half-baked state based on a status field?
It depends; there really is no correct general answer to this.
While one can go this route, it could interfere with another principle I tend to follow, which is that no domain object should be in an invalid state.
Since the domain of your subsystem is the submission of a form, though, it might be logical to model this by state: the domain itself does not exclude half-filled-in forms; only on submission does the rule that all mandatory fields must be completed really come into effect.
For example, it might make a lot of sense for a half-filled-in form to be valid, especially if the form needs to go through a workflow (such as getting supervisor sign-off) before it can be counted as complete.
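To make that concrete, here is a minimal sketch (all class and field names are invented for illustration) of an aggregate whose draft state is always valid, with the completeness rule enforced only on the submit transition:

    // Hypothetical sketch: the aggregate is always in a *valid* state for its
    // current status; completeness rules only fire on the submit transition.
    public class ApplicationForm {

        public enum Status { DRAFT, SUBMITTED }

        private Status status = Status.DRAFT;
        private String applicantName;   // optional while DRAFT
        private String email;           // optional while DRAFT

        // Saving a draft accepts partial data; "half-baked" is a legal DRAFT state.
        public void saveDraft(String applicantName, String email) {
            if (status != Status.DRAFT) {
                throw new IllegalStateException("Only drafts can be edited");
            }
            this.applicantName = applicantName;
            this.email = email;
        }

        // The invariant "all mandatory fields are present" belongs to the
        // submission transition, not to the aggregate's existence.
        public void submit() {
            if (applicantName == null || email == null) {
                throw new IllegalStateException("Mandatory fields missing");
            }
            status = Status.SUBMITTED;
        }
    }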
We are using an event store that stores a single aggregate: a user's order (imagine an Amazon order that can be updated at any moment by both a client and someone in the e-commerce company before it actually gets dispatched).
For the first time we're going to allow our company's employees to see the order's history, as until now they could only see its current state.
We are now realizing that the events that form the aggregate root don't really show the intent or what the user actually did. They only serve to build the current state of the order when applied sequentially to an empty order. The question is: should they?
Imagine a user that initially had one copy of book X and then removed it and added 2 again. Should we consider this as an event "User added 1 book" or events "User removed 1 book" + "User added 2 books" (we seem to have followed this approach)?
In some cases we have one initial event that is then followed by other events. I, the developer, know for sure that all these events were triggered by a single command, but it seems incredibly brittle to me to make that kind of assumption when generating this "order history" view on the fly for the user to see. But if I don't treat them as a single action, at least in the order history feature, it will look like there were lots of order amendments when in fact there was just one big one.
Should I have "macro" events that contain "micro events" inside? Should I just attach the command's id to the event so I can then easily inferr what event happened at the same and which ones not (an alternative would be relying on timestamps.. but that's disgusting).
What's the standard approch to deal with this kind of situations? I would like to be able to look at any time to the aggregate's history and generate this report (I don't want to build the report incrementally every time the order is updated).
Thanks
Command names should ideally be descriptive of intent, which should mean it's possible to create event names that make the original intent clear. As a rule of thumb, the events in the event stream should be understandable to the relevant members of the business; they shouldn't be generic things like 'cartUpdated' etc.
Given the above, I would have expected that showing the event stream should be fine. But I totally get why it may not be ideal in some circumstances, i.e. it may be too detailed. In which case, maybe create a 'summariser' read model fed by the events.
It is common to include the command's ID in the resulting events' metadata, along with an optional correlation ID (useful for long-running processes). This then makes it easier to build the order history projection. Alternatively, you could just use the event timestamps to correlate batches in whatever way you want (perhaps you might only want one entry even for multiple commands, if they happened in a short window).
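As an illustration, here is a rough sketch of that idea (the envelope and projection types are invented for this example, not taken from any particular event store): events carry the originating command's id in their metadata, and a projection collapses each command's events into a single history entry.

    import java.time.Instant;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    // Invented envelope: the id of the originating command travels with each event.
    record StoredEvent(String aggregateId, String type, Instant occurredAt,
                       String commandId, String correlationId) {}

    class OrderHistoryProjection {
        // One history line per command, regardless of how many events it produced.
        List<String> historyFor(List<StoredEvent> stream) {
            Map<String, List<StoredEvent>> byCommand = stream.stream()
                    .collect(Collectors.groupingBy(StoredEvent::commandId,
                            LinkedHashMap::new, Collectors.toList()));
            return byCommand.values().stream()
                    .map(batch -> batch.get(0).occurredAt() + ": "
                            + batch.stream().map(StoredEvent::type)
                                    .collect(Collectors.joining(" + ")))
                    .collect(Collectors.toList());
        }
    }

So "User removed 1 book" + "User added 2 books", raised by one command, would render as one amendment line rather than two.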
Events (past tense) do not always capture human (or system) intent. Commands (imperative mood) do. As not all command data can easily be retraced from the events it generated, keeping a structured log of commands looks like a good option here.
I would like to implement CQRS and ES using the Axon framework.
I've got a pretty complex HTML form which represents a recruitment process with six steps.
ES would be helpful for generating historical statistics for selected dates and tracking changes in the form.
Admin can always perform several operations:
assign person responsible for each step
provide notes for each step
accept or reject candidate on every step
turn on/off SMS or email notifications
assign tags
A form update (the difference only) is sent from the UI application to the backend.
Assuming I want to make changes only to the server-side application, the question is what should be a Command and what should be an Event. I am considering three options:
The form patch is a Command which generates a Form Updated Event.
The drawback of this solution is that each event handler needs to check whether the changes in the form are relevant to it, e.g. whether an email about a rejection should be sent.
The form patch is a Command which generates several Events, e.g. Interviewer Assigned, Notifications Turned Off, Rejected On Technical Interview.
The drawback of this solution is that some events could be generated while others are not because of constraint violations, e.g. Notifications Turned Off will succeed but Interviewer Assigned will fail because an unauthorized user was assigned. Maybe I should check all constraints before generating the events?
The form patch is converted to several Commands, e.g. Assign Interviewer, Turn Off Notifications, and each command generates an event, e.g. Interviewer Assigned, Notifications Turned Off.
The drawback of this solution is that some commands can fail, e.g. Assign Interviewer can fail due to assigning an unauthorized user. This will end up with an inconsistent state because some events would be stored in the repository and some would not. Maybe I should check all constraints before generating the commands?
The question I would call your attention to is: are you creating an authority for the information you store, or are you just tracking information from the outside world?
Udi Dahan wrote Race Conditions Don't Exist, raising this interesting point:
A microsecond difference in timing shouldn’t make a difference to core business behaviors.
If you have an unauthorized user in your system, is it really critical to the business that they be authorized before they are assigned responsibility for a particular step? Can the system really tell that the "fault" is that the responsibility was assigned to the wrong user, rather than that the user is wrongly not authorized?
Greg Young talks about exception reports in warehouse systems, noting that the responsibility of the model in that case is not to prevent data changes, but to report when a data change has produced an inconsistent state.
What's the cost to the business if you update the data anyway?
If the semantics of the message is that a Decision Has Been Made, or that Something In The Real World Has Changed, then your model shouldn't be trying to block that information from being recorded.
FormUpdated isn't a particularly satisfactory event, for the reason you mention: you have to do a bunch of extra work to cast it in domain-specific terms. Given a choice, you'd prefer to do that once. It's reasonable to think in terms of translating events from domain-agnostic forms to domain-specific forms as you go along.
HttpRequestReceived ->
FormSubmitted ->
InterviewerAssigned
where the intermediate representations are short lived.
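A sketch of what that translation step might look like (the class and field names here are invented; only the idea of rewriting a generic patch into granular, domain-specific messages is from the answer above):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;

    // Invented translation layer: the domain-agnostic form patch lives only long
    // enough to be rewritten as domain-specific commands.
    class FormSubmissionTranslator {

        sealed interface Command permits AssignInterviewer, TurnOffNotifications {}
        record AssignInterviewer(String candidateId, String interviewerId) implements Command {}
        record TurnOffNotifications(String candidateId) implements Command {}

        List<Command> translate(String candidateId, Map<String, String> formPatch) {
            List<Command> commands = new ArrayList<>();
            if (formPatch.containsKey("interviewer")) {
                commands.add(new AssignInterviewer(candidateId, formPatch.get("interviewer")));
            }
            if ("off".equals(formPatch.get("notifications"))) {
                commands.add(new TurnOffNotifications(candidateId));
            }
            return commands; // each command is then handled and raises its own event
        }
    }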
I can see one big drawback of the first option. One of the biggest advantages of CQRS/ES with Axon is scalability. We can add new features without worrying about regression bugs. Adding a new feature means defining new commands, events, and handlers for both; none of them should interfere with the ones already existing in our system.
FormUpdate as a command requires adding extra logic in one of the handlers. Adding a new attribute to the patch, and consequently to the command, will cause changes to the existing logic. Scalability is no longer an advantage in that case.
VoiceOfUnreason gives a very good explanation of what you should think about when starting with such a system, so definitely take a look at his answer.
The only thing I'd like to add, is that I'd suggest you take the third option.
With the examples you gave, the more generic commands/events don't tell that much about what's happening in your domain. The more granular events explain far better what exactly has happened, as the event message's name already points it out.
Pulling Axon Framework into the loop, I can also add a couple of pointers.
From a command message perspective, it's safe to just pick a route and not overthink it too much. The framework quite easily allows you to adjust the command structure later on. In Axon Framework trainings it is typically suggested to let a command message take the form of a specific action you're performing. So 'assigning a person to a step' would typically be an AssignPersonToStepCommand, as that is the exact action you'd like the system to perform.
For events, it's typically a bit nastier to decide later on whether you want fine-grained or generic events. This follows from doing Event Sourcing: since the events are your source of truth, you'll be required to deal with all forms of events you've got in your system.
Due to this I'd argue that the weight of your decision should lie with how fine-grained your events become. To loop back to your question: in the example you give, I'd say option 3 would fit best.
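For illustration, a minimal Axon-style sketch of option 3, based on Axon 4's annotations (the aggregate, command and event names are made up, and creation/registration boilerplate is omitted): one granular command produces one granular event.

    import org.axonframework.commandhandling.CommandHandler;
    import org.axonframework.eventsourcing.EventSourcingHandler;
    import org.axonframework.modelling.command.AggregateIdentifier;
    import org.axonframework.modelling.command.TargetAggregateIdentifier;
    import static org.axonframework.modelling.command.AggregateLifecycle.apply;

    class AssignPersonToStepCommand {
        @TargetAggregateIdentifier final String recruitmentId;
        final String step;
        final String personId;
        AssignPersonToStepCommand(String recruitmentId, String step, String personId) {
            this.recruitmentId = recruitmentId; this.step = step; this.personId = personId;
        }
    }

    class PersonAssignedToStepEvent {
        final String recruitmentId, step, personId;
        PersonAssignedToStepEvent(String recruitmentId, String step, String personId) {
            this.recruitmentId = recruitmentId; this.step = step; this.personId = personId;
        }
    }

    class RecruitmentProcess {
        @AggregateIdentifier private String recruitmentId;

        @CommandHandler
        public void handle(AssignPersonToStepCommand cmd) {
            // Authorization and other invariants are checked here, *before* the
            // event is applied, so a rejected command stores nothing and the
            // stream stays consistent.
            apply(new PersonAssignedToStepEvent(cmd.recruitmentId, cmd.step, cmd.personId));
        }

        @EventSourcingHandler
        public void on(PersonAssignedToStepEvent evt) {
            this.recruitmentId = evt.recruitmentId;
        }
    }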
Scenario (a sort of call center): (1) Customer requests a technician. (2) Request goes into a queue for technicians to see. (2b) Customer gets a confirmation email about the submitted data. (3) Technicians process the request. (3b) Everyone gets an email. (4) Request is completed. (5) Technician submits data for the completed request. (6) Request is closed.
So there are two actors on the left. Not everything has to connect, right? So for the Customer, getting emails and submitting data are drawn.
For the Technician actor, there are the processing interaction, submitting data, and getting email.
I am reading about UML here: http://www.soberit.hut.fi/T-76.115/01-02/palautukset/groups/Fireball/t2/docs/UseCaseMethod.html
I was wondering if there should be an actor on the right side of the diagram representing the database. Am I missing anything? How do you know you are done with a use-case diagram?
Actors are not included in the system; they are external to the system. Usually, the DB is inside the system, so it is not an actor.
For example, in your case, a secondary actor could be Google Maps if the technician has to know how to get to the customer and, for that, has to see a map with the route. In this case, during a use case, Google Maps is contacted to get the map.
The only way I know to be sure that use cases are complete is to review them and/or to get a list of customer needs and to trace those needs to the use cases.
Hope this helps.
More:
#Kilian's remark about functions is a really good one. Usually, when we start, we think of a use case as a "workflow to achieve a feature" or as the set of all user-interface menus, and that is not what it is.
So, #Waren, I would suggest:
First, try to define the system with a title and a paragraph defining its main mission. The system is not only the code you are going to write, but everything that will be deployed for it (machines, virtual machines, DBs, bays, switches, procedures, DDL, configuration files, and so on).
Then define the needs: the high-level needs that the system must fulfil (the ISO term is "shall").
Then define the actors/stakeholders and the inheritance hierarchy to figure out the needed roles and rights. Do not forget all the operational needs (monitoring, backup/restore, DRS procedures, reports, deployment, and so on).
Then define your use cases, thinking in terms of features or single added values, and check the overall coherency. A good point about use cases is to describe "error/exception" scenarios.
Then an interesting point could be to define the modes of the system: installation, testing before production go-live, production, update/patch, maintenance, system stop and removal. That way you will be sure to cover the whole system lifecycle.
I'm new to the CQRS/ES world and I have a question. I'm working on an invoicing web application which uses event sourcing and CQRS.
My question is this: to my understanding, a new command coming into the system (let's say ChangeLineItemPrice) should pass through the domain model so it can be validated as a legal command (for example, to check that this line item actually exists, that the price doesn't violate any business rules, etc.). If all goes well (the command is not rejected), then the appropriate event is created and stored (for example LineItemPriceChanged).
The thing I didn't quite get is how I keep this aggregate in memory to begin with, before trying to apply the command. If I have a million invoices in the system, should I play back the whole history every time I want to apply a command? Or do I always save the event without any validation and do the validation when constructing the view models/projections?
If I misunderstood any part of the process I would appreciate your feedback.
Thanks for your help!
You are not alone, this is a common misunderstanding. Let me answer the validation part first:
There are two types of validation which take place in this kind of system. The first is the kind where you look for valid email addresses, numeric-only or required fields. This type is done before the command is even issued; a command which contains these sorts of problems should not be raised at all (for belt and braces you can check on the domain side, but this is not a domain concern and you are better off just preventing this scenario).
The next type of validation is a domain concern. It could be the kind of thing you mention, where you check that prices are within a set of specified parameters. This is a domain concept the business people would understand and be able to articulate.
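A minimal sketch of the two layers, with invented command and aggregate names (this is not from any particular framework):

    import java.math.BigDecimal;

    // Type 1: input validation; a malformed command is never raised at all.
    class ChangeLineItemPrice {
        final String invoiceId;
        final String lineItemId;
        final BigDecimal newPrice;

        ChangeLineItemPrice(String invoiceId, String lineItemId, BigDecimal newPrice) {
            if (invoiceId == null || lineItemId == null || newPrice == null) {
                throw new IllegalArgumentException("All fields are required");
            }
            this.invoiceId = invoiceId;
            this.lineItemId = lineItemId;
            this.newPrice = newPrice;
        }
    }

    class Invoice {
        private static final BigDecimal MAX_PRICE = new BigDecimal("10000");

        // Type 2: domain validation; a business rule the domain experts can articulate.
        void changeLineItemPrice(ChangeLineItemPrice cmd) {
            if (cmd.newPrice.signum() < 0 || cmd.newPrice.compareTo(MAX_PRICE) > 0) {
                throw new IllegalStateException("Price outside permitted range");
            }
            // ...raise LineItemPriceChanged here...
        }
    }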
The next phase is for the domain to apply the state change and raise the associated events. These are then persisted and, on success, published for the rest of the app.
All of this can be done with the aggregate in memory. The actions are coordinated by a domain service which handles the command. It loads the aggregate, applies all its past events (or loads a snapshot), then issues the command. On success of the command it requests all the new uncommitted events and tries to persist them. On success it publishes the new events.
As you can see, it only loads the events for that specific aggregate. Even with a lot of events this process is lightning fast. If performance is a problem, there are strategies such as keeping aggregates in memory or snapshotting which you can apply.
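Here's a rough sketch of that coordination, building on the Invoice example above (the EventStore, EventPublisher and InvoiceFactory interfaces are assumed ports, not a real library, and uncommittedEvents is an assumed helper on the aggregate):

    import java.util.List;

    // Assumed domain service showing the load-replay-execute-persist cycle
    // for a single aggregate's stream, not the whole store.
    class InvoiceCommandService {

        interface EventStore {
            List<Object> eventsFor(String aggregateId);
            void append(String aggregateId, List<Object> events);
        }
        interface EventPublisher { void publish(Object event); }
        interface InvoiceFactory { Invoice replay(List<Object> history); } // rebuilds state

        private final EventStore eventStore;
        private final EventPublisher publisher;
        private final InvoiceFactory factory;

        InvoiceCommandService(EventStore eventStore, EventPublisher publisher,
                              InvoiceFactory factory) {
            this.eventStore = eventStore;
            this.publisher = publisher;
            this.factory = factory;
        }

        void handle(ChangeLineItemPrice cmd) {
            // 1. Load only this invoice's events (or a snapshot plus the tail).
            Invoice invoice = factory.replay(eventStore.eventsFor(cmd.invoiceId));

            // 2. Execute the command; the aggregate validates and records new events.
            invoice.changeLineItemPrice(cmd);

            // 3. Persist the uncommitted events, then publish them on success.
            List<Object> fresh = invoice.uncommittedEvents();
            eventStore.append(cmd.invoiceId, fresh);
            fresh.forEach(publisher::publish);
        }
    }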
To your last point about validating events: as they can only be generated by your aggregate, they are trustworthy.
If you want more detail check out my overview of CQRS and ES here. And take a look at my post about how to build aggregate roots here.
Good luck - I hope they help!
It is right that you have to replay the events to 'rehydrate' the domain aggregate. But you don't have to replay all events for all invoices. If you store the entity id of the aggregate root in the events, you can just select and replay the events with the relevant id.
Then, how do you find the relevant aggregate root id? One of the read repositories should contain the relevant information to get the id, based on a set of search criteria.
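For example, a sketch of that two-step lookup (the interfaces and names are invented for illustration):

    import java.util.List;
    import java.util.Optional;

    class InvoiceLookupService {
        interface InvoiceReadRepository {
            // The read side answers search queries, e.g. by invoice number.
            Optional<String> findAggregateIdByInvoiceNumber(String invoiceNumber);
        }
        interface EventStore {
            List<Object> eventsFor(String aggregateId); // only this aggregate's events
        }

        private final InvoiceReadRepository readRepository;
        private final EventStore eventStore;

        InvoiceLookupService(InvoiceReadRepository readRepository, EventStore eventStore) {
            this.readRepository = readRepository;
            this.eventStore = eventStore;
        }

        // Step 1: resolve the aggregate id via the read model;
        // step 2: load and replay just that aggregate's events.
        List<Object> eventsForInvoiceNumber(String invoiceNumber) {
            String aggregateId = readRepository
                    .findAggregateIdByInvoiceNumber(invoiceNumber)
                    .orElseThrow(() -> new IllegalArgumentException("Unknown invoice"));
            return eventStore.eventsFor(aggregateId);
        }
    }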
This is a rather generic question, but I'll try to be as precise as possible:
quite often I'm asked by customers for proper implementations of LotusScript's
continue = false
in Notes' Query* events. One quite common situation is a form's QueryOpen event, where we can actually stop the process of opening the document in question based on some condition, e.g. based on the response from a user dialog.
For some XPages events like querySaveDocument there are quite obvious solutions, whereas with others I can only recommend re-thinking the entire logic, like preventing code execution at a much earlier stage. But of course most people in question would prefer a generic approach like "re-write that code using...". And, to be honest, I'd like to know myself ;)
I'm more or less familiar with the Xpages / JSF lifecycle, but have to admit that I don't have a proper idea how I could stop execution at any given phase.
As always, any hint is welcome.
EDIT (to clarify my question, but also in response to Tim's answer below):
It's not just QuerySave but also QueryModeChange and QueryRecalc that somehow need to be transformed together with an existing application's logic, but that don't have their equivalents in the XPages logic. Are both concepts (forms-based and XPages-based) just too different at this point?
As an example, think of a workflow application where we need to check certain conditions before we allow opening an existing doc in edit mode for a potential author. In my Notes client application I add some code to two events: first QueryOpen, where I check the "mode" argument, and second QueryModeChange, where I check the current doc mode. In both cases I can prevent the doc from being edited by adding my continue = false, if necessary. Depending on the event, the doc will either not change its mode, or not open at all.
With an Xpage I can use buttons for changing a doc's edit mode, and I can "hide" those buttons, or just add some checking code or whatever.
But 17 years of Domino consulting have taught me at least one lesson: there'll always be users that'll find the hidden ways to reach their goals. In our case they might find out that a simple modification of the page's URL will finally allow them to edit the doc. To prevent this I could maybe use the beforeRenderResponse event, I assume. But then, beforeRenderResponse is also called in other situations, so we would have to investigate the current situation first. Or I could make sure that users don't have author rights unless the situation allows it.
Again, not a huge problem, but when making the transition from a legacy Notes application this means re-thinking its entire logic. Which makes the job more tedious, and especially more expensive.
True? Or am I missing some crucial parts of the concept?
Structure your events as action groups and, when applicable, return false. This will cause all remaining actions in the group to be skipped.
For example, you could split a "Save" button into two separate actions:
1.
// by default, execute additional actions:
var result = true;
/* execute some logic here */
if (somethingFailed) {
    result = false;
}
return result;
Replace somethingFailed with an evaluation based on whatever logic you have in place of the block comment to determine whether it's appropriate to now save the document.
2.
return currentDocument.save();
Not only does the above pattern cause the call to save() to be skipped if the first action returns false, but because save(), in turn, returns a boolean, you could theoretically also add a third action as a kind of postSave event: if the save is successful, the third action will automatically run; if the save fails, the third action will be automatically skipped.
All queryModeChange logic should be moved to the readonly attribute of a panel (or the view root of an XPage or Custom Control) containing all otherwise editable content. You would basically just be flipping the boolean: traditionally, queryModeChange would treat false (for Continue) as an indication that the document should not be edited (although this also forces you to check whether the user is trying to change from read to edit, because if you forgo this check you're potentially also preventing a user from changing the mode back to read when it's already in edit), whereas readonly should of course return true if the content should not be editable.
Since the queryModeChange approach was nearly always an additional layer of "fig leaf" security, in XPages it's far better to handle this via actual security mechanisms; the readonly attribute is explicitly intended for enforcing security. Additionally, in lieu of using readonly, you could instead use the acl complex property that is also available for panels, XPages, and Custom Controls to provide different permissions to different subsets of users; anyone with a certain role, for instance, would automatically have edit, whereas the level for the default entry can be computed based on item values indicating the current "status" and/or "assignee".
With either (or both) of these mechanisms in place, it doesn't matter what the user does to the URL... the relevant components cannot be editable if the container is read only. They could even try to hack in by running JavaScript in Chrome Developer Tools, attempting to emulate the POST requests that would be sent if they could edit the content... the data they send will still not get pushed back to the model, because the targeted components are read-only by virtue of the attributes of their container.
Attempting to apply all Notes client patterns directly to the XPages context is nearly always an exercise in frustration -- and, ultimately, futility. While I won't divulge specifics here, I (and some of the smartest people I know) learned this lesson at great cost. While users may say (and even believe) that they want exactly what they already have... if they did, they would be keeping what they already have, not paying you to turn it into something else. So any migration from a Notes client app to an XPage "equivalent" is your one opportunity to revisit the reason the code used to do what it did, and determine whether that even makes sense to retain within the XPage, based not only on the differential between Notes client and XPage paradigms, but also on any differential between what the users' business process was when the Notes client app was developed and what their process is now.
Omitting this evaluation guarantees that the resulting app will run code it doesn't need to and will fail to make the most of the target platform.
queryRecalc is a perfect example of this: typically, recalculation was blocked to optimize performance when the user's desktop and network resources were responsible for performing complex and/or network-intensive recalculations. In XPages this all happens on the server, so a network request from the browser that returns a page where everything has changed is typically no more expensive for the end user than a page where nothing has changed (unless there's an extreme differential in the amount of markup that is actually sent). Unless the constituent components are bound to data that is expensive for the server to recalculate, logical blocking of recalculation offers little or no performance benefit for the user.
Furthermore, if you're trying to block recalculation in an event, you're too late: XPages uses a "lifecycle" that consists of 6 phases, so by the time your event code runs, any recalculation you're trying to block has already occurred.
So, if the reason for blocking recalculation was to optimize performance, implement a scope caching strategy that ensures you're only pulling fresh data when it makes sense to do so, and the end user experience will be sufficiently performant without trying to prevent the entire page from recalculating. If, on the other hand, queryRecalc was being used as another fig leaf (something has changed, but we don't want to show the user the updates yet), that logic should definitely be revisited to determine whether it's still applicable, still (if ever) a good idea, and which portions of the platform are now the best fit for meeting the business process objectives.
In summary, use the security mechanisms unique to XPages for locking down portions or all of a page, and use the memory scopes that we didn't have in the Notes client to ensure the application performs well. Porting an event that used to contain this logic to an XPage event that continues to contain this logic will likely fail to produce the desired result and squander some of the benefits of migrating to XPages.