Encapsulating processes within Domain Services - domain-driven-design

Note - all quotes are from DDD: Tackling Complexity in the Heart of Software
First quote (page 222):
Processes as Domain Objects
Right up front let's agree that we do not want to make procedures a
prominent aspect of our model. Objects are meant to encapsulate the
procedures and let us think about their goals or intentions instead.
What I am talking about are processes that exist in the domain, which
we have to represent in the model. When these emerge, they tend to
make for awkward object designs.
The first example in this chapter described a shipping system that
routed cargo. This routing process was something with business
meaning. A Service is one way of expressing such a process explicitly,
while still encapsulating the extremely complex algorithms.
Second quote (pages 104, 106):
Sometimes, it just isn't a thing. In some cases, the clearest and most
pragmatic design includes operations that do not conceptually belong
to any object. Rather than force the issue, we can follow the natural
contours of the problem space and include Services explicitly in the
model.
When a significant process or transformation in the domain is not a
natural responsibility of an Entity or Value Object, add an operation
to the model as a standalone interface declared as a Service. Define
the interface in terms of the language of the model and make sure the
operation name is part of the Ubiquitous language.
I can't figure out whether, in the first quote, the author is using the term "processes" to describe the same type of behavior (which should also be encapsulated within a Service) as in the second quote, or whether the term "processes" describes a rather different kind of behavior than the one he describes on pages 104 and 106?
Thank you

The concepts are pretty close, but to me the first quote looks more like it's about large real-world domain processes that would exist even without the software (e.g. "a cargo routing process").
The second one is less clear, but I think it describes smaller operations/processes/transformations taking place in the modelled version of the domain.
While the first kind should immediately register as a "Service" right from the early analysis stages, the latter is more subtle and can take more time to distinguish from regular entity behavior: you might have included it in an entity at first, only to realize it doesn't really fit there as you refine the model.

I think that on p. 222 he's talking about a specific kind of process. As in the pp. 104-106 quote, these can be implemented as Services. However, some processes refer to actual domain processes and can benefit from explicit representation in the model. This may not be immediately obvious and can call for more sophisticated object models beyond Services.
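To make the "standalone interface declared as a Service" from the second quote concrete, here is a minimal Java sketch along the lines of the book's cargo-routing example; the type names and the exact signature are illustrative assumptions, not the book's code.

// Minimal sketch of a Domain Service for the cargo-routing process.
// Names (RoutingService, RouteSpecification, Itinerary) echo the book's
// shipping example; the exact signature is an assumption for illustration.

import java.util.List;

/** Value Object describing origin, destination, deadline, etc. (details omitted). */
record RouteSpecification(String originCode, String destinationCode) { }

/** Value Object representing one candidate route through the network. */
record Itinerary(List<String> legs) { }

/**
 * Domain Service: the routing process belongs to the Ubiquitous Language,
 * but it is not a natural responsibility of the Cargo Entity or of any
 * Value Object, so it is expressed as a standalone, stateless interface.
 */
interface RoutingService {
    List<Itinerary> fetchRoutesSatisfying(RouteSpecification specification);
}

The point is that the process gets a name from the Ubiquitous Language, while the (possibly very complex) routing algorithm stays hidden behind the interface in an infrastructure implementation.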

Related

How to handle different implementations in SysML/UML?

Imagine that we are building a Library system. Our use cases might be
Borrow book
Look up book
Manage membership
Imagine that these use cases can be fulfilled either by a librarian (a person) or by a machine. We need to realize these use cases.
Should we draw different use case realizations for different flows?
If not, how do we handle the fact that borrowing a book from a machine is very different from borrowing it from a person?
Moreover, what if we have an updated version of the library machines some day (e.g. one with a keyboard and the other with a touch screen)? What should we do then? The flow stays the same, but the hardware and the software will eventually be different.
What would you use to realize use cases and why?
It might be a basic question, but I failed to find concrete examples on the subject to understand what is right. Thank you all in advance.
There is no single truth or one way you "should" do it. I will give you my approach, based on the Unified Process.
The use case technique is primarily used to describe a dialog between a human user (actor) and an application. It is modeled as an ellipse and further specified as an activity diagram or just a list of steps: 1. The actor does A; 2. The system does B; 3. The actor does C; etc. In this approach, the application is regarded as a black box.
A "use case realization" describes how the system performs its steps (white box), e.g. in terms of collaborating components, transparent to the user.
It is possible, but much less common, to have so-called business use cases. In that case, the "system" represents an enterprise or a business unit. In your case, it would be the library. The "actor" represents an external person or organization, e.g. a client or a supplier. In your case, it would be a client. With business use cases, the library is regarded as a black box. The steps are still in format "actor does A; system does B", but here, it is not specified which of the library's actions are performed by humans and which by applications. The system is the organization, interfacing with external actors, regardless of whether this is implemented by employees or by applications.
A "business use case realization" specifies how the system performs its steps (white box) and specifies which parts are done by employees and which parts by machines.
Now, let me answer your questions one by one.
Question 1.
If you have described your use case as a business use case, and it is at such a high level of abstraction that the steps for client-librarian interaction are the same as for client-machine interaction, then you will have one business use case "Borrow book" and two business use case realizations for this business use case.
However, it is more common practice to have only use cases for user-application interaction. If the client interacts with the system in the same way as a librarian would do on behalf of the client, then you will have only one use case "Borrow book", with actor "Person". This actor has two specializations: "Client" and "Librarian". There will be only one use case realization per use case.
Otherwise, you would have one use case "Borrow book online" describing the flow of events when a client interacts directly with the application, connected to actor "Client" and another use case "Borrow book for client" describing the flow of events when a librarian interacts with the application while talking to the client. The latter use case has "Librarian" as its actor. Again, there will be only one use case realization per use case.
You may choose to model the Client-Librarian interaction separately, or not at all, depending on the purpose of your model.
Question 2.
Let's take use case "Borrow book online". You may have two use case realizations for this use case: one for the keyboard machine and one for the touch screen machine. If these use case realizations are very similar, then I would just make only one use case realization and describe the fact that there are two possible input devices inside that single realization.
Question 3.
For a business use case realization, I would use BPMN 2.0 or a UML activity diagram. These are well suited for business workflow specification.
For a normal use case realization, I usually make a sequence diagram, where the lifelines in those diagrams refer to components defined in a common component diagram. In the left margin of the sequence diagrams, I usually write the steps of the use case in UML note symbols. The sequence diagram focuses on the interaction between components, using their interfaces. This gives a nice overview of the collaboration between components in the context of a particular use case.
For more information, please refer to my white paper Which UML models should we make?. The use case realization is described on page 19.
UML is method-agnostic. Even when there are no choices to make, there are different approaches to modeling, for example:
Have one model and refine it successively, taking it through the stages of requirements, analysis (domain model of the problem), design (abstraction to be implemented), and implementation (the classes that are really in the code).
Have different models for the different stages and keep them all up to date.
Have successive models, without updating the previous stages.
Keep only a high-level design model to get the big picture, but without the implementation details that can be found in the code.
Likewise, for your question, you could consider having different alternative models, or one model with different alternatives grouped in different packages (to avoid naming conflicts). Personally, I’d go for the latter, because the different alternatives should NOT be detailed too much. But ultimately, it’s a question of cost and benefits in your context.
By the way, Ivar Jacobson's book The Object Advantage applies OO modeling techniques to business process design. So UML is perfectly suitable for a human solution. It's just that the system under consideration is no longer an IT system, but a broader organisational system, in which IT represents some components among others.
UML has collaboration elements to show different implementations. The use cases are anchors, since the added value for the actors does not change. However, you can realize these use cases in different ways, and that is where collaborations come into play. A collaboration looks like a use case but has a dashed border, and you draw a realize relation from one or many collaborations towards a use case. Inside the collaborations you show how the different implementations' classes collaborate (hence the name).
P.213 of UML 2.5 in paragraph 11.7 Collaborations:
The primary purpose of Collaborations is to explain how a system of communicating elements collectively accomplish a specific task or set of tasks without necessarily having to incorporate detail that is irrelevant to the explanation. Collaborations are one way that UML may be used to capture design patterns.
A CollaborationUse represents the application of the pattern described by a Collaboration to a specific situation involving specific elements playing its collaborationRoles.

Can I say Axon Commands and Events are considered as anemic models?

My question here is quite straightforward, as stated in the subject.
However, please allow me to give some brief explanation here about my innocent thoughts.
I've been using Axon for approximately 10 months now. I usually design my project structure based on the Hexagonal architecture, with two top-level packages for domain and infrastructure respectively.
Furthermore, the domain package contains the different domain objects (as described in DDD), such as the following:
Aggregate (this will be an Axon aggregate class).
Repository (in my case, this will be a Spring Data Repository interface).
Entity (in my case, this contains any lookup entity that I use for set-based consistency validation, as written here).
Service Port (collection of Input and Output port interfaces).
Commands (representing Axon Command object).
As for Events, I put them in a separate module compiled as a jar file, so I can share it with other developers who are going to use the same events in their projects.
I've noticed recently that all of my commands and events are basically anemic models (an anti-pattern that we should avoid).
Is there any good practice for this? Or is it something that is intentional by design?
I've been thinking of putting my Command classes within my Aggregate class (as inner classes). At least with this approach I won't end up with so many anemic models scattered outside. Any thoughts?
Commands are designed to be behavior and input structures that mirror the external world; they don't necessarily mirror an aggregate's structure.
At times, they are not even clearly connected to one single aggregate. Enclosing them within aggregates can be a code smell, because you are then thinking in terms of resources and UI organization instead of transaction boundaries and entity groups.
You are also violating the open-closed principle. Changes in volatile layers like user interface and request structures will make you edit the Aggregate class, and that is not good design.
On a more general note...
At times, this debate of anemic vs. non-anemic (or DRY vs. non-DRY) can push you in the direction of premature (and incorrect) optimization. Try to avoid this trap, because you will end up optimising at the code level, but your domain will suffer.
DDD and CQRS guidelines align with principles that help you keep complexity at bay over the long term. Things kept distinct and separate help you achieve this.
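To illustrate why the "anemic" shape of commands and events is intentional, here is a short framework-agnostic Java sketch with made-up names (in Axon, the handle method would simply carry an @CommandHandler annotation): the command and the event are plain immutable messages, while the behaviour and the invariants live in the aggregate.

// Framework-agnostic sketch: the command is deliberately a plain, immutable
// message (what looks "anemic"), while the behaviour and the invariants live
// in the aggregate that handles it. Names are invented for illustration.

import java.util.UUID;

/** Command: just input data crossing the boundary, behaviour-free by design. */
record WithdrawCash(UUID accountId, long amountInCents) { }

/** Event: an immutable fact, again intentionally behaviour-free. */
record CashWithdrawn(UUID accountId, long amountInCents) { }

/** Aggregate: this is where the domain behaviour and the invariants live. */
class Account {
    private final UUID id;
    private long balanceInCents;

    Account(UUID id, long openingBalanceInCents) {
        this.id = id;
        this.balanceInCents = openingBalanceInCents;
    }

    /** Handles the command and enforces the invariant before emitting the event. */
    CashWithdrawn handle(WithdrawCash command) {
        if (command.amountInCents() > balanceInCents) {
            throw new IllegalStateException("Insufficient balance");
        }
        balanceInCents -= command.amountInCents();
        return new CashWithdrawn(id, command.amountInCents());
    }
}

Seen this way, the command is not an anemic domain model at all: it is a message at the boundary, and the richness is expected in the aggregate it targets.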
First of all, in DDD your domain has to be free of any frameworks; use only the pure language and its standard library.
Then, mixing Commands and Aggregates cannot be a good solution. I think Commands belong to the Ports, while Aggregates belong inside the Hexagon.
Finally, DDD emphasizes discovering the domain with the help of domain experts. Did you do that? If not, and you're only using the tactical patterns, you'll miss one of the most important parts of DDD.
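A rough sketch of that separation, with hypothetical names: the command travels with the input port at the edge of the hexagon, while the aggregate behind the port stays framework-free.

// Hypothetical illustration of "Commands belong to the Ports":
// the input port is the boundary of the hexagon; driving adapters
// (REST controller, message listener, ...) build the command and call
// the port, and the aggregate inside the hexagon is reached only through it.

import java.util.UUID;

/** Command: part of the port's contract with the outside world. */
record OpenAccount(UUID accountId, String ownerName) { }

/** Input port offered by the hexagon to its driving adapters. */
interface OpenAccountUseCase {
    void open(OpenAccount command);
}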

How to parameterize a UML sequence diagram and apply it to multiple object instances?

I would like to create a sequence diagram to show some interaction, and then use that sequence diagram as an interaction occurrence (sub-sequence) on other sequence diagrams. The point is that I would like to apply the sub-sequence each time to a different object instance that is involved in the interaction in the sub-sequence. In my case the instances are simply various file artifacts. Is there any legitimate way of doing this prescribed by UML?
EDIT: some more clarification of my context:
I have 2 main sequence diagrams where I want to reuse the sub-sequence as an interaction occurrence
on the 1st main sequence there is one file for which the sub-sequence has to be applied 3 times
on the 2nd main sequence there are 3 different files for which the sub-sequence has to be applied 3 times
the files are read by the same object instance
I model reading from a file by a call arrow stereotyped as <<read>> to an object instance which represents the file.
I need to reference the file somehow in the sub-sequence, but I haven't found a good and simple way of doing this.
Complicated, but formally (almost) correct solution with Collaborations
Just using InteractionUses is not enough, because this doesn't allow you to assign the actual roles in the main interaction to the generic roles of your used interaction.
Collaborations, CollaborationUses and Role Bindings can be used for this.
See my example here:
This defines a Collaboration with generic roles sender, relay and receiver and shows the interaction between them.
You can now use this collaboration in a concrete situation:
Class S uses the Collaboration two times with different role bindings to its parts (A, B and C are assumed to be able to send and receive Sig1).
With these definitions you can now create your main sequence diagram:
Unfortunately, this is not correct UML, even though there is an example in the specification (I filed an Issue https://issues.omg.org/browse/UMLR-768). You will have to fake this notation until the taskforce comes up with a fix. How to fake it, depends on how strict your tool implements the specification.
Advantage: formally correct and versatile solution, backed by an example in the specification
Disadvantage: complicated and difficult to explain, not completely usable, because of a bug in the specification
Basically there are three different ways to specify such situations.
Using a gate. With gates you specify the sequence with messages that start or end at a gate, which is defined but in most tools (if usable at all) not shown explicitly. Instead it is modelled with messages starting or ending at the interaction border.
Similar to gates are lost and found messages. These are special messages that pass control out to another sequence or return from one. As in the previous case, you can define a set of further diagrams specifying the interaction in more detail.
Using abstraction, which is my favorite for most cases. This means you extract a common interface from the classes and specify the interaction against the interface instead of the concrete classes.
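As a hypothetical illustration of the abstraction option (the names below are invented), the sub-sequence can be specified once against a common interface, so that any concrete file class can play the file lifeline:

// Sketch of the abstraction option with invented names: the sub-sequence
// that reads a file is specified once against the common interface, so it
// applies to every concrete file participant.

/** Common interface extracted from the concrete file classes. */
interface ReadableFile {
    String read();
}

/** Concrete participants; each can play the "file" lifeline in the sequence. */
class ConfigFile implements ReadableFile {
    public String read() { return "config contents"; }
}

class LogFile implements ReadableFile {
    public String read() { return "log contents"; }
}

/** The reader interacts only with the abstraction, as the sub-sequence would. */
class FileReaderComponent {
    String readFrom(ReadableFile file) {
        return file.read();   // corresponds to the <<read>> message in the diagram
    }
}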
Use an Interaction with Parameters:
Now we would like to reference the Lifelines of the main Interaction in the arguments of the InteractionUse. Unfortunately, in UML this is not possible, since arguments are ValueSpecifications and they cannot reference another model element.
However, NoMagic suggested and implemented an additional ValueSpecification, called ElementValue, that does exactly this. I think this would be a valuable addition to UML and hopefully it will be added some day. Until then, only MagicDraw users can use this solution (as far as I know).
With this non standard element, we can model this:
The connection between the lifelines is now made via the arguments for the parameters of the generic interaction. Technically the lifelines would not need to be explicitly covered by the InteractionUse, but I think it makes sense to do it (shown in my tool with a non-standard circle on the border of the InteractionUse).
Advantage:
compact and versatile solution, almost conformant to the standard
Disadvantage:
uses a non-standard model element, currently only available to MagicDraw users.
Pragmatic non-conformant solution with covered lifelines:
The collaboration and parameter solutions allow you to specify it (almost) formally correctly. However, in many cases a simplified model would be sufficient. In your case, for example, you only have two participants and they have different types. So, even though there is no formal connection between the lifelines of the used interaction and those of the main interaction, there would be no ambiguity. You could use the covered attribute of the InteractionUse to specify which of the lifelines (files) you are targeting with a specific InteractionUse. Could that be the pragmatic solution you are looking for?
Advantage:
compact solution
Disadvantage:
not conformant to UML, ambiguous in more complicated situations

Is System Sequence Diagram part of Analysis or Design?

I'm wondering whether a System Sequence Diagram (SSD) belongs to the design part or to the analysis part.
A System Sequence Diagram (SSD) is a special type of UML sequence diagram that intends to document, for one specific use case, the sequence of exchanges between the system under consideration and the outside actors.
It is not a standard UML diagram, but builds upon such diagrams. The book "System Analysis and Design in a Changing World" seems to have popularized this approach, but I could find articles dating back to the early 2000s (like this or this).
The above-mentioned book places the SSD in the analysis activities. The reason is that analysis is about understanding the requirements, which often start with use cases. The SSD is then a fine-tuning of this analysis.
However, one could argue that it is part of the design activities, since the use cases are the requirements, but how these requirements are addressed through a sequence of exchanges is already the start of the design of a solution, exactly as when you start to sketch a UI: more than one SSD could satisfy the needs, and you have the choice.
So the answer depends on the purpose for which you're using the model.
My own point of view is that you're already drafting a solution, so it should be design, unless you are doing some reverse engineering of an existing application or your client has very detailed requirements.
Elaborating a little on Christophe's answer:
I would add that analysis and design are two highly intertwined activities, so you would probably see these SSDs in both contexts and it would be perfectly fine and acceptable. Use Cases, those that involve a system, are necessarily a design artefact (they are a design of what the system does in relation to external actors) although you can of course see that same thing as a pure analysis output (telling you what the system is required to do). These things are very hard to separate. The point may seem philosophical (it is somewhat), but it is useful to think in these terms.
When you see people creating "Login" Use Cases you can bet they have already stepped into pure design, in other words: functional decomposition. In analytical terms, the state of a User being logged in is a constraint on a Use Case, not a Use Case itself. Having a Use Case called Login therefore represents only a design choice (incidentally, if you see this in contexts where there is a division of responsibilities between the people performing analysis and design, then you'd do well to consider it an analysis fail: the analyst is essentially designing the system and that's not their role). Analysts sometimes use Use Cases to model layers of requirements that relate only to business processes, usually referred to as "Business Use Cases", that don't involve any system per se. But the origins of Use Cases, 20-odd years ago, were in the system space.

Granularity of Use Case. Should sort/search be included?

How do I determine what I should add to my use case diagrams? One for each button/form? Should things like sort and search be included? Or do they fall under "list items", for example? Then again, a list of items seems implicitly understood?
The Use Case diagram is intended to help define the high-level business tasks that are important, not a list of functions of the system. For example, a system for use in customer service might involve a research task of looking up information to help someone on a support call.
Most of the literature describes Use Cases as a starting point for defining what the system needs to accomplish. The temptation has always been to be as complete as possible; adding ever more details to define the use case down to a functional (code-wise) level. While it is useful to have a comprehensive understanding of the requirements, the Use Case diagram is not intended to provide that level of documentation.
One thing that makes the issue worse is the <<include>>/<<extend>> syntax, which I've never seen used consistently in a working project. It isn't that the terms aren't useful; it's the lack of consensus over when to use either term for a given use case. The UML artifacts expect a process that is focused more on the business language than on the implementation language (and by that I do not mean a computer language). The tendency by some has been to approach the diagrams with a legalistic bent and worry about things like when to use <<include>> or <<extend>> for related use cases, or how to express error handling as exceptions to a defined list of process tasks.
If you have ever tried to work through the Automated Teller Machine (ATM) example, you'll know what I mean. In the solar system of UML learning, the ATM example is a black hole that will suck you into the details. Avoid using it to understand UML or the Object Oriented Analysis and Design. It has many of the problems, typical of real-world domains, that distract from getting an overall understanding even though it would make for a good advanced study.
Yes, code will eventually be produced from the UML artifacts, but that does not mean they have to be debated like a treaty in the Senate.
The OMG UML spec says:
Use cases are a means for specifying required usages of a system. Typically, they are used to capture the requirements of a system, that is, what a system is supposed to do. The key concepts associated with use cases are actors, use cases, and the subject. The subject is the system under consideration to which the use cases apply. The users and any other systems that may interact with the subject are represented as actors. Actors always model entities that are outside the system.
The required behavior of the subject is specified by one or more use cases, which are defined according to the needs of actors. Strictly speaking, the term “use case” refers to a use case type. An instance of a use case refers to an occurrence of the emergent behavior that conforms to the corresponding use case type. Such instances are often described by interaction specifications.
An actor specifies a role played by a user or any other system that interacts with the subject. (The term “role” is used informally here and does not necessarily imply the technical definition of that term found elsewhere in this specification.)
Now most people would agree that business-level and user-level interactions are the sweet spot, but there is no limitation. Think of the actors/roles as being outside of the main system or systems you are focusing on. In one view a system could be an actor, while in another it is the implementer of other use cases.
