What are the semantics of having an item flow between Blocks rather than Parts in SysML 1.4?

From my understanding, SysML 1.4 allows item flows between Blocks as well as between Parts.
Here is an excerpt from page 75 of the SysML 1.4 specification
which shows that it is possible to have item flows between Blocks.
I am not sure about the semantics of this.
For example, referring to the excerpt from the SysML 1.4 specification, does it mean that every instance of the Engine block requires an "itemFlow" connection to an instance of the Transmission block, and that a Torque will flow from every instance of the Engine block to the associated instance of the Transmission block?

Yes, of course. At least if the Engine/Transmission are blocks instantiated from this model.
You are free to define other Engines/Transmissions where no Torque is transported (e.g. if you see a copper cable as a transmission where current is transported rather than torque).
An item flow in general says that "something physical" is moved from source to target. The above transports torque. You can also transport current, gas, fluid, etc. Even abstract information can be transported, though SysML is designed to model physical objects rather than abstract things (for which UML is sufficient).

There is an association between Engine and Transmission. Since we don't see any multiplicity, we may assume that it is 1. That means every Engine instance must be linked to a Transmission instance and vice versa. This is not realistic, but hey, who wants models of reality ;-). In the real world the multiplicity is 0..1.
The item flow just says that Torque can potentially flow across a link between the two instances.
By the way: This is also not realistic, since torque is the potential to flow, not the item flowing. The item is rather angular momentum. For reasons I don't understand, the potential (e.g. Torque) or the rate (e.g. Current) is often used in place of the item that is flowing in reality.


How to keep domain model pure using ES approach?

I have flicked through a few popular Event Sourcing frameworks written in a variety of common languages. I have got the impression that all of them affect the domain models to a really high degree. As far as I understand ES is just an infrastructure concern - a way of persisting aggregate state. Of course, it facilitates message-driven inter-context integration, but from the core domain's point of view it is negligible. I consider commands and events to be part of the domain itself, so it seems perfectly fine that an aggregate creates events (but does not publish them) or handles commands.
The problem is that all of the DDD building blocks tend to be polluted by the ES framework. Events must inherit from some base class. Aggregates at least are supposed to implement foreign interfaces. I wonder if domain models should even be aware of using the ES approach within an application. In my opinion, even the necessity of providing apply() methods indicates that another layer shapes our domain.
How do you approach this issue in your projects?
My answer applies only when CQRS is involved (write and read models are split and they communicate using domain events).
As far as I understand ES is just an infrastructure concern - a way of persisting aggregate state
Event sourcing is indeed an infrastructure concern, a kind of repository, but event-based Aggregates are not. I consider them to be an architectural style, different from the classical style.
So, the fact that an Aggregate, in reaction to a command, generates zero or more domain events that are applied onto itself in order to build the internal (private) state it uses to decide what events to generate in the future is just a different way of thinking about and designing an Aggregate. This is a perfectly valid style, along with the classical style (the one not using events but only objects) or a functional programming style.
Event sourcing just means that every time a command reaches an Aggregate, its entire internal state is rebuilt by replaying its past events instead of being loaded from a flat persistence. Of course there are other huge advantages (!) but they do not affect the design of an Aggregate.
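As a rough illustration (the Account and MoneyDeposited names are made up for the example and not taken from any framework), rebuilding an Aggregate from its event stream can be as simple as replaying every stored event through its apply logic:
import java.util.List;

// Minimal sketch: the aggregate's private state is rebuilt by replaying every
// stored event through its apply logic. Account and MoneyDeposited are hypothetical.
record MoneyDeposited(long amount) {}

class Account {
    private long balance = 0;   // private state, only used to decide future events

    static Account fromHistory(List<MoneyDeposited> history) {
        Account account = new Account();
        history.forEach(account::apply);
        return account;
    }

    private void apply(MoneyDeposited event) {
        balance += event.amount();
    }
}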
... but does not publish them ...
I like the frameworks that permit us to just return (or, better, yield - an Aggregate's command methods are just generators!) the events.
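A minimal sketch of that idea, again with made-up names: the command method validates against the private state, decides which events to emit and simply returns them; persisting or publishing them is the caller's (i.e. the infrastructure's) job.
import java.util.List;

// Hypothetical sketch: the command method decides which events to emit
// and simply returns them; it never stores or publishes anything itself.
record Incremented(int by) {}

class Counter {
    private int value = 0;

    List<Object> increment(int by) {
        if (by <= 0) {
            throw new IllegalArgumentException("by must be positive");
        }
        return List.of(new Incremented(by));   // returned to the caller, not published
    }

    // called during replay (or by the caller after accepting the events)
    private void apply(Incremented event) {
        value += event.by();
    }
}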
Events must inherit from some base class
It's sad that some frameworks require that, but it is not necessary. In general, a framework needs some means of detecting an event class. However, it can be implemented to detect an event by other means than a marker interface. For example, the client (as in YOU) could provide a filter method that rejects non-event classes.
However, there is one thing that I couldn't avoid in my framework (yes, I know, I'm guilty, I have one): the Command interface with only one method: getAggregateId.
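A sketch of what such a client-provided filter could look like (all names here are invented, except that the Command interface with its single getAggregateId method is the one mentioned above):
import java.util.function.Predicate;

// Hypothetical configuration hook: the client tells the framework how to recognise
// event classes, instead of the framework forcing a marker base class on them.
interface EventStoreConfig {
    Predicate<Class<?>> eventClassFilter();
}

// The one interface the answer above admits to requiring: a command that knows
// which aggregate it targets.
interface Command {
    String getAggregateId();
}

// Example client-side filter: "my events live in the domain package and end with 'Event'".
class MyConfig implements EventStoreConfig {
    public Predicate<Class<?>> eventClassFilter() {
        return c -> c.getPackageName().startsWith("com.example.domain")
                 && c.getSimpleName().endsWith("Event");
    }
}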
Aggregates at least are supposed to implement foreign interfaces.
Again, as with events, this is not a necessity. A framework could be given a custom, client-provided event-applier-on-aggregates function, or a convention can be used (e.g. all event-applier methods have the form applyEventClassNameOrType).
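A rough sketch of the convention-based variant (purely illustrative, not any framework's actual API): the framework derives the applier method name from the event type and invokes it reflectively.
import java.lang.reflect.Method;

// Sketch: apply an event to an aggregate by naming convention, e.g. a private method
// "applyMoneyDeposited(MoneyDeposited event)". No framework interface on the aggregate.
class ConventionEventApplier {
    void apply(Object aggregate, Object event) {
        String methodName = "apply" + event.getClass().getSimpleName();
        try {
            Method applier = aggregate.getClass()
                    .getDeclaredMethod(methodName, event.getClass());
            applier.setAccessible(true);   // allow private apply methods
            applier.invoke(aggregate, event);
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException("No applier " + methodName + " found", e);
        }
    }
}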
I wonder if domain models should even be aware of using the ES approach within an application
Of ES, no; but of being event-based, YES, so the apply method must still exist.
As far as I understand ES is just an infrastructure concern - a way of persisting aggregate state.
No, events are really core to the domain model.
Technically, you could store diffs in a domain-agnostic way. For example, you could look at an aggregate and say "here is the representation before the change, here is the representation after; we'll compute the difference and store that".
The difference between patches and events is that you switch from a domain-agnostic spelling to a domain-specific spelling. Doing that normally requires being intimate with the domain model itself.
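To make the contrast concrete (illustrative names only): a domain-agnostic patch records which fields changed, while a domain event spells out what happened in the language of the domain.
import java.util.Map;

// Domain-agnostic: a generic field-level diff; nothing here knows about shipping.
record Patch(String aggregateId, Map<String, Object> changedFields) {}

// Domain-specific: the same kind of change spelled in the ubiquitous language.
record CargoRerouted(String trackingId, String oldRouteId, String newRouteId) {}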
The problem is that all of the DDD building blocks tend to be polluted by the ES framework.
Yup, there are a lot of crap frameworks in the examples you find in the wild. Sturgeon's Law at work.
Thinking about the domain model from a functional perspective can help a lot. At its core, the most general form of the model is a function that accepts current state as input and returns a list of events as output.
List<Event> change(State current)
From there, if you want to save current state, you just wrap this function in something that knows how to do the fold
State current = ...;
List<Event> events = change(current);
State updated = State.fold(current, events);  // apply the new events on top of the current state
Similarly, you can get current state by folding over the previous history
List<Event> savedHistory = ...;
State current = State.reduce(savedHistory);   // rebuild the current state from the saved events
List<Event> events = change(current);
State updated = State.fold(current, events);
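Put together as a self-contained sketch (the Deposited event, the balance field and the fold/reduce names are just illustrative stand-ins for "apply events to state" and "replay the whole history"; the generic Event is specialised to a single event type for brevity):
import java.util.List;

// Self-contained sketch of the shape described above.
record Deposited(long amount) {}

record State(long balance) {
    // fold: apply new events on top of the current state
    static State fold(State current, List<Deposited> events) {
        long balance = current.balance();
        for (Deposited e : events) {
            balance += e.amount();
        }
        return new State(balance);
    }

    // reduce: rebuild the current state purely from the saved history
    static State reduce(List<Deposited> savedHistory) {
        return fold(new State(0), savedHistory);
    }
}

class Example {
    // "change": given the current state, decide what happens next
    static List<Deposited> change(State current) {
        return current.balance() < 100 ? List.of(new Deposited(50)) : List.of();
    }

    public static void main(String[] args) {
        State current = State.reduce(List.of(new Deposited(20)));
        List<Deposited> events = change(current);
        State updated = State.fold(current, events);
        System.out.println(updated);   // prints State[balance=70]
    }
}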
Another way of saying the same thing: the "events" are already there in your (not event sourced) domain model -- they are just implicit. If there is business value in tracking those events, then you should replace the implementation of your domain model with one that makes those events explicit. Then you can decide which persisted representation to use independently of the domain model.
The core of my problem is that the domain Event inherits from the framework Event and the aggregate implements some foreign interface (from the framework). How can this be avoided?
There are a couple of possibilities.
1) Roll your own: take a close look at the framework -- what is it really buying you? If your answer is "not much", then maybe you can do without it.
From what I've seen, the "win" of these frameworks tends to be in taking a heterogeneous collection of events and managing the routing for you. That's not nothing -- but it's a bit magic, and you might be happier having that code explicit rather than relying on implicit framework magic.
2) Suck it up: if the framework is unobtrusive, then it may be more practical to accept the tradeoffs that it imposes and live with them. To some degree, event frameworks are like object-relational mappers or databases; sure, in theory you should be able to change them out freely. In practice, how often do you derive benefit from the investment in that flexibility?
3) Interfaces: if you squint a little bit, you can see that your domain behaviors don't usually depend on in-memory representations, but instead on the algebra of the domain itself.
For example, in the domain model, we deposit Money into an Account updating its Balance. We don't typically care whether those are integers, or longs, or floats, or JSON documents. We can satisfy the model with any implementation that satisfies the constraints of the algebra.
So you can use the framework to provide the implementation (which also happens to have all the hooks the framework needs); the behavior just interacts with the interface the domain defined for itself.
In a strongly typed implementation, this can get really twisty. In Java, for instance, if you want the strong type checks you need to be comfortable with the magic of generics and type erasure.
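For illustration (Account and PayrollService are just example names, not a framework API): the domain declares the interface it needs, and any framework-backed implementation that honours it will do.
import java.math.BigDecimal;

// The domain declares the algebra it cares about...
interface Account {
    void deposit(BigDecimal amount);
    BigDecimal balance();
}

// ...and the domain behaviour is written only against that interface; the framework
// (or a plain in-memory class) can supply the implementation with all its own hooks.
class PayrollService {
    void paySalary(Account account, BigDecimal salary) {
        account.deposit(salary);   // no knowledge of how Account is persisted or event-sourced
    }
}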
The real answer to this is that DDD is overrated. It is not true that you have to have one model to rule them all. You may have different views on the state of your world, depending on your current needs. One part of the application has one view, another part - completely different view.
To put it another way, your model is not "what is", but "what happened so far". The actual data model of your application is the event stream itself. Everything else you derive from there.

UML: guideline for visibility

I am reading Real-Time UML: Advances in the UML for Real-Time Systems (3rd Edition)
by Bruce Powel Douglass.
In section 10.5, when talking about detailed-design guidelines on visibility, he says:
Only make semantically appropriate operations visible.
This guideline
seeks to avoid pathological coupling among classes. For example,
suppose a class is using a container class. Should the operations be
GetLeft() and GetRight() or Prev() and Next()? The first pair makes
the implementation visible (binary tree) while the latter pair
captures the essential semantics (ordered list).
I am unable to understand what he is trying to say here, especially the last line.
Can someone elaborate on his point?
Well, it's a bit subtle. GetLeft and GetRight have the directions in their names, which are derived from an internal implementation as a binary tree. So the internal data structure is sort of visible in the interface, and that should not be the case. It is better to keep this knowledge inside for several reasons. First, the outer world should not have to care how things are implemented. Second, if you decide to implement it in a different way (e.g. via a ring buffer), GetRight would be odd from an internal view when you reach the right border of the buffer. Prev and Next clearly target the business/outer usage aspect of the operations.
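A small illustration in code (hypothetical names): the interface exposes only the ordered-list semantics, so the implementation detail - binary tree, ring buffer, whatever - never leaks to the callers.
import java.util.List;

// Clients see only ordered-list semantics (prev/next), never GetLeft()/GetRight().
interface OrderedContainer<T> {
    T next();
    T prev();
}

// The backing store is an implementation detail: it could be a binary tree, a ring
// buffer, or (as here, for brevity) a plain list - the interface stays the same.
class ListBackedContainer<T> implements OrderedContainer<T> {
    private final List<T> elements;   // assumed non-empty for this sketch
    private int cursor = 0;

    ListBackedContainer(List<T> elements) {
        this.elements = elements;
    }

    public T next() {
        cursor = Math.min(cursor + 1, elements.size() - 1);
        return elements.get(cursor);
    }

    public T prev() {
        cursor = Math.max(cursor - 1, 0);
        return elements.get(cursor);
    }
}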

UML component diagram named association to show dependency between components

Can I use a directed, named association on a component diagram to show the fact that "sys A" sends data to "sys B"?
Example:
No, you should use a general-purpose dependency instead, with an optional title.
However, the title is not very common in this context. Better to use some other diagrams (a sequence diagram, for example) to show the communication details (e.g. open connection, send data, close connection, etc.).
If there is a well-defined interface between those systems, you can indicate that as well, like this:
Association is used between two classes to show that their instances are potentially connected (again, not for data flow indication).
In UML 2.0 the concept behind association is vague; read this article: http://www.uml-diagrams.org/uml-core.html (search for "Semantic Relationship"). Association denotes a "semantic relationship" between two components, and I think it wouldn't be appropriate for data flow.
I think that even dependency isn't appropriate for data flows: maybe the client depends on the supplier, maybe the opposite is true... so the arrow can be very confusing.
The lollipop notation is the best, IMHO: it shows clearly that there is a component providing an interface, and another one requiring it. You can use stereotypes on the interface to show the type of communication/data transfer, and labels to make clear what data is transferred.
The book "Documenting Software Architectures" adopts another style, using prevalently associations: see p.145. It's similar to your initial proposal, but with explicit roles and without arrows. I think isn't a really satisfactory solution, without stereotypes...
If sys A sends information to sys B and you're not interested in how exactly the transmission takes place, then that is a classic application of the Information Flow connector.
A Dependency would in this case say that sys A needs (is dependent on) sys B for something. An Information Flow often (but not always) goes in the opposite direction of a Dependency, since it is typically the receiver that needs the sender.
There are many different ways of showing these types of relationships, and the best one depends on the situation. If your focus is on the type of information being transmitted, then Information Flow is the best fit. If your focus is on the way the transmission takes place, something with an Interface, possibly an Assembly, is better.
EA actually allows you to specify an Information Flow over an Assembly, so you could even combine the two. It's all down to what exactly you want to express.

Encapsulating processes within Domain Services

Note - all quotes are from DDD: Tackling Complexity in the Heart of Software
First quote (page 222):
Processes as Domain Objects
Right up front let's agree that we do not want to make procedures a
prominent aspect of our model. Objects are meant to encapsulate the
procedures and let us think about their goals or intentions instead.
What I am talking about are processes that exist in the domain, which
we have to represent in the model. When these emerge, they tend to
make for awkward object designs.
The first example in this chapter described a shipping system that
routed cargo. This routing process was something with business
meaning. A Service is one way of expressing such a process explicitly,
while still encapsulating the extremely complex algorithms.
Second quote (pages 104, 106):
Sometimes, it just isn't a thing. In some cases, clearest and most
pragmatic design includes operations that do not conceptually belong
to any object. Rather than force the issue, we can follow the natural
contours of the problem space and include Services explicitly in the
model.
When a significant process or transformation in the domain is not a
natural responsibility of an Entity or Value Object, add an operation
to the model as a standalone interface declared as a Service. Define
the interface in terms of the language of the model and make sure the
operation name is part of the Ubiquitous language.
I can't figure out whether in the first quote the author is using the term "processes" to describe the same type of behavior (which should also be encapsulated within a Service) as in the second quote, or whether the term "processes" is used to describe a rather different kind of behavior from the one he's describing on pages 104 and 106.
Thank you
The concepts are pretty close, but to me the first quote looks more like it's about large real-world domain processes that would exist without the software (e.g. "a cargo routing process").
The second one is less clear, but I think it describes smaller operations/processes/transformations taking place in the modelled version of the domain.
While the first kind should immediately click as "Service" right from the early analysis stages, the latter is more subtle and could take more time to be distinguished from regular entity behavior - you could have included it in an entity at first but realize it doesn't fit well there as you refine the model.
I think on p. 222 he's talking about a specific kind of process. So, as in the p. 102 quote, they can be implemented as services. However, some processes refer to actual domain processes and can benefit from explicit representation in the model. This may not be immediately obvious and can call for more sophisticated object models beyond services.
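As a code-level illustration of the "process as a Service" idea (RoutingService and its signature are just an example inspired by the cargo routing mentioned in the quote, not a prescribed API): the process gets a standalone interface named in the ubiquitous language, while the heavy algorithm hides behind whichever implementation is plugged in.
import java.util.List;

// Sketch: a domain process expressed as a standalone Service interface.
// RouteSpecification and Itinerary stand in for the Value Objects the process works with.
record RouteSpecification(String origin, String destination) {}
record Itinerary(List<String> legs) {}

interface RoutingService {
    // Named in the ubiquitous language; the complex routing algorithm stays hidden
    // behind whichever implementation the infrastructure plugs in.
    List<Itinerary> routesSatisfying(RouteSpecification specification);
}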

In an activity diagram, are two initiating events allowed?

I want to model an activity where there can be two different initiating events. These events have two different responsible actors. Is a UML activity diagram allowed to have two initiating events and only one end? Can the action flows be joined?
I want to know whether I would violate UML modeling principles if I do this.
It would be nice if you could give me some hints.
Greetings,
Martin
The UML 2.3 superstructure specification (p389) says:
An activity may have more than one initial node.
and
If an activity has more than one initial node, then invoking the activity starts multiple flows, one at each initial node.
So according to the UML spec you're not violating the rules.
That said, @Dave is on the money - the most important thing is that your model makes sense to you and those who will consume it. The UML specification is so riddled with inconsistency and ambiguity that it's questionable what 'being compliant' even means.
So long as you and all users of the model have a common understanding of what it's representing then don't get hung up on the UML's pseudo-semantics.
(Of course, this assumes you're using the model as a picture for communication, not as a formal specification that will be interpreted/compiled to code. If it's the latter, you'll need to formalise your own semantics for what it means.)
