Does every path in an activity diagram have a finish node? Does every "fork" branch need to go to a merge? - uml

Does every path in an activity diagram need to have a finish node? A similar question: does every branch of a fork need to be merged?
I made an activity diagram (below), but it seems wrong. Some branches (from the fork) have no finish node, nor do they end in a merge.
My idea was that the clerk will send the shipment packing slip to purchasing, accounting & the customer. Two of those branches just seem to create/init objects (e.g. enter info). They are executed in parallel, so I felt I should have a fork?

Does every path in an activity diagram need to have a finish node?
Yes. But there are two kinds of finish node: ActivityFinal and FlowFinal. You need to terminate each of the packaging and shipment flows with a FlowFinal node. See section 12.4 in the spec for details. The symbol is here; the page it's on is a good reference.
Does every fork branch need to be merged?
No. But it needs to terminate, hence the existence of the FlowFinal node.
hth.


Problem with DDD and changing only one Aggregate in one transaction

I have a problem with my personal project.
My model has a Project, which has Stages, and Stages have Tasks. At first I tried to make Project an AggregateRoot with Stage and Task as Entities inside that Aggregate. But there are other Entities as well, such as Costs, Installments, FinancialData and many more, so Project started to grow into a god class. I reconsidered it all and made Project, Stage, and Task separate AggregateRoots.
So I started refactoring, and all was fine, but I have a problem with one piece of functionality: the status system. Sometimes a status change on a Task can start a chain that changes the status of its Stage and then of the Project as well (for example, adding a new Task to a finished Stage should put that Stage into the in-progress status, and if the Project was in the finished status, it should be moved to in progress as well). Here is my question: how should I approach that?
What I was doing until now: I loaded the Project from the repository in one of the first actions of an application service method marked with @Transactional, and saved it at the end of that method, after all the actions.
After the refactoring there is sometimes a need to change three AggregateRoots in one transaction. If that means it should be one Aggregate, then Project is back to the state where it has tons of methods to handle all changes on Stages and Tasks. I'm a bit lost here.
Should I load all three at the very beginning of the action, pass them through the chain of actions, and at the end of the method call save on each repository?
Operations that touch multiple aggregates are often best modeled as sagas (there's an alternative if the system is event-driven, but nothing in your question indicates that the rest of the system is). The saga would operate on the various aggregates and, importantly, be able to handle a failure/rejection of an operation. Depending on requirements, that handling could be: retry (which implies a potentially arbitrarily long period of visible inconsistency), undo the changes to the other aggregates, or tear down the operation entirely (sacrificing availability for consistency).
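A saga along those lines can be sketched in a few lines. This is a minimal illustration in Python with invented names, not a prescription for any particular framework:

```python
# Minimal saga sketch: run each step in its own transaction; if a step fails,
# undo the already-committed steps by running their compensations in reverse.
# All names here are hypothetical.

class Saga:
    def __init__(self):
        self._steps = []  # list of (action, compensation) pairs

    def add_step(self, action, compensation):
        self._steps.append((action, compensation))

    def run(self):
        done = []
        for action, compensation in self._steps:
            try:
                action()                  # e.g. "reopen Stage" in its own transaction
                done.append(compensation)
            except Exception:
                # Failure/rejection handling: undo changes to the other aggregates.
                for comp in reversed(done):
                    comp()
                raise
```

A retry policy could be added per step instead of (or before) compensating, at the cost of a window of visible inconsistency.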
First, work out whether you really need transactional consistency between your statuses; maybe eventual consistency is the solution, and you can update statuses via events. If you do need transactional consistency, then it must be one aggregate, because protecting true invariants is the main feature of an aggregate. To find the answer, in a real project you need to ask the business. The important thing is the true invariants: in your example, I think you have entity-oriented aggregates, but you need more policy- or process-oriented aggregates whose only capability is to protect true invariants, rather than being data containers. Maybe this video will be helpful: Mauro Servienti - Talk Session: All Our Aggregates Are Wrong
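The event-driven variant, where each status change is a separate small transaction triggered by an event, can be sketched roughly like this (Python used for illustration only; all names such as `TaskAdded` and `reopen_stage_policy` are made up):

```python
from dataclasses import dataclass

@dataclass
class TaskAdded:          # raised by the Task aggregate
    stage_id: str

@dataclass
class StageReopened:      # raised by the Stage aggregate
    project_id: str

class Stage:
    def __init__(self, stage_id, project_id, status="FINISHED"):
        self.id, self.project_id, self.status = stage_id, project_id, status

class EventBus:
    def __init__(self):
        self._handlers = {}

    def subscribe(self, event_type, handler):
        self._handlers.setdefault(event_type, []).append(handler)

    def publish(self, event):
        for handler in self._handlers.get(type(event), []):
            handler(event)

# Policy: when a task is added to a finished stage, reopen the stage in its
# own transaction, then raise StageReopened so a project policy can react.
def reopen_stage_policy(stages, bus):
    def handle(event):
        stage = stages[event.stage_id]
        if stage.status == "FINISHED":
            stage.status = "IN_PROGRESS"
            bus.publish(StageReopened(stage.project_id))
    return handle
```

Each policy loads and saves exactly one aggregate, so no transaction ever spans Project, Stage and Task at once; the chain of statuses converges eventually.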

Join Node or Merge Node?

I am trying to make an activity diagram of a user for my system, and I am unsure whether the flow should come down to a Join Node or a Merge Node before the user can log off. Here is the one I have made as of now. Could anyone explain the difference?
It must be a join (though I first remembered wrongly; thanks to @AxelScheithauer for pointing out my error). From p. 401 of UML 2.5:
15.5.3.1 Executable Nodes
...
When an ExecutableNode completes an execution, the control token representing that execution is removed from the ExecutableNode and control tokens are offered on all outgoing ControlFlows of the ExecutableNode. That is, there is an implicit fork of the flow of control from the ExecutableNode to its outgoing ControlFlows.
That means that all 6 actions below will start in parallel, which does not mean they must run concurrently. But all of them need to complete in order to continue past the join below. (I doubt that this is desired.)
There's a second (double) flaw in the back-flows from the top decisions. They need to go back to the top merge. Otherwise neither Login nor Register would ever start, since they expect 3 or 2 tokens where only one would arrive.

Multi-Aggregate Transaction in EventSourcing

I'm new to event sourcing, but for our current project I consider it as a very promising option, mostly because of the audit trail.
One thing I'm not 100% happy with is the lack of aggregate-spanning transactions. Please consider the following problem:
I have an order which is processed at various machines at different stations. And we have containers where workers put the order in and carry it from machine to machine.
The tracking must be done through containers (which have a unique barcode ID); the order itself is not identifiable. The problem is that the containers are reused and need to be locked, so no worker can put two orders in the same container at the same time (for simplicity, just assume they can't see if there is already an order inside the container).
For clarity, a high level view:
Order A created
Order A put on Container 1
Container 1 moves to Machine A and gets scanned
Machine A generates some events for Order A
Move Order A from Container 1 to Container 2
Order B created
Order B put on Container 1
...
"Move Order A from Container 1 to Container 2" is what I'm struggling with.
This is what should happen in a transaction (which does not exist):
Container 2: LockAcquiredEvent
Order A: PositionChangedEvent
Container 1: LockReleasedEvent
If the app crashes after step 1 or step 2, we have containers that are locked and can't be reused.
I have multiple possible solutions in mind, but I'm not sure if there is a more elegant one:
Assume that it won't fail more than once a week and provide a way the workers can manually fix it.
See the container tracking as a different domain and don't use event sourcing in that domain.
Implement a saga with compensation actions and stuff.
Is there anything else I can do?
I think the saga approach is the way to go, but we will have a REST API where we receive a command "transfer order A from container 1 to 2", and this would mean that the API command handler would need to listen to the event stream and wait for some saga-generated event before delivering a 200 to the requester. I don't think this is good design, is it?
Not using event sourcing for the tracking is also not perfect, because the containers might have an influence on the quality of the order, so the order must track the used containers too.
Thank you for any hints.
The consistency between aggregates is eventual, meaning it could easily be that AR1 changed its state, AR2 failed to change its state, and now you should revert the state of AR1 to bring the system back into a consistent state.
1) If such scenarios happen very often and recovery is really painful, rethink your AR boundaries.
2) Recover manually. Don't use sagas; they should not be used for this purpose. If your saga wants to compensate AR1, but another transaction has already changed its state to a different one, the compensation would fail.
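The compensation-failure point can be made concrete with an optimistic-concurrency check. The following is a sketch with invented names; real event stores typically enforce this via an expected stream version on append:

```python
# Sketch: each committed change bumps the aggregate's version; a writer that
# read a stale version is rejected. This is why a late compensation can fail.

class ConcurrencyError(Exception):
    pass

class Container:
    def __init__(self, state="free"):
        self.state = state
        self.version = 0   # bumped on every committed change

def change_state(container, expected_version, new_state):
    # Optimistic check: reject if the aggregate changed since it was read.
    if container.version != expected_version:
        raise ConcurrencyError("container changed since it was read")
    container.state = new_state
    container.version += 1
```

If the compensating writer read the container at version 1 but a worker's transaction has since bumped it to version 2, its `change_state(..., expected_version=1, ...)` is rejected, which is exactly the failure mode described above.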

DDD - How to modify several AR (from different bounded contexts) throughout single request?

I want to lay out a little scenario which is still at the paper stage and which, regarding DDD principles, seems a bit tedious to accomplish.
Let's say I have an application for hosting account management. Basically, the application comprises several bounded contexts such as web account management, FTP account management, mail account management..., each of them represented by its own AR (they can live standalone).
Now, let's imagine I want to provide a UI with an HTML form that has one fieldset for each bounded context, for instance to update limits and/or features. How exactly should I proceed to update all the ARs without breaking the single-transaction-per-request principle? Can I create a kind of "outer" AR, say a ClientHostingProperties AR, which would hold references to the other ARs and update them as part of a single transaction, using its own repository? Or should I rather create an AR that emits messages for listeners provided by the bounded contexts to react on, in which case I should probably think about ES?
Thanks.
How should I process exactly to update all AR without breaking single transaction per request principle?
You are probably looking for a process manager.
Basic sketch: persisting the details from the submitted form is a transaction unto itself (you are offered an opportunity to accrue business value; step 1 is to capture that opportunity).
That gives you a way to keep track of whether or not this task is "done": you compare the changes in the task to the state of the system, and fire off commands (each running in an isolated transaction) to make the changes.
Processes, in my mind, end up looking a lot like state machines: these commands are done, these commands are not done, these commands have failed; now what? Eventually you reach a state where there are no additional changes to be made, and this instance of the process is "done".
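Viewed as a state machine, such a process manager might be sketched like this (Python for illustration; the command names are hypothetical):

```python
# Track each command of the process until none are pending; failed commands
# are recorded so the process can decide "now what?" (retry, escalate, etc.).
class ProcessManager:
    def __init__(self, commands):
        self.pending = set(commands)
        self.failed = set()

    def on_succeeded(self, command):
        self.pending.discard(command)

    def on_failed(self, command):
        self.pending.discard(command)
        self.failed.add(command)

    @property
    def done(self):
        # The process instance is "done" when no commands remain pending.
        return not self.pending
```

Each command still runs in its own isolated transaction; the process manager only records outcomes and decides what remains to be done.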
Short answer: You don't.
An aggregate is a transactional boundary, which means that if you would update multiple aggregates in one "action", you'd have to use multiple transactions. The reason for an aggregate to be equivalent to one transaction is that this allows you to guarantee consistency.
This means that you have two options:
You can make your aggregate larger. Then you can actually guarantee consistency, but your ability to handle concurrent requests gets worse. So this is usually what you want to avoid.
You can live with the fact that it's two transactions, which means you are eventually consistent. If so, you usually use something such as a process manager or a flow to handle updating multiple aggregates. In its simplest form, a flow is nothing but a simple "if this event happens, run that command" rule. In its more complex form, it has its own state.
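That simplest form of a flow, "if this event happens, run that command", fits in a few lines (all names here are illustrative):

```python
def make_flow(event_type, command_factory, dispatch):
    """Return a handler implementing: when event_type happens, run that command."""
    def handle(event):
        if isinstance(event, event_type):
            dispatch(command_factory(event))
    return handle
```

Wired up, such a flow would, for example, dispatch a hypothetical `UpdateFtpLimits` command whenever a `WebLimitsUpdated` event arrives; adding state to the flow turns it into the process manager described in the other answer.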
Hope this helps 😊

When to destroy in a sequence diagram

I can't actually find a clear answer for this. In a lot of online design tools (e.g. Web Sequence Diagrams), there is an option to "activate" and "deactivate" a process, whilst there is a separate action to "destroy" the process. When is this used?
If in the diagram I am modelling I am connecting to an online stream, when I am done with it do I deactivate it or destroy it? When I use an application and it is finished, do I deactivate it or destroy it?
In UML, a destroy in a sequence diagram is 'a kind of message that represents the request of destroying the lifecycle of the target lifeline', i.e. the message recipient object is logically or physically deleted and is not available anymore for upcoming interactions. Deactivate means the message recipient object changes from an active state to an inactive one, with the possibility of reactivation later; the object is still available in the application space, but could, for example, be moved to an archive over time.
Destroying makes sense if you show an instantiate step somewhere in your diagram.
For an existing resource like an online stream, activate/deactivate makes more sense.
For things like launching a script execution, instantiate/destroy is better.
